Dataset schema: id (string, 2–8 chars), url (string, 31–117 chars), title (string, 1–71 chars), text (string, 153–118k chars), topic (string, 4 classes), section (string, 4–49 chars), sublist (string, 9 classes)
18974125
https://en.wikipedia.org/wiki/Bacillus%20anthracis
Bacillus anthracis
Bacillus anthracis is a gram-positive, rod-shaped bacterium that causes anthrax, a disease deadly to livestock and, occasionally, to humans. It is the only permanent (obligate) pathogen within the genus Bacillus. Its infection is a type of zoonosis, as it is transmitted from animals to humans. It was discovered by the German physician Robert Koch in 1876, and became the first bacterium to be experimentally shown to be a pathogen. The discovery was also the first scientific evidence for the germ theory of disease. B. anthracis measures about 3 to 5 μm long and 1 to 1.2 μm wide. The reference genome consists of a 5,227,419 bp circular chromosome and two extrachromosomal DNA plasmids, pXO1 and pXO2, of 181,677 and 94,830 bp respectively, which are responsible for the pathogenicity. It forms a protective structure called an endospore, in which it can remain dormant for many years and then become infective under suitable environmental conditions. Because of the resilience of the endospore, the bacterium is one of the most frequently weaponized biological agents. The protein capsule (poly-γ-D-glutamic acid) is key to evasion of the immune response. It feeds on the heme of the blood protein haemoglobin using two secreted siderophore proteins, IsdX1 and IsdX2. Untreated B. anthracis infection is usually deadly. Infection is indicated by inflammatory, black, necrotic lesions (eschars). The sores usually appear on the face, neck, arms, or hands. Symptoms of fatal infection include flu-like fever, chest discomfort, diaphoresis (excessive sweating), and body aches. The first animal vaccine against anthrax was developed by the French chemist Louis Pasteur in 1881. Different animal and human vaccines are now available. The infection can be treated with common antibiotics such as penicillins, quinolones, and tetracyclines.
Description
B. anthracis is a rod-shaped bacterium, approximately 3 to 5 μm long and 1 to 1.2 μm wide. When grown in culture, the cells tend to form long chains. On agar plates, they form large colonies several millimeters across that are generally white or cream colored. Most B. anthracis strains produce a capsule that gives colonies a slimy, mucus-like appearance. It is one of the few bacteria known to synthesize a weakly immunogenic and antiphagocytic protein capsule (poly-γ-D-glutamic acid) that disguises the vegetative bacterium from the host immune system. Most bacteria are surrounded by a polysaccharide capsule; the poly-γ-D-glutamic acid capsule of B. anthracis instead provides it with an evolutionary advantage. Polysaccharides are associated with adhesion of neutrophil-secreted defensins that inactivate and degrade bacteria; by lacking this macromolecule in its capsule, B. anthracis can evade neutrophil attack and continue to propagate infection. The difference in capsule composition is also significant because poly-γ-D-glutamic acid has been hypothesized to create a negative charge that protects the vegetative phase of the bacterium from phagocytosis by macrophages. The capsule is degraded to a lower molecular mass and released from the bacterial cell surface to act as a decoy that protects the bacteria from complement. Like Bordetella pertussis, it forms a calmodulin-dependent adenylate cyclase exotoxin, known as anthrax edema factor, along with anthrax lethal factor. It bears close genotypic and phenotypic resemblance to Bacillus cereus and Bacillus thuringiensis. All three species share cellular dimensions and morphology, and all form oval spores located centrally in an unswollen sporangium.
B. anthracis endospores, in particular, are highly resilient, surviving extremes of temperature, low-nutrient environments, and harsh chemical treatment over decades or centuries. The endospore is a dehydrated cell with thick walls and additional layers that form inside the cell membrane. It can remain inactive for many years, but if it encounters a favorable environment, it begins to grow again. It initially develops inside the rod-shaped form. Features such as the location within the rod, the size and shape of the endospore, and whether or not it causes the wall of the rod to bulge out are characteristic of particular species of Bacillus. Depending upon the species, the endospores are round, oval, or occasionally cylindrical. They are highly refractile and contain dipicolinic acid. Electron micrograph sections show they have a thin outer endospore coat, a thick spore cortex, and an inner spore membrane surrounding the endospore contents. The endospores resist heat, drying, and many disinfectants (including 95% ethanol). Because of these attributes, B. anthracis endospores are extraordinarily well suited to use (in powdered and aerosol form) as biological weapons. Such weaponization has been accomplished in the past by at least five state bioweapons programs (those of the United Kingdom, Japan, the United States, Russia, and Iraq) and has been attempted by several others.
Genome structure
B. anthracis has a single chromosome, a circular DNA molecule of 5,227,293 bp. It also has two circular, extrachromosomal, double-stranded DNA plasmids, pXO1 and pXO2. Both the pXO1 and pXO2 plasmids are required for full virulence and represent two distinct plasmid families.
pXO1 plasmid
The pXO1 plasmid (182 kb) contains the genes that encode the anthrax toxin components: pag (protective antigen, PA), lef (lethal factor, LF), and cya (edema factor, EF). These factors are contained within a 44.8-kb pathogenicity island (PAI). The lethal toxin is a combination of PA with LF, and the edema toxin is a combination of PA with EF. The PAI also contains genes encoding the transcriptional activator AtxA and the repressor PagR, both of which regulate the expression of the anthrax toxin genes.
pXO2 plasmid
pXO2 encodes a five-gene operon (capBCADE) which synthesizes a poly-γ-D-glutamic acid (polyglutamate) capsule. This capsule allows B. anthracis to evade the host immune system by protecting itself from phagocytosis. Expression of the capsule operon is activated by the transcriptional regulators AcpA and AcpB, located in the pXO2 pathogenicity island (35 kb). AcpA and AcpB expression is in turn under the control of AtxA from pXO1.
Strains
The 89 known strains of B. anthracis include:
Sterne strain (34F2; also known as the "Weybridge strain"), used by Max Sterne in his 1930s vaccines
Vollum strain, formerly weaponized by the US, UK, and Iraq; isolated from a cow in Oxfordshire, UK, in 1935
Vollum M-36, virulent British research strain; passed through macaques 36 times
Vollum 1B, weaponized by the US and UK in the 1940s–60s
Vollum-14578, used in UK bioweapons trials which severely contaminated Gruinard Island in 1942
V770-NP1-R, the avirulent, nonencapsulated strain used in the BioThrax vaccine
Anthrax 836, highly virulent strain weaponized by the USSR; discovered in Kirov in 1953
Ames strain, isolated from a cow in Texas in 1981; famously used in the 2001 Amerithrax letter attacks
Ames Ancestor
Ames Florida
H9401, isolated from a human patient in Korea; used in investigational anthrax vaccines
Evolution
Whole genome sequencing has made reconstruction of the B. anthracis phylogeny extremely accurate. A contributing factor to the reconstruction is that B. anthracis is monomorphic, meaning it has low genetic diversity, including the absence of any measurable lateral DNA transfer since its derivation as a species. The lack of diversity is due to a short evolutionary history that has precluded mutational saturation in single nucleotide polymorphisms. A short evolutionary time does not necessarily mean a short chronological time: when DNA is replicated, mistakes occur which become genetic mutations, and the buildup of these mutations over time leads to the evolution of a species. During its lifecycle, B. anthracis spends a significant amount of time in the soil spore reservoir stage, in which DNA replication does not occur. These prolonged periods of dormancy have greatly reduced the evolutionary rate of the organism.
Related strains
B. anthracis belongs to the B. cereus group, consisting of the strains B. cereus, B. anthracis, B. thuringiensis, B. mycoides, B. pseudomycoides, and B. weihenstephanensis. The first three strains are pathogenic or opportunistic to insects or mammals, while the last three are not considered pathogenic. The strains of this group are genetically and phenotypically heterogeneous overall, but some of the strains are more closely related and phylogenetically intermixed at the chromosome level. The B. cereus group generally exhibits complex genomes, and most members carry varying numbers of plasmids. B. cereus is a soil-dwelling bacterium which can colonize the gut of invertebrates as a symbiont and is a frequent cause of food poisoning. It produces an emetic toxin, enterotoxins, and other virulence factors. The enterotoxins and virulence factors are encoded on the chromosome, while the emetic toxin is encoded on a 270-kb plasmid, pCER270. B. thuringiensis is an insect pathogen and is characterized by the production of parasporal crystals of the insecticidal toxins Cry and Cyt. The genes encoding these proteins are commonly located on plasmids, which can be lost from the organism, making it indistinguishable from B. cereus. A phylogenomic analysis of the Cereus clade combined with average nucleotide identity (ANI) analysis revealed that the B. anthracis species also includes strains annotated as B. cereus and B. thuringiensis.
Pseudogene
PlcR is a global transcriptional regulator which controls most of the secreted virulence factors in B. cereus and B. thuringiensis. It is chromosomally encoded and is ubiquitous throughout the cell. In B. anthracis, however, the plcR gene contains a single base change at position 640, a nonsense mutation, which creates a dysfunctional protein.
While 1% of the B. cereus group carries an inactivated plcR gene, none of them carries the specific mutation found only in B. anthracis. The plcR gene is part of a two-gene operon with papR. The papR gene encodes a small protein which is secreted from the cell and then reimported as a processed heptapeptide, forming a quorum-sensing system. The lack of PlcR in B. anthracis is a principal characteristic differentiating it from other members of the B. cereus group. While B. cereus and B. thuringiensis depend on the plcR gene for expression of their virulence factors, B. anthracis relies on the pXO1 and pXO2 plasmids for its virulence. Bacillus cereus biovar anthracis, i.e. B. cereus carrying the two plasmids, is also capable of causing anthrax.
Clinical aspects
Pathogenesis
B. anthracis possesses an antiphagocytic capsule essential for full virulence. The organism also produces three plasmid-encoded exotoxin components: edema factor, a calmodulin-dependent adenylate cyclase that elevates intracellular cAMP and is responsible for the severe edema usually seen in B. anthracis infections; lethal factor, which is responsible for causing tissue necrosis; and protective antigen, so named because of its use in producing protective anthrax vaccines, which mediates cell entry of edema factor and lethal factor.
Manifestations in human disease
The symptoms of anthrax depend on the type of infection and can take anywhere from 1 day to more than 2 months to appear. All types of anthrax have the potential, if untreated, to spread throughout the body and cause severe illness and even death. Four forms of human anthrax disease are recognized based on their portal of entry.
Cutaneous, the most common form (95%), causes a localized, inflammatory, black, necrotic lesion (eschar). The sore most often appears on the face, neck, arms, or hands, and can develop within 1–7 days after exposure.
Inhalation, a rare but highly fatal form, is characterized by flu-like symptoms, chest discomfort, diaphoresis, and body aches. It usually develops about a week after exposure, but can take up to two months.
Gastrointestinal, a rare but also fatal type (killing about 25% of those infected), results from ingestion of spores. Symptoms include fever and chills, swelling of the neck, painful swallowing, hoarseness, nausea and vomiting (especially bloody vomiting), diarrhea, flushing and red eyes, and swelling of the abdomen. Symptoms can develop within 1–7 days.
Injection anthrax has symptoms similar to those of cutaneous anthrax, but it can spread throughout the body faster and is harder to recognize and treat. Symptoms include fever, chills, and a group of small bumps or blisters that may itch, appearing where the pathogen was injected; a painless sore with a black center appears after the blisters or bumps, with swelling around the sore and abscesses deep under the skin or in the muscle where the pathogen was injected. This type of entry has never been found in the US.
Prevention and treatment
A number of anthrax vaccines have been developed for preventive use in livestock and humans. Anthrax vaccine adsorbed (AVA) may protect against cutaneous and inhalation anthrax. However, this vaccine is only used for at-risk adults before exposure to anthrax and has not been approved for use after exposure. Infections with B. anthracis can be treated with β-lactam antibiotics such as penicillin, and with other antibiotics active against Gram-positive bacteria.
Penicillin-resistant B. anthracis can be treated with fluoroquinolones such as ciprofloxacin or tetracycline antibiotics such as doxycycline.
Laboratory research
Components of tea, such as polyphenols, can considerably inhibit the activity of both B. anthracis and its toxin; spores, however, are not affected. The addition of milk to the tea completely inhibits its antibacterial activity against anthrax. Activity against B. anthracis in the laboratory does not prove that drinking tea affects the course of an infection, since it is unknown how these polyphenols are absorbed and distributed within the body. B. anthracis can be cultured on PLET agar, a selective and differential medium designed to select specifically for B. anthracis.
Recent research
Advances in genotyping methods have led to improved genetic analysis for variation and relatedness. These methods include multiple-locus variable-number tandem repeat analysis (MLVA) and typing systems using canonical single-nucleotide polymorphisms. The Ames Ancestor chromosome was sequenced in 2003 and contributes to the identification of genes involved in the virulence of B. anthracis. Recently, B. anthracis isolate H9401 was obtained from a Korean patient suffering from gastrointestinal anthrax. The goal of the Republic of Korea is to use this strain as a challenge strain to develop a recombinant vaccine against anthrax. The H9401 strain was sequenced using 454 GS-FLX technology and analyzed using several bioinformatics tools to align, annotate, and compare H9401 to other B. anthracis strains. The sequencing coverage level suggests a molecular ratio of pXO1:pXO2:chromosome of 3:2:1, identical to that of the Ames Florida and Ames Ancestor strains. H9401 has 99.679% sequence homology with Ames Ancestor, with an amino acid sequence homology of 99.870%. H9401 has a circular chromosome (5,218,947 bp with 5,480 predicted ORFs), the pXO1 plasmid (181,700 bp with 202 predicted ORFs), and the pXO2 plasmid (94,824 bp with 110 predicted ORFs). Compared to the Ames Ancestor chromosome above, the H9401 chromosome is about 8.5 kb smaller. Due to its high pathogenicity and sequence similarity to the Ames Ancestor, H9401 will be used as a reference for testing the efficacy of candidate anthrax vaccines by the Republic of Korea. Since the genome of B. anthracis was sequenced, alternative ways to combat this disease are being explored. Bacteria have developed several strategies to evade recognition by the immune system. The predominant mechanism for avoiding detection, employed by all bacteria, is molecular camouflage: slight modifications in the outer layer render the bacteria practically invisible to lysozyme. Three of these modifications have been identified and characterized: (1) N-glycosylation of N-acetylmuramic acid, (2) O-acetylation of N-acetylmuramic acid, and (3) N-deacetylation of N-acetylglucosamine. Research during the last few years has focused on inhibiting such modifications. As a result, the enzymatic mechanism of the polysaccharide deacetylases that catalyze the removal of an acetyl group from N-acetylglucosamine and N-acetylmuramic acid, components of the peptidoglycan layer, is being investigated.
Host interactions
As with most other pathogenic bacteria, B. anthracis must acquire iron to grow and proliferate in its host environment. The most readily available iron sources for pathogenic bacteria are the heme groups used by the host in the transport of oxygen.
To scavenge heme from host hemoglobin and myoglobin, B. anthracis uses two secreted siderophore proteins, IsdX1 and IsdX2. These proteins can separate heme from hemoglobin, allowing surface proteins of B. anthracis to transport it into the cell. B. anthracis must evade the immune system to establish a successful infection. B. anthracis spores are immediately phagocytosed by macrophages and dendritic cells once they enter the host. The dendritic cells can control the infection through effective intracellular elimination, but the macrophages can transport the bacteria directly inside the host by crossing a thin layer of epithelial or endothelial cells to reach the circulatory system. Normally, in the phagocytosis process, the pathogen is digested upon internalization by the macrophage. However, rather than being degraded, the anthrax spores hijack the function of the macrophage to evade recognition by the host immune system. Phagocytosis of B. anthracis spores begins when the transmembrane receptors on the extracellular membrane of the phagocyte interact with a molecule on the surface of the spore. CD14, an extracellular protein embedded in the host membrane, binds to rhamnose residues of BclA, a glycoprotein of the B. anthracis exosporium, which promotes inside-out activation of the integrin Mac-1, enhancing spore internalization by macrophages. This cascade results in phagocytic cellular activation and induction of an inflammatory response.
Sampling
The presence of B. anthracis can be determined through samples taken on non-porous surfaces.
Historical background
French physician Casimir Davaine (1812–1882) demonstrated that the symptoms of anthrax were invariably accompanied by the microbe B. anthracis. German physician Aloys Pollender (1799–1879) is also credited with its discovery. B. anthracis was the first bacterium conclusively demonstrated to cause disease, by Robert Koch in 1876. The species name anthracis is from the Greek anthrax (ἄνθραξ), meaning "coal" and referring to the most common form of the disease, cutaneous anthrax, in which large, black skin lesions are formed. Throughout the 19th century, anthrax was an infection that involved several very important medical developments. The first vaccine containing live organisms was Louis Pasteur's veterinary anthrax vaccine.
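The genome figures quoted above allow a quick arithmetic check. The following Python sketch (illustrative only; the replicon sizes are the bp values stated in this article) confirms that the H9401 chromosome is about 8.5 kb smaller than the Ames Ancestor reference chromosome, while the two plasmids differ by only a handful of base pairs:

# Replicon sizes in bp, as quoted in the article text above.
ames_ancestor = {"chromosome": 5_227_419, "pXO1": 181_677, "pXO2": 94_830}
h9401 = {"chromosome": 5_218_947, "pXO1": 181_700, "pXO2": 94_824}

for replicon in ames_ancestor:
    diff_bp = ames_ancestor[replicon] - h9401[replicon]
    # Positive values mean the Ames Ancestor replicon is larger.
    print(f"{replicon}: Ames Ancestor minus H9401 = {diff_bp:+,} bp")

# chromosome: +8,472 bp (about 8.5 kb, matching the statement above)
# pXO1: -23 bp; pXO2: +6 bp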
Biology and health sciences
Gram-positive bacteria
Plants
78478
https://en.wikipedia.org/wiki/Canyon
Canyon
A canyon (from Spanish cañón; archaic British English spelling: cañon), gorge, or chasm is a deep cleft between escarpments or cliffs resulting from weathering and the erosive activity of a river over geologic time scales. Rivers have a natural tendency to cut through underlying surfaces, eventually wearing away rock layers as sediments are removed downstream. A river bed will gradually reach a baseline elevation, which is the same elevation as the body of water into which the river drains. The processes of weathering and erosion will form canyons when the river's headwaters and estuary are at significantly different elevations, particularly through regions where softer rock layers are intermingled with harder layers more resistant to weathering. A canyon may also refer to a rift between two mountain peaks, such as those in ranges including the Rocky Mountains, the Alps, the Himalayas, or the Andes. Usually, a river or stream carves out such splits between mountains. Examples of mountain-type canyons are Provo Canyon in Utah and Yosemite Valley in California's Sierra Nevada. Canyons within mountains, or gorges that have an opening on only one side, are called box canyons. Slot canyons are very narrow canyons that often have smooth walls. Steep-sided valleys in the seabed of the continental slope are referred to as submarine canyons. Unlike canyons on land, submarine canyons are thought to be formed by turbidity currents and landslides.
Etymology
The word canyon is Spanish in origin (cañón), with the same meaning. The word canyon is generally used in North America, while the words gorge and ravine (French in origin) are used in Europe and Oceania, though gorge and ravine are also used in some parts of North America. In the United States, place names generally use canyon in the southwest (due to its proximity to Spanish-speaking Mexico) and gorge in the northeast (which is closer to French Canada), with the rest of the country grading between these two according to geography. In Canada, a gorge is usually narrow while a ravine is more open and often wooded. The military-derived word defile is occasionally used in the United Kingdom. In South Africa, kloof (as in Krantzkloof Nature Reserve) is used along with canyon (as in Blyde River Canyon) and gorge (as in Oribi Gorge).
Formation
Most canyons were formed by a process of long-term erosion from a plateau or table-land level. The cliffs form because harder rock strata that are resistant to erosion and weathering remain exposed on the valley walls. Canyons are much more common in arid areas than in wet areas because physical weathering has a more localized effect in arid zones. The wind and water from the river combine to erode and cut away less resistant materials such as shales. The freezing and expansion of water also serve to help form canyons: water seeps into cracks between the rocks and freezes, pushing the rocks apart and eventually causing large chunks to break off the canyon walls, in a process known as frost wedging. Canyon walls are often formed of resistant sandstones or granite. Sometimes large rivers run through canyons as the result of gradual geological uplift. These are called entrenched rivers, because they are unable to easily alter their course. In the United States, the Colorado River in the Southwest and the Snake River in the Northwest are two examples of rivers entrenched by tectonic uplift. Canyons often form in areas of limestone rock. As limestone is soluble to a certain extent, cave systems form in the rock.
When a cave system collapses, a canyon is left, as in the Mendip Hills in Somerset and the Yorkshire Dales in Yorkshire, England.
Box canyon
A box canyon is a small canyon that is generally shorter and narrower than a river canyon, with steep walls on three sides, allowing access and egress only through the mouth of the canyon. Box canyons were frequently used in the western United States as convenient corrals, with their entrances fenced.
Largest
The definition of "largest canyon" is imprecise, because a canyon can be large by its depth, its length, or the total area of the canyon system. Also, the inaccessibility of the major canyons in the Himalaya contributes to their not being regarded as candidates for the biggest canyon. The definition of "deepest canyon" is similarly imprecise, especially if one includes mountain canyons as well as canyons cut through relatively flat plateaus (which have a somewhat well-defined rim elevation). The Yarlung Tsangpo Grand Canyon (or Tsangpo Canyon), along the Yarlung Tsangpo River in Tibet, China, is regarded by some as the deepest canyon on Earth, at about 5,500 m. It is slightly longer than the Grand Canyon in the United States. Others consider the Kali Gandaki Gorge in midwest Nepal to be the deepest canyon, with a roughly 6,400 m difference between the level of the river and the peaks surrounding it. Vying for the deepest canyon in the Americas are the Cotahuasi Canyon and Colca Canyon, in southern Peru. Both have been measured at over 3,500 m deep. The Grand Canyon of northern Arizona in the United States, with an average depth of about 1,600 m and an estimated volume of 4.17 trillion cubic metres, is one of the world's largest canyons. It was among the 28 finalists of the New 7 Wonders of Nature worldwide poll. (Some referred to it as one of the seven natural wonders of the world.) The largest canyon in Europe is the Tara River Canyon. The largest canyon in Africa is the Fish River Canyon in Namibia. In August 2013, the discovery of Greenland's Grand Canyon was reported, based on the analysis of data from Operation IceBridge. It is located under an ice sheet. At about 750 km long, it is believed to be the longest canyon in the world. Despite not being quite as deep or long as the Grand Canyon, the Capertee Valley in Australia is actually 1 km wider than the Grand Canyon, making it the widest canyon in the world.
Cultural significance
Some canyons have notable cultural significance. Evidence of archaic humans has been discovered in Africa's Olduvai Gorge. In the southwestern United States, canyons are important archeologically because of the many cliff dwellings built in such areas, largely by the ancient Pueblo people who were their first inhabitants.
Notable examples
The following list contains only the most notable canyons of the world, grouped by region.
Africa
Namibia – Fish River Canyon
South Africa – Blyde River Canyon (Mpumalanga); Oribi Gorge (KwaZulu-Natal)
Tanzania – Olduvai Gorge
Americas
Argentina – Atuel Canyon (Mendoza Province)
Brazil – Itaimbezinho Canyon (Rio Grande do Sul)
Bolivia – Torotoro River Canyon
Canada – Grand Canyon of the Stikine (British Columbia); Horseshoe Canyon (Alberta); Niagara Gorge (Ontario); Ouimet Canyon (Ontario); Fraser Canyon (British Columbia); Coaticook Gorge (Quebec); Thompson Canyon (British Columbia)
Colombia – Chicamocha Canyon (Santander Department)
Mexico – Barranca de Oblatos (Jalisco); Copper Canyon (Chihuahua); Sumidero Canyon (Chiapas)
Peru – Cañón del Pato (Ancash Region); Colca Canyon (Arequipa Region); Cotahuasi Canyon (Arequipa Region)
United States – American Fork Canyon, Utah; Antelope Canyon, Arizona; Apple River Canyon, Illinois; Ausable Chasm, New York; Big Cottonwood Canyon, Utah; Black Canyon of the Gunnison, Colorado; Black Hand Gorge, Ohio; Blackwater Canyon, West Virginia; Blue Creek Canyon, Colorado; Bluejohn Canyon, Utah; Box Canyon, Colorado; Breaks Canyon, Kentucky and Virginia; Butterfield Canyon, Utah; Cane Creek, Alabama; Canyon de Chelly, Arizona; Canyonlands National Park (canyons of the Colorado River and its main tributary the Green River), Utah; Cheat Canyon, West Virginia; Clifton Gorge, Ohio; Clifty Canyon, Indiana; Cloudland Canyon, Georgia; Columbia River Gorge, Oregon and Washington; Conkle's Hollow, Ohio; Cottonwood Canyon, Utah; Crooked River Gorge, Oregon; Death Hollow, Utah; Desolation Canyon, Utah; Dismals Canyon, Alabama; Flaming Gorge, Wyoming and Utah; Flume Gorge, New Hampshire; Glen Canyon, Utah and Arizona; Glenwood Canyon, Colorado; Gore Canyon, Colorado; Grand Canyon, Arizona; Grand Canyon of the Yellowstone, Wyoming; Grandstaff Canyon, Utah; Guffey Gorge, Colorado; Gulf Hagas, Maine; Hells Canyon, Idaho, Oregon, and Washington; Horse Canyon, Utah; Kern River Canyon, California; Kings Canyon, Utah; Kings Canyon, California; Leslie Gulch, Oregon; Linville Gorge, North Carolina; Little Cottonwood Canyon, Utah; Little Grand Canyon, Illinois; Little River Canyon, Alabama; Logan Canyon, Utah; Mather Gorge, Maryland; Marysvale Canyon, Utah; McCormick's Creek Canyon, Indiana; Millcreek Canyon, Utah; New River Gorge, West Virginia; Ninemile Canyon, Utah; Ogden Canyon, Utah; Oneonta Gorge, Oregon; Palo Duro Canyon, Texas; Parleys Canyon, Utah; Pine Creek Gorge, Pennsylvania; Poudre Canyon, Colorado; Providence Canyon, Georgia; Provo Canyon, Utah; Quechee Gorge, Vermont; Red River Gorge, Kentucky; Rio Grande Gorge, New Mexico; Royal Gorge, Colorado; Ruby Canyon, Utah; Snake River Canyon, Idaho; Snow Canyon, Utah; Stillwater Canyon, Utah; Tallulah Gorge, Georgia; Tenaya Canyon, California; Tennessee River Gorge, Alabama and Tennessee; The Trough, West Virginia; Unaweep Canyon, Colorado; Uncompahgre Gorge, Colorado; Waimea Canyon, Hawaii; Walls of Jericho, Alabama; Weber Canyon, Utah; Westwater Canyon, Utah; Wolverine Canyon, Utah; White Canyon, Utah; Zion Canyon, Utah
Asia
China – Three Gorges (Chongqing); Tiger Leaping Gorge (Yunnan); Yarlung Zangbo Grand Canyon (Tibet Autonomous Region)
India – Gandikota (Kadapa District, Andhra Pradesh); Raneh Falls (Chatarpur district, Madhya Pradesh); Idukki (Western Ghats, Kerala)
Indonesia – Cukang Taneuh (Pangandaran, West Java)
Others: Afghanistan – Tang-e Gharu; Iran – Haygher Canyon (Fars province); Japan – Kiyotsu Gorge (Niigata Prefecture); Japan – Tenryū-kyō (Nagano Prefecture); Kazakhstan – Charyn Canyon; Nepal – Kali Gandaki Gorge; Russia – Delyun-Uran (Vitim River); Pakistan – Indus River Gorge through the Himalaya; Taiwan – Taroko Gorge (Hualien County)
Turkey – Ulubey Canyon (Uşak Province); Ihlara Valley (Aksaray Province)
Europe
United Kingdom – Avon Gorge, Bristol; Burrington Combe, Somerset; Cheddar Gorge, Somerset; Corrieshalloch Gorge, Ullapool; Ebbor Gorge, Somerset; Gordale Scar, North Yorkshire; Winnats Pass, Derbyshire
France – Gorges de l'Ardèche (Auvergne-Rhône-Alpes); Gorges de Daluis (Provence-Alpes-Côte d'Azur); Gorges du Tarn (Occitanie); Grands Goulets (Auvergne-Rhône-Alpes); Verdon Gorge (Alpes-de-Haute-Provence)
Ukraine – Aktove canyon; Buky Canyon; Dniester Canyon
Others: Albania – Osum Canyon, Kanionet e Skraparit; Albania/Montenegro – Cem; Bosnia and Herzegovina – Rakitnica, Drina, Neretva, Vrbas, Unac, Čude, Ugar, Prača; Bulgaria – Trigrad Gorge, Kresna Gorge, Iskar Gorge; Finland – Korouoma Canyon, Kevo Canyon; Germany – Partnach Gorge; Greece – Vikos Gorge, Samaria Gorge; Greenland – Greenland's Grand Canyon; Iceland – Fjaðrárgljúfur Canyon; Kosovo – Rugova Canyon, White Drin Canyon, Kacanik Gorge; North Macedonia – Matka Canyon; Montenegro – Morača, Piva; Montenegro/Bosnia and Herzegovina – Tara River Canyon; Montenegro/Serbia – Ibar; Norway – Sautso Canyon; Poland/Slovakia – Dunajec River Gorge; Russia – Sulak Canyon (Dagestan); Serbia – Lazar's Canyon; Serbia/Bosnia and Herzegovina – Lim; Serbia/Romania – Iron Gates; Slovenia – Vintgar Gorge; Switzerland – Aare Gorge
Oceania
Australia – Joffre Gorge (Karijini National Park, Western Australia); Katherine Gorge (Northern Territory); Kings Canyon (Northern Territory); Murchison River Gorge (Western Australia); Jamison Valley (New South Wales); Capertee Valley (New South Wales), the world's second-widest canyon; Shoalhaven Gorge (New South Wales); Werribee Gorge (Victoria); the Slot Canyons of the Blue Mountains (New South Wales)
New Zealand – Manawatū Gorge (North Island); Skippers Canyon (South Island)
Solar System
Ithaca Chasma, on Saturn's moon Tethys; Valles Marineris, on Mars, the largest known canyon in the Solar System; Vid Flumina, on Saturn's largest moon Titan, the only known liquid-floored canyon in the Solar System besides those on Earth; Messina Chasmata, on Uranus' moon Titania. Venus has many craters and canyons on its surface; the troughs on the planet are part of a system of canyons more than 6,400 km long.
Physical sciences
Fluvial landforms
null
78490
https://en.wikipedia.org/wiki/Exosphere
Exosphere
The exosphere is a thin, atmosphere-like volume surrounding a planet or natural satellite where molecules are gravitationally bound to that body, but where the density is so low that the molecules are essentially collisionless. In the case of bodies with substantial atmospheres, such as Earth, the exosphere is the uppermost layer, where the atmosphere thins out and merges with outer space. It is located directly above the thermosphere. Very little is known about it due to a lack of research. Mercury, the Moon, Ceres, Europa, and Ganymede have surface boundary exospheres, which are exospheres without a denser atmosphere underneath. The Earth's exosphere is mostly hydrogen and helium, with some heavier atoms and molecules near the base.
Surface boundary exosphere
Mercury, Ceres, and several large natural satellites, such as the Moon, Europa, and Ganymede, have exospheres without a denser atmosphere underneath, referred to as surface boundary exospheres. Here, molecules are ejected on elliptic trajectories until they collide with the surface. Smaller bodies such as asteroids, in which the molecules emitted from the surface escape to space, are not considered to have exospheres.
Earth's exosphere
The most common molecules within Earth's exosphere are those of the lightest atmospheric gases. Hydrogen is present throughout the exosphere, with some helium, carbon dioxide, and atomic oxygen near its base. Because it can be hard to define the boundary between the exosphere and outer space, the exosphere may be considered a part of the interplanetary medium or outer space. Earth's exosphere produces Earth's geocorona.
Lower boundary
The lower boundary of the exosphere is called the thermopause or exobase. It is also called the critical altitude, as this is the altitude where barometric conditions no longer apply. Atmospheric temperature becomes nearly constant above this altitude. On Earth, the altitude of the exobase ranges from about 500 to 1,000 km, depending on solar activity. The exobase can be defined in one of two ways. If we define the exobase as the height at which upward-traveling molecules experience one collision on average, then at this position the mean free path of a molecule is equal to one pressure scale height, as the following shows. Consider a volume of air with horizontal area $A$ and height equal to the mean free path $l$, at pressure $p$ and temperature $T$. For an ideal gas, the number of molecules contained in it is $N = pAl/(k_B T)$, where $k_B$ is the Boltzmann constant. From the requirement that each molecule traveling upward undergoes on average one collision, the pressure is $p = m_A g N / A$, where $m_A$ is the mean molecular mass of the gas and $g$ the gravitational acceleration. Solving these two equations gives $l = k_B T/(m_A g)$, which is the equation for the pressure scale height. As the pressure scale height is almost equal to the density scale height of the primary constituent, and because the Knudsen number is the ratio of mean free path and typical density fluctuation scale, this means that the exobase lies in the region where $\mathrm{Kn} \simeq 1$. The fluctuation in the height of the exobase is important because this determines the atmospheric drag on satellites, eventually causing them to fall from orbit if no action is taken to maintain the orbit.
Upper boundary
In principle, the exosphere covers distances where particles are still gravitationally bound to Earth, i.e. particles still have ballistic orbits that will take them back towards Earth.
The upper boundary of the exosphere can be defined as the distance at which the influence of solar radiation pressure on atomic hydrogen exceeds that of Earth's gravitational pull. This happens at half the distance to the Moon, or somewhere in the neighborhood of 190,000 km. The exosphere, observable from space as the geocorona, is seen to extend to at least 10,000 km from Earth's surface.
Exosphere of other celestial bodies
If the atmosphere of a celestial body is very tenuous, like the atmosphere of the Moon or that of Mercury, the whole atmosphere is considered an exosphere.
The exosphere of Mercury
Many hypotheses exist about the formation of the surface boundary exosphere of Mercury, which has been noted to include elements such as sodium (Na), potassium (K), and calcium (Ca). Each material has been suggested to result from processes such as impacts, the solar wind, and degassing from the body itself, which release the atoms or molecules that form the planet's exosphere. Meteoroids have been reported to commonly impact the surface of Mercury at speeds of up to 80 km/s, which can vaporize both the meteoroid and the surface regolith on contact. These expulsions can result in clouds of mixed materials due to the force of the impact, which are capable of transporting gaseous materials and compounds to Mercury's exosphere. During the impact, the constituents of the colliding bodies are mostly broken down into atoms rather than molecules; molecules can then re-form during a cooling, quenching process. Such materials have been observed as Na, NaOH, and O2. However, it is theorized that, though different forms of sodium are released into the Mercury exosphere via meteoroid impact, this is a small driver of the overall concentration of sodium and potassium atoms. Calcium is more likely to be a result of impacts, though its transport is thought to occur through photolysis of its oxides or hydroxides rather than through atoms released at the moment of impact, as with sodium, potassium, and iron (Fe). Another possible source of Mercury's exosphere is the relationship between its unique magnetosphere and the solar wind. The magnetosphere of this body is hypothesized to be an incomplete shield against the weathering of the solar wind. If accurate, there are openings in the magnetosphere through which the solar wind is able to bypass it, reach the surface of Mercury, and sputter components of the surface that become possible sources of material in the exosphere. This weathering is capable of eroding elements, such as sodium, and transporting them to the atmosphere. However, this occurrence is not constant, and it is unable to account for all atoms or molecules of the exosphere.
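As a rough numerical illustration of the exobase condition derived above, the pressure scale height $l = k_B T/(m_A g)$ can be evaluated directly. The Python sketch below uses assumed values (atomic oxygen at about 1,000 K near 500 km altitude; none of these numbers come from this article):

# Pressure scale height l = k_B * T / (m * g) near the exobase (illustrative values).
k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 1000.0               # assumed thermospheric temperature, K
m_O = 16 * 1.66054e-27   # mass of atomic oxygen, kg
g = 8.43                 # gravitational acceleration at ~500 km altitude, m/s^2

l = k_B * T / (m_O * g)
print(f"scale height = {l / 1000:.0f} km")  # about 62 km for these assumptions

At the exobase this scale height equals the mean free path, so an upward-traveling oxygen atom there suffers on average about one more collision; lighter constituents such as hydrogen have proportionally larger scale heights.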
Physical sciences
Atmosphere: General
Earth science
78517
https://en.wikipedia.org/wiki/Fluorocarbon
Fluorocarbon
Fluorocarbons are chemical compounds with carbon–fluorine bonds. Compounds that contain many C–F bonds often have distinctive properties, e.g., enhanced stability, volatility, and hydrophobicity. Several fluorocarbons and their derivatives are commercial polymers, refrigerants, drugs, and anesthetics.
Nomenclature
Perfluorocarbons, or PFCs, are organofluorine compounds with the formula CxFy, meaning they contain only carbon and fluorine. The terminology is not strictly followed, and many fluorine-containing organic compounds are also called fluorocarbons. Compounds with the prefix perfluoro- are hydrocarbons, including those with heteroatoms, wherein all C–H bonds have been replaced by C–F bonds. The term fluorocarbon covers perfluoroalkanes, fluoroalkenes, fluoroalkynes, and perfluoroaromatic compounds.
Perfluoroalkanes
Chemical properties
Perfluoroalkanes are very stable because of the strength of the carbon–fluorine bond, one of the strongest in organic chemistry. Its strength is a result of the electronegativity of fluorine imparting partial ionic character through partial charges on the carbon and fluorine atoms, which shorten and strengthen the bond (compared to carbon–hydrogen bonds) through favorable covalent interactions. Additionally, multiple carbon–fluorine bonds increase the strength and stability of other nearby carbon–fluorine bonds on the same geminal carbon, as the carbon has a higher positive partial charge. Furthermore, multiple carbon–fluorine bonds also strengthen the "skeletal" carbon–carbon bonds through the inductive effect. Therefore, saturated fluorocarbons are more chemically and thermally stable than their corresponding hydrocarbon counterparts, and indeed than any other organic compound. They are susceptible to attack by very strong reductants, e.g., the Birch reduction, and by very specialized organometallic complexes. Fluorocarbons are colorless and have high density, up to over twice that of water. They are not miscible with most organic solvents (e.g., ethanol, acetone, ethyl acetate, and chloroform), but are miscible with some hydrocarbons (e.g., hexane in some cases). They have very low solubility in water, and water has a very low solubility in them (on the order of 10 ppm). They have low refractive indices. As the high electronegativity of fluorine reduces the polarizability of the atom, fluorocarbons are only weakly susceptible to the fleeting dipoles that form the basis of the London dispersion force. As a result, fluorocarbons have low intermolecular attractive forces and are lipophobic in addition to being hydrophobic and non-polar. Reflecting these weak intermolecular forces, the compounds exhibit low viscosities when compared to liquids of similar boiling points, low surface tension, and low heats of vaporization. The low attractive forces in fluorocarbon liquids make them compressible (low bulk modulus) and able to dissolve gas relatively well. Smaller fluorocarbons are extremely volatile. There are five perfluoroalkane gases: tetrafluoromethane (bp −128 °C), hexafluoroethane (bp −78.2 °C), octafluoropropane (bp −36.5 °C), perfluoro-n-butane (bp −2.2 °C), and perfluoro-iso-butane (bp −1 °C). Nearly all other fluoroalkanes are liquids; the most notable exception is perfluorocyclohexane, which sublimes at 51 °C. Fluorocarbons also have low surface energies and high dielectric strengths.
Flammability
In the 1960s there was considerable interest in fluorocarbons as anesthetics.
The research did not produce any anesthetics, but it did include tests of flammability, which showed that the tested fluorocarbons were not flammable in air in any proportion, though most of the tests were conducted in pure oxygen or pure nitrous oxide (gases of importance in anesthesiology). In 1993, 3M considered fluorocarbons as fire extinguishants to replace CFCs. This extinguishing effect has been attributed to their high heat capacity, which takes heat away from the fire. It has been suggested that an atmosphere containing a significant percentage of perfluorocarbons on a space station or similar would prevent fires altogether. When combustion does occur, toxic fumes result, including carbonyl fluoride, carbon monoxide, and hydrogen fluoride.
Gas dissolving properties
Perfluorocarbons dissolve relatively high volumes of gases. The high solubility of gases is attributed to the weak intermolecular interactions in these fluorocarbon fluids. The table shows values for the mole fraction, $x$, of nitrogen dissolved, calculated from the blood–gas partition coefficient, at 298.15 K (25 °C) and 0.101325 MPa.
Manufacture
The development of the fluorocarbon industry coincided with World War II. Prior to that, fluorocarbons were prepared by reaction of fluorine with the hydrocarbon, i.e., direct fluorination. Because C–C bonds are readily cleaved by fluorine, direct fluorination mainly affords smaller perfluorocarbons, such as tetrafluoromethane, hexafluoroethane, and octafluoropropane.
Fowler process
A major breakthrough that allowed the large-scale manufacture of fluorocarbons was the Fowler process. In this process, cobalt trifluoride is used as the source of fluorine. Illustrative is the synthesis of perfluorohexane:
C6H14 + 28 CoF3 → C6F14 + 14 HF + 28 CoF2
The resulting cobalt difluoride is then regenerated, sometimes in a separate reactor:
2 CoF2 + F2 → 2 CoF3
Industrially, both steps are combined, for example in the manufacture of the Flutec range of fluorocarbons by F2 Chemicals Ltd, using a vertical stirred-bed reactor, with hydrocarbon introduced at the bottom and fluorine introduced halfway up the reactor. The fluorocarbon vapor is recovered from the top.
Electrochemical fluorination
Electrochemical fluorination (ECF), also known as the Simons process, involves electrolysis of a substrate dissolved in hydrogen fluoride. As fluorine is itself manufactured by the electrolysis of hydrogen fluoride, ECF is a rather more direct route to fluorocarbons. The process proceeds at low voltage (5–6 V) so that free fluorine is not liberated. The choice of substrate is restricted, as ideally it should be soluble in hydrogen fluoride. Ethers and tertiary amines are typically employed. To make perfluorohexane, for example, trihexylamine is used; the corresponding perfluorinated amine is also produced.
Environmental and health concerns
Fluoroalkanes are generally inert and non-toxic. Fluoroalkanes are not ozone depleting, as they contain no chlorine or bromine atoms, and they are sometimes used as replacements for ozone-depleting chemicals. The term fluorocarbon is used rather loosely to include any chemical containing fluorine and carbon, including chlorofluorocarbons, which are ozone depleting. Perfluoroalkanes used in medical procedures are rapidly excreted from the body, primarily via expiration, with the rate of excretion a function of the vapour pressure; the half-life for octafluoropropane is less than 2 minutes, compared to about a week for perfluorodecalin.
Low-boiling perfluoroalkanes are potent greenhouse gases, in part due to their very long atmospheric lifetimes, and their use is covered by the Kyoto Protocol. The global warming potential (relative to that of carbon dioxide) of many gases can be found in the IPCC 5th assessment report, with an extract below for a few perfluoroalkanes. The aluminium smelting industry has been a major source of atmospheric perfluorocarbons (tetrafluoromethane and hexafluoroethane especially), produced as a by-product of the electrolysis process. However, the industry has been actively involved in reducing emissions in recent years.
Applications
As they are inert, perfluoroalkanes have essentially no chemical uses, but their physical properties have led to their use in many diverse applications. These include: perfluorocarbon tracers, liquid dielectrics, chemical vapor deposition, organic Rankine cycles, fluorous biphasic catalysis, cosmetics, and ski waxes, as well as several medical uses: contrast-enhanced ultrasound, oxygen therapeutics, blood substitutes, liquid breathing, eye surgery, and tattoo removal.
Fluoroalkenes and fluoroalkynes
Unsaturated fluorocarbons are far more reactive than fluoroalkanes. Although difluoroacetylene is unstable (as is typical for related alkynes, see dichloroacetylene), hexafluoro-2-butyne and related fluorinated alkynes are well known.
Polymerization
Fluoroalkenes polymerize more exothermically than normal alkenes. Unsaturated fluorocarbons have a driving force towards sp3 hybridization due to the electronegative fluorine atoms seeking a greater share of bonding electrons with reduced s character in orbitals. The most famous member of this class is tetrafluoroethylene, which is used to manufacture polytetrafluoroethylene (PTFE), better known under the trade name Teflon.
Environmental and health concerns
Fluoroalkenes and fluorinated alkynes are reactive, and many are toxic, for example perfluoroisobutene. To produce polytetrafluoroethylene, various fluorinated surfactants are used in the process known as emulsion polymerization, and the surfactant included in the polymer can bioaccumulate.
Perfluoroaromatic compounds
Perfluoroaromatic compounds contain only carbon and fluorine, like other fluorocarbons, but also contain an aromatic ring. The three most important examples are hexafluorobenzene, octafluorotoluene, and octafluoronaphthalene. Perfluoroaromatic compounds can be manufactured via the Fowler process, like fluoroalkanes, but the conditions must be adjusted to prevent full fluorination. They can also be made by heating the corresponding perchloroaromatic compound with potassium fluoride at high temperature (typically 500 °C), during which the chlorine atoms are replaced by fluorine atoms. A third route is defluorination of the corresponding fluoroalkane; for example, octafluorotoluene can be made from perfluoromethylcyclohexane by heating to 500 °C with a nickel or iron catalyst. Perfluoroaromatic compounds are relatively volatile for their molecular weight, with melting and boiling points similar to those of the corresponding aromatic compound, as the table below shows. They have high density and are non-flammable. For the most part, they are colorless liquids. Unlike the perfluoroalkanes, they tend to be miscible with common solvents.
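As an illustration of how global warming potentials are applied, the Python sketch below converts hypothetical smelter emissions of the two main by-products into CO2 equivalents. The GWP values are approximate 100-year figures of the kind tabulated in the IPCC 5th assessment report (assumed here, since the extract itself is not reproduced above):

# Convert perfluorocarbon emissions to CO2-equivalents using 100-year GWPs.
# GWP values are approximate IPCC AR5 figures (an assumption, not from this text).
gwp_100yr = {"CF4": 6630, "C2F6": 11100}

emissions_tonnes = {"CF4": 1.0, "C2F6": 0.1}  # hypothetical smelter emissions

co2e = sum(emissions_tonnes[gas] * gwp_100yr[gas] for gas in emissions_tonnes)
print(f"CO2-equivalent: {co2e:,.0f} tonnes")  # 1 t CF4 + 0.1 t C2F6 = 7,740 t CO2e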
Physical sciences
Halocarbons
Chemistry
78534
https://en.wikipedia.org/wiki/Geomorphology
Geomorphology
Geomorphology is the scientific study of the origin and evolution of topographic and bathymetric features generated by physical, chemical, or biological processes operating at or near Earth's surface. Geomorphologists seek to understand why landscapes look the way they do, to understand landform and terrain history and dynamics, and to predict changes through a combination of field observations, physical experiments, and numerical modeling. Geomorphologists work within disciplines such as physical geography, geology, geodesy, engineering geology, archaeology, climatology, and geotechnical engineering. This broad base of interests contributes to many research styles and interests within the field.
Overview
Earth's surface is modified by a combination of surface processes that shape landscapes, and geologic processes that cause tectonic uplift and subsidence and shape the coastal geography. Surface processes comprise the action of water, wind, ice, wildfire, and life on the surface of the Earth, along with chemical reactions that form soils and alter material properties, the stability and rate of change of topography under the force of gravity, and other factors, such as (in the very recent past) human alteration of the landscape. Many of these factors are strongly mediated by climate. Geologic processes include the uplift of mountain ranges, the growth of volcanoes, isostatic changes in land surface elevation (sometimes in response to surface processes), and the formation of deep sedimentary basins where the surface of the Earth drops and is filled with material eroded from other parts of the landscape. The Earth's surface and its topography therefore are an intersection of climatic, hydrologic, and biologic action with geologic processes, or alternatively stated, the intersection of the Earth's lithosphere with its hydrosphere, atmosphere, and biosphere. The broad-scale topographies of the Earth illustrate this intersection of surface and subsurface action. Mountain belts are uplifted due to geologic processes. Denudation of these high, uplifted regions produces sediment that is transported and deposited elsewhere within the landscape or off the coast. On progressively smaller scales, similar ideas apply, where individual landforms evolve in response to the balance of additive processes (uplift and deposition) and subtractive processes (subsidence and erosion). Often, these processes directly affect each other: ice sheets, water, and sediment are all loads that change topography through flexural isostasy. Topography can modify the local climate, for example through orographic precipitation, which in turn modifies the topography by changing the hydrologic regime in which it evolves. Many geomorphologists are particularly interested in the potential for feedbacks between climate and tectonics, mediated by geomorphic processes. In addition to these broad-scale questions, geomorphologists address issues that are more specific or more local. Glacial geomorphologists investigate glacial deposits such as moraines, eskers, and proglacial lakes, as well as glacial erosional features, to build chronologies of both small glaciers and large ice sheets and understand their motions and effects upon the landscape. Fluvial geomorphologists focus on rivers: how they transport sediment, migrate across the landscape, cut into bedrock, respond to environmental and tectonic changes, and interact with humans.
Soil geomorphologists investigate soil profiles and chemistry to learn about the history of a particular landscape and understand how climate, biota, and rock interact. Other geomorphologists study how hillslopes form and change. Still others investigate the relationships between ecology and geomorphology. Because geomorphology is defined to comprise everything related to the surface of the Earth and its modification, it is a broad field with many facets. Geomorphologists use a wide range of techniques in their work. These may include fieldwork and field data collection, the interpretation of remotely sensed data, geochemical analyses, and the numerical modelling of the physics of landscapes. Geomorphologists may rely on geochronology, using dating methods to measure the rate of changes to the surface. Terrain measurement techniques are vital to quantitatively describing the form of the Earth's surface and include differential GPS, remotely sensed digital terrain models, and laser scanning, used to quantify and study the surface and to generate illustrations and maps. Practical applications of geomorphology include hazard assessment (such as landslide prediction and mitigation), river control and stream restoration, and coastal protection. Planetary geomorphology studies landforms on other terrestrial planets such as Mars. Indications of the effects of wind, fluvial, glacial, mass-wasting, meteor-impact, tectonic, and volcanic processes are studied. This effort not only helps better understand the geologic and atmospheric history of those planets but also extends geomorphological study of the Earth. Planetary geomorphologists often use Earth analogues to aid in their study of the surfaces of other planets.
History
Other than some notable exceptions in antiquity, geomorphology is a relatively young science, which grew along with interest in other aspects of the earth sciences in the mid-19th century. This section provides a very brief outline of some of the major figures and events in its development.
Ancient geomorphology
The study of landforms and the evolution of the Earth's surface can be dated back to scholars of Classical Greece. In the 5th century BC, the Greek historian Herodotus argued from observations of soils that the Nile delta was actively growing into the Mediterranean Sea, and estimated its age. In the 4th century BC, the Greek philosopher Aristotle speculated that due to sediment transport into the sea, those seas would eventually fill while the land lowered. He claimed that this would mean that land and water would eventually swap places, whereupon the process would begin again in an endless cycle. The Encyclopedia of the Brethren of Purity, published in Arabic at Basra during the 10th century, also discussed the cyclically changing positions of land and sea, with rocks breaking down and being washed into the sea, their sediment eventually rising to form new continents. The medieval Persian Muslim scholar Abū Rayhān al-Bīrūnī (973–1048), after observing rock formations at the mouths of rivers, hypothesized that the Indian Ocean once covered all of India. In his De Natura Fossilium of 1546, the German metallurgist and mineralogist Georgius Agricola (1494–1555) wrote about erosion and natural weathering. Another early theory of geomorphology was devised by the Song dynasty Chinese scientist and statesman Shen Kuo (1031–1095). This was based on his observation of marine fossil shells in a geological stratum of a mountain hundreds of miles from the Pacific Ocean.
Noticing bivalve shells running in a horizontal span along the cut section of a cliffside, he theorized that the cliff was once the prehistoric location of a seashore that had shifted hundreds of miles over the centuries. He inferred that the land was reshaped and formed by soil erosion of the mountains and by deposition of silt, after observing strange natural erosion of the Taihang Mountains and the Yandang Mountain near Wenzhou. Furthermore, he promoted the theory of gradual climate change over centuries, once ancient petrified bamboos were found preserved underground in the dry northern climate zone of Yanzhou, which is now modern-day Yan'an, Shaanxi province. Previous Chinese authors also presented ideas about changing landforms. The scholar-official Du Yu (222–285) of the Western Jin dynasty predicted that two monumental stelae recording his achievements, one buried at the foot of a mountain and the other erected at the top, would eventually change their relative positions over time, as would hills and valleys. The Daoist alchemist Ge Hong (284–364) created a fictional dialogue in which the immortal Magu explained that the territory of the East China Sea was once a land filled with mulberry trees.
Early modern geomorphology
The term geomorphology seems to have been first used by Laumann in an 1858 work written in German. Keith Tinkler has suggested that the word came into general use in English, German, and French after John Wesley Powell and W. J. McGee used it during the International Geological Conference of 1891. John Edward Marr, in his The Scientific Study of Scenery, described his book as 'an Introductory Treatise on Geomorphology, a subject which has sprung from the union of Geology and Geography'. An early popular geomorphic model was the geographical cycle, or cycle of erosion, model of broad-scale landscape evolution developed by William Morris Davis between 1884 and 1899. It was an elaboration of the uniformitarianism theory that had first been proposed by James Hutton (1726–1797). With regard to valley forms, for example, uniformitarianism posited a sequence in which a river runs through a flat terrain, gradually carving an increasingly deep valley, until the side valleys eventually erode, flattening the terrain again, though at a lower elevation. It was thought that tectonic uplift could then start the cycle over. In the decades following Davis's development of this idea, many of those studying geomorphology sought to fit their findings into this framework, known today as "Davisian". Davis's ideas are of historical importance, but have been largely superseded today, mainly due to their lack of predictive power and qualitative nature. In the 1920s, Walther Penck developed an alternative model to Davis's. Penck thought that landform evolution was better described as an alternation between ongoing processes of uplift and denudation, as opposed to Davis's model of a single uplift followed by decay. He also emphasised that in many landscapes slope evolution occurs by backwearing of rocks, not by Davisian-style surface lowering, and his science tended to emphasise surface process over understanding in detail the surface history of a given locality. Penck was German, and during his lifetime his ideas were at times rejected vigorously by the English-speaking geomorphology community. His early death, Davis's dislike for his work, and his at-times-confusing writing style likely all contributed to this rejection.
Both Davis and Penck were trying to place the study of the evolution of the Earth's surface on a more generalized, globally relevant footing than it had been previously. In the early 19th century, authors – especially in Europe – had tended to attribute the form of landscapes to local climate, and in particular to the specific effects of glaciation and periglacial processes. In contrast, both Davis and Penck were seeking to emphasize the importance of evolution of landscapes through time and the generality of the Earth's surface processes across different landscapes under different conditions. During the early 1900s, the study of regional-scale geomorphology was termed "physiography". Physiography was later considered to be a contraction of "physical" and "geography", and therefore synonymous with physical geography, and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline. Some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions, while a conflicting trend among geographers was to equate physiography with "pure morphology", separated from its geological heritage. In the period following World War II, the emergence of process, climatic, and quantitative studies led to a preference by many earth scientists for the term "geomorphology", which suggested an analytical approach to landscapes rather than a descriptive one. Climatic geomorphology During the age of New Imperialism in the late 19th century, European explorers and scientists traveled across the globe bringing descriptions of landscapes and landforms. As geographical knowledge increased over time, these observations were systematized in a search for regional patterns. Climate thus emerged as the prime factor for explaining landform distribution at a grand scale. The rise of climatic geomorphology was foreshadowed by the work of Wladimir Köppen, Vasily Dokuchaev and Andreas Schimper. William Morris Davis, the leading geomorphologist of his time, recognized the role of climate by complementing his "normal" temperate climate cycle of erosion with arid and glacial ones. Nevertheless, interest in climatic geomorphology was also a reaction against Davisian geomorphology, which by the mid-20th century was considered both un-innovative and dubious. Early climatic geomorphology developed primarily in continental Europe, while in the English-speaking world the tendency was not explicit until L.C. Peltier's 1950 publication on a periglacial cycle of erosion. Climatic geomorphology was criticized in a 1969 review article by process geomorphologist D.R. Stoddart. The criticism by Stoddart proved "devastating", sparking a decline in the popularity of climatic geomorphology in the late 20th century. Stoddart criticized climatic geomorphology for applying supposedly "trivial" methodologies in establishing landform differences between morphoclimatic zones, for being linked to Davisian geomorphology, and for allegedly neglecting the fact that the physical laws governing processes are the same across the globe. In addition, some conceptions of climatic geomorphology, such as the idea that chemical weathering is more rapid in tropical climates than in cold climates, proved not to be straightforwardly true. Quantitative and process geomorphology Geomorphology began to be put on a solid quantitative footing in the middle of the 20th century.
Following the early work of Grove Karl Gilbert around the turn of the 20th century, a group of mainly American natural scientists, geologists and hydraulic engineers including William Walden Rubey, Ralph Alger Bagnold, Hans Albert Einstein, Frank Ahnert, John Hack, Luna Leopold, A. Shields, Thomas Maddock, Arthur Strahler, Stanley Schumm, and Ronald Shreve began to research the form of landscape elements such as rivers and hillslopes by taking systematic, direct, quantitative measurements of aspects of them and investigating the scaling of these measurements. These methods began to allow prediction of the past and future behavior of landscapes from present observations, and were later to develop into the modern trend of a highly quantitative approach to geomorphic problems. Many groundbreaking and widely cited early geomorphology studies appeared in the Bulletin of the Geological Society of America, yet received only a few citations prior to 2000 (they are examples of "sleeping beauties"), when a marked increase in quantitative geomorphology research occurred. Quantitative geomorphology can involve fluid dynamics and solid mechanics, geomorphometry, laboratory studies, field measurements, theoretical work, and full landscape evolution modeling. These approaches are used to understand weathering and the formation of soils, sediment transport, landscape change, and the interactions between climate, tectonics, erosion, and deposition. In Sweden, Filip Hjulström's doctoral thesis, "The River Fyris" (1935), contained one of the first quantitative studies of geomorphological processes ever published. His students followed in the same vein, making quantitative studies of mass transport (Anders Rapp), fluvial transport (Åke Sundborg), delta deposition (Valter Axelsson), and coastal processes (John O. Norrman). This developed into "the Uppsala School of Physical Geography". Contemporary geomorphology Today, the field of geomorphology encompasses a very wide range of different approaches and interests. Modern researchers aim to draw out quantitative "laws" that govern Earth surface processes, but equally recognize the uniqueness of each landscape and environment in which these processes operate. Particularly important realizations in contemporary geomorphology include: 1) that not all landscapes can be considered as either "stable" or "perturbed", where this perturbed state is a temporary displacement away from some ideal target form; instead, dynamic changes of the landscape are now seen as an essential part of their nature; and 2) that many geomorphic systems are best understood in terms of the stochasticity of the processes occurring in them, that is, the probability distributions of event magnitudes and return times. This in turn has indicated the importance of chaotic determinism to landscapes, and that landscape properties are best considered statistically. The same processes in the same landscapes do not always lead to the same end results. According to Karna Lidmar-Bergström, regional geography has since the 1990s no longer been accepted by mainstream scholarship as a basis for geomorphological studies. Although its importance has diminished, climatic geomorphology continues to exist as a field of study producing relevant research. More recently, concerns over global warming have led to a renewed interest in the field. Despite considerable criticism, the cycle of erosion model has remained part of the science of geomorphology. The model or theory has never been proved wrong, but neither has it been proven.
The inherent difficulties of the model have instead made geomorphological research advance along other lines. In contrast to its disputed status in geomorphology, the cycle of erosion model is a common approach used to establish denudation chronologies, and is thus an important concept in the science of historical geology. While acknowledging its shortcomings, modern geomorphologists Andrew Goudie and Karna Lidmar-Bergström have praised it for its elegance and pedagogical value, respectively. Processes Geomorphically relevant processes generally fall into (1) the production of regolith by weathering and erosion, (2) the transport of that material, and (3) its eventual deposition. Primary surface processes responsible for most topographic features include wind, waves, chemical dissolution, mass wasting, groundwater movement, surface water flow, glacial action, tectonism, and volcanism. Other, more exotic geomorphic processes might include periglacial (freeze-thaw) processes, salt-mediated action, changes to the seabed caused by marine currents, seepage of fluids through the seafloor, or extraterrestrial impact. Aeolian processes Aeolian processes pertain to the activity of the winds and, more specifically, to the winds' ability to shape the surface of the Earth. Winds may erode, transport, and deposit materials, and are effective agents in regions with sparse vegetation and a large supply of fine, unconsolidated sediments. Although water and mass flow tend to mobilize more material than wind in most environments, aeolian processes are important in arid environments such as deserts. Biological processes The interaction of living organisms with landforms, or biogeomorphologic processes, can take many different forms, and is probably of profound importance for the terrestrial geomorphic system as a whole. Biology can influence very many geomorphic processes, ranging from biogeochemical processes controlling chemical weathering, to the influence of mechanical processes like burrowing and tree throw on soil development, to even controlling global erosion rates through modulation of climate through carbon dioxide balance. Terrestrial landscapes in which the role of biology in mediating surface processes can be definitively excluded are extremely rare, but may hold important information for understanding the geomorphology of other planets, such as Mars. Fluvial processes Rivers and streams are not only conduits of water, but also of sediment. The water, as it flows over the channel bed, is able to mobilize sediment and transport it downstream, either as bed load, suspended load or dissolved load. The rate of sediment transport depends on the availability of sediment itself and on the river's discharge. Rivers are also capable of eroding into rock and forming new sediment, both from their own beds and also by coupling to the surrounding hillslopes. In this way, rivers are thought of as setting the base level for large-scale landscape evolution in nonglacial environments. Rivers are key links in the connectivity of different landscape elements. As rivers flow across the landscape, they generally increase in size, merging with other rivers. The network of rivers thus formed is a drainage system. These systems take on four general patterns: dendritic, radial, rectangular, and trellis. Dendritic patterns are the most common, occurring when the underlying stratum is stable (without faulting).
Drainage systems have four primary components: drainage basin, alluvial valley, delta plain, and receiving basin. Some geomorphic examples of fluvial landforms are alluvial fans, oxbow lakes, and fluvial terraces. Glacial processes Glaciers, while geographically restricted, are effective agents of landscape change. The gradual movement of ice down a valley causes abrasion and plucking of the underlying rock. Abrasion produces fine sediment, termed glacial flour. The debris transported by the glacier, when the glacier recedes, is termed a moraine. Glacial erosion is responsible for U-shaped valleys, as opposed to the V-shaped valleys of fluvial origin. The way glacial processes interact with other landscape elements, particularly hillslope and fluvial processes, is an important aspect of Plio-Pleistocene landscape evolution and its sedimentary record in many high mountain environments. Environments that were glaciated relatively recently but are no longer may still show elevated rates of landscape change compared to those that have never been glaciated. Nonglacial geomorphic processes which nevertheless have been conditioned by past glaciation are termed paraglacial processes. This concept contrasts with periglacial processes, which are directly driven by formation or melting of ice or frost. Hillslope processes Soil, regolith, and rock move downslope under the force of gravity via creep, slides, flows, topples, and falls. Such mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Titan and Iapetus. Ongoing hillslope processes can change the topology of the hillslope surface, which in turn can change the rates of those processes. Hillslopes that steepen up to certain critical thresholds are capable of shedding extremely large volumes of material very quickly, making hillslope processes an extremely important element of landscapes in tectonically active areas. On the Earth, biological processes such as burrowing or tree throw may play important roles in setting the rates of some hillslope processes. Igneous processes Both volcanic (eruptive) and plutonic (intrusive) igneous processes can have important impacts on geomorphology. The action of volcanoes tends to rejuvenate landscapes, covering the old land surface with lava and tephra, releasing pyroclastic material and forcing rivers through new paths. The cones built by eruptions also constitute substantial new topography, which can be acted upon by other surface processes. Plutonic rocks intruding and then solidifying at depth can cause either uplift or subsidence of the surface, depending on whether the new material is denser or less dense than the rock it displaces. Tectonic processes Tectonic effects on geomorphology can range from scales of millions of years to minutes or less. The effects of tectonics on landscape are heavily dependent on the nature of the underlying bedrock fabric, which more or less controls what kind of local morphology tectonics can shape. Earthquakes can, within minutes, submerge large areas of land, forming new wetlands. Isostatic rebound can account for significant changes over hundreds to thousands of years, and allows erosion of a mountain belt to promote further erosion as mass is removed from the chain and the belt uplifts.
Long-term plate tectonic dynamics give rise to orogenic belts, large mountain chains with typical lifetimes of many tens of millions of years, which form focal points for high rates of fluvial and hillslope processes and thus long-term sediment production. Features of deeper mantle dynamics such as plumes and delamination of the lower lithosphere have also been hypothesised to play important roles in the long-term (greater than a million years), large-scale (thousands of km) evolution of the Earth's topography (see dynamic topography). Both can promote surface uplift through isostasy, as hotter, less dense mantle rocks displace cooler, denser mantle rocks at depth in the Earth. Marine processes Marine processes are those associated with the action of waves, marine currents and seepage of fluids through the seafloor. Mass wasting and submarine landsliding are also important processes for some aspects of marine geomorphology. Because ocean basins are the ultimate sinks for a large fraction of terrestrial sediments, depositional processes and their related forms (e.g., sediment fans, deltas) are particularly important as elements of marine geomorphology. Overlap with other fields There is considerable overlap between geomorphology and other fields. Deposition of material is extremely important in sedimentology. Weathering is the chemical and physical disruption of earth materials in place on exposure to atmospheric or near-surface agents, and is typically studied by soil scientists and environmental chemists, but is an essential component of geomorphology because it is what provides the material that can be moved in the first place. Civil and environmental engineers are concerned with erosion and sediment transport, especially related to canals, slope stability (and natural hazards), water quality, coastal environmental management, transport of contaminants, and stream restoration. Glaciers can cause extensive erosion and deposition in a short period of time, making them extremely important entities in the high latitudes and meaning that they set the conditions in the headwaters of mountain-born streams; glaciology is therefore important in geomorphology.
Temperate grasslands, savannas, and shrublands
Temperate grasslands, savannas, and shrublands are terrestrial biomes defined by the World Wide Fund for Nature. The predominant vegetation in these biomes consists of grass and/or shrubs. The climate is temperate and ranges from semi-arid to semi-humid. The habitat type differs from tropical grasslands in the annual temperature regime and the types of species found there. The habitat type is known as prairie in North America, pampas in South America, veld in Southern Africa and steppe in Asia. Generally speaking, these regions are devoid of trees, except for riparian or gallery forests associated with streams and rivers. Steppes/shortgrass prairies are short grasslands that occur in semi-arid climates. Tallgrass prairies are tall grasslands in higher-rainfall areas. Heaths and pastures are, respectively, low shrublands and grasslands where forest growth is hindered by human activity but not by the climate. Tall grasslands, including the tallgrass prairie of North America, the north-western parts of the Eurasian steppe (Ukraine and the south of Russia), and the Humid Pampas of Argentina, have moderate rainfall and rich soils which make them ideally suited to agriculture, and tall grassland ecoregions include some of the most productive grain-growing regions in the world. The expanses of grass in North America and Eurasia once sustained migrations of large vertebrates such as bison (Bos bison), saiga (Saiga tatarica), Tibetan antelope (Pantholops hodgsoni) and kiang (Equus hemionus). Such phenomena now occur only in isolated pockets, primarily in the Daurian Steppe and on the Tibetan Plateau. Temperate savannahs, found in southern South America, parts of West Asia, South Africa and southern Australia, and parts of the United States, are a mixed grassy woodland ecosystem defined by trees being reasonably widely spaced so that the canopy does not close, much like subtropical and tropical savannahs, albeit lacking a year-round warm climate. In many savannas, tree densities are higher and the trees more regularly spaced than in forests. The floral communities of the Eurasian steppes and the North American Great Plains have been largely extirpated through conversion to agriculture. Nonetheless, as many as 300 different plant species may grow on less than three acres of North American tallgrass prairie, which also may support more than 3 million individual insects per acre. The Patagonian Steppe and Grasslands are notable for their distinctiveness at the generic and familial levels in various taxa.
Proxy server
In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It can improve privacy, security, and possibly performance in the process. Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or to provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request from the resource server. Types A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load balancing, authentication, decryption, and caching. Open proxies An open proxy is a forwarding proxy server that is accessible to any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies were operated on the Internet. Anonymous proxy: This server reveals its identity as a proxy server but does not disclose the originating IP address of the client. Although this type of server can be discovered easily, it can be beneficial for some users as it hides the originating IP address. Transparent proxy: This server not only identifies itself as a proxy server, but with the support of HTTP header fields such as X-Forwarded-For, the originating IP address can be retrieved as well. The main benefit of using this type of server is its ability to cache a website for faster retrieval. Reverse proxies A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Reverse proxies forward requests to one or more ordinary servers that handle the request. The response from the original server is returned as if it came directly from the proxy server, leaving the client with no knowledge of the original server. Reverse proxies are installed in the vicinity of one or more web servers. All traffic coming from the Internet and with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy", since the reverse proxy sits closer to the web server and serves only a restricted set of websites.
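As a concrete illustration of the forward-proxy arrangement described above, the following minimal Python sketch routes a client request through a proxy using only the standard library. The proxy address 10.0.0.5:3128 is a hypothetical placeholder, not an address named in this article.

import urllib.request

# Hypothetical forward proxy; both HTTP and HTTPS requests are sent to it.
proxy = urllib.request.ProxyHandler({
    "http": "http://10.0.0.5:3128",
    "https": "http://10.0.0.5:3128",
})
opener = urllib.request.build_opener(proxy)

# The client never contacts example.org directly: the proxy evaluates the
# request and performs the network transaction on the client's behalf.
with opener.open("http://example.org/") as resp:
    print(resp.status, resp.headers.get("Content-Type"))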
There are several reasons for installing reverse proxy servers: Encryption/SSL acceleration: when secure websites are created, the Secure Sockets Layer (SSL) encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts, removing the need for a separate SSL server certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections. This problem can partly be overcome by using the SubjectAltName feature of X.509 certificates or the SNI extension of TLS. Load balancing: the reverse proxy can distribute the load to several web servers, each serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations). Serve/cache static content: a reverse proxy can offload the web servers by caching static content like pictures and other static graphical content. Compression: the proxy server can optimize and compress the content to speed up the load time. Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages. Security: the proxy server is an additional layer of defense and can protect against some OS- and web-server-specific attacks. However, it does not provide any protection from attacks against the web application or service itself, which is generally considered the larger threat. Extranet publishing: a reverse proxy server facing the Internet can be used to communicate to a firewall server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of the infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet. Forward proxy vs. reverse proxy A forward proxy is a server that routes traffic between clients and another system, which is in most cases external to the network. This means it can regulate traffic according to preset policies, convert and mask client IP addresses, enforce security protocols, and block unknown traffic. A forward proxy enhances security and policy enforcement within an internal network. A reverse proxy, instead of protecting the client, is used to protect the servers. A reverse proxy accepts a request from a client, forwards that request to one of many other servers, and then returns the results from the server that processed the request to the client. Effectively, a reverse proxy acts as a gateway between clients, users and application servers, handling all the traffic routing while also protecting the identity of the server that physically processes the request.
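The load-balancing task in the list above can be made concrete with a short sketch. This is a minimal illustration in Python's standard library, assuming two hypothetical backend application servers on ports 8081 and 8082; production deployments would instead use dedicated software such as the reverse proxies named later in this article.

import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical pool of internal application servers, cycled round-robin.
backends = itertools.cycle(["http://127.0.0.1:8081", "http://127.0.0.1:8082"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the client's path to the next backend in the pool.
        with urllib.request.urlopen(next(backends) + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type", resp.headers.get("Content-Type", "text/html"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)  # the client never learns which backend replied

# Clients see only this front end; the backends stay on the private network.
HTTPServer(("", 8080), ReverseProxy).serve_forever()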
Uses Monitoring and filtering Content-control software A content-filtering web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to acceptable use policy. Content filtering proxy servers will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. It may also communicate to daemon-based or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network. Many workplaces, schools, and colleges restrict the web sites and online services that are accessible and available in their buildings. Governments also censor undesirable content. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture. Websites commonly used by students to circumvent filters and access blocked content often include a proxy, from which the user can then access the websites that the filter is trying to block. Requests may be filtered by several methods, such as URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Blacklists are often provided and maintained by web-filtering companies, often grouped into categories (pornography, gambling, shopping, social networks, etc.). The proxy then fetches the content, assuming the requested URL is acceptable. At this point, a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected, an HTTP fetch error may be returned to the requester. Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type. Manual labor is used to correct the resultant database based on complaints or known flaws in the content-matching algorithms. Some proxies scan outbound content, e.g., for data loss prevention, or scan content for malicious software.
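A sketch of the filtering decision just described, assuming a simple in-memory blacklist and a couple of hypothetical URL patterns; real deployments rely on vendor-maintained category databases and protocols such as ICAP.

import re
from urllib.parse import urlparse

DOMAIN_BLACKLIST = {"gambling.example", "socialnet.example"}   # hypothetical
URL_PATTERNS = [re.compile(r"/ads/"), re.compile(r"\.exe$")]   # hypothetical

def is_allowed(url):
    host = urlparse(url).hostname or ""
    if host in DOMAIN_BLACKLIST:                   # domain/DNS blacklist check
        return False
    if any(p.search(url) for p in URL_PATTERNS):   # URL regex filtering
        return False
    return True

# The proxy fetches the content only if the requested URL is acceptable;
# otherwise it returns an HTTP fetch error to the requester.
print(is_allowed("http://gambling.example/poker"))   # False
print(is_allowed("http://news.example/today"))       # True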
Filtering of encrypted data Web filtering proxies are not able to peer inside secure sockets HTTP transactions, assuming the chain-of-trust of SSL/TLS (Transport Layer Security) has not been tampered with. The SSL/TLS chain-of-trust relies on trusted root certificate authorities. In a workplace setting where the client is managed by the organization, devices may be configured to trust a root certificate whose private key is known to the proxy. In such situations, proxy analysis of the contents of an SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns. Bypassing filters and censorship If the destination server filters content based on the origin of the request, the use of a proxy can circumvent this filter. For example, a server using IP-based geolocation to restrict its service to a certain country can be accessed using a proxy located in that country. Web proxies are the most common means of bypassing government censorship, although no more than 3% of Internet users use any circumvention tools. Some proxy service providers allow businesses access to their proxy network for rerouting traffic for business intelligence purposes. In some cases, users can circumvent proxies that filter using blacklists by using services designed to proxy information from a non-blacklisted location. Logging and eavesdropping Proxies can be installed in order to eavesdrop upon the data flow between client machines and the web. All content sent or accessed – including passwords submitted and cookies used – can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL. By chaining proxies which do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind. In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain web sites, as numerous forums and web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain privacy. Improving performance A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs while increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Web proxies are commonly used to cache web pages from a web server. Poorly implemented caching proxies can cause problems, such as an inability to use user authentication. A proxy that is designed to mitigate specific link-related issues or degradation is a Performance Enhancing Proxy (PEP). These are typically used to improve TCP performance in the presence of high round-trip times or high packet loss (such as wireless or mobile phone networks), or on highly asymmetric links featuring very different upload and download rates. PEPs can make more efficient use of the network, for example, by merging TCP ACKs (acknowledgements) or compressing data sent at the application layer.
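The caching behaviour described above reduces to a simple idea: serve a stored copy when one exists, otherwise fetch upstream and remember the result. A minimal Python sketch follows; real caches also honour Cache-Control headers, expiry, and storage limits, all omitted here.

import urllib.request

cache = {}  # maps URL -> response body

def fetch(url):
    if url in cache:
        return cache[url]   # cache hit: no upstream bandwidth used
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    cache[url] = body       # cache miss: store the copy for later requests
    return body

# The second call is served locally, which is how caching proxies cut
# upstream usage and improve response times for repeat requests.
fetch("http://example.org/")
fetch("http://example.org/")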
Translation A translation proxy is a proxy server that is used to localize a website experience for different markets. Traffic from the global audience is routed through the translation proxy to the source website. As visitors browse the proxied site, requests go back to the source site where pages are rendered. The original-language content in the response is replaced by the translated content as it passes back through the proxy. The translations used in a translation proxy can be machine translation, human translation, or a combination of the two. Different translation proxy implementations have different capabilities. Some allow further customization of the source site for local audiences, such as excluding the source content or substituting it with original local content. Accessing services anonymously An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. Anonymizers may be differentiated into several varieties. The destination server (the server that ultimately satisfies the web request) receives requests from the anonymizing proxy server and thus does not receive information about the end user's address. The requests are not anonymous to the anonymizing proxy server, however, so a degree of trust is present between the proxy server and the user. Many proxy servers are funded through a continued advertising link to the user. Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage by individuals. Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high-anonymity proxies, make it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets that include a cookie from a previous visit that did not use the high-anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem. QA geotargeted advertising Advertisers use proxy servers for validating, checking and quality assurance of geotargeted ads. A geotargeting ad server checks the request source IP address and uses a geo-IP database to determine the geographic source of requests. Using a proxy server that is physically located inside a specific country or a city gives advertisers the ability to test geotargeted ads. Security A proxy can keep the internal network structure of a company secret by using network address translation, which can help the security of the internal network. This makes requests from machines and users on the local network anonymous. Proxies can also be combined with firewalls. An incorrectly configured proxy can provide access to a network otherwise isolated from the Internet. Cross-domain resources Proxies allow web sites to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains. Proxies also allow the browser to make web requests to externally hosted content on behalf of a website when cross-domain restrictions (in place to protect websites from the likes of data theft) prohibit the browser from directly accessing the outside domains. Malicious usages Secondary market brokers Secondary market brokers use web proxy servers to circumvent restrictions on online purchases of limited products such as limited sneakers or tickets. Implementations of proxies Web proxy servers Web proxies forward HTTP requests. The request from the client is the same as a regular HTTP request except that the full URL is passed, instead of just the path:

GET https://en.wikipedia.org/wiki/Proxy_server HTTP/1.1
Proxy-Authorization: Basic encoded-credentials
Accept: text/html

This request is sent to the proxy server, which makes the request specified and returns the response:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8

Some web proxies allow the HTTP CONNECT method to set up forwarding of arbitrary data through the connection; a common policy is to only forward port 443 to allow HTTPS traffic. Examples of web proxy servers include Apache (with mod_proxy or Traffic Server), HAProxy, IIS configured as proxy (e.g., with Application Request Routing), Nginx, Privoxy, Squid, Varnish (reverse proxy only), WinGate, Ziproxy, Tinyproxy, RabbIT and Polipo. For clients, the problem of complex or multiple proxy servers is solved by a client-server Proxy auto-config protocol (PAC file). SOCKS proxy SOCKS also forwards arbitrary data after a connection phase, and is similar to HTTP CONNECT in web proxies.
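The request format shown above can be exercised directly. The following Python sketch opens a TCP connection to a web proxy and sends a request line containing the absolute URL rather than just the path; the proxy host proxy.example:8080 is a hypothetical placeholder.

import socket

request = (
    "GET http://example.org/ HTTP/1.1\r\n"   # full URL, not just "/"
    "Host: example.org\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Connect to the proxy, not to example.org itself.
with socket.create_connection(("proxy.example", 8080)) as s:
    s.sendall(request.encode("ascii"))
    print(s.recv(4096).decode("ascii", errors="replace"))  # start of the proxy's response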
Transparent proxy Also known as an intercepting proxy, inline proxy, or forced proxy, a transparent proxy intercepts normal application-layer communication without requiring any special client configuration. Clients need not be aware of the existence of the proxy. A transparent proxy is normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router. RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers standard definitions: "A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification". "A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering". TCP Intercept is a traffic filtering security feature that protects TCP servers from TCP SYN flood attacks, which are a type of denial-of-service attack. TCP Intercept is available for IP traffic only. In 2009, a security flaw in the way that transparent proxies operate was published by Robert Auger, and the Computer Emergency Response Team issued an advisory listing dozens of affected transparent and intercepting proxy servers. Purpose Intercepting proxies are commonly used in businesses to enforce acceptable use policies and to ease administrative overhead, since no client browser configuration is required. This second reason, however, is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection. Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for. Issues The diversion or interception of a TCP connection creates several issues. First, the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g., where the gateway and proxy reside on different hosts). There is a class of cross-site attacks that depend on certain behaviors of intercepting proxies that do not check or have access to information about the original (intercepted) destination. This problem may be resolved by using an integrated packet-level and application-level appliance or software which is then able to communicate this information between the packet handler and the proxy. Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, as the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, and the user then connects to a site that also requires authentication. Finally, intercepting connections can cause problems for HTTP caches, as some requests and responses become uncacheable by a shared cache. Implementation methods In integrated firewall/proxy servers where the router/firewall is on the same host as the proxy, communicating original destination information can be done by any method, for example Microsoft TMG or WinGate. Interception can also be performed using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and what traffic are sent to it via transparent redirection from the router.
This redirection can occur in one of two ways: GRE tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2). Once traffic reaches the proxy machine itself, interception is commonly performed with NAT (Network Address Translation). Such setups are invisible to the client browser, but leave the proxy visible to the web server and other devices on the Internet side of the proxy. Recent Linux and some BSD releases provide TPROXY (transparent proxy), which performs IP-level (OSI Layer 3) transparent interception and spoofing of outbound traffic, hiding the proxy IP address from other network devices. Detection Several methods may be used to detect the presence of an intercepting proxy server: By comparing the client's external IP address to the address seen by an external web server, or sometimes by examining the HTTP headers received by a server. A number of sites have been created to address this issue, by reporting the user's IP address as seen by the site back to the user on a web page. Google also returns the IP address as seen by the page if the user searches for "IP". By comparing the results of online IP checkers when accessed using HTTPS vs. HTTP, as most intercepting proxies do not intercept SSL. If there is suspicion of SSL being intercepted, one can examine the certificate associated with any secure web site; the root certificate should indicate whether it was issued for the purpose of intercepting. By comparing the sequence of network hops reported by a tool such as traceroute for a proxied protocol such as HTTP (port 80) with that for a non-proxied protocol such as SMTP (port 25). By attempting to make a connection to an IP address at which there is known to be no server. The proxy will accept the connection and then attempt to proxy it on. When the proxy finds no server to accept the connection, it may return an error message or simply close the connection to the client. This difference in behavior is simple to detect. For example, most web browsers will generate a browser-created error page in the case where they cannot connect to an HTTP server, but will return a different error in the case where the connection is accepted and then closed. By serving the end-user specially programmed Adobe Flash SWF applications or Sun Java applets that send HTTP calls back to their server. CGI proxy A CGI web proxy accepts target URLs using a Web form in the user's browser window, processes the request, and returns the results to the user's browser. Consequently, it can be used on a device or network that does not allow "true" proxy settings to be changed. The first recorded CGI proxy, named "rover" at the time but renamed in 1998 to "CGIProxy", was developed by American computer scientist James Marshall in early 1996 for an article in "Unix Review" by Rich Morin. The majority of CGI proxies are powered by one of CGIProxy (written in the Perl language), Glype (written in the PHP language), or PHProxy (written in the PHP language). As of April 2016, CGIProxy has received about two million downloads, Glype has received almost a million downloads, whilst PHProxy still receives hundreds of downloads per week. Despite waning in popularity due to VPNs and other privacy methods, there are still a few hundred CGI proxies online. Some CGI proxies were set up for purposes such as making websites more accessible to disabled people, but have since been shut down due to excessive traffic, usually caused by a third party advertising the service as a means to bypass local filtering.
Since many of these users do not care about the collateral damage they are causing, it became necessary for organizations to hide their proxies, disclosing the URLs only to those who take the trouble to contact the organization and demonstrate a genuine need. Suffix proxy A suffix proxy allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.SuffixProxy.com"). Suffix proxy servers are easier to use than regular proxy servers, but they do not offer high levels of anonymity, and their primary use is for bypassing web filters. However, this is rarely used due to more advanced web filters. Tor onion proxy software Tor is a system intended to provide online anonymity. Tor client software routes Internet traffic through a worldwide volunteer network of servers to conceal a user's computer location or usage from someone conducting network surveillance or traffic analysis. Using Tor makes tracing Internet activity more difficult, and is intended to protect users' personal freedom and their online privacy. "Onion routing" refers to the layered nature of the encryption service: the original data are encrypted and re-encrypted multiple times, then sent through successive Tor relays, each one of which decrypts a "layer" of encryption before passing the data on to the next relay and ultimately the destination. This reduces the possibility of the original data being unscrambled or understood in transit. I2P anonymous proxy The I2P anonymous network ("I2P") is a proxy network aiming at online anonymity. It implements garlic routing, which is an enhancement of Tor's onion routing. I2P is fully distributed and works by encrypting all communications in various layers and relaying them through a network of routers run by volunteers in various locations. By keeping the source of the information hidden, I2P offers censorship resistance. The goals of I2P are to protect users' personal freedom, privacy, and ability to conduct confidential business. Each user of I2P runs an I2P router on their computer (node). The I2P router takes care of finding other peers and building anonymizing tunnels through them. I2P provides proxies for all protocols (HTTP, IRC, SOCKS, ...). Comparison to network address translators The proxy concept refers to a layer-7 application in the OSI reference model. Network address translation (NAT) is similar to a proxy but operates in layer 3. In the client configuration of layer-3 NAT, configuring the gateway is sufficient. However, for the client configuration of a layer-7 proxy, the destination of the packets that the client generates must always be the proxy server (layer 7); the proxy server then reads each packet and finds out the true destination. Because NAT operates at layer 3, it is less resource-intensive than the layer-7 proxy, but also less flexible. In comparing these two technologies, one may encounter the term "transparent firewall". A transparent firewall means that the proxy uses the layer-7 proxy advantages without the knowledge of the client. The client presumes that the gateway is a NAT in layer 3, and has no idea about the inside of the packet, but through this method the layer-3 packets are sent to the layer-7 proxy for investigation. DNS proxy A DNS proxy server takes DNS queries from a (usually local) network and forwards them to an Internet Domain Name Server. It may also cache DNS records.
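A minimal sketch of that idea in Python: accept DNS queries over UDP on the local network and relay them, byte for byte, to an upstream resolver. The upstream 8.8.8.8 is an example public resolver; caching of DNS records and error handling are omitted for brevity.

import socket

UPSTREAM = ("8.8.8.8", 53)  # example upstream resolver

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5353))  # listen for local clients (port 53 needs privileges)

while True:
    query, client = sock.recvfrom(512)   # classic DNS message size limit
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
        upstream.settimeout(2.0)
        upstream.sendto(query, UPSTREAM)  # forward the raw query unchanged
        reply, _ = upstream.recvfrom(512)
    sock.sendto(reply, client)            # relay the answer back to the client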
Proxifiers Some client programs "SOCKS-ify" requests, which allows adaptation of any networked software to connect to external networks via certain types of proxy servers (mostly SOCKS); a minimal sketch of this technique appears at the end of this section. Residential proxy (RESIP) A residential proxy is an intermediary that uses a real IP address provided by an Internet Service Provider (ISP) with physical devices such as the mobiles and computers of end users. Instead of connecting directly to a server, residential proxy users connect to the target through residential IP addresses. The target then identifies them as organic internet users. It does not let any tracking tool identify the reallocation of the user. Any residential proxy can send any number of concurrent requests, and IP addresses are directly related to a specific region. Unlike regular residential proxies, which hide the user's real IP address behind another IP address, rotating residential proxies, also known as backconnect proxies, conceal the user's real IP address behind a pool of proxies. These proxies switch between themselves at every session or at regular intervals. Despite the providers' assertion that the proxy hosts are voluntarily participating, numerous proxies are operated on potentially compromised hosts, including Internet of things devices. Through the process of cross-referencing the hosts, researchers have identified and analyzed logs that have been classified as potentially unwanted programs and exposed a range of unauthorized activities conducted by RESIP hosts. These activities encompassed illegal promotion, fast fluxing, phishing, hosting malware, and more.
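To illustrate the "SOCKS-ify" idea from the Proxifiers paragraph above: one common approach in Python, assuming the third-party PySocks package, is to replace the standard socket class so that otherwise proxy-unaware code tunnels through a SOCKS server. The proxy address 127.0.0.1:1080 is hypothetical.

import socket

import socks  # third-party PySocks package (pip install PySocks)

# Route every new socket through a hypothetical local SOCKS5 server.
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 1080)
socket.socket = socks.socksocket

# Code written with no knowledge of proxies now goes via the SOCKS server.
import urllib.request
print(urllib.request.urlopen("http://example.org/").status)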
Pan (genus)
The genus Pan consists of two extant species: the chimpanzee and the bonobo. Taxonomically, these two ape species are collectively termed panins. The two species were formerly collectively called "chimpanzees" or "chimps"; if bonobos were recognized as a separate group at all, they were referred to as "pygmy" or "gracile chimpanzees". Together with humans, gorillas, and orangutans, they are part of the family Hominidae (the great apes, or hominids). Native to sub-Saharan Africa, chimpanzees and bonobos are currently both found in the Congo jungle, while only the chimpanzee is also found further north in West Africa. Both species are listed as endangered on the IUCN Red List of Threatened Species, and in 2017 the Convention on Migratory Species selected the chimpanzee for special protection. Chimpanzee and bonobo: comparison The chimpanzee (P. troglodytes), which lives north of the Congo River, and the bonobo (P. paniscus), which lives south of it, were once considered to be the same species, but since 1928 they have been recognized as distinct. In addition, P. troglodytes is divided into four subspecies, while P. paniscus is undivided. Based on genome sequencing, the two extant Pan species diverged around one million years ago. The most obvious differences are that chimpanzees are somewhat larger, more aggressive and male-dominated, while bonobos are more gracile, peaceful, and female-dominated. Their hair is typically black or brown. Males and females differ in size and appearance. Both chimpanzees and bonobos are among the most social great apes, with social bonds occurring throughout large communities. Fruit is the most important component of a chimpanzee's diet, but they will also eat vegetation, bark, honey, insects and even other chimpanzees or monkeys. They can live over 30 years in both the wild and captivity. Chimpanzees and bonobos are equally humanity's closest living relatives. They use a variety of sophisticated tools and construct elaborate sleeping nests each night from branches and foliage. Their learning abilities have been extensively studied. There may even be distinctive cultures within populations. Field studies of Pan troglodytes were pioneered by primatologist Jane Goodall. Names The genus name Pan was first introduced by Lorenz Oken in 1816. While Oken did not give a rationale for his choice, it is generally thought to have been inspired by the name of the Greek god Pan. An alternative, Theranthropus, was suggested by Brookes in 1828, and Chimpansee by Voigt in 1831. Troglodytes was not available, as it had been given in 1809 as the name of a genus of wren, meaning "cave-dweller" and reflecting the tendency of some wrens to forage in dark crevices. The International Commission on Zoological Nomenclature adopted Pan as the only official name of the genus in 1895, though the "cave-dweller" connection could still be retained at the species level (Pan troglodytes – the common chimpanzee) for one of the two species of Pan. The first use of the name "chimpanze" is recorded in The London Magazine in 1738, glossed as meaning "mockman" in a language of "the Angolans" (apparently from a Bantu language; reportedly modern Vili (Civili), a Zone H Bantu language, has the comparable ci-mpenzi). The spelling chimpanzee is found in a 1758 supplement to Chamber's Cyclopædia. The colloquialism "chimp" was most likely coined some time in the late 1870s. The chimpanzee was named Simia troglodytes by Johann Friedrich Blumenbach in 1776.
The species name troglodytes is a reference to the Troglodytae (literally "cave-goers"), an African people described by Greco-Roman geographers. Blumenbach first used it in his De generis humani varietate nativa liber ("On the natural varieties of the human genus") in 1776; Linnaeus in 1758 had already used Homo troglodytes for a hypothetical mixture of human and orangutan. The bonobo, in the past also referred to as the "pygmy chimpanzee", was given the species name paniscus by Ernst Schwarz (1929), a Greek-style diminutive of the theonym Pan used by Cicero. Distribution and habitat There are two species of the genus Pan, both previously called simply chimpanzees: Chimpanzees, or Pan troglodytes, are found almost exclusively in the heavily forested regions of Central and West Africa. With at least four commonly accepted subspecies, their population and distribution are much more extensive than those of the bonobo, which in the past was also called the "pygmy chimpanzee". Bonobos, Pan paniscus, are found only in Central Africa, south of the Congo River and north of the Kasai River (a tributary of the Congo), in the humid forest of the Democratic Republic of the Congo. Evolutionary history Evolutionary relationship The genus Pan is part of the subfamily Homininae, to which humans also belong. The lineages of chimpanzees and humans separated in a process of speciation between roughly five and twelve million years ago, making them humanity's closest living relatives. Research by Mary-Claire King in 1973 found 99% identical DNA between human beings and chimpanzees. For some time, research modified that finding to about 94% commonality, with some of the difference occurring in noncoding DNA, but more recent knowledge puts the difference in DNA between humans, chimpanzees and bonobos at about 1%–1.2% again. Fossils The chimpanzee fossil record was long absent, which was thought to be due to preservation bias in relation to their environment. However, in 2005, chimpanzee fossils were discovered and described by Sally McBrearty and colleagues. Existing chimpanzee populations in West and Central Africa are separate from the major human fossil sites in East Africa; however, chimpanzee fossils have been reported from Kenya, indicating that both humans and members of the Pan clade were present in the East African Rift Valley during the Middle Pleistocene. Anatomy and physiology The chimpanzee's arms are longer than its legs. The male common chimp stands up to high. Male adult wild chimps weigh between 40 and 60 kg, with females weighing between 27 and 50 kg. When extended, the common chimp's long arms span one and a half times the body's height. The bonobo is slightly shorter and thinner than the common chimpanzee, but has longer limbs. In trees, both species climb with their long, powerful arms; on the ground, chimpanzees usually knuckle-walk, or walk on all fours, clenching their fists and supporting themselves on the knuckles. Chimpanzees are better suited for walking than orangutans, because the chimp's feet have broader soles and shorter toes. The bonobo has proportionately longer upper limbs and walks upright more often than does the common chimpanzee. Both species can walk upright on two legs when carrying objects with their hands and arms. The chimpanzee is tailless; its coat is dark; and its face, fingers, palms of the hands, and soles of the feet are hairless.
The exposed skin of the face, hands, and feet varies from pink to very dark in both species, but is generally lighter in younger individuals and darkens with maturity. A University of Chicago Medical Center study has found significant genetic differences between chimpanzee populations. A bony shelf over the eyes gives the forehead a receding appearance, and the nose is flat. Although the jaws protrude, a chimp's lips are thrust out only when it pouts. The brain of a chimpanzee has been measured at a general range of 282–500 cm3. The human brain, in contrast, is about three times larger, with a reported average volume of about 1330 cm3. Chimpanzees reach puberty between the ages of eight and ten years. A chimpanzee's testicles are unusually large for its body size, with a combined weight of about compared to a gorilla's or a human's . This relatively great size is generally attributed to sperm competition due to the polygynandrous nature of chimpanzee mating behaviour. Unlike gorillas, chimpanzees and bonobos have long and filiform penises without a glans. Longevity In the wild, chimpanzees live into their 30s, while some captive chimps have reached an age of 70 years and older. Muscle strength Chimpanzees are known for possessing a great amount of muscle strength, especially in their arms. However, the strength reported in media and popular science is greatly exaggerated relative to humans, with claims of four to eight times the muscle strength of a human. These numbers stem from two studies in 1923 and 1926 by a biologist named John Bauman. These studies were refuted in 1943, when an adult male chimpanzee was found to pull about the same weight as an adult man. Corrected for their smaller body sizes, chimpanzees were found to be stronger than humans, but not anywhere near four to eight times stronger. In the 1960s these tests were repeated, and chimpanzees were found to have twice the strength of a human when it came to pulling weights. The higher strength seen in chimpanzees compared to humans is thought to come from longer skeletal muscle fibers that can generate twice the work output over a wider range of motion compared to the skeletal muscle fibers of humans. Behaviour It is suspected that human observers can influence chimpanzee behaviour. For this reason, researchers sometimes prefer camera traps and remote microphones to human observers. Chimpanzee vs. bonobo Anatomical differences between the common chimpanzee and the bonobo are slight. Both are omnivores adapted to a mainly frugivorous diet. Yet sexual and social behaviours are markedly different. The common chimpanzee has a troop culture based on beta males led by an alpha male, and highly complex social relationships. The bonobo, on the other hand, displays egalitarian, nonviolent, matriarchal, sexually receptive behaviour. Bonobos frequently have sex, sometimes to help prevent and resolve conflicts. Different groups of chimpanzees also have different cultural behaviour, with preferences for types of tools. The common chimpanzee tends to display greater aggression than does the bonobo. The average captive chimpanzee sleeps 9 hours and 42 minutes per day. Contrary to what the scientific name (Pan troglodytes) may suggest, chimpanzees do not typically spend their time in caves, although there have been reports of some seeking refuge in caves because of the heat during daytime. Chimpanzees Social structure Chimpanzees live in large multi-male and multi-female social groups, which are called communities.
Within a community, the position of an individual and the influence the individual has on others dictates a definite social hierarchy. Chimpanzees live in a leaner hierarchy, wherein more than one individual may be dominant enough to dominate other members of lower rank. Typically, a dominant male is referred to as the alpha male. The alpha male is the highest-ranking male that controls the group and maintains order during disputes. In chimpanzee society, the "dominant male" sometimes is not the largest or strongest male, but rather the most manipulative and political male, who can influence the goings-on within a group. Male chimpanzees typically attain dominance by cultivating allies who will support them during future ambitions for power. The alpha male regularly displays by puffing up his normally slim coat to exaggerate his apparent size and by charging, to seem as threatening and as powerful as possible; this behaviour serves to intimidate other members and thereby maintain power and authority, and it may be fundamental to the alpha male's holding on to his status. Lower-ranking chimpanzees will show respect by making submissive gestures in body language or reaching out their hands while grunting. Female chimpanzees will show deference to the alpha male by presenting their hindquarters. Female chimpanzees also have a hierarchy, which is influenced by the position of a female individual within a group. In some chimpanzee communities, the young females may inherit high status from a high-ranking mother. Dominant females will also ally to dominate lower-ranking females: whereas males mainly seek dominant status for its associated mating privileges and sometimes violent domination of subordinates, females seek dominant status to acquire resources such as food, as high-ranking females often have first access to them. Both sexes acquire dominant status to improve social standing within a group. Community female acceptance is necessary for alpha male status; females must ensure that their group visits places that supply them with enough food. A group of dominant females will sometimes oust an alpha male who is not to their preference and back another male, in whom they see potential for leading the group as a successful alpha male. The mating system within each community is polygynandrous, with each male and female possibly having multiple sexual partners. Intelligence Chimpanzees make tools and use them to acquire food and for social displays; they have sophisticated hunting strategies requiring cooperation, influence, and rank; they are status conscious, manipulative, and capable of deception; they can learn to use symbols and understand aspects of human language, including some relational syntax and concepts of number and numerical sequence; and they are capable of spontaneous planning for a future state or event. Tool use In October 1960, Jane Goodall observed the use of tools among chimpanzees. Recent research indicates that chimpanzees' use of stone tools dates back at least 4,300 years (about 2,300 BC). One example of chimpanzee tool usage includes the use of a large stick to dig into termite mounds, followed by the use of a small stick altered into a tool to "fish" the termites out of the mound. Chimpanzees are also known to use smaller stones as hammers and a larger one as an anvil in order to break open nuts. In the 1970s, reports of chimpanzees using rocks or sticks as weapons were anecdotal and controversial.
However, a 2007 study claimed to reveal the use of spears, which common chimpanzees in Senegal sharpen with their teeth and use to stab and pry Senegal bushbabies out of small holes in trees. Prior to the discovery of tool use by chimpanzees, humans were believed to be the only species to make and use tools; however, several other tool-using species are now known. Nest-building Nest-building, sometimes considered to be a form of tool use, is seen when chimpanzees construct arboreal night nests by lacing together branches from one or more trees to build a safe, comfortable place to sleep; infants learn this process by watching their mothers. The nest provides a sort of mattress, which is supported by strong branches for a foundation, and then lined with softer leaves and twigs; the minimum diameter is and may be located at a height of . Both day and night nests are built, and may be located in groups. A study in 2014 found that the muhimbi tree is favoured for nest building by chimpanzees in Uganda due to its physical properties, such as bending strength, inter-node distance, and leaf surface area. Altruism and emotivity Studies have shown that chimpanzees engage in apparently altruistic behaviour within groups. Some researchers have suggested that chimpanzees are indifferent to the welfare of unrelated group members, but a more recent study of wild chimpanzees found that both male and female adults would adopt orphaned young of their group. Also, different groups sometimes share food, form coalitions, and cooperate in hunting and border patrolling. Sometimes, chimpanzees have adopted young that come from unrelated groups; in some rare cases, even male chimpanzees have been shown to take care of abandoned infant chimpanzees from an unrelated group, though in most cases they would kill the infant. According to a literature summary by James W. Harrod, evidence for chimpanzee emotivity includes display of mourning; "incipient romantic love"; "rain dances"; appreciation of natural beauty (such as a sunset over a lake); curiosity and respect towards other wildlife (such as the python, which is neither a threat nor a food source to chimpanzees); altruism toward other species (such as feeding turtles); and animism, or "pretend play", when chimpanzees cradle and groom rocks or sticks. Communication between chimpanzees Chimpanzees communicate in a manner similar to human nonverbal communication, using vocalizations, hand gestures, and facial expressions. There is some evidence that they can recreate human speech. Research into the chimpanzee brain has revealed that when chimpanzees communicate, an area of the brain is activated that is in the same position as Broca's area, the language center in human brains. Aggression Adult common chimpanzees, particularly males, can be very aggressive. They are highly territorial and are known to kill others of their species. Hunting Chimpanzees also engage in targeted hunting of smaller primates, such as the red colobus and bush babies. Males who acquire the meat may share it with females in exchange for sex or grooming. Puzzle solving In February 2013, a study found that chimpanzees solve puzzles for entertainment. Chimpanzees in human history Chimpanzees, as well as other apes, were reportedly known to ancient writers, but mainly through myths and legends on the edge of European and Near Eastern societal consciousness. Apes are mentioned variously by Aristotle.
The English word ape translates Hebrew קוף (qof) in English translations of the Bible (1 Kings 10:22), but the word may refer to a monkey rather than an ape proper. The diary of Portuguese explorer Duarte Pacheco Pereira (1506), preserved in the Portuguese National Archive (Torre do Tombo), is probably the first written document to acknowledge that chimpanzees built their own rudimentary tools. The first of these early transcontinental chimpanzees came from Angola and was presented as a gift to Frederick Henry, Prince of Orange, in 1640; a few more followed over the next several years. Scientists described these first chimpanzees as "pygmies", and noted the animals' distinct similarities to humans. Over the next two decades, a number of the creatures were imported into Europe, mainly acquired by various zoological gardens as entertainment for visitors. Charles Darwin's theory of natural selection (published in 1859) spurred scientific interest in chimpanzees, as in much of life science, leading eventually to numerous studies of the animals in the wild and in captivity. The observers of chimpanzees at the time were mainly interested in behaviour as it related to that of humans. This was less strictly and disinterestedly scientific than it might sound, with much attention being focused on whether or not the animals had traits that could be considered "good"; the intelligence of chimpanzees was often significantly exaggerated, as immortalized in Hugo Rheinhold's Affe mit Schädel. By the end of the 19th century, chimpanzees remained very much a mystery to humans, with very little factual scientific information available. In the 20th century, a new age of scientific research into chimpanzee behaviour began. Before 1960, almost nothing was known about chimpanzee behaviour in their natural habitats. In July 1960, Jane Goodall set out to Tanzania's Gombe forest to live among the chimpanzees, where she primarily studied the members of the Kasakela chimpanzee community. Her discovery that chimpanzees made and used tools was groundbreaking, as humans were previously believed to be the only species to do so. The most progressive early studies on chimpanzees were spearheaded primarily by Wolfgang Köhler and Robert Yerkes, both of whom were renowned psychologists. Both men and their colleagues established laboratory studies of chimpanzees focused specifically on learning about the intellectual abilities of chimpanzees, particularly problem-solving. This typically involved basic, practical tests on laboratory chimpanzees, which required a fairly high intellectual capacity (such as how to solve the problem of acquiring an out-of-reach banana). Notably, Yerkes also made extensive observations of chimpanzees in the wild, which added tremendously to the scientific understanding of chimpanzees and their behaviour. Yerkes studied chimpanzees until World War II, while Köhler concluded five years of study and published his famous Mentality of Apes in 1925 (which is coincidentally when Yerkes began his analyses), eventually concluding that "chimpanzees manifest intelligent behaviour of the general kind familiar in human beings ... a type of behaviour which counts as specifically human" (1925). The August 2008 issue of the American Journal of Primatology reported results of a year-long study of chimpanzees in Tanzania's Mahale Mountains National Park, which produced evidence of chimpanzees becoming sick from viral infectious diseases they had likely contracted from humans.
Molecular, microscopic, and epidemiological investigations demonstrated that the chimpanzees living at Mahale Mountains National Park have been suffering from a respiratory disease that is likely caused by a variant of a human paramyxovirus. Conservation The US Fish and Wildlife Service finalized a rule on June 12, 2015, creating very strict regulations that practically bar any activity with chimpanzees other than for scientific, preservation-oriented purposes.
Biology and health sciences
Primates
null
78897
https://en.wikipedia.org/wiki/Boraginaceae
Boraginaceae
Boraginaceae, the borage or forget-me-not family, includes about 2,000 species of shrubs, trees, and herbs in 146 to 154 genera with a worldwide distribution. The APG IV system from 2016 classifies the Boraginaceae as the single family of the order Boraginales within the asterids. Under the older Cronquist system, it was included in the Lamiales, but it is clearly no more similar to the other families in this order than it is to families in several other asterid orders. A revision of the Boraginales, also from 2016, split the Boraginaceae into 11 distinct families: Boraginaceae sensu stricto, Codonaceae, Coldeniaceae, Cordiaceae, Ehretiaceae, Heliotropiaceae, Hoplestigmataceae, Hydrophyllaceae, Lennoaceae, Namaceae, and Wellstediaceae. These plants have alternately arranged leaves, or a combination of alternate and opposite leaves. The leaf blades usually have a narrow shape; many are linear or lance-shaped. They are smooth-edged or toothed, and some have petioles. Most species have bisexual flowers, but some taxa are dioecious. Most pollination is by hymenopterans, such as bees. Most species have inflorescences that have a coiling shape, at least when new, called scorpioid cymes. The flower has a usually five-lobed calyx. The corolla varies in shape from rotate to bell-shaped to tubular, but it generally has five lobes. It can be green, white, yellow, orange, pink, purple, or blue. There are five stamens and one style with one or two stigmas. The fruit is a drupe, sometimes fleshy. Most members of this family have hairy leaves. The coarse character of the hairs is due to cystoliths of silicon dioxide and calcium carbonate. These hairs can induce an adverse skin reaction, including itching and rash, in some individuals, particularly among people who handle the plants regularly, such as gardeners. In some species, anthocyanins cause the flowers to change color from red to blue with age. This may be a signal to pollinators that a flower is old and depleted of pollen and nectar. Well-known members of the family include: alkanet (Alkanna tinctoria) borage (Borago officinalis) comfrey (Symphytum spp.) fiddleneck (Amsinckia spp.) forget-me-not (Myosotis spp.) geigertree (Cordia sebestena) green alkanet (Pentaglottis sempervirens) heliotrope (Heliotropium spp.) hound's tongue (Cynoglossum spp.) lungwort (Pulmonaria spp.)
oysterplant (Mertensia maritima) purple viper's bugloss/Salvation Jane (Echium plantagineum) Siberian bugloss (Brunnera macrophylla) viper's bugloss (Echium vulgare) Genera According to Kew: Actinocarya Adelinia Adelocaryum Aegonychon Afrotysonia Ailuroglossum Alkanna Amblynotus Amphibologyne Amsinckia Amsinckiopsis Anchusa Ancistrocarya Andersonglossum Anoplocaryum Antiotrema Antiphytum Arnebia Asperugo Borago Bothriospermum Bourreria Brachybotrys Brandella Brunnera Buglossoides Caccinia Cerinthe Chionocharis Codon Coldenia Cordia Craniospermum Crucicaryum Cryptantha Cynoglossopsis Cynoglossum Cynoglottis Cystostemon Dasynotus Decalepidanthus Draperia Echiochilon Echium Ehretia Ellisia Embadium Emmenanthe Eremocarya Eriodictyon Eritrichium Eucrypta Euploca Gastrocotyle Glandora Greeneocharis Gyrocaryum Hackelia Halacsya Halgania Harpagonella Heliocarya Heliotropium Hesperochiron Hoplestigma Hormuzakia Huynhia Hydrophyllum Iberodes Ivanjohnstonia Ixorhea Johnstonella Keraunea Lappula Lasiocaryum Lennoa Lepechiniella Lepidocordia Lindelofia Lithodora Lithospermum Lobostemon Maharanga Mairetis Mattiastrum Megacaryon Melanortocarya Memoremea Mertensia Microcaryum Microparacaryum Microula Mimophytum Moltkia Moltkiopsis Moritzia Myosotidium Myosotis Myriopus Nama Neatostema Nemophila Nesocaryum Nihon Nogalia Nonea Ogastemma Omphalodes Omphalolappula Omphalotrigonotis Oncaglossum Onosma Oreocarya Paracaryum Paramoltkia Pectocarya Pentaglottis Phacelia Pholisma Pholistoma Phyllocara Plagiobothrys Podonosma Pontechium Pseudolappula Pulmonaria Rindera Rochefortia Rochelia Romanzoffia Rotula Sauria Selkirkia Simpsonanthus Sinojohnstonia Solenanthus Stenosolenium Suchtelenia Symphytum Thaumatocaryon Thyrocarpus Tianschaniella Tiquilia Tournefortia Trachelanthus Trachystemon Tricardia Trichodesma Trigonocaryum Trigonotis Turricula Varronia Wellstedia Wigandia
Biology and health sciences
Boraginales
Plants
78918
https://en.wikipedia.org/wiki/Oleaceae
Oleaceae
Oleaceae, also known as the olive family or sometimes the lilac family, is a taxonomic family of flowering shrubs, trees, and a few lianas in the order Lamiales. It presently comprises 29 genera, one of which is recently extinct. The extant genera include Cartrema, which was resurrected in 2012. The number of species in the Oleaceae is variously estimated in a wide range around 700. The flowers are often numerous and highly odoriferous. The family has a subcosmopolitan distribution, ranging from the subarctic to the southernmost parts of Africa, Australia, and South America. Notable members include olive, ash, jasmine, and several popular ornamental plants including privet, forsythia, fringetrees, and lilac. Genera The following 29 genera (28 extant and one recently extinct) are recognized in the family Oleaceae. Linociera is not included, even though some authors continue to recognize it. Linociera is not easy to distinguish from Chionanthus, mostly because the latter is polyphyletic and not clearly defined. Tribe Myxopyreae Myxopyrum Blume Dimetra Kerr Nyctanthes L. Tribe Forsythieae Abeliophyllum Nakai – white forsythia Forsythia Vahl – forsythia Tribe Fontanesieae Fontanesia Labill. Tribe Jasmineae Chrysojasminum Banfi Menodora Humb. & Bonpl. Jasminum L. – jasmine Tribe Oleeae Subtribe Ligustrinae Syringa L. – lilac Ligustrum L. – privet Subtribe Schreberinae Comoranthus Knobl. Schrebera Roxb. Subtribe Fraxininae Fraxinus L. – ash Subtribe Oleinae Cartrema Raf. Chengiodendron Chionanthus L. – fringe tree Forestiera Poir. – swamp privet Haenianthus Griseb. Hesperelaea A.Gray (extinct) Nestegis Raf. Noronhia Stadman ex Thouars Notelaea Vent. Olea L. – olive Osmanthus Lour. – osmanthus Phillyrea L. – mock-privet Picconia DC. Priogymnanthus P.S.Green Tetrapilus Overview The type genus for Oleaceae is Olea, the olives. Recent classifications recognize no subfamilies, but the family is divided into five tribes. The distinctiveness of each tribe has been strongly supported in molecular phylogenetic studies, but the relationships among the tribes were not clarified until 2014. The phylogenetic tree for Oleaceae is a 5-grade that can be represented as {Myxopyreae [Forsythieae (Fontanesieae <Jasmineae + Oleeae>)]}. The major centers of diversity for Oleaceae are in Southeast Asia and Australia. There are also a significant number of species in Africa, China, and North America. In the tropics the family is represented in a variety of habitats, from low-lying dry forest to montane cloud forest. In Oleaceae, seed dispersal is almost entirely by wind or animals. When the fruit is a berry, it is mostly dispersed by birds. The wind-dispersed fruits are samaras. Some of the older works have recognized as many as 29 genera in Oleaceae. Today, most authors recognize at least 25, but this number will change because some of these genera have recently been shown to be polyphyletic. Estimates of the number of species in Oleaceae have ranged from 600 to 900. Most of the species number discrepancy is due to the genus Jasminum, in which as few as 200 or as many as 450 species have been accepted. In spite of the sparsity of the fossil record, and the inaccuracy of molecular-clock dating, it is clear that Oleaceae is an ancient family that became widely distributed early in its history. Some of the genera are believed to be relictual populations that remained unchanged over long periods because of isolation imposed by geographical barriers like the low-elevation areas that separate mountain peaks.
Description Members of the family Oleaceae are woody plants, mostly trees and shrubs; a few are lianas. Some of the shrubs are scandent, climbing by scrambling into other vegetation. Leaves without stipules; simple or pinnately or ternately compound. The family is characterized by opposite leaves. Alternate or whorled arrangements are rarely observed, with some Jasminum species presenting a spiral configuration. The laminas are pinnately veined and can be serrate, dentate, or entire at the margin. Domatia are observed in certain taxa. The leaves may be either deciduous or evergreen, with evergreen species predominating in warm temperate and tropical regions, and deciduous species predominating in colder regions. The flowers are most often bisexual and actinomorphic, occurring in racemes or panicles, and often fragrant. The calyx and corolla, when present, are gamosepalous and gamopetalous, respectively, their lobes connate, at least at the base. The androecium has 2 stamens. These are inserted on the corolla tube and alternate with the corolla lobes. The stigmas are two-lobed. The gynoecium consists of a compound pistil with two carpels. The ovary is superior, with two locules. The placentation is axile. Ovules usually 2 per locule; sometimes 4, rarely many. Nectary disk, when present, encircling the base of the ovary. The plants are most often hermaphrodite, but sometimes polygamomonoecious. The fruit can be a berry, drupe, capsule, or samara. The obvious feature that distinguishes Oleaceae and its sister family, Carlemanniaceae, from all others is the fact that, while the flowers are actinomorphic, the number of stamens is reduced to two. Many members of the family are economically significant. The olive (Olea europaea) is important for its fruit and for the olive oil extracted from it. The ashes (Fraxinus) are valued for their tough wood. Forsythias, lilacs, jasmines, osmanthuses, privets, and fringe trees are valued as ornamental plants in gardens and landscaping. At least two species of jasmine are the source of an essential oil. Their flowers are often added to tea. History Carl Linnaeus named eight of the genera of Oleaceae in 1753 in his Species Plantarum. He did not designate what we now know as plant families, but placed his genera in artificial groups for purposes of identification. After the work of Linnaeus, names for groups that included the genera of Oleaceae were used, but none of them was a valid publication of the family name Oleaceae. For example, Antoine Laurent de Jussieu, in his Genera Plantarum in 1789, placed them in an order which he called "Jasmineae". In 1809, in a flora of Portugal, Johann Centurius Hoffmannsegg and Johann H.F. Link described at the taxonomic rank of family a group which they called "Oleinae" (Flore Portugaise ou description de toutes des plantes ... 1:62, 1809). Their description is now regarded as the establishment of what we now know as Oleaceae. The last revision of Oleaceae was published in 2004 in a series entitled The Families and Genera of Vascular Plants. Since that time, molecular phylogenetic work has shown that the next revision of Oleaceae must include substantial changes to the circumscription of genera. Classification Oleaceae is most closely related to the small Indo-Malesian family Carlemanniaceae. These two families form the second most basal clade in the order Lamiales, after Plocospermataceae.
The families Plocospermataceae, Carlemanniaceae, Oleaceae, and Tetrachondraceae form a paraphyletic group known as the "basal Lamiales", which is in contrast to the monophyletic "core Lamiales". Taxonomy Oleaceae is one of only a few major plant families for which no well-sampled molecular phylogenetic study has ever been conducted. The only DNA sequence study of the entire family sampled 76 species for two noncoding chloroplast loci, rps16 and trnL–F. Little was determined in this study, largely because the mutation rate in the chloroplast genome of Oleaceae is very low compared to that of most other angiosperm families. Also, the family is notorious for incongruence between phylogenies based on plastid and nuclear DNA. The most likely cause of this incongruence is reticulate evolution resulting from rampant hybridization. The delimitation of genera in Oleaceae has always been especially problematic. Some recent studies of small groups of related genera have shown that some of the genera are not monophyletic. For example, Olea section Tetrapilus is separate from the rest of Olea. It is a distinct group of 23 species and had been named as a genus, Tetrapilus, by João de Loureiro in 1790. The genus Ligustrum has long been suspected of having originated from within Syringa, and this was confirmed in a cladistic comparison of selected chloroplast genes. Osmanthus consists of at least three lineages whose closest relatives are not other lineages of Osmanthus. Chionanthus is highly polyphyletic, with its species scattered across the phylogenetic tree of the subtribe Oleinae. Its African species are closer to Noronhia than to its type species, the North American Chionanthus virginicus. Its Madagascan species are phylogenetically within Noronhia and will be formally transferred to it in a forthcoming paper. The monophyly of Nestegis is in considerable doubt, but few of its closest relatives have been sampled in phylogenetic studies.
Biology and health sciences
Lamiales
null
79150
https://en.wikipedia.org/wiki/Feigenbaum%20constants
Feigenbaum constants
In mathematics, specifically bifurcation theory, the Feigenbaum constants δ and α are two mathematical constants which both express ratios in a bifurcation diagram for a non-linear map. They are named after the physicist Mitchell J. Feigenbaum. History Feigenbaum originally related the first constant to the period-doubling bifurcations in the logistic map, but also showed it to hold for all one-dimensional maps with a single quadratic maximum. As a consequence of this generality, every chaotic system that corresponds to this description will bifurcate at the same rate. Feigenbaum made this discovery in 1975, and he officially published it in 1978. The first constant The first Feigenbaum constant (or simply the Feigenbaum constant) δ is the limiting ratio of each bifurcation interval to the next between every period doubling of a one-parameter map x_{i+1} = f(x_i), where f is a function parameterized by the bifurcation parameter a. It is given by the limit δ = lim_{n→∞} (a_{n−1} − a_{n−2}) / (a_n − a_{n−1}), where a_n are discrete values of a at the nth period doubling. This gives its numerical value: δ = 4.669201609102990671853203820466... A simple rational approximation is 621/133, which is correct to 5 significant values (when rounding). For more precision use 1228/263, which is correct to 7 significant values. δ is approximately equal to 10/(π − 1), with an error of 0.0047%. Illustration Non-linear maps To see how this number arises, consider the real one-parameter map f(x) = a − x². Here a is the bifurcation parameter, x is the variable. The values of a for which the period doubles (e.g. the largest value of a with no period-2 orbit, or the largest a with no period-4 orbit) are a_1, a_2, etc. These are tabulated below:
{| class="wikitable"
|-
! n !! Period !! Bifurcation parameter (a_n) !! Ratio
|-
| 1 || 2 || 0.75 || —
|-
| 2 || 4 || 1.25 || —
|-
| 3 || 8 || 1.3680989 || 4.2337
|-
| 4 || 16 || 1.3940462 || 4.5515
|-
| 5 || 32 || 1.3996312 || 4.6458
|-
| 6 || 64 || 1.4008286 || 4.6639
|-
| 7 || 128 || 1.4010853 || 4.6682
|-
| 8 || 256 || 1.4011402 || 4.6689
|}
The ratio in the last column converges to the first Feigenbaum constant. The same number arises for the logistic map x_{i+1} = r x_i (1 − x_i) with real parameter r and variable x. Tabulating the bifurcation values again:
{| class="wikitable"
|-
! n !! Period !! Bifurcation parameter (r_n) !! Ratio
|-
| 1 || 2 || 3 || —
|-
| 2 || 4 || 3.4494897 || —
|-
| 3 || 8 || 3.5440903 || 4.7514
|-
| 4 || 16 || 3.5644073 || 4.6562
|-
| 5 || 32 || 3.5687594 || 4.6683
|-
| 6 || 64 || 3.5696916 || 4.6686
|-
| 7 || 128 || 3.5698913 || 4.6680
|-
| 8 || 256 || 3.5699340 || 4.6768
|}
Fractals In the case of the Mandelbrot set for the complex quadratic polynomial f(z) = z² + c, the Feigenbaum constant δ is the limiting ratio between the diameters of successive circles on the real axis in the complex plane:
{| class="wikitable"
|-
! n !! Period = 2^n !! Bifurcation parameter (c_n) !! Ratio
|-
| 1 || 2 || −0.75 || —
|-
| 2 || 4 || −1.25 || —
|-
| 3 || 8 || −1.3680989 || 4.2337
|-
| 4 || 16 || −1.3940462 || 4.5515
|-
| 5 || 32 || −1.3996312 || 4.6459
|-
| 6 || 64 || −1.4008287 || 4.6639
|-
| 7 || 128 || −1.4010853 || 4.6668
|-
| 8 || 256 || −1.4011402 || 4.6740
|-
| 9 || 512 || −1.4011520 || 4.6596
|-
| 10 || 1024 || −1.4011545 || 4.6750
|-
| ... || ... || ... || ...
|-
| ∞ || || −1.401155... ||
|}
The bifurcation parameter c_n is a root point of the period-2^n component. This series converges to the Feigenbaum point c = −1.401155... The ratio in the last column converges to the first Feigenbaum constant. Other maps also reproduce this ratio; in this sense the Feigenbaum constant in bifurcation theory is analogous to π in geometry and e in calculus. The second constant The second Feigenbaum constant, or Feigenbaum reduction parameter, α is given by α = 2.502907875095892822283902873218... It is the ratio between the width of a tine and the width of one of its two subtines (except the tine closest to the fold).
A negative sign is applied to α when the ratio between the lower subtine and the width of the tine is measured. These numbers apply to a large class of dynamical systems (ranging, for example, from dripping faucets to population growth). A simple rational approximation for α is 13/11 × 17/11 × 37/27 = 8177/3267. Properties Both numbers are believed to be transcendental, although they have not been proven to be so. In fact, there is no known proof that either constant is even irrational. The first proof of the universality of the Feigenbaum constants was carried out by Oscar Lanford, with computer assistance, in 1982 (with a small correction by Jean-Pierre Eckmann and Peter Wittwer of the University of Geneva in 1987). Over the years, non-numerical methods were discovered for different parts of the proof, aiding Mikhail Lyubich in producing the first complete non-numerical proof. Other values The period-3 window in the logistic map also has a period-doubling route to chaos, reaching chaos at , and it has its own two Feigenbaum constants: .
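To make the convergence in the tables above concrete, here is a minimal, self-contained Python sketch (an illustration, not part of the original article) that estimates both constants from the logistic map. Rather than locating the bifurcation values r_n directly, it locates the superstable parameters R_k, at which the critical point x = 1/2 lies on the period-2^k orbit; the gaps between successive R_k shrink by the same ratio δ, and the distances d_k from x = 1/2 to the nearest point of the orbit shrink by the ratio −α. All function names and tolerances are illustrative choices.

```python
# Estimate the Feigenbaum constants delta and alpha from the logistic map
# x_{n+1} = r * x * (1 - x), using the superstable parameters R_k, at which
# the critical point x = 1/2 belongs to the period-2^k orbit.

def superstable(r0, period, tol=1e-13, max_iter=60):
    """Solve f_r^period(1/2) = 1/2 for r with Newton's method.

    The derivative of the orbit with respect to r is propagated alongside
    the orbit: if u = dx/dr, then u' = x*(1 - x) + r*(1 - 2*x)*u.
    """
    r = r0
    for _ in range(max_iter):
        x, u = 0.5, 0.0
        for _ in range(period):
            u = x * (1 - x) + r * (1 - 2 * x) * u
            x = r * x * (1 - x)
        if u == 0.0:
            break
        step = (x - 0.5) / u   # Newton step for g(r) = f^period(1/2) - 1/2
        r -= step
        if abs(step) < tol:
            break
    return r

def half_orbit_distance(r, period):
    """d = f_r^(period/2)(1/2) - 1/2: the signed distance from the critical
    point to the nearest orbit point at a superstable parameter."""
    x = 0.5
    for _ in range(period // 2):
        x = r * x * (1 - x)
    return x - 0.5

# Closed forms for the first two superstable parameters:
# period 1: r = 2; period 2: r = 1 + sqrt(5).
rs = [2.0, 1.0 + 5 ** 0.5]
delta = 4.7                         # rough a priori guess for extrapolation
for k in range(2, 12):              # periods 4, 8, ..., 2048
    period = 2 ** k
    guess = rs[-1] + (rs[-1] - rs[-2]) / delta
    rs.append(superstable(guess, period))
    delta = (rs[-2] - rs[-3]) / (rs[-1] - rs[-2])
    alpha = -half_orbit_distance(rs[-2], period // 2) / half_orbit_distance(rs[-1], period)
    print(f"period {period:5d}   delta ~ {delta:.9f}   alpha ~ {alpha:.9f}")
# The printed ratios approach delta = 4.669201609... and alpha = 2.502907875...
```

Working with superstable parameters is a standard numerical shortcut here: they are simple roots of a smooth equation and so can be found by Newton's method, whereas the bifurcation points themselves are tangency conditions that are harder to bracket. In double precision, the ratios stabilize to roughly eight significant digits before rounding error in the shrinking gaps takes over.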
Mathematics
Basics
null
79270
https://en.wikipedia.org/wiki/Cartridge%20%28firearms%29
Cartridge (firearms)
A cartridge, also known as a round, is a type of pre-assembled firearm ammunition packaging a projectile (bullet, shot, or slug), a propellant substance (smokeless powder, black powder substitute, or black powder) and an ignition device (primer) within a metallic, paper, or plastic case that is precisely made to fit within the barrel chamber of a breechloading gun, for convenient transportation and handling during shooting. Although in popular usage the term "bullet" is often used to refer to a complete cartridge, the correct usage only refers to the projectile. Cartridges can be categorized by the type of primer: a small charge of an impact-sensitive explosive compound or an electric-sensitive chemical mixture that is located at the center of the case head (centerfire); inside the rim (rimfire); inside the walls on the fold of the case base that is shaped like a cup (cupfire); in a sideways projection that is shaped like a pin (pinfire) or a lip (lipfire); or in a small bulge shaped like a nipple at the case base (teatfire). Only small-caliber rimfire cartridges and centerfire cartridges have survived into the modern day. Military and commercial producers continue to pursue the goal of caseless ammunition. Some artillery ammunition uses the same cartridge concept as found in small arms. In other cases, the artillery shell is separate from the propellant charge. A cartridge without a projectile is called a blank; one that is completely inert (contains no active primer and no propellant) is called a dummy; one that failed to ignite and shoot off the projectile is called a dud; and one that ignited but failed to sufficiently push the projectile out of the barrel is called a squib. Design Purpose The cartridge was invented specifically for breechloading firearms. Prior to its invention, the projectiles and propellant were carried separately and had to be individually loaded via the muzzle into the gun barrel before firing; a separate ignitor compound (from a burning slow match, to a small charge of gunpowder in a flash pan, to a metallic percussion cap mounted on top of a "nipple" or cone) then served as a source of activation energy to set off the shot. Such loading procedures often required adding paper/cloth wadding and ramming it down repeatedly with a rod to optimize the gas seal, and were thus clumsy and inconvenient, severely restricting the practical rate of fire of the weapon, leaving the shooter vulnerable to the threat of close combat (particularly cavalry charges), as well as complicating the logistics of ammunition. The primary purpose of using a cartridge is to offer a handy pre-assembled "all-in-one" package that is convenient to handle and transport and easily loaded into the breech (rear end) of the barrel, while preventing potential propellant loss, contamination, or degradation from moisture and the elements. In modern self-loading firearms, the cartridge case also enables the action mechanism to use part of the propellant's energy (carried inside the cartridge itself) to cyclically load new rounds of ammunition, allowing quick repeated firing. To perform a firing, the round is first inserted into a "ready" position within the chamber, aligned with the bore axis (i.e. "in battery"). While in the chamber, the cartridge case obturates all directions except the bore to the front, reinforced by a breechblock or a locked bolt from behind, designating the forward direction as the path of least resistance.
When the trigger is pulled, the sear disengages and releases the hammer/striker, causing the firing pin to impact the primer embedded in the base of the cartridge. The shock-sensitive chemical in the primer then creates a jet of sparks that travels into the case and ignites the main propellant charge within, causing the powders to deflagrate (but not detonate). This rapid exothermic combustion yields a mixture of highly energetic gases and generates a very high pressure inside the case, often fire-forming it against the chamber wall. When the pressure builds up sufficiently to overcome the fastening friction between the projectile (e.g. bullet) and the case neck, the projectile detaches from the case and, pushed by the expanding high-pressure gases behind it, moves down the bore and out the muzzle at extremely high speed. After the bullet exits the barrel, the gases are released to the surroundings as ejectae in a loud blast, and the chamber pressure drops back down to ambient level. The case, which had been elastically expanded by high pressure, contracts slightly, which eases its removal from the chamber when pulled by the extractor. The spent cartridge, with its projectile and propellant gone but the case still containing a used-up primer, then gets ejected from the gun to clear room for a subsequent new round. Components A modern cartridge consists of four main components: the case, the projectile, the propellant, and the primer. Case The main defining component of the cartridge is the case, which gives the cartridge its shape and serves as the integrating housing for the other functional components. It acts as a container for the propellant powders and also serves as a protective shell against the elements; it attaches the projectile either at the front end of the cartridge (bullets for pistols, submachine guns, rifles, and machine guns) or inside of the cartridge (wadding/sabot containing either a quantity of shot (pellets) or an individual slug for shotguns), and aligns it with the barrel bore to the front; and it holds the primer at the back end, which receives an impact from a firing pin and is responsible for igniting the main propellant charge inside the case. While historically paper had been used in the earliest cartridges, almost all modern cartridges use metallic casing. The modern metallic case can either be a "bottleneck" one, whose frontal portion near the end opening (known as the "case neck") has a noticeably smaller diameter than the main part of the case ("case body"), with a noticeably angled slope ("case shoulder") in between; or a "straight-walled" one, where there is no narrowed neck and the whole case looks cylindrical. The case shape is meant to match exactly the chamber of the gun that fires it, and the "neck", "shoulder", and "body" of a bottleneck cartridge have corresponding counterparts in the chamber known as the "chamber neck", "chamber shoulder", and "chamber body". Some cartridges, like the .470 Capstick, have what is known as a "ghost shoulder", a very slightly protruding shoulder, and can be viewed as something between a bottleneck and a straight-walled case. A ghost shoulder, rather than a continuous taper on the case wall, helps the cartridge line up concentrically with the bore axis, contributing to accuracy. The front opening of the case neck, which receives and fastens the bullet via crimping, is known as the case mouth.
The closed-off rear end of the case body, which holds the primer and technically is the case base, is called the case head, as it is the most prominent and frequently the widest part of the case. There is a circumferential flange at the case head called a rim, which provides a lip for the extractor to engage. Depending on whether and how the rim protrudes beyond the maximum case body diameter, the case can be classified as either "rimmed", "semi-rimmed", "rimless", "rebated", or "belted". The shape of a bottleneck cartridge case (e.g. body diameter, shoulder slant angle and position, and neck length) also affects the amount of attainable pressure inside the case, which in turn influences the accelerative capacity of the projectile. Wildcat cartridges are often made by reshaping the case of an existing cartridge. Straight-sided cartridges are less prone to rupturing than tapered cartridges, in particular with higher-pressure propellant when used in blowback-operated firearms. In addition to case shape, rifle cartridges can also be grouped according to case dimensions, usually the cartridge's overall length (COL), which in turn dictates the minimal receiver size and operating space (bolt travel) needed by the action, into "mini-action", "short-action", "long-action" ("standard-action"), or "magnum-action" categories. Mini-action cartridges are usually intermediate rifle cartridges, with a COL of or shorter, most commonly exemplified by the .223 Remington; short-action cartridges are usually full-powered rifle cartridges with a COL between , most commonly exemplified by the .308 Winchester; long-action ("standard-action") cartridges are usually traditional full-powered rifle cartridges with a COL between , most commonly exemplified by the .30-06 Springfield; magnum-action cartridges are usually rifle cartridges that are both longer and more powerful than traditional full-powered long-action rifle cartridges, with a COL between , including some of the long-action cartridges with a case head larger than diameter, most commonly exemplified by the .375 Holland & Holland Magnum. The most popular material used to make cartridge cases is brass, due to its good corrosion resistance. The head of a brass case can be work-hardened to withstand high pressures and allow for manipulation via extraction and ejection without rupturing. The neck and body portion of a brass case is easily annealed to make the case ductile enough to allow reshaping, so that it can be handloaded many times, and fire forming can help accurize the shooting. Steel casing is used in some plinking ammunition, as well as in some military training ammunition (mostly from the former Soviet republics of Armenia, Azerbaijan, Belarus, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Latvia, Lithuania, Moldova, Russia, Tajikistan, Turkmenistan, Ukraine, and Uzbekistan), along with China. Steel is less expensive to make than brass, but it is far less corrosion-resistant and not feasible to reuse and reload. Military forces typically consider service small arms cartridge cases to be disposable, single-use devices. However, the mass of the cartridges affects how much ammunition a soldier can carry, so the lighter steel cases do have a logistic advantage. Conversely, steel is more susceptible to contamination and damage, so all such cases are varnished or otherwise sealed against the elements.
One downside caused by the increased strength of steel in the neck of these cases (compared to the annealed neck of a brass case) is that propellant gas can blow back past the neck and leak into the chamber. Constituents of these gases condense on the (relatively cold) chamber wall, and this solid propellant residue can make extraction of fired cases difficult. This is less of a problem for small arms of the former Warsaw Pact nations, which were designed with much looser chamber tolerances than NATO weapons. Aluminum-cased cartridges are available commercially. These are generally not reloaded, as aluminum fatigues easily during firing and resizing. Some calibers also have non-standard primer sizes to discourage reloaders from attempting to reuse these cases. Plastic cases are commonly used in shotgun shells, and some manufacturers offer polymer-cased centerfire pistol and rifle cartridges. Projectile As firearms are projectile weapons, the projectile is the effector component of the cartridge, and is actually responsible for reaching, impacting, and exerting damage onto a target. The word "projectile" is an umbrella term that describes any type of kinetic object launched into ballistic flight, but due to the ubiquity of rifled firearms shooting bullets, the term has become something of a technical synonym for bullets among handloaders. The projectile's motion in flight is known as its external ballistics, and its behavior upon impacting an object is known as its terminal ballistics. A bullet can be made of virtually anything (see below), but lead is the traditional material of choice because of its high density, malleability, ductility, and low cost of production. However, at speeds greater than , pure lead will melt more and deposit fouling in rifled bores at an ever-increasing rate. Alloying the lead with a small percentage of tin or antimony can reduce such fouling, but grows less effective as velocities are increased. A cup made of harder metal (e.g. copper), called a gas check, is often placed at the base of a lead bullet to decrease lead deposits by protecting the rear of the bullet against melting when fired at higher pressures, but this too does not work at higher velocities. A modern solution is to cover the bare lead in a protective powder coat, as seen in some rimfire ammunitions. Another solution is to encase a lead core within a thin exterior layer of harder metal (e.g. gilding metal, cupronickel, copper alloys, or steel), known as a jacket. Nowadays, steel, bismuth, tungsten, and other exotic alloys are sometimes used to replace lead and prevent the release of toxicity into the environment. In armor-piercing bullets, very hard and high-density materials such as hardened steel, tungsten, tungsten carbide, or depleted uranium are used for the penetrator core. Non-lethal projectiles with very limited penetrative and stopping powers are sometimes used in riot control or training situations, where killing or even wounding a target would be undesirable. Such projectiles are usually made from softer and lower-density materials, such as plastic or rubber. Wax bullets (such as those used in Simunition training) are occasionally used for force-on-force tactical training, and pistol dueling with wax bullets used to be a competitive Olympic sport prior to World War I. For smoothbore weapons such as shotguns, small metallic balls known as shot are typically used, which are usually contained inside a semi-flexible, cup-like sabot called "wadding".
When fired, the wadding is launched from the gun as a payload-carrying projectile, loosens and opens up after exiting the barrel, and then inertially releases the contained shot as a hail of sub-projectiles. Shotgun shot is usually made from bare lead, though copper/zinc-coated steel balls (such as those used by BB guns) can also be used. Lead pollution of wetlands has led to the BASC and other organizations campaigning for the phasing out of traditional lead shot. There are also unconventional projectile fillings such as bundled flechettes, rubber balls, rock salt, and magnesium shards, as well as non-lethal specialty projectiles such as rubber slugs and bean bag rounds. Solid projectiles (e.g. slugs, baton rounds, etc.) are also shot while contained within a wadding, as the wadding obturates the bore better and typically slides down the barrel with less friction. Propellant When a propellant is ignited and begins to combust, the resulting chemical reaction releases the chemical energy stored within. At the same time, a significant amount of gaseous products are released, which are highly energetic due to the exothermic nature of the reaction. These combustion gases become highly pressurized in a confined space, such as the cartridge casing (reinforced by the chamber wall), occluded from the front by the projectile (bullet, or wadding containing shot/slug) and from behind by the primer (supported by the bolt/breechblock). When the pressure builds up high enough to overcome the crimp friction between the projectile and the case, the projectile separates from the case and gets propelled down the gun barrel, gaining high kinetic energy from the propellant gases and accelerating to its muzzle velocity. The projectile motion driven by the propellant inside the gun is known as internal ballistics. Primer Because the main propellant charge is located deep inside the gun barrel and is thus impractical to ignite directly from the outside, an intermediary is needed to relay the ignition. In the earliest black powder muzzleloaders, a fuse was used to direct a small flame through a touch hole into the barrel, which was slow and subject to disturbance from environmental conditions. The next evolution was to have a small separate charge of finer gunpowder poured into a flash pan, where it could be set off by an external ignition source; when ignited, the flame passed through a small hole in the side of the barrel to ignite the main gunpowder charge. The last evolution was to use a small metallic cap filled with a shock-sensitive explosive compound that would ignite with a hammer strike. The source of ignition could be a burning slow match (matchlock) placed onto a touch hole, a piece of pyrite (wheellock) or flint (flintlock) striking a steel frizzen, or a shock-sensitive brass or copper percussion cap (caplock) placed over a conical-shaped cone piece with a hollow pipe to create sparks. When the priming powder starts combusting, the flame is transferred through an internal touch hole called a flash hole to provide activation energy for the main powder charge in the barrel. The disadvantage is that the flash pan can still be exposed to the outside, making it difficult (or even impossible) to fire the gun in rainy or humid conditions, as wet gunpowder burns poorly.
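As a rough quantitative footnote to the internal ballistics described in the propellant section above, the following short Python sketch computes muzzle kinetic energy as E = ½mv². The cartridge figures are typical ballpark values chosen purely for illustration, not data from this article.

```python
# Muzzle kinetic energy E = 1/2 * m * v^2 for a few representative loads.
# Bullet masses (grains) and muzzle velocities (m/s) below are illustrative
# ballpark figures only, not specifications.

GRAIN_TO_KG = 6.479891e-5   # 1 grain = 64.79891 mg by definition

def muzzle_energy_joules(bullet_grains, velocity_ms):
    """Kinetic energy in joules from bullet mass in grains and velocity in m/s."""
    return 0.5 * bullet_grains * GRAIN_TO_KG * velocity_ms ** 2

for name, grains, v in [
    (".22 Long Rifle", 40, 330),
    ("9x19mm Parabellum", 115, 360),
    (".308 Winchester", 150, 850),
]:
    print(f"{name:18s} ~{muzzle_energy_joules(grains, v):5.0f} J")
```

With these assumed figures, the sketch prints roughly 140 J, 480 J, and 3500 J respectively, illustrating how strongly muzzle energy scales with velocity for a given bullet mass.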
After Edward Charles Howard discovered fulminates in 1800 and the patent by Reverend Alexander John Forsyth expired in 1807, Joseph Manton invented the precursor of the percussion cap in 1814, which was further developed in 1822 by the English-born American artist Joshua Shaw, and caplock fowling pieces appeared in Regency-era England. These guns used a spring-loaded hammer to strike a percussion cap placed over a conical "nipple", which served as both an "anvil" against the hammer strike and a transfer port for the sparks created by crushing the cap; they were easier and quicker to load, more resilient to weather conditions, and more reliable than the preceding flintlocks. Modern primers are basically improved percussion caps with shock-sensitive chemicals (e.g. lead styphnate) enclosed in a small button-shaped capsule. In the early paper cartridges, invented not long after the percussion cap, the primer was located deep inside the cartridge just behind the bullet, requiring a very thin and elongated firing pin to pierce the paper casing. Such guns were known as needle guns, the most famous of which was decisive in the Prussian victory over the Austrians at Königgrätz in 1866. After the metallic cartridge was invented, the primer was relocated backward to the base of the case, either at the center of the case head (centerfire), inside the rim (rimfire), inside a cup-like concavity of the case base (cupfire), in a pin-shaped sideways projection (pinfire), in a lip-like flange (lipfire), or in a small nipple-like bulge at the case base (teatfire). Today, only the centerfire and rimfire have survived as mainstream primer designs, while the pinfire also still exists, but only in rare novelty miniature guns and a few very small blank cartridges designed as noisemakers. In rimfire ammunition, the primer compound is moulded integrally into the interior of the protruding case rim, which is crushed between the firing pin and the edge of the barrel breech (serving as the "anvil"). These cartridges are thus not reloadable, and are usually on the lower end of the power spectrum, although due to their low manufacturing cost some of them (e.g. .22 Long Rifle) are among the most popular and prolific ammunitions currently in use. Centerfire primers are a separately manufactured component, seated into a central recess at the case base known as the primer pocket, and come in two types: Berdan and Boxer. Berdan primers, patented by American inventor Hiram Berdan in 1866, are a simple capsule, and the corresponding case has two small flash holes with a bulged bar in between, which serves as the "anvil" for the primer. Boxer primers, patented by Royal Artillery colonel Edward Mounier Boxer, also in 1866, are more complex and have an internal tripedal "anvil" built into the primer itself, and the corresponding case has only a single large central flash hole. Commercially, Boxer primers dominate the handloader market due to the ease of depriming and the ability to transfer sparks more efficiently. Due to their small size and charge load, primers lack the power to shoot out the projectile by themselves, but can still put out enough energy to separate the bullet from the casing and push it partway into the barrel – a dangerous condition called a squib load. Firing a fresh cartridge behind a squib load obstructing the barrel will generate dangerously high pressure, leading to a catastrophic failure and potentially causing severe injuries when the gun blows apart in the shooter's hands.
Actor Brandon Lee's infamous accidental death in 1993 was believed to have been caused by an undetected squib that was dislodged and shot out by a blank. Manufacturing Beginning in the 1860s, early metallic cartridges (e.g. for the Montigny mitrailleuse or the Snider–Enfield rifle) were produced similarly to the paper cartridges, with sides made from thick paper but with copper (later brass) foil supporting the base of the cartridge, with some further details in it holding the primer. In the 1870s, brass foil came to cover all of the cartridge, and the technology to make solid cases, from which the metallic cartridges described below were developed, appeared; but before the 1880s it was far too expensive and time-consuming for mass production, and the metallurgy was not yet perfected. To manufacture cases for cartridges, a sheet of brass is punched into disks. These disks go through a series of drawing dies. The disks are annealed and washed before moving to the next series of dies. The brass needs to be annealed to remove the work-hardening in the material and make the brass malleable again, ready for the next series of dies. Manufacturing bullet jackets is similar to making brass cases: there is a series of drawing steps with annealing and washing. Specifications Critical cartridge specifications include neck size, bullet weight and caliber, maximum pressure, headspace, overall length, case body diameter and taper, shoulder design, rim type, etc. Generally, every characteristic of a specific cartridge type is tightly controlled and few types are interchangeable in any way. Exceptions do exist, but generally these are only where a shorter cylindrical rimmed cartridge can be used in a longer chamber (e.g., .22 Short in a .22 Long Rifle chamber, .32 H&R Magnum in a .327 Federal Magnum chamber, and .38 Special in a .357 Magnum chamber). Centerfire primer type (Boxer or Berdan, described above) is interchangeable, although not in the same case. Deviation in any of these specifications can result in firearm damage and, in some instances, injury or death. Similarly, the use of the wrong type of cartridge in any given gun can damage the gun, or cause bodily injury. Cartridge specifications are determined by several standards organizations, including SAAMI in the United States, and C.I.P. in many European states. NATO also performs its own tests for military cartridges for its member nations; due to differences in testing methods, NATO cartridges (headstamped with the NATO cross) may present an unsafe combination when loaded into a weapon chambered for a cartridge certified by one of the other testing bodies. Bullet diameter is measured either as a fraction of an inch (usually in 1/100 or in 1/1000) or in millimeters. Cartridge case length can also be designated in inches or millimeters. History Paper cartridges have been in use for centuries, with a number of sources dating their usage as far back as the late 14th and early 15th centuries. Historians note their use by soldiers of Christian I, Elector of Saxony, and his son in the late 16th century, while the Dresden Armoury has evidence dating their use to 1591. Capo Bianco wrote in 1597 that paper cartridges had long been in use by Neapolitan soldiers. Their use became widespread by the 17th century. The 1586 round consisted of a charge of powder and a bullet in a paper cartridge. Thick paper is still known as "cartridge paper" from its use in these cartridges. Another source states the cartridge appeared in 1590.
King Gustavus Adolphus of Sweden had his troops use cartridges in the 1600s. The paper formed a cylinder with twisted ends; the ball was at one end, and the measured powder filled the rest. This cartridge was used with muzzle-loading military firearms, probably more often than for sporting shooting, the base of the cartridge being ripped or bitten off by the soldier, the powder poured into the barrel, and the paper and bullet rammed down the barrel. In the Civil War era cartridge, the paper was supposed to be discarded, but soldiers often used it as a wad. To ignite the charge an additional step was required where a finer-grained powder called priming powder was poured into the pan of the gun to be ignited by the firing mechanism. The evolving nature of warfare required a firearm that could load and fire more rapidly, resulting in the flintlock musket (and later the Baker rifle), in which the pan was covered by furrowed steel. This was struck by the flint and fired the gun. In the course of loading, a pinch of powder from the cartridge would be placed into the pan as priming, before the rest of the cartridge was rammed down the barrel, providing charge and wadding. Later developments rendered this method of priming unnecessary, as, in loading, a portion of the charge of powder passed from the barrel through the vent into the pan, where it was held by the cover and hammer. The next important advance in the method of ignition was the introduction of the copper percussion cap. This was only generally applied to the British military musket (the Brown Bess) in 1842, a quarter of a century after the invention of percussion powder and after an elaborate government test at Woolwich in 1834. The invention that made the percussion cap possible was patented by the Rev. A. J. Forsyth in 1807 and consisted of priming with a fulminating powder made of potassium chlorate, sulfur, and charcoal, which ignited by concussion. This invention was gradually developed, and used, first in a steel cap, and then in a copper cap, by various gunmakers and private individuals before coming into general military use nearly thirty years later. The alteration of the military flint-lock to the percussion musket was easily accomplished by replacing the powder pan with a perforated nipple and by replacing the cock or hammer that held the flint with a smaller hammer that had a hollow to fit on the nipple when released by the trigger. The shooter placed a percussion cap (now made of three parts of potassium chlorate, two of fulminate of mercury and powdered glass) on the nipple. The detonating cap thus invented and adopted brought about the invention of the modern cartridge case, and rendered possible the general adoption of the breech-loading principle for all varieties of rifles, shotguns, and pistols. This greatly streamlined the reloading procedure and paved the way for semi- and full-automatic firearms. However, this big leap forward came at a price: it introduced an extra component into each round – the cartridge case – which had to be removed before the gun could be reloaded. While a flintlock, for example, is immediately ready to reload once it has been fired, adopting brass cartridge cases brought in the problems of extraction and ejection. The mechanism of a modern gun must not only load and fire the piece but also provide a method of removing the spent case, which might require just as many added moving parts. 
Many malfunctions occur during this process, either through a failure to extract a case properly from the chamber or by allowing the extracted case to jam the action. Nineteenth-century inventors were reluctant to accept this added complication and experimented with a variety of caseless or self-consuming cartridges before finally accepting that the advantages of brass cases far outweighed this one drawback. Integrated cartridges The first integrated cartridge was developed in Paris in 1808 by the Swiss gunsmith Jean Samuel Pauly in association with the French gunsmith François Prélat. Pauly created the first fully self-contained cartridges: they incorporated a copper base with an integrated mercury fulminate primer (Pauly's major innovation), a round bullet, and either a brass or paper casing. The cartridge was loaded through the breech and fired with a needle. The needle-activated centerfire breech-loading gun would become a major feature of firearms thereafter. Pauly made an improved version, protected by a patent, on 29 September 1812. Probably no invention connected with firearms has wrought such changes in the principle of gun construction as those effected by the "expansive cartridge case". This invention completely revolutionized the art of gun making, has been successfully applied to all descriptions of firearms, and has produced a new and important industry: that of cartridge manufacture. Its essential feature is preventing gas from escaping the breech when the gun is fired, by means of an expansive cartridge case containing its own means of ignition. Prior to this invention, shotguns and sporting rifles were loaded by means of powder flasks and shot bags or flasks, bullets, wads, and copper caps, all carried separately. One of the earliest efficient modern cartridge cases was the pinfire cartridge, developed by the French gunsmith Casimir Lefaucheux in 1836. It consisted of a thin weak shell made of brass and paper that expanded from the force of the explosion. This fit perfectly in the barrel and thus formed an efficient gas check. A small percussion cap was placed in the middle of the base of the cartridge and was ignited by means of a brass pin projecting from the side and struck by the hammer. This pin also afforded the means of extracting the cartridge case. This cartridge was introduced in England by Lang, of Cockspur Street, London, about 1845. In the American Civil War (1861–1865) a breech-loading rifle, the Sharps, was introduced and produced in large numbers. It could be loaded with either a ball or a paper cartridge. After that war, many were converted to the use of metal cartridges. The development by Smith & Wesson (among many others) of revolver handguns that used metal cartridges helped establish cartridge firearms as the standard in the United States by the late 1860s and early 1870s, although many continued to use percussion revolvers well after that. Modern metallic cartridges Most of the early all-metallic cartridges were of the pinfire and rimfire types. The first centerfire metallic cartridge was invented by Jean Samuel Pauly in the first decades of the 19th century. However, although it was the first cartridge to use a form of obturation, a feature integral to a successful breech-loading cartridge, Pauly died before it was converted to percussion cap ignition. The Frenchman Louis-Nicolas Flobert invented the first rimfire metallic cartridge in 1845. His cartridge consisted of a percussion cap with a bullet attached to the top.
Flobert then made what he called "parlor guns" for this cartridge, as these rifles and pistols were designed to be shot in indoor shooting parlors in large homes. These 6mm Flobert cartridges do not contain any powder; the only propellant substance in the cartridge is the percussion cap itself. In English-speaking countries, the 6mm Flobert cartridge corresponds to .22 BB Cap and .22 CB Cap ammunition. These cartridges have a relatively low muzzle velocity of around 700 ft/s (210 m/s). The French gunsmith Benjamin Houllier improved the Lefaucheux pinfire cardboard cartridge and patented, in Paris in 1846, the first fully metallic pinfire cartridge containing powder in a metallic case. He also included in his patent claims rim- and centerfire-primed cartridges using brass or copper casings. Houllier commercialised his weapons in association with the gunsmiths Blanchard and Charles Robert. In the United States, in 1857, the Flobert cartridge inspired the .22 Short, specially conceived for the first American revolver using rimfire cartridges, the Smith & Wesson Model 1. A year before, in 1856, the LeMat revolver had been the first American breech-loading firearm, but it used pinfire cartridges, not rimfire. Rollin White, a former employee of the Colt's Patent Firearms Manufacturing Company, had been the first in America to conceive the idea of having the revolver cylinder bored through to accept metallic cartridges (circa 1852); the first in the world to use bored-through cylinders had probably been Lefaucheux in 1845, who invented a pepperbox-revolver loaded from the rear using such cylinders. Another possible claimant for the bored-through cylinder is a Frenchman by the name of Perrin, who allegedly produced a pepperbox revolver with a bored-through cylinder to order in 1839. Other possible claimants include Devisme of France, who claimed to have produced a breech-loading revolver in 1834 or 1842, though his claim was later judged by French courts as lacking in evidence, and Hertog & Devos and Malherbe & Rissack of Belgium, who both filed patents for breech-loading revolvers in 1853. However, Samuel Colt refused this innovation. White left Colt and went to Smith & Wesson to license his patent, and this is how the S&W Model 1 saw the light of day in 1857. The patent did not finally expire until 1870, at which point Smith & Wesson's competitors were free to design and commercialize their own revolving breech-loaders using metallic cartridges. Famous models of that time are the Colt Open Top (1871–1872) and Single Action Army "Peacemaker" (1873). In rifles, however, lever-action mechanism patents were not obstructed by Rollin White's patent, because it covered only bored-through cylinders and revolving mechanisms. Thus, larger caliber rimfire cartridges were soon introduced after the Smith & Wesson .22 Short appeared in 1857. Some of these rifle cartridges were used in the American Civil War, including the .44 Henry and 56-56 Spencer (both in 1860). However, the large rimfire cartridges were soon replaced by centerfire cartridges, which could safely handle higher pressures. In 1867, the British War Office adopted the Eley–Boxer metallic centerfire cartridge case in the Pattern 1853 Enfield rifles, which were converted to Snider–Enfield breech-loaders on the Snider principle. This consisted of a block opening on a hinge, thus forming a false breech against which the cartridge rested.
The priming cap was in the base of the cartridge and was discharged by a striker passing through the breech block. Other European powers adopted breech-loading military rifles from 1866 to 1868, with paper instead of metallic cartridge cases. The original Eley–Boxer cartridge case was made of thin coiled brass; occasionally these cartridges could break apart upon firing and jam the breech with the unwound remains of the case. Later, the solid-drawn centerfire cartridge case, made of one entire solid piece of tough hard metal, an alloy of copper, with a solid head of thicker metal, was generally substituted. Centerfire cartridges with solid-drawn metallic cases containing their own means of ignition are almost universally used in all modern varieties of military and sporting rifles and pistols. Around 1870, machined tolerances had improved to the point that the cartridge case was no longer necessary to seal a firing chamber: precision-faced bolts would seal as well, and could be economically manufactured. However, normal wear and tear proved this system to be generally infeasible. Factory vs. handloading Nomenclature The name of any given cartridge does not necessarily reflect any cartridge or gun dimension; the name is merely the standardized and accepted moniker. SAAMI (Sporting Arms and Ammunition Manufacturers' Institute), its European counterpart (C.I.P.), and the members of those organizations specify correct cartridge names. It is incomplete to refer to a cartridge as a certain "caliber" (e.g., "30-06 caliber"), as the word caliber only describes the bullet diameter. The correct full name for this round is .30-06 Springfield; the "-06" means it was introduced in 1906. In sporting arms, the only consistent definition of "caliber" is bore diameter, and dozens of unique .30-caliber round types exist. There is considerable variation in cartridge nomenclature. Names sometimes reflect various characteristics of the cartridge. For example, the .308 Winchester uses a bullet of 308/1000-inch diameter and was standardized by Winchester. Conversely, cartridge names often reflect nothing related to the cartridge in any obvious way. For example, the .218 Bee uses a bullet of 224/1000-inch diameter, fired through a .22-in bore; the "218" and "Bee" portions of this cartridge name reflect nothing other than the desires of those who standardized the cartridge. Many similar examples exist, for example: .219 Zipper, .221 Fireball, .222 Remington, .256 Winchester, .280 Remington, .307 Winchester, .356 Winchester. Where two numbers are used in a cartridge name, the second number may reflect a variety of things. Frequently the first number reflects bore diameter (in inches or millimeters) and the second number reflects case length (in inches or millimeters). For example, the 7.62×51mm NATO has a bore diameter of 7.62 mm and a case length of 51 mm, with a total cartridge length of 71.1 mm; the commercial version is the .308 Winchester. In older black powder cartridges, the second number typically refers to the powder charge, in grains. For example, the .50-90 Sharps has a .50-inch bore and used a nominal charge of 90 grains of black powder. Many such cartridges were designated by a three-number system (e.g., 45-120-3 Sharps: .45-caliber bore, 120 grains of black powder, 3-inch-long case). Other times, a similar three-number system indicated bore (caliber), charge (grains), and bullet weight (grains); the 45-70-500 Government is an example.
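The numeric conventions just described are regular enough to be parsed mechanically. The following minimal sketch (in Python; the function names and patterns are our own illustration, not any industry standard) splits a metric designation such as 7.62×51mm into bore diameter and case length, and a black-powder designation such as .50-90 or 45-120-3 into caliber, charge, and a third number; note, as stated above, that in some three-number names (e.g., 45-70-500) the third number is bullet weight rather than case length, so it cannot be interpreted blindly.

```python
import re

def parse_metric(name):
    """'7.62x51mm NATO' -> (bore diameter in mm, case length in mm)."""
    m = re.match(r"(\d+(?:\.\d+)?)\s*[x×]\s*(\d+(?:\.\d+)?)\s*mm", name)
    if not m:
        raise ValueError(f"not a metric designation: {name!r}")
    return float(m.group(1)), float(m.group(2))

def parse_black_powder(name):
    """'45-120-3 Sharps' -> (caliber in inches, charge in grains, third number).

    The third number is usually case length in inches, but in names like
    45-70-500 it is bullet weight in grains; the caller must know which.
    """
    m = re.match(r"\.?(\d+)-(\d+)(?:-(\d+(?:\.\d+)?))?", name)
    if not m:
        raise ValueError(f"not a black-powder designation: {name!r}")
    caliber = int(m.group(1)) / 100.0          # '45' -> .45-inch bore
    charge = int(m.group(2))                   # grains of black powder
    third = float(m.group(3)) if m.group(3) else None
    return caliber, charge, third

print(parse_metric("7.62×51mm NATO"))        # (7.62, 51.0)
print(parse_black_powder(".50-90 Sharps"))   # (0.5, 90, None)
print(parse_black_powder("45-120-3 Sharps")) # (0.45, 120, 3.0)
```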
Often, the name reflects the company or individual who standardized it, such as the .30 Newton, or some characteristic important to that person. The .38 Special actually has a nominal bullet diameter of .357 inch (jacketed) or .358 inch (lead), while the case has a nominal diameter of .379 inch, hence the name. This is historically logical: the hole drilled through the chambers of .36-caliber cap-and-ball revolvers when converting them to work with cartridges was about .38 inch, and the cartridge made to work in those revolvers was logically named the .38 Colt. The original cartridges used a heeled bullet, as in a .22 rimfire, where the bullet was the same diameter as the case. Early Colt Army .38s have a bore diameter that will allow a .357-inch-diameter bullet to slide through the barrel, and the cylinder is bored straight through with no step. Later versions used an inside-lubricated bullet of .357-inch diameter seated within the case, instead of the original .38-inch heeled bullet, with a corresponding reduction in bore diameter. The difference between .38 Special bullet diameter and case diameter reflects the thickness of the case mouth (approximately 11/1000 inch per side). The .357 Magnum evolved from the .38 Special. The .357 was named to reflect bullet diameter (in thousandths of an inch), not case diameter; "Magnum" was used to indicate its longer case and higher operating pressure. Classification Cartridges are classified by some major characteristics. One classification is the location of the primer: early cartridges began with the pinfire, then the rimfire, and finally the centerfire. Another classification describes how cartridges are located in the chamber (headspace). Rimmed cartridges are located by the rim at the cartridge head; the rim is also used to extract the cartridge from the chamber. Examples are the .22 Long Rifle and .303 British. In a rimless cartridge, the cartridge head diameter is about the same as or smaller than the body diameter. The head has a groove so the cartridge can be extracted from the chamber, and locating the cartridge in the chamber is accomplished by other means. Some rimless cartridges are necked down and are positioned by the cartridge's shoulder; an example is the .30-06 Springfield. Pistol cartridges may be located by the end of the brass case; an example is the .45 ACP. A belted cartridge has a larger-diameter band of thick metal near the head of the cartridge; an example is the .300 Weatherby Magnum. An extreme version of the rimless cartridge is the rebated case; guns employing advanced primer ignition need such a case because the case moves during firing (i.e., it is not located at a fixed position). An example is the 20×110mm RB. Centerfire A centerfire cartridge has a centrally located primer held within a recess in the case head. Most centerfire brass cases used worldwide for sporting ammunition use Boxer primers. It is easy to remove and replace Boxer primers using standard reloading tools, facilitating reuse. Some European- and Asian-manufactured military and sporting ammunition uses Berdan primers. Removing the spent primer from (decapping) these cases requires the use of a special tool, because the primer anvil (on which the primer compound is crushed) is an integral part of the case; the case therefore does not have a central hole through which a decapping tool can push the primer out from the inside, as is done with Boxer primers. In Berdan cases, the flash holes are located to the sides of the anvil. With the right tool and components, reloading Berdan-primed cases is perfectly feasible.
However, Berdan primers are not readily available in the U.S. Rimfire Rimfire priming was a popular solution before centerfire priming was perfected. In a rimfire case, the manufacturer spins the case at a high rate so that centrifugal force pushes a liquid priming compound into the internal recess of the folded rim, and then heats the spinning case to dry the priming compound in place within the hollow cavity formed by the rim fold at the perimeter of the case interior. In the mid-to-late 19th century, many rimfire cartridge designs existed. Today only a few, mostly for use in small-caliber guns, remain in general and widespread use. These include the .17 Mach II, .17 Hornady Magnum Rimfire (HMR), 5mm Remington Magnum (Rem Mag), .22 (BB, CB, Short, Long, Long Rifle), and .22 Winchester Magnum Rimfire (WMR). Compared to modern centerfire cases used in the strongest types of modern guns, existing rimfire cartridge designs use loads that generate relatively low chamber pressures because of limitations of feasible gun design: the rim has little or no lateral support from the gun, and such support would require very close tolerances in the design of the chamber, bolt, and firing pin. Because that is not a cost-effective approach, it is necessary to keep rimfire load pressure low enough that the stress generated by chamber pressure does not push the case rim outward and expand it significantly. Also, the wall of the folded rim must be both thin and ductile enough to deform easily, as necessary to allow the blow from the firing pin to crush the rim and thereby ignite the primer compound, and it must do so without rupturing the case. If the rim is too thick, it will be too resistant to deformation; if it is too hard, it will be too brittle and will crack, rather than deform. Modern centerfire cartridges are often loaded close to the maximum chamber pressure their designs allow; by contrast, no commercial rimfire cartridge has ever been loaded to comparably high pressure. However, with careful gun design and production, no fundamental reason exists that higher pressures could not be used. Despite their relatively low chamber pressure, modern rimfire magnums, commonly found in .17 caliber (4.5 mm), .20 caliber (5 mm), and .22 caliber (5.6 mm), can generate muzzle energies comparable to those of smaller centerfire cartridges (see the worked energy sketch at the end of this article). Today, .22 LR (.22 Long Rifle) accounts for the vast majority of all rimfire ammunition produced. Standard .22 LR rounds use an essentially pure lead bullet plated with a typical 95% copper, 5% zinc combination. These are offered in supersonic and subsonic types, as well as target shooting, plinking, and hunting versions. The bullets are usually coated with hard wax for fouling control. The .22 LR and related .22 rimfire cartridges use a heeled bullet, where the external diameter of the case is the same as the diameter of the forward portion of the bullet, and where the rearward portion of the bullet, which extends into the case, is necessarily smaller in diameter than the main body of the bullet. Semi-automatic vs. revolver cartridges Most revolver cartridges are rimmed at the base of the case, which seats against the edge of the cylinder chamber to provide headspace control (to keep the cartridge from moving too far forward into the chamber) and to facilitate easy extraction.
Nearly every centerfire semi-automatic pistol cartridge is "rimless", meaning the rim is of the same diameter as the case body but separated from it by a circumferential groove, into which the extractor hooks to engage the rim. A "semi-rimmed" cartridge is essentially a rimless one whose rim diameter is slightly larger than the case body, and a "rebated rimless" cartridge is one whose rim is smaller in diameter than the case body. All such cartridges headspace on the case mouth (although some, such as the .38 Super, at one time headspaced on the rim; this was changed for accuracy reasons), which prevents the round from entering too far into the chamber. Some cartridges have a rim that is significantly smaller than the case body diameter. These are known as rebated-rim designs, and they almost always exist to allow a handgun to fire cartridges of multiple calibers with only a barrel and magazine change. Projectile designs Shot: A shotgun shell loaded with multiple metallic "shot", which are small, generally spherical projectiles. Shotgun slug: A single solid projectile designed to be fired from a shotgun. Baton round: A generally non-lethal projectile fired from a riot gun. Bullets Armor-piercing (AP): A hard bullet made from steel or tungsten alloys in a pointed shape, typically covered by a thin layer of lead and/or a copper or brass jacket. The lead and jacket are intended to prevent barrel wear from the hard core materials. AP bullets are sometimes less effective on unarmored targets than FMJ bullets are; this has to do with the reduced tendency of AP projectiles to yaw (turn sideways after impact). Full metal jacket (FMJ): Made with a lead core surrounded by a full covering of brass, copper, or mild steel. These usually offer very little deformation or terminal-performance expansion, but will occasionally yaw (turn sideways). Despite the name, an FMJ bullet typically has an exposed lead base, which is not visible in an intact cartridge. Glaser safety slug: A copper jacket filled with bird shot and covered by a crimped polymer endcap. Upon impact with flesh, the projectile is supposed to fragment, with the birdshot spreading like a miniature shotgun pattern. Jacketed hollow point (JHP): Soon after the invention of the JSP, Woolwich Arsenal in Great Britain experimented with the design even further by forming a hole or cavity in the nose of the bullet while keeping most of the exterior profile intact. These bullets could theoretically deform even faster and expand to a larger diameter than the JSP. In personal defense use, concerns have arisen over whether clothing, especially heavy materials like denim, can clog the cavity of JHP bullets and cause expansion failures. Jacketed soft point (JSP): In the late 19th century, the Indian Army at Dum-Dum Arsenal, near Kolkata, developed a variation of the FMJ design in which the jacket did not cover the nose of the bullet. The soft lead nose was found to expand in flesh while the remaining jacket still prevented lead fouling in the barrel. The JSP roughly splits the difference between FMJ and JHP: it gives more penetration than the JHP but has better terminal ballistic characteristics than the FMJ. Round nose lead (RNL): An unjacketed lead bullet. Although largely supplanted by jacketed ammunition, this is still common for older revolver cartridges. Some hunters prefer round-nose ammunition for hunting in brush because they believe that such a bullet deflects less than sharp-nosed spitzer bullets, although this belief has been repeatedly shown to be false.
See American Rifleman magazine for tests of this belief. Flat nose lead (FNL): Similar to round nose lead, but with a flattened nose. Common in cowboy action shooting and plinking ammunition loads. Total metal jacket (TMJ): Featured in some Speer cartridges, the TMJ bullet has a lead core completely and seamlessly enclosed in brass, copper, or other jacket metal, including the base. According to Speer's literature, this prevents hot propellant gases from vaporizing lead from the base of the bullet, reducing lead emissions. Sellier & Bellot produce a similar version that they call TFMJ, with a separate end cap of jacket material. Wadcutter (WC): Similar to the FNL, but completely cylindrical, in some instances with a slight concavity in the nose. This bullet derives its name from its popularity for target shooting, because the form factor cuts neat holes in paper targets, making scoring easier and more accurate, and because it typically cuts a larger hole than a round-nose bullet, so a hit centered at the same spot can touch the next smaller ring and therefore score higher. Semi-wadcutter (SWC): Identical to the WC but with a smaller-diameter flat-pointed, conical, or radiused nose added. It has the same advantages for target shooters but is easier to load into the gun and works more reliably in semi-automatic guns. This design is also superior for some hunting applications. Truncated cone: Also known as round nose flat point, among other names; descriptive of typical modern commercial cast bullet designs. The Hague Convention of 1899 bans the use of expanding projectiles against the military forces of other nations. Some countries accept this as a blanket ban against the use of expanding projectiles against anyone, while others use JSP and HP rounds against non-military forces such as terrorists and criminals. Common cartridges Ammunition types are listed numerically. .22 Long Rifle (22 LR): A round that is often used for target shooting and the hunting of small game such as squirrels. Because of the small size of this round, the smallest self-defense handguns, chambered in .22 rimfire (though less effective than most centerfire handgun cartridges), can be concealed in situations where a handgun chambered for a centerfire cartridge could not. The .22 LR is the most commonly fired sporting arms cartridge, primarily because .22 LR ammunition is much less expensive than any centerfire ammunition, and because the recoil generated by the light .22 bullet at modest velocity is very mild. .22-250 Remington: A very popular round for medium- to long-range small game and varmint hunting, pest control, and target shooting. The .22-250 is one of the most popular rounds for fox hunting and other pest control in Western Europe due to its flat trajectory and very good accuracy on rabbit- to fox-sized pests. .300 Winchester Magnum: One of the most popular big game hunting rounds of all time. As a long-range sniping round, it is also favored by US Navy SEALs and the German Bundeswehr. While not in the same class as the .338 Lapua Magnum, it has roughly the same power as the 7 mm Remington Magnum and easily exceeds the performance of the 7.62×51 mm NATO. .30-06 Springfield (7.62×63 mm): The standard US Army rifle round for the first half of the 20th century. It is a full-power rifle round suitable for hunting most North American game and most big game worldwide. .303 British: The standard British Empire military rifle cartridge from 1888 to 1954. .308 Winchester: The commercial name of a centerfire cartridge based on the military 7.62×51 mm NATO round.
Two years prior to the NATO adoption of the 7.62×51 mm NATO T65 in 1954, Winchester (a subsidiary of the Olin Corporation) branded the round and introduced it to the commercial hunting market as the .308 Winchester. The Winchester Model 70 and Model 88 rifles were subsequently chambered for this round. Since then, the .308 Winchester has become the most popular short-action big-game hunting round worldwide. It is also commonly used for civilian and military target events, military sniping, and police sharpshooting. .357 Magnum: Uses a lengthened version of the .38 Special case, loaded to about twice the maximum chamber pressure of the .38 Special; it was rapidly accepted for use by hunters and law enforcement officers. At the time of its introduction, .357 Magnum bullets were claimed to easily pierce the steel body panels of automobiles and crack engine blocks (to disable the vehicle). .375 Holland & Holland Magnum: Designed for hunting African big game in the early 20th century, and legislated as the minimum-diameter rifle caliber for African big game hunting during the mid-20th century. .40 S&W: A shorter-cased version of the 10mm Auto. .44 Magnum: A high-powered pistol round designed primarily for hunting. .45 ACP: The standard US pistol round for about 75 years. Typical .45 ACP loads are subsonic. .45 Colt: A more powerful .45-caliber revolver round using a longer cartridge. The .45 Colt was designed for the Colt Single Action Army and adopted by the US Army in 1873. Other .45-caliber single-action and double-action revolvers also use this round. .45-70 Government: Adopted by the US Army in 1873 as their standard service rifle cartridge for the Springfield Model 1873 rifle. Most commercial loadings of this cartridge are constrained by the possibility that someone might attempt to fire a modern loading in a vintage rifle or replica. However, current production rifles from Marlin, Ruger, and Browning can accept loads that generate nearly twice the pressure generated by the original black powder cartridges. .50 BMG (12.7×99 mm NATO): Originally designed to destroy aircraft in the First World War, this round still serves as an anti-materiel round against light armor. It is used in heavy machine guns and high-powered sniper rifles. Such rifles can be used, amongst other things, for destroying military matériel such as sensitive parts of grounded aircraft and armored transports. Civilian shooters use these for long-distance target shooting. 5.45×39 mm Soviet: The Soviet counterpart to the 5.56×45 mm NATO round. 5.56×45 mm NATO: Adopted by the US military in the 1960s, it later became the NATO standard rifle round in the early 1980s, displacing the 7.62×51 mm. Remington later introduced this military round commercially as the .223 Remington, a very popular round for small game hunting. 7×64 mm: One of the most popular long-range game hunting rounds in Europe, especially in countries such as France and (formerly) Belgium where the possession of firearms chambered for a (former) military round is forbidden or more heavily restricted. This round is offered by European rifle makers in bolt-action rifles, and a rimmed version, the 7×65mmR, is chambered in double and combination rifles. Another reason for its popularity is its flat trajectory, very good penetration, and high versatility, depending on what bullet and load are used.
Combined with the large choice of 7 mm bullets available, the 7×64mm is used on everything from fox and geese to red deer, Scandinavian moose, and European brown bear, roughly equivalent in game class to the North American black bear. The 7×64mm essentially duplicates the performance of the .270 Winchester and .280 Remington. 7 mm Remington Magnum: A long-range hunting round. 7.62×39mm: The standard Soviet/ComBloc rifle round from the mid-1940s to the mid-1970s, this is easily one of the most widely distributed rounds in the world due to the distribution of the ubiquitous Kalashnikov AK-47 series. 7.62×51mm NATO: Adopted as the standard NATO rifle round in the 1950s, but its recoil and weight proved problematic for the new battle rifle designs such as the FN FAL, and it was eventually displaced by the 5.56×45mm. It is currently the standard NATO sniper rifle and medium machine gun chambering, and it was standardized commercially as the .308 Winchester. 7.62×54mmR: The standard Russian rifle round from the 1890s to the mid-1940s; the "R" stands for rimmed. The 7.62×54mmR is a Russian design dating back to 1891. Originally designed for the Mosin–Nagant rifle, it was used during the late Tsarist era and throughout the Soviet period, in machine guns and rifles such as the SVT-40. The Winchester Model 1895 was also chambered for this cartridge under contract with the Russian government. It is still in use by the Russian military in the Dragunov and other sniper rifles and some machine guns. The round is colloquially known as the "7.62 Russian"; this name sometimes causes people to confuse it with the "7.62 Soviet" round, which refers to the 7.62×39 round used in the SKS and AK-47 rifles. 7.65×17mm Browning SR (32 ACP): A very small pistol round, yet the predominant police service round in Europe until the mid-1970s. The "SR" stands for semi-rimmed, meaning the case rim is slightly larger than the case body diameter. 8×57mm IS: The standard German service rifle round from 1888 to 1945, the 8×57mm IS (also known as the 8 mm Mauser) has seen wide distribution around the globe through commercial, surplus, and military sales, and is still a popular and commonly used hunting round in most of Europe, partly because of the abundance of affordable hunting rifles in this chambering as well as the broad availability of different hunting, target, and military surplus ammunition. 9×19mm Parabellum: Invented for the German military at the turn of the 20th century, the 9×19mm Parabellum achieved such wide distribution that it was the logical choice for the NATO standard pistol and submachine gun round. 9.3×62mm: A very common big game hunting round in Scandinavia along with the 6.5×55mm, where it is used as a very versatile hunting round on anything from small and medium game, with lightweight cast lead bullets, to the largest European big game, with heavy soft point hunting bullets. The 9.3×62mm is also very popular in the rest of Europe for big game, especially driven big game hunts, due to its effective stopping power on running game. It is also the only round smaller than the .375 H&H Magnum that has routinely been allowed for legal hunting of dangerous African species. 12.7×108mm: The 12.7×108mm round is a heavy machine gun and anti-materiel rifle round used by the Soviet Union, the former Warsaw Pact, modern Russia, and other countries. It is the approximate Russian equivalent of the NATO .50 BMG (12.7×99mm NATO) round.
The differences between the two are the bullet shape and the types of powder used, and the case of the 12.7×108mm is 9 mm longer, making it marginally more powerful. 14.5×114mm: The 14.5×114mm is a heavy machine gun and anti-materiel rifle round used by the Soviet Union, the former Warsaw Pact, modern Russia, and other countries. Its most common use is in the KPV heavy machine gun found on several Russian military vehicles. Snake shot Snake shot (also known as bird shot, rat shot, and dust shot) refers to handgun and rifle rounds loaded with small lead shot. Snake shot is generally used for shooting at snakes, rodents, birds, and other pests at very close range. The most common snake shot cartridge is .22 Long Rifle loaded with No. 12 shot. Fired from a standard rifled barrel, these produce effective patterns only at very short range, though a smoothbore shotgun extends the effective range considerably. Caseless ammunition Many governments and companies continue to develop caseless ammunition (where the entire case assembly is either consumed when the round fires, or whatever remains is ejected with the bullet). So far, none has been successful enough to reach the civilian market and gain commercial success, and even within the military market, use is limited. Around 1848, Sharps introduced a rifle and paper-cartridge system (the cartridge containing everything but the primer). When new, these guns had significant gas leaks at the chamber end, and with use these leaks progressively worsened; this problem plagues caseless cartridges and gun systems to this day. The Daisy Heddon VL Single Shot Rifle, which used a caseless round in .22 caliber, was produced by the air gun company Daisy beginning in 1968. Apparently, Daisy never considered the gun an actual firearm. In 1969, the ATF ruled that it was in fact a firearm, which Daisy was not licensed to produce, and production of the guns and the ammunition was discontinued that year. They are still available on the secondary market, mainly as collector items, as most owners report that accuracy is not very good. In 1989, Heckler & Koch, a prominent German firearms manufacturer, began advertising the G11 assault rifle, which fired a 4.73×33mm square-section caseless round. The round was mechanically fired, with an integral primer. In 1993, Voere of Austria began selling a gun and caseless ammunition. Their system used a primer electronically fired at 17.5 ± 2 volts; the upper and lower limits prevent firing from either stray currents or static electricity. The direct electrical firing eliminates the mechanical delays associated with a striker, reducing lock time and allowing for easier adjustment of the rifle trigger. In both instances, the "case" was molded directly from solid nitrocellulose, which is itself relatively strong and inert; the bullet and primer were glued into the propellant block. Trounds The "Tround" ("Triangular Round") was a unique type of cartridge designed in 1958 by David Dardick, for use in the specially designed Dardick 1100 and Dardick 1500 open-chamber firearms. As their name suggests, Trounds were triangular in cross-section and were made of plastic or aluminum, with the cartridge completely encasing the powder and projectile. The Tround design was also produced as a cartridge adaptor, to allow conventional .38 Special and .22 Long Rifle cartridges to be used with the Dardick firearms. Eco-friendly cartridges These are meant to prevent pollution and are mostly biodegradable (metals being the exception) or, in some designs, fully biodegradable. They are also meant to be usable in older guns.
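The voltage window described for the Voere system above amounts to a simple range check. A minimal sketch of that logic, assuming only the 17.5 ± 2 volt figures quoted in the text (the function and example values are illustrative, not taken from Voere documentation):

```python
def within_firing_window(voltage, nominal=17.5, tolerance=2.0):
    """Return True only when the voltage lies in the design window (17.5 ± 2 V).

    Voltages below the window (e.g. stray currents) or above it
    (e.g. a static discharge spike) are rejected, as described above.
    """
    return (nominal - tolerance) <= voltage <= (nominal + tolerance)

print(within_firing_window(17.5))   # True: nominal firing voltage
print(within_firing_window(1.5))    # False: stray current
print(within_firing_window(300.0))  # False: static discharge spike
```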
Blank ammunition A blank is a charged cartridge that does not contain a projectile, or that alternatively uses a non-metallic (for instance, wooden) projectile that pulverizes when it hits a blank-firing adapter. To contain the propellant, the opening where the projectile would normally be located is crimped shut, and/or it is sealed with some material that disperses rapidly upon leaving the barrel. This sealing material can still potentially cause harm at extremely close range. Actor Jon-Erik Hexum died when he shot himself in the head with a blank, and actor Brandon Lee was famously killed during the filming of The Crow when a blank was fired behind a bullet that was stuck in the bore, driving that bullet through his abdomen and into his spine. The gun had not been properly deactivated: a primed case carrying a real bullet instead of a dummy had been used previously, and when the trigger was pulled, the primer alone drove the bullet silently into the bore. Blanks are used in training, but do not always cause a gun to behave the same as live ammunition does; recoil is always far weaker, and some automatic guns only cycle correctly when fitted with a blank-firing adaptor to confine gas pressure within the barrel and operate the gas system. Blanks can also be used to launch a rifle grenade, although later systems used a "bullet trap" design that captures the bullet of a conventional round, speeding deployment. This also negates the risk of mistakenly firing a live bullet into the rifle grenade, which would cause it to explode instantly instead of propelling it forward. Blanks are also used in dedicated launchers for propelling a grappling hook, rope line, or flare, and for launching training lures for gun dogs. The power loads used in a variety of nail guns are essentially rimfire blanks. Dummy rounds Drill rounds are inert versions of cartridges used for education and practice during military training. Other than the lack of propellant and primer, these are the same size as normal cartridges and will fit into the mechanism of a gun in the same way as a live cartridge does. Because dry-firing (releasing the firing pin with an empty chamber) a gun can sometimes lead to firing pin (striker) damage, dummy rounds termed snap caps are designed to protect centerfire guns from possible damage during "dry-fire" trigger control practice. To distinguish drill rounds and snap caps from live rounds, these are marked distinctively. Several forms of marking are used, e.g., setting colored flutes in the case, drilling holes through the case, coloring the bullet or cartridge, or a combination of these. In the case of centerfire drill rounds, the primer is often absent, with its mounting hole in the base left open. Because these are mechanically identical to live rounds, which are intended to be loaded once, fired, and then discarded, drill rounds tend to become significantly worn and damaged with repeated passage through magazines and firing mechanisms, and must be frequently inspected to ensure that they are not so degraded as to be unusable. For example, the cases can become torn or misshapen and snag on moving parts, or the bullet can become separated and stay in the breech when the case is ejected. ECI (Empty chamber indicator) The brightly colored ECI is an inert cartridge base designed to prevent a live round from being unintentionally chambered, to reduce the chances of an accidental discharge from mechanical or operator failure.
An L-shaped flag is visible from the outside, so that the shooter and other people concerned are instantly aware of the weapon's status. The ECI is usually tethered to its weapon by a short string and can be quickly ejected to make way for a live round if the situation suddenly warrants it. This safety device is standard issue in the Israel Defense Forces, where it is known as a "Mek-Porek". Snap cap A snap cap is a device shaped like a standard cartridge but containing no primer, propellant, or projectile. It is used to ensure that dry-firing firearms of certain designs does not cause damage. A small number of rimfire and centerfire firearms of older design should not be test-fired with the chamber empty, as this can lead to weakening or breakage of the firing pin and increased wear to other components in those firearms. In a rimfire weapon of primitive design, dry-firing can also cause deformation of the chamber edge. For this reason, some shooters use a snap cap to cushion the weapon's firing pin as it moves forward. Some snap caps contain a spring-dampened false primer, or one made of plastic, or none at all; the spring or plastic absorbs force from the firing pin, allowing the user to safely test the function of the firearm action without damaging its components. Snap caps and action-proving dummy rounds also work as a training tool to replace live rounds for loading and unloading drills, as well as training for misfires or other malfunctions, as they function identically to a live "dud" round that has not ignited. Usually, one snap cap is usable for 300 to 400 clicks; after that, a hole forms in the false primer and the firing pin no longer reaches it.
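As a closing note on the energy comparisons made in the rimfire section above: muzzle energy follows directly from the kinetic-energy formula E = ½mv². Below is a minimal sketch in customary ballistic units (bullet weight in grains, velocity in feet per second); the example figures are rough illustrations of typical rimfire ballistics, not published load data:

```python
GRAINS_PER_POUND = 7000.0
STANDARD_GRAVITY = 32.174  # ft/s^2; converts pounds of weight to slugs of mass

def muzzle_energy_ftlbf(bullet_grains, velocity_fps):
    """Kinetic energy E = 1/2 m v^2, returned in foot-pounds force."""
    mass_slugs = bullet_grains / GRAINS_PER_POUND / STANDARD_GRAVITY
    return 0.5 * mass_slugs * velocity_fps ** 2

# Rough comparison of a rimfire magnum and a high-velocity .22 LR class load:
print(round(muzzle_energy_ftlbf(17, 2550)))  # ~245 ft·lbf, .17 HMR class
print(round(muzzle_energy_ftlbf(40, 1200)))  # ~128 ft·lbf, .22 LR class
```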
Technology
Ammunition
null
79308
https://en.wikipedia.org/wiki/Skunk
Skunk
Skunks are mammals in the family Mephitidae. They are known for their ability to spray a liquid with a strong, unpleasant scent from their anal glands. Different species of skunk vary in appearance from black-and-white to brown, cream or ginger colored, but all have warning coloration. While related to polecats and other members of the weasel family, skunks have as their closest relatives the Old World stink badgers. Taxonomy In alphabetical order, the living species of skunks are: Family Mephitidae Genus: Conepatus Conepatus chinga – Molina's hog-nosed skunk Conepatus humboldtii – Humboldt's hog-nosed skunk Conepatus leuconotus – American hog-nosed skunk Conepatus semistriatus – striped hog-nosed skunk Genus: Mephitis Mephitis macroura – hooded skunk Mephitis mephitis – striped skunk Genus: Spilogale Spilogale angustifrons – southern spotted skunk Spilogale gracilis – western spotted skunk Spilogale putorius – eastern spotted skunk Spilogale pygmaea – pygmy spotted skunk Terminology The word skunk dates from the 1630s, adapted from a word in a southern New England Algonquian language (probably Abenaki), itself from a Proto-Algonquian compound of elements meaning 'to urinate' and 'fox'. Skunk has historic use as an insult, attested from 1841. In 1634, a skunk was described in The Jesuit Relations. In Southern United States dialect, the term polecat is sometimes used as a colloquial nickname for a skunk, even though polecats are only distantly related to skunks. As a verb, skunk is used to describe the act of overwhelmingly defeating an opponent in a game or competition. Skunk is also used to refer to certain strong-smelling strains of Cannabis whose smell has been compared to that of a skunk's spray. Description Skunk species vary considerably in size, from the smaller spotted skunks to the much heavier hog-nosed skunks. They have moderately elongated bodies with relatively short, well-muscled legs and long front claws for digging. They have five toes on each foot. Although the most common fur color is black and white, some skunks are brown or grey and a few are cream-colored. All skunks are striped, even from birth. They may have a single thick stripe across the back and tail, two thinner stripes, or a series of white spots and broken stripes (in the case of the spotted skunk). Behavior Skunks are crepuscular and solitary animals when not breeding, though in the colder parts of their range, they may gather in communal dens for warmth. During the day they shelter in burrows, which they can dig with their powerful front claws. For most of the year skunks keep to a fairly small home range, with males ranging more widely during the breeding season and travelling farther each night. Skunks are not true hibernators in the winter, but do den up for extended periods of time. However, they remain generally inactive and feed rarely, going through a dormant stage. Over winter, multiple females (as many as 12) huddle together; males often den alone. Often, the same winter den is repeatedly used. Although they have excellent senses of smell and hearing, they have poor vision, being unable to see objects more than a few metres away, making them vulnerable to death by road traffic. They are short-lived; their lifespan in the wild can reach seven years, with an average of six years. In captivity, they may live for up to 10 years. Reproduction Skunks mate in early spring and are polygynous (that is, successful males are uninhibited from mating with additional females).
Before giving birth (usually in May), the female excavates a den to house her litter of four to seven kits. Skunks are placental, with a gestation period of about 66 days. When born, skunk kits are blind and deaf, but already covered by a soft layer of fur. About three weeks after birth, they first open their eyes; the kits are weaned about two months after birth. They generally stay with their mother until they are ready to mate, at roughly one year of age. The mother is protective of her kits, spraying at any sign of danger. The male plays no part in raising the young. Diet Skunks are omnivorous, eating both plant and animal material and changing their diets as the seasons change. They eat insects, larvae, earthworms, grubs, rodents, lizards, salamanders, frogs, snakes, birds, moles, and eggs. They also commonly eat berries, roots, leaves, grasses, fungi and nuts. In settled areas, skunks also seek garbage left by humans. Less often, skunks may be found acting as scavengers, eating bird and rodent carcasses left by cats or other animals. Pet owners, particularly those of cats, may experience a skunk finding its way into a garage or basement where pet food is kept. Skunks commonly dig holes in lawns in search of grubs and worms. Skunks use their long claws to break apart rotting logs to find the insects that live within them. They also use those claws to help dig for insects, leaving behind small pits that are telltale signs of foraging. The claws also help with pinning down live and active prey. Skunks are one of the primary predators of the honeybee, relying on their thick fur to protect them from stings. The skunk scratches at the front of the beehive and eats the guard bees that come out to investigate. Mother skunks are known to teach this behavior to their young. Spray Skunks are notorious for their anal scent glands, which they can use as a defensive weapon. They are similar to, though much more developed than, the glands found in species of the family Mustelidae. Skunks have two glands, one on each side of the anus. These glands produce the skunk's spray, which is a mixture of sulfur-containing chemicals such as thiols (traditionally called mercaptans), which have an offensive odor. The thiols also make the spray highly flammable. A skunk's spray is powerful enough to ward off bears and other potential attackers. Muscles located next to the scent glands allow them to spray with a high degree of accuracy at a distance of several metres. The spray can also cause irritation and even temporary blindness, and is sufficiently powerful to be detected by a human nose up to 5.6 km (3.5 miles) downwind. Their chemical defense is effective, as Charles Darwin attested in his 1839 book The Voyage of the Beagle. Skunks carry just enough of the chemical for five or six successive sprays (about 15 cm³) and require up to ten days to produce another supply. Their bold black and white coloration makes their appearance memorable. It is to a skunk's advantage to warn possible predators off without expending scent: black and white aposematic warning coloration aside, threatened skunks will go through an elaborate routine of hisses, foot-stamping, and tail-high deimatic or threat postures before resorting to spraying. Skunks usually do not spray other skunks, except among males in the mating season; if they fight over den space in autumn, they do so with teeth and claws. Most predators of the Americas, such as wolves, foxes, and badgers, seldom attack skunks, presumably out of fear of being sprayed.
The exceptions are reckless predators whose attacks fail once they are sprayed, dogs, and the great horned owl, which is the skunk's only regular predator. In one case, the remains of 57 striped skunks were found in a single great horned owl nest. Mitigation Skunks are common in suburban areas, and domestic dogs are often sprayed by skunks. There are many misconceptions about the removal of skunk odor, including the pervasive idea that tomato juice will neutralize the odor. Such household remedies are ineffective and only appear to work due to olfactory fatigue. In 1993, the American chemist Paul Krebaum developed a formula that chemically neutralizes skunk spray by changing the odor-causing thiols into odorless acids; it is endorsed by the Humane Society of the United States for treating sprayed dogs. It involves hydrogen peroxide, baking soda, and liquid dish soap. Skunk spray is composed mainly of three low-molecular-weight thiol compounds, (E)-2-butene-1-thiol, 3-methyl-1-butanethiol, and 2-quinolinemethanethiol, as well as acetate thioesters of these. These compounds are detectable by the human nose at concentrations of only 11.3 parts per billion. Relations with humans Bites It is rare for a healthy skunk to bite a human, though a tame skunk whose scent glands have been removed (usually on behalf of those who will keep it as a pet) may defend itself by biting. Even so, there are few recorded incidents of skunks biting humans. Skunk bites in humans can result in infection with the rabies virus. The Centers for Disease Control (CDC) recorded 1,494 cases of rabies in skunks in the United States for the year 2006, about 21.5% of reported cases in all species. Skunks are in fact less prominent than raccoons as vectors of rabies, although this varies regionally in the United States, with raccoons dominating along the Atlantic coast and the eastern Gulf of Mexico, while skunks predominate throughout the Midwest, including the western Gulf, and in California. As pets Mephitis mephitis, the striped skunk, is the most social skunk and the one most commonly kept as a pet. In the US, skunks can legally be kept as pets in 17 states. When a skunk is kept as a pet, its scent glands are often surgically removed. In the UK, skunks can be kept as pets, but the Animal Welfare Act 2006 made it illegal to remove their scent glands.
Biology and health sciences
Carnivora
null
79327
https://en.wikipedia.org/wiki/Atlantic%20canary
Atlantic canary
The Atlantic canary (Serinus canaria), known worldwide simply as the wild canary and also called the island canary, common canary, or canary, is a small passerine bird belonging to the genus Serinus in the true finch family, Fringillidae. It is native to the Canary Islands, the Azores, and Madeira. It has two subspecies: the wild or common canary (Serinus canaria canaria) and the domestic canary (Serinus canaria domestica). Wild birds are mostly yellow-green, with brownish streaking on the back. The species is common in captivity and a number of colour varieties have been bred. This bird is the natural symbol of the Canary Islands, together with the Canary Island date palm. Description The Atlantic canary is a small finch. The male has a largely yellow-green head and underparts with a yellower forehead, face and supercilium. The lower belly and undertail-coverts are whitish and there are some dark streaks on the sides. The upperparts are grey-green with dark streaks and the rump is dull yellow. The female is similar to the male but duller, with a greyer head and breast and less yellow in the underparts. Juvenile birds are largely brown with dark streaks. It is about 10% larger, longer, and less contrasted than its relative the European serin, and has more grey and brown in its plumage and relatively shorter wings. The song is a silvery twittering similar to the songs of the European serin and citril finch. Taxonomy The Atlantic canary was classified by Linnaeus in 1758 in his Systema Naturae. Linnaeus originally classified it as a subspecies of the European serin and assigned both to the genus Fringilla. Decades later, Cuvier reclassified them into the genus Serinus, and there they have remained. The Atlantic canary's closest relative is the European serin, and the two can produce on average 25% fertile hybrids if crossed. Etymology The bird is named after the Canary Islands, not the other way around. The islands' name is derived from the Latin name canariae insulae ("islands of dogs") used by Arnobius, referring to the large dogs kept by the inhabitants of the islands. A legend of the islands, however, states that it was the conquistadors who named the islands after a fierce tribe inhabiting the largest island of the group, known as the 'Canarii'. The colour canary yellow is in turn named after the yellow domestic canary, produced by a mutation which suppressed the melanins of the original dull greenish wild Atlantic canary colour. Distribution and habitat It is endemic to the Canary Islands, the Azores, and Madeira in the region known as Macaronesia in the eastern Atlantic Ocean. In the Canary Islands, it is common on Tenerife, La Gomera, La Palma and El Hierro, but more local on Gran Canaria, and rare on Lanzarote and Fuerteventura, where it has only recently begun breeding. It is common in Madeira including Porto Santo and the Desertas Islands, and has been recorded on the Savage Islands. In the Azores, it is common on all islands. The population has been estimated at 80,000-90,000 pairs in the Canary Islands, 30,000-60,000 pairs in the Azores and 4,000-5,000 pairs in Madeira. It occurs in a wide variety of habitats from pine and laurel forests to sand dunes. It is most common in semi-open areas with small trees, such as orchards and copses, and frequently occurs in man-made habitats such as parks and gardens.
It is found from sea level up to at least 760 m in Madeira, 1,100 m in the Azores, and above 1,500 m in the Canary Islands. It has become established on Midway Atoll in the northwest Hawaiian Islands, where it was first introduced in 1911. It was also introduced to neighbouring Kure Atoll, but failed to become established there. Birds were introduced to Bermuda in 1930 and quickly started breeding, but they began to decline in the 1940s after scale insects devastated the population of Bermuda cedar, and by the 1960s they had died out. The species also occurs in Puerto Rico, but is not yet established there. They are also found on Ascension Island. Behavior Reproduction It is a gregarious bird which often nests in groups, with each pair defending a small territory. The cup-shaped nest is built 1–6 m above the ground in a tree or bush, most commonly at 3–4 m. It is well hidden amongst leaves, often at the end of a branch or in a fork. It is made of twigs, grass, moss and other plant material and lined with soft material including hair and feathers. The eggs are laid between January and July in the Canary Islands, from March to June with a peak in April and May in Madeira, and from March to July with a peak in May and June in the Azores. They are pale blue or blue-green with violet or reddish markings concentrated at the broad end. A clutch contains 3 to 4, occasionally 5, eggs, and 2–3 broods are raised each year. The eggs are incubated for 13–14 days and the young birds leave the nest after 14–21 days, most commonly after 15–17 days. Inbreeding depression occurs in S. canaria and is more severe during early development under the stressful conditions associated with hatching asynchrony. Hatching asynchrony leads to differences in age, and thus in size, so that the environment of the first-hatched chick is relatively benign compared to that of the last-hatched. Feeding It typically feeds in flocks, foraging on the ground or amongst low vegetation. It mainly feeds on seeds such as those of weeds, grasses and figs. It also feeds on other plant material and small insects.
Biology and health sciences
Passerida
Animals
79449
https://en.wikipedia.org/wiki/Multiple%20birth
Multiple birth
A multiple birth is the culmination of one multiple pregnancy, wherein the mother gives birth to two or more babies. A term most applicable to vertebrate species, multiple births occur in most kinds of mammals, with varying frequencies. Such births are often named according to the number of offspring, as in twins and triplets. In non-humans, the whole group may also be referred to as a litter, and multiple births may be more common than single births. Multiple births in humans are the exception and can be exceptionally rare in the largest mammals. A multiple pregnancy may be the result of the fertilization of a single egg that then splits to create identical fetuses, or it may be the result of the fertilization of multiple eggs that create fraternal ("non-identical") fetuses, or it may be a combination of these factors. A multiple pregnancy from a single zygote is called monozygotic, from two zygotes dizygotic, and from three or more zygotes polyzygotic. Similarly, the siblings themselves from a multiple birth may be referred to as monozygotic if they are identical, or as dizygotic (in the case of twins) or polyzygotic (for three or more siblings) if they are fraternal, i.e., non-identical. Each fertilized ovum (zygote) may produce a single embryo, or it may split into two or more embryos, each carrying the same genetic material. Fetuses resulting from different zygotes are called fraternal and share on average only 50% of their genetic material, as ordinary full siblings from separate births do. Fetuses resulting from the same zygote share 100% of their genetic material and hence are called identical. Identical twins are always the same sex. Terminology Terms used for the number of offspring in a multiple birth, where a number higher than three ends with the suffix -uplet: two offspring – twins three offspring – triplets four offspring – quadruplets five offspring – quintuplets six offspring – sextuplets seven offspring – septuplets eight offspring – octuplets nine offspring – nonuplets Terms used for multiple births or the genetic relationships of their offspring are based on the zygosity of the pregnancy: Monozygotic – multiple (typically two) fetuses produced by the splitting of a single zygote Polyzygotic – multiple fetuses produced by two or more zygotes: Dizygotic – multiple (typically two) fetuses produced by two zygotes Trizygotic – three or more fetuses produced by three zygotes Sesquizygotic – a single egg fertilized by two sperm, producing two fetuses Multiple pregnancies are also classified by how the fetuses are surrounded by one or more placentas (chorionicity) and amniotic sacs (amnionicity). Human multiple births In humans, the average length of pregnancy (two weeks less than gestational age) is 38 weeks with a single fetus. This average decreases for each additional fetus: to 36 weeks for twin births, 32 weeks for triplets, and 30 weeks for quadruplets. With the decreasing gestation time, the risks from immaturity at birth and subsequent viability increase with the size of the sibling group. Only as of the twentieth century have sets of more than four all survived infancy. Recent history has also seen increasing numbers of multiple births. In the United States, it has been estimated that by 2011, 36% of twin births, and 78% of triplet and higher-order births, resulted from conception by assisted reproductive technology. Twins Twins are by far the most common form of multiple births in humans. The U.S.
Centers for Disease Control and Prevention report more than 132,000 sets of twins out of 3.9 million births of all kinds each year, about 3.4%, or 1 in 30. Twin births account for about 97% of all multiple births in the US. Without fertility treatments, the probability is about 1 in 60; with fertility treatments, it can be as high as 20–25%. Dizygotic (fraternal) twins can be caused by a hyperovulation gene in the mother. Although the father's genes do not influence the woman's chances of having twins, he could influence his children's chances of having twins by passing on a copy of the hyperovulation gene to them. Monozygotic (identical) twins do not run in families. The twinning is random, due to the egg splitting, so all parents have an equal chance of conceiving identical twins.
Triplets
Triplets can be fraternal, identical, or a combination of both. The most common are strictly fraternal triplets, which come from a polyzygotic pregnancy of three eggs. Less common are triplets from a dizygotic pregnancy, where one zygote divides into two identical fetuses and the other does not. Least common are identical triplets, three fetuses from one egg. In this case, sometimes the original zygote divides into two and then one of those two zygotes divides again while the other does not, or the original zygote divides into three. Triplets are far less common than twins, according to the U.S. Centers for Disease Control and Prevention, accounting for only about 4,300 sets in 3.9 million births, just a little more than 0.1%, or about 1 in 1,000. According to the American Society of Reproductive Medicine, only about 10% of these are identical triplets: about 1 in 10,000. Nevertheless, only 4 sets of identical triplets were reported in the U.S. during 2015, about one in a million. According to Victor Khouzami, Chairman of Obstetrics at Greater Baltimore Medical Center, "No one really knows the incidence". Identical triplets or quadruplets are very rare and result when the original fertilized egg splits and then one of the resultant cells splits again (for triplets) or, even more rarely, a further split occurs (for quadruplets). The odds of having identical triplets are unclear; news articles and other non-scientific sources give odds from one in 60,000 to one in 200 million pregnancies.
Quadruplets
Quadruplets are much rarer than twins or triplets. As of 2007, approximately 3,556 sets had been recorded worldwide. Quadruplet births are becoming increasingly common due to fertility treatments. There are around 70 sets of all-identical quadruplets worldwide. Many sets of quadruplets contain a mixture of identical and fraternal siblings, such as three identical and one fraternal, two identical and two fraternal, or two pairs of identicals. One famous set of identical quadruplets was the Genain quadruplets, all of whom developed schizophrenia. Quadruplets are sometimes referred to as "quads" in Britain.
Quintuplets
Quintuplets occur naturally in 1 in 55,000,000 births. The first quintuplets known to survive infancy were the identical female Canadian Dionne quintuplets, born in 1934. Quintuplets are sometimes referred to as "quins" in the UK and "quints" in North America. A famous set of all-girl quintuplets is the Busby quints from the TV series OutDaughtered.
Sextuplets
Born in Liverpool, England, on November 18, 1983, the Walton sextuplets were the world's first all-female surviving sextuplets, and the world's fourth known set of surviving sextuplets.
Another well-known set of sextuplets is the Gosselin sextuplets, born on May 10, 2004, in Hershey, Pennsylvania. The reality television shows Jon & Kate Plus 8 and later Kate Plus 8 have chronicled the lives of these sextuplets. Other shows of this nature include Table for 12 and Sweet Home Sextuplets.
Very high-order multiple births
In 1997, the McCaughey septuplets, born in Des Moines, Iowa, became the first septuplets known to survive infancy. The first surviving set of octuplets on record are the Suleman octuplets, born in 2009 in Bellflower, California. In 2019, all 8 children celebrated their 10th birthday. Multiple births of as many as 9 babies have been born alive: in May 2021, the Cissé nonuplets were born in Morocco to Halima Cissé, a 25-year-old woman from Mali. Two years after their births, all 9 were reportedly still living and in good health. The list of multiple births covers notable examples.
Causes and frequency
The frequency of N multiple births from natural pregnancies has been given as approximately 1:89^(N−1) (Hellin's law) and as about 1:80^(N−1). This gives (see the short calculation at the end of this section):
1:89 (= 1.1%) or 1:80 (= 1.25%) for twins
1:89² (= 1:7,921, about 0.013%) or 1:80² (= 1:6,400) for triplets
1:89³ (= approx. 0.000142%, less than 1:700,000) or 1:80³ for quadruplets
North American dizygotic twinning occurs about once in 83 conceptions, and triplets about once in 8,000 conceptions. US figures for 2010 were:
Twins – 132,562 (3.31%)
Triplets – 5,503 (0.14%)
Quadruplets – 313 (0.0078%)
Quintuplets and more – 37 (0.00092%)
Human multiple births can occur either naturally (the woman ovulates multiple eggs or the fertilized egg splits into two) or as the result of infertility treatments such as in vitro fertilization (several embryos are often transferred to compensate for lower quality) or fertility drugs (which can cause multiple eggs to mature in one ovulatory cycle). For reasons that are not yet known, the older a woman is, the more likely she is to have a multiple birth naturally. It is theorized that this is due to the higher level of follicle-stimulating hormone (FSH) that older women sometimes have as their ovaries respond more slowly to FSH stimulation. The number of multiple births has increased since the late 1970s. For example, in Canada between 1979 and 1999, the number of multiple-birth babies increased 35%. Before the advent of ovulation-stimulating drugs, triplets were quite rare (approximately 1 in 8,000 births) and higher-order births much rarer still. Much of the increase can probably be attributed to the impact of fertility treatments, such as in-vitro fertilization. Younger patients who undergo treatment with fertility medication containing artificial FSH, followed by intrauterine insemination, are particularly at risk for multiple births of higher order. Certain factors appear to increase the likelihood that a woman will naturally conceive multiples. These include:
mother's age: women over 35 are more likely to have multiples than younger women
mother's use of fertility drugs: approximately 35% of pregnancies arising through the use of fertility treatments such as IVF involve more than one child
Women conceiving multiples over the age of 35 have an increased risk of fetuses with certain conditions and complications that are not as common in younger pregnant women. The increasing use of fertility drugs and the consequent increased rate of multiple births have made the phenomenon of multiples more frequent and hence more visible.
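The arithmetic behind Hellin's law is easy to check. Below is a minimal sketch in Python assuming the 1:89^(N−1) form quoted above; the function name and the choice of base are illustrative, not from the source.

```python
def hellin_frequency(n, base=89):
    """Approximate fraction of natural pregnancies that are n-fold
    multiple births under Hellin's law (1 in base**(n - 1))."""
    return 1 / base ** (n - 1)

for n, name in [(2, "twins"), (3, "triplets"), (4, "quadruplets")]:
    f = hellin_frequency(n)
    # e.g. "twins: about 1 in 89 (1.123596%)"
    print(f"{name}: about 1 in {round(1 / f):,} ({f:.6%})")
```

Comparing these predictions with the 2010 US figures above (twins observed at 3.31% versus a predicted 1.1%) illustrates how far fertility treatments have pushed observed rates beyond the natural baseline.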
In 2004 the birth of sextuplets to Pennsylvania couple Kate and Jon Gosselin helped them to launch their television series, originally Jon & Kate Plus 8 and (following their divorce) Kate Plus 8, which became the highest-rated show on the TLC network.
Risks
Premature birth and low birth weight
Babies from multiple-birth pregnancies are much more likely to be born prematurely than those from single pregnancies: 51% of twins and 91% of triplets are born preterm, compared to 9.4% of singletons, and 14% of twins and 41% of triplets are born very preterm, compared to 1.7% of singletons. Drugs known as betamimetics can be used to relax the muscles of the uterus and delay birth in singleton pregnancies. There is some evidence that these drugs can also reduce the risk of preterm birth for twin pregnancies, but existing studies are small; more data are required before solid conclusions can be drawn. Likewise, existing studies are too small to determine whether a cervical suture is effective for reducing prematurity in cases of multiple birth. As a result of preterm birth, multiples tend to have lower birth weight than singletons. Exceptions are possible, however, as with the Kupresak triplets, born in 2008 in Mississauga, Ontario, Canada. Their combined weight was 17 lbs, 2.7 oz, which set a world record. Two of the triplets were similar in size and, as expected, of moderately low birth weight; the two combined weighed 9 lbs, 2.7 oz. The third triplet, however, was much larger and weighed 8 lbs on its own.
Cerebral palsy
Cerebral palsy is more common among multiple births than single births, being 2.3 per 1,000 survivors in singletons, 13 in twins, and 45 in triplets in North West England. This is likely a side effect of premature birth and low birth weight.
Behavioral issues
Premature birth is associated with a higher risk for a breadth of behavioral and socioemotional difficulties that begin in childhood and continue through adolescence and often into adulthood. Conditions where the risk is greatest include attention deficit hyperactivity disorder, autism spectrum disorder, and anxiety disorders.
Incomplete separation
Multiples may be monochorionic, sharing the same chorion, with the resultant risk of twin-to-twin transfusion syndrome. Monochorionic multiples may even be monoamniotic, sharing the same amniotic sac, resulting in risk of umbilical cord compression and nuchal cord. In very rare cases, there may be conjoined twins, possibly impairing the function of internal organs.
Mortality rate (stillbirth)
Multiples are also known to have a higher mortality rate. It is more common for multiple births to be stillborn, while for singletons the risk is not as high. A literature review on multiple pregnancies covers a study done on one set each of septuplets and octuplets, two sets of sextuplets, 8 sets of quintuplets, 17 sets of quadruplets, and 228 sets of triplets. From this study, Hammond found that the mean gestational age (weeks of pregnancy at delivery) was 33.4 weeks for triplets and 31 weeks for quadruplets. This shows that birth usually occurs 3–5 weeks before the woman reaches full term, and that pregnancies of sextuplets or more almost always end in the death of the fetuses. Though multiples are at a greater risk of being stillborn, there is inconclusive evidence whether the actual mortality rate is higher in multiples than in singletons.
Prevention in IVF
Today many multiple pregnancies are the result of in vitro fertilisation (IVF).
In a 1997 study of 2,173 embryo transfers performed as part of in vitro fertilisation, 34% resulted in successfully delivered pregnancies. The overall multiple pregnancy rate was 31.3% (24.7% twins, 5.8% triplets, and 0.8% quadruplets). Because IVF produces more multiples, a number of efforts are being made to reduce the risk of multiple births, specifically triplets or more. Medical practitioners are doing this by limiting the number of embryos per embryo transfer to one or two. That way, the risks for the mother and fetuses are decreased. The appropriate number of embryos to be transferred depends on the age of the woman, whether it is the first, second or third full IVF cycle attempt, and whether top-quality embryos are available. According to a 2013 guideline from the National Institute for Health and Care Excellence (NICE), the number of embryos transferred in a cycle should be chosen based on these factors. It is also recommended to use single embryo transfer in all situations if a top-quality blastocyst is available.
Management
Bed rest has not been found to change outcomes and therefore is not generally recommended outside of a research study.
Selective reduction
Selective reduction is the practice of reducing the number of fetuses in a multiple pregnancy; it is also called "multifetal reduction". The procedure generally takes two days: the first day for testing in order to select which fetuses to remove, and the second day for the procedure itself, in which potassium chloride is injected into the heart of each selected fetus under the guidance of ultrasound imaging. Risks of the procedure include bleeding requiring transfusion, rupture of the uterus, retained placenta, infection, miscarriage, and prelabor rupture of membranes. Each of these appears to be rare. There are also ethical concerns about this procedure, since it is a form of abortion, and also because of concerns over which fetuses are terminated and why. Selective reduction was developed in the mid-1980s, as people in the field of assisted reproductive technology became aware of the risks that multiple pregnancies carried for the mother and for the fetuses.
Care in pregnancy
Women with a multiple pregnancy are usually seen more regularly by midwives or doctors than those with singleton pregnancies because of the higher risks of complications. However, there is currently no evidence to suggest that specialised antenatal services produce better outcomes for mother or babies than 'normal' antenatal care. Women with a multiple pregnancy are also sometimes encouraged after 24 weeks to be on bed rest. This is not a requirement, but it has been used as a method to prevent complications, and some doctors may prescribe it to be on the safe side if they believe it is necessary.
Nutrition
As preterm birth is such a risk for women with multiple pregnancies, it has been suggested that these women should be encouraged to follow a high-calorie diet to increase the birth weights of the babies. Evidence around this subject is not yet good enough to advise women to do this, because the long-term effects of high-calorie diets on the mother are not known.
Cesarean section or vaginal delivery
A 2013 study involving 106 participating centers in 25 countries came to the conclusion that, in a twin pregnancy at a gestational age between 32 weeks 0 days and 38 weeks 6 days where the first twin is in cephalic presentation, planned Cesarean section does not significantly decrease or increase the risk of fetal or neonatal death or serious neonatal disability, as compared with planned vaginal delivery. In this study, 44% of the women planned for vaginal delivery still ended up having a Cesarean section for unplanned reasons such as pregnancy complications. In comparison, it has been estimated that 75% of twin pregnancies in the United States were delivered by Cesarean section in 2008. Also in comparison, the rate of Cesarean section for all pregnancies in the general population varies between 14% and 40%. Fetal position (the way the babies are lying in the womb) usually determines whether they are delivered by caesarean section or vaginally. A review of good-quality research on this subject found that if the twin that will be born first (i.e., the one lowest in the womb) is head down, there is no good evidence that caesarean section will be safer than a vaginal birth for the mother or babies. Monoamniotic twins (twins that form after the splitting of a fertilised egg and share the same amniotic fluid sac) are at more risk of complications than twins that have their own sacs. There is also insufficient evidence on whether to deliver the babies early by caesarean section or to wait for labour to start naturally while running checks on the babies' wellbeing. The mode of birth for this type of twins should therefore be decided with the mother and her family, and should take into account the need for good neonatal care services. Cesarean delivery is needed when the first twin is in a non-cephalic presentation or when it is a monoamniotic twin pregnancy.
Neonatal intensive care
Multiple-birth infants are usually admitted to neonatal intensive care or a special care nursery in the hospital immediately after being born. The records for all the triplet pregnancies managed and delivered from 1992 to 1996 were reviewed for neonatal statistics. From these files, Kaufman found that during the five-year period, 55 triplet pregnancies (i.e., 165 babies) were delivered; of the 165 babies, 149 were admitted to neonatal intensive care after delivery.
Society and culture
Insurance coverage
Iran
In Iran, the government provides free housing to families that have had quintuplets.
United States
A study by the U.S. Agency for Healthcare Research and Quality found that, in 2011, pregnant women covered by private insurance in the United States were older and more likely to have multiple gestation than women covered by Medicaid.
Cultural aspects
Certain cultures consider multiple births a portent of either good or evil. Mayan culture saw twins as a blessing and was fascinated by the idea of two bodies looking alike; the Maya used to believe that twins were one soul that had fragmented. In Ancient Rome, the legend of the twin brothers who founded the city (Romulus and Remus) made the birth of twin boys a blessing, while twin girls were seen as an unlucky burden, since both would have to be provided with an expensive dowry at about the same time. In Greek mythology, fraternal twins Castor and Polydeuces, and Heracles and Iphicles, are sons of two different fathers.
One of the twins (Polydeuces, Heracles) is the illegitimate son of the god Zeus; his brother is the son of their mother's mortal husband. A similar pair of twin sisters are Helen (of Troy) and Clytemnestra (who are also sisters of Castor and Polydeuces). The theme occurs in other mythologies as well; the phenomenon is called superfecundation. In certain medieval European chivalric romances, such as Marie de France's Le Fresne, a woman cites a multiple birth (often to a lower-class woman) as proof of adultery on the mother's part; while this may reflect a widespread belief, it is invariably treated as malicious slander, to be justly punished by the accuser having a multiple birth of her own, and the events of the romance are triggered by her attempt to hide one or more of the children. A similar effect occurs in the Knight of the Swan romance, in the Beatrix variants of the Swan-Children: her taunt is punished by her giving birth to seven children at once, and her wicked mother-in-law returns her taunt before exposing the children.
Ethics of multiple births
Medically assisted procreation
In vitro fertilization (IVF) was first successfully performed in the 1970s as a form of assisted reproductive technology. Of all the assisted reproductive technologies currently in practice, in vitro fertilization has the highest chance of producing multiple offspring. Per female egg, IVF currently has a 60–70% chance of achieving fertilization. Fertilization is made possible by administering a fertility drug or by directly injecting sperm into the egg. There is an increased chance for women over the age of 35 to have multiple births. IVF is a frequent subject of genetic and ethical debate. Through IVF, individuals can produce offspring successfully when natural procreation is not viable. However, in vitro fertilization can also become genetically selective, allowing particular genes or expressible traits to be chosen for the formed embryo. Ethical dilemmas arise in determining health care coverage and in the deviation from natural selection and gene variation. With regard to multiple births, different ethical concerns arise from the use of in vitro fertilization. Overall, multiple pregnancies can cause potential harm to the mother and children due to complications such as uterine bleeding and children not receiving equal nutrients. IVF pregnancies have also shown some preterm deliveries and lower birth weights in babies. While some view medically assisted procreation as a saving grace enabling them to have children, others consider these procedures to be unnatural and costly to the community.
Multifetal pregnancy reduction
Multifetal pregnancy reduction is the removal of one or more fetuses from the bearing woman. Selective reduction usually occurs in pregnancies assisted by assisted reproductive technology (ART). The first multifetal pregnancy reductions in a clinical setting took place in the 1980s. The procedure aims to reduce pregnancies down to approximately one or two fetuses. The overarching purpose of the procedure is not primarily to terminate life, but to increase the survival and success of the mother and babies. However, multifetal pregnancy reduction raises some ethical questions. The main argument is similar to abortion ethics, weighing the reduction of one fetus against the life of another; the protection of maternal well-being versus harm to newly formed fetal life is an extension of this ethical question.
One view holds that all life is important and that no life should be terminated without consent from the life being terminated. The opposing viewpoint advocates the right to choose: the choice to terminate a pregnancy because of desire or pregnancy risks. Overall, most multifetal pregnancy reductions that occur as a result of ART are done to protect the child-bearer's health and to maximize the health of the remaining fetuses.
Biology and health sciences
Human reproduction
Biology
79487
https://en.wikipedia.org/wiki/Tahr
Tahr
Tahrs ( , ) or tehrs ( ) are large artiodactyl ungulates related to goats and sheep. There are three species, all native to Asia. Previously thought to be closely related to each other and placed in a single genus, Hemitragus, genetic studies have since shown that they are not so closely related, and they are now considered members of three separate monotypic genera: Hemitragus is now reserved for the Himalayan tahr, Nilgiritragus for the Nilgiri tahr, and Arabitragus for the Arabian tahr.
Ranges
While the Arabian tahr of Oman and the Nilgiri tahr of South India both have small ranges and are considered endangered, the Himalayan tahr remains relatively widespread in the Himalayas, and has been introduced to the Southern Alps of New Zealand, where it is hunted recreationally. A population also exists on Table Mountain in South Africa, descended from a pair of tahrs that escaped from a zoo in 1936, but most of these have been culled. As for the Nilgiri tahr, research indicates its presence in the mountain ranges of southern India. Totalling about 1,400 individuals in 1998, its largest remaining population appears to survive along the border of the Indian states of Tamil Nadu and Kerala, where it may be vulnerable to poachers and illegal hunting.
Behavior
The tahr's daily routine consists of feeding in the morning, followed by a long rest period, and feeding again in the evening. Tahrs are generally not active at night and do not feed then, and they can be found at the same location morning and evening.
Tamil Nadu
The Nilgiri tahr is the state animal of Tamil Nadu. It is referenced in Tamil Sangam literature such as the Cilappatikaram and Cīvaka Cintāmaṇi. In 2023, the Tamil Nadu government declared October 7 Nilgiri Tahr Day in honour of E. R. C. Davidar.
Biology and health sciences
Bovidae
Animals
79537
https://en.wikipedia.org/wiki/Flight%20instruments
Flight instruments
Flight instruments are the instruments in the cockpit of an aircraft that provide the pilot with data about the flight situation of that aircraft, such as altitude, airspeed, vertical speed, heading, and other crucial information in flight. They improve safety by allowing the pilot to fly the aircraft in level flight, and make turns, without a reference outside the aircraft such as the horizon. Visual flight rules (VFR) require an airspeed indicator, an altimeter, and a compass or other suitable magnetic direction indicator. Instrument flight rules (IFR) additionally require a gyroscopic pitch-bank indicator (artificial horizon), direction indicator (directional gyro), and rate-of-turn indicator, plus a slip-skid indicator, adjustable altimeter, and a clock. Flight into instrument meteorological conditions (IMC) requires radio navigation instruments for precise takeoffs and landings. The term is sometimes used loosely as a synonym for cockpit instruments as a whole, in which context it can include engine instruments and navigational and communication equipment. Many modern aircraft have electronic flight instrument systems. Most regulated aircraft have these flight instruments as dictated by the US Code of Federal Regulations, Title 14, Part 91. They are grouped according to pitot-static systems, compass systems, and gyroscopic instruments.
Pitot-static systems
Pitot-static instruments use air pressure differences to determine speed and altitude.
Altimeter
The altimeter shows the aircraft's altitude above sea level by measuring the difference between the pressure in a stack of aneroid capsules inside the altimeter and the atmospheric pressure obtained through the static system. The most common unit for altimeter calibration worldwide is hectopascals (hPa), except in North America and Japan, where inches of mercury (inHg) are used. The altimeter is adjustable for local barometric pressure, which must be set correctly to obtain accurate altitude readings, usually in either feet or meters. As the aircraft ascends, the static pressure drops and the capsules expand, causing the altimeter to indicate a higher altitude. The opposite occurs when descending. With the advancement of aviation and increased altitude ceilings, the altimeter dial had to be altered for use at both higher and lower altitudes. Hence, when the needles indicate lower altitudes (i.e., during the first 360-degree sweep of the pointers), a small window with oblique lines appears, warning the pilot of proximity to the ground. This modification was introduced in the early 1960s after recurring air accidents caused by pilots misreading the altimeter. At higher altitudes, the window disappears.
Airspeed indicator
The airspeed indicator shows the aircraft's speed relative to the surrounding air. Knots are currently the most widely used unit, but kilometers per hour are sometimes used instead. The airspeed indicator works by measuring the ram-air pressure in the aircraft's pitot tube relative to the ambient static pressure. The indicated airspeed (IAS) must be corrected for nonstandard pressure and temperature in order to obtain the true airspeed (TAS). The instrument is color-coded to indicate important airspeeds such as the stall speed, never-exceed airspeed, and safe flap operation speeds.
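The pressure-to-altitude relationship that the altimeter mechanizes can be sketched numerically. Below is a minimal Python illustration assuming the ICAO standard atmosphere in the troposphere; the constants and function name are illustrative, not from the source, and a real altimeter implements this relationship mechanically through its aneroid capsules rather than by calculation.

```python
ISA_SEA_LEVEL_HPA = 1013.25      # standard sea-level pressure setting
TROPOSPHERE_EXPONENT = 0.190263  # R*L/(g0*M) for the ICAO standard atmosphere

def indicated_altitude_m(static_hpa, subscale_hpa=ISA_SEA_LEVEL_HPA):
    """Altitude in meters an ideal altimeter would indicate when sensing
    static pressure `static_hpa`, with its barometric subscale set to
    `subscale_hpa` (the local pressure setting described above)."""
    return 44330.77 * (1.0 - (static_hpa / subscale_hpa) ** TROPOSPHERE_EXPONENT)

# A static pressure of 850 hPa with the standard setting reads roughly 1,457 m;
# setting a higher local pressure on the subscale raises the indication.
print(round(indicated_altitude_m(850.0)))
```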
Vertical speed indicator
The VSI (also sometimes called a variometer, or rate-of-climb indicator) senses changing air pressure and displays that information to the pilot as a rate of climb or descent in feet per minute, meters per second, or knots.
Compass systems
Magnetic compass
The compass shows the aircraft's heading relative to magnetic north. Errors include variation, the difference between magnetic and true direction, and deviation, caused by the electrical wiring in the aircraft, which requires a compass correction card. Additionally, the compass is subject to dip errors. While reliable in steady level flight, it can give confusing indications when turning, climbing, descending, or accelerating, due to the inclination of the Earth's magnetic field. For this reason, the heading indicator is also used for aircraft operation, but periodically calibrated against the compass.
Gyroscopic systems
Attitude indicator
The attitude indicator (also known as an artificial horizon) shows the aircraft's relation to the horizon. From this the pilot can tell whether the wings are level (roll) and whether the aircraft nose is pointing above or below the horizon (pitch). Attitude is presented in degrees (°). The attitude indicator is a primary instrument for instrument flight and is also useful in conditions of poor visibility. Pilots are trained to use other instruments in combination should this instrument or its power fail.
Heading indicator
The heading indicator (also known as the directional gyro, or DG) displays the aircraft's heading in compass points, and with respect to magnetic north when set with a compass. Bearing friction causes drift errors from precession, which must be periodically corrected by calibrating the instrument against the magnetic compass. In many advanced aircraft (including almost all jet aircraft), the heading indicator is replaced by a horizontal situation indicator (HSI), which provides the same heading information but also assists with navigation.
Turn indicator
These include the turn-and-slip indicator and the turn coordinator, which indicate the aircraft's rate of turn. They include an inclinometer to indicate whether the aircraft is in coordinated flight or in a slip or skid. Additional marks indicate a standard rate turn. The turn rate is most commonly expressed in either degrees per second (deg/s) or minutes per turn (min/tr); a short numerical sketch of the bank angle required for a standard rate turn appears at the end of this article.
Flight director systems
These include the horizontal situation indicator (HSI) and attitude director indicator (ADI). The HSI combines the magnetic compass with navigation signals and a glide slope; the navigation information comes from a VOR/localizer or GNSS. The ADI is an attitude indicator with computer-driven steering bars, a task reliever during instrument flight.
Navigational systems
Very high frequency omnidirectional range (VOR)
The VOR indicator instrument includes a course deviation indicator (CDI), omnibearing selector (OBS), TO/FROM indicator, and flags. The CDI shows an aircraft's lateral position in relation to a selected radial track. It is used for orientation, tracking to or from a station, and course interception. On the instrument, the vertical needle indicates the lateral position of the selected track. A horizontal needle allows the pilot to follow a glide slope when the instrument is used with an ILS.
Nondirectional radio beacon (NDB)
The automatic direction finder (ADF) indicator instrument can be a fixed-card, movable-card, or radio magnetic indicator (RMI).
An RMI is remotely coupled to a gyrocompass so that it automatically rotates the azimuth card to represent aircraft heading. While simple ADF displays may have only one needle, a typical RMI has two, coupled to different ADF receivers, allowing for position fixing using one instrument.
Layout
Most aircraft are equipped with a standard set of flight instruments which give the pilot information about the aircraft's attitude, airspeed, and altitude.
T arrangement
Most US aircraft built since the 1940s have flight instruments arranged in a standardized pattern called the "T" arrangement. The attitude indicator is in the top center, airspeed to the left, altimeter to the right, and heading indicator under the attitude indicator. The other two, the turn coordinator and vertical speed indicator, are usually found under the airspeed indicator and altimeter, but are given more latitude in placement. The magnetic compass is mounted above the instrument panel, often on the windscreen centerpost. In newer aircraft with glass cockpit instruments, the layout of the displays conforms to the basic T arrangement.
Early history
In 1929, Jimmy Doolittle became the first pilot to take off, fly, and land an airplane using instruments alone, without a view outside the cockpit. In 1937, the British Royal Air Force (RAF) chose a set of six essential flight instruments which would remain the standard panel used for flying in instrument meteorological conditions (IMC) for the next 20 years. They were:
altimeter (feet)
airspeed indicator (knots)
turn and bank indicator (turn direction and coordination)
vertical speed indicator (feet per minute)
artificial horizon (attitude indication)
directional gyro / heading indicator (degrees)
This panel arrangement was incorporated into all RAF aircraft built to official specification from 1938, such as the Miles Master, Hawker Hurricane, Supermarine Spitfire, and the 4-engined Avro Lancaster and Handley Page Halifax heavy bombers, but not the earlier light single-engined Tiger Moth trainer. It minimized the type-conversion difficulties associated with blind flying, since a pilot trained on one aircraft could quickly become accustomed to any other if the instruments were identical. This basic six set, also known as a "six pack", was also adopted by commercial aviation. After the Second World War the arrangement was changed to: (top row) airspeed, artificial horizon, altimeter; (bottom row) turn and bank indicator, heading indicator, vertical speed.
Further development
In glass cockpits the flight instruments are shown on monitors. The primary flight display (PFD) is given a central place on the panel, superseding the artificial horizon, often with a horizontal situation indicator next to it or integrated into the PFD. The indicated airspeed, altimeter, and vertical speed indicator are displayed as moving "tapes", with the indicated airspeed to the left of the horizon and the altimeter and vertical speed to the right, in the same layout as in most older-style "clock cockpits".
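As noted in the turn indicator section above, here is a minimal sketch of the bank angle a coordinated standard rate turn requires, assuming the common definition of standard rate as 3 degrees per second (a two-minute turn); the function name and constants are illustrative, not from the source.

```python
import math

STANDARD_RATE_RAD_S = math.radians(3.0)  # standard rate: 3 deg/s, a 2-minute turn
G = 9.80665                              # gravitational acceleration, m/s^2
KT_TO_MS = 0.514444                      # knots to meters per second

def standard_rate_bank_deg(tas_knots):
    """Bank angle for a coordinated standard-rate turn at a given true
    airspeed, from the relation tan(bank) = v * omega / g."""
    v = tas_knots * KT_TO_MS
    return math.degrees(math.atan(v * STANDARD_RATE_RAD_S / G))

print(round(standard_rate_bank_deg(100), 1))  # about 15.4 degrees at 100 knots
```

The required bank grows with airspeed, which is why faster aircraft need steeper bank angles to hold a standard rate turn.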
Technology
Aircraft components
null
79648
https://en.wikipedia.org/wiki/Pinto%20bean
Pinto bean
The pinto bean is a variety of common bean (Phaseolus vulgaris). In Spanish they are called frijoles pintos. It is the most popular bean by crop production in Northern Mexico and the Southwestern United States, and is most often eaten whole (sometimes in broth), or mashed and then refried. Prepared either way, it is a common filling for burritos, tostadas, or tacos in Mexican cuisine, and is also served as a side or as part of an entrée with a side tortilla or sopaipilla in New Mexican cuisine. In South America, it is known as the poroto frutilla, literally "strawberry bean". In Portuguese, the Brazilian name is feijão carioca (literally "carioca bean"; contrary to popular belief, the beans were not named after Rio de Janeiro, but after a pig breed that has the same color as the legume), which differs from the name in Portugal: feijão catarino. Additionally, the young immature pods may be harvested and cooked as green pinto beans. There are a number of different varieties of pinto bean, notably some originating from Northern Spain, where an annual fair is dedicated to the bean. In many languages, "pinto" means "colored" or "painted", as derived from the Late Latin pinctus and Classical Latin pictus. In Spanish, it means "painted", "dappled", or "spotted". The coloration of pinto beans is similar to that of pinto horses.
Use
The dried pinto bean is the bean commonly used, reconstituted or canned, in many dishes, especially refried beans. It is popular in chili con carne, although kidney beans, black beans, and many others may be used in other locales. Pinto beans are often found in Brazilian cuisine. Legumes, mainly the common bean, are a staple food everywhere in the country, cultivated since 3000 BC, along with starch-rich foods such as rice, manioc, pasta and other wheat-based products, polenta and other corn-based products, and potatoes and yams. Pinto beans are also a very important ingredient in Spanish and Mexican cuisine. In Spanish cuisine, pinto beans are mostly used in a dish named after them. In the Southern United States, pinto beans were once a staple, especially during the winter months. Some organizations and churches in rural areas still sponsor "pinto bean suppers" for social gatherings and fund raisers.
Varieties
Pinto bean varieties include 'Burke', 'Hidatsa', and 'Othello'. The alubia pinta alavesa, or "Alavese pinto bean", a red variety of the pinto bean, originated in Añana, a town and municipality in the province of Álava, in the Basque Country of northern Spain. In October, the Feria de la alubia pinta alavesa (Alavese pinto bean fair) is celebrated in Pobes.
Cooking
Pinto beans are often soaked, which greatly shortens cooking time. If unsoaked, they are frequently boiled rapidly for 10 minutes. They then generally take two to three hours to cook on a stove to soften. In a pressure cooker they cook very rapidly, perhaps 3 minutes if soaked and 20–45 minutes if unsoaked. Cooking times vary considerably, however, and may depend on the source of the bean, the hardness of the cooking water, and many other factors.
Nutrition
A nutrient-dense legume, the pinto bean contains many essential nutrients. It is a good source of protein, phosphorus, and manganese, and very high in dietary fiber and folate. Rice and pinto beans served with cornbread or maize tortillas are often a staple meal where meat is unavailable.
This combination contains the essential amino acids necessary for humans in adequate amounts: maize complements beans' relative scarcity of methionine and cystine and beans complement maize's relative scarcity of lysine and tryptophan. Studies have indicated pinto beans can lower the levels of both HDL and LDL cholesterol. Pinto beans have also been shown to contain the phytoestrogen coumestrol, which has a variety of possible health effects.
Biology and health sciences
Pulses
Plants
79658
https://en.wikipedia.org/wiki/Lychee
Lychee
Lychee (Litchi chinensis) is a monotypic taxon, the sole member of the genus Litchi in the soapberry family, Sapindaceae. There are three distinct subspecies of lychee. The most common is the Indochinese lychee found in South China, Malaysia, and northern Vietnam. The other two are the Philippine lychee (locally called alupag or matamata), found only in the Philippines, and the Javanese lychee, cultivated in Indonesia and Malaysia. The tree has been introduced throughout Southeast Asia and South Asia. Cultivation in China is documented from the 11th century. China is the main producer of lychees, followed by India, Vietnam, other countries in Southeast Asia, other countries in South Asia, Madagascar, and South Africa. A tall evergreen tree, it bears small fleshy sweet fruits. The outside of the fruit is a pink-red, rough-textured soft shell. Lychee seeds contain methylenecyclopropylglycine (MCPG), which has caused hypoglycemia associated with outbreaks of encephalopathy in undernourished Indian and Vietnamese children who consumed lychee fruit.
Taxonomy
Litchi chinensis is the sole member of the genus Litchi in the soapberry family, Sapindaceae. It was described and named by French naturalist Pierre Sonnerat in his account Voyage aux Indes Orientales et à la Chine, fait depuis 1774 jusqu'à 1781 ("Voyage to the East Indies and China, made between 1774 and 1781"), published in 1782. There are three subspecies, distinguished by flower arrangement, twig thickness, fruit, and number of stamens.
Litchi chinensis subsp. chinensis is the only commercialized lychee. It grows wild in southern China, northern Vietnam, and Cambodia. It has thin twigs, flowers typically with six stamens, and fruit that are smooth or with protuberances up to .
Litchi chinensis subsp. philippinensis (Radlk.) Leenh. is common in the wild in the Philippines and rarely cultivated. Locally called alupag, mata-mata, or matamata due to its eye-like appearance when the fruit is opened, it has thin twigs, six to seven stamens, and long oval fruit with spiky protuberances up to .
Litchi chinensis subsp. javensis is known only in cultivation, in Malaysia and Indonesia. It has thick twigs, flowers with seven to eleven stamens in sessile clusters, and smooth fruit with protuberances up to .
Description
Tree
Litchi chinensis is an evergreen tree that is frequently less than tall, sometimes reaching . Its evergreen leaves, long, are pinnate, having 4 to 8 alternate, elliptic-oblong to lanceolate, abruptly pointed leaflets, with leaflets in two to four pairs. The bark is grey-black, the branches a brownish-red. Lychees are similar in foliage to the family Lauraceae, likely due to convergent evolution; they are adapted by developing leaves that repel water, called laurophyll or lauroid leaves. Flowers grow on a terminal inflorescence with many panicles on the current season's growth. The panicles grow in clusters of ten or more, reaching or longer, holding hundreds of small white, yellow, or green flowers that are distinctively fragrant.
Fruit
The lychee bears fleshy fruits that mature in 80–112 days depending on climate, location, and cultivar. Fruits vary in shape from round to ovoid to heart-shaped, up to 5 cm long and 4 cm wide (2.0 in × 1.6 in), weighing approximately 20 g. The thin, tough skin is green when immature, ripening to red or pink-red, and is smooth or covered with small sharp protuberances, giving it a rough texture.
The rind is inedible but easily removed to expose a layer of translucent white flesh with a floral smell and a sweet flavor. The skin turns brown and dry when left out after harvesting. The fleshy, edible portion of the fruit is an aril surrounding one dark brown inedible seed that is 1 to 3.3 cm long and 0.6 to 1.2 cm wide (0.39–1.30 by 0.24–0.47 in). Some cultivars produce a high percentage of fruits with shriveled aborted seeds known as 'chicken tongues'. These fruits typically command a higher price, due to having more edible flesh. Since the floral flavor is lost in the process of canning, the fruit is usually eaten fresh.
History
Cultivation of lychee began in the region of southern China (with records going back to 1059 AD), Malaysia, and northern Vietnam. Unofficial records in China refer to lychee as far back as 2000 BC. Wild trees still grow in parts of southern China and on Hainan Island. The fruit was used as a delicacy in the Chinese Imperial Court. In the 1st century, during the Han dynasty, fresh lychees were a popular tribute item, and in such demand at the Imperial Court that a special courier service with fast horses would bring the fresh fruit from Guangdong. There was great demand for lychee in the Song dynasty (960–1279), according to Cai Xiang in his Li chi pu (Treatise on Lychees). It was also the favorite fruit of Emperor Li Longji (Xuanzong)'s favored concubine Yang Yuhuan (Yang Guifei); the emperor had the fruit delivered at great expense to the capital. The lychee attracted the attention of European travelers: the Spanish bishop, explorer, and sinologist Juan González de Mendoza, in his History of the great and mighty kingdom of China (1585; English translation 1588), based on the reports of Spanish friars who had visited China in the 1570s, gave the fruit high praise. The lychee was later described and introduced to the West in 1656 by Michal Boym, a Polish Jesuit missionary (Poland then being part of the Polish–Lithuanian Commonwealth). Lychee trees were introduced to Jamaica by Chinese immigrants in the 18th century, where the fruit is associated with the Chinese Jamaican community. The fruit is featured in a popular Jamaican cake, called lychee cake, made of a light sponge cake, cream, and fruit, which has been one of the most popular cakes in Jamaica since its creation by baker Selena Wong in 1988. Lychee was introduced to the north-western parts of the Indian subcontinent (then the British Raj) in 1932 and remained an exotic plant until the 1960s, when commercial production began. The crop's production expanded from Begum Kot (Lahore District) in Punjab to Hazara, Haripur, Sialkot, and Mirpur Khas.
Double domestication
Genomic studies indicate that the lychee resulted from double domestication, by independent cultivation in two different regions of ancient China.
Cultivation and uses
Lychees are extensively grown in southern China, Taiwan, Vietnam, and the rest of tropical Southeast Asia, the Indian subcontinent, and tropical regions of many other countries. They require a frost-free tropical climate whose temperature does not fall below . Lychees require high summer heat, rainfall, and humidity, growing optimally on well-drained, slightly acidic soils rich in organic matter and mulch. Some 200 cultivars exist, with early- and late-maturing forms suited to warmer and cooler climates respectively, although mainly eight cultivars are used for commerce in China. They are also grown as an ornamental tree, as well as for their fruit.
The most common way of propagating lychee is through a method called air layering, or marcotting. Air layers, or marcotts, are made by cutting a branch of a mature tree, covering the cut with a rooting medium such as peat or sphagnum moss, then wrapping the medium with polyethylene film and allowing the cut to root. Once significant rooting has occurred, the marcott is cut from the branch and potted. According to folklore, a lychee tree that is not producing much fruit can be girdled, leading to more fruit production. When the center of the canopy is opened as part of training and pruning, fruit can be set throughout the canopy ("stereo fruiting"), giving higher orchard productivity. Lychees are commonly sold fresh in Asian markets. The red rind turns dark brown when the fruit is refrigerated, but the taste is not affected. It is also sold canned year-round. The fruit can be dried with the rind intact, at which point the flesh shrinks and darkens.
Cultivars
There are numerous lychee cultivars, with considerable confusion regarding their naming and identification. The same cultivar grown in different climates can produce very different fruit. Cultivars can also have different synonyms in various parts of the world. Southeast Asian countries, along with Australia, use the original Chinese names for the main cultivars. India grows more than a dozen different cultivars. South Africa grows mainly the "Mauritius" cultivar. Most cultivars grown in the United States were imported from China, except for the "Groff", which was developed in the state of Hawaii. Different cultivars of lychee are popular in various growing regions and countries. In China, popular cultivars include Sanyuehong, Baitangying, Baila, Muzaffarpur, Samastipur, Shuidong, Feizixiao, Dazou, Heiye, Nuomici, Guiwei, Huaizhi, Lanzhu, and Chenzi. In Vietnam, the most popular cultivar is Vai Thieu Hai Duong. In the US, production is based on several cultivars, including Mauritius, Brewster, and Hak Ip. India grows more than a dozen named cultivars, including Shahi (highest pulp percentage), Dehradun, Early Large Red, Kalkattia, and Rose Scented.
Nutrients
Raw lychee fruit is 82% water, 17% carbohydrates, 1% protein, and contains negligible fat. In a 100-gram (3.5 oz) reference amount, raw lychee fruit supplies 66 calories of food energy. The raw pulp is rich in vitamin C, having 72 mg per 100 grams – an amount representing 79% of the Daily Value – but contains no other micronutrients in significant content.
Phytochemicals
Lychees have moderate amounts of polyphenols, with flavan-3-ol monomers and dimers as the major compounds, representing about 87% of total polyphenols; these declined in content during storage or browning. Cyanidin-3-glucoside represented 92% of total anthocyanins.
Poisoning
In 1962, it was found that lychee seeds contained methylenecyclopropylglycine (MCPG), a homologue of hypoglycin A, which caused hypoglycemia in human and animal studies. Since the end of the 1990s, unexplained outbreaks of encephalopathy have occurred, appearing to affect only children, in India (where the condition is called chamki bukhar) and northern Vietnam (where it was called Ac Mong encephalitis, after the Vietnamese word for nightmare), during the lychee harvest season from May to June. A 2013 investigation by the U.S. Centers for Disease Control and Prevention (CDC) in India showed that cases were linked to the consumption of lychee fruit, causing a noninflammatory encephalopathy that mimicked the symptoms of Jamaican vomiting sickness.
Because low blood sugar (hypoglycemia) of less than 70 mg/dL on admission was common in the undernourished children, and was associated with a poorer outcome (44% of all cases were fatal), the CDC identified the illness as a hypoglycemic encephalopathy. The investigation linked the illness to hypoglycin A and MCPG toxicity and to malnourished children eating lychees (particularly unripe ones) on an empty stomach. The CDC report recommended that parents ensure their children limit lychee consumption and have an evening meal, which elevates blood glucose levels and may be sufficient to deter illness. Earlier studies had incorrectly concluded that transmission might occur from direct contact with lychees contaminated by bat saliva, urine, or guano, or from other vectors, such as insects found in lychee trees or sand flies, as in the case of Chandipura virus. A 2017 study found that pesticides used in the plantations could be responsible for the encephalitis and deaths of young children in Bangladesh.
Biology and health sciences
Sapindales
null
79683
https://en.wikipedia.org/wiki/Rutaceae
Rutaceae
The Rutaceae, commonly known as the rue or citrus family, is a family of flowering plants usually placed in the order Sapindales. Species of the family generally have flowers that divide into four or five parts, usually with strong scents. They range in form and size from herbs to shrubs and large trees. The most economically important genus in the family is Citrus, which includes the orange (C. × sinensis), lemon (C. × limon), grapefruit (C. × paradisi), and limes (various species). Boronia is a large Australian genus, some members of which have highly fragrant flowers and are used in commercial oil production. Other large genera include Zanthoxylum, several species of which are cultivated for Sichuan pepper, Melicope, and Agathosma. About 160 genera are in the family Rutaceae.
Characteristics
Most species are trees or shrubs, a few are herbs (the type genus Ruta, Boenninghausenia, and Dictamnus), frequently aromatic with glands on the leaves, sometimes with thorns. The leaves are usually opposite and compound, and without stipules. Pellucid glands, a type of oil gland, are found in the leaves and are responsible for the aromatic smell of the family's members; traditionally, they have been the primary synapomorphic characteristic used to identify the Rutaceae. Flowers are bractless, solitary or in cymes, rarely in racemes, and mainly pollinated by insects. They are radially or (rarely) laterally symmetric, and generally hermaphroditic. They have four or five petals and sepals, sometimes three, mostly separate, and eight to ten stamens (five in Skimmia, many in Citrus), usually separate or in several groups. There is usually a single stigma with 2 to 5 united carpels, sometimes with the ovaries separate but the styles combined. The fruit of the Rutaceae are very variable: berries, drupes, hesperidia, samaras, capsules, and follicles all occur. Seed number also varies widely.
Taxonomy
The family is closely related to the Sapindaceae, Simaroubaceae, and Meliaceae, and all are usually placed into the same order, although older systems separate that order into Rutales and Sapindales. The families Flindersiaceae and Ptaeroxylaceae are sometimes kept separate, but nowadays are generally placed in the Rutaceae, as are the former Cneoraceae.
Subfamilies
In 1896, Engler published a division of the family Rutaceae into seven subfamilies. One, Rhabdodendroideae, is no longer considered to belong to the Rutaceae, being treated as the segregate family Rhabdodendraceae, containing only the genus Rhabdodendron. Two monogeneric subfamilies, Dictyolomatoideae and Spathelioideae, are now included in the subfamily Cneoroideae, along with genera Engler placed in other families. The remaining four Engler subfamilies were Aurantioideae, Rutoideae, Flindersioideae, and Toddalioideae. Engler's division into subfamilies largely relied on the characteristics of the fruit, as did others used until molecular phylogenetic methods were applied. Molecular methods have shown that only Aurantioideae can be clearly differentiated from other members of the family based on fruit; they have not supported the circumscriptions of Engler's three other main subfamilies. In 2012, Groppo et al. divided Rutaceae into only two subfamilies, retaining Cneoroideae but placing all the remaining genera in a greatly enlarged subfamily Rutoideae s.l. A 2014 classification by Morton and Telmer also retained Engler's Aurantioideae, but split the remaining Rutoideae s.l. into a smaller Rutoideae and a much larger Amyridoideae s.l., containing most of Engler's Rutoideae.
Until 2021, molecular phylogenetic studies had sampled only between 20% and 40% of the genera of Rutaceae. A 2021 study by Appelhans et al. sampled almost 90% of the genera. The two main clades recognized by Groppo et al. in 2012 were upheld, but Morton and Telmer's Rutoideae was paraphyletic and their Amyridoideae was polyphyletic and did not include the type genus. Appelhans et al. divided the family into six subfamilies, based on the cladogram produced in their study. The large subfamily Zanthoxyloideae was shown to contain distinct clades, but the authors considered that a revised classification at the tribal level was not yet feasible at the time their paper was published.
Notable species
The family is of great economic importance in warm temperate and subtropical climates for its numerous edible fruits of the genus Citrus, such as the orange, lemon, calamansi, lime, kumquat, mandarin, and grapefruit. Non-citrus fruits include the white sapote (Casimiroa edulis), orangeberry (Glycosmis pentaphylla), limeberry (Triphasia trifolia), and the bael (Aegle marmelos). The curry tree, Murraya koenigii, is of culinary importance in the Indian subcontinent and elsewhere, as its leaves are used as a spice to flavour dishes. Spices are also made from a number of species in the genus Zanthoxylum, notably Sichuan pepper. Other plants are grown in horticulture: Murraya and Skimmia species, for example. Ruta, Zanthoxylum, and Casimiroa species are used medicinally. Several plants are also used by the perfume industry, such as the Western Australian Boronia megastigma. The genus Pilocarpus has species (P. jaborandi and P. microphyllus from Brazil, and P. pennatifolius from Paraguay) from which the medicine pilocarpine, used to treat glaucoma, is extracted.
Biology and health sciences
Sapindales
Plants
79713
https://en.wikipedia.org/wiki/Cantaloupe
Cantaloupe
The cantaloupe is a type of true melon (Cucumis melo) with sweet, aromatic, and usually orange flesh. Originally, cantaloupe referred to the true cantaloupe or European cantaloupe, with non- to slightly netted and often ribbed rind. Today, it also refers to the muskmelon with strongly netted rind, which is called cantaloupe in North America (hence the name American cantaloupe), rockmelon in Australia and New Zealand, and spanspek in Southern Africa. Cantaloupes range in mass from .
Etymology and origin
The cantaloupe is said to have been introduced to Europe from Armenia. It acquired its name because it was first cultivated at Cantalupo, the Pope's country estate. It was first mentioned in English literature in 1739. The cantaloupe most likely originated in a region from South Asia to Africa. It was later introduced to Europe and, around 1890, became a commercial crop in the United States. The South African English name dates back at least as far as 18th-century Dutch Suriname: J. van Donselaar wrote in 1770, " is the name for the form that grows in Suriname which, because of its thick skin and little flesh, is less consumed." A common folk etymology involves the Spanish-born wife of Sir Harry Smith, the 19th-century governor of the Cape Colony: she ate cantaloupe for breakfast while her husband ate bacon and eggs, and the fruit was consequently termed Spanish bacon (Afrikaans Spaanse spek) by locals. However, the term had been in use long before that point.
Types
The true or European cantaloupe (Cantalupensis Group sensu stricto), which has non- to slightly netted rind and orange flesh, includes the following types:
Sub-group Prescott, with deeply ribbed rind, such as 'Prescott Fond Blanc'
Sub-group Saccharinus, with speckled and slightly ribbed rind, such as 'Sucrin de Honfleur'
Sub-group Charentais, with non-speckled, slightly ribbed, and green-sutured rind
The Israeli cantaloupe (Sub-group Ha'Ogen) is similar to the European one, but it has green flesh. The muskmelon or American cantaloupe (formerly Reticulatus Group but now merged into Cantalupensis Group), which has strongly netted rind and orange flesh, includes the following types:
Sub-group American Western, with non- to slightly ribbed and wholly netted rind
Sub-group American Eastern, with more or less ribbed rind on which the sutures are less netted or not netted at all
Other similar types
A melon with netted rind is not necessarily a cantaloupe. Many varieties of the Chandalak Group and Ameri Group also have netted rind. The Japanese muskmelon (Sub-group Earl's) resembles the American cantaloupe in its netted rind, but differs in its green flesh and non-dehiscent peduncles (meaning the melon does not detach from the stalk when ripe). Therefore, some horticulturists classify the Japanese muskmelon under the Inodorus Group instead of the Cantalupensis or Reticulatus Group.
Production
In 2016, global production of melons, including cantaloupes, totaled 31.2 million tons, with China accounting for 51% of the world total (15.9 million tons). Other significant cantaloupe-growing countries were Turkey, Iran, Egypt, and India, each producing between 1 and 1.9 million tons. California grows 75% of the cantaloupes in the US.
Uses
Culinary
Cantaloupe is normally eaten as a fresh fruit, as a salad, or as a dessert with ice cream or custard. Melon pieces wrapped in prosciutto are a familiar antipasto. The seeds are edible and may be dried for use as a snack.
Because the surface of a cantaloupe can contain harmful bacteria, in particular Salmonella, it is recommended that a melon be washed and scrubbed thoroughly before cutting and consumption. After a worldwide search, a moldy cantaloupe found in a Peoria, Illinois, market in 1943 turned out to contain the highest-yielding strain of mold for penicillin production. Nutrition Raw cantaloupe is 90% water, 8% carbohydrates, 0.8% protein, and 0.2% fat (table). In a reference amount of , raw cantaloupe supplies of food energy and is a rich source (20% or more of the Daily Value, DV) of vitamin A (29% DV) and a moderate source of vitamin C (13% DV). Other micronutrients are present in negligible amounts (less than 10% DV) (table).
Biology and health sciences
Melons
Plants
79717
https://en.wikipedia.org/wiki/U.S.%20Route%2066
U.S. Route 66
U.S. Route 66 or U.S. Highway 66 (US 66 or Route 66) is one of the original highways in the United States Numbered Highway System. It was established on November 11, 1926, with road signs erected the following year. The highway, which became one of the most famous roads in the United States, ran from Chicago, Illinois, through Missouri, Kansas, Oklahoma, Texas, New Mexico, and Arizona before terminating in Santa Monica in Los Angeles County, California, covering a total of . It was recognized in popular culture by both the 1946 hit song "(Get Your Kicks on) Route 66" and the Route 66 television series, which aired on CBS from 1960 to 1964. It was also featured in the Disney/Pixar animated feature film franchise Cars, beginning in 2006. In John Steinbeck's novel The Grapes of Wrath (1939), the highway symbolizes escape, loss, and the hope of a new beginning; Steinbeck dubbed it the Mother Road. Other designations and nicknames include the Will Rogers Highway and the Main Street of America, the latter nickname shared with U.S. Route 40. US 66 was a primary route for those who migrated west, especially during the Dust Bowl of the 1930s, and it supported the economies of the communities through which it passed. People doing business along the route became prosperous, and they later fought to keep it alive in the face of the growing threat of being bypassed by the more advanced controlled-access highways of the Interstate Highway System in the 1960s and 1970s. US 66 underwent many improvements and realignments over its lifetime, but it was officially removed from the United States Highway System in 1985 after it was entirely replaced by segments of the Interstate Highway System. Portions of the road that passed through Illinois, Missouri, Oklahoma, New Mexico, and Arizona have been collectively designated a National Scenic Byway by the name "Historic Route 66", returning the name to some maps. Several states have adopted significant bypassed sections of the former US 66 into their state road networks as State Route 66, and much of the former route within San Bernardino County, California, is designated as County Route 66. The corridor is also being redeveloped into U.S. Bicycle Route 66, a part of the United States Bicycle Route System that was developed in the 2010s. History Before the U.S. Highway System In 1857, Lt. Edward Fitzgerald Beale, a naval officer in the service of the U.S. Army Corps of Topographical Engineers, was ordered by the War Department to build a government-funded wagon road along the 35th Parallel. His secondary orders were to test the feasibility of the use of camels as pack animals in the southwestern desert. This road became part of US 66. Parts of the original Route 66 from 1913, prior to its official naming and commissioning, can still be seen north of the Cajon Pass. South of Cajon, the paved road becomes a dirt road that was also part of the original Route 66. Before a nationwide network of numbered highways was adopted by the states, auto trails were marked by private organizations. The route that became US 66 was covered by three highways: The Lone Star Route passed through St. Louis on its way from Chicago to Cameron, Louisiana (although US 66 would take a shorter route through Bloomington rather than Peoria).
The transcontinental National Old Trails Road led via St. Louis to Los Angeles, but was not followed until New Mexico. Instead, US 66 used one of the main routes of the Ozark Trails system, which ended at the National Old Trails Road just south of Las Vegas, New Mexico. Again, a shorter route was taken, here following the Postal Highway between Oklahoma City and Amarillo. The National Old Trails Road became the rest of the route to Los Angeles. Legislation for public highways first appeared in 1916, with revisions in 1921, but the government did not execute a national highway construction plan until Congress enacted an even more comprehensive version of the act in 1925. A road between Chicago and Los Angeles was originally the idea of entrepreneurs Cyrus Avery of Tulsa, Oklahoma, and John Woodruff of Springfield, Missouri, who lobbied the American Association of State Highway Officials (AASHO) for the creation of a route following the 1925 plans. From the outset, public road planners intended US 66 to connect the main streets of rural and urban communities along its course for the most practical of reasons: most small towns had no prior access to a major national thoroughfare. Birthplace and rise of US 66 The numerical designation 66 was assigned to the Chicago-to-Los Angeles route on April 30, 1926, in Springfield, Missouri. A placard in Park Central Square was dedicated to the city by the Route 66 Association of Missouri, and traces of the "Mother Road" are still visible in downtown Springfield, along Kearney Street, Glenstone Avenue, College and St. Louis Streets, and on Route 266 to Halltown, Missouri. Championed by Avery when the first talks about a national highway system began, US 66 was first signed into law in 1927 as one of the original U.S. Highways, although it was not completely paved until 1938. Avery was adamant that the highway have a round number and had proposed number 60 to identify it. A controversy erupted over the number 60, largely from delegates from Kentucky who wanted a Virginia Beach–Los Angeles highway to be US 60 and US 62 between Chicago and Springfield, Missouri. Arguments and counterarguments continued throughout February, including a proposal to split the proposed route through Kentucky into Route 60 North (to Chicago) and Route 60 South (to Newport News). The final conclusion was to have US 60 run between Virginia Beach, Virginia, and Springfield, Missouri, and the Chicago–L.A. route be US 62. Avery and highway engineer John Page settled on "66", which was unassigned, despite the fact that in its entirety, US 66 was north of US 60. The state of Missouri released its 1926 state highway map with the highway labeled as US 60. After the new federal highway system was officially created, Cyrus Avery called for the establishment of the U.S. Highway 66 Association to promote the complete paving of the highway from end to end and to promote travel along it. In 1927, in Tulsa, the association was officially established, with John T. Woodruff of Springfield, Missouri, elected the first president. In 1928, the association made its first attempt at publicity, the "Bunion Derby", a footrace from Los Angeles to New York City whose path from Los Angeles to Chicago followed US 66. The publicity worked: several dignitaries, including Will Rogers, greeted the runners at certain points on the route.
The race ended in Madison Square Garden, where the $25,000 first prize (equal to $ in ) was awarded to Andy Hartley Payne, a Cherokee runner from Oklahoma. The U.S. Highway 66 Association also placed its first advertisement in the July 16, 1932, issue of the Saturday Evening Post. The ad invited Americans to take US 66 to the 1932 Summer Olympics in Los Angeles. A U.S. Highway 66 Association office in Oklahoma received hundreds of requests for information after the ad was published. The association went on to serve as a voice for businesses along the highway until it disbanded in 1976. Traffic grew on the highway because of the geography through which it passed. Much of the highway was essentially flat, which made it a popular truck route. The Dust Bowl of the 1930s saw many farming families, mainly from Oklahoma, Arkansas, Kansas, and Texas, heading west for agricultural jobs in California. US 66 became the main road of travel for these people, often derogatorily called "Okies" or "Arkies". During the Depression, it gave some relief to communities located on the highway. The route passed through numerous small towns and, with the growing traffic on the highway, helped create the rise of mom-and-pop businesses, such as service stations, restaurants, and motor courts, all readily accessible to passing motorists. Much of the early highway, like all the other early highways, was gravel or graded dirt. Due to the efforts of the U.S. Highway 66 Association, US 66 became the first highway to be completely paved in 1938. Several places were dangerous: more than one part of the highway was nicknamed "Bloody 66", and work was gradually done to realign these segments to remove dangerous curves. One section through the Black Mountains outside Oatman, Arizona, was fraught with hairpin turns and was the steepest along the entire route, so much so that some early travelers, too frightened at the prospect of driving such a potentially dangerous road, hired locals to navigate the winding grade. The section remained as US 66 until 1953 and is still open to traffic today as the Oatman Highway. Despite such hazards in some areas, US 66 continued to be a popular route. Notable buildings include the art deco–styled U-Drop Inn, constructed in 1936 in Shamrock, in Wheeler County east of Amarillo, Texas, and listed on the National Register of Historic Places. A restored Magnolia fuel station is also located in Shamrock, as well as in Vega, in Oldham County, west of Amarillo. During World War II, more migration west occurred because of war-related industries in California. US 66, already popular and fully paved, became one of the main routes and also served for moving military equipment. Fort Leonard Wood in Missouri was located near the highway, which was quickly upgraded locally to a divided highway to help with military traffic. When Richard Feynman was working on the Manhattan Project at Los Alamos, he used to travel nearly to visit his wife, who was dying of tuberculosis, in a sanatorium located on US 66 in Albuquerque. In the 1950s, US 66 became the main highway for vacationers heading to Los Angeles. The road passed through the Painted Desert and near the Grand Canyon. Meteor Crater in Arizona was another popular stop. This sharp increase in tourism in turn gave rise to a burgeoning trade in all manner of roadside attractions, including teepee-shaped motels, frozen custard stands, Indian curio shops, and reptile farms. Meramec Caverns near St.
Louis began advertising on barns, billing itself as the "Jesse James hideout". The Big Texan advertised a free steak dinner to anyone who could consume the entire meal in one hour. The route also marked the birth of the fast-food industry: Red's Giant Hamburg in Springfield, Missouri, was the site of the first drive-through restaurant, and the first McDonald's opened in San Bernardino, California. Changes like these to the landscape further cemented 66's reputation as a near-perfect microcosm of the culture of America, now linked by the automobile. Changes in routing Many sections of US 66 underwent major realignments. In 1930, between the Illinois cities of Springfield and East St. Louis, US 66 was shifted farther east to what is now roughly Interstate 55 (I-55). The original alignment, marked as Temporary 66, followed the current Illinois Route 4 (IL 4). From downtown St. Louis to Gray Summit, Missouri, US 66 originally went down Market Street and Manchester Road, which is largely Route 100. In 1932, this route was changed and the original alignment was never viewed as anything more than temporary. The planned route was down Watson Road, which is now Route 366, but Watson Road had not yet been completed. In Oklahoma, from west of El Reno to Bridgeport, US 66 turned north to Calumet and then west to Geary, then southwest across the South Canadian River over a suspension toll bridge into Bridgeport. In 1933, a straighter cut-off route was completed from west of El Reno to south of Bridgeport, crossing the South Canadian River over a 38-span steel pony truss bridge and bypassing Calumet and Geary by several miles. From west of Santa Rosa, New Mexico, to north of Los Lunas, New Mexico, the road originally turned north from current I-40 along much of what is now US 84 to near Las Vegas, New Mexico, followed (roughly) I-25, then the since-decertified US 85, through Santa Fe and Albuquerque to Los Lunas, and then turned northwest along the present New Mexico State Road 6 (NM 6) alignment to a point near Laguna. In 1937, a straight-line route was completed from west of Santa Rosa through Moriarty and east–west through Albuquerque and west to Laguna. This newer routing saved travelers as much as four hours of travel through New Mexico. According to legend, the rerouting was done at the behest of Democratic Governor Arthur T. Hannett to punish the Republican Santa Fe Ring, which had long dominated New Mexico from Santa Fe. In 1940, the first freeway in Los Angeles was incorporated into US 66; this was the Arroyo Seco Parkway, later known as the Pasadena Freeway and now again known as the Arroyo Seco Parkway. In 1953, the Oatman Highway through the Black Mountains was completely bypassed by a new route between Kingman, Arizona, and Needles, California; by the 1960s, Oatman, Arizona, was virtually abandoned as a ghost town. Since the 1950s, as Interstates were being constructed, sections of US 66 not only saw their traffic drain to them, but often the route number itself was moved to the faster means of travel. In some cases, such as to the east of St. Louis, this was done as soon as the Interstate was finished to the next exit. The displacement of US 66 signage to the new freeways, combined with restrictions in the 1965 Highway Beautification Act that often denied merchants on the old road access to signage on the freeway, became factors in the closure of many established US 66 businesses, as travelers could no longer easily find or reach them.
In 1936, US 66 was extended from downtown Los Angeles to Santa Monica to end at US 101 Alternate, today the intersection of Olympic and Lincoln Boulevards. Even though a plaque dedicating US 66 as the Will Rogers Highway is placed at the intersection of Ocean Boulevard and Santa Monica Boulevard, the highway never terminated there. US 66 was rerouted around several larger cities via bypass or beltline routes to permit travelers to avoid city traffic congestion. Some of those cities included Springfield, Illinois; St. Louis, Missouri; Rolla, Missouri; Springfield, Missouri; Joplin, Missouri; and Oklahoma City, Oklahoma. The route also provided a foundation for many chain stores in the 1920s, which sprouted up next to it to increase business and sales. Decline The beginning of the decline for US 66 came in 1956 with the signing of the Interstate Highway Act by President Dwight D. Eisenhower, who was influenced by his experiences in 1919 as a young Army officer crossing the country in a truck convoy (following the route of the Lincoln Highway) and by his appreciation of the Autobahn network as a necessary component of a national defense system. During its nearly 60-year existence, US 66 was under constant change. As highway engineering became more sophisticated, engineers constantly sought more direct routes between cities and towns. Increased traffic led to a number of major and minor realignments of US 66 through the years, particularly in the years immediately following World War II, when Illinois began widening US 66 to four lanes through virtually the entire state from Chicago to the Mississippi River just east of St. Louis, including bypasses around virtually all of the towns. By the early to mid-1950s, Missouri had also upgraded its sections of US 66 to four lanes, complete with bypasses. Most of the newer four-lane 66 paving in both states was upgraded to freeway status in later years. One notable remnant of US 66 is Veterans Parkway, signed as the Interstate 55 Business route, in Bloomington, Illinois. The sweeping curve on the southeast side of the city was originally intended to easily handle traffic at speeds up to , as part of an effort to make US 66 an Autobahn equivalent for military transport. In 1953, the first major bypassing of US 66 occurred in Oklahoma with the opening of the Turner Turnpike between Tulsa and Oklahoma City. The new toll road paralleled US 66 for its entire length and bypassed each of the towns along US 66. The Turner Turnpike was joined in 1957 by the new Will Rogers Turnpike, which connected Tulsa with the Oklahoma–Missouri border west of Joplin, Missouri, again paralleling US 66 and bypassing the towns in northeastern Oklahoma, in addition to US 66's entire stretch through Kansas. Both Oklahoma turnpikes were soon designated as I-44, along with the US 66 bypass at Tulsa that connected the city with both turnpikes. In some cases, such as many areas in Illinois, the new Interstate Highway not only paralleled the old US 66, it actually used much of the same roadway. A typical approach was to build one new set of lanes, then move one direction of traffic to it, while retaining the original set of lanes for traffic flowing in the opposite direction. A second set of lanes for traffic flowing in the other direction would then be constructed, finally followed by abandoning the old set of lanes or converting them into a frontage road.
The same scenario was used in western Oklahoma when US 66 was initially upgraded to a four-lane highway. From Sayre through Erick to the Texas border at Texola, in 1957 and 1958, the old paving was retained for westbound traffic and a new parallel lane was built for eastbound traffic (much of this section was entirely bypassed by I-40 in 1975). On two other sections, from Canute to Elk City in 1959 and from Hydro to Weatherford in 1960, the highway was upgraded in the same way, and in 1966 a new westbound lane was constructed to bring the highway up to full Interstate standards, demoting the old US 66 paving to frontage road status. In the initial process of constructing I-40 across western Oklahoma, the state also included projects to upgrade the through routes in El Reno, Weatherford, Clinton, Canute, Elk City, Sayre, Erick, and Texola to four-lane highways, not only to provide seamless transitions from the rural sections of I-40 at both ends of town, but also to provide easy access to those cities in later years after the I-40 bypasses were completed. In New Mexico, as in most other states, rural sections of I-40 were to be constructed first, with bypasses around cities to come later. However, some business and civic leaders in cities along US 66 were completely opposed to bypassing, fearing loss of business and tax revenues. In 1963, the New Mexico Legislature enacted legislation that banned the construction of interstate bypasses around cities by local request. This legislation was short-lived, however, due to pressure from Washington and the threat of loss of federal highway funds, so it was rescinded by 1965. In 1964, Tucumcari and San Jon became the first cities in New Mexico to work out an agreement with state and federal officials in determining the locations of their I-40 bypasses, placing them as close to their business areas as possible in order to permit easy access for highway travelers. Other cities soon fell in line, including Santa Rosa, Moriarty, Grants, and Gallup, although it was not until well into the 1970s that most of those cities would be bypassed by I-40. By the late 1960s, most of the rural sections of US 66 had been replaced by I-40 across New Mexico, with the most notable exception being the strip from the Texas border at Glenrio west through San Jon to Tucumcari, which was becoming increasingly treacherous due to heavier and heavier traffic on the narrow two-lane highway. During 1968 and 1969, this section of US 66 was often referred to by locals and travelers as "Slaughter Lane" due to numerous injury and fatal accidents on this stretch. Local and area business and civic leaders and news media called upon state and federal highway officials to get I-40 built through the area. Disputes over proposed highway routing in the vicinity of San Jon held up construction plans for several years, as federal officials proposed that I-40 run some distance north of that city while local and state officials insisted on following a proposed route that touched the northern city limits of San Jon. In November 1969, a truce was reached when federal highway officials agreed to build the I-40 route just outside the city, therefore providing local businesses dependent on highway traffic easy access to and from the freeway via the north–south highway that crossed old US 66 in San Jon. I-40 was completed from Glenrio to the east side of San Jon in 1976 and extended west to Tucumcari in 1981, including the bypasses around both cities.
Originally, highway officials planned for the last section of US 66 to be bypassed by Interstates in Texas, but as was the case in many places, lawsuits held up construction of the new Interstates. The US Highway 66 Association had become a voice for the people who feared the loss of their businesses. Since the Interstates provided access only via ramps at interchanges, travelers could not pull directly off a highway into a business. At first, plans were laid out to allow mainly national chains to be placed in Interstate medians, but lawsuits effectively prevented this on all but toll roads. Some towns in Missouri threatened to sue the state if the US 66 designation was removed from the road, though the lawsuits never materialized. Several businesses were well known to be on US 66, and fear of losing the number resulted in the state of Missouri officially requesting the designation "Interstate 66" for the St. Louis to Oklahoma City section of the route, but the request was denied. In 1984, Arizona saw its final stretch of US 66 decommissioned with the completion of I-40 just north of Williams, Arizona. Finally, with the decertification of the highway by the American Association of State Highway and Transportation Officials the following year, US 66 officially ceased to exist. With the decommissioning of US 66, no single Interstate route was designated to replace it; the route was covered by Interstate 55 from Chicago to St. Louis, Interstate 44 from St. Louis to Oklahoma City, Interstate 40 from Oklahoma City to Barstow, Interstate 15 from Barstow to San Bernardino, and a combination of California State Route 66, I-210, and State Route 2 (SR 2) or I-10 from San Bernardino across the Los Angeles metropolitan area to Santa Monica. After decertification When the highway was decommissioned, sections of the road were disposed of in various ways. Within many cities, the route became a "business loop" for the Interstate. Some sections became state roads, local roads, or private drives, or were abandoned completely. Although it is no longer possible to drive US 66 uninterrupted all the way from Chicago to Los Angeles, much of the original route and alternate alignments are still drivable with careful planning. Some stretches are quite well preserved, including one between Springfield, Missouri, and Tulsa, Oklahoma. Some sections of US 66 still retain their historic "sidewalk highway" form, never having been resurfaced to make them into full-width highways. These old sections have a single paved lane, concrete curbs to mark the edge of the lane, and gravel shoulders for passing. Some states have kept the 66 designation for parts of the highway, albeit as state roads. In Missouri, Routes 366, 266, and 66 are all original sections of the highway. State Highway 66 (SH-66) in Oklahoma remains as the alternate "free" route near its turnpikes. "Historic Route 66" runs for a significant distance in and near Flagstaff, Arizona. Farther west, a long segment of US 66 in Arizona runs significantly north of I-40, and much of it is designated as State Route 66 (SR 66). This segment runs from Seligman to Kingman, Arizona, via Peach Springs. A surface street stretch between San Bernardino and La Verne (known as Foothill Boulevard) to the east of Los Angeles retains its number as SR 66. Several county roads and city streets at various places along the old route have also retained the "66" number.
Revival The first Route 66 association was founded in Arizona in 1987, followed in 1989 by associations in Missouri (incorporated in 1990) and Illinois. Other groups in the other US 66 states soon followed. In 1990, the state of Missouri declared US 66 in that state a "State Historic Route". The first "Historic Route 66" marker in Missouri was erected on Kearney Street at Glenstone Avenue in Springfield, Missouri (now replaced—the original sign has been placed at Route 66 State Park near Eureka). Other historic markers now line—at times sporadically—the entire length of the road. In many communities, local groups have painted or stenciled the "66" and U.S. Route shield or outline directly onto the road surface, along with the state's name. This is common in areas where conventional signage for "Historic Route 66" is a target of repeated theft by souvenir hunters. Various sections of the road itself have been placed on the National Register of Historic Places. The Arroyo Seco Parkway in the Los Angeles area and US 66 in New Mexico have been made into National Scenic Byways. Williams Historic Business District and Urban Route 66, Williams, were added to the National Register of Historic Places in 1984 and 1989, respectively. In 2005, the State of Missouri made the road a state scenic byway from Illinois to Kansas. In the cities of Rancho Cucamonga, Rialto, and San Bernardino in California, US 66 signs are erected along Foothill Boulevard, and also on Huntington Drive in the city of Arcadia. "Historic Route 66" signs may be found along the old route on Colorado Boulevard in Pasadena, and along Foothill Boulevard in San Dimas, La Verne, and Claremont, California. The city of Glendora, California, renamed Alosta Avenue, its section of US 66, "Route 66". Flagstaff, Arizona, renamed all but a few blocks of Santa Fe Avenue as "Route 66". Until 2017, when it was moved to the nearby Millennium Park, the annual June Chicago Blues Festival was held each year in Grant Park and included a "Route 66 Roadhouse" stage on Columbus Avenue, a few yards north of old US 66/Jackson Boulevard (both closed to traffic for the festival) and a block west of the route's former eastern terminus at US 41, Lake Shore Drive. Since 2001, Springfield, Illinois, has held its annual "International Route 66 Mother Road Festival" in its downtown district surrounding the Old State Capitol. Many preservation groups have tried to save and even landmark the old motels and neon signs along the road in some states. In 1999, President Bill Clinton signed the National Route 66 Preservation Bill, which provided $10 million in matching-fund grants for preserving and restoring the historic features along the route. In 2008, the World Monuments Fund added US 66 to the World Monuments Watch, as sites along the route such as gas stations, motels, cafés, trading posts, and drive-in movie theaters are threatened by development in urban areas and by abandonment and decay in rural areas. The National Park Service developed a Route 66 Discover Our Shared Heritage Travel Itinerary describing over one hundred individual historic sites. As the popularity and mythical stature of US 66 have continued to grow, demands have begun to mount to improve signage, return US 66 to road atlases, and revive its status as a continuous routing. The U.S. Route 66 Recommissioning Initiative is a group that seeks to recertify US 66 as a US Highway along a combination of historic and modern alignments.
The group's redesignation proposal does not enjoy universal support, as requirements that the route meet modern US Highway system specifications could force upgrades that compromise its historic integrity or require US 66 signage to be moved to Interstate highways for some portions of the route. In 2018, AASHTO designated the first sections of U.S. Bicycle Route 66, part of the United States Bicycle Route System, in Kansas and Missouri. National Museum of American History The National Museum of American History in Washington, D.C., has a section on US 66 in its "America on the Move" exhibition. In the exhibit is a portion of pavement of the route taken from Bridgeport, Oklahoma, and a restored car and truck of the type that would have been driven on the road in the 1930s. Also on display is a "Hamons Court" neon sign that hung at a gas station and tourist cabins near Hydro, Oklahoma, a "CABINS" neon sign that pointed to Ring's Rest tourist cabins in Muirkirk, Maryland, as well as several postcards a traveler sent back to his future wife while touring the route. Museums and monuments in Oklahoma Elk City, Oklahoma, has the National Route 66 & Transportation Museum, which encompasses all eight states through which the Mother Road ran. Clinton has the Oklahoma Route 66 Museum, designed to display the iconic ideas, images, and myths of the Mother Road. A memorial museum to the route's namesake, Will Rogers, is located in Claremore, while his birthplace ranch is maintained in Oologah. In Sapulpa, the Heart of Route 66 Auto Museum features a replica gas pump, the world's tallest. Tulsa has multiple sites, starting with the Cyrus Avery Centennial Plaza, located at the east end of the historic 11th Street Bridge over which the route passed, and which includes a giant sculpture called "East Meets West". The sculpture depicts the Avery family riding west in a Model T Ford meeting an eastbound horse-drawn carriage. In 2020, Avery Plaza Southwest opened at the west end of the bridge, featuring a "neon park" with replicas of the neon signs from Tulsa-area Route 66 motels of the era, including the Tulsa Auto Court, the Oil Capital Motel, and the famous bucking-bronco sign of the Will Rogers Motor Court. Future plans for that site also include a Route 66 Museum. Tulsa has also installed "Route 66 Rising", a sculpture on the road's former eastern approach to town at East Admiral Place and Mingo Road. On Tulsa's Southwest Boulevard, between W. 23rd and W. 24th Streets, there is a granite marker dedicated to Route 66 as the Will Rogers Highway, which features an image of namesake Will Rogers together with information on the route from Michael Wallis, author of Route 66: The Mother Road; and, at Howard Park just past W. 25th Street, three Indiana limestone pillars are dedicated to Route 66 through Tulsa, with Route 66 #1 devoted to Transportation, Route 66 #2 devoted to Tulsa Industry and Native American Heritage, and Route 66 #3 devoted to Art Deco Architecture and American Culture. At 3770 Southwest Blvd. is the Route 66 Historical Village, which includes a tourism information center modeled after a 1920s–1930s gas station, and other period-appropriate artifacts such as the Frisco 4500 steam locomotive with train cars. Elsewhere, Tulsa has constructed twenty-nine historical markers scattered along the 26-mile route of the highway through Tulsa, containing tourist-oriented stories, historical photos, and a map showing the location of historical sites and the other markers.
The markers are mostly along the highway's post-1932 alignment down 11th Street, with some along the road's 1926 path down Admiral Place. Museum and Hall of Fame in Illinois The Route 66 Association of Illinois maintains its Museum and Hall of Fame in Pontiac. This free museum contains memorabilia and artifacts relating to Route 66, particularly in Illinois, as well as displays relating to the members of the Hall of Fame. Among items on display are the VW Microbus and "land yacht" belonging to the late Bob Waldmire. Route description Over the years, US 66 received numerous nicknames. Right after US 66 was commissioned, it was known as "The Great Diagonal Way" because the Chicago-to-Oklahoma City stretch ran northeast to southwest. Later, US 66 was advertised by the U.S. Highway 66 Association as "The Main Street of America". The title had also been claimed by supporters of US 40, but the US 66 group was more successful. In the John Steinbeck novel The Grapes of Wrath, the highway is called "The Mother Road", its prevailing title today. Lastly, US 66 was unofficially named "The Will Rogers Highway" by the U.S. Highway 66 Association in 1952, although a sign along the road with that name appeared in the John Ford film The Grapes of Wrath, released in 1940, twelve years before the association gave the road that name. A plaque dedicating the highway to Will Rogers is still located in Santa Monica, California. There are more plaques like this; one can be found in Galena, Kansas. It was originally located on the Kansas–Missouri state line, but was moved to Howard Litch Memorial Park in 2001. California US 66 had its western terminus in California and covered in the state. The terminus was located at the Pacific Coast Highway, then US 101 Alternate and now SR 1, at Lincoln and Olympic Boulevards in Santa Monica, California. The highway ran through major cities such as Santa Monica, Los Angeles, Pasadena, and San Bernardino. San Bernardino also contains one of the two surviving Wigwam Motels along US 66. The highway had major intersections with US 101 in Hollywood, I-5 in Los Angeles, I-15 and I-40 in Barstow, and US 95 in Needles. It also ran concurrent with I-40 at California's far eastern end. Arizona In Arizona, the highway originally covered in the state. Along much of the way, US 66 paralleled I-40. It entered across the Topock Gorge, passing through Oatman along the way to Kingman. Between Kingman and Seligman, the route is still signed as SR 66. Notably, between Seligman and Flagstaff, Williams was the last point on US 66 to be bypassed by an Interstate. The route also passed through the once-incorporated community of Winona. Holbrook contains one of the two surviving Wigwam Motels on the route. New Mexico US 66 covered in the state and passed through many Indian reservations in the western half of New Mexico. East of those reservations, the highway passed through Albuquerque, Santa Fe, and Las Vegas. As in Arizona, US 66 in New Mexico paralleled I-40. Texas US 66 covered in the Texas Panhandle, travelling in an east–west line between Glenrio, on the New Mexico–Texas border, and Texola, Oklahoma. Adrian, in the western Panhandle, was notable as the midpoint of the route. East of there, the highway passed through Amarillo (famous for the Cadillac Ranch), Conway, Groom, and Shamrock. Oklahoma and Kansas The highway covered in Oklahoma. Today, it is marked by I-40 west of Oklahoma City, and SH-66 east of there.
After entering at Texola, US 66 passed through Sayre, Elk City, and Clinton before entering Oklahoma City. Beyond Oklahoma City, the highway passed through Edmond on its way to Tulsa. Past there, US 66 passed through Miami, North Miami, Commerce, and Quapaw before entering Kansas, where it covered only . Only three towns are located on the route in Kansas: Galena, Riverton, and Baxter Springs. Missouri US 66 covered in Missouri. Upon entering from Galena, Kansas, the highway passed through Joplin. From there, it passed through Carthage; Springfield, where Red's Giant Hamburg, the world's first drive-through restaurant, stood; Waynesville; Devils Elbow; Lebanon; and Rolla before passing through St. Louis. Illinois US 66 covered in Illinois. It entered Illinois in East St. Louis after crossing the Mississippi River. Near there, it passed by Cahokia Mounds, a UNESCO World Heritage Site. The highway then passed through Hamel; Springfield, passing by the Illinois State Capitol; Bloomington–Normal; Pontiac; and Gardner. It then entered the Chicago area, originally through Joliet and later through Plainfield. After passing through the suburbs, U.S. 66 entered Chicago itself, where it terminated at Lake Shore Drive starting in 1938, having originally ended at Michigan Avenue. Special routes Several alternate alignments of US 66 came into being because of traffic issues: business routes (BUS), bypass routes (BYP), alternate routes (ALT), and "optional routes" (OPT), an early designation for alternate routes. U.S. Route 66 Alternate: Bolingbrook–Gardner, Illinois U.S. Route 66 Business: Towanda–Bloomington, Illinois U.S. Route 66 Business: Lincoln, Illinois U.S. Route 66 Business: Springfield, Illinois U.S. Route 66 Business: Mitchell–East St. Louis, Illinois U.S. Route 66 Business: St. Louis–Sunset Hills, Missouri U.S. Route 66 Optional: Venice, Illinois–St. Louis, Missouri U.S. Route 66 Bypass: Mitchell, Illinois–Sunset Hills, Missouri U.S. Route 66 Business: Springfield, Missouri U.S. Route 66 Bypass: Springfield, Missouri U.S. Route 66 Alternate Business: Springfield, Missouri U.S. Route 66 Alternate: Carthage, Missouri U.S. Route 66 Business: Carterville–Webb City, Missouri U.S. Route 66 Alternate: Webb City–Joplin, Missouri U.S. Route 66 Business: Joplin, Missouri U.S. Route 66 Bypass: Joplin, Missouri U.S. Route 66 Business: Tulsa, Oklahoma U.S. Route 66 Business: Oklahoma City, Oklahoma U.S. Route 66 Business: Clinton, Oklahoma U.S. Route 66 Business: Amarillo, Texas U.S. Route 66 Business: San Bernardino, California U.S. Route 66 Alternate: Pasadena–Los Angeles, California In popular culture US 66 has been a fixture in popular culture. American pop-culture artists publicized US 66 and the experience of traveling it through song and television. Bobby Troup wrote "(Get Your Kicks on) Route 66", which was popularized by Nat King Cole with the King Cole Trio, and later covered by artists ranging from Chuck Berry and Glenn Frey to The Manhattan Transfer, John Mayer, and Brian Setzer, as well as the Rolling Stones on their eponymous debut album. The highway lent its name to the Route 66 TV series in the 1960s, which itself had a popular theme song written and arranged by Nelson Riddle. The novel The Grapes of Wrath, adapted to film in 1940, depicts the Joad family traveling to California on US 66 after being evicted from their small farm in Oklahoma.
66 is the path of a people in flight, refugees from dust and shrinking land, from the thunder of tractors and shrinking ownership, from the desert's slow northward invasion, from the twisting winds that howl up out of Texas, from the floods that bring no richness to the land and steal what little richness is there. From all of these the people are in flight, and they come into 66 from the tributary side roads, from the wagon tracks and the rutted country roads. 66 is the mother road, the road of flight. The 2006 animated film Cars had the working title Route 66, and described the decline of the fictional Radiator Springs, nearly a ghost town once its mother road, US 66, was bypassed by Interstate 40. The title was eventually changed to simply Cars to avoid confusion with the 1960s television series. On April 30, 2022, the 96th anniversary of the route's numerical designation, Route 66 was honored with a video Google Doodle.
Technology
Ground transportation networks
null
79726
https://en.wikipedia.org/wiki/Toll%20road
Toll road
A toll road, also known as a turnpike or tollway, is a public or private road for which a fee (or toll) is assessed for passage. It is a form of road pricing typically implemented to help recoup the costs of road construction and maintenance. Toll roads have existed in some form since antiquity, with tolls levied on passing travelers on foot, wagon, or horseback, a practice that continued with the automobile; many modern tollways charge fees for motor vehicles exclusively. The amount of the toll usually varies by vehicle type, weight, or number of axles, with freight trucks often charged higher rates than cars. Tolls are often collected at toll plazas, toll booths, toll houses, toll stations, toll bars, toll barriers, or toll gates. Some toll collection points are automatic: the user deposits money in a machine, which opens the gate once the correct toll has been paid. To cut costs and minimise time delay, many tolls are collected with electronic toll collection equipment, which automatically communicates with a toll payer's transponder or uses automatic number-plate recognition to charge drivers by debiting their accounts. Criticisms of toll roads include the time taken to stop and pay the toll and the cost of the toll booth operators, which can reach about one-third of revenue in some cases. Automated toll-paying systems help minimise both of these. Others object to paying "twice" for the same road, namely in fuel taxes and in tolls. In addition to toll roads, toll bridges and toll tunnels are also used by public authorities to generate funds to repay the cost of building the structures. Some tolls are set aside to pay for future maintenance or enhancement of infrastructure, or are applied to the general fund of a local government rather than being earmarked for transport facilities. This is sometimes limited or prohibited by central government legislation. Road congestion pricing schemes have also been implemented in a limited number of urban areas as a transportation demand management tool to try to reduce traffic congestion and air pollution. History Ancient times Toll roads have existed for at least the last 2,700 years, as tolls had to be paid by travellers using the Susa–Babylon highway under the regime of Ashurbanipal, who reigned in the seventh century BC. Aristotle and Pliny refer to tolls in Arabia and other parts of Asia. In India, before the fourth century BC, the Arthashastra notes the use of tolls. Germanic tribes charged tolls to travellers across mountain passes. Middle Ages In Europe during the Middle Ages, most roads were not freely open to travel, and the toll was one of many feudal fees paid for rights of usage in everyday life. Some major European "highways", such as the Via Regia and Via Imperii, offered protection to travelers in exchange for paying the royal toll. Many modern European roads were originally constructed as toll roads in order to recoup the costs of construction and maintenance, and to generate revenue from passing travelers. In 14th-century England, some of the most heavily used roads were repaired with money raised from tolls by pavage grants. Widespread toll roads sometimes restricted traffic so much, by their high tolls, that they interfered with trade and with the cheap transportation needed to alleviate local famines or shortages. Tolls were used in the Holy Roman Empire in the 14th and 15th centuries.
17th-century Dahomey After significant road construction undertaken by the West African kingdom of Dahomey, toll booths were also established with the function of collecting yearly taxes based on the goods carried by the people of Dahomey and their occupations. In some cases, officials imposed fines for public nuisance before allowing people to pass. 19th century Industrialisation in Europe required major improvements to the transport infrastructure, including many new or substantially improved roads financed from tolls. The A5 road in Britain was built to provide a robust transport link between Britain and Ireland and had a toll house every few miles. 20th century In the 20th century, road tolls were introduced in Europe to finance the construction of motorway networks and specific transport infrastructure such as bridges and tunnels. Italy was the first country in the world to build motorways reserved for fast traffic and for motor vehicles only. The Autostrada dei Laghi ("Lakes Motorway"), the first motorway built in the world, connecting Milan to Lake Como and Lake Maggiore and now forming parts of the Autostrada A8 and Autostrada A9, was devised by Piero Puricelli, a civil engineer and entrepreneur, and was inaugurated in 1924. Puricelli received the first authorization to build a public-utility fast road in 1921, completed the construction (one lane in each direction) between 1924 and 1926, and decided to cover the expenses by introducing a toll. Italy was followed by Greece, which made users pay for the network of motorways around and between its cities in 1927. Later, in the 1950s and 1960s, France, Spain, and Portugal started to build motorways largely with the aid of concessions, allowing rapid development of this infrastructure without massive state debts. Since then, road tolls have been introduced in the majority of the EU member states. In the United States, prior to the introduction of the Interstate Highway System and the large federal grants supplied to states to build it, many states constructed their first freeways by floating bonds backed by toll revenues. The first major fully grade-separated toll road was the Pennsylvania Turnpike in 1940. It was followed by other toll roads, such as the Maine Turnpike in 1947, the Blue Star Turnpike in 1950, the New Jersey Turnpike in 1951, the Garden State Parkway in 1952, the West Virginia Turnpike and New York State Thruway in 1954, the Massachusetts Turnpike in 1957, and the Chicago Skyway and Indiana Toll Road in 1958. Other toll roads were also established around this time. With the establishment of the Interstate Highway System in the late 1950s, toll road construction in the U.S. slowed down considerably, as the federal government now provided the bulk of funding to construct new freeways, and regulations required that such Interstate highways be free from tolls. Many older toll roads were added to the Interstate System under a grandfather clause that allowed tolls to continue to be collected on toll roads that predated the system. Some of these, such as the Connecticut Turnpike and the Richmond–Petersburg Turnpike, later removed their tolls when the initial bonds were paid off. Many states, however, have maintained the tolling of these roads as a consistent source of revenue. As the Interstate Highway System approached completion during the 1980s, states began constructing toll roads again to provide new freeways which were not part of the original Interstate system funding.
Houston's outer beltway of interconnected toll roads began in 1983, and many states followed over the last two decades of the 20th century, adding new toll roads such as the tollway system around Orlando, Florida, Colorado's E-470, and Georgia State Route 400. 21st century London, in an effort to reduce traffic within the city, instituted the London congestion charge in 2003, effectively making all roads within the centre of the city tolled. In the United States, as states once again looked for ways to construct new freeways without federal funding, to raise revenue for continued road maintenance, and to control congestion, new toll road construction saw significant increases during the first two decades of the 21st century. Spurred on by two innovations, electronic toll collection and high-occupancy and express toll lanes, many areas of the U.S. saw large road-building projects in major urban areas. Electronic toll collection, first introduced in the 1980s, reduces operating costs by removing toll collectors from roads. Tolled express lanes, in which certain lanes of a freeway are designated "toll only", increase revenue by allowing an otherwise free-to-use highway to collect tolls from drivers who pay to bypass traffic jams. The E-ZPass system, compatible with many state systems, is the largest ETC system in the U.S. and is used for both fully tolled highways and tolled express lanes. Maryland Route 200 and the Triangle Expressway in North Carolina were the first toll roads built without toll booths: drivers are charged via ETC or, through optical license plate recognition, billed by mail. In addition, many older toll roads are being upgraded to all-electronic tolling, abandoning the hybrid systems they adopted during the late 20th century. These include the Massachusetts Turnpike, one of the oldest American toll roads, which went all-electronic in 2016, and the Pennsylvania Turnpike, America's oldest toll freeway, which went all-electronic in 2020, along with the Illinois Tollway; the latter two accelerated their transitions due to the COVID-19 pandemic. By country Toll roads in the United Kingdom Turnpike trusts were established in England and Wales from about 1706 in response to the need for better roads than the few and poorly maintained tracks then available. Turnpike trusts were set up by individual Acts of Parliament, with powers to collect road tolls to repay loans for building, improving, and maintaining the principal roads in Britain. At their peak, in the 1830s, over 1,000 trusts administered around of turnpike road in England and Wales, taking tolls at almost 8,000 toll gates. The trusts were ultimately responsible for the maintenance and improvement of most of the main roads in England and Wales, which were used to distribute agricultural and industrial goods economically. The tolls were a source of revenue for road building and maintenance, paid for by road users and not from general taxation. The turnpike trusts were gradually abolished from the 1870s. Most trusts improved existing roads, but some new roads, usually only short stretches, were also built. Thomas Telford's Holyhead road followed Watling Street from London but was exceptional in creating a largely new route beyond Shrewsbury, and especially beyond Llangollen. Built in the early 19th century, with many toll booths along its length, most of it is now the A5.
In the modern day, one major toll road is the M6 Toll, relieving traffic congestion on the M6 in Birmingham. A few notable bridges and tunnels continue as toll roads, including the Dartford Crossing and the Mersey Gateway bridge. Toll roads in Canada Some cities in Canada had toll roads in the 19th century. Roads radiating from Toronto required users to pay at toll gates along the street (Yonge Street, Bloor Street, Davenport Road, Kingston Road), but the toll gates disappeared after 1895. Toll roads in the United States In the eastern United States of the 18th and 19th centuries, hundreds of private turnpikes were created to facilitate travel between towns and cities, typically outside built-up areas. 19th-century plank roads were usually operated as toll roads. One of the first US motor roads, the Long Island Motor Parkway (which opened on October 10, 1908), was built by William Kissam Vanderbilt II, the great-grandson of Cornelius Vanderbilt. The road was closed in 1938 when it was taken over by the state of New York in lieu of back taxes. Toll roads in Russia The first toll road in St. Petersburg appeared in the 2000s. The Western High-Speed Diameter (WHSD) is a multilane motorway running from south to north. The road connects the southwest of the city, including the Sea Port area, with the Ring Road, Vasilievsky Island, Kurortny District, and the Scandinavia motorway. The WHSD is divided into three sections: Southern, Central, and Northern. The entire stretch of the WHSD was opened for traffic in 2016. There are 16 toll plazas on the WHSD. Paying by transponder is recommended mainly for frequent drivers. The Flow+ toll collection system was implemented on the WHSD; it was designed to calculate automatically the distance driven by a vehicle equipped with a transponder, and it does not require toll plazas at each entrance to or exit from the highway. Transponders mounted on vehicles are read by signal receivers installed at the entrance and exit ramps. Toll roads in Italy In Italy the only toll roads are the autostrade (Italian for motorways). Major exceptions are the beltways around some larger cities (tangenziali), which are not part of a thoroughfare motorway, and the Autostrada A2 between Salerno and Reggio di Calabria, which is operated by the government-owned ANAS; both are toll free. The toll applies to almost all motorways not managed by ANAS. From a tariff point of view, the collection of motorway tolls is managed mainly in two ways: through the "closed motorway system" (charging by kilometres travelled) or through the "open motorway system" (a flat-rate toll). Given the multiplicity of operators, the toll is only requested when exiting the motorway and not when the motorway operator changes. This system was made possible following article 14 of law 531 of 12 August 1982. From a technical point of view, a mixed barrier/free-flow system is in operation: at the entrances and exits of the motorways, some lanes are dedicated to the collection of a ticket (on entry) and the return of the ticket with simultaneous payment (on exit), while in other lanes vehicles pass without stopping and an electronic toll device on board records the data, with the toll debited, generally to the bank account the customer has previously communicated to the operator of the device. In Italy, this occurs through the Autostrade per l'Italia interchange system.
The Autostrada A36, Autostrada A59, and Autostrada A60 are exclusively free-flow. On these motorways, those who do not have the electronic toll device on board must make the payment by subsequently communicating their details to the motorway operator (by telephone, online, or at offices dedicated to payment). The closed motorway system is applied to most Italian motorways. It requires the driver of the vehicle to collect a special ticket at the entrance to the motorway and pay the amount due upon exit. If the vehicle is equipped with an electronic toll system, the two procedures are completely automatic, and the driver need only pass through the detection lanes at the entrances and exits at a low maximum speed, without stopping. The amount is directly proportional to the distance travelled by the vehicle, the coefficient of its class, and a coefficient that varies from motorway to motorway, called the kilometre rate. Unlike in the closed motorway system, in the open system the road user does not pay based on the distance travelled. Motorway barriers are arranged along the route (though not at every junction), at which the user pays a fixed sum depending only on the class of the vehicle. The user can therefore travel along sections of the motorway without paying any toll, as barriers may not be present on the section travelled. Charging methods Road tolls were levied traditionally for a specific access (e.g. a city) or for a specific infrastructure (e.g. roads, bridges). These concepts were widely used until the last century. However, the evolution of technology made it possible to implement road tolling policies based on different concepts. The different charging concepts are designed to suit different requirements regarding the purpose of the charge, the charging policy, the network to be charged, tariff class differentiation, et cetera: Time-based charges and access fees: In a time-based charging regime, a road user has to pay for a given period of time in which they may use the associated infrastructure. For the practically identical access fees, the user pays for access to a restricted zone for one or several days. Motorway and other infrastructure tolling: The term tolling is used for charging for a well-defined, special, and comparatively costly infrastructure, like a bridge, a tunnel, a mountain pass, a motorway concession, or the whole motorway network of a country. Classically, a toll is due when a vehicle passes a tolling station, be it a manual barrier-controlled toll plaza or a free-flow multi-lane station. Distance or area charging: In a distance or area charging system, vehicles are charged per total distance driven in a defined area. Some toll roads charge a toll in only one direction. Examples include the Sydney Harbour Bridge, Sydney Harbour Tunnel, and Eastern Distributor (these all charge tolls city-bound) in Australia and, in the United States, crossings between Pennsylvania and New Jersey operated by the Delaware River Port Authority and crossings between New Jersey and New York operated by the Port Authority of New York and New Jersey. This technique is practical where the detour to avoid the toll is large or the toll differences are small. Collection methods Traditionally, tolls were paid by hand at a toll gate. Although payments may still be made in cash, it is more common now to pay using an electronic toll collection system. In some places, payment is made using transponders which are affixed to the windscreen.
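To make the closed-system ("kilometres travelled") tariff described above concrete, the following is a minimal sketch in Python of a toll computed as distance times a vehicle-class coefficient times a per-kilometre rate. All of the numbers and class names below are hypothetical placeholders for illustration, not actual published tariffs of any operator.

```python
# Minimal sketch of a closed-system toll: the amount is proportional to
# the distance travelled, a vehicle-class coefficient, and the motorway's
# kilometre rate, as described above. All values are hypothetical.

CLASS_COEFFICIENTS = {  # hypothetical class multipliers
    "car": 1.0,
    "bus": 1.5,
    "heavy_truck": 2.0,
}

def closed_system_toll(distance_km: float, vehicle_class: str,
                       km_rate: float) -> float:
    """Toll due at the exit barrier: distance x class coefficient x rate."""
    return round(distance_km * CLASS_COEFFICIENTS[vehicle_class] * km_rate, 2)

# Example: a car travelling 120 km at a hypothetical rate of 0.08 per km
# owes 120 x 1.0 x 0.08 = 9.60 at the exit.
print(closed_system_toll(120, "car", 0.08))  # 9.6
```

In an open (flat-rate) system, by contrast, the same function would reduce to a lookup of a fixed sum by vehicle class at each barrier, independent of distance.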
Three systems of toll roads exist: open (with mainline barrier toll plazas); closed (with entry/exit tolls); and open road (no toll booths, only electronic toll collection gantries at entrances and exits or at strategic locations on the median of the road). Some toll roads use a combination of the three systems. On an open toll system, all vehicles stop at various locations along the highway to pay a toll. (This is different from "open road tolling", where no vehicles stop to pay a toll.) While this may save money by removing the need to construct toll booths at every exit, it can cause traffic congestion as traffic queues at the mainline toll plazas (toll barriers). It is also possible for motorists to enter an 'open toll road' after one toll barrier and exit before the next one, thus travelling on the toll road toll-free. Most open toll roads have ramp tolls or partial access junctions to prevent this practice, known in the U.S. as "shunpiking". With a closed toll system, vehicles collect a ticket when entering the highway. In some cases, the ticket displays the toll to be paid on exit. Upon exit, the driver must pay the amount listed for the given exit. Should the ticket be lost, the driver must typically pay the maximum amount possible for travel on that highway. Short toll roads with no intermediate entries or exits may have only one toll plaza at one end, with motorists travelling in either direction paying a flat fee either when they enter or when they exit the toll road. In a variant of the closed toll system, mainline barriers are present at the two endpoints of the toll road, and each interchange has a ramp toll that is paid upon exit or entry. In this case, a motorist pays a flat fee at the ramp toll and another flat fee at the end of the toll road; no ticket is necessary. In addition, with most such systems motorists may pay tolls only with cash or change; debit and credit cards are not accepted. However, some toll roads have travel plazas with ATMs so motorists can stop and withdraw cash for the tolls. The toll is calculated by the distance travelled on the toll road or the specific exit chosen. In the United States, for instance, the Kansas Turnpike, Ohio Turnpike, New Jersey Turnpike, most of the Indiana Toll Road, New York State Thruway, and Florida's Turnpike currently implement closed systems. The Union Toll Plaza on the Garden State Parkway was the first ever to use an automated toll collection machine. A plaque commemorating the event includes the first quarter collected at its toll booths. The first major deployment of an RFID electronic toll collection system in the United States was on the Dallas North Tollway in 1989 by Amtech (see TollTag). The Amtech RFID technology used on the Dallas North Tollway was originally developed at Sandia Labs for tagging and tracking livestock. In the same year, the Telepass active transponder RFID system was introduced across Italy. Several US states now use mobile tolling platforms that allow payment via smartphone. Highway 407 in the province of Ontario, Canada, has no toll booths; instead, a transponder mounted on the windshield of each vehicle using the road is read (the rear licence plates of vehicles lacking a transponder are photographed when they enter and exit the highway). This made the highway the first all-automated toll highway in the world. A bill is mailed monthly for usage of the 407, and lower charges are levied on frequent 407 users who carry electronic transponders in their vehicles.
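The transponder-or-camera billing model used on roads such as Highway 407 can be sketched as follows. This is a minimal illustration only: the base toll, the video surcharge for plate-based billing, and the account structure are all hypothetical.

```python
# Minimal sketch of transponder-based tolling with a licence-plate
# fallback, loosely modelled on the behaviour described above.
# The base toll and video surcharge are invented numbers.
from dataclasses import dataclass

BASE_TOLL = 3.00        # hypothetical charge per gantry passage
VIDEO_SURCHARGE = 1.50  # hypothetical extra for plate-photo billing

@dataclass
class Vehicle:
    plate: str
    transponder_id: str | None = None
    prepaid_balance: float = 0.0

def charge_at_gantry(vehicle: Vehicle, monthly_bills: dict[str, float]) -> None:
    """Deduct from the prepaid account when a transponder is read;
    otherwise photograph the plate and bill the owner monthly at a
    higher rate, as described for Highway 407 and the Texas systems."""
    if vehicle.transponder_id is not None:
        vehicle.prepaid_balance -= BASE_TOLL
    else:
        monthly_bills[vehicle.plate] = (
            monthly_bills.get(vehicle.plate, 0.0) + BASE_TOLL + VIDEO_SURCHARGE
        )

bills: dict[str, float] = {}
tagged = Vehicle("ABC 123", transponder_id="T-0001", prepaid_balance=20.0)
untagged = Vehicle("XYZ 789")
charge_at_gantry(tagged, bills)
charge_at_gantry(untagged, bills)
print(tagged.prepaid_balance)  # 17.0
print(bills)                   # {'XYZ 789': 4.5}
```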
The approach has not been without controversy: in 2003 the 407 ETR settled a class action with a refund to users. Throughout most of the East Coast of the United States, E-ZPass (operated under the brand I-Pass in Illinois) is accepted on almost all toll roads. Similar systems include SunPass in Florida, FasTrak in California, Good to Go in Washington state, and ExpressToll in Colorado. The systems use a small radio transponder mounted in or on a customer's vehicle to deduct toll fares from a pre-paid account as the vehicle passes through the toll barrier. This reduces manpower at toll booths and increases traffic flow and fuel efficiency by reducing the need for complete stops to pay tolls at these locations. By designing a toll gate specifically for electronic collection, it is possible to carry out open-road tolling, where the customer does not need to slow down at all when passing through the toll gate. The U.S. state of Texas uses a system that has no toll booths: drivers without a TollTag have their license plate photographed automatically, and the registered owner receives a monthly bill, at a higher rate than for vehicles with TollTags. A similar variation of automatic collection is used on the Toll Roads in Orange County, California, where all entry and collection points are equipped with high-speed cameras that read license plates; users then have 7 calendar days to pay online using their plate number, or they can set up an account for automatic debits. The first all-electronic toll road in the northeastern United States, the InterCounty Connector (Maryland Route 200), was partially opened to traffic in February 2011, and the final segment was completed in November 2014. The first section of another all-electronic toll road, the Triangle Expressway, opened at the beginning of 2012 in North Carolina. Financing and management Some toll roads are managed under arrangements such as the Build-Operate-Transfer (BOT) system: private companies build the roads and are given a limited franchise, and ownership is transferred to the government when the franchise expires. This type of arrangement is prevalent in Australia, Canada, Hong Kong, Indonesia, India, South Korea, Japan, and the Philippines. The BOT system is a fairly new concept that is becoming more popular in the United States, with California, Delaware, Florida, Illinois, Indiana, Mississippi, Texas, and Virginia already building and operating toll roads under this scheme. Pennsylvania, Massachusetts, New Jersey, and Tennessee are also considering the BOT methodology for future highway projects. The more traditional means of managing toll roads in the United States is through semi-autonomous public authorities. Kansas, Maryland, Massachusetts, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, and West Virginia manage their toll roads in this manner. While most of the toll roads in California, Delaware, Florida, Texas, and Virginia operate under the BOT arrangement, a few of the older toll roads in these states are still operated by public authorities. In France, some toll roads are operated by private or public companies, with specific taxes collected by the state. Arguments against toll roads Toll roads have been criticised as inefficient in various ways: They require vehicles to stop or slow down (except with open road tolling); manual toll collection wastes time and raises vehicle operating costs.
Collection costs can reduce revenue by up to a third, and revenue theft is considered to be comparatively easy. Where the tolled roads are less congested than the parallel "free" roads, the traffic diversion resulting from the tolls increases congestion on the road system and reduces its usefulness. There are concerns about government surveillance associated with both electronic tolls and some forms of "classical" toll collection. A number of additional criticisms are also directed at toll roads in general: Toll roads are a form of regressive taxation; that is, compared to conventional taxes for funding roads, they benefit wealthier citizens more than poor citizens. If toll roads are owned or managed by private for-profit entities, citizens may lose money overall compared to conventional public funding, because the private owners or operators of the toll system will naturally seek to profit from the roads. The managing entities, whether public or private, may not correctly account for the overall social costs, particularly to the poor, when setting pricing, and thus may hurt the neediest segments of society. Arguments in favor of toll roads Tolls help internalize some of the externalities of automobiles, that is, costs that automobile traffic imposes on society but that are not borne by its users. Through dynamic pricing, trips that do not have to occur at rush hour can be shifted to other times of day or avoided altogether. This makes more efficient use of existing road capacity. Gallery
Technology
Road infrastructure
79745
https://en.wikipedia.org/wiki/Cluster%20munition
Cluster munition
A cluster munition is a form of air-dropped or ground-launched explosive weapon that releases or ejects smaller submunitions. Commonly, this is a cluster bomb that ejects explosive bomblets designed to kill personnel and destroy vehicles. Other cluster munitions are designed to destroy runways or electric power transmission lines. Because cluster bombs release many small bomblets over a wide area, they pose risks to civilians both during attacks and afterwards. Unexploded bomblets can kill or maim civilians and/or unintended targets long after a conflict has ended, and are costly to locate and remove. The rate at which bomblets fail to explode ranges from 2 percent to 40 percent or more. Cluster munitions are prohibited for those nations that ratified the Convention on Cluster Munitions, adopted in Dublin, Ireland, in May 2008. The Convention entered into force and became binding international law upon ratifying states on 1 August 2010, six months after being ratified by 30 states. As of 10 February 2022, a total of 123 states have joined the Convention, as 110 states parties and 13 signatories. Development The first cluster bomb to see significant operational use was the German SD-2 or Sprengbombe Dickwandig 2 kg, commonly referred to as the Butterfly Bomb. It was used in World War II to attack both civilian and military targets, including Tokyo and Kyushu. The technology was developed independently by the United States, Russia and Italy (see Thermos bomb). The US used M41 fragmentation bombs wired together in clusters of 6 or 25, fitted with highly sensitive or proximity fuzes. From the 1970s to the 1990s, cluster bombs became standard air-dropped munitions for many nations, in a wide variety of types. They have been produced by 34 countries and used in at least 23. Artillery shells that employ similar principles have existed for decades. They are typically referred to as ICM (Improved Conventional Munitions) shells. The US military slang terms for them are "firecracker" or "popcorn" shells, for the many small explosions they cause in the target area. Types A basic cluster bomb consists of a hollow shell containing anywhere from two to more than 2,000 submunitions or bomblets. Some types are dispensers designed to be retained by the aircraft after releasing their munitions. The submunitions themselves may be fitted with small parachute retarders or streamers to slow their descent (allowing the aircraft to escape the blast area in low-altitude attacks). Modern cluster bombs and submunition dispensers can be multiple-purpose weapons containing a combination of anti-armor, anti-personnel, and anti-materiel munitions. The submunitions themselves may also be multi-purpose, such as combining a shaped charge to attack armour with a fragmenting case to attack infantry, materiel, and light vehicles. They may also have an incendiary function. Since the 1990s, submunition-based weapons have been designed that deploy smart submunitions, using thermal and visual sensors to locate and attack particular targets, usually armored vehicles. Weapons of this type include the US CBU-97 sensor-fuzed weapon, first used in combat during Operation Iraqi Freedom, the 2003 invasion of Iraq. Some munitions specifically intended for anti-tank use can be set to self-destruct if they reach the ground without locating a target, theoretically reducing the risk of unintended civilian deaths and injuries.
Although smart submunition weapons are much more expensive than standard cluster bombs, fewer smart submunitions are required to defeat dispersed and mobile targets, partly offsetting their cost. Because they are designed to avoid indiscriminate area effects and unexploded ordnance risks, some smart munitions are excluded from coverage by the Convention on Cluster Munitions. Incendiary Incendiary cluster bombs are intended to start fires, just like conventional incendiary bombs (firebombs). They contain submunitions of white phosphorus or napalm, and can be combined with anti-personnel and anti-tank submunitions to hamper firefighting efforts. In urban areas, their use has been preceded by the dropping of conventional explosive bombs to fracture the roofs and walls of buildings and expose their flammable contents. One of the earliest examples is the so-called Molotov bread basket used by the Soviet Union in the Winter War of 1939–40. Incendiary clusters were extensively used by both sides in the strategic bombing of World War II; they caused firestorms and conflagrations in the bombing of Dresden and the firebombing of Tokyo. Some modern bomb submunitions deliver a highly combustible thermobaric aerosol that produces a high-pressure explosion when ignited. Anti-personnel Anti-personnel cluster bombs use explosive fragmentation to kill troops and destroy soft (unarmored) targets. Along with incendiary cluster bombs, these were among the first types of cluster bombs produced by Nazi Germany during World War II. They were used during the Blitz with delay and booby-trap fuzing to hamper firefighting and other damage-control efforts in the target areas. They were also used with a contact fuze when attacking entrenchments. These weapons were widely used during the Vietnam War, when many thousands of tons of submunitions were dropped on Laos, Cambodia and Vietnam. Anti-tank Most anti-armor munitions contain shaped-charge warheads to pierce the armor of tanks and armored fighting vehicles. In some cases, guidance is used to increase the likelihood of successfully hitting a vehicle. Modern guided submunitions, such as those found in the U.S. CBU-97, can use either a shaped charge or an explosively formed penetrator. Unguided shaped-charge submunitions are designed to be effective against entrenchments that incorporate overhead cover. To simplify supply and increase battlefield effectiveness by allowing a single type of round to be used against nearly any target, submunitions that incorporate both fragmentation and shaped-charge effects are produced. Anti-electrical An anti-electrical weapon, the CBU-94/B, was first used by the U.S. in the Kosovo War in 1999. These consist of a TMD (Tactical Munitions Dispenser) filled with 202 submunitions. Each submunition contains a small explosive charge that disperses 147 reels of fine conductive fiber of either carbon or aluminum-coated glass. Their purpose is to disrupt and damage electric power transmission systems by producing short circuits in high-voltage power lines and electrical substations. In the initial attack, these knocked out 70% of the electrical power supply in Serbia. History of use Vietnam War During the Vietnam War, the US used cluster bombs in air strikes against targets in Vietnam, Laos, and Cambodia. Of the 260 million cluster bomblets that rained down on Laos between 1964 and 1973, particularly on Xieng Khouang province, 80 million failed to explode.
As of 2009, about 7,000 people had been injured or killed by explosives left from the Vietnam War era in Vietnam's Quảng Trị province alone. South Lebanon conflict, 1978 During the South Lebanon conflict in 1978, the IDF used cluster bombs provided by the United States. According to U.S. President Jimmy Carter, this use of cluster bombs violated the legal agreement between Israel and the U.S., because the weapons had been provided for defensive purposes against an attack on Israel. Israel also transferred American weapons to Saad Haddad's Lebanese militia, a violation of American law. Carter's administration prepared to notify Congress that American weapons were being used illegally, which would have resulted in military aid to Israel being cut off. The American consul in Jerusalem informed the Israeli government of these plans and, according to Carter, Prime Minister Begin said that the operation was over. Western Sahara war, 1975–1991 During the 16-year-long conflict on the territory of Western Sahara, the Royal Moroccan Army (RMA) dropped cluster bombs. The RMA used both artillery-fired and air-dropped cluster munitions. BLU-63, M42 and MK118 submunitions were used at multiple locations in Bir Lahlou, Tifarity, Mehaires, Mijek and Awganit. More than 300 cluster strike areas have been recorded in the MINURSO Mine Action Coordination Center database. Soviet–Afghan War, 1979–1989 During the Soviet–Afghan War, the Soviets dealt harshly with Mujaheddin rebels and those who supported them, leveling entire villages to deny safe havens to their enemy and using cluster bombs. Falklands War Sea Harriers of the Royal Navy dropped BL755 cluster bombs on Argentinian positions during the Falklands War of 1982. Grenada, 1983 The United States dropped 21 Rockeye cluster bombs during its invasion of Grenada. Nagorno Karabakh War, 1992–1994, 2016, 2020 The armed conflict between Azerbaijan and Armenia in Nagorno Karabakh in 1992–1994 led to the use of cluster munitions against military and civilian targets in the region. As of 2010, some areas remained off-limits due to contamination with unexploded cluster ordnance. The HALO Trust has made major contributions to the cleanup effort. During renewed hostilities in April 2016, the HALO Trust reported the use of cluster bombs by Azerbaijan, having found cluster munitions in the villages of Nerkin Horatagh and Kiçik Qarabəy. Azerbaijan reported that Armenian forces had used cluster munitions against Azerbaijani civilians during the same period. According to the Cluster Munition Monitor report in 2010, neither Armenia nor Azerbaijan had acceded to the Convention on Cluster Munitions. Further use of cluster munitions was reported during the 2020 Nagorno-Karabakh war. The Armenian-populated city of Stepanakert came under bombardment throughout the war, beginning on the first day. Human Rights Watch reported that residential neighborhoods in Stepanakert which lacked any identifiable military targets were hit by the Azerbaijani Army with cluster munitions. Human Rights Watch also identified Azerbaijani use of cluster munitions in Hadrut, and reported the use of cluster munitions by Armenian forces during the months-long bombardment of Tartar and in missile attacks on Barda and Goranboy. Amnesty International also confirmed that Armenian forces had used cluster munitions in Barda, which resulted in the deaths of 25 Azerbaijani civilians, according to Azerbaijan.
First Chechen War, 1995 Used by Russia; see also the 1995 Shali cluster bomb attack. Yugoslavia, 1999 Used by the US, the UK and the Netherlands. About 2,000 cluster bombs containing 380,000 submunitions were dropped on Yugoslavia during the 1999 NATO bombing of Yugoslavia, of which the Royal Air Force dropped 531 RBL755 cluster bombs. On 7 May 1999, between 11:30 and 11:40, a NATO attack was carried out with two containers of cluster bombs, which fell in the central part of the city of Niš: on the pathology building next to the Medical Center of Niš in the south of the city; next to the "Banovina" building, including the main market; on the bus station next to the Niš Fortress; on the "12th February" Health Centre; and on the "Niš Express" parking lot near the Nišava River. Reports claimed that 15 civilians were killed, 8 civilians were seriously injured, 11 civilians sustained minor injuries, 120 housing units were damaged, 47 were destroyed, and 15 cars were damaged. Overall, during the operation at least 23 Serb civilians were killed by cluster munitions. At least six Serbs, including three children, were killed by bomblets after the operation ended, and land in six areas remains "cluster contaminated", according to the Serbian government, including on Mt. Kopaonik near the slopes of the ski resort. The UK contributed £86,000 to the Serbian Mine Action Centre. Afghanistan, 2001–2002 The United States used cluster munitions during the initial stages of Operation Enduring Freedom. Iraq, 1991, 2003–2006 Used by the United States and the United Kingdom. 1991: During the Gulf War, the United States, France, and the United Kingdom dropped 61,000 cluster bombs containing 20 million submunitions, according to Human Rights Watch (HRW); the US accounted for 57,000 of these. The US Department of Defense estimated that 1.2 to 1.5 million submunitions did not explode. According to human rights organizations, unexploded submunitions have caused over 4,000 civilian casualties, including 1,600 deaths, in Iraq and Kuwait. 2003–2006: The United States and its allies attacked Iraq with 13,000 cluster munitions containing two million submunitions during Operation Iraqi Freedom, according to HRW. The majority were DPICMs, or dual-purpose improved conventional munitions. At multiple times, coalition forces used cluster munitions in residential areas, and the country remains among the most contaminated to this day, with bomblets posing a threat to both US military personnel in the area and local civilians. When these weapons were fired on Baghdad on 7 April 2003, many of the bomblets failed to explode on impact. Afterward, some of them exploded when touched by civilians. USA Today reported that "the Pentagon presented a misleading picture during the war of the extent to which cluster weapons were being used and of the civilian casualties they were causing." On 26 April, General Richard Myers, chairman of the Joint Chiefs of Staff, said that the US had caused only one civilian casualty. Lebanon, 1978, 1982 and 2006 Cluster munitions were used extensively by Israel during the 1978 invasion of Lebanon and the 1982–2000 occupation of Lebanon, and by both Israel and Hezbollah in the 2006 Lebanon War. During the Israeli–Lebanese conflict in 1982, Israel used cluster munitions, many of them American-made, on targets in southern Lebanon. Israel also used cluster bombs in the 2006 Lebanon War. Two types of cluster munitions were transferred to Israel from the U.S.
The first was the CBU-58, which uses the BLU-63 bomblet; this cluster bomb is no longer in production. The second was the MK-20 Rockeye, produced by Honeywell Incorporated in Minneapolis. The CBU-58 was used by Israel in Lebanon in both 1978 and 1982. The Israeli defense company Israel Military Industries also manufactures the more up-to-date M-85 cluster bomb. Hezbollah fired Chinese-manufactured cluster munitions against Israeli civilian targets, using 122 mm rocket launchers, during the 2006 war, hitting Kiryat Motzkin, Nahariya, Karmiel, Maghar, and Safsufa. A total of 113 rockets and 4,407 submunitions were fired into Israel during the war. According to the United Nations Mine Action Service, Israel dropped up to four million submunitions on Lebanese soil, of which one million remained unexploded. According to a report prepared by Lionel Beehner for the Council on Foreign Relations, the United States restocked Israel's arsenal of cluster bombs, triggering a State Department investigation to determine whether Israel had violated secret agreements it had signed with the United States on their use. As Haaretz reported in November 2006, Israel Defense Forces Chief of Staff Dan Halutz wanted to launch an investigation into the use of cluster bombs during the Lebanon war. Halutz claimed that some cluster bombs had been fired against his direct order, which stated that cluster bombs should be used with extreme caution and not be fired into populated areas. The IDF apparently disobeyed this order. Human Rights Watch said there was evidence that Israel had used cluster bombs very close to civilian areas, described them as "unacceptably inaccurate and unreliable weapons when used around civilians", and said that "they should never be used in populated areas". Human Rights Watch accused Israel of using cluster munitions in an attack on Bilda, a Lebanese village, on 19 July, which killed 1 civilian and injured 12, including 7 children. The Israeli army defended its use of cluster munitions in the offensive in Lebanon, saying that using such munitions was "legal under international law" and that the army employed them "in accordance with international standards". Foreign Ministry spokesman Mark Regev added: "[I]f NATO countries stock these weapons and have used them in recent conflicts – in FR Yugoslavia, Afghanistan and Iraq – the world has no reason to point a finger at Israel." Georgia, 2008 Georgia and Russia were both accused of using cluster munitions during the 2008 Russo-Georgian War. Georgia admitted using cluster bombs during the war, according to Human Rights Watch, but stressed that they were used only against military targets; Russia denied use. The Georgian army used LAR-160 multiple rocket launchers to fire MK4 LAR 160-type rockets (with M-85 bomblets) with a range of 45 kilometers, the Georgian Minister of Defense said. Human Rights Watch accused the Russian Air Force of using RBK-250 cluster bombs during the conflict; a high-ranking Russian military official denied the use of cluster bombs. The Dutch government, after investigating the death of a Dutch citizen, claimed that a cluster munition had been delivered by a 9K720 Iskander tactical missile (used by Russia at the time of the conflict, and not used by Georgia). Sri Lanka, 2008/2009 In 2009, the U.S. Department of State prepared a report on incidents in Sri Lanka between January and May 2009 that could constitute violations of international humanitarian law or crimes against humanity.
This report documented the use of cluster munitions by Sri Lanka's government forces. Photos and eyewitness accounts described the use of such weapons in several attacks on civilian areas, including an incident on March 7, 2009, in Valayanmadam, where two cluster bombs exploded, causing significant civilian casualties and injuries. The reports suggest that cluster munitions were used in areas declared as safe zones for civilians. According to Gordon Weiss, who was the spokesperson for the UN in Colombo, the "largest remaining functioning hospital" in the Vanni region of Sri Lanka was bombed. The Sri Lankan military has been accused of bombing the hospital with cluster munitions, but cluster bombs were not used in that bombing. The government has denied using cluster munitions, but in 2012 unexploded cluster bombs were found, according to Allan Poston, who was the technical advisor for the UN Development Program's mine action group in Sri Lanka. An article published by The Guardian in 2016 provided photographic evidence and testimonies from former de-miners and civilians pointing to the use of Russian-made cluster bombs in areas that the government had declared "no-fire zones". Libya, 2011 It was reported in April 2011 that Colonel Gaddafi's forces had used cluster bombs during the battle of Misrata, in the conflict between government forces and rebel forces trying to overthrow Gaddafi's government. These reports were denied by the government, and US Secretary of State Hillary Clinton said she was "not aware" of the specific use of cluster or other indiscriminate weapons in Misrata, even though a New York Times investigation refuted those claims. Syria, 2012 During the Syrian uprising, a few videos of cluster bombs first appeared in 2011, and their frequency escalated near the end of 2012. As Human Rights Watch reported on 13 October 2012, "Eliot Higgins, who blogs on military hardware and tactics used in Syria under the pseudonym 'Brown Moses', compiled a list of the videos showing cluster munition remnants in Syria's various governorates." The bombs have been reported to be RBK-250 cluster bombs with AO-1 SCH bomblets (of Soviet design). Designed by the Soviet Union for use on tank and troop formations, PTAB-2.5M bomblets were used on civilian targets in Mare' in December 2012 by the Syrian government. According to the seventh annual Cluster Munition Report, there is "compelling evidence" that Russia has used cluster munitions during its involvement in Syria. South Sudan, 2013 Cluster bomb remnants were discovered by a UN de-mining team in February 2014 on a section of road near the Jonglei state capital, Bor. The strategic town was the scene of heavy fighting, changing hands several times during the South Sudanese Civil War, which erupted in the capital Juba on 15 December 2013 before spreading to other parts of the country. According to UNMAS, the site was contaminated with the remnants of up to eight cluster bombs and an unknown quantity of bomblets. Ukraine, 2014 Human Rights Watch reported that "Ukrainian government forces used cluster munitions in populated areas in Donetsk city in early October 2014", and that "circumstances indicate that anti-government forces might also have been responsible for the use of cluster munitions".
Saudi Arabian-led intervention in Yemen, 2015–2022 British-supplied and U.S.-supplied cluster bombs have been used by the Saudi Arabian-led military coalition against Houthi militias in Yemen, according to Human Rights Watch and Amnesty International. Saudi Arabia is not a signatory to the Convention on Cluster Munitions. Ethiopia, 2021 New York Times journalist Christiaan Triebert reported that Ethiopian Air Force bombings of Samre during the Tigray War were evidenced by multiple photos of the tails of Soviet-era cluster bombs, likely RBK-250s. Ethiopia is not a signatory to the Convention on Cluster Munitions. Russian invasion of Ukraine, 2022 Human Rights Watch reported the use of cluster munitions by the Russian Armed Forces during the 2022 invasion of Ukraine. The UN Human Rights Monitoring Mission in Ukraine (HRMMU) reported 16 credible allegations that Russian armed forces had used cluster munitions in populated areas, resulting in civilian casualties and other damage. On 24 February 2022, a Russian 9M79-series Tochka ballistic missile with a 9N123 cluster munition warhead containing 50 9N24 fragmentation submunitions impacted outside a hospital in Vuhledar in Donetsk Oblast, Ukraine. The attack killed four civilians and wounded ten. Further use of cluster munitions, such as the Uragan 9M27K and BM-30 Smerch 9M55K cluster rockets, is being investigated by Bellingcat through a public appeal for evidence on Twitter. According to HRW and Amnesty International, Russian troops used cluster munitions during an attack on the city of Okhtyrka on the morning of 25 February 2022, when a 220 mm Uragan rocket dropped cluster munitions on a kindergarten in the town; several people were killed, including a child. The same day, non-precision-guided missiles bearing cluster munitions were deployed against Kharkiv, killing at least nine civilians and injuring 37. The United Nations High Commissioner for Human Rights announced on 30 March 2022 that the office had credible reports indicating that Russian armed forces had used cluster munitions in populated areas of Ukraine at least 24 times since the start of the conflict on 24 February. In early March 2022, The New York Times reported the first use of a cluster munition by Ukrainian troops during the invasion, near Husarivka farm; it landed close to the Russian army's headquarters and, according to the report, nobody died in that strike. On 14 March 2022, an attack with a Tochka-U missile equipped with cluster submunitions was reported in the city of Donetsk. HRMMU confirmed at least 15 civilian deaths and 36 injuries in this incident, and at the time of its report was working to corroborate other alleged casualties and whether they were caused by cluster submunitions. On 7 December 2022, it was revealed that Ukraine was seeking access to US stockpiles of cluster munitions, due to a shortage of ammunition for HIMARS-type and 155 mm artillery systems. The US had stockpiled its cluster munitions and was considering the Ukrainian request. Ukraine claimed the munitions would give it an edge over Russian artillery, as well as preventing the depletion of other US and Western stocks. On 6 July 2023, U.S. president Joe Biden approved the provision of DPICM cluster munitions to Ukraine to help Ukrainian forces with the ongoing counteroffensive to liberate Russian-occupied southeastern Ukraine, bypassing U.S. law prohibiting the transfer of cluster munitions with a failure rate greater than one percent. The munitions could be used in both HIMARS launchers and 155 mm artillery systems.
Defense Department official Laura Cooper said that the munitions "would be useful, especially against dug-in Russian positions on the battlefield." According to the Pentagon, Ukraine would receive an "improved" version of cluster munitions with a failure rate of about 2 percent, while Russian cluster bombs fail at a rate of 40 percent or more. However, according to a report prepared for Congress, experts in cleanup operations "have frequently reported failure rates of 10% to 30%." The failure rate of cluster munitions used by Ukraine is reportedly as high as 20 percent. Paul Hannon, of the Cluster Munition Coalition (CMC), said the Biden administration's decision would "contribute to the terrible casualties being suffered by Ukrainian civilians both immediately and for years to come". On 10 July, Cambodian Prime Minister Hun Sen warned Ukraine against using cluster munitions, writing on Twitter: "It would be the greatest danger for Ukrainians for many years or up to a hundred years if cluster bombs are used in Russian-occupied areas in the territory of Ukraine." Hun Sen further cited his country's "painful experience" from the Vietnam War, which has killed or maimed tens of thousands of Cambodians. On the same day, the Royal United Services Institute (RUSI) released a study citing the use of cluster munitions in the Vietnam War: United States Army studies from that war showed that approximately 13.6 high-explosive shells were needed for each enemy soldier killed, whereas with shells firing DPICMs an average of only 1.7 shells was needed. RUSI used the example of a trench: a direct hit by a high-explosive round spreads shrapnel only "within line of sight of the point of detonation", whereas submunitions scattered across the position are not so constrained. Firing fewer shells also reduces the wear and tear on the barrels of 155 mm artillery weapons systems. On 16 July 2023, Russian President Vladimir Putin claimed that Russia had "sufficient stockpiles" of its own cluster munitions and threatened to take "reciprocal action" if Ukraine used US-supplied cluster munitions against Russian forces in Ukraine. On 20 July 2023, The Washington Post reported that Ukrainian forces had begun to use US-supplied cluster munitions against Russian forces in the south-east of the country, according to Ukrainian officials. Threat to civilians While all weapons are dangerous, cluster bombs pose a particular threat to civilians for two reasons: they have a wide area of effect, and they consistently leave behind a large number of unexploded bomblets. The unexploded bomblets can remain dangerous for decades after the end of a conflict. For example, while the United States cluster bombing of Laos stopped in 1973, cluster bombs and other unexploded munitions continued to cause over 100 casualties per year to Laotian civilians. Cluster munitions are opposed by many individuals and hundreds of groups, such as the Red Cross, the Cluster Munition Coalition and the United Nations, because of the high number of civilians that have fallen victim to the weapon. Since February 2005, Handicap International has called for cluster munitions to be prohibited and has collected hundreds of thousands of signatures to support its call. Of the 13,306 cluster munition casualties registered with Handicap International, 98% are civilians and 27% are children. The area affected by a single cluster munition, known as its footprint, can be very large; a single unguided M26 MLRS rocket can effectively blanket a wide area. In US and most allied services, the M26 has been replaced by the M30 guided missile fired from the MLRS.
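To put the failure-rate and shells-per-kill figures quoted above in perspective, here is a short back-of-the-envelope calculation in Python. The per-rocket submunition count is an illustrative assumption (644 is a figure commonly cited for the M26, but it is not stated in this article); the dud rates and the 13.6-versus-1.7 shell figures come from the text.

```python
# Back-of-the-envelope arithmetic for the figures quoted above. The
# submunition count per rocket is an illustrative assumption; the dud
# rates and the 13.6-vs-1.7 shells-per-kill figures come from the text.

def expected_duds(rockets: int, submunitions_per_rocket: int,
                  dud_rate: float) -> float:
    """Expected number of unexploded bomblets left by one volley."""
    return rockets * submunitions_per_rocket * dud_rate

# A hypothetical 12-rocket MLRS volley with 644 submunitions per rocket
# (commonly cited for the M26, treated here as an assumption):
for rate in (0.02, 0.05, 0.14, 0.40):  # rates quoted in the article
    print(f"dud rate {rate:4.0%}: ~{expected_duds(12, 644, rate):,.0f} UXO")
# Even the lowest quoted rate (2%) leaves roughly 150 live bomblets.

# RUSI's Vietnam-era figures: 13.6 HE shells versus 1.7 DPICM shells
# per enemy soldier killed, i.e. eight times fewer shells.
print(13.6 / 1.7)  # 8.0
```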
The M30 has greater range and accuracy but a smaller area of coverage. Because of their broad area of effect, cluster weapons have often been documented as striking both civilian and military objects in the target area. This characteristic is particularly problematic for civilians when cluster munitions are used in or near populated areas, as documented in a research report by Human Rights Watch. In some cases, like the Zagreb rocket attack, civilians were deliberately targeted by such weapons. Unexploded ordnance The other serious problem, common to other explosive weapons as well, is the unexploded ordnance (UXO) of cluster bomblets left behind after a strike. These bomblets may be duds, or in some cases the weapons are designed to detonate at a later stage. In both cases, the surviving bomblets are live and can explode when handled, making them a serious threat to civilians and military personnel entering the area. In effect, the UXOs can function like land mines. Even though cluster bombs are designed to explode prior to or on impact, there are always some individual submunitions that do not explode on impact. As of 2000, the US-made MLRS M26 warhead with M77 submunitions was supposed to have a 5% dud rate, but studies have shown that the actual rate can be much higher: in acceptance tests prior to the Gulf War, the rate for this type ranged from 2% to a high of 23% for rockets cooled to low temperatures before testing. The M483A1 DPICM artillery-delivered cluster bombs have a reported dud rate of 14%. In July 2023, the failure rate of Russian cluster bombs used during the 2022 Russian invasion of Ukraine was reported to be 40 percent or more. Given that each cluster bomb can contain hundreds of bomblets and be fired in volleys, even a small failure rate can lead each strike to leave behind hundreds or thousands of UXOs scattered randomly across the strike area. For example, after the 2006 Israel–Lebanon conflict, UN experts estimated that as many as one million unexploded bomblets may contaminate the hundreds of cluster munition strike sites in Lebanon. In addition, some cluster bomblets, such as the BLU-97/B used in the CBU-87, are brightly colored to increase their visibility and warn off civilians. However, the yellow color, coupled with their small and nonthreatening appearance, is attractive to young children who wrongly believe them to be toys. This problem was exacerbated in the War in Afghanistan (2001–2021), when US forces dropped humanitarian rations from airplanes in yellow packaging similar to that of the BLU-97/B, yellow being the NATO standard colour for high-explosive filler in air weapons. The rations packaging was later changed first to blue and then to clear in the hope of avoiding such hazardous confusion. As of 1993, the US military was developing new cluster bombs that it claimed could have a much lower (less than 1%) dud rate. Sensor-fuzed weapons that contain a limited number of submunitions capable of autonomously engaging armored targets may provide a viable, if costly, alternative to cluster munitions, allowing multiple targets to be engaged with one shell or bomb while avoiding the civilian deaths and injuries consistently documented from the use of cluster munitions. In the 1980s, the Spanish firm Esperanza y Cia developed a 120 mm caliber mortar bomb that contained 21 anti-armor submunitions. What made the 120 mm "Espin" unique was its electrical impact fuzing system, which totally eliminated dangerous duds.
The system relied on a capacitor in each submunition that was charged by a wind generator in the nose of the projectile after firing. If for any reason the electrical fuze fails to function on impact, the capacitor bleeds out after approximately 5 minutes, thereby neutralizing the submunition's electronic fuzing system. Civilian deaths In Vietnam, people are still being killed as a result of cluster bombs and other objects left by the US and Vietnamese military forces. Hundreds of people are killed or injured annually by unexploded ordnance. Some 270 million cluster submunitions were dropped on Laos in the 1960s and 1970s; approximately one third of these submunitions failed to explode and continue to pose a threat today. Within the first year after the end of the Kosovo War, more than 100 civilians died from unexploded bombs and mines. During the war, NATO planes dropped nearly 1,400 cluster bombs on Kosovo, and cluster bomblets make up as much as 40% of the mines and unexploded bombs there. Israel used cluster bombs in Lebanon in 1978 and in the 1980s, and those weapons continue to affect Lebanon more than two decades later. During the 2006 war, Israel fired large numbers of cluster bombs into Lebanon, containing an estimated more than four million submunitions. In the first month following the ceasefire, unexploded cluster munitions killed or injured an average of 3–4 people per day. Locations Countries and disputed territories that have been affected by cluster munitions as of August 2023 include: Afghanistan Angola Azerbaijan (mainly Nagorno Karabakh) Bosnia & Herzegovina Cambodia Chad Croatia Democratic Republic of the Congo Donetsk People's Republic Eritrea Ethiopia Germany Iran Iraq Laos Lebanon Libya Luhansk People's Republic Malta Montenegro Serbia South Sudan Sudan Syria Tajikistan Ukraine United Kingdom Vietnam Yemen Kosovo Western Sahara As of August 2019, it is unclear whether Colombia and Georgia are contaminated. Albania, the Republic of the Congo, Grenada, Guinea-Bissau, Mauritania, Mozambique, Norway, Zambia, Uganda, and Thailand completed clearance of areas contaminated by cluster munition remnants in previous years. International legislation Cluster bombs fall under the general rules of international humanitarian law, but were not specifically covered by any binding international legal instrument until the signature of the Convention on Cluster Munitions in December 2008. This international treaty stemmed from an initiative by Stoltenberg's Second Cabinet known as the Oslo Process, which was launched in February 2007 to prohibit cluster munitions. More than 100 countries agreed to the text of the resulting Convention on Cluster Munitions in May 2008, which sets out a comprehensive ban on these weapons. The treaty was signed by 94 states in Oslo on 3–4 December 2008. The Oslo Process was launched largely in response to the failure of the Convention on Certain Conventional Weapons (CCW), where five years of discussions had failed to find an adequate response to these weapons. The Cluster Munition Coalition (CMC) is campaigning for the widespread accession to and ratification of the Convention on Cluster Munitions.
A number of provisions of the Protocol on Explosive Remnants of War (Protocol V to the 1980 Convention), adopted on 28 November 2003, address some of the problems associated with the use of cluster munitions, in particular Article 9, which mandates states parties to "take generic preventive measures aimed at minimising the occurrence of explosive remnants of war". In June 2006, Belgium was the first country to issue a ban on the use (carrying), transportation, export, stockpiling, trade and production of cluster munitions, and Austria followed suit on 7 December 2007. There has been legislative activity on cluster munitions in several countries, including Austria, Australia, Denmark, France, Germany, Luxembourg, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom and the United States. In some of these countries, ongoing discussions of draft legislation banning cluster munitions, along the lines of the legislation adopted in Belgium and Austria, have since turned to ratification of the global ban treaty. Norway and Ireland have national legislation prohibiting cluster munitions and were able to deposit their instruments of ratification to the Convention on Cluster Munitions immediately after signing it in Oslo on 3 December 2008. International treaties Other weapons, such as land mines, have been banned in many countries under specific legal instruments for several years, notably the Ottawa Treaty banning land mines and some of the protocols of the Convention on Certain Conventional Weapons, which also help clear land contaminated by munitions left behind after the end of conflicts and provide international assistance to the affected populations. However, until the adoption of the Convention on Cluster Munitions in Dublin in May 2008, cluster bombs were not banned by any international treaty and were considered legitimate weapons by some governments. To increase pressure on governments to come to an international treaty, the Cluster Munition Coalition (CMC) was established on 13 November 2003 with the goal of addressing the impact of cluster munitions on civilians. International governmental deliberations in the Convention on Certain Conventional Weapons turned on the broader problem of explosive remnants of war, a problem to which cluster munitions have contributed in a significant way. There were consistent calls from the Cluster Munition Coalition, the International Committee of the Red Cross (ICRC) and a number of UN agencies, joined by approximately 30 governments, for international governmental negotiations to develop specific measures addressing the humanitarian problems cluster munitions pose. This did not prove possible in the conventional multilateral forum. After a reversal in the US position, deliberations on cluster munitions did begin within the Convention on Certain Conventional Weapons in 2007. There was a concerted effort led by the US to develop a new protocol to the Convention on Certain Conventional Weapons, but this proposal was rejected by over 50 states, together with civil society, the ICRC and UN agencies. The discussions ended with no result in November 2011, leaving the 2008 Convention on Cluster Munitions as the single international standard on the weapons. In February 2006, Belgium announced its decision to ban the weapon by law. Norway then announced a national moratorium in June, and in July Austria announced its decision to work for an international instrument on the weapon.
The international controversy over the use and impact of cluster munitions during the war between Lebanon and Israel in July and August 2006 added weight to the global campaign for a ban treaty. A new, flexible multilateral process, similar to the process that led to the 1997 ban on anti-personnel land mines (the Ottawa Treaty), began in November 2006 with an announcement in Geneva by the Government of Norway that it would convene an international meeting in Oslo in early 2007 to work towards a new treaty prohibiting cluster munitions. Forty-nine governments attended the meeting in Oslo on 22–23 February 2007 to reaffirm their commitment to a new international ban on the weapon. A follow-up meeting was held in Lima in May, where around 70 states discussed the outline of a new treaty; Hungary became the latest country to announce a moratorium, and Peru launched an initiative to make Latin America a cluster-munition-free zone. In addition, the ICRC held an experts meeting on cluster munitions in April 2007, which helped clarify the technical, legal, military and humanitarian aspects of the weapon with a view to developing an international response. Further meetings took place in Vienna on 4–7 December 2007 and in Wellington on 18–22 February 2008, where a declaration in favor of negotiations on a draft convention was adopted by more than 80 countries. By May 2008, around 120 countries had subscribed to the Wellington Declaration and participated in the Dublin Diplomatic Conference from 19 to 30 May 2008. At the end of this conference, 107 countries agreed to adopt the Convention on Cluster Munitions, which bans cluster munitions; it was opened for signature in Oslo on 3–4 December 2008, where it was signed by 94 countries. In July 2008, United States Defense Secretary Robert M. Gates implemented a policy to eliminate by 2018 all cluster bombs that do not meet new safety standards. In November 2008, ahead of the signing conference in Oslo, the European Parliament passed a resolution calling on all European Union governments to sign and ratify the Convention. On 16 February 2010, Burkina Faso became the 30th state to deposit its instrument of ratification for the Convention on Cluster Munitions, meaning that the number of states required for the Convention to enter into force had been reached. The treaty's obligations became legally binding on the 30 ratifying states on 1 August 2010, and subsequently for other ratifying states. Convention on Cluster Munitions Taking effect on 1 August 2010, the Convention on Cluster Munitions bans the stockpiling, use and transfer of virtually all existing cluster bombs and provides for the clearing up of unexploded munitions. It had been signed by 108 countries, of which 38 had ratified it by the date it took effect, but many of the world's major military powers, including the United States, Russia, India, Brazil and China, are not signatories to the treaty. The Convention entered into force on 1 August 2010, six months after it was ratified by 30 states. As of 26 September 2018, a total of 120 states had joined the Convention, as 104 states parties and 16 signatories. For an updated list of countries, see Convention on Cluster Munitions#State parties. United States policy According to the US State Department, the U.S. suspended operational use of cluster munitions in 2003; however, Amnesty International published a report that the U.S. used them in Yemen during the 2009 al-Majalah camp attack. U.S.
arguments favoring the use of cluster munitions are that their use reduces the number of aircraft and artillery systems needed to support military operations and that, if they were eliminated, significantly more money would have to be spent on new weapons, ammunition, and logistical resources. Also, militaries would need to increase their use of massed artillery and rocket barrages to get the same coverage, which would destroy or damage more key infrastructure. The U.S. was initially against any CCW limitation negotiations, but dropped its opposition in June 2007. Cluster munitions were deemed necessary for ensuring the country's national security interests, but measures were taken to address the humanitarian concerns raised by their use, and the U.S. pursued its originally suggested alternative to a total ban: technological fixes that render the weapons no longer viable after the end of a conflict. In May 2008, then-Acting Assistant Secretary of State for Political-Military Affairs Stephen Mull stated that the U.S. military relies upon cluster munitions as an important part of its war strategy. Mull emphasized that "U.S. forces simply cannot fight by design or by doctrine without holding out at least the possibility of using cluster munitions." The U.S. Army ceased procurement of GMLRS cluster rockets in December 2008 because of a submunition dud rate as high as five percent. Pentagon policy required all cluster munitions used after 2018 to have a submunition unexploded-ordnance rate of less than one percent. To achieve this, the Army undertook the Alternative Warhead Program (AWP) to assess and recommend technologies to reduce or eliminate cluster munition failures, as some 80 percent of U.S. military cluster weapons reside in Army artillery stockpiles. In July 2012, the U.S. fired at a target area with 36 Guided Multiple Launch Rocket System (GMLRS) unitary-warhead rockets. Analysis indicated that capability gaps existed, as cluster munitions required approval by the combatant commander, which reduced the advantage of responsive precision fire. The same effect could have been achieved by four Alternative Warhead (AW) GMLRS rockets, under development by the AWP to engage the same target set as cluster munitions. Without access to the AW, the operation required nine times as many rockets, cost nine times as much ($3.6 million compared to $400,000), and took 40 times as long to execute (more than 20 minutes compared to less than 30 seconds). Starting with the Omnibus Appropriations Act, 2009 (P.L. 111-8), annual Consolidated Appropriations Act legislation has placed export-moratorium language on cluster weapons. On 19 May 2011, the Defense Security Cooperation Agency issued a memorandum prohibiting the sale of all but the CBU-97B/CBU-105 Sensor Fuzed Weapon, because the others have been demonstrated to have an unexploded-ordnance rate of greater than 1%. On 30 November 2017, the Pentagon put off indefinitely its planned ban on using cluster bombs after 2018, as it had been unable to produce submunitions with failure rates of 1% or less. Since it is unclear how long it might take to achieve that standard, a months-long policy review concluded the deadline should be postponed; deployment of existing cluster weapons is left to commanders' discretion, who may authorize their use when deemed necessary "until sufficient quantities" of safer versions are developed and fielded.
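The July 2012 GMLRS comparison above is internally consistent, as a quick check of the quoted figures shows; note that the per-rocket cost is only implied by the quoted totals, not stated in the text.

```python
# Quick consistency check of the July 2012 GMLRS comparison above.
# The per-rocket cost is implied by the quoted totals, not stated.
unitary_rockets, aw_rockets = 36, 4
unitary_cost, aw_cost = 3_600_000, 400_000

print(unitary_rockets / aw_rockets)    # 9.0 -> nine times as many rockets
print(unitary_cost / aw_cost)          # 9.0 -> nine times the cost
print(unitary_cost / unitary_rockets)  # 100000.0 -> implied cost per rocket
```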
Users Countries At least 25 countries have used cluster munitions in recent history (since the creation of the United Nations). Some of them have since signed and ratified the Convention on Cluster Munitions, agreeing in principle to ban cluster bombs, while others have signed but not yet ratified it. In addition, at least three countries that no longer exist (the Soviet Union, Yugoslavia and the Democratic Republic of Afghanistan) used cluster bombs. In some cases, the responsibility for, or even the fact of, the use of cluster munitions is denied by the local government. Non-state armed groups Very few violent non-state actors have used cluster munitions and their delivery systems, due to their complexity. As of August 2019, cluster munitions had been used in conflicts by non-state actors in at least six countries; the groups involved include Croatian militias, the Northern Alliance, Serbian militias, and separatist forces of the war in Donbass. Producers At least 31 nations have produced cluster munitions in recent history (since the creation of the United Nations), and many of them still have stocks of these munitions. Most (but not all) of them have been involved in recent wars or long-unresolved international conflicts; however, most did not use the munitions they produced. Some producer countries have signed and ratified the Convention on Cluster Munitions, agreeing in principle to ban cluster bombs, and as of February 2024 a number of them had officially or unofficially ceased production of cluster munitions. Countries with stocks As of September 2022, at least 51 countries had stockpiles of cluster munitions. Some of them have signed and ratified the Convention on Cluster Munitions, agreeing in principle that their stockpiles should be destroyed, others have signed but not yet ratified it, and several are in the process of destroying their stockpiles. Financiers According to BankTrack, an international network of NGOs specializing in the oversight of financial institutions, many major banks and other financial corporations either directly financed companies producing cluster munitions or provided financial services to them in 2005–2012. Among others, the BankTrack 2012 report names ABN AMRO, Bank of America, Bank of China, Bank of Tokyo Mitsubishi UFJ, Barclays, BBVA, BNP Paribas, Citigroup, Commerzbank AG, Commonwealth Bank of Australia, Crédit Agricole, Credit Suisse Group, Deutsche Bank, Goldman Sachs, HSBC, Industrial Bank of China, ING Group, JPMorgan Chase, Korea Development Bank, Lloyds TSB, Merrill Lynch, Morgan Stanley, Royal Bank of Canada, Royal Bank of Scotland, Sberbank, Société Générale, UBS, and Wells Fargo. Many of these financial companies are connected to such producers of cluster munitions as Alliant Techsystems, China Aerospace Science and Technology Corporation, Hanwha, Norinco, Singapore Technologies Engineering, Textron, and others. According to Pax Christi, a Netherlands-based NGO, in 2009 around 137 financial institutions financed cluster munition production.
Of the 137 institutions, 63 were based in the US, another 18 in the EU (the United Kingdom, France, Germany, Italy, etc.), 16 in China, 4 in Singapore, 3 each in Canada, Japan, and Taiwan, 2 in Switzerland, and 4 other countries hosted one financial institution each.
Technology
Explosive weapons
null
80130
https://en.wikipedia.org/wiki/3%20Juno
3 Juno
Juno (minor-planet designation: 3 Juno) is a large asteroid in the asteroid belt. Juno was the third asteroid discovered, in 1804, by German astronomer Karl Harding. It is tied with three other asteroids as the thirteenth largest asteroid, and it is one of the two largest stony (S-type) asteroids, along with 15 Eunomia. (Ceres is the largest asteroid.) It is estimated to contain 1% of the total mass of the asteroid belt. History Discovery Juno was discovered on 1 September 1804, by Karl Ludwig Harding. It was the third asteroid found, but was initially considered to be a planet; it was reclassified as an asteroid and minor planet during the 1850s. Name and symbol Juno is named after the mythological Juno, the highest Roman goddess. The adjectival form is Junonian (from Latin jūnōnius), with the historical final n of the name (still seen in the French form, Junon) reappearing, analogous to Pluto ~ Plutonian. 'Juno' is the international name for the asteroid, subject to local variation: Italian Giunone, French Junon, Russian Юнона (Yunona), etc. The old astronomical symbol of Juno, still used in astrology, is a scepter topped by a star. There were many graphic variants with a more elaborated scepter, sometimes tilted at an angle to provide more room for decoration. The generic asteroid symbol of a disk with its discovery number was introduced in 1852 and quickly became the norm. The scepter symbol was resurrected for astrological use in 1973. Characteristics Juno is one of the larger asteroids, perhaps tenth by size and containing approximately 1% the mass of the entire asteroid belt. It is the second-most-massive S-type asteroid after 15 Eunomia. Even so, Juno has only 3% the mass of Ceres. The orbital period of Juno is 4.36578 years. Amongst S-type asteroids, Juno is unusually reflective, which may be indicative of distinct surface properties. This high albedo explains its relatively high apparent magnitude for a small object not near the inner edge of the asteroid belt. Juno can reach +7.5 at a favourable opposition, which is brighter than Neptune or Titan, and is the reason for it being discovered before the larger asteroids Hygiea, Europa, Davida, and Interamnia. At most oppositions, however, Juno only reaches a magnitude of around +8.7 (only just visible with binoculars), and at smaller elongations a telescope is required to resolve it. It is the main body in the Juno family. Juno was originally considered a planet, along with 1 Ceres, 2 Pallas, and 4 Vesta. In 1811, Schröter estimated Juno to be as large as 2290 km in diameter. All four were reclassified as asteroids as additional asteroids were discovered. Juno's small size and irregular shape preclude it from being designated a dwarf planet. Juno orbits at a slightly closer mean distance to the Sun than Ceres or Pallas. Its orbit is moderately inclined at around 12° to the ecliptic, but has an extreme eccentricity, greater than that of Pluto. This high eccentricity brings Juno closer to the Sun at perihelion than Vesta and further out at aphelion than Ceres. Juno had the most eccentric orbit of any known body until 33 Polyhymnia was discovered in 1854, and of asteroids over 200 km in diameter only 324 Bamberga has a more eccentric orbit. Juno rotates in a prograde direction with an axial tilt of approximately 50°. The maximum temperature on the surface, directly facing the Sun, was measured at about 293 K on 2 October 2001.
Taking into account the heliocentric distance at the time, this gives an estimated maximum temperature of 301 K (+28 °C) at perihelion. Spectroscopic studies of the Junonian surface permit the conclusion that Juno could be the progenitor of chondrites, a common type of stony meteorite composed of iron-bearing silicates such as olivine and pyroxene. Infrared images reveal that Juno possesses an approximately 100 km-wide crater or ejecta feature, the result of a geologically young impact. Based on MIDAS infrared data using the Hale Telescope, an average radius of 135.7±11 km was reported in 2004. Observations Juno was the first asteroid for which an occultation was observed. It passed in front of a dim star (SAO 112328) on 19 February 1958. Since then, several occultations by Juno have been observed, the most fruitful being the occultation of SAO 115946 on 11 December 1979, which was registered by 18 observers. Juno occulted the magnitude 11.3 star PPMX 9823370 on 29 July 2013, and 2UCAC 30446947 on 30 July 2013. Radio signals from spacecraft in orbit around Mars and on its surface have been used to estimate the mass of Juno from the tiny perturbations it induces on the motion of Mars. Juno's orbit appears to have changed slightly around 1839, very likely due to perturbations from a passing asteroid whose identity has not been determined. In 1996, Juno was imaged by the Hooker Telescope at Mount Wilson Observatory at visible and near-IR wavelengths, using adaptive optics. The images spanned a whole rotation period and revealed an irregular shape and a dark albedo feature, interpreted as a fresh impact site. Oppositions Juno reaches opposition every 15.5 months or so, with its minimum distance from Earth varying greatly depending on whether it is near perihelion or aphelion. Favorable oppositions recur every 10th opposition, i.e. just over every 13 years. The last favorable oppositions were on 1 December 2005, at a distance of 1.063 AU, magnitude 7.55, and on 17 November 2018, at a minimum distance of 1.036 AU, magnitude 7.45. The next favorable opposition will be on 30 October 2031, at a distance of 1.044 AU, magnitude 7.42.
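Both the ~15.5-month opposition cadence and the perihelion temperature estimate follow from simple relations, as the sketch below illustrates. Earth's orbital period is taken as 1 year; the 2001 observation distance is not given above, so a value of about 2.10 AU is assumed here purely for illustration:

```python
import math

# Synodic period: how often Earth "laps" Juno, giving the opposition cadence.
P_EARTH = 1.0      # years
P_JUNO = 4.36578   # years (orbital period quoted in the text)

synodic_years = 1.0 / (1.0 / P_EARTH - 1.0 / P_JUNO)
print(f"Synodic period: {synodic_years:.3f} yr = {synodic_years * 12:.1f} months")
# -> 1.297 yr = 15.6 months; ten such cycles span ~13.0 years, matching the
#    "favorable opposition every 10th opposition, just over 13 years" pattern.

# Perihelion temperature: equilibrium temperature scales as 1/sqrt(distance).
T_MEASURED = 293.0      # K, measured 2 October 2001 (from the text)
D_MEASURED = 2.10       # AU, assumed heliocentric distance at that epoch
D_PERIHELION = 1.99     # AU, approximate perihelion distance (assumed)

t_perihelion = T_MEASURED * math.sqrt(D_MEASURED / D_PERIHELION)
print(f"Estimated perihelion temperature: {t_perihelion:.0f} K")
# -> ~301 K (+28 degC), consistent with the estimate quoted above.
```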
Physical sciences
Solar System
Astronomy
80169
https://en.wikipedia.org/wiki/Buckwheat
Buckwheat
Buckwheat (Fagopyrum esculentum) or common buckwheat is a flowering plant in the knotweed family Polygonaceae cultivated for its grain-like seeds and as a cover crop. Buckwheat originated around the 6th millennium BCE in the region of what is now Yunnan Province in southwestern China. The name "buckwheat" is used for several other species, such as Fagopyrum tataricum, a domesticated food plant raised in Asia. Despite its name, buckwheat is not closely related to wheat. Buckwheat is not a cereal, nor is it even a member of the grass family. It is related to sorrel, knotweed, and rhubarb. Buckwheat is considered a pseudocereal, because its seeds' high starch content allows them to be used in cooking like a cereal. Etymology The name "buckwheat" or "beech wheat" comes from its tetrahedral seeds, which resemble the much larger seeds of the beech nut from the beech tree, and the fact that it is used like wheat. The word may be a translation of a Middle Dutch compound of "beech" and "wheat" (compare Modern Dutch boekweit; see PIE *bhago- for the "beech" element), or maybe a native formation on the same model as the Dutch word. Description Buckwheat is a herbaceous annual flowering plant with red stems and pink and white flowers resembling those of knotweeds. The leaves are arrow-shaped, and the fruits are achenes about 5–7 mm long with three prominent sharp angles. Distribution Fagopyrum esculentum is native to south-central China and Tibet, and has been introduced into suitable climates across Eurasia, Africa and the Americas. History The wild ancestor of common buckwheat is F. esculentum ssp. ancestrale. F. homotropicum is interfertile with F. esculentum, and the wild forms have a common distribution in Yunnan, a southwestern province of China. The wild ancestor of tartary buckwheat is F. tataricum ssp. potanini. Common buckwheat was domesticated and first cultivated in inland Southeast Asia, possibly around 6000 BCE, and from there spread to Central Asia and Tibet, and then to the Middle East and Europe, which it reached by the 15th century. Domestication most likely took place in the western Yunnan region of China. The oldest remains found in China so far date to circa 2600 BCE, while buckwheat pollen found in Japan dates from as early as 4000 BCE. It is the world's highest-elevation domesticate, being cultivated in Yunnan on the edge of the Tibetan Plateau or on the plateau itself. Buckwheat was one of the earliest crops introduced by Europeans to North America. Dispersal around the globe was complete by 2006, when a variety developed in Canada was widely planted in China. In India, buckwheat flour is known as kuttu ka atta and has long been culturally associated with many festivals like Shivratri, Navaratri and Janmashtami. On the days of these festivals, food items made only from buckwheat are consumed. Cultivation Buckwheat is a short-season crop that grows well in low-fertility or acidic soils; too much fertilizer – especially nitrogen – reduces yields, and the soil must be well drained. In hot climates buckwheat can be grown only by sowing late in the season, so that it blooms in cooler weather. The presence of pollinators greatly increases yield. Nectar from flowering buckwheat produces a dark-colored honey. The buckwheat plant has a branching root system with a primary taproot that reaches deeply into moist soil. Buckwheat has tetrahedral seeds and produces a flower that is usually white, although it can also be pink or yellow.
Buckwheat branches freely, as opposed to tillering or producing suckers, enabling more complete adaptation to its environment than other cereal crops. Buckwheat is raised for grain only where a brief time is available for growth, either because the buckwheat is an early or a second crop in the season, or because the total growing season is limited. It establishes quickly, which suppresses summer weeds, and can be a reliable summer cover crop fitting a small slot of warm season. Buckwheat has a growing period of only 10–12 weeks, and it can be grown in high-latitude or northern areas. Buckwheat is sometimes used as a green manure, as a plant for erosion control, or as wildlife cover and feed. Production In 2022, world production of buckwheat was 2.2 million tonnes, led by Russia with 55% of the world total, followed by China with 23% and Ukraine with 7%. Biological control F. esculentum is often studied and used as a pollen and nectar source to increase natural predator numbers to control crop pests. Berndt et al. 2002 found that the results were not entirely promising in one vineyard in New Zealand, but the same team (Berndt et al. 2006), four years later and studying a number of vineyards up and down New Zealand, did find a significant increase in 22 parasitoids, especially Dolichogenidea tasmanica, as did Irvin et al. 1999 for D. tasmanica in Canterbury orchards. Gurr et al. 1998 showed that floral nectaries (and not shelter in, or alternate hosts on, F. esculentum) were responsible for this increase. Stephens et al. 1998 first demonstrated a similarly large increase of Anacharis spp. on Micromus tasmaniae (which also commonly preys on F. esculentum). Cullen et al. 2013 found that vineyards around Waipara had not continued planting buckwheat, suggesting a need for further technique development so that buckwheat will integrate well with real-world vineyard practice. English-Loeb et al. 2003 found that buckwheat does sustain greater numbers of Anagrus parasitoids on Erythroneura leafhoppers, as did Balzan and Wäckers 2013 for Necremnus artynes and Ferracini et al. 2012 for Necremnus tutae on Tuta absoluta; it can thereby act as a pest control in tomato, potato, and to a lesser degree other solanaceous and non-solanaceous horticultural crops. Kalinova and Moudry 2003 found that companion planting with other flowers at the wrong time of year may actually cause F. esculentum to be killed by frosts it would otherwise have survived, and Colley and Luna 2000 found that it may delay its flowering so that it no longer coincides with the natural enemy it was planted to feed. Foti et al. 2016 found significant short-chain carboxylic acid variation to be the most likely explanation for biocontrol performance variation between cultivars. Phytochemicals Buckwheat contains diverse phytochemicals, including rutin, tannins, catechin-7-O-glucoside in groats, and fagopyrins, which are located mainly in the cotyledons of the buckwheat plant. It contains almost no inorganic arsenic. Aromatic compounds Salicylaldehyde (2-hydroxybenzaldehyde) was identified as a characteristic component of buckwheat aroma. 2,5-Dimethyl-4-hydroxy-3(2H)-furanone, (E,E)-2,4-decadienal, phenylacetaldehyde, 2-methoxy-4-vinylphenol, (E)-2-nonenal, decanal and hexanal also contribute to its aroma. They all have odour activity values of more than 50, but the aroma of these substances in an isolated state does not resemble buckwheat.
Nutrition A 100-gram serving of dry buckwheat is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, four B vitamins and several dietary minerals, with content especially high (47 to 65% DV) in niacin, magnesium, manganese and phosphorus. Buckwheat is 72% carbohydrates, including 10% dietary fiber, plus 3% fat, 13% protein, and 10% water. Gluten-free As buckwheat contains no gluten, it may be eaten by people with gluten-related disorders, such as celiac disease, non-celiac gluten sensitivity or dermatitis herpetiformis. Nevertheless, buckwheat products may have gluten contamination. Potential adverse effects Cases of severe allergic reactions to buckwheat and buckwheat-containing products have been reported. Buckwheat contains fluorescent phototoxic fagopyrins. Seeds, flour, and teas are generally safe when consumed in normal amounts, but fagopyrism can appear in people with diets based on high consumption of buckwheat sprouts, and particularly flowers or fagopyrin-rich buckwheat extracts. Symptoms of fagopyrism in humans may include skin inflammation in sunlight-exposed areas, cold sensitivity, and tingling or numbness in the hands. Culinary use The fruit is an achene, similar to sunflower seed, with a single seed inside a hard outer hull. The starchy endosperm is white and makes up most or all of buckwheat flour. The seed coat is green or tan, which darkens buckwheat flour. The hull is dark brown or black, and some may be included in buckwheat flour as dark specks. The dark flour is known as blé noir (black wheat) in French, along with the name sarrasin (saracen). Similarly, in Italy, it is known as grano saraceno (saracen grain). The grain can be prepared by simple dehulling, or by milling into farina, whole-grain flour, or white flour. The grain can also be fractionated into starch, germ and hull for specialized uses. Buckwheat groats are commonly used in western Asia and eastern Europe. The porridge was common, and is often considered the definitive peasant dish. It is made from roasted groats that are cooked with broth to a texture similar to rice or bulgur. The dish was taken to America by Jewish, Ukrainian, Russian, and Polish immigrants, who called it kasha (as it is known today) and mixed it with pasta or used it as a filling for cabbage rolls (stuffed cabbage), knishes, and blintzes. Groats were the most widely used form of buckwheat worldwide during the 20th century, eaten primarily in Estonia, Latvia, Lithuania, Russia, Ukraine, Belarus, and Poland, and called grechka (Greek [grain]) in Belarusian, Ukrainian and Russian. Buckwheat noodles have been eaten in Tibet and northern China for centuries, where the growing season is too short to raise wheat. A wooden press is used to press the dough into hot boiling water when making buckwheat noodles. Old presses found in Tibet and Shanxi share the same basic design features. The Japanese and Koreans may have learned the process of making buckwheat noodles from them. Buckwheat noodles play a major role in the cuisines of Japan (soba) and Korea (naengmyeon, makguksu and memil guksu). Soba noodles are the subject of deep cultural importance in Japan. The difficulty of making noodles from flour with no gluten has resulted in a traditional art developed around their manufacture by hand. A jelly called memilmuk in Korea is made from buckwheat starch.
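A quick consistency check on the nutrition percentages above (a sketch; treating fiber as a subset of carbohydrate is the standard convention of nutrition tables, not something stated explicitly in the text):

```python
# Sanity check on the macronutrient breakdown quoted above. The figures only
# sum sensibly if the 10% dietary fiber is counted as part of the 72%
# carbohydrate fraction, rather than as a separate component.

composition = {"carbohydrates": 72, "fat": 3, "protein": 13, "water": 10}
fiber_within_carbs = 10  # already included in the carbohydrate figure

total = sum(composition.values())
print(f"Sum with fiber inside carbohydrates:  {total}%")                      # 98%
print(f"Sum if fiber were counted separately: {total + fiber_within_carbs}%") # 108%, impossible
```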
Noodles also appear in Italy, with pasta di grano saraceno in the Apulia region of Southern Italy and pizzoccheri in the Valtellina region of Northern Italy. Buckwheat pancakes are eaten in several countries. They are known as buckwheat blini in Russia, galettes bretonnes in France, ployes in Acadia, poffertjes in the Netherlands, boûketes in the Wallonia region of Belgium, kuttu ki puri in India and kachhyamba in Nepal. Similar pancakes were a common food in American pioneer days. They are light and airy when baked. The buckwheat flour gives the pancakes an earthy, mildly mushroom-like taste. Yeasted patties called hrechanyky are made in Ukraine. Buckwheat is a permitted food during fasting in several traditions. On Hindu fasting days (Navaratri, Ekadashi, Janmashtami, Maha Shivaratri, etc.), fasting people in the northern states of India eat foods made of buckwheat flour. Eating cereals such as wheat or rice is prohibited on such fasting days. While strict Hindus do not even drink water during their fast, others give up cereals and salt and instead eat non-cereal foods such as buckwheat (kuttu). In the Russian Orthodox tradition, it is eaten during the St. Philip fast. Buckwheat honey is dark, strong and aromatic. Because it does not complement other honeys, it is normally produced as a monofloral honey. Beverages Beer In recent years, buckwheat has been used as a substitute for other grains in gluten-free beer. Although it is not an actual cereal (being a pseudocereal), buckwheat can be used in the same way as barley to produce a malt that can form the basis of a mash that will brew a beer without gliadin or hordein (together, gluten), and therefore can be suitable for coeliacs or others sensitive to certain glycoproteins. Whisky Buckwheat whisky is a type of distilled alcoholic beverage made entirely or principally from buckwheat. It is produced in the Brittany region of France and in the United States. Shōchū Buckwheat shōchū is a Japanese distilled beverage produced since the 16th century. Its taste is milder than that of barley shōchū. Tea Buckwheat tea, known as kuqiao-cha (苦荞茶) in China, memil-cha in Korea and soba-cha in Japan, is a tea made from roasted buckwheat. Upholstery filling Buckwheat hulls are used as filling for a variety of upholstered goods, including pillows. The hulls are durable and do not insulate or reflect heat as much as synthetic fillings. They are sometimes marketed as a natural alternative filling to feathers for those with allergies. However, medical studies measuring the health effects of pillows made with unprocessed and uncleaned hulls concluded that such buckwheat pillows contain higher levels of a potential allergen that may trigger asthma in susceptible individuals than new synthetic-filled pillows do.
Biology and health sciences
Caryophyllales
null
80207
https://en.wikipedia.org/wiki/Sodium%20chloride
Sodium chloride
Sodium chloride, commonly known as edible salt, is an ionic compound with the chemical formula NaCl, representing a 1:1 ratio of sodium and chloride ions. It is transparent or translucent, brittle, hygroscopic, and occurs as the mineral halite. In its edible form, it is commonly used as a condiment and food preservative. Large quantities of sodium chloride are used in many industrial processes, and it is a major source of sodium and chlorine compounds used as feedstocks for further chemical syntheses. Another major application of sodium chloride is deicing of roadways in sub-freezing weather. Uses In addition to the many familiar domestic uses of salt, more dominant applications of the approximately 250 million tonnes per year of production (2008 data) include chemicals and de-icing. Chemical functions Salt is used, directly or indirectly, in the production of many chemicals, which consume most of the world's production. Chlor-alkali industry It is the starting point for the chloralkali process, the industrial process to produce chlorine and sodium hydroxide, according to the chemical equation 2 NaCl + 2 H2O → Cl2 + H2 + 2 NaOH (by electrolysis). This electrolysis is conducted in either a mercury cell, a diaphragm cell, or a membrane cell. Each of these uses a different method to separate the chlorine from the sodium hydroxide. Other technologies are under development due to the high energy consumption of the electrolysis, whereby small improvements in efficiency can have large economic paybacks. Some applications of chlorine include PVC thermoplastics production, disinfectants, and solvents. Sodium hydroxide is extensively used in many different industries, enabling the production of paper, soap, aluminium, etc. Soda-ash industry Sodium chloride is used in the Solvay process to produce sodium carbonate and calcium chloride. Sodium carbonate, in turn, is used to produce glass, sodium bicarbonate, and dyes, as well as a myriad of other chemicals. In the Mannheim process, sodium chloride is used for the production of sodium sulfate and hydrochloric acid. Miscellaneous industrial uses Sodium chloride is so heavily used that even relatively minor applications can consume massive quantities. In oil and gas exploration, salt is an important component of drilling fluids in well drilling. It is used to flocculate and increase the density of the drilling fluid to overcome high downwell gas pressures. Whenever a drill hits a salt formation, salt is added to the drilling fluid to saturate the solution and minimize dissolution within the salt stratum. Salt is also used to increase the curing of concrete in cemented casings. In textiles and dyeing, salt is used as a brine rinse to separate organic contaminants, to promote "salting out" of dyestuff precipitates, and to blend with concentrated dyes to increase yield in dyebaths and make the colors look sharper. One of its main roles is to provide the positive ion charge to promote the absorption of negatively charged dye ions. In the pulp and paper industry, salt is used to manufacture sodium chlorate, which is then reacted with sulfuric acid and a reducing agent such as methanol to manufacture chlorine dioxide, a bleaching chemical that is widely used to bleach wood pulp. In tanning and leather treatment, salt is added to animal hides to inhibit microbial activity on the underside of the hides and to attract moisture back into the hides. In rubber manufacture, salt is used to make buna, neoprene, and white rubber types.
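The chloralkali equation above fixes the relative masses of the products; a minimal sketch of that stoichiometry, using standard molar masses (which are not given in the text):

```python
# Mass balance of the chloralkali electrolysis described above:
#   2 NaCl + 2 H2O -> Cl2 + H2 + 2 NaOH
# Standard molar masses in g/mol.
M = {"NaCl": 58.44, "Cl2": 70.90, "H2": 2.016, "NaOH": 40.00}

def yields_per_tonne_nacl():
    """Tonnes of each product per tonne of NaCl electrolyzed."""
    mol_nacl = 1_000_000 / M["NaCl"]              # grams in one tonne -> mol
    return {
        "Cl2": (mol_nacl / 2) * M["Cl2"] / 1e6,   # 1 Cl2 per 2 NaCl
        "H2": (mol_nacl / 2) * M["H2"] / 1e6,     # 1 H2 per 2 NaCl
        "NaOH": mol_nacl * M["NaOH"] / 1e6,       # 1 NaOH per NaCl
    }

print(yields_per_tonne_nacl())
# -> roughly 0.61 t Cl2, 0.017 t H2, and 0.68 t NaOH per tonne of salt;
#    chlorine and sodium hydroxide are therefore inescapable co-products.
```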
Salt brine and sulfuric acid are used to coagulate an emulsified latex made from chlorinated butadiene. Salt is also added to secure the soil and to provide firmness to the foundation on which highways are built. The salt acts to minimize the effects of shifting caused in the subsurface by changes in humidity and traffic load. Water softening Hard water contains calcium and magnesium ions that interfere with the action of soap and contribute to the buildup of a scale or film of alkaline mineral deposits in household and industrial equipment and pipes. Commercial and residential water-softening units use ion-exchange resins to remove the ions that cause the hardness. These resins are generated and regenerated using sodium chloride. Road salt The second major application of salt is for deicing and anti-icing of roads, both in grit bins and spread by winter service vehicles. In anticipation of snowfall, roads are optimally "anti-iced" with brine (a concentrated solution of salt in water), which prevents bonding between the snow-ice and the road surface. This procedure obviates the heavy use of salt after the snowfall. For de-icing, mixtures of brine and salt are used, sometimes with additional agents such as calcium chloride and/or magnesium chloride. The use of salt or brine becomes ineffective at very low road temperatures. Salt for de-icing in the United Kingdom predominantly comes from a single mine in Winsford in Cheshire. Prior to distribution it is mixed with <100 ppm of sodium ferrocyanide as an anticaking agent, which enables rock salt to flow freely out of the gritting vehicles despite being stockpiled prior to use. In recent years this additive has also been used in table salt. Other additives have been used in road salt to reduce the total costs. For example, in the US, a byproduct carbohydrate solution from sugar-beet processing was mixed with rock salt and adhered to road surfaces about 40% better than loose rock salt alone. Because it stayed on the road longer, the treatment did not have to be repeated several times, saving time and money. In the technical terms of physical chemistry, the minimum freezing point of a water-salt mixture is −21.12 °C, reached at 23.31 wt% of salt. Freezing near this concentration is, however, so slow that the eutectic point of −21.12 °C can be reached only with about 25 wt% of salt. Environmental effects Road salt ends up in fresh-water bodies and can harm aquatic plants and animals by disrupting their ability to osmoregulate. The omnipresence of salt in coastal areas poses a problem for any coating application, because trapped salts cause great problems in adhesion. Naval authorities and ship builders monitor the salt concentrations on surfaces during construction. Maximal salt concentrations on surfaces depend on the authority and application. The IMO regulation is most widely used, and sets salt levels to a maximum of 50 mg/m2 of soluble salts measured as sodium chloride. These measurements are made by means of a Bresle test. Salinization (increasing salinity, also known as freshwater salinization syndrome) and the subsequent increased metal leaching are an ongoing problem throughout North American and European fresh waterways. In highway de-icing, salt has been associated with corrosion of bridge decks, motor vehicles, reinforcement bar and wire, and unprotected steel structures used in road construction. Surface runoff, vehicle spraying, and windblown salt also affect soil, roadside vegetation, and local surface water and groundwater supplies.
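The eutectic figures above can be compared against the ideal, dilute-solution freezing-point-depression law; the sketch below assumes the textbook cryoscopic constant Kf = 1.86 °C·kg/mol for water and a van 't Hoff factor of 2, and is only a rough consistency check, since real brine deviates strongly from ideality at such high concentrations:

```python
# Ideal freezing-point depression for brine, compared with the eutectic
# composition quoted above (23.31 wt% salt): deltaT = i * Kf * m, with
# van 't Hoff factor i = 2 (Na+ and Cl-) and Kf = 1.86 degC*kg/mol.

M_NACL = 58.44   # g/mol

def ideal_freezing_point(wt_percent_salt: float) -> float:
    """Approximate freezing point (degC) of brine at a given wt% NaCl."""
    grams_salt = wt_percent_salt * 10.0          # per kg of solution
    kg_water = 1.0 - grams_salt / 1000.0
    molality = (grams_salt / M_NACL) / kg_water  # mol solute / kg solvent
    return -2 * 1.86 * molality

for wt in (5, 10, 23.31):
    print(f"{wt:5.2f} wt% -> {ideal_freezing_point(wt):6.1f} degC")
# 23.31 wt% gives about -19 degC: the right order of magnitude for the
# eutectic, though the ideal law undershoots the measured -21.12 degC.
```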
Although evidence of environmental loading of salt has been found during peak usage, spring rains and thaws usually dilute the concentrations of sodium in the areas where salt was applied. A 2009 study found that approximately 70% of the road salt applied in the Minneapolis-St Paul metro area is retained in the local watershed. Substitution Some agencies substitute beer, molasses, and beet juice for road salt. Airlines use glycol- and sugar-based rather than salt-based solutions for deicing. Food industry and agriculture Salt is added to food, either by the food producer or by the consumer, as a flavor enhancer, preservative, binder, fermentation-control additive, texture-control agent, and color developer. Salt consumption in the food industry is subdivided, in descending order of consumption, into other food processing, meat packers, canning, baking, dairy, and grain mill products. Salt is added to promote color development in bacon, ham and other processed meat products. As a preservative, salt inhibits the growth of bacteria. Salt acts as a binder in sausages to form a binding gel made up of meat, fat, and moisture. Salt also acts as a flavor enhancer and as a tenderizer. It is used as a cheap and safe desiccant because of its hygroscopic properties, making salting an effective method of food preservation historically; the salt draws water out of bacteria through osmotic pressure, keeping them from reproducing, and bacteria are a major source of food spoilage. Even though more effective desiccants are available, few are safe for humans to ingest. Many microorganisms cannot live in a salty environment: water is drawn out of their cells by osmosis. For this reason salt is used to preserve some foods, such as bacon, fish, or cabbage. In many dairy industries, salt is added to cheese as a color-, fermentation-, and texture-control agent. The dairy subsector includes companies that manufacture creamery butter, condensed and evaporated milk, frozen desserts, ice cream, natural and processed cheese, and specialty dairy products. In canning, salt is primarily added as a flavor enhancer and preservative. It is also used as a carrier for other ingredients, a dehydrating agent, an enzyme inhibitor and a tenderizer. In baking, salt is added to control the rate of fermentation in bread dough. It is also used to strengthen the gluten (the elastic protein-water complex in certain doughs) and as a flavor enhancer, such as a topping on baked goods. The food-processing category also contains grain mill products. These products consist of milled flour and rice and manufactured cereal breakfast foods and blended or prepared flours. Salt is also used as a seasoning agent, e.g. in potato chips, pretzels, and cat and dog food. Sodium chloride is used in veterinary medicine as an emesis-causing agent, given as a warm saturated solution. Emesis can also be caused by pharyngeal placement of a small amount of plain salt or salt crystals. When sodium chloride is applied to plants as a fertilizer in irrigation water, only moderate concentrations are used to avoid potential toxicity. Medicine Sodium chloride is used together with water as one of the primary solutions for intravenous therapy. Nasal spray often contains a saline solution. Sodium chloride is also available as an oral tablet, and is taken to treat low sodium levels.
Firefighting Sodium chloride is the principal extinguishing agent in dry-powder fire extinguishers that are used on combustible metal fires, for metals such as magnesium, zirconium, titanium, and lithium (Class D extinguishers). The salt forms an oxygen-excluding crust that smothers the fire. Cleanser Since at least medieval times, people have used salt as a cleansing agent rubbed on household surfaces. It is also used in many brands of shampoo and toothpaste and, popularly, to de-ice driveways and patches of ice. Infrared optics Sodium chloride crystals have a transmittance of at least 90% (through 1 mm) for light with wavelengths in the range 0.2–18 μm. They were used in optical components such as windows and lenses, where few non-absorbing alternatives existed in that spectral range. While inexpensive, NaCl crystals are soft and hygroscopic: when exposed to the water in ambient air, they gradually become covered with "frost". This limits the application of NaCl to dry environments, vacuum-sealed areas, or short-term uses such as prototyping. Materials that are mechanically stronger and less sensitive to moisture, such as zinc selenide and chalcogenide glasses, are now more widely used than NaCl. Chemistry Solid sodium chloride In solid sodium chloride, each ion is surrounded by six ions of the opposite charge, as expected on electrostatic grounds. The surrounding ions are located at the vertices of a regular octahedron. In the language of close-packing, the larger chloride ions (167 pm in size) are arranged in a cubic array, whereas the smaller sodium ions (116 pm) fill all the cubic gaps (octahedral voids) between them. This same basic structure is found in many other compounds and is commonly known as the NaCl structure or rock-salt crystal structure. It can be represented as a face-centered cubic (fcc) lattice with a two-atom basis, or as two interpenetrating face-centered cubic lattices. The first atom is located at each lattice point, and the second atom is located halfway between lattice points along the fcc unit-cell edge. Solid sodium chloride has a melting point of 801 °C, and liquid sodium chloride boils at 1465 °C. Atomic-resolution real-time video imaging allows visualization of the initial stage of crystal nucleation of sodium chloride. The thermal conductivity of sodium chloride as a function of temperature has a maximum of 2.03 W/(cm K) at 8 K and decreases to 0.069 W/(cm K) at 314 K (41 °C). It also decreases with doping. From cold (sub-freezing) solutions, salt crystallises with water of hydration as hydrohalite (the dihydrate NaCl·2H2O). In 2023, it was discovered that under pressure, sodium chloride can form the hydrates NaCl·8.5H2O and NaCl·13H2O. Aqueous solutions The attraction between the Na+ and Cl− ions in the solid is so strong that only highly polar solvents like water dissolve NaCl well. When dissolved in water, the sodium chloride framework disintegrates as the Na+ and Cl− ions become surrounded by polar water molecules. These solutions consist of metal aquo complexes with the formula [Na(H2O)8]+, with a Na–O distance of 250 pm. The chloride ions are also strongly solvated, each being surrounded by an average of six molecules of water. Solutions of sodium chloride have very different properties from pure water. The eutectic point is −21.12 °C at 23.31% mass fraction of salt, and the boiling point of a saturated salt solution is near 108.7 °C.
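The fcc-with-basis description of the rock-salt structure above is enough to predict halite's density; a minimal sketch, assuming the standard room-temperature lattice constant of about 564 pm (a literature value, not given in the text), with 4 NaCl formula units per conventional cell:

```python
# Density of halite from the rock-salt structure described above: a
# face-centered cubic conventional cell contains 4 NaCl formula units.
# The lattice constant a = 564.02 pm is an assumed literature value.

AVOGADRO = 6.02214076e23   # 1/mol
M_NACL = 58.44             # g/mol
A_CM = 564.02e-12 * 100    # lattice constant, converted from m to cm

mass_per_cell = 4 * M_NACL / AVOGADRO   # grams per conventional cell
density = mass_per_cell / A_CM**3       # g/cm^3

print(f"Predicted density: {density:.2f} g/cm^3")
# -> ~2.16 g/cm^3, in good agreement with measured halite (~2.17 g/cm^3).
```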
pH of sodium chloride solutions The pH of a sodium chloride solution remains ≈7 due to the extremely weak basicity of the Cl− ion, which is the conjugate base of the strong acid HCl. In other words, NaCl has no effect on system pH in diluted solutions, where the effects of ionic strength and activity coefficients are negligible. Stoichiometric and structure variants Common salt has a 1:1 molar ratio of sodium and chlorine. In 2013, compounds of sodium and chlorine with different stoichiometries were reported: five new compounds were predicted (e.g., Na3Cl, Na2Cl, Na3Cl2, NaCl3, and NaCl7). The existence of some of them has since been experimentally confirmed at high pressures and other conditions: cubic and orthorhombic NaCl3, two-dimensional metallic tetragonal Na3Cl, and exotic hexagonal NaCl. This indicates that compounds violating chemical intuition are possible in simple systems under non-ambient conditions. Occurrence Salt is found in the Earth's crust as the mineral halite (rock salt), and a tiny amount exists as suspended sea-salt particles in the atmosphere. These particles are the dominant cloud condensation nuclei far out at sea, which allow the formation of clouds in otherwise non-polluted air. Production Salt is currently mass-produced by evaporation of seawater or brine from brine wells and salt lakes. Mining of rock salt is also a major source. China is the world's main supplier of salt. In 2017, world production was estimated at 280 million tonnes, the top five producers (in million tonnes) being China (68.0), United States (43.0), India (26.0), Germany (13.0), and Canada (13.0). Salt is also a byproduct of potassium mining.
Physical sciences
Salts
null
80381
https://en.wikipedia.org/wiki/Health
Health
Health has a variety of definitions, which have been used for different purposes over time. In general, it refers to physical and emotional well-being, especially that associated with normal functioning of the human body, free of disease, pain (including mental pain), or injury. Health can be promoted by encouraging healthful activities, such as regular physical exercise and adequate sleep, and by reducing or avoiding unhealthful activities or situations, such as smoking or excessive stress. Some factors affecting health are due to individual choices, such as whether to engage in a high-risk behavior, while others are due to structural causes, such as whether the society is arranged in a way that makes it easier or harder for people to get necessary healthcare services. Still other factors are beyond both individual and group choices, such as genetic disorders. History The meaning of health has evolved over time. In keeping with the biomedical perspective, early definitions of health focused on the theme of the body's ability to function; health was seen as a state of normal function that could be disrupted from time to time by disease. An example of such a definition of health is: "a state characterized by anatomic, physiologic, and psychological integrity; ability to perform personally valued family, work, and community roles; ability to deal with physical, biological, psychological, and social stress". Then, in 1948, in a radical departure from previous definitions, the World Health Organization (WHO) proposed a definition that aimed higher, linking health to well-being, in terms of "physical, mental, and social well-being, and not merely the absence of disease and infirmity". Although this definition was welcomed by some as being innovative, it was also criticized for being vague and excessively broad, and for not being construed as measurable. For a long time, it was set aside as an impractical ideal, with most discussions of health returning to the practicality of the biomedical model. Just as there was a shift from viewing disease as a state to thinking of it as a process, the same shift happened in definitions of health. Again, the WHO played a leading role when it fostered the development of the health promotion movement in the 1980s. This brought in a new conception of health, not as a state, but in dynamic terms of resiliency, in other words, as "a resource for living". In 1984, the WHO revised its definition of health, defining it as "the extent to which an individual or group is able to realize aspirations and satisfy needs and to change or cope with the environment. Health is a resource for everyday life, not the objective of living; it is a positive concept, emphasizing social and personal resources, as well as physical capacities." Thus, health referred to the ability to maintain homeostasis and recover from adverse events. Mental, intellectual, emotional and social health referred to a person's ability to handle stress, to acquire skills, and to maintain relationships, all of which form resources for resiliency and independent living. This opens up many possibilities for health to be taught, strengthened and learned. Since the late 1970s, the federal Healthy People Program has been a visible component of the United States' approach to improving population health.
In each decade, a new version of Healthy People is issued, featuring updated goals and identifying topic areas and quantifiable objectives for health improvement during the succeeding ten years, with assessment at that point of progress or lack thereof. Progress has been limited for many objectives, leading to concerns about the effectiveness of Healthy People in shaping outcomes in the context of a decentralized and uncoordinated US health system. Healthy People 2020 gives more prominence to health promotion and preventive approaches and adds a substantive focus on the importance of addressing social determinants of health. A new expanded digital interface facilitates use and dissemination, replacing the bulky printed books produced in the past. The impact of these changes to Healthy People will be determined in the coming years. Systematic activities to prevent or cure health problems and promote good health in humans are undertaken by health care providers. Applications with regard to animal health are covered by the veterinary sciences. The term "healthy" is also widely used in the context of many types of non-living organizations and their impacts for the benefit of humans, such as in the sense of healthy communities, healthy cities or healthy environments. In addition to health care interventions and a person's surroundings, a number of other factors are known to influence the health status of individuals. These are referred to as the "determinants of health", and include the individual's background, lifestyle, economic status, social conditions and spirituality. Studies have shown that high levels of stress can affect human health. In the first decade of the 21st century, the conceptualization of health as an ability opened the door for self-assessments to become the main indicators by which to judge the performance of efforts aimed at improving human health. It also created the opportunity for every person to feel healthy, even in the presence of multiple chronic diseases or a terminal condition, and for the re-examination of determinants of health (away from the traditional approach that focuses on the reduction of the prevalence of diseases). Determinants In general, the context in which an individual lives is of great importance for both his health status and quality of life. It is increasingly recognized that health is maintained and improved not only through the advancement and application of health science, but also through the efforts and intelligent lifestyle choices of the individual and society. According to the World Health Organization, the main determinants of health include the social and economic environment, the physical environment, and the person's individual characteristics and behaviors. More specifically, key factors that have been found to influence whether people are healthy or unhealthy include the following: Education and literacy Employment/working conditions Income and social status Physical environments Social environments Social support networks Biology and genetics Culture Gender Health care services Healthy child development Personal health practices and coping skills An increasing number of studies and reports from different organizations and contexts examine the linkages between health and different factors, including lifestyles, environments, health care organization and health policy. One specific health policy brought into many countries in recent years was the introduction of the sugar tax.
Beverage taxes came into the spotlight with increasing concerns about obesity, particularly among youth; sugar-sweetened beverages have become a target of anti-obesity initiatives with increasing evidence of their link to obesity. Notable studies and reports examining the determinants of health include the 1974 Lalonde report from Canada, the Alameda County Study in California, and the series of World Health Reports of the World Health Organization, which focuses on global health issues, including access to health care and improving public health outcomes, especially in developing countries. The concept of the "health field", as distinct from medical care, emerged from the Lalonde report from Canada. The report identified three interdependent fields as key determinants of an individual's health. These are: Biomedical: all aspects of health, physical and mental, developed within the human body as influenced by genetic make-up; Environmental: all matters related to health external to the human body and over which the individual has little or no control; Lifestyle: the aggregation of personal decisions (i.e., over which the individual has control) that can be said to contribute to, or cause, illness or death. The maintenance and promotion of health is achieved through a combination of physical, mental, and social well-being, sometimes referred to as the "health triangle". The WHO's 1986 Ottawa Charter for Health Promotion further stated that health is not just a state, but also "a resource for everyday life, not the objective of living. Health is a positive concept emphasizing social and personal resources, as well as physical capacities." Focusing more on lifestyle issues and their relationships with functional health, data from the Alameda County Study suggested that people can improve their health via exercise, enough sleep, spending time in nature, maintaining a healthy body weight, limiting alcohol use, and avoiding smoking. Health and illness can co-exist, as even people with multiple chronic diseases or terminal illnesses can consider themselves healthy. The environment is often cited as an important factor influencing the health status of individuals. This includes characteristics of the natural environment, the built environment and the social environment. Factors such as clean water and air, adequate housing, and safe communities and roads all have been found to contribute to good health, especially to the health of infants and children. Some studies have shown that a lack of neighborhood recreational spaces, including natural environments, leads to lower levels of personal satisfaction and higher levels of obesity, linked to lower overall health and well-being. It has been demonstrated that increased time spent in natural environments is associated with improved self-reported health, suggesting that the positive health benefits of natural space in urban neighborhoods should be taken into account in public policy and land use. Genetics, or inherited traits from parents, also play a role in determining the health status of individuals and populations. This can encompass both the predisposition to certain diseases and health conditions, as well as the habits and behaviors individuals develop through the lifestyle of their families. For example, genetics may play a role in the manner in which people cope with stress, whether mental, emotional or physical. Obesity, for example, is a significant problem in the United States that contributes to poor mental health and causes stress in the lives of many people.
One difficulty is the issue raised by the debate over the relative strengths of genetics and other factors; interactions between genetics and environment may be of particular importance. Potential issues A number of health issues are common around the globe. Disease is one of the most common. According to GlobalIssues.org, approximately 36 million people die each year from non-communicable (i.e., not contagious) diseases, including cardiovascular disease, cancer, diabetes and chronic lung disease. Among communicable diseases, both viral and bacterial, AIDS/HIV, tuberculosis, and malaria are the most common, causing millions of deaths every year. Another health issue that causes death or contributes to other health problems is malnutrition, especially among children. One of the groups malnutrition affects most is young children: approximately 7.5 million children under the age of 5 die from malnutrition, usually brought on by not having the money to find or make food. Bodily injuries are also a common health issue worldwide. These injuries, including bone fractures and burns, can reduce a person's quality of life or can cause fatalities, including from infections that result from the injury (or from the severity of the injury in general). Lifestyle choices are contributing factors to poor health in many cases. These include smoking cigarettes, and can also include a poor diet, whether overeating or an overly restrictive diet. Physical inactivity can also contribute to health issues, as can a lack of sleep, excessive alcohol consumption, and neglect of oral hygiene. There are also genetic disorders that are inherited by the person and can vary in how much they affect the person (and when they surface). Although the majority of these health issues are preventable, a major contributor to global ill health is the fact that approximately 1 billion people lack access to health care systems. Arguably, the most common and harmful health issue is that a great many people do not have access to quality remedies. Mental health The World Health Organization describes mental health as "a state of well-being in which the individual realizes his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to make a contribution to his or her community". Mental health is not just the absence of mental illness. Mental illness is described as "the spectrum of cognitive, emotional, and behavioral conditions that interfere with social and emotional well-being and the lives and productivity of people". Having a mental illness can seriously impair, temporarily or permanently, the mental functioning of a person. Other terms include: 'mental health problem', 'illness', 'disorder', 'dysfunction'. Approximately twenty percent of all adults in the US are considered diagnosable with a mental disorder. Mental disorders are the leading cause of disability in the United States and Canada. Examples of these disorders include schizophrenia, ADHD, major depressive disorder, bipolar disorder, anxiety disorder, post-traumatic stress disorder and autism. Many factors contribute to mental health problems, including: Biological factors, such as genes or brain chemistry Family history of mental health problems Life experiences, such as trauma or abuse Maintaining Achieving and maintaining health is an ongoing process, shaped by both the evolution of health care knowledge and practices as well as personal strategies and organized interventions for staying healthy.
Diet An important way to maintain one's personal health is to have a healthy diet. A healthy diet includes a variety of plant-based and animal-based foods that provide nutrients to the body. Such nutrients provide the body with energy and keep it running. Nutrients help build and strengthen bones, muscles, and tendons, and also regulate body processes (e.g., blood pressure). Water is essential for growth, reproduction and good health. Macronutrients are consumed in relatively large quantities and include proteins, carbohydrates, and fats and fatty acids. Micronutrients – vitamins and minerals – are consumed in relatively smaller quantities, but are essential to body processes. The food guide pyramid is a pyramid-shaped guide to healthy foods divided into sections. Each section shows the recommended intake for each food group (e.g., protein, fat, carbohydrates and sugars). Making healthy food choices can lower one's risk of heart disease and the risk of developing some types of cancer, and can help one maintain one's weight within a healthy range. The Mediterranean diet is commonly associated with health-promoting effects. This is sometimes attributed to the inclusion of bioactive compounds such as phenolic compounds, isoprenoids and alkaloids. Exercise Physical exercise enhances or maintains physical fitness and overall health and wellness. It strengthens one's bones and muscles and improves the cardiovascular system. According to the National Institutes of Health, there are four types of exercise: endurance, strength, flexibility, and balance. The CDC states that physical exercise can reduce the risks of heart disease, cancer, type 2 diabetes, high blood pressure, obesity, depression, and anxiety. To counteract possible risks, it is often recommended to build up physical exercise gradually. Participating in any exercise, whether housework, yardwork, walking or standing up when talking on the phone, is often thought to be better than none when it comes to health. Sleep Sleep is an essential component of maintaining health. In children, sleep is also vital for growth and development. Ongoing sleep deprivation has been linked to an increased risk for some chronic health problems. In addition, sleep deprivation has been shown to correlate with both increased susceptibility to illness and slower recovery times from illness. In one study, people with chronic insufficient sleep, defined as six hours of sleep a night or less, were found to be four times more likely to catch a cold compared to those who reported sleeping for seven hours or more a night. Due to the role of sleep in regulating metabolism, insufficient sleep may also play a role in weight gain or, conversely, in impeding weight loss. Additionally, in 2007, the International Agency for Research on Cancer, which is the cancer research agency for the World Health Organization, declared that "shiftwork that involves circadian disruption is probably carcinogenic to humans", speaking to the dangers of long-term nighttime work due to its intrusion on sleep. In 2015, the National Sleep Foundation released updated recommendations for sleep duration requirements based on age, and concluded that "Individuals who habitually sleep outside the normal range may be exhibiting signs or symptoms of serious health problems or, if done volitionally, may be compromising their health and well-being." Role of science Health science is the branch of science focused on health.
There are two main approaches to health science: the study and research of the body and health-related issues to understand how humans (and animals) function, and the application of that knowledge to improve health and to prevent and cure diseases and other physical and mental impairments. The science builds on many sub-fields, including biology, biochemistry, physics, epidemiology, pharmacology, and medical sociology. Applied health sciences endeavor to better understand and improve human health through applications in areas such as health education, biomedical engineering, biotechnology and public health. Organized interventions to improve health based on the principles and procedures developed through the health sciences are provided by practitioners trained in medicine, nursing, nutrition, pharmacy, social work, psychology, occupational therapy, physical therapy and other health care professions. Clinical practitioners focus mainly on the health of individuals, while public health practitioners consider the overall health of communities and populations. Workplace wellness programs are increasingly being adopted by companies for their value in improving the health and well-being of their employees, as are school health services to improve the health and well-being of children. Role of medicine and medical science Contemporary medicine is in general conducted within health care systems. Legal, credentialing and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. The characteristics of any given health care system have significant impact on the way medical care is provided. From ancient times, Christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the Catholic Church today remains the largest non-government provider of medical services in the world. Advanced industrial countries (with the exception of the United States) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single-payer health care system, or compulsory private or co-operative health insurance. This is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. Delivery may be via private medical practices, state-owned hospitals and clinics, or charities, most commonly by a combination of all three. Most tribal societies provide no guarantee of healthcare for the population as a whole. In such societies, healthcare is available to those that can afford to pay for it, have self-insured it (either directly or as part of an employment contract), or who may be covered by care financed by the government or tribe directly. Transparency of information is another factor defining a delivery system. Access to information on conditions, treatments, quality, and pricing greatly affects the choice by patients/consumers and, therefore, the incentives of medical professionals. While the US healthcare system has come under fire for its lack of openness, new legislation may encourage greater openness. There is a perceived tension between the need for transparency on the one hand and such issues as patient confidentiality and the possible exploitation of information for commercial gain on the other. Delivery Provision of medical care is classified into primary, secondary, and tertiary care categories.
Primary care medical services are provided by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. These occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. About 90% of medical visits can be treated by the primary care provider. These include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes. Secondary care medical services are provided by medical specialists in their offices or clinics, or at local community hospitals, for a patient referred by a primary care provider who first diagnosed or treated the patient. Referrals are made for those patients who require the expertise or procedures performed by specialists. These include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. Some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. Tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. These include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high-risk pregnancy, radiation oncology, etc. Modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. In low-income countries, modern healthcare is often too expensive for the average person. International healthcare policy researchers have advocated that "user fees" be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. Separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs. Role of public health Public health has been described as "the science and art of preventing disease, prolonging life and promoting health through the organized efforts and informed choices of society, organizations, public and private, communities and individuals." It is concerned with threats to the overall health of a community based on population health analysis. The population in question can be as small as a handful of people or as large as all the inhabitants of several continents (for instance, in the case of a pandemic). Public health has many sub-fields, but typically includes the interdisciplinary categories of epidemiology, biostatistics and health services. Environmental health, community health, behavioral health, and occupational health are also important areas of public health. The focus of public health interventions is to prevent and manage diseases, injuries and other health conditions through surveillance of cases and the promotion of healthy behavior, communities, and (in aspects relevant to human health) environments.
Its aim is to prevent health problems from happening or recurring by implementing educational programs, developing policies, administering services and conducting research. In many cases, treating a disease or controlling a pathogen can be vital to preventing it in others, such as during an outbreak. Vaccination programs and distribution of condoms to prevent the spread of communicable diseases are examples of common preventive public health measures, as are educational campaigns to promote vaccination and the use of condoms (including overcoming resistance to them). Public health also takes various actions to limit the health disparities between different areas of the country and, in some cases, the continent or world. One issue is the access of individuals and communities to health care in terms of financial, geographical or socio-cultural constraints. Applications of the public health system include the areas of maternal and child health, health services administration, emergency response, and prevention and control of infectious and chronic diseases. The great positive impact of public health programs is widely acknowledged. Due in part to the policies and actions developed through public health, the 20th century registered a decrease in the mortality rates for infants and children and a continual increase in life expectancy in most parts of the world. For example, it is estimated that life expectancy has increased for Americans by thirty years since 1900, and worldwide by six years since 1990. Self-care strategies Personal health depends partially on the active, passive, and assisted cues people observe and adopt about their own health. These include personal actions for preventing or minimizing the effects of a disease, usually a chronic condition, through integrative care. They also include personal hygiene practices to prevent infection and illness, such as bathing and washing hands with soap; brushing and flossing teeth; storing, preparing and handling food safely; and many others. The information gleaned from personal observations of daily living – such as about sleep patterns, exercise behavior, nutritional intake and environmental features – may be used to inform personal decisions and actions (e.g., "I feel tired in the morning so I am going to try sleeping on a different pillow"), as well as clinical decisions and treatment plans (e.g., a patient who notices his or her shoes are tighter than usual may be having an exacerbation of congestive heart failure, and may require diuretic medication to reduce fluid overload). Personal health also depends partially on the social structure of a person's life. The maintenance of strong social relationships, volunteering, and other social activities have been linked to positive mental health and also increased longevity. One American study among seniors over age 70 found that frequent volunteering was associated with reduced risk of dying compared with older persons who did not volunteer, regardless of physical health status. Another study from Singapore reported that volunteering retirees had significantly better cognitive performance scores, fewer depressive symptoms, and better mental well-being and life satisfaction than non-volunteering retirees. Prolonged psychological stress may negatively impact health, and has been cited as a factor in cognitive impairment with aging, depressive illness, and expression of disease. Stress management is the application of methods to either reduce stress or increase tolerance to stress.
Relaxation techniques are physical methods used to relieve stress. Psychological methods include cognitive therapy, meditation, and positive thinking, which work by reducing response to stress. Improving relevant skills, such as problem solving and time management skills, reduces uncertainty and builds confidence, which also reduces the reaction to stress-causing situations where those skills are applicable. Occupational In addition to safety risks, many jobs also present risks of disease, illness and other long-term health problems. Among the most common occupational diseases are various forms of pneumoconiosis, including silicosis and coal worker's pneumoconiosis (black lung disease). Asthma is another respiratory illness to which many workers are vulnerable. Workers may also be vulnerable to skin diseases, including eczema, dermatitis, urticaria, sunburn, and skin cancer. Other occupational diseases of concern include carpal tunnel syndrome and lead poisoning. As the number of service sector jobs has risen in developed countries, more and more jobs have become sedentary, presenting a different array of health problems than those associated with manufacturing and the primary sector. Contemporary problems, such as the growing rate of obesity and issues relating to stress and overwork in many countries, have further complicated the interaction between work and health. Many governments view occupational health as a social challenge and have formed public organizations to ensure the health and safety of workers. Examples of these include the British Health and Safety Executive and, in the United States, the National Institute for Occupational Safety and Health, which conducts research on occupational health and safety, and the Occupational Safety and Health Administration, which handles regulation and policy relating to worker safety and health.
Biology and health sciences
Health, fitness, and medicine
null
80393
https://en.wikipedia.org/wiki/Echidna
Echidna
Echidnas, sometimes known as spiny anteaters, are quill-covered monotremes (egg-laying mammals) belonging to the family Tachyglossidae, living in Australia and New Guinea. The four extant species of echidnas and the platypus are the only living mammals that lay eggs and the only surviving members of the order Monotremata. The diet of some species consists of ants and termites, but they are not closely related to the American true anteaters or to hedgehogs. Their young are called puggles. Echidnas evolved between 20 and 50 million years ago, descending from a platypus-like monotreme. This ancestor was aquatic, but echidnas adapted to life on land. Etymology Echidnas are possibly named after Echidna, a creature from Greek mythology who was half-woman, half-snake, as the animal was perceived to have qualities of both mammals and reptiles. An alternative explanation is a confusion with the Ancient Greek ekhînos, meaning "hedgehog" or "sea urchin". Physical characteristics Echidnas are medium-sized, solitary mammals covered with coarse hair and spines. The spines are modified hairs and are made of keratin, the same fibrous protein that makes up fur, claws, nails, and horn sheaths in animals. Superficially, they resemble the anteaters of South America and other spiny mammals such as hedgehogs and porcupines. They are usually black or brown in coloration. There have been several reports of albino echidnas with pink eyes and white spines. They have elongated and slender snouts that function as both mouth and nose, and which have electrosensors to find earthworms, termites, ants, and other burrowing prey. This is similar to the platypus, which has 40,000 electroreceptors on its bill, but the long-beaked echidna has only 2,000, while the short-beaked echidna, which lives in a drier environment, has no more than 400 at the tip of its snout. Echidnas have short, strong limbs with large claws, and are powerful diggers. Their hind claws are elongated and curved backwards to aid in digging. Echidnas have tiny mouths and toothless jaws, and feed by tearing open soft logs, anthills and the like, and licking off prey with their long, sticky tongues. The ears are slits on the sides of their heads under the spines. The external ear is created by a large cartilaginous funnel, deep in the muscle. At 33 °C (91.4 °F), echidnas also possess the second-lowest active body temperature of all mammals, behind the platypus. Despite their appearance, echidnas are capable swimmers, as they evolved from platypus-like ancestors. When swimming, they expose their snout and some of their spines, and are known to journey to water to bathe. The first European drawing of an echidna was made in Adventure Bay, Tasmania, by HMS Providence's third lieutenant George Tobin during William Bligh's second breadfruit voyage. Diet The short-beaked echidna's diet consists mostly of ants and termites, while the Zaglossus (long-beaked) species typically eat worms and insect larvae. The tongues of long-beaked echidnas have sharp, tiny spines that help them capture their prey. They have no teeth, so they break down their food by grinding it between the bottoms of their mouths and their tongues. Echidnas' faeces are long and cylindrical in shape; they are usually broken and unrounded, and composed largely of dirt and ant-hill material. Like all mammals, echidnas feed their young on milk, which contains various factors to sustain their growth and development. Habitat Echidnas do not tolerate extreme temperatures; they shelter from harsh weather in caves and rock crevices.
Echidnas are found in forests and woodlands, hiding under vegetation, roots or piles of debris. They sometimes use the burrows (both abandoned and in use) of animals such as rabbits and wombats. Individual echidnas have large, mutually overlapping territories. Anatomy Echidnas and platypuses are the only egg-laying mammals, the monotremes. The average lifespan of an echidna in the wild is estimated at 14–16 years. Fully grown females can weigh about , the males 33% larger, at about . Though the internal reproductive organs differ, both sexes possess an identical single cloaca opening for urination, defecation, and mating. Male echidnas have non-venomous spurs on the hind feet, similar to the venomous male platypus. Due to their low metabolism and accompanying stress resistance, echidnas are long-lived for their size; the longest recorded lifespan for a captive echidna is 50 years, with anecdotal accounts of wild individuals reaching 45 years. The echidna's brain is half neocortex, compared to 80% of a human brain. Contrary to previous research, the echidna does enter REM sleep, but only at a comfortable temperature of around 25 °C (77 °F). At lower or higher temperatures of 15 °C (59 °F) and 28 °C (82 °F), REM sleep is suppressed. Reproduction The female lays a single soft-shelled, leathery egg 22 days after mating, and deposits it directly into her pouch. An egg weighs and is about long. While hatching, the baby echidna opens the leather shell with a reptile-like egg tooth. Hatching takes place after 10 days of incubation; the young echidna, called a puggle, born larval and fetus-like, then sucks milk from the pores of the two milk patches (monotremes have no teats) and remains in the pouch for 45 to 55 days, at which time it starts to develop spines. The mother digs a nursery burrow and deposits the young, returning every five days to suckle it until it is weaned at seven months. Puggles will stay within their mother's den for up to a year before leaving. Male echidnas have a four-headed penis. During mating, the heads on one side "shut down" and do not grow in size; the other two are used to release semen into the female's two-branched reproductive tract. Each time it copulates, it alternates heads in sets of two. When not in use, the penis is retracted inside a preputial sac in the cloaca. The male echidna's penis is long when erect, and its shaft is covered with penile spines. These may be used to induce ovulation in the female. It is a challenge to study the echidna in its natural habitat, and echidnas show no interest in mating while in captivity. Prior to 2007, no one had ever seen an echidna ejaculate. Previous attempts to obtain semen samples by electrically stimulated ejaculation had only resulted in swelling of the penis. Breeding season begins in late June and extends through September. During mating season, a female may be followed by a line or "train" of up to ten males, the youngest trailing last, and some males switching between lines. Threats Echidnas are very timid. When frightened, they attempt to partially bury themselves and curl into a ball similar to a hedgehog. Strong front arms allow echidnas to dig in and hold fast against a predator pulling them from the hole. Their many predators include feral cats, foxes, domestic dogs, and goannas. Snakes pose a large threat when they slither into echidna burrows and prey on the spineless young puggles.
They are easily stressed and injured by handling. Some ways to help echidnas include picking up litter, causing less pollution, planting vegetation for shelter, supervising pets, reporting hurt echidnas, and leaving them undisturbed. Evolution The divergence between oviparous (egg-laying) and viviparous (offspring develop internally) mammals is believed to date to the Triassic period. Most findings from genetic studies (especially of nuclear genes) are in agreement with the paleontological dating, but some other evidence, like mitochondrial DNA, gives slightly different dates. Molecular clock data suggest echidnas split from platypuses between 19 and 48 million years ago, so that platypus-like fossils dating back to over 112.5 million years ago represent basal forms, rather than close relatives of the modern platypus. This would imply that echidnas evolved from water-foraging ancestors that returned to living on land, which put them in competition with marsupials. Although extant monotremes lack adult teeth (platypuses have teeth only as juveniles), many extinct monotreme species have been identified based on the morphology of their teeth. Of the eight genes involved in tooth development, four have been lost in both platypus and echidna, indicating that the loss of teeth occurred before the echidna-platypus split. Further evidence of water-foraging ancestors can be found in some of the echidna's anatomy, including hydrodynamic streamlining, dorsally projecting hind limbs acting as rudders, and locomotion founded on hypertrophied humeral long-axis rotation, which provides an efficient swimming stroke. Oviparous reproduction in monotremes may give them an advantage over marsupials in some environments. Their observed adaptive radiation contradicts the assumption that monotremes are frozen in morphological and molecular evolution. It has been suggested that echidnas originally evolved in New Guinea when it was isolated from Australia and from marsupials. This would explain their rarity in the fossil record, their abundance in present times in New Guinea, and their original adaptation to terrestrial niches, presumably without competition from marsupials. Taxonomy Echidnas are a small clade with two extant genera and four species. The genus Zaglossus includes three extant and two fossil species, with only one extant species from the genus Tachyglossus. Zaglossus The three living Zaglossus species are endemic to New Guinea. They are rare and are hunted for food. They forage in leaf litter on the forest floor, eating earthworms and insects. The species are the Western long-beaked echidna (Z. bruijni), of the highland forests; Sir David's long-beaked echidna (Z. attenboroughi), discovered by Western science in 1961 (described in 1998) and preferring a still higher habitat; and the Eastern long-beaked echidna (Z. bartoni), of which four distinct subspecies have been identified. Tachyglossus The short-beaked echidna (Tachyglossus aculeatus) is found in southern, southeast and northeast New Guinea, and also occurs in almost all Australian environments, from the snow-clad Australian Alps to the deep deserts of the Outback, essentially anywhere ants and termites are available. It is smaller than the Zaglossus species, and it has longer hair. Despite the similar dietary habits and methods of consumption to those of an anteater, there is no evidence supporting the idea that echidna-like monotremes have been myrmecophagic (ant or termite-eating) since the Cretaceous.
The fossil evidence of invertebrate-feeding bandicoots and rat-kangaroos, from around the time of the platypus–echidna divergence and pre-dating Tachyglossus, shows that echidnas expanded into new ecospace despite competition from marsupials. Additionally, extinct echidnas continue to be described by taxonomists. Megalibgwilia The genus Megalibgwilia is known only from fossils: M. owenii, from Late Pleistocene sites in Australia; M. robusta, from Pliocene sites in Australia. Murrayglossus The genus Murrayglossus is known only from fossils: M. hacketti (previously classified in the genus Zaglossus), from the Pleistocene of Western Australia. As food The Kunwinjku people of Western Arnhem Land (Australia) call the echidna ngarrbek, and regard it as a prized food and "good medicine". The echidna is hunted at night, gutted, and filled with hot stones and mandak (Persoonia falcata) leaves. According to Larrakia elders Una Thompson and Stephanie Thompson Nganjmirra, once captured, an echidna is carried attached to the wrist like a thick bangle. In popular culture The echidna appears on the reverse of the Australian five-cent coin. An echidna named Millie was one of the three official mascots for the 2000 Summer Olympics in Sydney. The Sonic the Hedgehog franchise features a race of anthropomorphic echidnas, the most prominent being Knuckles.
Biology and health sciences
Monotremes
null
80503
https://en.wikipedia.org/wiki/Decimal%20separator
Decimal separator
A decimal separator is a symbol that separates the integer part from the fractional part of a number written in decimal form. Different countries officially designate different symbols for use as the separator. The choice of symbol can also affect the choice of symbol for the thousands separator used in digit grouping. Any such symbol can be called a decimal mark, decimal marker, or decimal sign. Symbol-specific names are also used; decimal point and decimal comma refer to a dot (either baseline or middle) and comma respectively, when used as a decimal separator; these are the usual terms used in English, with the aforementioned generic terms reserved for abstract usage. In many contexts, when a number is spoken, the function of the separator is assumed by the spoken name of the symbol: comma or point in most cases. In some specialized contexts, the word decimal is instead used for this purpose (such as in International Civil Aviation Organization-regulated air traffic control communications). In mathematics, the decimal separator is a type of radix point, a term that also applies to number systems with bases other than ten. History Hellenistic–Renaissance eras In the Middle Ages, before printing, a bar ( ¯ ) over the units digit was used to separate the integral part of a number from its fractional part, as in 99¯95 (meaning 99.95 in decimal point format). A similar notation remains in common use as an underbar to superscript digits, especially for monetary values without a decimal separator, as in 99⁹⁵. Later, a "separatrix" (i.e., a short, roughly vertical ink stroke) between the units and tenths position became the norm among Arab mathematicians (e.g. 99ˌ95), while an L-shaped or vertical bar served as the separatrix in England. When this character was typeset, it was convenient to use the existing comma (99,95) or full stop (99.95) instead. Positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century. The practice is ultimately derived from the decimal Hindu–Arabic numeral system used in Indian mathematics, and popularized by the Persian mathematician Al-Khwarizmi, whose work on the Indian numerals, in Latin translation, introduced the decimal positional number system to the Western world. His Compendious Book on Calculation by Completion and Balancing presented the first systematic solution of linear and quadratic equations in Arabic. Gerbert of Aurillac marked triples of columns with an arc (called a "Pythagorean arc") when using his Hindu–Arabic numeral-based abacus in the 10th century. Fibonacci followed this convention when writing numbers, such as in his influential 13th-century work Liber Abaci. The earliest known record of using the decimal point is in the astronomical tables compiled by the Italian merchant and mathematician Giovanni Bianchini in the 1440s. Tables of logarithms prepared by John Napier in 1614 and 1619 used the period (full stop) as the decimal separator, which was then adopted by Henry Briggs in his influential 17th-century work. In France, the full stop was already in use in printing to make Roman numerals more readable, so the comma was chosen. Many other countries, such as Italy, also chose to use the comma to mark the decimal units position. It has been made standard by the ISO for international blueprints. However, English-speaking countries took the comma to separate sequences of three digits.
In some countries, a raised dot or dash may be used for grouping or as a decimal separator; this is particularly common in handwriting. English-speaking countries In the United States, the full stop or period (.) is used as the standard decimal separator. In the nations of the British Empire (and, later, the Commonwealth of Nations), the full stop could be used in typewritten material and its use was not banned, although the interpunct (a.k.a. decimal point, point or mid dot) was preferred as a decimal separator in printing technologies that could accommodate it, e.g. 99·95. However, as the mid dot was already in common use in the mathematics world to indicate multiplication, the SI rejected its use as the decimal separator. During the beginning of British metrication in the late 1960s and with impending currency decimalisation, there was some debate in the United Kingdom as to whether the decimal comma or decimal point should be preferred: the British Standards Institution and some sectors of industry advocated the comma, while the Decimal Currency Board advocated the point. In the event, the point was chosen by the Ministry of Technology in 1968. When South Africa adopted the metric system, it adopted the comma as its decimal separator, although a number of house styles, including some English-language newspapers such as The Sunday Times, continue to use the full stop. Previously, signs along California roads expressed distances in decimal numbers with the decimal part in superscript, as in 3⁷, meaning 3.7. Though California has since transitioned to mixed numbers with common fractions, the older style remains on postmile markers and bridge inventory markers. Constructed languages The three most spoken international auxiliary languages, Ido, Esperanto, and Interlingua, all use the comma as the decimal separator. Interlingua has used the comma as its decimal separator since the publication of the Interlingua Grammar in 1951. Esperanto also uses the comma as its official decimal separator, whilst thousands are usually separated by non-breaking spaces (e.g. 12 345 678). It is possible to separate thousands by a full stop (e.g. 12.345.678), though this is not as common. Ido's Kompleta Gramatiko Detaloza di la Linguo Internaciona Ido (Complete Detailed Grammar of the International Language Ido) officially states that commas are used for the decimal separator whilst full stops are used to separate thousands, millions, etc. So the number 12,345,678.90123 (in American notation), for instance, would be written 12.345.678,90123 in Ido. The 1931 grammar of Volapük uses the comma as its decimal separator but, somewhat unusually, the middle dot as its thousands separator (12·345·678,90123). In 1958, disputes between European and American delegates over the correct representation of the decimal separator nearly stalled the development of the ALGOL computer programming language. ALGOL ended up allowing different decimal separators, but most computer languages and standard data formats (e.g., C, Java, Fortran, Cascading Style Sheets (CSS)) specify a dot. C++ and a few other languages permit an apostrophe (') as a thousands separator, and many others, like Python and Julia, allow only the underscore '_' as such a separator (it is ignored by the parser, so 1_00_00_000 is also allowed, aligning with the Indian number style of 1,00,00,000, which would be 10,000,000 in the US). Radix point In mathematics and computing, a radix point or radix character is a symbol used in the display of numbers to separate the integer part of the value from its fractional part.
In English and many other languages (including many that are written right-to-left), the integer part is at the left of the radix point, and the fraction part at the right of it. A radix point is most often used in decimal (base 10) notation, when it is more commonly called the decimal point (the prefix deci- implying base 10). In English-speaking countries, the decimal point is usually a small dot (.) placed either on the baseline, or halfway between the baseline and the top of the digits (·). In many other countries, the radix point is a comma (,) placed on the baseline. These conventions are generally used both in machine displays (printing, computer monitors) and in handwriting. It is important to know which notation is being used when working in different software programs. The respective ISO standard defines both the comma and the small dot as decimal markers, but does not explicitly define universal radix marks for bases other than 10. Fractional numbers are rarely displayed in other number bases, but, when they are, a radix character may be used for the same purpose. When used with the binary (base 2) representation, it may be called "binary point". Current standards The 22nd General Conference on Weights and Measures declared in 2003, “The symbol for the decimal marker shall be either the point on the line or the comma on the line.” It further reaffirmed, “Numbers may be divided in groups of three in order to facilitate reading; neither dots nor commas are ever inserted in the spaces between groups” (1 000 000, for example). This use has therefore been recommended by technical organizations, such as the United States’ National Institute of Standards and Technology. Past versions of ISO 8601, but not the 2019 revision, also stipulated normative notation based on SI conventions, adding that the comma is preferred over the full stop. ISO 80000-1 stipulates, “The decimal sign is either a comma or a point on the line.” The standard does not stipulate any preference, observing that usage will depend on customary usage in the language concerned, but adds a note that as per ISO/IEC directives, all ISO standards should use the comma as the decimal marker. Digit grouping For ease of reading, numbers with many digits (e.g. numbers over 999) may be divided into groups using a delimiter, such as a comma ",", a dot ".", a half-space (or thin space), a space, an underscore "_" (as in maritime "21_450"), or an apostrophe «'». In some countries, these "digit group separators" are only employed to the left of the decimal separator; in others, they are also used to separate numbers with a long fractional part. An important reason for grouping is that it allows rapid judgement of the number of digits, via telling at a glance ("subitizing") rather than counting (contrast, for example, with 100000000 for one hundred million). The use of thin spaces as separators, not dots or commas (for example, 20 000 and 1 000 000 for "twenty thousand" and "one million"), has been official policy of the International Bureau of Weights and Measures since 1948 (and reaffirmed in 2003) stating "neither dots nor commas are ever inserted in the spaces between groups", as well as of the International Union of Pure and Applied Chemistry (IUPAC), the American Medical Association's widely followed AMA Manual of Style, and the UK Metrication Board, among others. The groups created by the delimiters tend to follow the usages of local languages, which vary.
In European languages, large numbers are read in groups of thousands, and the delimiter, which occurs every three digits when it is used, may be called a "thousands separator". In East Asian cultures, particularly China, Japan, and Korea, large numbers are read in groups of myriads (10 000s), but the delimiter commonly separates every three digits. The Indian numbering system is somewhat more complex: It groups the rightmost three digits altogether (until the hundreds place) and thereafter groups by sets of two digits. For example, one trillion (on the short scale; a million millions) would thus be written as 10,00,00,00,00,000 or 10 kharab. The convention for digit group separators historically varied among countries, but usually sought to distinguish the delimiter from the decimal separator. Traditionally, English-speaking countries (except South Africa) employed commas as the delimiter – 10,000 – and other European countries employed periods or spaces: 10.000 or 10 000. Because of the confusion that could result in international documents, in recent years the use of spaces as separators has been advocated by the superseded SI/ISO 31-0 standard, as well as by the International Bureau of Weights and Measures and the International Union of Pure and Applied Chemistry, which have also begun advocating the use of a "thin space" in "groups of three". Within the United States, the American Medical Association's widely followed AMA Manual of Style also calls for a thin space. In programming languages and online encoding environments (for example, ASCII-only) a thin space is not practical or available, in which case an underscore, regular word space, or no delimiter are the alternatives. Data vis-à-vis mask Digit group separators can occur either as part of the data or as a mask through which the data is displayed. This is an example of the separation of presentation and content, making it possible to display numbers with spaced digit grouping in a way that does not insert any whitespace characters into the string of digits in the content. In many computing contexts, it is preferred to omit digit group separators from the data and instead overlay them as a mask (an input mask or an output mask). Common examples include spreadsheets and databases in which currency values are entered without such marks but are displayed with them inserted. (Similarly, phone numbers can have hyphens, spaces or parentheses as a mask rather than as data.) In web content, such digit grouping can be done with CSS style. It is useful because the number can be copied and pasted into calculators (including a web browser's omnibox) and parsed by the computer as-is (i.e., without the user manually purging the extraneous characters). For example, Wikipedia content can display numbers this way, as in 149 597 870 700 metres, which is 1 astronomical unit, or in mathematical constants displayed rounded to 20 decimal places. In some programming languages, it is possible to group the digits in the program's source code to make it easier to read; see Integer literal: Digit separators. Languages with this feature include Ada, C#, D, Go, Haskell, Java, JavaScript, Kotlin, OCaml, Perl, Python, PHP, Ruby, Rust, and Zig. Java, JavaScript, Swift, Julia, and free-form Fortran 90 use the underscore (_) character for this purpose; as such, these languages allow seven hundred million to be entered as 700_000_000. Fixed-form Fortran ignores whitespace (in all contexts), so 700 000 000 has always been accepted. Fortran 90 and its successors allow (ignored) underscores in numbers in free-form.
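As a minimal sketch of such separators in source code (Python, one of the languages named above; the grouping shown is purely visual and carries no meaning to the parser):

# Digit group separators in Python integer literals (PEP 515):
western = 700_000_000   # grouped by thousands, as in 700,000,000
indian = 1_00_00_000    # Indian-style grouping, as in 1,00,00,000

# The underscores are ignored, so both names hold plain integers.
assert western == 700000000
assert indian == 10000000  # ten million
print(western + indian)    # 710000000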
C++14, Rebol, and Red all allow the use of an apostrophe for digit grouping, so 700'000'000 is permissible. Below is an example of Kotlin code using separators to increase readability: val exampleNumber = 12_004_953 // Twelve million four thousand nine hundred fifty-three Exceptions to digit grouping The International Bureau of Weights and Measures states that "when there are only four digits before or after the decimal marker, it is customary not to use a space to isolate a single digit". Likewise, some manuals of style state that thousands separators should not be used in normal text for numbers from 1000 to 9999 inclusive where no decimal fractional part is shown (in other words, for four-digit whole numbers), whereas others use thousands separators and others use both. For example, APA style stipulates a thousands separator for "most figures of 1,000 or more" except for page numbers, binary digits, temperatures, etc. There are always "common-sense" country-specific exceptions to digit grouping, such as year numbers, postal codes, and ID numbers of predefined nongrouped format, which style guides usually point out. In non-base-10 numbering systems In binary (base-2), a full space can be used between groups of four digits, corresponding to a nibble, or equivalently to a hexadecimal digit. For integer numbers, dots are used as well to separate groups of four bits. Alternatively, binary digits may be grouped by threes, corresponding to an octal digit. Similarly, in hexadecimal (base-16), full spaces are usually used to group digits into twos, making each group correspond to a byte. Additionally, groups of eight bytes are often separated by a hyphen. Influence of calculators and computers In countries with a decimal comma, the decimal point is also common as the "international" notation because of the influence of devices, such as electronic calculators, which use the decimal point. Most computer operating systems allow selection of the decimal separator; programs that have been carefully internationalized will follow this, but some programs ignore it and a few may even fail to operate if the setting has been changed. Computer interfaces may be set to the Unicode international "Common locale"; the current definitions of such locale conventions are maintained in the Unicode Common Locale Data Repository (CLDR). Conventions worldwide Hindu–Arabic numerals Countries using decimal comma Countries where a comma "," is used as the decimal separator include most of continental Europe and much of South America. Countries using decimal point Countries where a dot "." is used as the decimal separator include the United States, the United Kingdom, Australia, and most other English-speaking countries.
Mathematics
Basics
null
80733
https://en.wikipedia.org/wiki/32-bit%20computing
32-bit computing
In computer architecture, 32-bit computing refers to computer systems with a processor, memory, and other major system components that operate on data in 32-bit units. Compared to smaller bit widths, 32-bit computers can perform large calculations more efficiently and process more data per clock cycle. Typical 32-bit personal computers also have a 32-bit address bus, permitting up to 4 GB of RAM to be accessed, far more than previous generations of system architecture allowed. 32-bit designs have been used since the earliest days of electronic computing, in experimental systems and then in large mainframe and minicomputer systems. The first hybrid 16/32-bit microprocessor, the Motorola 68000, was introduced in the late 1970s and used in systems such as the original Apple Macintosh. Fully 32-bit microprocessors such as the HP FOCUS, Motorola 68020 and Intel 80386 were launched in the early to mid 1980s and became dominant by the early 1990s. This generation of personal computers coincided with and enabled the first mass adoption of the World Wide Web. While 32-bit architectures are still widely used in specific applications, the PC and server market has moved on to 64 bits with x86-64 and other 64-bit architectures since the mid-2000s, with installed memory often exceeding the 4 GB address limit of 32-bit systems on entry-level computers. The latest generation of smartphones has also switched to 64 bits. Range for storing integers A 32-bit register can store 2³² different values. The range of integer values that can be stored in 32 bits depends on the integer representation used. With the two most common representations, the range is 0 through 4,294,967,295 (2³² − 1) for representation as an (unsigned) binary number, and −2,147,483,648 (−2³¹) through 2,147,483,647 (2³¹ − 1) for representation as two's complement. One important consequence is that a processor with 32-bit memory addresses can directly access at most 4 GiB of byte-addressable memory (though in practice the limit may be lower). Technical history The world's first stored-program electronic computer, the Manchester Baby, used a 32-bit architecture in 1948, although it was only a proof of concept and had little practical capacity. It held only 32 32-bit words of RAM on a Williams tube, and had no addition operation, only subtraction. Memory, as well as other digital circuits and wiring, was expensive during the first decades of 32-bit architectures (the 1960s to the 1980s). Older 32-bit processor families (or simpler, cheaper variants thereof) could therefore have many compromises and limitations in order to cut costs. This could be a 16-bit ALU, for instance, or external (or internal) buses narrower than 32 bits, limiting memory size or demanding more cycles for instruction fetch, execution or write back. Despite this, such processors could be labeled 32-bit, since they still had 32-bit registers and instructions able to manipulate 32-bit quantities. For example, the IBM System/360 Model 30 had an 8-bit ALU, 8-bit internal data paths, and an 8-bit path to memory, and the original Motorola 68000 had a 16-bit data ALU and a 16-bit external data bus, but had 32-bit registers and a 32-bit oriented instruction set. The 68000 design was sometimes referred to as 16/32-bit. However, the opposite is often true for newer 32-bit designs.
For example, the Pentium Pro processor is a 32-bit machine, with 32-bit registers and instructions that manipulate 32-bit quantities, but the external address bus is 36 bits wide, giving a larger address space than 4 GB, and the external data bus is 64 bits wide, primarily in order to permit a more efficient prefetch of instructions and data. Architectures Prominent 32-bit instruction set architectures used in general-purpose computing include the IBM System/360, IBM System/370 (which had 24-bit addressing), System/370-XA, ESA/370, and ESA/390 (which had 31-bit addressing), the DEC VAX, the NS320xx, the Motorola 68000 family (the first two models of which had 24-bit addressing), the Intel IA-32 32-bit version of the x86 architecture, and the 32-bit versions of the ARM, SPARC, MIPS, PowerPC and PA-RISC architectures. 32-bit instruction set architectures used for embedded computing include the 68000 family and ColdFire, x86, ARM, MIPS, PowerPC, and Infineon TriCore architectures. Applications On the x86 architecture, a 32-bit application normally means software that typically (though not necessarily) uses the 32-bit linear address space (or flat memory model) possible with the 80386 and later chips. In this context, the term came about because DOS, Microsoft Windows and OS/2 were originally written for the 8088/8086 or 80286, 16-bit microprocessors with a segmented address space where programs had to switch between segments to reach more than 64 kilobytes of code or data. As this is quite time-consuming in comparison to other machine operations, the performance may suffer. Furthermore, programming with segments tends to become complicated; special far and near keywords or memory models had to be used (with care), not only in assembly language but also in high-level languages such as Pascal, compiled BASIC, Fortran, C, etc. The 80386 and its successors fully support the 16-bit segments of the 80286 but also segments for 32-bit address offsets (using the new 32-bit width of the main registers). If the base address of all 32-bit segments is set to 0, and segment registers are not used explicitly, the segmentation can be forgotten and the processor appears as having a simple linear 32-bit address space. Operating systems like Windows or OS/2 provide the possibility to run 16-bit (segmented) programs as well as 32-bit programs. The former possibility exists for backward compatibility and the latter is usually meant to be used for new software development. Images In digital images/pictures, 32-bit usually refers to RGBA color space; that is, 24-bit truecolor images with an additional 8-bit alpha channel. Other image formats also specify 32 bits per pixel, such as RGBE. In digital images, 32-bit sometimes refers to high-dynamic-range imaging (HDR) formats that use 32 bits per channel, a total of 96 bits per pixel. 32-bit-per-channel images are used to represent values brighter than what sRGB color space allows (brighter than white); these values can then be used to more accurately retain bright highlights when either lowering the exposure of the image or when it is seen through a dark filter or dull reflection. For example, a reflection in an oil slick is only a fraction of that seen in a mirror surface. HDR imagery allows for the reflection of highlights that can still be seen as bright white areas, instead of dull grey shapes. File formats A 32-bit file format is a binary file format for which each elementary unit of information is defined on 32 bits (or 4 bytes).
An example of such a format is the Enhanced Metafile Format.
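The ranges discussed under "Range for storing integers" can also be checked directly; here is a minimal sketch in Python (the helper name to_int32 is illustrative only, not from any particular library):

BITS = 32
print(2 ** BITS)                 # 4294967296 distinct values
print(2 ** BITS - 1)             # 4294967295, the unsigned maximum
print(-(2 ** (BITS - 1)))        # -2147483648, the two's-complement minimum
print(2 ** (BITS - 1) - 1)       # 2147483647, the two's-complement maximum

def to_int32(x):
    # Interpret the low 32 bits of x as a signed 32-bit integer.
    x &= 0xFFFFFFFF
    return x - 2 ** 32 if x >= 2 ** 31 else x

# Overflow wraps around, as on real 32-bit two's-complement hardware:
assert to_int32(2147483647 + 1) == -2147483648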
Technology
Computer architecture concepts
null
80750
https://en.wikipedia.org/wiki/Driving
Driving
Driving is the controlled operation and movement of a land vehicle, including cars, motorcycles, trucks, and buses. A driver's permission to drive on public highways is granted based on a set of conditions being met, and drivers are required to follow the established road and traffic laws in the location they are driving. The word "driving" dates back to the 15th century. Its meaning has changed from primarily driving working animals in the 15th century to driving automobiles in the 1800s. Driving skills have also developed since the 15th century, with physical, mental and safety skills being required to drive. This evolution of the skills required to drive has been accompanied by the introduction of driving laws which relate not only to the driver but also to the driveability of a car. The term "driver" originated in the 15th century, referring to the occupation of driving working animals such as pack or draft horses. It later applied to electric railway drivers in 1889 and motor-car drivers in 1896. The world's first long-distance road trip by automobile was in 1888, when Bertha Benz drove a Benz Patent-Motorwagen from Mannheim to Pforzheim, Germany. Driving requires both physical and mental skills, as well as an understanding of the rules of the road. In many countries, drivers must pass practical and theoretical driving tests to obtain a driving license. Physical skills required for driving include proper hand placement, gear shifting, pedal operation, steering, braking, and operation of ancillary devices. Mental skills involve hazard awareness, decision-making, evasive maneuvering, and understanding vehicle dynamics. Distractions, altered states of consciousness, and certain medical conditions can impair a driver's mental skills. Safety concerns in driving include poor road conditions, low visibility, texting while driving, speeding, impaired driving, sleep-deprived driving, and reckless driving. Laws regarding driving, driver licensing, and vehicle registration vary between jurisdictions. Most countries have laws against driving under the influence of alcohol or other drugs. Some countries impose annual renewals or point systems for driver's licenses to maintain road safety. The World Health Organization estimates that 1.35 million people are killed each year in road traffic, which is the leading cause of death for people aged 5 to 29. Etymology The origin of the term driver, as recorded from the 15th century, refers to the occupation of driving working animals, especially pack horses or draft horses. The verb to drive in origin means "to force to move, to impel by physical force". It is first recorded of electric railway drivers in 1889 and of a motor-car driver in 1896. Early alternatives were motorneer, motor-man, motor-driver or motorist. French favors "conducteur" (the English equivalent, "conductor", being used—from the 1830s—not of the driver but of the person in charge of passengers and collecting fares), while German-influenced areas adopted Fahrer (used of coach-drivers in the 18th century, but shortened about 1900 from the compound Kraftwagenfahrer), and the verbs führen, lenken, steuern—all with a meaning "steer, guide, navigate"—translating to conduire.
Introduction of the automobile The world's first long-distance road trip by automobile was in August 1888, when Bertha Benz, wife of Benz Patent-Motorwagen inventor Karl Benz, drove from Mannheim to Pforzheim, Germany, and returned, in the third experimental Benz motor car, which had a maximum speed of , with her two teenage sons Richard and Eugen but without the consent and knowledge of her husband. She had said she wanted to visit her mother, but also intended to generate publicity for her husband's invention, which had only been taken on short test drives before. In 1899, F. O. Stanley and his wife Flora drove their Stanley Steamer automobile, sometimes called a locomobile, to the summit of Mount Washington in New Hampshire in the United States to generate publicity for their automobile. The journey took over two hours (not counting time to add more water); the descent was accomplished by putting the engine in low gear and much braking. Driving skills Driving in traffic is more than just knowing how to operate the mechanisms which control the vehicle; it requires knowing how to apply the rules of the road (which ensure safe and efficient sharing with other users). An effective driver also has an intuitive understanding of the basics of vehicle handling and can drive responsibly. Although direct operation of a bicycle and a mounted animal are commonly referred to as riding, such operators are legally considered drivers and are required to obey the rules of the road. Driving over a long distance is referred to as a road trip. In many countries, knowledge of the rules of the road, both practical and theoretical, is assessed with driving tests, and those who pass are issued with a driving license. Physical skill A driver must have physical skills to be able to control direction, acceleration, and deceleration. For motor vehicles, the detailed tasks include: Proper hand placement and seating position Starting the vehicle's engine with the starting system Setting the transmission to the correct gear Depressing the pedals with one's feet to accelerate, slow and stop the vehicle and If the vehicle is equipped with a manual transmission, to modulate the clutch Steering the vehicle's direction with the steering wheel Applying brake pressure to slow or stop the vehicle Operating other important ancillary devices such as the indicators, headlights, parking brake and windshield wipers Speed and skid control Mental skill Avoiding or successfully handling an emergency driving situation can involve the following skills: Observing the environment for road signs, driving conditions, and hazards Awareness of surroundings, especially in heavy and city traffic Making good and quick decisions based on factors such as road and traffic conditions Evasive maneuvering Understanding vehicle dynamics Left- and right-hand traffic Distractions can compromise a driver's mental skills, as can any altered state of consciousness. One study on the subject of mobile phones and driving safety concluded that, after controlling for driving difficulty and time on task, drivers talking on a phone exhibited greater impairment than drivers who were suffering from alcohol intoxication. In the US, approximately 481,000 drivers use cell phones while driving during daylight hours, according to the National Highway Traffic Safety Administration. Another survey indicated that music could adversely affect a driver's concentration.
Seizure disorders and Alzheimer's disease are among the leading medical causes of mental impairment among drivers in the United States and Europe. Whether or not physicians should be allowed, or even required, to report such conditions to state authorities remains highly controversial. Safety Safety issues in driving include: Driving in poor road conditions and low visibility Texting while driving Speeding Drug-impaired driving and driving under the influence Distracted driving Sleep-deprived driving Reckless driving and street racing Teenagers There is a high rate of injury and death caused by motor vehicle accidents that involve teenage drivers. There is evidence that the less teenagers drive, the lower the risk of injury. There is a lack of evidence as to whether educational interventions to promote active transport and share information about the risks, cost, and stresses involved with driving are effective at reducing or delaying car driving in the teenage years. Driveability Driveability of a vehicle means the smooth delivery of power, as demanded by the driver. Typical symptoms of degraded driveability include rough idling, misfiring, surging, hesitation, and insufficient power. Driving laws Drivers are subject to the laws of the jurisdiction in which they are driving. International conventions Some jurisdictions submit to some or all of the requirements of the Geneva Convention on Road Traffic of 1949. Additionally, the Vienna Convention on Road Signs and Signals standardises road signs, traffic lights and road markings to improve safety. Local driving laws The rules of the road, driver licensing and vehicle registration schemes vary considerably between jurisdictions, as do laws imposing criminal responsibility for negligent driving, vehicle safety inspections and compulsory insurance. Most countries also have differing laws against driving while under the influence of alcohol or other drugs. Aggressive driving and road rage have become problems for drivers in some areas. Some countries require annual renewal of the driver's license. This may require getting through another driving test or vision screening test to get recertified. Also, some countries use a points system for the driver's license. Both techniques (annual renewal with tests, points system) may or may not improve road safety compared to when the driver is not continuously or annually evaluated. Ownership and insurance Car ownership does not require a driver's license; even with a withdrawn driver's license, former drivers are still legally allowed to possess a car and thus have access to it. In the USA, between 1993 and 1997, 13.8% of all drivers involved in fatal crashes had no driver's license. In some countries (such as the UK), the car itself needs to have a certificate that proves the vehicle is safe and roadworthy. Also, it needs to have a minimum of third-party insurance. Driver training Drivers may be required to take lessons with an approved driving instructor (or are strongly encouraged to do so) and must pass a driving test before being granted a license. Almost all countries allow all adults with good vision and health to apply to take a driving test and, if successful, to drive on public roads. In many countries, even after passing one's driving test, new drivers are initially subject to special restrictions under graduated driver licensing rules. For example, in Australia, novice drivers are required to carry "P" ("provisional") plates, while in New Zealand it is called restricted (R). Many U.S.
states now issue graduated drivers' licenses to novice minors. While graduated driver licensing rules vary between jurisdictions, typical restrictions include newly licensed minors not being permitted to drive or operate a motorized vehicle at night or with a passenger other than family members, zero blood alcohol, and limited power-to-weight ratio of the vehicle. Driving bans It is possible for a driver to be suspended or disqualified (banned) from driving, either for a short time or permanently. This is usually in response to a serious traffic offence (for example, causing death due to drink driving), repeated minor traffic offences (for example, accruing too many demerit points for speeding), or for a specific medical condition which prevents driving, pending a future assessment (for example, a traumatic brain injury). Some jurisdictions implement road space rationing, where vehicles are banned from driving on certain days depending on a variety of criteria, most commonly the letters and digits in their vehicle registration plate. A few countries banned women driving in the past. In Oman, women were not allowed to drive until 1970. In Saudi Arabia, women were not issued driving licenses until 2018. Saudi women had periodically staged driving protests against these restrictions and in September 2017, the Saudi government agreed to lift the ban, which went into effect in June 2018.
Technology
Road transport
null
80825
https://en.wikipedia.org/wiki/Free%20fall
Free fall
In classical mechanics, free fall is any motion of a body where gravity is the only force acting upon it. A freely falling object may not necessarily be falling down in the vertical direction. If the common definition of the word "fall" is used, an object moving upwards is not considered to be falling, but using scientific definitions, if it is subject to only the force of gravity, it is said to be in free fall. The Moon is thus in free fall around the Earth, though its orbital speed keeps it very far from the Earth's surface. In a roughly uniform gravitational field, gravity acts on each part of a body approximately equally. When there are no other forces, such as the normal force exerted between a body (e.g. an astronaut in orbit) and its surrounding objects, the result is the sensation of weightlessness, a condition that also occurs when the gravitational field is weak (such as when far away from any source of gravity). The term "free fall" is often used more loosely than in the strict sense defined above. Thus, falling through an atmosphere without a deployed parachute, or lifting device, is also often referred to as free fall. The aerodynamic drag forces in such situations prevent them from producing full weightlessness, and thus a skydiver's "free fall" after reaching terminal velocity produces the sensation of the body's weight being supported on a cushion of air. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no other force acting on it. History In the Western world prior to the 16th century, it was generally assumed that the speed of a falling body would be proportional to its weight—that is, a 10 kg object was expected to fall ten times faster than an otherwise identical 1 kg object through the same medium. The ancient Greek philosopher Aristotle (384–322 BC) discussed falling objects in Physics (Book VII), one of the oldest books on mechanics (see Aristotelian physics). In the 6th century, however, John Philoponus challenged this argument and said that, by observation, two balls of very different weights will fall at nearly the same speed. In 12th-century Iraq, Abu'l-Barakāt al-Baghdādī gave an explanation for the gravitational acceleration of falling bodies. According to Shlomo Pines, al-Baghdādī's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]." Galileo Galilei According to a tale that may be apocryphal, in 1589–1592 Galileo dropped two objects of unequal mass from the Leaning Tower of Pisa. Given the speed at which such a fall would occur, it is doubtful that Galileo could have extracted much information from this experiment. Most of his observations of falling bodies were really of bodies rolling down ramps. This slowed things down enough to the point where he was able to measure the time intervals with water clocks and his own pulse (stopwatches having not yet been invented). He repeated this "a full hundred times" until he had achieved "an accuracy such that the deviation between two observations never exceeded one-tenth of a pulse beat." In 1589–1592, Galileo wrote De Motu Antiquiora, an unpublished manuscript on the motion of falling bodies.
Examples Examples of objects in free fall include: A spacecraft (in space) with propulsion off (e.g. in a continuous orbit, or on a suborbital trajectory (ballistics) going up for some minutes, and then down). An object dropped at the top of a drop tube. An object thrown upward or a person jumping off the ground at low speed (i.e. as long as air resistance is negligible in comparison to weight). Technically, an object is in free fall even when moving upwards or instantaneously at rest at the top of its motion. If gravity is the only influence acting, then the acceleration is always downward and has the same magnitude for all bodies, commonly denoted g. Since all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations. Examples of objects not in free fall: Flying in an aircraft: there is also an additional force of lift. Standing on the ground: the gravitational force is counteracted by the normal force from the ground. Descending to the Earth using a parachute, which balances the force of gravity with an aerodynamic drag force (and with some parachutes, an additional lift force). The example of a falling skydiver who has not yet deployed a parachute is not considered free fall from a physics perspective, since they experience a drag force that equals their weight once they have achieved terminal velocity (see below). Near the surface of the Earth, an object in free fall in a vacuum will accelerate at approximately 9.8 m/s², independent of its mass. With air resistance acting on an object that has been dropped, the object will eventually reach a terminal velocity, which is around 53 m/s (190 km/h or 118 mph) for a human skydiver. The terminal velocity depends on many factors including mass, drag coefficient, and relative surface area and will only be achieved if the fall is from sufficient altitude. A typical skydiver in a spread-eagle position will reach terminal velocity after about 12 seconds, during which time they will have fallen around 450 m (1,500 ft). Free fall was demonstrated on the Moon by astronaut David Scott on August 2, 1971. He simultaneously released a hammer and a feather from the same height above the Moon's surface. The hammer and the feather both fell at the same rate and hit the surface at the same time. This demonstrated Galileo's discovery that, in the absence of air resistance, all objects experience the same acceleration due to gravity. On the Moon, however, the gravitational acceleration is approximately 1.63 m/s², or only about 1⁄6 that on Earth. Free fall in Newtonian mechanics Uniform gravitational field without air resistance This is the "textbook" case of the vertical motion of an object falling a small distance close to the surface of a planet. It is a good approximation in air as long as the force of gravity on the object is much greater than the force of air resistance, or equivalently the object's velocity is always much less than the terminal velocity (see below). The motion is described by v(t) = v₀ − gt and y(t) = y₀ + v₀t − ½gt², where v₀ is the initial vertical component of the velocity (m/s), v(t) is the vertical component of the velocity at time t (m/s), y₀ is the initial altitude (m), y(t) is the altitude at time t (m), t is the time elapsed (s), and g is the acceleration due to gravity (9.81 m/s² near the surface of the Earth). If the initial velocity is zero, then the distance fallen from the initial position will grow as the square of the elapsed time.
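To make this square-law growth concrete, here is a minimal numerical sketch (Python; the choice of g = 9.81 m/s² and one-second intervals follows the text and is otherwise arbitrary):

# Distance fallen from rest in a uniform field: d(t) = (1/2) * g * t**2
g = 9.81  # m/s^2, acceleration due to gravity near the Earth's surface
distances = [0.5 * g * t ** 2 for t in range(5)]  # t = 0, 1, 2, 3, 4 seconds
print(distances)  # [0.0, 4.905, 19.62, 44.145, 78.48] -- ratios 0:1:4:9:16
# Distance covered in each successive one-second interval:
intervals = [b - a for a, b in zip(distances, distances[1:])]
print(intervals)  # [4.905, 14.715, 24.525, 34.335] -- ratios 1:3:5:7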
Moreover, because the odd numbers sum to the perfect squares, the distance fallen in successive time intervals grows as the odd numbers. This description of the behavior of falling bodies was given by Galileo. Uniform gravitational field with air resistance This case, which applies to skydivers, parachutists, or any body of mass m and cross-sectional area A, with Reynolds number well above the critical Reynolds number, so that the air resistance is proportional to the square of the fall velocity v, has the equation of motion m(dv/dt) = mg − (1/2)ρCDAv2, where ρ is the air density and CD is the drag coefficient, assumed to be constant although in general it will depend on the Reynolds number. Assuming an object falling from rest and no change in air density with altitude, the solution is v(t) = v∞ tanh(gt/v∞), where the terminal speed is given by v∞ = √(2mg/(ρCDA)). The object's speed versus time can be integrated over time to find the vertical position as a function of time: y(t) = y0 − (v∞2/g) ln cosh(gt/v∞). Using the figure of 56 m/s for the terminal velocity of a human, one finds that after 10 seconds the skydiver will have fallen 348 metres and attained 94% of terminal velocity, and after 12 seconds will have fallen 455 metres and attained 97% of terminal velocity (a numerical check appears in the sketch at the end of this article). However, when the air density cannot be assumed to be constant, such as for objects falling from high altitude, the equation of motion becomes much more difficult to solve analytically and a numerical simulation of the motion is usually necessary. The figure shows the forces acting on meteoroids falling through the Earth's upper atmosphere. HALO jumps, including Joe Kittinger's and Felix Baumgartner's record jumps, also belong in this category. Inverse-square law gravitational field It can be said that two objects in space orbiting each other in the absence of other forces are in free fall around each other, e.g. that the Moon or an artificial satellite "falls around" the Earth, or a planet "falls around" the Sun. Assuming spherical objects means that the equation of motion is governed by Newton's law of universal gravitation, with solutions to the gravitational two-body problem being elliptic orbits obeying Kepler's laws of planetary motion. This connection between falling objects close to the Earth and orbiting objects is best illustrated by the thought experiment, Newton's cannonball. The motion of two objects moving radially towards each other with no angular momentum can be considered a special case of an elliptical orbit of eccentricity e = 1 (a radial elliptic trajectory). This allows one to compute the free-fall time for two point objects on a radial path. The solution of this equation of motion yields time as a function of separation: t(y) = √(y03/(2μ)) [√((y/y0)(1 − y/y0)) + arccos(√(y/y0))], where t is the time after the start of the fall, y is the distance between the centers of the bodies, y0 is the initial value of y, and μ = G(m1 + m2) is the standard gravitational parameter. Substituting y = 0 gives the free-fall time t = (π/2)√(y03/(2μ)). The separation as a function of time can also be expressed explicitly in terms of the quantile function of the beta distribution, also known as the inverse function of the regularized incomplete beta function, and the solution can be represented exactly by an analytic power series in time. In general relativity In general relativity, an object in free fall is subject to no force and is an inertial body moving along a geodesic. Far away from any sources of space-time curvature, where spacetime is flat, the Newtonian theory of free fall agrees with general relativity.
Otherwise the two disagree; e.g., only general relativity can account for the precession of orbits, the orbital decay or inspiral of compact binaries due to gravitational waves, and the relativity of direction (geodetic precession and frame dragging). The experimental observation that all objects in free fall accelerate at the same rate, as noted by Galileo and then embodied in Newton's theory as the equality of gravitational and inertial masses, and later confirmed to high accuracy by modern forms of the Eötvös experiment, is the basis of the equivalence principle, from which Einstein's theory of general relativity initially took off.
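The skydiver figures quoted earlier can be checked directly from the constant-density drag solution. The following Python sketch (an illustration, not part of the original article) evaluates v(t) = v∞ tanh(gt/v∞) and the distance fallen (v∞²/g) ln cosh(gt/v∞) with the quoted terminal speed of 56 m/s:

```python
import math

# Constant-density, quadratic-drag solution from the article:
#   v(t)    = v_inf * tanh(g * t / v_inf)
#   drop(t) = (v_inf**2 / g) * ln(cosh(g * t / v_inf))
G = 9.81       # m/s^2, acceleration due to gravity
V_INF = 56.0   # m/s, terminal speed quoted in the text for a human

def speed(t):
    """Fall speed (m/s) after t seconds, starting from rest."""
    return V_INF * math.tanh(G * t / V_INF)

def drop(t):
    """Distance fallen (m) after t seconds, starting from rest."""
    return (V_INF ** 2 / G) * math.log(math.cosh(G * t / V_INF))

for t in (10.0, 12.0):
    print(f"t = {t:4.1f} s: fallen {drop(t):5.1f} m, "
          f"{100 * speed(t) / V_INF:4.1f}% of terminal velocity")
# Output matches the text: ~348 m and 94% at 10 s, ~455 m and 97% at 12 s.
```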
Physical sciences
Classical mechanics
Physics
1114063
https://en.wikipedia.org/wiki/Ingrown%20nail
Ingrown nail
An ingrown nail, also known as onychocryptosis (from Greek onyx 'nail' and kryptos 'hidden'), is a common form of nail disease. It is an often painful condition in which the nail grows so that it cuts into one or both sides of the paronychium or nail bed. While ingrown nails can occur in the nails of both the hands and the feet, they occur most commonly with the toenails (as opposed to fingernails). A common misconception is that an ingrown toenail is always caused by the nail growing into the paronychium; it can also be caused by overgrown toe skin. In such cases the condition arises from a microbial inflammation of the paronychium that produces a granuloma within which the nail is buried, whereas a true ingrown toenail is caused by actual penetration of flesh by a sliver of toenail. Signs and symptoms Symptoms include pain along the margins of the nail caused by hypergranulation, worsening of pain when wearing tight footwear, and sensitivity to pressure of any kind (in some cases this pressure can be as light as the weight of bedsheets). Bumping of an affected toe can cause pain as the nail's surrounding tissue is punctured further. Ingrown nails can become easily infected unless care is taken early to treat the condition. Signs of infection include redness and swelling of the area around the nail, drainage of pus, and watery discharge tinged with blood. The main symptom is swelling at the base of the nail on the ingrowing side (though it may be present on both sides). Onychocryptosis should not be confused with irregular nail growth patterns such as convex nails, involuted nails, or pincer nails. It also should not be confused with the presence of small corns, callus, or debris down the nail sulci. Causes The main contributor to onychocryptosis is footwear, particularly ill-fitting shoes with inadequate toe box room and tight stockings that apply pressure to the top or side of the foot. Other factors may include the damp atmosphere of enclosed shoes, which softens the nail plate and causes swelling of the epidermal keratin (eventually increasing the convex arch permanently), genetics, trauma, and disease. Improper cutting of the nail may cause the nail to cut into the side-fold skin from growth and impact, whether or not the nail is truly "ingrown". The nail bends inwards or upwards depending on the angle of its cut. If the cutting tool, such as scissors, is held at an attitude in which the lower blade is closer to the toe than the upper blade, the toenail will tend to grow upwards from its base, and vice versa. The process is visible along the nail as it grows, appearing as a warp advancing to the end of the nail. The upper corners turn more easily than the center of the nail tip. Holding the tool at the same angle for all nails may induce these conditions; as the nail turns closer to the skin, it becomes harder to fit the lower blade in the right attitude under the nail. When cutting a nail, it is not just the correct angle that is important, but also how short it is cut. A shorter cut will bend the nail more, unless the cut is even on both top and bottom of the nail. Causes may include: Shoes causing a bunching of the toes in the developmental stages of the foot (frequently in people under 21), which can cause the nail to curl and dig into the skin. This is particularly the case in ill-fitting shoes that are too narrow or too short, but any toed shoes may cause an ingrown nail. Poor nail care, including cutting the nail too short, rounded off at the tip, or peeled off at the edges instead of being cut straight across.
Broken toenails. Trauma to the nail plate or toe, which can occur by dropping objects on or stubbing the toenail, or by the nail protruding through the shoe (as during sports or other vigorous activity), can cause the flesh to become injured and the nail to grow irregularly and press into the flesh. Predisposition, such as abnormally shaped nail beds, nail deformities caused by diseases, or a genetic susceptibility, increases the chance of an ingrown nail, but the ingrowth cannot occur without pressure from a shoe. A bacterial infection, treatable with antibiotics. One study compared patients with ingrown toenails to healthy controls and found no difference in the shape of toenails between those of patients and of the control group. The study suggested that treatment should not be based on the correction of a non-existent nail deformity. In some cases, however, there is nail deformity. Ingrown toenails can also be caused by weight-bearing (activities such as walking and running) in patients who have too much soft skin tissue on the sides of their nail. Weight bearing causes this excessive skin to bulge up along the sides of the nail. The pressure on the skin around the nail results in the tissue being damaged, leading to swelling, redness, and infection. Many treatments are directed at the nail itself and often include partial or full removal of the healthy nail. However, failure to treat the cutaneous condition can result in a return of the ingrowth and a deformity or mutilation of the nail. Prevention The most common digit to become ingrown is the big toe, but ingrowth can occur on any nail. Ingrown nails can be avoided by cutting nails straight across: not along a curve, not too short, and no shorter than the flesh around them. Footwear that is too small or too narrow, or with too shallow a "toe box", will exacerbate any underlying problem with a toenail. Sharp square corners may be uncomfortable and cause snagging on socks. Proper cutting leaves the leading edge of the nail free of the flesh, precluding it from growing into the toe. Filing of the corner is reasonable. Some nails require cutting of the corners far back to remove edges that dig into the flesh; this is often done as a partial wedge resection by a podiatrist. Ingrown toenails can be caused by injury, commonly blunt trauma in which the flesh is pressed against the nail, causing a small cut that swells. Injury to the nail can cause it to grow abnormally, making it wider or thicker than normal, or even bulged or crooked. Management The treatment of an ingrown toenail partly depends on its severity. Conservative treatment Mild to moderate cases are often treated conservatively with warm water and Epsom salt soaks, antibacterial ointment, and the use of dental floss. If conservative treatment of a minor ingrown toenail does not succeed, or if the ingrown toenail is severe, surgical treatment may be required. A "gutter splint" may be improvised by slicing a cotton-tipped wooden applicator diagonally to form a bevel and using this to insert a wisp of cotton from the applicator head under the nail, lifting it from the underlying skin after a foot soak. Some over-the-counter ingrown toenail pain relief kits include a sodium sulfide gel with cushions and elastic bandages. Nail bracing Nail bracing is more conservative than surgery, but less widely used. Nail braces work by gently lifting the sides of the toenail, gradually retraining the nail to grow to a flatter shape over time.
The total time needed for the nail to be reshaped is one full nail growth, or about 18 months. There are two main types of nail braces: adhesive and hooked. Adhesive nail braces are generally made of a thin strip of composite material that is glued to the top of the nail. The strip naturally tries to return to a flat state and lifts the edges of the nail in the process. Hooked nail braces consist of a hook (usually made of dental wire) placed under either side of the nail with some type of tensioning system pulling the hooks together. Because of the curved shape of the nail, the tensioning device rests on the raised middle of the nail, applying upward pressure to the sides of the nail. In studies of diabetics, who need to avoid surgery when possible, nail bracing was found to be effective at providing immediate as well as long-term relief. Surgery Surgical treatment for an ingrown nail is carried out by a podiatrist, a foot and ankle specialist. It is typically an in-office procedure requiring local anesthesia and special surgical instruments. The usual surgical approach is removal of the offending part of the nail plate, known as a wedge resection. If the ingrown toenail recurs despite this treatment, destruction of the sides of the nail with chemicals or excision is done; this is known as a matrixectomy. Antibiotics are sometimes used after the procedure but are not recommended, as they may delay healing. Surgical treatment for ingrown nails is more effective than non-surgical treatment at preventing the nail from regrowing inwards.
Biology and health sciences
Types
Health
1114337
https://en.wikipedia.org/wiki/Hydrogen%20bromide
Hydrogen bromide
Hydrogen bromide is the inorganic compound with the formula HBr. It is a hydrogen halide consisting of hydrogen and bromine. A colorless gas, it dissolves in water, forming hydrobromic acid, which is saturated at 68.85% HBr by weight at room temperature. Aqueous solutions that are 47.6% HBr by mass form a constant-boiling azeotropic mixture that boils at 124.3 °C. Boiling less concentrated solutions releases H2O until the constant-boiling mixture composition is reached. Hydrogen bromide, and its aqueous solution, hydrobromic acid, are commonly used reagents in the preparation of bromide compounds. Reactions Organic chemistry Hydrogen bromide and hydrobromic acid are important reagents in the production of organobromine compounds. In an electrophilic addition reaction, HBr adds to alkenes: RCH=CH2 + HBr → RCH(Br)CH3 The resulting alkyl bromides are useful alkylating agents, e.g., as precursors to fatty amine derivatives. Related free-radical additions to allyl chloride and styrene give 1-bromo-3-chloropropane and phenethyl bromide, respectively. Hydrogen bromide reacts with dichloromethane to give bromochloromethane and dibromomethane, sequentially: HBr + CH2Cl2 → HCl + CH2BrCl HBr + CH2BrCl → HCl + CH2Br2 These metathesis reactions illustrate the consumption of the stronger acid (HBr) and release of the weaker acid (HCl). Allyl bromide is prepared by treating allyl alcohol with HBr: CH2=CHCH2OH + HBr → CH2=CHCH2Br + H2O HBr adds to alkynes to yield bromoalkenes. The stereochemistry of this type of addition is usually anti: RC≡CH + HBr → RC(Br)=CH2 Also, HBr adds to epoxides and lactones, resulting in ring-opening. With triphenylphosphine, HBr gives triphenylphosphonium bromide, a solid "source" of HBr. Inorganic chemistry Vanadium(III) bromide and molybdenum(IV) bromide were prepared by treatment of the higher chlorides with HBr. These reactions proceed via redox: the metal center is reduced with liberation of bromine. Industrial preparation Hydrogen bromide (along with hydrobromic acid) is produced by combining hydrogen and bromine at temperatures between 200 and 400 °C. The reaction is typically catalyzed by platinum or asbestos. Laboratory synthesis HBr can be prepared by distillation of a solution of sodium bromide or potassium bromide with phosphoric acid or sulfuric acid: KBr + H2SO4 → KHSO4 + HBr Concentrated sulfuric acid is less effective because it oxidizes HBr to bromine: 2 HBr + H2SO4 → Br2 + SO2 + 2 H2O The acid may also be prepared by: reaction of bromine with water and sulfur: 2 Br2 + S + 2 H2O → 4 HBr + SO2 bromination of tetralin: C10H12 + 4 Br2 → C10H8Br4 + 4 HBr reduction of bromine with phosphorous acid: Br2 + H3PO3 + H2O → H3PO4 + 2 HBr Anhydrous hydrogen bromide can also be produced on a small scale by thermolysis of triphenylphosphonium bromide in refluxing xylene. Hydrogen bromide prepared by the above methods can be contaminated with Br2, which can be removed by passing the gas through a solution of phenol at room temperature in tetrachloromethane or another suitable solvent (producing 2,4,6-tribromophenol and generating more HBr in the process), or through copper turnings or copper gauze at high temperature. Safety HBr is highly corrosive and, if inhaled, can cause lung damage.
Physical sciences
Hydrogen compounds
Chemistry
1115940
https://en.wikipedia.org/wiki/Alligator%20snapping%20turtle
Alligator snapping turtle
The alligator snapping turtle (Macrochelys temminckii) is a large species of turtle in the family Chelydridae. The species is endemic to freshwater habitats in the United States. M. temminckii is one of the heaviest living freshwater turtles in the world and the largest freshwater species of turtle in North America. It is often associated with, but not closely related to, the common snapping turtle, which is in the genus Chelydra. The specific epithet temminckii is in honor of Dutch zoologist Coenraad Jacob Temminck. Taxonomy Although it was once believed that only one extant species exists in the genus Macrochelys, recent studies have shown that there are two species, the other being the Suwannee snapping turtle (M. suwanniensis) of the Suwannee River. The most recent common ancestor (MRCA) of the two species lived approximately 3.2 to 8.9 million years ago, during the late Miocene to late Pliocene. A third species, the Apalachicola snapping turtle (M. apalachicolae), has been proposed, but is generally not recognized. The alligator snapping turtle is given its common name because of its immensely powerful jaws and the distinct ridges on its shell that are similar in appearance to the rough, ridged skin of an alligator. It is also, slightly less commonly, known as "the loggerhead snapper" (not to be confused with the loggerhead sea turtle or loggerhead musk turtle). Distribution and habitat The alligator snapping turtle is found primarily in freshwaters of the southeastern United States. It is found from the Florida Panhandle west to East Texas, north to southeastern Kansas, Missouri, southeastern Iowa, western Illinois, southern Indiana, western Kentucky, Louisiana, and western Tennessee. Typically, only nesting females venture onto open land. They are generally found only in bodies of water that flow into the Gulf of Mexico and usually do not occur in isolated wetlands or ponds. A study found that the turtles prefer places with canopy cover, overhanging trees, shrubs, dead submerged trees, and beaver dens. The species utilizes core sites within these habitats, and females tend to have larger home ranges and movement patterns than males. Description The alligator snapping turtle is characterized by a large, heavy head, and a long, thick shell with three dorsal ridges of large scales (osteoderms), giving it a primitive appearance reminiscent of some of the plated dinosaurs, most notably Ankylosaurus. It can be immediately distinguished from the common snapping turtle by the three distinct rows of spikes and raised plates on the carapace, whereas the common snapping turtle has a smoother carapace. The spikes on the carapace gradually flatten out as the turtle ages. M. temminckii is a solid gray, brown, black, or olive-green in color, and often covered with algae. It has radiating yellow patterns around the eyes, serving to break up the outline of the eyes to keep the turtle camouflaged. The eyes are also surrounded by a star-shaped arrangement of fleshy, filamentous "eyelashes". Though not verified, an exceptionally large alligator snapping turtle was reportedly found in Kansas in 1937, but the largest verifiable individual is debatable. One weighed at the Shedd Aquarium in Chicago was a 16-year resident giant alligator snapper that was sent to the Tennessee Aquarium as part of a breeding loan in 1999, where it subsequently died; another notably large individual was housed at the Brookfield Zoo in suburban Chicago, and still heavier specimens have been reported but not verified.
The species generally does not grow quite that large. Breeding maturity is attained at a moderate adult size, after which the species continues to grow throughout life. Excluding exceptionally large specimens, adult alligator snapping turtles span a broad range of carapace lengths and weights, with averages varying between sampled populations. Males are typically larger than females, and in most population studies the very heaviest specimens are usually very old males. Among extant freshwater turtles, only the little-known giant softshell turtles of the genera Chitra, Rafetus, and Pelochelys, native to Asia, reach comparable sizes. In mature specimens, males and females can be differentiated by the position of the cloaca relative to the carapace and by the thickness of the base of the tail. A mature male's cloaca extends beyond the carapace edge, while a female's is placed exactly on the edge, if not nearer to the plastron. The base of the male's tail is also thicker than the female's because of the hidden reproductive organs. The inside of the turtle's mouth is camouflaged, and it possesses a vermiform (worm-shaped) appendage on the tip of its tongue used to lure fish, a form of aggressive mimicry. Given its unique head morphology, research suggests this species has undergone strong natural selection for bite performance, which can directly or indirectly affect fitness. Research also suggests that M. temminckii thermoregulates by altering its depth in the water column, as this species is rarely seen basking. This turtle must be handled with extreme care and considered potentially dangerous. The species can bite through the handle of a broom, and rare cases have been reported in which human fingers have been cleanly bitten off. No human deaths have been reported to have been caused by the alligator snapping turtle. Diet The alligator snapping turtle is an opportunistic feeder that is almost entirely carnivorous. It relies on both catching live food and scavenging dead organisms. In general, it will eat almost anything it can catch. Fishermen have glorified the species' ability to catch fish and to deplete fish populations, whereas in fact it largely targets any abundant and easily caught prey, and rarely has any extensive deleterious effect on fish populations. Its natural diet consists primarily of fish and fish carcasses, mollusks, carrion, and amphibians, but it is also known to eat snakes, snails, worms and other invertebrates, crayfish, insects, water birds, aquatic plants, other turtles, and sometimes even small alligators. In one study conducted in Louisiana, 79.8% of the stomach contents of adult alligator snapping turtles was found to be composed of other turtles, although the resistance of shell and reptile-bone fragments to digestion may have led these fragments to remain longer in the digestive tract than other items. This species may also, on occasion, prey on aquatic rodents, including nutrias and muskrats, or even snatch other small to mid-sized mammals, including squirrels, mice, opossums, raccoons, and armadillos, when they attempt to swim or come near the water's edge. In the wild, alligator snapping turtles are also recorded eating a wide range of plant matter such as seeds, tubers, stalks, American persimmons, wild grape, water hickory, pecans, and locust.
Between March and October, stomach samples of 65 turtles showed that 56% of their diet by volume was composed of acorns of water, overcup, and willow oaks. The most frequently eaten food item was fish; despite this, fish made up only 7% of their diet by volume. Mammalian and bird prey, although less frequently eaten, made up 10% and 7% of the diet by volume, respectively. As ingested willow oak acorns were found to germinate faster after defecation, it is suggested that alligator snapping turtles may be important seed dispersers of oak trees. While downstream dispersal of acorns is passive, ingestion by alligator snapping turtles could facilitate dispersal of acorns upstream as well as laterally across streams. The alligator snapping turtle seemingly most often hunts at night, though it may also hunt diurnally. By day, it may try to attract fish and other prey by sitting quietly at the bottom of murky water and letting its jaws hang open to reveal its tongue appendage, which looks like a small, pink worm in the back of its gray mouth, luring the prey into striking distance. The vermiform tongue imitates the movements of a worm, drawing prey to the turtle's mouth. The mouth is then closed with tremendous speed and force, completing the ambush. Although the turtle does not actively hunt its prey, it can detect chemosensory cues from prey, as the mud turtle does, in order to choose the location in which it is most likely to catch food. Small fish, such as minnows, are often caught in this way by younger alligator snapping turtles, whereas adults must eat a greater quantity per day and must forage more actively. Though not a regular food source for them, adult alligator snappers have even been known to kill and eat small American alligators. Reproduction and lifespan Maturity is reached around 12 years of age. Mating takes place yearly, in early spring in the southern part of its geographic range and in later spring in the northern part. About two months later, the female builds a nest and lays a clutch of 10–50 eggs. Some females lay eggs every year, while others lay eggs every other year. The sex of the young depends on the temperature at which the eggs are incubated. This is called temperature-dependent sex determination, a mechanism used by many turtle species. For the alligator snapping turtle, higher temperatures produce more males in a clutch. Nests are typically excavated at least 50 yards from the water's edge to prevent them from being flooded and drowned. Incubation takes from 100 to 140 days, and hatchlings emerge in the early fall. Though its potential lifespan in the wild is unknown, the alligator snapping turtle is believed to be capable of living to 200 years of age, but 80 to 120 is more likely. In captivity, it typically lives between 20 and 70 years. Predation The alligator snapping turtle is most vulnerable to predators before and shortly after hatching. The eggs can be eaten by birds or mammals. The risk of predation decreases as the turtle gets bigger, so the adult turtle does not have as many predators. When the turtles are young, their largest predator in many parts of their range is the northern river otter (Lontra canadensis). Humans are also a threat to the alligator snapping turtle. Under human care The alligator snapping turtle is sometimes captive-bred as a pet and is readily available in the exotic animal trade.
Due to its potential size and specific needs, it does not make a particularly good pet for any but the most experienced aquatic turtle keepers. It prefers to feed on live fish, but will readily accept other types of meat or leafy vegetables if offered. Hand feeding is dangerous. Extreme temperatures are known to affect the turtle's appetite and can result in the turtle refusing to feed until the temperature has been regulated. Due to the turtle's sheer size, handling an adult specimen poses significant problems. A smaller turtle can be held with relative safety by the sides of its shell; a larger turtle, with its proportionately longer neck and greater reach, is held more safely by grasping it just behind the head or close to the base of the tail. Despite its reputation, the alligator snapping turtle is typically not prone to biting. However, if provoked, it is quite capable of delivering a powerful bite, which can easily amputate fingers or cause other significant injuries, such as cuts. In some U.S. states where the alligator snapping turtle does not naturally occur (such as California), residents are prohibited from keeping it as a pet. Invasive species Some alligator snapping turtles have been released into, or have escaped into, waters of the Czech Republic, Germany, and Hungary. In Bavaria, a turtle was accused of causing injury to a child, but the claim was never substantiated and the turtle in question was never found. In Bohemia, four turtles of this species have been caught. In Hungary, one turtle was caught in the middle of a street near a lake. Alligator snapping turtles have been found throughout Italy since the early 2000s. Certain EU countries have strong laws against keeping the alligator snapping turtle without permission, as it is an invasive species. In February 2024, a single male was found in Urswick Tarn in Cumbria, England. The turtle, nicknamed 'Fluffy' by his rescuers, has since been moved to the National Centre for Reptile Welfare in Kent. Established non-native invasive populations of the alligator snapping turtle also exist in South Africa. Conservation status Because of collection for the exotic pet trade, overharvesting for its meat, and habitat destruction, some states have imposed bans on collecting the alligator snapping turtle from the wild. The IUCN lists it as a threatened species, and as of 23 February 2023, it was listed as a CITES Appendix II species, meaning international trade (including in parts and derivatives) is regulated by the CITES permit system. The alligator snapping turtle is now endangered in several states, including Indiana, Illinois, Kentucky, and Missouri, where it is protected by state law. It is designated as "in need of conservation" in Kansas. In October 2013, one was found in the Prineville Reservoir in Oregon. It was captured and euthanized by the Oregon Department of Fish and Wildlife, which considers alligator snapping turtles to be an invasive species; this was the first found in the state. In June 2024, it was announced that captive alligator snapping turtles, bred in Oklahoma, would be reintroduced to the Neosho River in Kansas in hopes of bringing the species back to its waterways.
Biology and health sciences
Reptiles
null
1116938
https://en.wikipedia.org/wiki/Coumarin
Coumarin
Coumarin, or 2H-chromen-2-one, is an aromatic organic chemical compound with formula C9H6O2. Its molecule can be described as a benzene molecule with two adjacent hydrogen atoms replaced by an unsaturated lactone ring, forming a second six-membered heterocycle that shares two carbons with the benzene ring. It belongs to the benzopyrone chemical class and is considered a lactone. Coumarin is a colorless crystalline solid with a sweet odor resembling the scent of vanilla and a bitter taste. It is found in many plants, where it may serve as a chemical defense against predators. While coumarin itself is not an anticoagulant, its 4-hydroxy derivatives, such as the fungal metabolite dicoumarol, interfere with vitamin K, a key component in blood clotting. A related compound, the prescription anticoagulant warfarin, is used to inhibit formation of blood clots, deep vein thrombosis, and pulmonary embolism. Etymology Coumarin is derived from coumarou, the French word for the tonka bean, itself from the Old Tupi word for its tree. History Coumarin was first isolated from tonka beans in 1820 by A. Vogel of Munich, who initially mistook it for benzoic acid. Also in 1820, Nicholas Jean Baptiste Gaston Guibourt (1790–1867) of France independently isolated coumarin, but he realized that it was not benzoic acid. In a subsequent essay he presented to the pharmacy section of the Académie Royale de Médecine, Guibourt named the new substance coumarine. In 1835, the French pharmacist A. Guillemette proved that Vogel and Guibourt had isolated the same substance. Coumarin was first synthesized in 1868 by the English chemist William Henry Perkin. Coumarin has been an integral part of the fougère genre of perfume since it was first used in Houbigant's Fougère Royale in 1882. Synthesis Coumarin can be prepared by a number of name reactions, with the Perkin reaction between salicylaldehyde and acetic anhydride being a popular example. The Pechmann condensation provides another route to coumarin and its derivatives starting from phenol, as does the Kostanecki acylation, which can also be used to produce chromones. Biosynthesis Coumarin arises via lactonization of ortho-hydroxylated cis-hydroxycinnamic acid. Natural occurrence Coumarin is found naturally in many plants. Freshly ground plant parts contain higher amounts of desired and undesired phytochemicals than powder. In addition, whole plant parts are harder to counterfeit; for example, one study showed that authentic Ceylon cinnamon bark contained 0.012 to 0.143 mg/g coumarin, but samples purchased at markets contained up to 3.462 mg/g, possibly because those were mixed with other cinnamon varieties. Vanilla grass (Anthoxanthum odoratum) Sweet woodruff (Galium odoratum) Sweet grass (Hierochloe odorata) Sweet-clover (genus Melilotus) Meranti trees (genus Shorea) Tonka bean (Dipteryx odorata) Cinnamon; a 2013 study showed different varieties containing different levels of coumarin: Ceylon cinnamon or true cinnamon (Cinnamomum verum): 0.005 to 0.090 mg/g Chinese cinnamon or Chinese cassia (C. cassia): 0.085 to 0.310 mg/g Indonesian cinnamon or Padang cassia (C. burmannii): 2.14 to 9.30 mg/g Saigon cinnamon or Vietnamese cassia (C. loureiroi): 1.06 to 6.97 mg/g Deertongue (Carphephorus odoratissimus), Tilo (Justicia pectoralis), Mullein (genus Verbascum) Many cherry blossom tree varieties (of the genus Prunus). Related compounds are found in some but not all specimens of genus Glycyrrhiza, from which the root and flavour licorice derives.
Coumarin is found naturally also in many edible plants such as strawberries, black currants, apricots, and cherries. Coumarins were found to be uncommon but occasional components of propolis by Santos-Buelga and Gonzalez-Paramas 2017. Biological function Coumarin has appetite-suppressing properties, which may discourage animals from eating plants that contain it. Though the compound has a pleasant sweet odor, it has a bitter taste, and animals tend to avoid it. Metabolism The biosynthesis of coumarin in plants proceeds via hydroxylation, glycosylation, and cyclization of cinnamic acid. In humans, the enzyme encoded by the gene UGT1A8 has glucuronidase activity with many substrates, including coumarins. Derivatives Coumarin is used in the pharmaceutical industry as a precursor reagent in the synthesis of a number of synthetic anticoagulant pharmaceuticals similar to dicoumarol. 4-Hydroxycoumarins are a type of vitamin K antagonist: they block the regeneration and recycling of vitamin K. These chemicals are sometimes also incorrectly referred to as "coumadins" rather than 4-hydroxycoumarins. Some of the 4-hydroxycoumarin anticoagulant class of chemicals are designed to have high potency and long residence times in the body, and these are used specifically as rodenticides ("rat poison"). Death occurs after a period of several days to two weeks, usually from internal hemorrhaging. Uses Coumarin is often found in artificial vanilla substitutes, despite having been banned as a food additive in numerous countries since the mid-20th century. It is still used as a legal flavorant in soaps, rubber products, and the tobacco industry, particularly for sweet pipe tobacco and certain alcoholic drinks. Toxicity Coumarin is moderately toxic to the liver and kidneys of rodents, with a median lethal dose (LD50) of 293 mg/kg in the rat, a low toxicity compared to related compounds. Coumarin is hepatotoxic in rats, but less so in mice. Rodents metabolize it mostly to 3,4-coumarin epoxide, a toxic, unstable compound that on further differential metabolism may cause liver cancer in rats and lung tumors in mice. Humans metabolize it mainly to 7-hydroxycoumarin, a compound of lower toxicity, and no adverse effect has been directly measured in humans. The German Federal Institute for Risk Assessment has established a tolerable daily intake (TDI) of 0.1 mg coumarin per kg body weight, but also advises that a higher intake for a short time is not dangerous. The Occupational Safety and Health Administration (OSHA) of the United States does not classify coumarin as a carcinogen for humans. European health agencies have warned against consuming high amounts of cassia bark, one of the four main species of cinnamon, because of its coumarin content. According to the German Federal Institute for Risk Assessment (BfR), 1 kg of (cassia) cinnamon powder contains about 2.1 to 4.4 g of coumarin. Powdered cassia cinnamon weighs 0.56 g/cm3, so a kilogram of cassia cinnamon powder equals 362.29 teaspoons; one teaspoon of cassia cinnamon powder therefore contains 5.8 to 12.1 mg of coumarin, which may be above the tolerable daily intake for smaller individuals (the arithmetic is worked through in the sketch below). However, the BfR only cautions against a high daily intake of foods containing coumarin; its report specifically states that Ceylon cinnamon (Cinnamomum verum) contains "hardly any" coumarin.
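The teaspoon figures above follow from simple unit conversion. A minimal Python sketch reproduces them (illustrative only; the teaspoon volume of 4.92892 mL is an assumed conversion, not stated in the article):

```python
# Sanity check of the cassia cinnamon arithmetic quoted above.

BULK_DENSITY = 0.56                  # g/cm^3, powdered cassia cinnamon (from the text)
COUMARIN_PER_KG = (2100.0, 4400.0)   # mg coumarin per kg of powder (BfR figures)
TSP_ML = 4.92892                     # mL per US teaspoon (assumed conversion)
TDI_MG_PER_KG_BW = 0.1               # tolerable daily intake, mg per kg body weight

volume_ml = 1000.0 / BULK_DENSITY        # volume of 1 kg of powder
teaspoons_per_kg = volume_ml / TSP_ML    # ~362.29 teaspoons
low, high = (c / teaspoons_per_kg for c in COUMARIN_PER_KG)
print(f"1 kg of powder = {teaspoons_per_kg:.2f} teaspoons")
print(f"coumarin per teaspoon: {low:.1f} to {high:.1f} mg")

# Body weight at which a single low-end teaspoon already reaches the TDI:
print(f"a low-end teaspoon equals the TDI for a {low / TDI_MG_PER_KG_BW:.0f} kg person")
```

Running this yields 362.29 teaspoons per kilogram and 5.8 to 12.1 mg of coumarin per teaspoon, matching the figures in the text; at the low end, a single teaspoon corresponds to the TDI of a roughly 58 kg person.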
The European Regulation (EC) No 1334/2008 sets the following maximum limits for coumarin: 50 mg/kg in traditional and/or seasonal bakery ware containing a reference to cinnamon in the labeling, 20 mg/kg in breakfast cereals including muesli, 15 mg/kg in fine bakery ware, with the exception of traditional and/or seasonal bakery ware containing a reference to cinnamon in the labeling, and 5 mg/kg in desserts (these limits are encoded in the sketch at the end of this article). An investigation by the Danish Veterinary and Food Administration in 2013 showed that baked goods classified as fine bakery ware exceeded the European limit (15 mg/kg) in almost 50% of cases. The paper also mentions tea as an additional important contributor to overall coumarin intake, especially for children with a sweet tooth. Coumarin was banned as a food additive in the United States in 1954, largely because of the hepatotoxicity results in rodents. Coumarin is currently listed by the Food and Drug Administration (FDA) of the United States among "Substances Generally Prohibited From Direct Addition or Use as Human Food", according to 21 CFR 189.130, but some natural additives containing coumarin, such as the flavorant sweet woodruff, are allowed "in alcoholic beverages only" under 21 CFR 172.510. In Europe, popular examples of such beverages are Maiwein, white wine with woodruff, and Żubrówka, vodka flavoured with bison grass. Coumarin is subject to restrictions on its use in perfumery, as some people may become sensitized to it; however, the evidence that coumarin can cause an allergic reaction in humans is disputed. Minor neurological dysfunction was found in children exposed to the anticoagulants acenocoumarol or phenprocoumon during pregnancy. A group of 306 children were tested at ages 7–15 years to determine subtle neurological effects from anticoagulant exposure. Results showed a dose–response relationship between anticoagulant exposure and minor neurological dysfunction: overall, a 1.9-fold (90%) increase in minor neurological dysfunction was observed for children exposed to these anticoagulants, which are collectively referred to as "coumarins". In conclusion, the researchers stated, "The results suggest that coumarins have an influence on the development of the brain which can lead to mild neurologic dysfunctions in children of school age." Coumarin's addition to cigarette tobacco by Brown & Williamson caused executive Dr. Jeffrey Wigand to contact CBS's news show 60 Minutes in 1995, charging that a "form of rat poison" was being used as an additive. He held that, from a chemist's point of view, coumarin is an "immediate precursor" to the rodenticide (and prescription drug) coumadin. Wigand later stated that coumarin itself is dangerous, pointing out that the FDA had banned its addition to human food in 1954. In his later testimony, he repeatedly classified coumarin as a "lung-specific carcinogen". In Germany, coumarin is banned as an additive in tobacco. Alcoholic beverages sold in the European Union are limited to a maximum of 10 mg/L coumarin by law. Cinnamon flavor is generally cassia bark steam-distilled to concentrate the cinnamaldehyde, for example, to about 93%. Clear cinnamon-flavored alcoholic beverages generally test negative for coumarin, but if whole cassia bark is used to make mulled wine, then coumarin shows up at significant levels.
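As a compact restatement of the Regulation (EC) No 1334/2008 limits quoted above, here is a small Python sketch (an illustration, not legal guidance; the category keys are informal shorthand, not official regulatory wording):

```python
# Maximum coumarin levels from Regulation (EC) No 1334/2008, as quoted above.
COUMARIN_LIMITS_MG_PER_KG = {
    "traditional/seasonal bakery ware referencing cinnamon": 50,
    "breakfast cereals including muesli": 20,
    "fine bakery ware": 15,
    "desserts": 5,
}

def within_limit(category: str, measured_mg_per_kg: float) -> bool:
    """Return True if a measured coumarin level complies with the quoted limit."""
    return measured_mg_per_kg <= COUMARIN_LIMITS_MG_PER_KG[category]

# Example: the 2013 Danish investigation found fine bakery ware exceeding
# the 15 mg/kg limit in almost half of the sampled cases.
print(within_limit("fine bakery ware", 14.0))  # True  (compliant)
print(within_limit("fine bakery ware", 22.0))  # False (exceeds the limit)
```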
Physical sciences
Phenylpropanoids
Chemistry
1117286
https://en.wikipedia.org/wiki/Imidazole
Imidazole
Imidazole (ImH) is an organic compound with the formula C3H4N2. It is a white or colourless solid that is soluble in water, producing a mildly alkaline solution. It can be classified as a heterocycle, specifically as a diazole. Many natural products, especially alkaloids, contain the imidazole ring. These imidazoles share the 1,3-C3N2 ring but feature varied substituents. This ring system is present in important biological building blocks, such as histidine and the related hormone histamine. Many drugs contain an imidazole ring, such as certain antifungal drugs, the nitroimidazole series of antibiotics, and the sedative midazolam. When fused to a pyrimidine ring, it forms a purine, the most widely occurring nitrogen-containing heterocycle in nature. The name "imidazole" was coined in 1887 by the German chemist Arthur Rudolf Hantzsch (1857–1935). Structure and properties Imidazole is a planar 5-membered ring that exists in two equivalent tautomeric forms, because the hydrogen can be bound to one or the other nitrogen atom. Imidazole is a highly polar compound, as evidenced by its electric dipole moment of 3.67 D, and is highly soluble in water. The compound is classified as aromatic due to the presence of a planar ring containing 6 π-electrons (a pair of electrons from the protonated nitrogen atom and one from each of the remaining four atoms of the ring). Some resonance structures of imidazole are shown below: Amphoterism Imidazole is amphoteric; that is, it can function both as an acid and as a base. As an acid, the pKa of imidazole is 14.5, making it less acidic than carboxylic acids, phenols, and imides, but slightly more acidic than alcohols. The acidic proton is the one bound to nitrogen. Deprotonation gives the imidazolide anion, which is symmetrical. As a base, the pKa of the conjugate acid (cited as pKBH+ to avoid confusion between the two) is approximately 7, making imidazole approximately sixty times more basic than pyridine; a back-of-the-envelope check of this factor appears in the sketch later in this article. The basic site is the nitrogen with the lone pair (not the one bound to hydrogen). Protonation gives the imidazolium cation, which is symmetrical. Preparation Imidazole was first reported in 1858 by the German chemist Heinrich Debus, although various imidazole derivatives had been discovered as early as the 1840s. He showed that glyoxal, formaldehyde, and ammonia condense to form imidazole (glyoxaline, as it was originally named). This synthesis, while producing relatively low yields, is still used for generating C-substituted imidazoles. In one microwave modification, the reactants are benzil, benzaldehyde, and ammonia in glacial acetic acid, forming 2,4,5-triphenylimidazole ("lophine"). Imidazole can be synthesized by numerous methods besides the Debus method. Many of these syntheses can also be applied to different substituted imidazoles and imidazole derivatives by varying the functional groups on the reactants. These methods are commonly categorized by which and how many bonds form to make the imidazole ring. For example, the Debus method forms the (1,2), (3,4), and (1,5) bonds in imidazole, using each reactant as a fragment of the ring, and thus this method is a three-bond-forming synthesis. A small sampling of these methods is presented below. Formation of one bond The (1,5) or (3,4) bond can be formed by the reaction of an imidate and an α-aminoaldehyde or α-aminoacetal. The example below applies to imidazole when R1 = R2 = hydrogen.
Formation of two bonds The (1,2) and (2,3) bonds can be formed by treating a 1,2-diaminoalkane, at high temperatures, with an alcohol, aldehyde, or carboxylic acid. A dehydrogenating catalyst, such as platinum on alumina, is required. The (1,2) and (3,4) bonds can also be formed from N-substituted α-aminoketones and formamide with heat. The product will be a 1,4-disubstituted imidazole, but here, since R1 = R2 = hydrogen, imidazole itself is the product. The yield of this reaction is moderate, but it seems to be the most effective method of making the 1,4 substitution. Formation of four bonds This is a general method that is able to give good yields for substituted imidazoles. In essence, it is an adaptation of the Debus method called the Debus-Radziszewski imidazole synthesis. The starting materials are substituted glyoxal, aldehyde, amine, and ammonia or an ammonium salt. Formation from other heterocycles Imidazole can be synthesized by the photolysis of 1-vinyltetrazole. This reaction will give substantial yields only if the 1-vinyltetrazole is made efficiently from an organotin compound, such as 2-tributylstannyltetrazole. The reaction, shown below, produces imidazole when R1 = R2 = R3 = hydrogen. Imidazole can also be formed in a vapor-phase reaction. The reaction occurs with formamide, ethylenediamine, and hydrogen over platinum on alumina, and it must take place between 340 and 480 °C. This forms a very pure imidazole product. The Van Leusen imidazole synthesis can also be employed, allowing the preparation of imidazoles from aldimines by reaction with tosylmethyl isocyanide (TosMIC). The reaction has later been expanded to a two-step synthesis in which the aldimine is generated in situ: the Van Leusen three-component reaction (vL-3CR). Biological significance and applications Imidazole is incorporated into many important biological compounds. The most pervasive is the amino acid histidine, which has an imidazole side chain. Histidine is present in many proteins and enzymes, e.g. by binding metal cofactors, as seen in hemoglobin. Imidazole-based histidine compounds play an important role in intracellular buffering. Histidine can be decarboxylated to histamine, which can cause urticaria (hives) when it is produced during an allergic reaction. Pharmaceutical derivatives Imidazole substituents are found in many pharmaceuticals, such as the anticancer drug mercaptopurine. The imidazole group is present in many fungicides and antifungal, antiprotozoal, and antihypertensive medications. Imidazole is part of the theophylline molecule, found in tea leaves and coffee beans, which stimulates the central nervous system. A number of substituted imidazoles, including clotrimazole, are selective inhibitors of nitric oxide synthase. Other biological activities of the imidazole pharmacophore relate to the downregulation of intracellular Ca2+ and K+ fluxes, and interference with translation initiation. Substituted imidazole derivatives are valuable in the treatment of many systemic fungal infections. Imidazoles belong to the class of azole antifungals, which includes ketoconazole, miconazole, and clotrimazole. For comparison, another group of azoles is the triazoles, which includes fluconazole, itraconazole, and voriconazole. The difference between the imidazoles and the triazoles involves the mechanism by which they inhibit the cytochrome P450 enzyme.
The N3 of the imidazole compound binds to the heme iron atom of ferric cytochrome P450, whereas the N4 of the triazoles binds to the heme group. The triazoles have been shown to have a higher specificity for cytochrome P450 than imidazoles, thereby making them more potent than the imidazoles. Some imidazole derivatives show effects on insects; for example, sulconazole nitrate exhibits a strong anti-feeding effect on the keratin-digesting Australian carpet beetle larvae Anthrenocerus australis, as does econazole nitrate with the common clothes moth Tineola bisselliella. Industrial applications Imidazole itself has few direct applications. It is instead a precursor to a variety of agrichemicals, including enilconazole, climbazole, clotrimazole, prochloraz, and bifonazole. Depolymerizing PET via imidazolysis Polyethylene terephthalate (PET) is a widely used plastic found in clothing, food packaging, beverage bottles, and thermoplastic resins. The massive accumulation of PET waste, mostly from single-use beverage bottles and food packaging, creates serious environmental problems. As a type of polyester, PET can be broken down through a process called depolymerization, which involves breaking its molecular chains using chemical methods. A method of depolymerization called "imidazolysis" uses imidazole and similar compounds to break down PET. When PET reacts with an excess of imidazole, it produces 1,1′-terephthaloylbisimidazole (TBI). TBI can be further processed into smaller products, including amides, benzimidazoles, and esters, or even reused to create new polymers. TBI is a flexible intermediate compound, meaning it can be stored and later transformed into specific products as needed. This may allow manufacturers to delay deciding on the final products until after the depolymerization process, providing flexibility to meet different industrial demands. Imidazolysis can also be used to break down other polyesters and polyurethanes, making it a versatile approach for recycling plastics. Coordination chemistry Imidazole and its derivatives have a high affinity for metal cations. One of the applications of imidazole is in the purification of His-tagged proteins in immobilised metal affinity chromatography (IMAC). Imidazole is used to elute tagged proteins bound to nickel ions attached to the surface of beads in the chromatography column. An excess of imidazole is passed through the column, which displaces the His-tag from nickel coordination, freeing the His-tagged proteins. Use in biological research Imidazole is a suitable buffer for pH 6.2 to 7.8. Pure imidazole has essentially no absorbance at protein-relevant wavelengths (280 nm); however, lower-purity imidazole can show notable absorbance at 280 nm. Imidazole can interfere with the Lowry protein assay. Imidazole is often used in protein purification, where recombinant proteins with polyhistidine tags are immobilized onto nickel resins and eluted with a high imidazole concentration. Salts of imidazole Salts of imidazole where the imidazole ring is the cation are known as imidazolium salts (for example, imidazolium chloride or nitrate). These salts are formed by protonation of imidazole or by substitution at nitrogen. They have been used as ionic liquids and as precursors to stable carbenes. Salts where a deprotonated imidazole is the anion are also well known; these salts are known as imidazolates (for example, sodium imidazolate, NaC3H3N2).
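The buffering range and the basicity comparison quoted earlier both follow from the conjugate acid's pKa of about 7. Below is a minimal Python sketch applying the Henderson–Hasselbalch relation (illustrative; pyridine's conjugate-acid pKa of about 5.2 is an assumed literature value, not stated in this article):

```python
# Henderson–Hasselbalch: pH = pKa + log10([base]/[acid]), so for the
# imidazolium/imidazole pair, [ImH2+]/[ImH] = 10 ** (pKa - pH).
PKA_IMIDAZOLIUM = 7.0  # conjugate-acid pKa of imidazole (from the text)
PKA_PYRIDINIUM = 5.2   # conjugate-acid pKa of pyridine (assumed literature value)

def protonated_fraction(pH, pKa=PKA_IMIDAZOLIUM):
    """Fraction of imidazole present as the imidazolium cation at a given pH."""
    ratio = 10 ** (pKa - pH)  # [protonated] / [neutral]
    return ratio / (1 + ratio)

# Across the stated buffering range (pH 6.2 to 7.8), the protonated fraction
# swings widely, which is what makes imidazole an effective buffer there.
for pH in (6.2, 7.0, 7.8):
    print(f"pH {pH}: {100 * protonated_fraction(pH):5.1f}% protonated")

# Relative base strength: 10 ** (7.0 - 5.2) is roughly 60, consistent with the
# article's "approximately sixty times more basic than pyridine".
print(f"imidazole/pyridine basicity ratio ~ {10 ** (PKA_IMIDAZOLIUM - PKA_PYRIDINIUM):.0f}")
```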
Related heterocycles Benzimidazole, an analog with a fused benzene ring Dihydroimidazole or imidazoline, an analog in which the 4,5-double bond is saturated Pyrrole, an analog with only one nitrogen atom, in position 1 Oxazole, an analog with the nitrogen atom in position 1 replaced by oxygen Thiazole, an analog with the nitrogen atom in position 1 replaced by sulfur Pyrazole, an analog with two adjacent nitrogen atoms Triazoles, analogs with three nitrogen atoms Safety Imidazole has low acute toxicity, as indicated by the LD50 of 970 mg/kg (rat, oral).
Physical sciences
Alkaloids
Chemistry
1117328
https://en.wikipedia.org/wiki/Pickaxe
Pickaxe
A pickaxe, pick-axe, or pick is a generally T-shaped hand tool used for prying. Its head is typically metal, attached perpendicularly to a longer handle, traditionally made of wood, occasionally metal, and increasingly fiberglass. A standard pickaxe, similar to a "pick mattock", has a pointed end on one side of its head and a broad flat "axe" blade opposite. A gradual curve characteristically spans the length of the head. The next most common configuration features two spikes, one slightly longer than the other. The pointed end is used both for breaking and prying, the axe for hoeing, skimming, and chopping through roots. Developed as agricultural tools in prehistoric times, picks have evolved into other tools such as the plough and the mattock. They have also been used in general construction and mining, and adapted to warfare. Etymology The Oxford Dictionary of English states that both pick and pickaxe have the same meaning: a tool with a long handle at right angles to a curved iron or steel bar with a point at one end and a chisel or point at the other, used for breaking up hard ground or rock. The term pickaxe is a folk-etymology alteration of a Middle English word that entered English via Anglo-Norman and Old French from Medieval Latin, related to a Latin root. Though modern picks usually feature a head with both a pointed end and an adze-like flattened blade on the other end, the current spelling is influenced by axe, and pickaxe, pick-axe, or sometimes just pick cover any and all versions of the tool. History In prehistoric times, a large shed antler from a suitable deer species (e.g. red deer) was often cut down to its shaft and its lowest tine and used as a one-pointed pick, sometimes together with a large animal's shoulder blade as a crude shovel. In medieval times, the pickaxe was used as a weapon of war. As a weapon The historic pickaxe was readily adapted to a weapon for hand-to-hand combat in ancient times, and over the centuries aspects of it were incorporated in various battle axes. A pickaxe handle (sometimes called a "pickhandle" or "pick helve") is sometimes used on its own as a club for bludgeoning. In The Grapes of Wrath by John Steinbeck, pick handles were used against migrant farmers, and Georgia governor Lester Maddox famously threatened to use a similar, more slender axe handle to bar blacks from entering a whites-only restaurant in the heated days of the American civil rights movement of the 1960s. A pick handle is officially used as a baton in the British Army, and pickaxes are commonly carried by Pioneer Sergeants in the British Army. A normal pickaxe handle is made of ash or hickory wood; British Army pickaxe handles must, by regulation, be of an exact length, so that they can be used for measuring in the field. Newer variant designs include handles with a plastic or steel casing on the thick end, and handles made of carbon fibre.
Technology
Hand tools
null
1117403
https://en.wikipedia.org/wiki/Goliath%20frog
Goliath frog
The goliath frog (Conraua goliath), also known commonly as the giant slippery frog and the goliath bullfrog, is a species of frog in the family Conrauidae. The goliath frog is the largest living frog: specimens can reach up to about 32 cm (12.6 in) in snout–vent length and 3.25 kg (7.2 lb) in weight. The species has a relatively small habitat range in Cameroon and Equatorial Guinea. Its numbers are dwindling due to habitat destruction, collection for food, and the pet trade. Description The male and female are very similar in appearance, and measured weights and snout–vent lengths in a sample of 15 individuals varied widely. Including the outstretched legs, the total length of the largest specimens is considerably greater still. The heaviest verified specimen was caught in the Muni River system in 1960, and the longest verified specimen was caught in the same river system in 1966. There are unverified claims of considerably larger specimens, but they are dubious. The eyes are exceptionally large, and the conspicuous tympanum is separated from the eye by a short distance in adults. Goliath frog eggs and tadpoles are about the same size as those of other frogs despite the species' very large adult form. A lateral fold extends from the eye to the posterior portion of the tympanum. The toes are fully webbed, with large interdigital membranes extending down to the toe tips; the second toe is the longest. The skin on the dorsum and on top of the limbs is granular. Dorsal coloration is green sienna, while the abdomen and ventral parts of the limbs are yellow/orange. Goliath frogs have acute hearing but no vocal sac, and they also lack nuptial pads. Habitat and distribution The goliath frog is mainly found near waterfalls in Equatorial Guinea and Cameroon. Its range experiences two main seasons: the dry season, which lasts from November to April, and the rainy season, which lasts from May to October. Due to its large size, the goliath frog has an extremely selective distribution. The species is primarily located in a dense equatorial forest fringe which runs somewhat parallel to the coast and is surrounded by rivers. The goliath frog has been recorded in the Sanaga Basin (mainly near the Nachtigal cascades and in the Sakbayeme rapids), the Kienke Basin, the Ntem Basin (mainly near the rapids of the Mensolo and Nsana), and the Mbía Basin (where it was found to be very abundant in the rapids and cascades). These distribution patterns emphasize its limited environment, with a clear preference for river habitats. Conservation The primary threat to the goliath frog is hunting, as it is considered a food source in its native range. The IUCN has highlighted the need for conservation measures, in cooperation with local communities, to ensure hunting occurs at sustainable levels. To a lesser extent, the species is also threatened by habitat loss and degradation. Goliath frogs have been extensively exported to zoos and the pet trade, but have proven shy and nervous in captivity. Although captives may live longer than their wild counterparts, the species has not been bred in captivity. Due to its classification as an endangered species, the Equatorial Guinean government has declared that no more than 300 goliath frogs may be exported per year for the pet trade, but few now seem to be exported from this country. Diet A study determined that the goliath frog consumes a wide variety of food, suggesting that the frog is omnivorous with a carnivorous preference.
Their prey are terrestrial, aquatic, and semi-aquatic, indicating that they hunt both on land and in water. Food preferences differed among the different weight groups of frogs, possibly correlating with different stages of development. Frogs weighing less than consumed annelids, arachnids, myriapods, insects, crustaceans, gastropods, and reptiles. Frogs weighing more than consumed arachnids, myriapods, insects, crustaceans, and gastropods, with a significantly higher occurrence of myriapods. Annelids and reptiles were present only in the diet of lower-weight frogs, indicating a more diversified diet for younger goliath frogs. Fully developed frogs are also believed to prey on fish, small mammals, and smaller frogs. Reproduction As in most amphibians, water is vital for reproduction. Because the goliath frog lacks a vocal sac, it does not produce mating calls, a behavior generally present in frogs and toads. The egg masses consist of several hundred to a few thousand eggs, approximately each, and are often attached to aquatic vegetation. Goliath frogs have been observed to create three main nest types, all semi-circular in shape and located in or near a river. The first type of nest is constructed by clearing a section in an existing river pool. The second is constructed by expanding an already existing pool and damming it off from the river. The third is constructed by digging a new pool roughly wide and deep, sometimes moving quite large stones in the process. This may partially explain the goliath frog's large size, as larger frogs may be more successful at moving heavy objects when constructing their nests. Adults have also been shown to guard the nests at night. Although not confirmed, there are indications that the nest is constructed by the male, while the female guards the nest with the eggs. Larval development takes between 85 and 95 days. Life cycle Longevity The goliath frog can live up to 15 years in the wild and up to 21 years in captivity. Because of its large size, the goliath frog is only known to be preyed on by humans, although other predators are possible. Developmental Stages While the reproductive behavior of this species is not well known, studies by Lamotte, Perret, and Zahl have allowed an overall chronological table of larval development to be created. Typically after 24 hours, the cover of the egg mass becomes yellow and the eggs become dark gray-brown. These egg masses were found attached to the bottom of plants. First Month: No organs were differentiated; only the ocular region showed significant pigmentation and transparent external gills. The lengths were , and while the body and tail appeared slightly pigmented, the abdomen was always nearly white. Second Month: The activity of the tadpoles increased greatly as they began to feed on leaves. They developed denser pigmentation, and the spiracle and anal tube began to become visible. Additionally, the mouth and the eyes began to function. Sizes ranged . As the month continued, pigmentation became more intense with the appearance of nearly black spots, two rows of teeth developed on both the upper and lower lip, feeding increased greatly, and their size became . Third Month: The posterior legs began to form and the length of the tadpoles was around 40 mm. As the month progressed, the posterior legs became larger and joints and digits became more distinguishable. The total length of the tadpoles was now 45 mm.
Finally, at the end of the month, the anterior legs have fully appeared, the posterior legs have grown long and powerful digits, the mouth has become an arched slit, tail regression has begun, and the tadpoles begin to put their heads out of the water in order to breathe. Fourth Month: All of the specimens in the study reached the final stage of metamorphosis. The tail is either completely or nearly reabsorbed, the shape and color of adults are attained with slightly lighter and greener pigmentation, and the total length is . The large size characteristic of these frogs is not yet attained. The entire process of larval development takes approximately 85–95 days to complete. Parental care Nesting patterns The goliath frog creates nests as sites for its offspring as a form of parental care. There are three main types of nests: type 1 consists of rock pools cleared of detritus and leaf litter, type 2 uses existing washouts at riverbanks, and type 3 consists of depressions dug by the frogs into gravel riverbanks. Each nest type has advantages and disadvantages. Nest type 1 is the easiest to create, since only cleaning of the substrate is required. However, these nests are the least reliable, since they are usually positioned in the river bed, which makes them extremely vulnerable to being washed out by rising water levels and to predators entering the nest. Nest types 2 and 3 are less likely to be washed out; however, they are at increased risk of drying up during the dry season. Thus, while each nest type offers clear advantages, nests are typically constructed according to environmental cues (whether it is the dry season or the rainy season). All nest types can be used several times, and can hold three distinct cohorts of tadpoles. The construction of these nests may also explain how the goliath frog became the largest frog: digging out nests which exceed 1 m in diameter is an extremely arduous task, and other species which perform this task are also quite large, including male African bullfrogs, gladiator frogs, and the Bornean giant river frog. Typically, the goliath frog attaches its eggs underwater, in small groups, to rocks, gravel, or larger pieces of wood. The construction of the nests may help reduce predation, since it is more difficult for fish and shrimp (species which typically eat the eggs) to find the eggs, and it may prevent the eggs from being washed away by the rapid current. In contrast, changing water levels may also increase predation, cause more of the eggs to spill out, and increase tadpole mortality, since tadpoles and eggs may remain trapped within the nests. Additionally, these nests make goliath frogs less dependent on existing structures for egg deposition, which can prolong their breeding season and increase the number of suitable breeding sites (a site being judged suitable by the absence of predators and the presence of water, which is required for the offspring to survive). The process of constructing a nest also serves to advertise a male's reproductive quality to females, and it constitutes the main parental investment: once the female deposits the eggs after fertilization, there is no further parental investment.
Threats Parasites The goliath frog is endangered due to deforestation, overhunting, and parasites. One particular parasite is a species of microfilarial nematode belonging to the genus Icoseilla. This parasite is often found within the blood and lymphatic system, and its spread throughout the lymphatic system can cause lethargy and mortality when the infection is serious. The parasite is more prevalent during the dry season, primarily because decreasing water speed allows more potential mosquito breeding sites to form. With more breeding sites (the frogs' primary habitat is near rivers and waterfalls, but in the dry season they tend to create breeding sites in areas that contain less water), there is more opportunity for the mosquitos to infect the frogs, increasing the transmission rate. Furthermore, a positive relationship was detected between host size and parasite abundance: the greater the size of the host, the more intense the infection. Within this species, male goliath frogs were found to be significantly more infected than females, which may be due to the weight difference between the two. Additionally, as in most species, the severity of infection increases with age. Helminth parasites There are also parasites that mainly target the gastro-intestinal tract of this frog. These helminth parasites are worm-like parasites divided into three main groups: flukes, which are leaf-shaped flatworms; tapeworms, which are elongated flatworms that inhabit extraintestinal tissues; and roundworms, which inhabit intestinal and extraintestinal sites. The goliath frog, however, was mainly infected by nematodes (90.5%), a type of roundworm. The helminth species discovered within the goliath frog were very similar to those discovered in amphibian hosts in other African countries, emphasizing that location and habitat are the main determinants of the prevalence of these parasites. However, the goliath frog was also infected by the nymph of Sebekia sp., which could be primarily due to these frogs sharing the same habitat as crocodiles (the definitive host) and fishes (the intermediate hosts). Frogs that originate in Loum and Yabassi, both in Cameroon, had the largest variety of helminth species, whereas frogs from Nkondjock had the smallest. This could be explained by differences in agricultural activity, deforestation, and poaching. Examination of liver weights revealed a higher accumulation of toxic products at Loum, due to the significant increase in agricultural practice there. Consequently, land use and its impact on water habitats play a significant role in the pattern of parasitism in goliath frogs. Additionally, the direct life cycles of the helminths may play an important role in species diversity. Parasites with direct life cycles spend a majority of their adult life within one host, and their offspring are spread directly from one host to another. These parasites also often lack an intermediate stage, which means they must be able to survive in the outside environment and establish themselves within a new host. This ability to adapt to new environments contributes greatly to the complexity of the helminth communities of goliath frogs.
Chytridiomycosis Chytridiomycosis is an infectious disease affecting amphibians, caused by the chytrid fungi Batrachochytrium dendrobatidis and Batrachochytrium salamandrivorans. It can cause sudden deaths owing to its high mortality rate. These fungi invade the surface layers of the skin, causing damage to the outer keratin layer. As the tadpole continues to grow, more keratin becomes present on the skin, allowing the fungus to spread to many parts of the body and resulting in the death of the tadpole. Amphibian skin is vital because it is physiologically active, playing important roles in regulating respiration, water, and electrolytes. While the mechanism by which this fungus kills frogs is not known, its invasion through the skin is likely involved, since it can cause electrolyte depletion and osmotic imbalance and make it more difficult for the frog to breathe. When infected with this fungus, a frog may have discolored skin, peeling of the outer layers of its skin, sluggish behavior, and legs spread slightly away from its body. The fungus is also transmissible, since it can be directly transferred through contact between frogs and tadpoles or through exposure to infected water. It therefore spreads easily and, given its high mortality rate among frogs, is extremely deadly. Physiology The goliath frog has extensive skin folds that promote respiratory gas exchange at high altitudes. Additionally, the lungs of these frogs are reduced to about one-third of the volume of other frogs', and they also have a smaller heart. This is primarily related to their method of predation: goliath frogs are typically sit-and-wait predators, meaning that they tend to capture their prey either by luring it or by relying on surprise and stealth. As a result, they have a reduced metabolic rate and a distinctive method of breathing. When breathing, each buccal movement (an expansion and contraction of the mouth that moves air into the lungs) pumps air at a rapid rate, and the extraction of oxygen from the air is slightly more efficient in this species of frog. These adaptations are very useful for the production of the territorial and reproductive calls created by these frogs. Typically, the call of the goliath frog, which is produced with the mouth open, is emitted at a high frequency of 4.4 kHz. Furthermore, because the goliath frog does not have a vocal sac, its way of producing reproductive calls differs from that of most frogs. Interactions with humans In addition to the impacts of climate change, agriculture, and deforestation, goliath frogs are also threatened by the local practice of hunting them for food. While hunting the frogs, locals will use lanterns to get their attention, then immobilize them using mesh nets. In Nkombia, they may also be captured with nets during the day while they are resting on rocks. However, this method of capture is not very effective, as the frogs are able to jump high and thus easily escape. Humans are the main predators of this species and the main cause of their endangerment. To save the species, hunting and environmental destruction would need to be limited.
Biology and health sciences
Frogs and toads
Animals
1117409
https://en.wikipedia.org/wiki/Verbena
Verbena
Verbena (), also known as vervain or verveine, is a genus in the family Verbenaceae. It contains about 150 species of annual and perennial herbaceous or semi-woody flowering plants. The majority of the species are native to the Americas and Asia; however, Verbena officinalis, the common vervain or common verbena, is the type species and native to Europe. Naming In English, the name Verbena is usually used in the United States and the United Kingdom; elsewhere, the terms verveine or vervain are in use. Description Verbena is an herbaceous flowering plant, belonging to the Verbenaceae family, and may be annual or perennial depending on the species. The leaves are usually opposite, simple, and in many species hairy, often densely so. The flowers are small, with five petals, and borne in dense spikes. Typically some shade of blue, they may also be white, pink, or purple, especially in cultivars. The genus can be divided into a diploid North American and a polyploid South American lineage, both with a base chromosome number of seven. The European species is derived from the North American lineage. It seems that verbena, as well as the related mock vervains (Glandularia), evolved from the assemblage provisionally treated under the genus name Junellia; both other genera were usually included in the Verbenaceae until the 1990s. Intergeneric chloroplast gene transfer by an undetermined mechanism – though probably not hybridization – has occurred at least twice from vervains to Glandularia: once between the ancestors of the present-day South American lineages, and once more recently, between V. orcuttiana or V. hastata and G. bipinnatifida. In addition, several species of verbena are of natural hybrid origin; the well-known garden vervain/verbena has an entirely muddled history. The relationships of this close-knit group are therefore hard to resolve with standard methods of computational phylogenetics. Cultivation Some species, hybrids and cultivars of verbena are used as ornamental plants. They are drought-resistant, tolerating full to partial sun, and enjoy well-drained, average soils. Plants are usually grown from seed. Some species and hybrids are not hardy and are treated as half-hardy annuals in bedding schemes. They are valued in butterfly gardening in suitable climates, attracting Lepidoptera such as the Hummingbird hawk-moth, Chocolate albatross, or the Pipevine swallowtail, and also hummingbirds, especially V. officinalis, which is also grown as a honey plant. The hybrid cultivars "Silver Anne" and "Sissinghurst" have gained the Royal Horticultural Society's Award of Garden Merit. Pests and diseases For some verbena pathogens, see List of verbena diseases. Cultivated verbenas are sometimes parasitized by the sweet potato whitefly (Bemisia tabaci) and spread this pest to other crops. Uses Although verbena ("vervain") has been used in herbalism and traditional medicine, usually as an herbal tonic, there is no high-quality evidence for its effectiveness. Verbena has been listed as one of the 38 plants used to prepare Bach flower remedies, a kind of alternative medicine promoted for its effect on health. According to Cancer Research UK, "essence therapists believe that using essences can help to increase your mental, emotional and spiritual wellbeing. However, essences are not used to prevent, control, or cure cancer or any other physical condition." The essential oil of various species, mainly common vervain, is traded as "Spanish verbena oil".
Considered inferior to oil of lemon verbena (Aloysia citrodora) in perfumery, it is of some commercial importance for herbalism. In culture Verbena has long been associated with divine and other supernatural forces. It was called "tears of Isis" in ancient Egypt, and later called "Hera's tears". In ancient Greece, it was dedicated to Eos Erigineia. The generic name is the Latin term for a plant sacred to the ancient Romans. Pliny the Elder describes verbena presented on Jupiter altars; it is not entirely clear whether this referred to a verbena specifically, or to the general term for prime sacrificial herbs. Pliny the Elder notes "the Magi especially make the maddest statements about the plant: that [among other things] a circle must be drawn with iron round the plant". The common names of verbena in many Central and Eastern European languages often associate it with iron. These include, for example, the Dutch name ("iron-hard"), the Danish ("medical ironwort"), the German ("true ironherb"), the Slovak ("medical ironherb"), and the Hungarian ("iron grass"). In the early Christian era, folk legend stated that V. officinalis was used to stanch Jesus' wounds after his removal from the cross. It was consequently called "holy herb" or (e.g. in Wales) "Devil's bane". According to the Wiccan writer Doreen Valiente, vervain flowers signify the goddess Diana and are often depicted on cimaruta, traditional Italian amulets. In the 1870 The History and Practice of Magic by "Paul Christian" (Jean-Baptiste Pitois), it is employed in the preparation of a mandragora charm. The book also describes its antiseptic capabilities (p. 336), and use as a protection against spells (pp. 339, 414). Romani people use vervain for love and good luck. While common vervain is not native to North America, it has been introduced there; the Pawnee, for example, have adopted it as an entheogen enhancer and in oneiromancy (dream divination), much as Calea zacatechichi is used in Mexico. An indeterminate vervain is among the plants on the eighth panel of the New World Tapestry (Expedition to Cape Cod). In the Victorian language of flowers, verbena held the dual meaning of enchantment and sensibility. Species The following species are accepted:
Biology and health sciences
Lamiales
null
1117429
https://en.wikipedia.org/wiki/Ivermectin
Ivermectin
Ivermectin is an antiparasitic drug. After its discovery in 1975, its first uses were in veterinary medicine to prevent and treat heartworm and acariasis. Approved for human use in 1987, it is used to treat infestations including head lice, scabies, river blindness (onchocerciasis), strongyloidiasis, trichuriasis, ascariasis and lymphatic filariasis. It works through many mechanisms to kill the targeted parasites, and can be taken by mouth, or applied to the skin for external infestations. It belongs to the avermectin family of medications. William Campbell and Satoshi Ōmura were awarded the 2015 Nobel Prize in Physiology or Medicine for its discovery and applications. It is on the World Health Organization's List of Essential Medicines, and is approved by the U.S. Food and Drug Administration as an antiparasitic agent. In 2022, it was the 314th most commonly prescribed medication in the United States, with more than 200,000 prescriptions. It is available as a generic medicine. Misinformation has been widely spread claiming that ivermectin is beneficial for treating and preventing COVID-19. Such claims are not backed by credible scientific evidence. Multiple major health organizations, including the U.S. Food and Drug Administration, the U.S. Centers for Disease Control and Prevention, the European Medicines Agency, and the World Health Organization have advised that ivermectin is not recommended for the treatment of COVID-19. Medical uses Ivermectin is used to treat human diseases caused by roundworms and a wide variety of external parasites. Worm infections For river blindness (onchocerciasis) and lymphatic filariasis, ivermectin is typically given as part of mass drug administration campaigns that distribute the drug to all members of a community affected by the disease. Adult worms survive in the skin and eventually recover to produce larval worms again; to keep the worms at bay, ivermectin is given at least once per year for the 10–15-year lifespan of the adult worms. The World Health Organization (WHO) considers ivermectin the drug of choice for strongyloidiasis. Ivermectin is also the primary treatment for Mansonella ozzardi and cutaneous larva migrans. The U.S. Centers for Disease Control and Prevention (CDC) recommends ivermectin, albendazole, or mebendazole as treatments for ascariasis. Ivermectin is sometimes added to albendazole or mebendazole for whipworm treatment, and is considered a second-line treatment for gnathostomiasis. Mites and insects Ivermectin is also used to treat infection with parasitic arthropods. Scabies – infestation with the mite Sarcoptes scabiei – is most commonly treated with topical permethrin or oral ivermectin. A single application of permethrin is more efficacious than a single treatment of ivermectin. For most scabies cases, ivermectin is used in a two-dose regimen: the first dose kills the active mites, but not their eggs. Over the next week, the eggs hatch, and a second dose kills the newly hatched mites. The two-dose regimen of ivermectin has similar efficacy to the single-dose permethrin treatment. Ivermectin is, however, more effective than permethrin when used in the mass treatment of endemic scabies. For severe "crusted scabies", where the parasite burden is orders of magnitude higher than usual, the U.S. Centers for Disease Control and Prevention (CDC) recommends up to seven doses of ivermectin over the course of a month, along with a topical antiparasitic.
Both head lice and pubic lice can be treated with oral ivermectin, an ivermectin lotion applied directly to the affected area, or various other insecticides. Ivermectin is also used to treat rosacea and blepharitis, both of which can be caused or exacerbated by Demodex folliculorum mites. Contraindications The only absolute contraindication to the use of ivermectin is hypersensitivity to the active ingredient or any component of the formulation. In children under the age of five or those who weigh less than , there is limited data regarding the efficacy or safety of ivermectin, though the available data demonstrate few adverse effects. However, the American Academy of Pediatrics cautions against use of ivermectin in such patients, as the blood-brain barrier is less developed, and thus there may be an increased risk of particular CNS side effects such as encephalopathy, ataxia, coma, or death. The American Academy of Family Physicians also recommends against use in these patients, given a lack of sufficient data to prove drug safety. Ivermectin is secreted in very low concentration in breast milk. It remains unclear if ivermectin is safe during pregnancy. Adverse effects Side effects, although uncommon, include fever, itching, and skin rash when taken by mouth; and red eyes, dry skin, and burning skin when used topically for head lice. It is unclear if the drug is safe for use during pregnancy, but it is probably acceptable for use during breastfeeding. Ivermectin is considered relatively free of toxicity in standard doses (around 300 μg/kg). Based on the drug safety data sheet for ivermectin, side effects are uncommon. However, serious adverse events following ivermectin treatment are more common in people with very high burdens of larval Loa loa worms in their blood. Those who have over 30,000 microfilaria per milliliter of blood risk inflammation and capillary blockage due to the rapid death of the microfilaria following ivermectin treatment. One concern is neurotoxicity after large overdoses, which in most mammalian species may manifest as central nervous system depression, ataxia, coma, and even death, as might be expected from potentiation of inhibitory chloride channels. Since drugs that inhibit the enzyme CYP3A4 often also inhibit P-glycoprotein transport, the risk of increased absorption past the blood-brain barrier exists when ivermectin is administered along with other CYP3A4 inhibitors. These drugs include statins, HIV protease inhibitors, many calcium channel blockers, lidocaine, the benzodiazepines, and glucocorticoids such as dexamethasone. During a typical treatment course, ivermectin can cause minor aminotransferase elevations. In rare cases it can cause mild clinically apparent liver disease. To provide context for the dosing and toxicity ranges, the LD50 of ivermectin in mice is 25 mg/kg (oral), and 80 mg/kg in dogs, corresponding to an approximated human-equivalent dose LD50 range of 2.02–43.24 mg/kg, which is far more than its FDA-approved usage (a single dose of 0.150–0.200 mg/kg to be used for specific parasitic infections). While ivermectin has also been studied for use in COVID-19, and while it has some ability to inhibit SARS-CoV-2 in vitro, achieving 50% inhibition in vitro was found to require an estimated oral dose of 7.0 mg/kg (or 35x the maximum FDA-approved dosage), high enough to be considered ivermectin poisoning.
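The human-equivalent dose range quoted above is consistent with the standard body-surface-area scaling commonly used to translate animal doses. As a minimal worked sketch, assuming the usual Km conversion factors (roughly 3 for mice, 20 for dogs, and 37 for humans), which are taken from general allometric-scaling guidance rather than from this article:

\[
\mathrm{HED} \;\approx\; D_{\text{animal}} \times \frac{K_{m,\text{animal}}}{K_{m,\text{human}}}
\]

\[
\text{mouse: } 25\ \mathrm{mg/kg} \times \tfrac{3}{37} \approx 2.02\ \mathrm{mg/kg},
\qquad
\text{dog: } 80\ \mathrm{mg/kg} \times \tfrac{20}{37} \approx 43.24\ \mathrm{mg/kg},
\]

which reproduces the 2.02–43.24 mg/kg range and sits far above the approved single therapeutic dose of 0.150–0.200 mg/kg.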
Despite insufficient data to show any safe and effective dosing regimen for ivermectin in COVID-19, doses far higher than FDA-approved dosing have been taken, leading the CDC to issue a warning of overdose symptoms including nausea, vomiting, diarrhea, hypotension, decreased level of consciousness, confusion, blurred vision, visual hallucinations, loss of coordination and balance, seizures, coma, and death. The CDC advises against consuming doses intended for livestock or doses intended for external use and warns that increasing misuse of ivermectin-containing products is increasing harmful overdoses. Pharmacology Mechanism of action Ivermectin and its related drugs act by interfering with the nerve and muscle functions of helminths and insects. The drug binds to glutamate-gated chloride channels common to invertebrate nerve and muscle cells. The binding pushes the channels open, which increases the flow of chloride ions and hyperpolarizes the cell membranes, paralyzing and killing the invertebrate. Ivermectin is safe for mammals (at the normal therapeutic doses used to cure parasite infections) because mammalian glutamate-gated chloride channels only occur in the brain and spinal cord: the avermectins usually do not cross the blood–brain barrier, and are unlikely to bind to other mammalian ligand-gated channels. Pharmacokinetics Ivermectin can be given by mouth, topically, or via injection. Oral doses are absorbed into systemic circulation; the alcoholic solution form has greater oral bioavailability than tablet and capsule forms. Ivermectin is widely distributed in the body. Ivermectin does not readily cross the blood-brain barrier of mammals due to the presence of P-glycoprotein (the MDR1 gene mutation affects the function of this protein). Crossing may still become significant if ivermectin is given at high doses, in which case brain levels peak 2–5 hours after administration. In contrast to mammals, ivermectin can cross the blood-brain barrier in tortoises, often with fatal consequences. Ivermectin is metabolized into eight different products by human CYP3A4, two of which (M1, M2) remain toxic to mosquitos. M1 and M2 also have longer elimination half-lives of about 55 hours. CYP3A5 produces a ninth metabolite. Chemistry Fermentation of Streptomyces avermitilis yields eight closely related avermectin homologues, of which B1a and B1b form the bulk of the products isolated. In a separate chemical step, the mixture is hydrogenated to give ivermectin, which is an approximately 80:20 mixture of the two 22,23-dihydroavermectin compounds. Ivermectin is a macrocyclic lactone. History The avermectin family of compounds was discovered by Satoshi Ōmura of Kitasato University and William Campbell of Merck. In 1970, Ōmura isolated a strain of Streptomyces avermitilis from woodland soil near a golf course along the southeast coast of Honshu, Japan. Ōmura sent the bacteria to William Campbell, who showed that the bacterial culture could cure mice infected with the roundworm Heligmosomoides polygyrus. Campbell isolated the active compounds from the bacterial culture, naming them "avermectins" and the bacterium Streptomyces avermitilis for the compounds' ability to clear mice of worms (in Latin: a 'without', vermis 'worms'). Of the various avermectins, Campbell's group found the compound "avermectin B1" to be the most potent when taken orally.
They synthesized modified forms of avermectin B1 to improve its pharmaceutical properties, eventually choosing a mixture of at least 80% 22,23-dihydroavermectin B1a and up to 20% 22,23-dihydroavermectin B1b, a combination they called "ivermectin". The discovery of ivermectin has been described as a combination of "chance and choice." Merck was looking for a broad-spectrum anthelmintic, which ivermectin is; however, Campbell noted that they "...also found a broad-spectrum agent for the control of ectoparasitic insects and mites." Merck began marketing ivermectin as a veterinary antiparasitic in 1981. By 1986, ivermectin was registered for use in 46 countries and was administered on a massive scale to cattle, sheep, and other animals. By the late 1980s, ivermectin was the bestselling veterinary medicine in the world. Following its blockbuster success as a veterinary antiparasitic, another Merck scientist, Mohamed Aziz, collaborated with the World Health Organization to test the safety and efficacy of ivermectin against onchocerciasis in humans. They found it to be highly safe and effective, prompting Merck to register ivermectin for human use as "Mectizan" in France in 1987. A year later, Merck CEO Roy Vagelos agreed that Merck would donate all ivermectin needed to eradicate river blindness. In 1998, that donation would be expanded to include ivermectin used to treat lymphatic filariasis. Ivermectin earned the title of "wonder drug" for the treatment of nematodes and arthropod parasites. Ivermectin has been used safely by hundreds of millions of people to treat river blindness and lymphatic filariasis. Half of the 2015 Nobel Prize in Physiology or Medicine was awarded jointly to Campbell and Ōmura for discovering ivermectin, "the derivatives of which have radically lowered the incidence of river blindness and lymphatic filariasis, as well as showing efficacy against an expanding number of other parasitic diseases". Society and culture COVID-19 misinformation Economics The initial price proposed by Merck in 1987 was per treatment, which was unaffordable for patients who most needed ivermectin. The company has donated hundreds of millions of courses of treatment since 1988 in more than 30 countries. Between 1995 and 2010, using donated ivermectin to prevent river blindness, the program is estimated to have prevented seven million years of disability at a cost of . Ivermectin is considered an inexpensive drug. As of 2019, ivermectin tablets (Stromectol) in the United States were the least expensive treatment option for lice in children at approximately , while Sklice, an ivermectin lotion, cost around for . The cost-effectiveness of treating scabies and lice with ivermectin has not been studied. Brand names It is sold under the brand names Heartgard, Sklice and Stromectol in the United States, Ivomec worldwide by Merial Animal Health, Mectizan in Canada by Merck, Iver-DT in Nepal by Alive Pharmaceutical and Ivexterm in Mexico by Valeant Pharmaceuticals International. In Southeast Asian countries, it is marketed by Delta Pharma Ltd. under the trade name Scabo 6. The formulation for rosacea treatment is sold under the brand name Soolantra. While in development, it was assigned the code MK-933 by Merck. Research Parasitic disease Ivermectin has been researched in laboratory animals as a potential treatment for trichinosis and trypanosomiasis. Ivermectin has also been tested on zebrafish infected with Pseudocapillaria tomentosa.
Tropical diseases Ivermectin is also of interest in the prevention of malaria, as it is toxic to both the malaria plasmodium itself and the mosquitos that carry it. A direct effect on malaria parasites could not be shown in an experimental infection of volunteers with Plasmodium falciparum. Use of ivermectin at the higher doses necessary to control malaria is probably safe, though large clinical trials have not yet been done to definitively establish the efficacy or safety of ivermectin for prophylaxis or treatment of malaria. Mass drug administration of a population with ivermectin to treat and prevent nematode infestation is effective for eliminating malaria-bearing mosquitos and thereby potentially reducing infection with residual malaria parasites. However, while ivermectin is effective in killing malaria-bearing mosquitos, a 2021 Cochrane review found that, to date, the evidence shows no significant impact of community administration of ivermectin on reducing the incidence of malaria transmission. One alternative to ivermectin is moxidectin, which has been approved by the Food and Drug Administration for use in people with river blindness. Moxidectin has a longer half-life than ivermectin and may eventually supplant ivermectin as it is a more potent microfilaricide, but there is a need for additional clinical trials, with long-term follow-up, to assess whether moxidectin is safe and effective for treatment of nematode infection in children and women of childbearing potential. There is tentative evidence that ivermectin kills bedbugs, as part of integrated pest management for bedbug infestations. However, such use may require a prolonged course of treatment, which is of unclear safety. NAFLD In 2013, ivermectin was demonstrated to be a novel ligand of the farnesoid X receptor, a therapeutic target for nonalcoholic fatty liver disease (NAFLD). COVID-19 During the COVID-19 pandemic, ivermectin was researched for possible utility in preventing and treating COVID-19, but no good evidence of benefit was found. Veterinary use Ivermectin is routinely used to control parasitic worms in the gastrointestinal tract of ruminant animals. These parasites normally enter the animal when it is grazing, pass into the bowel, and settle and mature in the intestines, after which they produce eggs that leave the animal via its droppings and can infest new pastures. Ivermectin is now only effective in killing some of these parasites because of an increase in anthelmintic resistance. This resistance has arisen from the persistent use of the same anthelmintic drugs for the past 40 years. Additionally, the use of ivermectin for livestock has a profound impact on dung beetles, such as T. lusitanicus, as it can lead to acute toxicity within these insects. In dogs, ivermectin is routinely used as prophylaxis against heartworm. Dogs with defects in the P-glycoprotein gene (MDR1), often collie-like herding dogs, can be severely poisoned by ivermectin. The mnemonic "white feet, don't treat" refers to Scotch collies that are vulnerable to ivermectin. Some other dog breeds (especially the Rough Collie, the Smooth Collie, the Shetland Sheepdog, and the Australian Shepherd) also have a high incidence of mutation within the MDR1 gene (coding for P-glycoprotein) and are sensitive to the toxic effects of ivermectin. For dogs, the insecticide spinosad may have the effect of increasing the toxicity of ivermectin. A 0.01% ivermectin topical preparation for treating ear mites in cats is available.
Clinical evidence suggests 7-week-old kittens are susceptible to ivermectin toxicity. Ivermectin is sometimes used as an acaricide in reptiles, both by injection and as a diluted spray. While this works well in some cases, care must be taken, as several species of reptiles are very sensitive to ivermectin. Use in turtles is particularly contraindicated. A characteristic of the antinematodal action of ivermectin is its potency: for instance, to combat Dirofilaria immitis in dogs, ivermectin is effective at 0.001 milligram per kilogram of body weight when administered orally.
Biology and health sciences
Antiparasitic
Health
1117979
https://en.wikipedia.org/wiki/VSEPR%20theory
VSEPR theory
Valence shell electron pair repulsion (VSEPR) theory ( , ) is a model used in chemistry to predict the geometry of individual molecules from the number of electron pairs surrounding their central atoms. It is also named the Gillespie-Nyholm theory after its two main developers, Ronald Gillespie and Ronald Nyholm. The premise of VSEPR is that the valence electron pairs surrounding an atom tend to repel each other. The greater the repulsion, the higher in energy (less stable) the molecule is. Therefore, the VSEPR-predicted molecular geometry of a molecule is the one that has as little of this repulsion as possible. Gillespie has emphasized that the electron-electron repulsion due to the Pauli exclusion principle is more important in determining molecular geometry than the electrostatic repulsion. The insights of VSEPR theory are derived from topological analysis of the electron density of molecules. Such quantum chemical topology (QCT) methods include the electron localization function (ELF) and the quantum theory of atoms in molecules (AIM or QTAIM). History The idea of a correlation between molecular geometry and number of valence electron pairs (both shared and unshared pairs) was originally proposed in 1939 by Ryutaro Tsuchida in Japan, and was independently presented in a Bakerian Lecture in 1940 by Nevil Sidgwick and Herbert Powell of the University of Oxford. In 1957, Ronald Gillespie and Ronald Sydney Nyholm of University College London refined this concept into a more detailed theory, capable of choosing between various alternative geometries. Overview VSEPR theory is used to predict the arrangement of electron pairs around central atoms in molecules, especially simple and symmetric molecules. A central atom is defined in this theory as an atom which is bonded to two or more other atoms, while a terminal atom is bonded to only one other atom. For example in the molecule methyl isocyanate (H3C-N=C=O), the two carbons and one nitrogen are central atoms, and the three hydrogens and one oxygen are terminal atoms. The geometry of the central atoms and their non-bonding electron pairs in turn determine the geometry of the larger whole molecule. The number of electron pairs in the valence shell of a central atom is determined after drawing the Lewis structure of the molecule, and expanding it to show all bonding groups and lone pairs of electrons. In VSEPR theory, a double bond or triple bond is treated as a single bonding group. The sum of the number of atoms bonded to a central atom and the number of lone pairs formed by its nonbonding valence electrons is known as the central atom's steric number. The electron pairs (or groups if multiple bonds are present) are assumed to lie on the surface of a sphere centered on the central atom and tend to occupy positions that minimize their mutual repulsions by maximizing the distance between them. The number of electron pairs (or groups), therefore, determines the overall geometry that they will adopt. For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere. Therefore, the central atom is predicted to adopt a linear geometry. If there are 3 electron pairs surrounding the central atom, their repulsion is minimized by placing them at the vertices of an equilateral triangle centered on the atom. Therefore, the predicted geometry is trigonal. Likewise, for 4 electron pairs, the optimal arrangement is tetrahedral. 
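The mapping from electron-pair count to predicted arrangement described above is simple enough to capture in a lookup table. The following Python sketch is illustrative only: the names PAIR_GEOMETRIES and base_geometry are hypothetical, not part of any established chemistry library, and the table covers only the common steric numbers discussed in this article.

```python
# Minimal sketch of the VSEPR pair-count -> arrangement mapping.
# All names here are hypothetical, chosen for illustration.
PAIR_GEOMETRIES = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
    7: "pentagonal bipyramidal",
}

def base_geometry(bonding_groups: int, lone_pairs: int) -> str:
    """Return the electron-pair arrangement around a central atom.

    The steric number is the number of bonding groups (a double or
    triple bond counts as a single group) plus the number of lone pairs.
    """
    steric_number = bonding_groups + lone_pairs
    return PAIR_GEOMETRIES.get(steric_number, "uncommon (steric number > 7)")

# H2O: two bond pairs + two lone pairs -> four pairs, arranged tetrahedrally.
print(base_geometry(2, 2))  # tetrahedral
# SF4: four ligands + one lone pair -> five pairs, trigonal bipyramidal.
print(base_geometry(4, 1))  # trigonal bipyramidal
```

Note that this returns the arrangement of electron pairs, not the named molecular shape: the shape is read off from the atomic positions only, after deciding which positions are occupied by lone pairs, as the following sections discuss.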
As a tool in predicting the geometry adopted with a given number of electron pairs, an often used physical demonstration of the principle of minimal electron pair repulsion utilizes inflated balloons. Through handling, balloons acquire a slight surface electrostatic charge that results in the adoption of roughly the same geometries when they are tied together at their stems as the corresponding number of electron pairs. For example, five balloons tied together adopt the trigonal bipyramidal geometry, just as do the five bonding pairs of a PCl5 molecule. Steric number The steric number of a central atom in a molecule is the number of atoms bonded to that central atom, called its coordination number, plus the number of lone pairs of valence electrons on the central atom. In the molecule SF4, for example, the central sulfur atom has four ligands; the coordination number of sulfur is four. In addition to the four ligands, sulfur also has one lone pair in this molecule. Thus, the steric number is 4 + 1 = 5. Degree of repulsion The overall geometry is further refined by distinguishing between bonding and nonbonding electron pairs. The bonding electron pair shared in a sigma bond with an adjacent atom lies further from the central atom than a nonbonding (lone) pair of that atom, which is held close to its positively charged nucleus. VSEPR theory therefore views repulsion by the lone pair to be greater than the repulsion by a bonding pair. As such, when a molecule has 2 interactions with different degrees of repulsion, VSEPR theory predicts the structure where lone pairs occupy positions that allow them to experience less repulsion. Lone pair–lone pair (lp–lp) repulsions are considered stronger than lone pair–bonding pair (lp–bp) repulsions, which in turn are considered stronger than bonding pair–bonding pair (bp–bp) repulsions, distinctions that then guide decisions about overall geometry when 2 or more non-equivalent positions are possible. For instance, when 5 valence electron pairs surround a central atom, they adopt a trigonal bipyramidal molecular geometry with two collinear axial positions and three equatorial positions. An electron pair in an axial position has three close equatorial neighbors only 90° away and a fourth much farther at 180°, while an equatorial electron pair has only two adjacent pairs at 90° and two at 120°. The repulsion from the close neighbors at 90° is more important, so that the axial positions experience more repulsion than the equatorial positions; hence, when there are lone pairs, they tend to occupy equatorial positions as shown in the diagrams of the next section for steric number five. The difference between lone pairs and bonding pairs may also be used to rationalize deviations from idealized geometries. For example, the H2O molecule has four electron pairs in its valence shell: two lone pairs and two bond pairs. The four electron pairs are spread so as to point roughly towards the apices of a tetrahedron. However, the bond angle between the two O–H bonds is only 104.5°, rather than the 109.5° of a regular tetrahedron, because the two lone pairs (whose density or probability envelopes lie closer to the oxygen nucleus) exert a greater mutual repulsion than the two bond pairs. A bond of higher bond order also exerts greater repulsion since the pi bond electrons contribute. For example in isobutylene, (H3C)2C=CH2, the H3C−C=C angle (124°) is larger than the H3C−C−CH3 angle (111.5°). 
However, in the carbonate ion, CO32−, all three C−O bonds are equivalent with angles of 120° due to resonance. AXE method The "AXE method" of electron counting is commonly used when applying the VSEPR theory. The electron pairs around a central atom are represented by a formula AXmEn, where A represents the central atom and always has an implied subscript one. Each X represents a ligand (an atom bonded to A). Each E represents a lone pair of electrons on the central atom. The total number of X and E is known as the steric number. For example, in a molecule AX3E2, the atom A has a steric number of 5. When the substituent (X) atoms are not all the same, the geometry is still approximately valid, but the bond angles may be slightly different from the ones where all the outside atoms are the same. For example, the double-bond carbons in alkenes like C2H4 are AX3E0, but the bond angles are not all exactly 120°. Likewise, SOCl2 is AX3E1, but because the X substituents are not identical, the X–A–X angles are not all equal. Based on the steric number and distribution of Xs and Es, VSEPR theory makes the predictions in the following tables. Main-group elements For main-group elements, there are stereochemically active lone pairs E whose number can vary from 0 to 3. Note that the geometries are named according to the atomic positions only and not the electron arrangement. For example, the description of AX2E1 as a bent molecule means that the three atoms AX2 are not in one straight line, although the lone pair helps to determine the geometry. Transition metals (Kepert model) The lone pairs on transition metal atoms are usually stereochemically inactive, meaning that their presence does not change the molecular geometry. For example, the hexaaquo complexes M(H2O)6 are all octahedral for M = V3+, Mn3+, Co3+, Ni2+ and Zn2+, despite the fact that the electronic configurations of the central metal ions are d2, d4, d6, d8 and d10 respectively. The Kepert model ignores all lone pairs on transition metal atoms, so that the geometry around all such atoms corresponds to the VSEPR geometry for AXn with 0 lone pairs E. This is often written MLn, where M = metal and L = ligand. The Kepert model predicts the following geometries for coordination numbers of 2 through 9: Examples The methane molecule (CH4) is tetrahedral because there are four pairs of electrons. The four hydrogen atoms are positioned at the vertices of a tetrahedron, and the bond angle is cos−1(−1/3) ≈ 109° 28′. This is referred to as an AX4 type of molecule. As mentioned above, A represents the central atom and X represents an outer atom. The ammonia molecule (NH3) has three pairs of electrons involved in bonding, but there is a lone pair of electrons on the nitrogen atom. It is not bonded with another atom; however, it influences the overall shape through repulsions. As in methane above, there are four regions of electron density. Therefore, the overall orientation of the regions of electron density is tetrahedral. On the other hand, there are only three outer atoms. This is referred to as an AX3E type molecule because the lone pair is represented by an E. By definition, the molecular shape or geometry describes the geometric arrangement of the atomic nuclei only, which is trigonal-pyramidal for NH3. Steric numbers of 7 or greater are possible, but are less common. The steric number of 7 occurs in iodine heptafluoride (IF7); the base geometry for a steric number of 7 is pentagonal bipyramidal.
The most common geometry for a steric number of 8 is a square antiprismatic geometry. Examples of this include the octacyanomolybdate ([Mo(CN)8]4−) and octafluorozirconate ([ZrF8]4−) anions. The nonahydridorhenate ion ([ReH9]2−) in potassium nonahydridorhenate is a rare example of a compound with a steric number of 9, which has a tricapped trigonal prismatic geometry. Steric numbers beyond 9 are very rare, and it is not clear what geometry is generally favoured. Possible geometries for steric numbers of 10, 11, 12, or 14 are bicapped square antiprismatic (or bicapped dodecadeltahedral), octadecahedral, icosahedral, and bicapped hexagonal antiprismatic, respectively. No compounds with steric numbers this high involving monodentate ligands exist, and those involving multidentate ligands can often be analysed more simply as complexes with lower steric numbers when some multidentate ligands are treated as a unit. Exceptions There are groups of compounds where VSEPR fails to predict the correct geometry. Some AX2E0 molecules The shapes of heavier Group 14 element alkyne analogues (RM≡MR, where M = Si, Ge, Sn or Pb) have been computed to be bent. Some AX2E2 molecules One example of the AX2E2 geometry is molecular lithium oxide, Li2O, a linear rather than bent structure, which is ascribed to its bonds being essentially ionic and the strong lithium-lithium repulsion that results. Another example is O(SiH3)2 with an Si–O–Si angle of 144.1°, which compares to the angles in Cl2O (110.9°), (CH3)2O (111.7°), and N(CH3)3 (110.9°). Gillespie and Robinson rationalize the Si–O–Si bond angle based on the observed ability of a ligand's lone pair to most greatly repel other electron pairs when the ligand electronegativity is greater than or equal to that of the central atom. In O(SiH3)2, the central atom is more electronegative, and the lone pairs are less localized and more weakly repulsive. The larger Si–O–Si bond angle results from this and strong ligand-ligand repulsion by the relatively large -SiH3 ligand. Burford et al. showed through X-ray diffraction studies that Cl3Al–O–PCl3 has a linear Al–O–P bond angle and is therefore a non-VSEPR molecule. Some AX6E1 and AX8E1 molecules Some AX6E1 molecules, e.g. xenon hexafluoride (XeF6) and the Te(IV) and Bi(III) anions, , , , and , are octahedral rather than pentagonal pyramidal, and the lone pair does not affect the geometry to the degree predicted by VSEPR. Similarly, the octafluoroxenate ion ([XeF8]2−) in nitrosonium octafluoroxenate(VI) is a square antiprism with minimal distortion, despite having a lone pair. One rationalization is that steric crowding of the ligands allows little or no room for the non-bonding lone pair; another rationalization is the inert-pair effect. Square planar ML4 complexes The Kepert model predicts that ML4 transition metal molecules are tetrahedral in shape, and it cannot explain the formation of square planar complexes. The majority of such complexes exhibit a d8 configuration, as for the tetrachloroplatinate ([PtCl4]2−) ion. The explanation of the shape of square planar complexes involves electronic effects and requires the use of crystal field theory. Complexes with strong d-contribution Some transition metal complexes with low d electron count have unusual geometries, which can be ascribed to d subshell bonding interaction. Gillespie found that this interaction produces bonding pairs that also occupy the respective antipodal points (ligand opposed) of the sphere.
This phenomenon is an electronic effect resulting from the bilobed shape of the underlying sdx hybrid orbitals. The repulsion of these bonding pairs leads to a different set of shapes. The gas phase structures of the triatomic halides of the heavier members of group 2 (i.e., calcium, strontium and barium halides, MX2) are not linear as predicted but are bent (approximate X–M–X angles: CaF2, 145°; SrF2, 120°; BaF2, 108°; SrCl2, 130°; BaCl2, 115°; BaBr2, 115°; BaI2, 105°). It has been proposed by Gillespie that this is also caused by bonding interaction of the ligands with the d subshell of the metal atom, thus influencing the molecular geometry. Superheavy elements Relativistic effects on the electron orbitals of superheavy elements are predicted to influence the molecular geometry of some compounds. For instance, the 6d5/2 electrons in nihonium play an unexpectedly strong role in bonding, so NhF3 should assume a T-shaped geometry, instead of a trigonal planar geometry like its lighter congener BF3. In contrast, the extra stability of the 7p1/2 electrons in tennessine is predicted to make TsF3 trigonal planar, unlike the T-shaped geometry observed for IF3 and predicted for AtF3; similarly, OgF4 should have a tetrahedral geometry, while XeF4 has a square planar geometry and RnF4 is predicted to have the same. Odd-electron molecules The VSEPR theory can be extended to molecules with an odd number of electrons by treating the unpaired electron as a "half electron pair"—for example, Gillespie and Nyholm suggested that the decrease in the bond angle in the series NO2+ (180°), NO2 (134°), NO2− (115°) indicates that a given set of bonding electron pairs exert a weaker repulsion on a single non-bonding electron than on a pair of non-bonding electrons. In effect, they considered nitrogen dioxide as an AX2E0.5 molecule, with a geometry intermediate between NO2+ and NO2−. Similarly, chlorine dioxide (ClO2) is an AX2E1.5 molecule, with a geometry intermediate between ClO2+ and ClO2−. Finally, the methyl radical (CH3) is predicted to be trigonal pyramidal like the methyl anion (CH3−), but with a larger bond angle (as in the trigonal planar methyl cation (CH3+)). However, in this case, the VSEPR prediction is not quite true, as CH3 is actually planar, although its distortion to a pyramidal geometry requires very little energy.
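As a side note on the methane bond angle quoted in the examples above, the value cos−1(−1/3) follows from elementary vector geometry. A short sketch, using the conventional embedding of a regular tetrahedron in a cube (the coordinates below are a standard choice for this derivation, not taken from this article): place the four bond directions at alternate corners of a cube centered on the central atom,

\[
\mathbf{v}_1 = (1,1,1), \quad \mathbf{v}_2 = (1,-1,-1), \quad \mathbf{v}_3 = (-1,1,-1), \quad \mathbf{v}_4 = (-1,-1,1).
\]

Each vector has length \(\sqrt{3}\), and for any pair of bonds,

\[
\cos\theta = \frac{\mathbf{v}_1 \cdot \mathbf{v}_2}{\lvert\mathbf{v}_1\rvert\,\lvert\mathbf{v}_2\rvert} = \frac{1 - 1 - 1}{3} = -\frac{1}{3},
\qquad
\theta = \cos^{-1}\!\left(-\tfrac{1}{3}\right) \approx 109.47^\circ \approx 109^\circ\,28'.
\]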
Physical sciences
Bond structure
Chemistry
12293642
https://en.wikipedia.org/wiki/Boiler%20%28power%20generation%29
Boiler (power generation)
A boiler or steam generator is a device used to create steam by applying heat energy to water. Although the definitions are somewhat flexible, it can be said that older steam generators were commonly termed boilers and worked at low to medium pressure, but, at pressures above this, it is more usual to speak of a steam generator. A boiler or steam generator is used wherever a source of steam is required. The form and size depend on the application: mobile steam engines such as steam locomotives, portable engines and steam-powered road vehicles typically use a smaller boiler that forms an integral part of the vehicle; stationary steam engines, industrial installations and power stations will usually have a larger separate steam generating facility connected to the point-of-use by piping. A notable exception is the steam-powered fireless locomotive, where separately-generated steam is transferred to a receiver (tank) on the locomotive. As a component of a prime mover The steam generator or steam boiler is an integral component of a steam engine when considered as a prime mover. However, it needs to be treated separately, as to some extent a variety of generator types can be combined with a variety of engine units. A boiler incorporates a firebox or furnace in order to burn the fuel and generate heat. The generated heat is transferred to water to make steam, the process of boiling. This produces saturated steam at a rate which can vary according to the pressure above the boiling water. The higher the furnace temperature, the faster the steam production. The saturated steam thus produced can then either be used immediately to produce power via a turbine and alternator, or else may be further superheated to a higher temperature; this notably reduces suspended water content, making a given volume of steam produce more work, and creates a greater temperature gradient, which helps reduce the potential for condensation to form. Any remaining heat in the combustion gases can then either be evacuated or made to pass through an economiser, the role of which is to warm the feed water before it reaches the boiler. Types Haycock and wagon top boilers For the first Newcomen engine of 1712, the boiler was little more than a large brewer's kettle installed beneath the power cylinder. Because the engine's power was derived from the vacuum produced by condensation of the steam, the requirement was for large volumes of steam at very low pressure, hardly more than . The whole boiler was set into brickwork which retained some heat. A voluminous coal fire was lit on a grate beneath the slightly dished pan, which gave a very small heating surface; there was therefore a great deal of heat wasted up the chimney. In later models, notably by John Smeaton, heating surface was considerably increased by making the gases heat the boiler sides, passing through a flue. Smeaton further lengthened the path of the gases by means of a spiral labyrinth flue beneath the boiler. These under-fired boilers were used in various forms throughout the 18th century. Some were of round section (haycock). A longer version on a rectangular plan was developed around 1775 by Boulton and Watt (wagon top boiler). This is what is today known as a three-pass boiler, the fire heating the underside, the gases then passing through a central square-section tubular flue and finally around the boiler sides. Cylindrical fire-tube boilers An early proponent of the cylindrical form was the British engineer John Blakey, who proposed his design in 1774.
Another early proponent was the American engineer Oliver Evans, who rightly recognised that the cylindrical form was the best from the point of view of mechanical resistance, and towards the end of the 18th century began to incorporate it into his projects. Probably inspired by the writings on Leupold's "high-pressure" engine scheme that appeared in encyclopaedic works from 1725, Evans favoured "strong steam", i.e. non-condensing engines in which the steam pressure alone drove the piston and was then exhausted to atmosphere. The advantage of strong steam as he saw it was that more work could be done by smaller volumes of steam; this enabled all the components to be reduced in size, and engines could be adapted to transport and small installations. To this end he developed a long cylindrical wrought iron horizontal boiler into which was incorporated a single fire tube, at one end of which was placed the fire grate. The gas flow was then reversed into a passage or flue beneath the boiler barrel, then divided to return through side flues to join again at the chimney (Columbian engine boiler). Evans incorporated his cylindrical boiler into several engines, both stationary and mobile. Due to space and weight considerations, the latter were one-pass, exhausting directly from fire tube to chimney.

Another proponent of "strong steam" at that time was the Cornishman Richard Trevithick. His boilers worked at pressures well above atmospheric and were at first of hemispherical and later cylindrical form. From 1804 onwards Trevithick produced a small two-pass or return flue boiler for semi-portable and locomotive engines. The Cornish boiler developed around 1812 by Richard Trevithick was both stronger and more efficient than the simple boilers which preceded it. It consisted of a long cylindrical water tank with a coal fire grate placed at one end of a single cylindrical tube about three feet wide which passed longitudinally inside the tank. The fire was tended from one end, and the hot gases from it travelled along the tube and out of the other end, to be circulated back along flues running along the outside, then a third time beneath the boiler barrel, before being expelled into a chimney. This was later improved upon by another three-pass boiler, the Lancashire boiler, which had a pair of furnaces in separate tubes side by side. This was an important improvement, since each furnace could be stoked at different times, allowing one to be cleaned while the other was operating.

Railway locomotive boilers were usually of the one-pass type, although in early days two-pass "return flue" boilers were common, especially with locomotives built by Timothy Hackworth.

Multi-tube boilers

A significant step forward came in France in 1828, when Marc Seguin devised a two-pass boiler of which the second pass was formed by a bundle of multiple tubes. A similar design with natural induction used for marine purposes was the popular Scotch marine boiler. Prior to the Rainhill trials of 1829, Henry Booth, treasurer of the Liverpool and Manchester Railway, suggested to George Stephenson a scheme for a multi-tube, one-pass horizontal boiler made up of two units: a firebox surrounded by water spaces, and a boiler barrel consisting of two telescopic rings inside which were mounted 25 copper tubes; the tube bundle occupied much of the water space in the barrel and vastly improved heat transfer. George immediately communicated the scheme to his son Robert, and this was the boiler used on Stephenson's Rocket, outright winner of the trials.
The design formed the basis for all subsequent Stephensonian-built locomotives, being immediately taken up by other constructors; this pattern of fire-tube boiler has been built ever since.

Structural resistance

The 1712 boiler was assembled from riveted copper plates, with a domed top made of lead in the first examples. Later boilers were made of small wrought iron plates riveted together. The problem was producing big enough plates, so that even modest pressures were not absolutely safe, nor was the cast iron hemispherical boiler initially used by Richard Trevithick. This construction with small plates persisted until the 1820s, when larger plates became feasible and could be rolled into a cylindrical form with just one butt-jointed seam reinforced by a gusset; Timothy Hackworth's Sans Pareil II of 1849 had a longitudinal welded seam. Welded construction for locomotive boilers was extremely slow to take hold. Once-through monotubular water tube boilers, as used by Doble, Lamont and Pritchard, are capable of withstanding considerable pressure and of releasing it without danger of explosion.

Combustion

The source of heat for a boiler is combustion of any of several fuels, such as wood, coal, oil, or natural gas. Nuclear fission is also used as a heat source for generating steam. Heat recovery steam generators (HRSGs) use the heat rejected from other processes such as gas turbines.

Solid fuel firing

In order to create optimum burning characteristics of the fire, air needs to be supplied both through the grate and above the fire. Most boilers now depend on mechanical draught equipment rather than natural draught. This is because natural draught is subject to outside air conditions and the temperature of flue gases leaving the furnace, as well as chimney height. All these factors make effective draught hard to attain and therefore make mechanical draught equipment much more economical. There are three types of mechanical draught:

Induced draught: This is obtained in one of three ways. The first is the "stack effect" of a heated chimney, in which the flue gas is less dense than the ambient air surrounding the boiler; the denser column of ambient air forces combustion air into and through the boiler. The second method is the use of a steam jet: the steam jet or ejector, oriented in the direction of flue gas flow, induces flue gases into the stack and allows for a greater flue gas velocity, increasing the overall draught in the furnace. This method was common on steam-driven locomotives, which could not have tall chimneys. The third method is simply an induced draught fan (ID fan), which sucks flue gases out of the furnace and up the stack. Almost all induced draught furnaces operate at a negative pressure.

Forced draught: Draught is obtained by forcing air into the furnace by means of a fan (FD fan) and ductwork. Air is often passed through an air heater which, as the name suggests, heats the air going into the furnace in order to increase the overall efficiency of the boiler. Dampers are used to control the quantity of air admitted to the furnace. Forced draught furnaces usually operate at a positive pressure.

Balanced draught: Balanced draught is obtained through use of both induced and forced draught. This is more common with larger boilers, where the flue gases have to travel a long distance through many boiler passes. The induced draught fan works in conjunction with the forced draught fan, allowing the furnace pressure to be maintained slightly below atmospheric.
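The "stack effect" that drives natural and induced draught follows directly from the density difference between hot flue gas and the surrounding air. The sketch below estimates the theoretical draught of a chimney from that difference; it is a minimal illustration, and the chimney height and temperatures are assumed figures, not values from the text.

```python
# Illustrative stack-effect estimate: theoretical natural draught of a
# chimney, treating the flue gas as ideal air at atmospheric pressure.
G = 9.81           # gravitational acceleration, m/s^2
R_AIR = 287.05     # specific gas constant of dry air, J/(kg*K)
P_ATM = 101_325.0  # ambient pressure, Pa

def air_density(temp_k: float) -> float:
    """Ideal-gas density of air at atmospheric pressure."""
    return P_ATM / (R_AIR * temp_k)

def natural_draught_pa(stack_height_m: float, t_ambient_k: float, t_flue_k: float) -> float:
    """Pressure difference driving flow up the stack: the weight of the
    cold ambient column minus the weight of the hot flue-gas column."""
    return G * stack_height_m * (air_density(t_ambient_k) - air_density(t_flue_k))

# Assumed figures: a 40 m chimney, 15 degC ambient air, 200 degC flue gas.
print(f"{natural_draught_pa(40.0, 288.15, 473.15):.0f} Pa")  # roughly 190 Pa
```

Even for a tall chimney the result is only a few hundred pascals, and it falls as the flue gas cools, which is one reason mechanical draught equipment is usually preferred.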
Firetube boilers

The next stage in the process is to boil water and make steam. The goal is to make the heat flow as completely as possible from the heat source to the water. The water is confined in a restricted space heated by the fire. The steam produced has lower density than the water and therefore accumulates at the highest level in the vessel; its temperature remains at boiling point and only increases as pressure increases. Steam in this state (in equilibrium with the liquid water which is being evaporated within the boiler) is named "saturated steam". For example, saturated steam at atmospheric pressure boils at 100 °C. Saturated steam taken from the boiler may contain entrained water droplets; however, a well-designed boiler will supply virtually "dry" saturated steam, with very little entrained water. Continued heating of the saturated steam will bring the steam to a "superheated" state, where the steam is heated to a temperature above the saturation temperature and no liquid water can exist. Most reciprocating steam engines of the 19th century used saturated steam, but modern steam power plants universally use superheated steam, which allows higher steam cycle efficiency.

Superheaters

L.D. Porta gives the following equation determining the efficiency of a steam locomotive, applicable to steam engines of all kinds: power (kW) = steam production (kg/h) / specific steam consumption (kg/kWh).

A greater quantity of steam can be generated from a given quantity of water by superheating it. As the fire is burning at a much higher temperature than the saturated steam it produces, far more heat can be transferred to the once-formed steam by superheating it and turning the water droplets suspended therein into more steam, greatly reducing water consumption. The superheater works like the coils on an air conditioning unit, although to a different end. The steam piping (with steam flowing through it) is directed through the flue gas path in the boiler furnace, a region where the gases are extremely hot. Some superheaters are of the radiant type (absorbing heat by thermal radiation), others are of the convection type (absorbing heat from the flue gas), and some are a combination of the two. Whether by convection or radiation, the extreme heat in the boiler furnace/flue gas path heats the superheater steam piping and the steam within it. While the temperature of the steam in the superheater is raised, the pressure of the steam is not: the turbine or moving pistons offer a "continuously expanding space" and the pressure remains the same as that of the boiler. The process of superheating steam is designed above all to remove all droplets entrained in the steam, to prevent damage to the turbine blading and associated piping. Superheating the steam expands its volume, which allows a given quantity (by weight) of steam to generate more power. When the totality of the droplets is eliminated, the steam is said to be in a superheated state.

In a Stephensonian firetube locomotive boiler, this entails routing the saturated steam through small-diameter pipes suspended inside large-diameter firetubes, putting them in contact with the hot gases exiting the firebox; the saturated steam flows backwards from the wet header towards the firebox, then forwards again to the dry header. Superheating only began to be generally adopted for locomotives around the year 1900, due to problems of overheating and lubrication of the moving parts in the cylinders and steam chests.
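As a quick numerical illustration of the Porta relation above (the figures are round-number assumptions, not measured data):

```python
# power (kW) = steam production (kg/h) / specific steam consumption (kg/kWh)
def engine_power_kw(steam_kg_per_h: float, consumption_kg_per_kwh: float) -> float:
    return steam_kg_per_h / consumption_kg_per_kwh

# A boiler evaporating 8,000 kg/h of steam, feeding an engine that needs
# 6 kg of steam per kWh of work, delivers about 1,333 kW.
print(engine_power_kw(8_000, 6.0))  # -> 1333.33...

# Superheating lowers the specific steam consumption; at 5 kg/kWh the same
# evaporation rate yields 1,600 kW, which is the point of the equation.
print(engine_power_kw(8_000, 5.0))  # -> 1600.0
```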
Many firetube boilers heat water until it boils, and then the steam is used at saturation temperature, that is, the temperature of the boiling point of water at a given pressure (saturated steam); this steam still contains a large proportion of water in suspension. Saturated steam can be, and has been, used directly by an engine, but as the suspended water cannot expand and do work, and doing work implies a temperature drop, much of the working fluid is wasted along with the fuel expended to produce it.

Water tube boilers

Another way to rapidly produce steam is to feed the water under pressure into a tube or tubes surrounded by the combustion gases. The earliest example of this was developed by Goldsworthy Gurney in the late 1820s for use in steam road carriages. This boiler was ultra-compact and light in weight, and this arrangement has since become the norm for marine and stationary applications. The tubes frequently have a large number of bends, and sometimes fins, to maximize the surface area. This type of boiler is generally preferred in high-pressure applications, since the high-pressure water/steam is contained within narrow pipes which can contain the pressure with a thinner wall. It can, however, be susceptible to damage by vibration in surface transport applications. In a cast iron sectional boiler, sometimes called a "pork chop boiler", the water is contained inside cast iron sections. These sections are mechanically assembled on site to create the finished boiler.

Supercritical steam generators

Supercritical steam generators are frequently used for the production of electric power. They operate at supercritical pressure. In contrast to a "subcritical boiler", a supercritical steam generator operates at such a high pressure (above the critical pressure of water, about 22 MPa) that actual boiling ceases to occur, and the boiler has no liquid water–steam separation. No steam bubbles form within the water, because the pressure is above the critical pressure at which steam bubbles can form. The fluid passes below the critical point as it does work in a high-pressure turbine and enters the generator's condenser. This results in slightly less fuel use and therefore less greenhouse gas production. The term "boiler" should not be used for a supercritical pressure steam generator, as no "boiling" actually occurs in this device.

Water treatment

Feed water for boilers needs to be as pure as possible, with a minimum of suspended solids and dissolved impurities, which cause corrosion, foaming and water carryover. The most common options for demineralization of boiler feedwater are reverse osmosis (RO) and ion exchange (IX).

Safety

When water is converted to steam it expands to around 1,600 times its volume and travels down steam pipes at over 25 m/s. Because of this, steam is a good way of moving energy and heat around a site from a central boiler house to where it is needed, but without the right boiler feed water treatment, a steam-raising plant will suffer from scale formation and corrosion. At best, this increases energy costs and can lead to poor-quality steam, reduced efficiency, shorter plant life and unreliable operation. At worst, it can lead to catastrophic failure and loss of life. While standards vary between countries, stringent legal requirements, testing, training and certification are applied to try to minimize or prevent such occurrences.
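The "1,600 times" figure quoted above can be checked against textbook specific volumes for water and for saturated steam at atmospheric pressure; a minimal sketch:

```python
# Rough check of the expansion-on-evaporation figure, using approximate
# steam-table specific volumes at atmospheric pressure.
V_WATER = 0.001044  # m^3/kg, saturated liquid water at 100 degC
V_STEAM = 1.673     # m^3/kg, saturated steam at 100 degC

print(f"expansion on evaporation: about {V_STEAM / V_WATER:.0f}x")  # ~1600x
```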
Failure modes include: overpressurization of the boiler; insufficient water in the boiler, causing overheating and vessel failure; and pressure vessel failure of the boiler due to inadequate construction or maintenance.

Doble boiler

The Doble steam car uses a once-through contra-flow generator, consisting of a continuous tube. The fire here is on top of the coil instead of underneath. Water is pumped into the tube at the bottom and the steam is drawn off at the top, with less than two quarts of water in the tube at any one time. This means that every particle of water and steam must necessarily pass through every part of the generator, causing an intense circulation which prevents any sediment or scale from forming on the inside of the tube. As the hot gases pass down between the coils, they gradually cool, as the heat is absorbed by the water. The last portion of the generator with which the gases come into contact contains the cold incoming water. The fire is positively cut off when the pressure reaches a pre-determined point; a safety valve set at a higher pressure provides added protection. The fire is automatically cut off by temperature as well as pressure, so even if the boiler were to run completely dry, it would be impossible to damage the coil, as the fire would be cut off by the temperature. Similar forced-circulation generators, such as the Pritchard, Lamont and Velox boilers, present the same advantages.

Applications

Steam boilers are used wherever steam is needed. Hence, steam boilers are used as generators to produce electricity in the energy business. They are also used in rice mills for parboiling and drying. Besides many different application areas in industry, for example in heating systems or for cement production, steam boilers are used in agriculture as well, for soil steaming.

Testing

The preeminent code for testing fired steam generators in the USA is the American Society of Mechanical Engineers (ASME) performance test code, PTC 4. A related component is the regenerative air heater; a major revision to the performance test code for air heaters was scheduled for publication in 2013, with copies of the draft available for review. The European standards for acceptance tests of steam boilers are EN 12952-15 and EN 12953-11. The British standards BS 845-1 and BS 845-2 also remain in use in the UK.
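The dual pressure-and-temperature cut-off described for the Doble generator above amounts to a simple interlock: the burner may run only while both quantities are below their setpoints. A minimal sketch follows; all names and limits are illustrative assumptions, not Doble specifications.

```python
# Sketch of a dual cut-off interlock in the style described for the Doble
# generator: exceeding either limit extinguishes the fire, so a dry coil
# (which overheats quickly) still trips the burner on temperature alone.
PRESSURE_LIMIT_PA = 5.2e6   # assumed burner cut-off pressure
COIL_TEMP_LIMIT_K = 750.0   # assumed coil over-temperature limit

def burner_permitted(pressure_pa: float, coil_temp_k: float) -> bool:
    """Fire is allowed only while both readings are below their limits."""
    return pressure_pa < PRESSURE_LIMIT_PA and coil_temp_k < COIL_TEMP_LIMIT_K

print(burner_permitted(4.0e6, 600.0))  # True: normal running
print(burner_permitted(4.0e6, 800.0))  # False: over-temperature trip
```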
Trans-African Highway network
The Trans-African Highway network comprises transcontinental road projects in Africa being developed by the United Nations Economic Commission for Africa (UNECA), the African Development Bank (ADB), and the African Union in conjunction with regional international communities. They aim to promote trade and alleviate poverty in Africa through highway infrastructure development and the management of road-based trade corridors. The network comprises nine highways. In some documents the highways are referred to as "Trans-African Corridors" or "Road Corridors" rather than highways. The name Trans-African Highway and its variants are not in wide common usage outside of planning and development circles, and as of 2014 one does not see them signposted as such or labelled on maps, except in Kenya and Uganda, where the Mombasa–Nairobi–Kampala–Fort Portal section (or the Kampala–Kigali feeder road) of Trans-African Highway 8 is sometimes referred to as the "Trans-Africa Highway".

Background

Need for the highway system

Colonial powers and, later, competing superpowers and regional powers generally did not encourage road links between their respective spheres except where absolutely necessary for trade, and in newly independent African states, border restrictions were often tightened rather than relaxed: as a way of protecting internal trade, as a weapon in border disputes, and to increase the opportunities for official corruption. The development of trans-African highways and associated road infrastructure is aimed at combating poverty in Africa by increasing interstate and domestic trade, revitalizing small and medium-sized businesses, reducing prices for goods and improving living conditions. Highways also extend the reach of ambulance, police, fire protection, rescue, repair and construction services. The agencies developing the highway network are influenced by the idea that road infrastructure stimulates trade and so alleviates poverty, as well as benefiting health and education, since roads allow medical and educational services to be distributed to previously inaccessible areas. On 1 July 1971 Robert K. A. Gardiner, the Executive Secretary of the United Nations Economic Commission for Africa (UNECA), established the Trans-African Highway Bureau to oversee the development of a continental road network.

Wars and conflicts

As well as preventing progress in road construction, wars and conflicts have led to the destruction of roads and river crossings, have prevented maintenance, and have often closed vital links. Sierra Leone, Liberia, the Democratic Republic of the Congo and Angola are all in rebuilding phases after war. Wars in the Democratic Republic of the Congo set back road infrastructure in that country by decades and cut the principal route between East and West Africa. In recent years, security considerations have restricted road travel in the southern parts of Morocco, Algeria, Libya and Egypt, as well as in northern Chad and much of Sudan. Trans-African highways can only develop in times of peace and stability; as of 2007 the outlook was brighter, with the southern Sudan conflict being the only one then affecting development of the network (highway 6). Lawlessness rather than war hampers progress in developing highway 3 between Libya and Chad, and though economic instability could affect maintenance of paved highways 4 and 9 through Zimbabwe, there are practical alternatives through neighbouring countries.
Somalia is the largest African country with no trans-African highways, so conflicts there do not affect the network itself, but they will affect the development of feeder roads to the network.

Principles and processes

Using existing national highways as much as possible, the aim of the development agencies is to identify priorities in relation to trade, to plan the highways, and to seek financing for the construction of missing links and bridges, the paving of sections of earth and gravel roads, and the rehabilitation of deteriorated paved sections. The need to reduce delays caused by highway checkpoints and border controls, and to ease travel restrictions, has also been identified, but so far solutions have not been forthcoming. Rather than just having international highways over which each country maintains its own regulations and practices, there is a need for transnational highways over which regulations and practices are simplified, unified and implemented without causing delays to goods and travellers.

Features of the network

Countries served

The network as planned reaches all the continental African nations except Burundi, Eritrea, Eswatini, Somalia, Equatorial Guinea (Río Muni), Lesotho, Malawi, Rwanda and South Sudan. Of these, Rwanda, Malawi, Lesotho and Eswatini have paved highways connecting to the network, and the network reaches almost to the borders of the others.

Missing links

More than half of the network has been paved, though maintenance remains a problem. There are numerous missing links in the network where tracks are impassable after rain or hazardous due to rocks, sand, and sandstorms. In a few cases, there has never been a road of any sort, such as the 200 km gap between Salo in the Central African Republic and Ouésso in the Republic of the Congo on highway 3. The missing links arise mainly because the section in question has a low national priority, whatever its regional or transcontinental importance. As a result of missing links, of the five major regions (North, West, Central, East, and Southern Africa), road travel in all weather is only relatively easy between East and Southern Africa, and that relies on a single paved road through southwestern Tanzania (the Tanzam Highway). While North Africa and West Africa are linked across the Sahara, the main deficiency of the network is that there are no paved highways across Central Africa. Not only does this prevent road trade between East and West Africa, or between West and Southern Africa, but it restricts trade within Central Africa. Although there may be paved links from West, East, or Southern Africa to the fringes of Central Africa, those links do not penetrate very far into the region. The terrain, rainforest, and climate of Central Africa, particularly in the catchments of the lower and middle Congo River and the Ubangui, Sangha, and Sanaga Rivers, present formidable obstacles to highway engineers, and paved roads there have short lifespans. Further north, in Cameroon and Chad, hilly terrain or plains prone to flooding have restricted the development of local paved road networks. Through this forbidding environment, three Trans-African Highways are planned to cross in the east–west direction (highways 6, 8, and 9), while one will cross north to south (highway 3). As of 2014, all have substantial missing links in Central Africa.

Description of the highways in the network

Nine highways have been designated, in a rough grid of six mainly east–west routes and three mainly north–south routes.
A fourth north–south route is formed from the extremities of two east–west routes.

East–west routes

Starting with the most northerly, the east–west routes are:

Trans-African Highway 1 (TAH 1), Cairo–Dakar Highway: a mainly coastal route along the Mediterranean coast of North Africa, continuing down the Atlantic coast of North-West Africa. It is substantially complete, although the border between Algeria and Morocco is closed. TAH 1 joins with TAH 7 to form an additional north–south route around the western extremity of the continent, and connects with route M40 of the Arab Mashreq International Road Network.

Trans-African Highway 5 (TAH 5), Dakar–N'Djamena Highway, also known as the Trans-Sahelian Highway: links the West African countries of the Sahel; about 80% complete.

Trans-African Highway 6 (TAH 6), N'Djamena–Djibouti Highway: contiguous with TAH 5, continuing through the eastern Sahelian region to the Indian Ocean port of Djibouti. The approximate route of TAH 5 and TAH 6 was originally proposed in the early 20th century as an aim of the French Empire.

Trans-African Highway 7 (TAH 7), Dakar–Lagos Highway, also known as the Trans–West African Coastal Road: about 80% complete. This highway joins with TAH 1 to form an additional north–south route around the western extremity of the continent.

Trans-African Highway 8 (TAH 8), Lagos–Mombasa Highway: contiguous with TAH 7, forming with it a 10,269-km east–west crossing of the continent. The Lagos–Mombasa Highway's eastern half is complete through Kenya and Uganda, where locally it is known as the Trans-Africa Highway (the only place where the name is in common use). Its western extremity in Nigeria, Cameroon and the Central African Republic is mostly complete, but a long missing link across DR Congo currently prevents any practical use of the middle section.

Trans-African Highway 9 (TAH 9), Beira–Lobito Highway: substantially complete in the eastern half, but the western half through Angola and south-central DR Congo requires reconstruction.

North–south routes

Starting with the most westerly, these are:

Trans-African Highway 2 (TAH 2), Algiers–Lagos Highway, also known as the Trans-Sahara Highway: substantially complete; only a stretch of desert track remains to be paved, but border and security controls restrict usage.

Trans-African Highway 3 (TAH 3), Tripoli–Windhoek–(Cape Town) Highway: this route has the most missing links and requires the most new construction, as only national paved roads in Libya, Cameroon, Angola, Namibia and South Africa can be used to any extent. South Africa was not originally included, as the highway was first planned in the apartheid era, but it is now recognized that it would continue to Cape Town.

Trans-African Highway 4 (TAH 4), Cairo–Gaborone–(Pretoria/Cape Town) Highway: the completion of the stretch of highway from Dongola to Abu Simbel Junction in northern Sudan and of the road from the Galabat border crossing in north-western Ethiopia leaves no section unpaved; the road section between Babati and Dodoma in central Tanzania was completed in May 2018. The section between Isiolo and Moyale in northern Kenya (dubbed "the road to hell" by overland travellers) has recently been completed, creating a smooth crossing of Kenya. Crossing the Egypt–Sudan border by road has been prohibited for a number of years; a vehicle ferry on Lake Nasser is used instead.
As with TAH 3, South Africa was not originally included, as the idea was first proposed in the apartheid era, but it is now recognized that the highway would continue to Pretoria and Cape Town. Except for passing through Ethiopia, the route roughly coincides with proposals for the Cape to Cairo Road put forward in the early 20th century British Empire.

As noted above, TAH 1 and TAH 7 join to form an additional north–south route around the western extremity of the continent, between Monrovia and Rabat.

Regional highway projects in Africa

Regional international communities are heavily involved in trans-African highway development and work in conjunction with the ADB and UNECA. For example:

The Arab Maghreb Union drives the development and maintenance of the Tripoli to Nouakchott section of TAH 1.

The Economic Community of West African States (ECOWAS) drives the development and maintenance of TAH 5 and TAH 7.

The Southern African Development Community (SADC) has an extensive network of road projects and trade corridors in southern Africa. TAH 9 and the southern ends of TAH 3 and TAH 4 utilize regional highways developed by SADC or its forerunners. In particular, SADC manages road and rail corridors from landlocked areas to ports, such as the Trans-Kalahari Corridor.

A further east–west corridor runs from Djibouti to Bata, Equatorial Guinea, a distance of 5,823 km.
History of life
The history of life on Earth traces the processes by which living and extinct organisms evolved, from the earliest emergence of life to the present day. Earth formed about 4.5 billion years ago (abbreviated as Ga, for gigaannum) and evidence suggests that life emerged prior to 3.7 Ga. The similarities among all known present-day species indicate that they have diverged through the process of evolution from a common ancestor. The earliest clear evidence of life comes from biogenic carbon signatures and stromatolite fossils discovered in 3.7 billion-year-old metasedimentary rocks from western Greenland. In 2015, possible "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. There is further evidence of possibly the oldest forms of life in the form of fossilized microorganisms in hydrothermal vent precipitates from the Nuvvuagittuq Belt, that may have lived as early as 4.28 billion years ago, not long after the oceans formed 4.4 billion years ago, and after the Earth formed 4.54 billion years ago. These earliest fossils, however, may have originated from non-biological processes. Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean eon, and many of the major steps in early evolution are thought to have taken place in this environment. The evolution of photosynthesis by cyanobacteria, around 3.5 Ga, eventually led to a buildup of its waste product, oxygen, in the oceans. After free oxygen saturated all available reductant substances on the Earth's surface, it built up in the atmosphere, leading to the Great Oxygenation Event around 2.4 Ga. The earliest evidence of eukaryotes (complex cells with organelles) dates from 1.85 Ga, likely due to symbiogenesis between anaerobic archaea and aerobic proteobacteria in co-adaptation against the new oxidative stress. While eukaryotes may have been present earlier, their diversification accelerated when aerobic cellular respiration by the endosymbiont mitochondria provided a more abundant source of biological energy. Around 1.6 Ga, some eukaryotes gained the ability to photosynthesize via endosymbiosis with cyanobacteria, and gave rise to various algae that eventually overtook cyanobacteria as the dominant primary producers. At around 1.7 Ga, multicellular organisms began to appear, with differentiated cells performing specialised functions. While early organisms reproduced asexually, the primary method of reproduction for the vast majority of macroscopic organisms, including almost all eukaryotes (which includes animals and plants), is sexual reproduction, the fusion of male and female reproductive cells (gametes) to create a zygote. The origin and evolution of sexual reproduction remain a puzzle for biologists, though it is thought to have evolved from a single-celled eukaryotic ancestor. While microorganisms formed the earliest terrestrial ecosystems at least 2.7 Ga, the evolution of plants from freshwater green algae dates back to about 1 billion years ago. Microorganisms are thought to have paved the way for the inception of land plants in the Ordovician period. Land plants were so successful that they are thought to have contributed to the Late Devonian extinction event as early tree Archaeopteris drew down CO2 levels, leading to global cooling and lowered sea levels, while their roots increased rock weathering and nutrient run-offs which may have triggered algal bloom anoxic events. 
Bilateria, animals having a left and a right side that are mirror images of each other, appeared by 555 Ma (million years ago). The Ediacara biota appeared during the Ediacaran period, while vertebrates, along with most other modern phyla, originated during the Cambrian explosion. During the Permian period, synapsids, including the ancestors of mammals, dominated the land. The Permian–Triassic extinction event killed most complex species of its time. During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates; one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods. After the Cretaceous–Paleogene extinction event killed off the non-avian dinosaurs, mammals increased rapidly in size and diversity. Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.

Only a very small percentage of species have been identified: one estimate claims that Earth may have 1 trillion species, because "identifying every microbial species on Earth presents a huge challenge." Only 1.75–1.8 million species have been named and 1.8 million documented in a central database. The currently living species represent less than one percent of all species that have ever lived on Earth.

Earliest history of Earth

The oldest meteorite fragments found on Earth are about 4.54 billion years old; this, coupled primarily with the dating of ancient lead deposits, has put the estimated age of Earth at around that time. The Moon has the same composition as Earth's crust but does not contain an iron-rich core like the Earth's. Many scientists think that about 40 million years after the formation of Earth, it collided with a body the size of Mars, throwing into orbit the crust material that formed the Moon. Another hypothesis is that the Earth and Moon started to coalesce at the same time, but the Earth, having much stronger gravity than the early Moon, attracted almost all the iron particles in the area.

Until 2001, the oldest rocks found on Earth were about 3.8 billion years old, leading scientists to estimate that the Earth's surface had been molten until then. Accordingly, they named this part of Earth's history the Hadean. However, analysis of zircons formed 4.4 Ga indicates that Earth's crust solidified about 100 million years after the planet's formation and that the planet quickly acquired oceans and an atmosphere, which may have been capable of supporting life. Evidence from the Moon indicates that from 4 to 3.8 Ga it suffered a Late Heavy Bombardment by debris left over from the formation of the Solar System, and the Earth should have experienced an even heavier bombardment due to its stronger gravity. While there is no direct evidence of conditions on Earth 4 to 3.8 Ga, there is no reason to think that the Earth was not also affected by this late heavy bombardment. This event may well have stripped away any previous atmosphere and oceans; in this case gases and water from comet impacts may have contributed to their replacement, although outgassing from volcanoes on Earth would have supplied at least half. However, if subsurface microbial life had evolved by this point, it would have survived the bombardment.

Earliest evidence for life on Earth

The earliest identified organisms were minute and relatively featureless, and their fossils look like small rods that are very difficult to tell apart from structures that arise through abiotic physical processes.
The oldest undisputed evidence of life on Earth, interpreted as fossilized bacteria, dates to 3 Ga. Other finds in rocks dated to about 3.5 Ga have been interpreted as bacteria, and geochemical evidence also seemed to show the presence of life 3.8 Ga. However, these analyses were closely scrutinized, and non-biological processes were found which could produce all of the "signatures of life" that had been reported. While this does not prove that the structures found had a non-biological origin, they cannot be taken as clear evidence for the presence of life. Geochemical signatures from rocks deposited 3.4 Ga have been interpreted as evidence for life. Evidence for fossilized microorganisms considered to be 3.77 billion to 4.28 billion years old was found in the Nuvvuagittuq Greenstone Belt in Quebec, Canada, although the evidence is disputed as inconclusive.

Origins of life on Earth

Most biologists reason that all living organisms on Earth must share a single last universal ancestor, because it would be virtually impossible that two or more separate lineages could have independently developed the many complex biochemical mechanisms common to all living organisms. According to a different scenario, a single last universal ancestor, e.g. a "first cell" or a first individual precursor cell, never existed. Instead, the early biochemical evolution of life led to diversification through the development of a multiphenotypical population of pre-cells, from which the precursor cells (protocells) of the three domains of life emerged. Thus, the formation of cells was a successive process (see the pre-cell scenario under "Metabolism first", below).

Independent emergence on Earth

Life on Earth is based on carbon and water. Carbon provides stable frameworks for complex chemicals and can be easily extracted from the environment, especially from carbon dioxide. There is no other chemical element whose properties are similar enough to carbon's to be called an analogue. Silicon, the element directly below carbon on the periodic table, does not form very many complex stable molecules, and because most of its compounds are water-insoluble, and because silicon dioxide, unlike carbon dioxide, is a hard and abrasive solid at temperatures associated with living things, silicon would be more difficult for organisms to extract. The elements boron and phosphorus have more complex chemistries but suffer from other limitations relative to carbon. Water is an excellent solvent and has two other useful properties: the fact that ice floats enables aquatic organisms to survive beneath it in winter, and its molecules have electrically negative and positive ends, which enables it to form a wider range of compounds than other solvents can. Other good solvents, such as ammonia, are liquid only at such low temperatures that chemical reactions may be too slow to sustain life, and lack water's other advantages. Organisms based on alternative biochemistry may, however, be possible on other planets.

Research on how life might have emerged from non-living chemicals focuses on three possible starting points: self-replication, an organism's ability to produce offspring that are very similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave, but exclude unwanted substances. Research on abiogenesis still has a long way to go, since theoretical and empirical approaches are only beginning to make contact with each other.
Replication first: RNA world

Even the simplest members of the three modern domains of life use DNA to record their "recipes" and a complex array of RNA and protein molecules to "read" these instructions and use them for growth, maintenance and self-replication. The discovery that some RNA molecules can catalyze both their own replication and the construction of proteins led to the hypothesis of earlier life-forms based entirely on RNA. These ribozymes could have formed an RNA world in which there were individuals but no species, as mutations and horizontal gene transfers would have meant that offspring were likely to have different genomes from their parents, and evolution occurred at the level of genes rather than organisms. RNA would later have been replaced by DNA, which can build longer, more stable genomes, strengthening heritability and expanding the capabilities of individual organisms. Ribozymes remain as the main components of ribosomes, the "protein factories" in modern cells. Evidence suggests the first RNA molecules formed on Earth prior to 4.17 Ga.

Although short self-replicating RNA molecules have been artificially produced in laboratories, doubts have been raised about whether natural non-biological synthesis of RNA is possible. The earliest "ribozymes" may have been formed of simpler nucleic acids such as PNA, TNA or GNA, which would have been replaced later by RNA. In 2003, it was proposed that porous metal sulfide precipitates would assist RNA synthesis at high temperatures and ocean-bottom pressures near hydrothermal vents. Under this hypothesis, lipid membranes would be the last major cell components to appear and, until then, the protocells would be confined to the pores.

Membranes first: Lipid world

It has been suggested that double-walled "bubbles" of lipids like those that form the external membranes of cells may have been an essential first step. Experiments that simulated the conditions of the early Earth have reported the formation of lipids, and these can spontaneously form liposomes, double-walled "bubbles", and then reproduce themselves. Although they are not intrinsically information-carriers as nucleic acids are, they would be subject to natural selection for longevity and reproduction. Nucleic acids such as RNA might then have formed more easily within the liposomes than outside.

The clay hypothesis

RNA is complex, and there are doubts about whether it can be produced non-biologically in the wild. Some clays, notably montmorillonite, have properties that make them plausible accelerators for the emergence of an RNA world: they grow by self-replication of their crystalline pattern; they are subject to an analogue of natural selection, as the clay "species" that grows fastest in a particular environment rapidly becomes dominant; and they can catalyze the formation of RNA molecules. Although this idea has not become the scientific consensus, it still has active supporters. Research in 2003 reported that montmorillonite could also accelerate the conversion of fatty acids into "bubbles" and that the "bubbles" could encapsulate RNA attached to the clay. These "bubbles" can then grow by absorbing additional lipids and then divide. The formation of the earliest cells may have been aided by similar processes. A similar hypothesis presents self-replicating iron-rich clays as the progenitors of nucleotides, lipids and amino acids.
Metabolism first: Iron–sulfur world

A series of experiments starting in 1997 showed that early stages in the formation of proteins from inorganic materials, including carbon monoxide and hydrogen sulfide, could be achieved by using iron sulfide and nickel sulfide as catalysts. Most of the steps required only moderate temperatures and pressures, although one stage required a higher temperature and a pressure equivalent to that found deep beneath rock. Hence it was suggested that self-sustaining synthesis of proteins could have occurred near hydrothermal vents.

Metabolism first: Pre-cells (successive cellularisation)

In this scenario, the biochemical evolution of life led to diversification through the development of a multiphenotypical population of pre-cells, i.e. evolving entities of primordial life with different characteristics and widespread horizontal gene transfer. From this pre-cell population the founder groups A, B and C arose, and then, from them, the precursor cells (here named proto-cells) of the three domains of life arose successively, leading first to the domain Bacteria, then to the domain Archaea and finally to the domain Eucarya. For the development of cells (cellularisation), the pre-cells had to be protected from their surroundings by envelopes (i.e. membranes, walls). For instance, the development of rigid cell walls through the invention of peptidoglycan in bacteria (domain Bacteria) may have been a prerequisite for their successful survival, radiation and colonisation of virtually all habitats of the geosphere and hydrosphere. This scenario may explain the quasi-random distribution of evolutionarily important features among the three domains and, at the same time, the existence of the most basic biochemical features (genetic code, set of protein amino acids, etc.) in all three domains (unity of life), as well as the close relationship between the Archaea and the Eucarya.

Prebiotic environments

Geothermal springs

Wet-dry cycles at geothermal springs have been shown to solve the problem of hydrolysis and to promote the polymerization and vesicle encapsulation of biopolymers. The temperatures of geothermal springs are suitable for biomolecules, and silica minerals and metal sulfides in these environments have photocatalytic properties that can catalyze the synthesis of biomolecules. Solar UV exposure also promotes the synthesis of biomolecules such as RNA nucleotides. An analysis of hydrothermal veins in a 3.5 Gya (billion years ago) geothermal spring setting found the elements required for the origin of life: potassium, boron, hydrogen, sulfur, phosphorus, zinc, nitrogen, and oxygen. Mulkidjanian and colleagues find that such environments have ionic concentrations identical to the cytoplasm of modern cells. Fatty acids in acidic or slightly alkaline geothermal springs assemble into vesicles after wet-dry cycles, as there is a lower concentration of ionic solutes at geothermal springs, which are freshwater environments, in contrast to seawater, which has a higher concentration of ionic solutes. For organic compounds to be present at geothermal springs, they would likely have been transported there by carbonaceous meteors; the molecules that fell from the meteors would then have accumulated in geothermal springs. Geothermal springs can accumulate aqueous phosphate in the form of phosphoric acid.
Based on lab-run models, however, these concentrations of phosphate are insufficient to facilitate biosynthesis. As for the evolutionary implications, freshwater heterotrophic cells that depended upon synthesized organic compounds later evolved photosynthesis because of their continuous exposure to sunlight, as well as cell walls with ion pumps to maintain their intracellular metabolism after they entered the oceans.

Deep sea hydrothermal vents

Catalytic mineral particles and transition metal sulfides in these environments are capable of catalyzing the synthesis of organic compounds. Scientists have simulated laboratory conditions identical to those at white smokers and successfully oligomerized RNA, measured to be 4 units long. Long-chain fatty acids can be synthesized via Fischer–Tropsch synthesis, and another experiment replicating conditions similar to white smokers, with long-chain fatty acids present, resulted in the assembly of vesicles. Exergonic reactions at hydrothermal vents are suggested to have been a source of free energy that promoted chemical reactions and the synthesis of organic molecules, and to be conducive to chemical gradients. In small rock pore systems, membranous structures between the alkaline vent water and the acidic ocean would support natural proton gradients. Nucleobase synthesis could occur by following universally conserved biochemical pathways, using metal ions as catalysts. RNA molecules of 22 bases can be polymerized in alkaline hydrothermal vent pores; thin pores are shown to accumulate only long polynucleotides, whereas thick pores accumulate both short and long polynucleotides. Small mineral cavities or mineral gels could have been compartments for abiogenic processes. A genomic analysis supports this hypothesis: among 6.1 million sequenced prokaryotic genes, 355 genes were found that likely trace to the last universal common ancestor (LUCA). This work reconstructs LUCA as a thermophilic anaerobe with a Wood–Ljungdahl pathway, implying an origin of life at white smokers. LUCA would also have exhibited other biochemical pathways such as gluconeogenesis, a reverse incomplete Krebs cycle, glycolysis, and the pentose phosphate pathway, as well as biochemical reactions such as reductive amination and transamination.

Life "seeded" from elsewhere

The panspermia hypothesis does not explain how life arose originally, but simply examines the possibility of its coming from somewhere other than Earth. The idea that life on Earth was "seeded" from elsewhere in the Universe dates back at least to the Greek philosopher Anaximander in the sixth century BCE. In the twentieth century it was proposed by the physical chemist Svante Arrhenius, by the astronomers Fred Hoyle and Chandra Wickramasinghe, and by molecular biologist Francis Crick and chemist Leslie Orgel. There are three main versions of the "seeded from elsewhere" hypothesis: from elsewhere in our Solar System, via fragments knocked into space by a large meteor impact, in which case the most credible sources are Mars and Venus; by alien visitors, possibly as a result of accidental contamination by microorganisms that they brought with them; and from outside the Solar System but by natural means. Experiments in low Earth orbit, such as EXOSTACK, have demonstrated that some microorganism spores can survive the shock of being catapulted into space, and some can survive exposure to outer space radiation for at least 5.7 years. Meteorite ALH84001, which was once part of the Martian crust, shows evidence of carbonate globules with texture and size indicative of terrestrial bacterial activity.
Scientists are divided over the likelihood of life arising independently on Mars, or on other planets in our galaxy.

Carbonate-rich lakes

One theory traces the origins of life to the abundant carbonate-rich lakes which would have dotted the early Earth. Phosphate would have been an essential cornerstone of the origin of life, since it is a critical component of nucleotides, phospholipids, and adenosine triphosphate. Phosphate is often depleted in natural environments due to its uptake by microbes and its affinity for calcium ions. In a process called apatite precipitation, free phosphate ions react with the calcium ions abundant in water and precipitate out of solution as apatite minerals. When attempting to simulate prebiotic phosphorylation, scientists have found success only when using phosphorus levels far above modern natural concentrations. This problem of low phosphate is solved in carbonate-rich environments. In the presence of carbonate, calcium readily reacts to form calcium carbonate instead of apatite minerals. With the free calcium ions removed from solution, phosphate ions are no longer precipitated out. This is specifically seen in lakes with no inflow, since no new calcium is introduced into the water body. After all of the calcium is sequestered into calcium carbonate (calcite), phosphate concentrations are able to increase to the levels necessary for facilitating biomolecule creation.

Though carbonate-rich lakes have alkaline chemistry in modern times, models suggest that carbonate lakes had a pH low enough for prebiotic synthesis when placed in the acidifying context of Earth's early carbon dioxide-rich atmosphere. Rainwater rich in carbonic acid weathered the rock on the surface of the Earth at rates far greater than today. With a high phosphate influx, no phosphate precipitation, and no microbial usage of phosphate at this time, models show that phosphate reached concentrations approximately 100 times greater than today's. The modelled pH and phosphate levels of early Earth carbonate-rich lakes nearly match the conditions used in current laboratory experiments on the origin of life. Similar to the process predicted by geothermal hot spring hypotheses, changing lake levels and wave action deposited phosphorus-rich brine onto dry shores and marginal pools. This drying of the solution promotes polymerization reactions and removes enough water to promote phosphorylation, a process integral to biological energy storage and transfer. When washed away by further precipitation and wave action, researchers concluded, these newly formed biomolecules may have washed back into the lake, allowing the first prebiotic syntheses on Earth to occur.

Environmental and evolutionary impact of microbial mats

Microbial mats are multi-layered, multi-species colonies of bacteria and other organisms that are generally only a few millimeters thick, but still contain a wide range of chemical environments, each of which favors a different set of microorganisms. To some extent each mat forms its own food chain, as the by-products of each group of microorganisms generally serve as "food" for adjacent groups. Stromatolites are stubby pillars built as microorganisms in mats slowly migrate upwards to avoid being smothered by sediment deposited on them by water. There has been vigorous debate about the validity of alleged stromatolite fossils from before 3 Ga, with critics arguing that they could have been formed by non-biological processes.
In 2006, another find of stromatolites was reported from the same part of Australia, in rocks dated to 3.5 Ga. In modern underwater mats the top layer often consists of photosynthesizing cyanobacteria, which create an oxygen-rich environment, while the bottom layer is oxygen-free and often dominated by hydrogen sulfide emitted by the organisms living there. Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms; oxygenic photosynthesis by bacteria in mats increased biological productivity by a factor of between 100 and 1,000. The source of hydrogen atoms used by oxygenic photosynthesis is water, which is much more plentiful than the geologically produced reducing agents required by the earlier non-oxygenic photosynthesis. From this point onwards, life itself produced significantly more of the resources it needed than geochemical processes did. Oxygen became a significant component of Earth's atmosphere about 2.4 Ga. Although eukaryotes may have been present much earlier, the oxygenation of the atmosphere was a prerequisite for the evolution of the most complex eukaryotic cells, from which all multicellular organisms are built. The boundary between oxygen-rich and oxygen-free layers in microbial mats would have moved upwards when photosynthesis shut down overnight, and then downwards as it resumed on the next day. This would have created selection pressure for organisms in this intermediate zone to acquire the ability to tolerate and then to use oxygen, possibly via endosymbiosis, where one organism lives inside another and both of them benefit from their association. Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms. Hence they are the most self-sufficient, and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton, providing the basis of most marine food chains.

Diversification of eukaryotes

Chromatin, nucleus, endomembrane system, and mitochondria

Eukaryotes may have been present long before the oxygenation of the atmosphere, but most modern eukaryotes require oxygen, which their mitochondria use to fuel the production of ATP, the internal energy supply of all known cells. In the 1970s, a vigorous debate concluded that eukaryotes emerged as a result of a sequence of endosymbioses between prokaryotes. For example: a predatory microorganism invaded a large prokaryote, probably an archaean, but instead of killing its prey, the attacker took up residence and evolved into mitochondria; one of these chimeras later tried to swallow a photosynthesizing cyanobacterium, but the victim survived inside the attacker and the new combination became the ancestor of plants; and so on. After each endosymbiosis, the partners eventually eliminated unproductive duplication of genetic functions by re-arranging their genomes, a process which sometimes involved transfer of genes between them. Another hypothesis proposes that mitochondria were originally sulfur- or hydrogen-metabolising endosymbionts and became oxygen-consumers later. On the other hand, mitochondria might have been part of eukaryotes' original equipment. There is a debate about when eukaryotes first appeared: the presence of steranes in Australian shales may indicate eukaryotes at 2.7 Ga; however, an analysis in 2008 concluded that these chemicals infiltrated the rocks less than 2.2 Ga and prove nothing about the origins of eukaryotes.
Fossils of the alga Grypania have been reported in 1.85 billion-year-old rocks (originally dated to 2.1 Ga but later revised), indicating that eukaryotes with organelles had already evolved. A diverse collection of fossil algae has been found in rocks dated between 1.5 and 1.4 Ga. The earliest known fossils of fungi date from 1.43 Ga.

Plastids

Plastids, the superclass of organelles of which chloroplasts are the best-known exemplar, are thought to have originated from endosymbiotic cyanobacteria. The symbiosis evolved around 1.5 Ga and enabled eukaryotes to carry out oxygenic photosynthesis. Three evolutionary lineages of photosynthetic plastids have since emerged: chloroplasts in green algae and plants, rhodoplasts in red algae, and cyanelles in the glaucophytes. Not long after this primary endosymbiosis of plastids, rhodoplasts and chloroplasts were passed down to other bikonts, establishing a eukaryotic assemblage of phytoplankton by the end of the Neoproterozoic Eon.

Sexual reproduction and multicellular organisms

Evolution of sexual reproduction

The defining characteristics of sexual reproduction in eukaryotes are meiosis and fertilization, resulting in genetic recombination and giving offspring 50% of their genes from each parent. By contrast, in asexual reproduction there is no recombination, only occasional horizontal gene transfer. Bacteria also exchange DNA by bacterial conjugation, enabling the spread of resistance to antibiotics and other toxins, and the ability to utilize new metabolites. However, conjugation is not a means of reproduction, and it is not limited to members of the same species; there are cases where bacteria transfer DNA to plants and animals. On the other hand, bacterial transformation is clearly an adaptation for the transfer of DNA between bacteria of the same species. This is a complex process involving the products of numerous bacterial genes, and can be regarded as a bacterial form of sex. It occurs naturally in at least 67 prokaryotic species (in seven different phyla). Sexual reproduction in eukaryotes may have evolved from bacterial transformation.

The disadvantages of sexual reproduction are well known: the genetic reshuffle of recombination may break up favorable combinations of genes, and, since males do not directly increase the number of offspring in the next generation, an asexual population can out-breed and, in as little as 50 generations, displace a sexual population that is equal in every other respect. Nevertheless, the great majority of animals, plants, fungi and protists reproduce sexually. There is strong evidence that sexual reproduction arose early in the history of eukaryotes and that the genes controlling it have changed very little since then. How sexual reproduction evolved and survived is an unsolved puzzle.

The Red Queen hypothesis suggests that sexual reproduction provides protection against parasites, because it is easier for parasites to evolve means of overcoming the defenses of genetically identical clones than those of sexual species that present moving targets, and there is some experimental evidence for this. However, there is still doubt about whether it would explain the survival of sexual species if multiple similar clone species were present, as one of the clones may survive the attacks of parasites for long enough to out-breed the sexual species. Furthermore, contrary to the expectations of the Red Queen hypothesis, Kathryn A. Hanley et al.
found that the prevalence, abundance and mean intensity of mites were significantly higher in sexual geckos than in asexuals sharing the same habitat. In addition, the biologist Matthew Parker, after reviewing numerous genetic studies on plant disease resistance, failed to find a single example consistent with the concept that pathogens are the primary selective agent responsible for sexual reproduction in the host. Alexey Kondrashov's deterministic mutation hypothesis (DMH) assumes that each organism has more than one harmful mutation and that the combined effects of these mutations are more harmful than the sum of the harm done by each individual mutation. If so, sexual recombination of genes will reduce the harm that bad mutations do to offspring and at the same time eliminate some bad mutations from the gene pool by isolating them in individuals that perish quickly because they have an above-average number of bad mutations. However, the evidence suggests that the DMH's assumptions are shaky, because many species have on average less than one harmful mutation per individual and no species that has been investigated shows evidence of synergy between harmful mutations. The random nature of recombination causes the relative abundance of alternative traits to vary from one generation to another. This genetic drift is insufficient on its own to make sexual reproduction advantageous, but a combination of genetic drift and natural selection may be sufficient. When chance produces combinations of good traits, natural selection gives a large advantage to lineages in which these traits become genetically linked. On the other hand, the benefits of good traits are neutralized if they appear along with bad traits. Sexual recombination gives good traits the opportunity to become linked with other good traits, and mathematical models suggest this may be more than enough to offset the disadvantages of sexual reproduction. Other combinations of hypotheses that are inadequate on their own are also being examined. The adaptive function of sex remains a major unresolved issue in biology. The competing models to explain it were reviewed by John A. Birdsell and Christopher Wills. The hypotheses discussed above all depend on the possible beneficial effects of random genetic variation produced by genetic recombination. An alternative view is that sex arose and is maintained as a process for repairing DNA damage, and that the genetic variation it produces is an occasionally beneficial byproduct. Multicellularity The simplest definitions of "multicellular", for example "having multiple cells", could include colonial cyanobacteria like Nostoc. Even a technical definition such as "having the same genome but different types of cell" would still include colonial green algae such as Volvox, which have cells that specialize in reproduction. Multicellularity evolved independently in organisms as diverse as sponges and other animals, fungi, plants, brown algae, cyanobacteria, slime molds and myxobacteria. For the sake of brevity, this article focuses on the organisms that show the greatest specialization of cells and variety of cell types, although this approach to the evolution of biological complexity could be regarded as "rather anthropocentric".
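As promised above, here is a minimal sketch of the arithmetic behind the "twofold cost of males": an all-female asexual clone converts twice as much of each generation into daughters as a sexual population that "spends" half its offspring on males. All parameters (population size, starting frequency, threshold) are invented for illustration and are not from the source.

```python
# Minimal sketch of the "twofold cost of males" (illustrative assumptions only).
# Each generation the clone's numbers grow twice as fast as the sexual
# population's, so the clone's odds ratio doubles every generation.

def generations_to_displace(population=10**12, threshold=0.99, advantage=2.0):
    """Generations until one founding clone member exceeds `threshold` frequency."""
    freq = 1.0 / population                      # a single asexual mutant
    gens = 0
    while freq < threshold:
        odds = freq / (1.0 - freq) * advantage   # clone out-reproduces 2:1
        freq = odds / (1.0 + odds)
        gens += 1
    return gens

# Even one asexual mutant in a population of a trillion displaces the
# sexual majority in fewer than 50 generations:
print(generations_to_displace())  # -> 47
```

This is only the bookkeeping behind the quoted figure; real populations add selection, drift, parasites and ecology on top of it, which is exactly what the hypotheses above try to capture.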
The initial advantages of multicellularity may have included: more efficient sharing of nutrients that are digested outside the cell; increased resistance to predators, many of which attacked by engulfing their prey; the ability to resist currents by attaching to a firm surface; the ability to reach upwards to filter-feed or to obtain sunlight for photosynthesis; the ability to create an internal environment that gives protection against the external one; and even the opportunity for a group of cells to behave "intelligently" by sharing information. These features would also have provided opportunities for other organisms to diversify, by creating more varied environments than flat microbial mats could. Multicellularity with differentiated cells is beneficial to the organism as a whole but disadvantageous from the point of view of individual cells, most of which lose the opportunity to reproduce themselves. In an asexual multicellular organism, rogue cells which retain the ability to reproduce may take over and reduce the organism to a mass of undifferentiated cells. Sexual reproduction eliminates such rogue cells from the next generation and therefore appears to be a prerequisite for complex multicellularity. The available evidence indicates that eukaryotes evolved much earlier but remained inconspicuous until a rapid diversification around 1 Ga. The only respect in which eukaryotes clearly surpass bacteria and archaea is their capacity for variety of forms, and sexual reproduction enabled eukaryotes to exploit that advantage by producing organisms with multiple cells that differed in form and function. By comparing the composition of transcription factor families and regulatory network motifs between unicellular and multicellular organisms, scientists found that multicellular organisms have many novel transcription factor families and three novel types of regulatory network motifs, and that transcription factors from the novel families are preferentially wired into these novel network motifs, which are essential for multicellular development. These results suggest a plausible mechanism by which novel transcription factor families and novel network motifs contributed to the origin of multicellular organisms at the level of transcriptional regulation. Fossil evidence Fungus-like fossils were found in vesicular basalt dating back to the Paleoproterozoic Era, around 2.4 billion years ago. The controversial Francevillian biota fossils, dated to 2.1 Ga, are the earliest known fossil organisms that are clearly multicellular, if they are indeed fossils. They may have had differentiated cells. Another early multicellular fossil, Qingshania, dated to 1.7 Ga, appears to consist of virtually identical cells. The red alga Bangiomorpha, dated at 1.2 Ga, is the earliest known organism that certainly has differentiated, specialized cells, and is also the oldest known sexually reproducing organism. The 1.43 billion-year-old fossils interpreted as fungi appear to have been multicellular with differentiated cells. The "string of beads" organism Horodyskia, found in rocks dated from 1.5 Ga to 900 Ma, may have been an early metazoan; however, it has also been interpreted as a colonial foraminiferan. Emergence of animals Animals are multicellular eukaryotes, and are distinguished from plants, algae, and fungi by lacking cell walls. All animals are motile, if only at certain life stages.
All animals except sponges have bodies differentiated into separate tissues, including muscles, which move parts of the animal by contracting, and nerve tissue, which transmits and processes signals. In November 2019, researchers reported the discovery of Caveasphaera, a multicellular organism found in 609-million-year-old rocks, that is not easily defined as an animal or non-animal, and which may be related to one of the earliest instances of animal evolution. Fossil studies of Caveasphaera have suggested that animal-like embryonic development arose much earlier than the oldest clearly defined animal fossils, and may be consistent with studies suggesting that animal evolution may have begun about 750 million years ago. Nonetheless, the earliest widely accepted animal fossils are the rather modern-looking cnidarians (the group that includes jellyfish, sea anemones and Hydra), possibly from the Doushantuo Formation, although those fossils can only be dated approximately. Their presence implies that the cnidarian and bilaterian lineages had already diverged. The Ediacara biota, which flourished for the last 40 million years before the start of the Cambrian, were the first animals more than a very few centimetres long. Many were flat and had a "quilted" appearance, and seemed so strange that there was a proposal to classify them as a separate kingdom, Vendozoa. Others, however, have been interpreted as early molluscs (Kimberella), echinoderms (Arkarua), and arthropods (Spriggina, Parvancorina). There is still debate about the classification of these specimens, mainly because the diagnostic features which allow taxonomists to classify more recent organisms, such as similarities to living organisms, are generally absent in the Ediacarans. However, there seems little doubt that Kimberella was at least a triploblastic bilaterian animal, in other words, an animal significantly more complex than the cnidarians. The small shelly fauna are a very mixed collection of fossils found between the Late Ediacaran and Middle Cambrian periods. The earliest, Cloudina, shows signs of successful defense against predation and may indicate the start of an evolutionary arms race. Some tiny Early Cambrian shells almost certainly belonged to molluscs, while the owners of some "armor plates", Halkieria and Microdictyon, were eventually identified when more complete specimens were found in Cambrian lagerstätten that preserved soft-bodied animals. In the 1970s there was already a debate about whether the emergence of the modern phyla was "explosive" or gradual but hidden by the shortage of Precambrian animal fossils. A re-analysis of fossils from the Burgess Shale lagerstätte increased interest in the issue when it revealed animals, such as Opabinia, which did not fit into any known phylum. At the time these were interpreted as evidence that the modern phyla had evolved very rapidly in the Cambrian explosion and that the Burgess Shale's "weird wonders" showed that the Early Cambrian was a uniquely experimental period of animal evolution. Later discoveries of similar animals and the development of new theoretical approaches led to the conclusion that many of the "weird wonders" were evolutionary "aunts" or "cousins" of modern groups—for example that Opabinia was a member of the lobopods, a group which includes the ancestors of the arthropods, and that it may have been closely related to the modern tardigrades.
Nevertheless, there is still much debate about whether the Cambrian explosion was really explosive and, if so, how and why it happened and why it appears unique in the history of animals. Deuterostomes and the first vertebrates Most of the animals at the heart of the Cambrian explosion debate were protostomes, one of the two main groups of complex animals. The other major group, the deuterostomes, contains invertebrates such as starfish and sea urchins (echinoderms), as well as chordates (see below). Many echinoderms have hard calcite "shells", which are fairly common from the Early Cambrian small shelly fauna onwards. Other deuterostome groups are soft-bodied, and most of the significant Cambrian deuterostome fossils come from the Chengjiang fauna, a lagerstätte in China. The chordates are another major deuterostome group: animals with a distinct dorsal nerve cord. Chordates include soft-bodied invertebrates such as tunicates as well as vertebrates—animals with a backbone. While tunicate fossils predate the Cambrian explosion, the Chengjiang fossils Haikouichthys and Myllokunmingia appear to be true vertebrates, and Haikouichthys had distinct vertebrae, which may have been slightly mineralized. Vertebrates with jaws, such as the acanthodians, first appeared in the Late Ordovician. Colonization of land Adaptation to life on land is a major challenge: all land organisms need to avoid drying out, and all those above microscopic size must create special structures to withstand gravity; respiration and gas-exchange systems have to change; and reproductive systems cannot depend on water to carry eggs and sperm towards each other. Although the earliest good evidence of land plants and animals dates back to the Ordovician period, and a number of microorganism lineages made it onto land much earlier, modern land ecosystems appeared only in the Late Devonian. In May 2017, evidence of the earliest known life on land may have been found in 3.48-billion-year-old geyserite and other related mineral deposits (often found around hot springs and geysers) uncovered in the Pilbara Craton of Western Australia. In July 2018, scientists reported that the earliest life on land may have been bacteria living there 3.22 billion years ago. In May 2019, scientists reported the discovery of a fossilized fungus, named Ourasphaira giraldae, in the Canadian Arctic, that may have grown on land a billion years ago, well before plants were living on land. Evolution of terrestrial antioxidants Oxygen began to accumulate in Earth's atmosphere more than 3 Ga ago, as a by-product of photosynthesis in cyanobacteria (blue-green algae). However, oxygen produces destructive chemical oxidation, which was toxic to most earlier organisms. Protective endogenous antioxidant enzymes and exogenous dietary antioxidants helped to prevent oxidative damage. For example, brown algae accumulate inorganic mineral antioxidants such as rubidium, vanadium, zinc, iron, copper, molybdenum, selenium and iodine, at concentrations more than 30,000 times those in seawater. Most marine mineral antioxidants act in the cells as essential trace elements in redox and antioxidant metalloenzymes. When plants and animals began to enter rivers and land about 500 Ma ago, environmental deficiency of these marine mineral antioxidants was a challenge to the evolution of terrestrial life. Terrestrial plants slowly optimized the production of new endogenous antioxidants such as ascorbic acid, polyphenols, flavonoids, tocopherols, etc.
A few of these appeared more recently, in the last 200–50 Ma, in the fruits and flowers of angiosperm plants. In fact, angiosperms (the dominant type of plant today) and most of their antioxidant pigments evolved during the Late Jurassic period. Plants employ antioxidants to defend their structures against reactive oxygen species produced during photosynthesis. Animals are exposed to the same oxidants, and they have evolved endogenous enzymatic antioxidant systems. Iodine, in the form of the iodide ion I−, is the most primitive and abundant electron-rich essential element in the diet of marine and terrestrial organisms; it acts as an electron donor and has this ancestral antioxidant function in all iodide-concentrating cells, from primitive marine algae to terrestrial vertebrates. Evolution of soil Before the colonization of land there was no soil, a combination of mineral particles and decomposed organic matter. Land surfaces were either bare rock or shifting sand produced by weathering. Water and dissolved nutrients would have drained away very quickly. In the Sub-Cambrian peneplain in Sweden, for example, the maximum depth of kaolinitization by Neoproterozoic weathering is about 5 m, while nearby kaolin deposits developed in the Mesozoic are much thicker. It has been argued that in the late Neoproterozoic sheet wash was a dominant process of erosion of surface material because of the lack of plants on land. Films of cyanobacteria, which are not plants but use the same photosynthesis mechanisms, have been found in modern deserts in areas unsuitable for vascular plants. This suggests that microbial mats may have been the first organisms to colonize dry land, possibly in the Precambrian. Mat-forming cyanobacteria could have gradually evolved resistance to desiccation as they spread from the seas to intertidal zones and then to land. Lichens, which are symbiotic combinations of a fungus (almost always an ascomycete) and one or more photosynthesizers (green algae or cyanobacteria), are also important colonizers of lifeless environments, and their ability to break down rocks contributes to soil formation where plants cannot survive. The earliest known ascomycete fossils date from the Silurian. Soil formation would have been very slow until the appearance of burrowing animals, which mix the mineral and organic components of soil and whose feces are a major source of organic components. Burrows have been found in Ordovician sediments, and are attributed to annelids (worms) or arthropods. Plants and the Late Devonian wood crisis In aquatic algae, almost all cells are capable of photosynthesis and are nearly independent. Life on land requires plants to become internally more complex and specialized: photosynthesis is most efficient at the top; roots extract water and nutrients from the ground; and the intermediate parts support and transport. Spores of land plants resembling liverworts have been found in Middle Ordovician rocks. Middle Silurian rocks contain fossils of true plants, including clubmosses such as Baragwanathia; most were low-growing, and some appear closely related to the vascular plants, the group that includes trees. By the Late Devonian, abundant trees such as Archaeopteris bound the soil so firmly that they changed river systems from mostly braided to mostly meandering. This caused the "Late Devonian wood crisis": the trees removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus causing an ice age in the Carboniferous period.
This did not repeat in later ecosystems, since the carbon dioxide "locked up" in wood is returned to the atmosphere by decomposition of dead wood; the earliest fossil evidence of fungi that can decompose wood also comes from the Late Devonian. The increasing depth of plants' roots led to more washing of nutrients into rivers and seas by rain. This caused algal blooms whose high consumption of oxygen caused anoxic events in deeper waters, increasing the extinction rate among deep-water animals. Land invertebrates Animals had to change their feeding and excretory systems, and most land animals developed internal fertilization of their eggs. The difference in refractive index between water and air required changes in their eyes. On the other hand, in some ways movement and breathing became easier, and the better transmission of high-frequency sounds in air encouraged the development of hearing. The oldest animal with evidence of air-breathing, although not the oldest myriapod in the fossil record, is Pneumodesmus, an archipolypodan millipede from the Early Devonian. Its air-breathing, terrestrial nature is evidenced by the presence of spiracles, the openings to tracheal systems. However, some earlier trace fossils from the Cambrian–Ordovician boundary are interpreted as the tracks of large amphibious arthropods on coastal sand dunes, and may have been made by euthycarcinoids, which are thought to be evolutionary "aunts" of myriapods. Other trace fossils from the Late Ordovician probably represent land invertebrates, and there is clear evidence of numerous arthropods on coasts and alluvial plains shortly before the Silurian–Devonian boundary, including signs that some arthropods ate plants. Arthropods were well pre-adapted to colonise land, because their existing jointed exoskeletons provided protection against desiccation, support against gravity and a means of locomotion that was not dependent on water. The fossil record of other major invertebrate groups on land is poor: none at all for non-parasitic flatworms, nematodes or nemerteans; some parasitic nematodes have been fossilized in amber; annelid worm fossils are known from the Carboniferous, but they may still have been aquatic animals; the earliest fossils of gastropods on land date from the Late Carboniferous, and this group may have had to wait until leaf litter became abundant enough to provide the moist conditions they need. The earliest confirmed fossils of flying insects date from the Late Carboniferous, but it is thought that insects developed the ability to fly in the Early Carboniferous or even the Late Devonian. This gave them a wider range of ecological niches for feeding and breeding, and a means of escape from predators and from unfavorable changes in the environment. About 99% of modern insect species fly or are descendants of flying species. Amphibians Tetrapods, vertebrates with four limbs, evolved from rhipidistian fish over a relatively short timespan during the Late Devonian. The early forms are grouped together as the Labyrinthodontia. They retained aquatic, fry-like tadpoles, a system still seen in modern amphibians. Iodine and the thyroid hormones T4 and T3 stimulate amphibian metamorphosis and the evolution of nervous systems, transforming the aquatic, vegetarian tadpole into a "more evolved" terrestrial, carnivorous frog with better neurological, visuospatial, olfactory and cognitive abilities for hunting.
The new hormonal action of T3 was made possible by the formation of T3 receptors in the cells of vertebrates. First, about 600–500 million years ago, the alpha T3 receptors with a metamorphosing action appeared in primitive chordates; then, about 250–150 million years ago, the beta T3 receptors with metabolic and thermogenic actions appeared in birds and mammals. From the 1950s to the early 1980s it was thought that tetrapods evolved from fish that had already acquired the ability to crawl on land, possibly in order to go from a pool that was drying out to one that was deeper. However, in 1987, nearly complete fossils of Acanthostega showed that this Late Devonian transitional animal had legs and both lungs and gills, but could never have survived on land: its limbs and its wrist and ankle joints were too weak to bear its weight; its ribs were too short to prevent its lungs from being squeezed flat by its weight; and its fish-like tail fin would have been damaged by dragging on the ground. The current hypothesis is that Acanthostega was a wholly aquatic predator that hunted in shallow water. Its skeleton differed from that of most fish in ways that enabled it to raise its head to breathe air while its body remained submerged: its jaws show modifications that would have enabled it to gulp air; the bones at the back of its skull are locked together, providing strong attachment points for muscles that raised its head; and the head is not joined to the shoulder girdle, giving it a distinct neck. The Devonian proliferation of land plants may help to explain why air-breathing would have been an advantage: leaves falling into streams and rivers would have encouraged the growth of aquatic vegetation; this would have attracted grazing invertebrates and small fish that preyed on them; they would have been attractive prey but the environment was unsuitable for the big marine predatory fish; and air-breathing would have been necessary because these waters would have been short of oxygen, since warm water holds less dissolved oxygen than cooler marine water and since the decomposition of vegetation would have used some of the oxygen. Later discoveries revealed earlier transitional forms between Acanthostega and completely fish-like animals. Unfortunately, there is then a gap (Romer's gap) of about 30 Ma between the fossils of ancestral tetrapods and Middle Carboniferous fossils of vertebrates that look well adapted for life on land. Some fossils found within this gap have four limbs ending in five digits, showing that true or crown tetrapods had appeared by around 350 Ma. Some of the fossils after this gap look as if the animals they belonged to were early relatives of modern amphibians, all of which need to keep their skins moist and to lay their eggs in water, while others are accepted as early relatives of the amniotes, whose waterproof skin and egg membranes enable them to live and breed far from water. The Carboniferous Rainforest Collapse may have paved the way for amniotes to become dominant over amphibians. Reptiles Amniotes, whose eggs can survive in dry environments, probably evolved in the Late Carboniferous period. The earliest fossils of the two surviving amniote groups, synapsids and sauropsids, date from around this time.
The synapsid pelycosaurs and their descendants, the therapsids, are the most common land vertebrates in the best-known Permian (298.9 to 251.9 Ma) fossil beds. However, at the time these beds were all in temperate zones at middle latitudes, and there is evidence that hotter, drier environments nearer the Equator were dominated by sauropsids and amphibians. The Permian–Triassic extinction event wiped out almost all land vertebrates, as well as the great majority of other life. During the slow recovery from this catastrophe, estimated to have taken 30 million years, a previously obscure sauropsid group became the most abundant and diverse terrestrial vertebrates: a few fossils of archosauriforms ("ruling lizard forms") have been found in Late Permian rocks, but, by the Middle Triassic, archosaurs were the dominant land vertebrates. Dinosaurs distinguished themselves from other archosaurs in the Late Triassic, and became the dominant land vertebrates of the Jurassic and Cretaceous periods. Birds During the Late Jurassic, birds evolved from small, predatory theropod dinosaurs. The first birds inherited teeth and long, bony tails from their dinosaur ancestors, but some had developed horny, toothless beaks by the very Late Jurassic and short pygostyle tails by the Early Cretaceous. Mammals While the archosaurs and dinosaurs were becoming more dominant in the Triassic, the mammaliaform successors of the therapsids evolved into small, mainly nocturnal insectivores. This ecological role may have promoted the evolution of mammals; for example, nocturnal life may have accelerated the development of endothermy ("warm-bloodedness") and hair or fur. By the Early Jurassic there were animals that were very like today's mammals in a number of respects. Unfortunately, there is a gap in the fossil record throughout the Middle Jurassic. However, fossil teeth discovered in Madagascar indicate that the split between the lineage leading to monotremes and the one leading to other living mammals had occurred by then. After dominating land vertebrate niches for about 150 Ma, the non-avian dinosaurs perished in the Cretaceous–Paleogene extinction event, along with many other groups of organisms. Mammals throughout the time of the dinosaurs had been restricted to a narrow range of taxa, sizes and shapes, but increased rapidly in size and diversity after the extinction, with bats taking to the air within 13 million years and cetaceans to the sea within 15 million years. Flowering plants The first flowering plants appeared around 130 Ma. The 250,000 to 400,000 species of flowering plants outnumber all other ground plants combined, and are the dominant vegetation in most terrestrial ecosystems. There is fossil evidence that flowering plants diversified rapidly in the Early Cretaceous, and that their rise was associated with that of pollinating insects. Among modern flowering plants, Magnolia is thought to be close to the common ancestor of the group. However, paleontologists have not succeeded in identifying the earliest stages in the evolution of flowering plants. Social insects The social insects are remarkable because the great majority of individuals in each colony are sterile. This appears contrary to basic concepts of evolution such as natural selection and the selfish gene.
In fact, there are very few eusocial insect species: only 15 out of approximately 2,600 living families of insects contain eusocial species, and it seems that eusociality has evolved independently only 12 times among arthropods, although some eusocial lineages have diversified into several families. Nevertheless, social insects have been spectacularly successful; for example, although ants and termites account for only about 2% of known insect species, they form over 50% of the total mass of insects. Their ability to control a territory appears to be the foundation of their success. The sacrifice of breeding opportunities by most individuals has long been explained as a consequence of these species' unusual haplodiploid method of sex determination, which has the paradoxical consequence that two sterile worker daughters of the same queen share more genes with each other (on average three-quarters) than they would with their own offspring (one-half) if they could breed. However, E. O. Wilson and Bert Hölldobler argue that this explanation is faulty: for example, it is based on kin selection, but there is no evidence of nepotism in colonies that have multiple queens. Instead, they write, eusociality evolves only in species that are under strong pressure from predators and competitors, but in environments where it is possible to build "fortresses"; after colonies have established this security, they gain other advantages through co-operative foraging. In support of this explanation they cite the appearance of eusociality in bathyergid mole rats, which are not haplodiploid. The earliest fossils of insects have been found in Early Devonian rocks, which preserve only a few varieties of flightless insect. The Mazon Creek lagerstätten from the Late Carboniferous include about 200 species, some gigantic by modern standards, and indicate that insects had occupied their main modern ecological niches as herbivores, detritivores and insectivores. Social termites and ants first appeared in the Early Cretaceous, and advanced social bees have been found in Late Cretaceous rocks but did not become abundant until the Middle Cenozoic. Humans The idea that, along with other life forms, modern-day humans evolved from an ancient, common ancestor was proposed by Robert Chambers in 1844 and taken up by Charles Darwin in 1871. Modern humans evolved from a lineage of upright-walking apes that has been traced back to Sahelanthropus. The first known stone tools were apparently made by Australopithecus garhi, and were found near animal bones that bear scratches made by these tools. The earliest hominines had chimpanzee-sized brains, but there has been a fourfold increase in brain size over the last 3 Ma; a statistical analysis suggests that hominine brain sizes depend almost completely on the date of the fossils, while the species to which they are assigned has only a slight influence. There is a long-running debate about whether modern humans evolved all over the world simultaneously from existing advanced hominines or are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species. There is also debate about whether anatomically modern humans underwent an intellectual, cultural and technological "Great Leap Forward" 40,000–50,000 years ago and, if so, whether this was due to neurological changes that are not visible in fossils. Mass extinctions Life on Earth has suffered occasional mass extinctions since early in its history.
Although they were disasters at the time, mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the new dominant group is "superior" to the old; usually an extinction event eliminates the old dominant group and makes way for the new one. The fossil record appears to show that the gaps between mass extinctions are becoming longer and that the average and background rates of extinction are decreasing. Both of these phenomena could be explained in one or more ways: The oceans may have become more hospitable to life over the last 500 Ma and less vulnerable to mass extinctions: dissolved oxygen became more widespread and penetrated to greater depths; the development of life on land reduced the run-off of nutrients and hence the risk of eutrophication and anoxic events; and marine ecosystems became more diversified so that food chains were less likely to be disrupted. Reasonably complete fossils are very rare: most extinct organisms are represented only by partial fossils, and complete fossils are rarest in the oldest rocks. So paleontologists have mistakenly assigned parts of the same organism to different genera, which were often defined solely to accommodate these finds—the story of Anomalocaris is an example of this. The risk of this mistake is higher for older fossils because these are often both unlike parts of any living organism and poorly conserved. Many of the "superfluous" genera are represented by fragments which are not found again, and the "superfluous" genera appear to become extinct very quickly. Biodiversity in the fossil record, defined as "...the number of distinct genera alive at any given time; that is, those whose first occurrence predates and whose last occurrence postdates that time", shows a different trend: a fairly swift rise; then a slight decline, in which the devastating Permian–Triassic extinction event is an important factor; and then a swift rise continuing to the present.
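The counting rule in the quoted definition of fossil biodiversity is easy to misread, so here is a minimal sketch of it. The genus names and ranges below are invented purely for illustration; only the rule itself (first occurrence predates the time, last occurrence postdates it) comes from the text above.

```python
# Standing diversity at time t: count the genera whose stratigraphic range
# spans t. Ages are in Ma, so larger numbers are older.

ranges = {
    "genus_A": (520, 488),   # (first occurrence, last occurrence) - invented
    "genus_B": (505, 252),
    "genus_C": (300, 0),     # still alive today
}

def standing_diversity(t):
    """Number of genera whose first occurrence predates and last postdates t."""
    return sum(1 for first, last in ranges.values() if first >= t >= last)

for t in (510, 500, 280):
    print(f"{t} Ma: {standing_diversity(t)} genera")
# 510 Ma: 1 (only genus_A has appeared)
# 500 Ma: 2 (genus_A and genus_B overlap)
# 280 Ma: 2 (genus_B and genus_C overlap)
```

Note that a genus counted this way is assumed present throughout its range even where no fossils of it are found at intermediate dates, which is one reason this measure behaves differently from raw counts of described genera.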
Biology and health sciences
Evolution
null
9898300
https://en.wikipedia.org/wiki/Leanchoilia
Leanchoilia
Leanchoilia is a megacheiran arthropod known from Cambrian deposits of the Burgess Shale in Canada and the Chengjiang biota of China. Description L. superlata had long, whip-like flagella extending from its great appendages. Its internal organs are occasionally preserved within the substrate in three dimensions. Its two pairs of eyes are protected and covered by the exterior head shield, with two eyes located on each side. Species Seven species are tentatively accepted today: L. superlata (the type species), L. persephone and L. protogonia from the Burgess Shale, L. illecebrosa and L. obesa from the Chengjiang biota, L. robisoni from Kaili, and L.? hanceyi from the Spence Shale. L. superlata and L. persephone may, however, be examples of sexual dimorphism. Distribution 55 specimens of Leanchoilia are known from the Greater Phyllopod bed, where they comprise 0.1% of the community.
Biology and health sciences
Fossil arthropods
Animals
3246644
https://en.wikipedia.org/wiki/Fattail%20scorpion
Fattail scorpion
Fattail scorpion or fat-tailed scorpion is the common name given to scorpions of the genus Androctonus, one of the most dangerous groups of scorpion species in the world. The genus was first described in 1828 by Christian Gottfried Ehrenberg. They are found throughout the semi-arid and arid regions of the Middle East and Africa. They are moderately sized scorpions, attaining lengths of 10 cm (just under 4 in). Their common name is derived from their distinctly fat metasoma, or tail, while the scientific name originates from Greek and means "man killer". Their venom contains powerful neurotoxins and is especially potent. Stings from Androctonus species are known to cause several human deaths each year. Several pharmaceutical companies manufacture an antivenom for the treatment of Androctonus envenomations. The fat-tailed scorpion is nocturnal, making nests in crevices where it hides during the day to conserve moisture. One of the main threats the scorpions face is habitat loss. Geographic range Androctonus is widespread in North and West Africa, the Middle East and eastwards to the Hindu Kush region. Countries where Androctonus species live include Egypt, Israel, India, Lebanon, Turkey, Jordan, Saudi Arabia, Yemen, Oman, United Arab Emirates, Qatar, Kuwait, Iraq, Iran, Afghanistan, Bahrain, Pakistan and Morocco. Etymology A rough English translation of the name Androctonus is "man-killer", from the Ancient Greek anḗr, andrós (ἀνήρ, ἀνδρός), meaning "man", and kteínein (κτείνειν), meaning "to kill". Crassicauda means "fat-tailed", from the Latin crassus, meaning "fat", and cauda, meaning "tail". Androctonus crassicauda is widespread throughout the Middle East and its name means "fat-tailed man-killer". Similarly, the Latin word for south is australis, from which Androctonus australis, "southern man-killer", derives. Taxonomy Taxonomic reclassification is ongoing, and sources tend to disagree on the number of species. Androctonus Ehrenberg, 1828 (30 species): Androctonus aeneas C. L. Koch, 1839* Androctonus afghanus Lourenço & Qi, 2006* Androctonus aleksandrplotkini Lourenço & Qi, 2007* Androctonus amoreuxi (Audouin, 1826) Androctonus australis (Linnaeus, 1758) Androctonus baluchicus (Pocock, 1900)* Androctonus barbouri (Werner, 1932)* Androctonus bicolor Ehrenberg, 1828 Androctonus cholistanus Kovarik & Ahmed, 2013* Androctonus cacahuati Lourenço, 2023* Androctonus crassicauda (Olivier, 1807) Androctonus dekeyseri Lourenço, 2005* Androctonus donairei Rossi, 2015* Androctonus eburneus (Pallary, 1928)* Androctonus finitimus (Pocock, 1897) Androctonus gonneti Vachon, 1948* Androctonus hoggarensis (Pallary, 1929) Androctonus kunti Yağmur, 2023 Androctonus liouvillei (Pallary, 1924)* Androctonus maelfaiti Lourenço, 2005* Androctonus mauritanicus (Pocock, 1902) Androctonus maroccanus Lourenço, Ythier & Leguin, 2009* Androctonus pallidus Lourenço, Duhem & Cloudsley-Thompson, 2012* Androctonus robustus Kovarik & Ahmed, 2013* Androctonus santi Lourenço, 2015* Androctonus sergenti Vachon, 1948 Androctonus simonettai Rossi, 2015* Androctonus tenuissimus Teruel, Kovarik & Turiel, 2013* Androctonus tigrai Lourenço, Rossi & Sadine 2015* Androctonus togolensis Lourenço, 2008* Androctonus tropeai Rossi, 2015* In captivity Despite the risks of keeping such dangerously venomous species in captivity, Androctonus scorpions are frequently found in the exotic animal trade, A. amoreuxi and A. australis being the most commonly available.
The fat-tailed scorpion's main diet in captivity consists of cockroaches, grasshoppers, and crickets, although it is able to go months without consuming food. Scorpions will generally try to kill and eat anything which moves and is smaller than themselves. Fat-tail scorpions kill their prey by first crushing them with their pincers and then injecting them with venom from their stingers. The sting paralyzes the prey, allowing the scorpion to consume it with ease. The fat-tail scorpion can ingest only liquids. To simulate the desert environment, the enclosure used to keep the scorpion must be kept within a suitable temperature range.
Biology and health sciences
Scorpions
Animals
3246948
https://en.wikipedia.org/wiki/Silo
Silo
A silo is a structure for storing bulk materials. Silos are commonly used for bulk storage of grain, coal, cement, carbon black, woodchips, food products and sawdust. Three types of silos are in widespread use today: tower silos, bunker silos, and bag silos. Silos are used in agriculture to store fermented feed known as silage. Types of silos Tower silo Storage silos are cylindrical structures, typically 10 to 90 ft (3 to 27 m) in diameter and 30 to 275 ft (10 to 90 m) in height, with slipform and jumpform concrete silos at the larger-diameter, taller end of the range. They can be made of many materials. Wood staves, concrete staves, cast concrete, and steel panels have all been used, with varying tradeoffs in cost, durability, and airtightness. Silos storing grain, cement and woodchips are typically unloaded with air slides or augers. Silos can be unloaded into rail cars, trucks or conveyors. Tower silos containing silage are usually unloaded from the top of the pile, originally by hand using a silage fork (which has many more tines than the common pitchfork: 12 versus 4) and in modern times using mechanical unloaders. Bottom silo unloaders are used at times, but are difficult to repair. An advantage of tower silos is that the silage tends to pack well due to its own weight, except in the top few feet. However, this may be a disadvantage for items like chopped wood. The tower silo was invented by Franklin Hiram King. In Canada, Australia and the United States, many country towns or the larger farmers in grain-growing areas have groups of wooden or concrete tower silos, known as grain elevators, to collect grain from the surrounding towns and store and protect it for transport by train, truck or barge to a processor or to an export port. In bumper-crop years, the excess grain is stored in piles without silos or bins, causing considerable losses. Concrete stave silos Concrete stave silos are constructed from small precast concrete blocks with ridged grooves along each edge that lock them together into a high-strength shell. Concrete is much stronger in compression than in tension, so the silo is reinforced with steel hoops encircling the tower and compressing the staves into a tight ring. The vertical stacks are held together by intermeshing the ends of the staves a short distance around the perimeter of each layer, and by the hoops, which are tightened directly across the stave edges. The static pressure of the material inside the silo pressing outward on the staves increases towards the bottom of the silo, so the hoops can be spaced wide apart near the top but must become progressively more closely spaced towards the bottom to prevent seams from opening and the contents leaking out (a rough numerical sketch of this pressure profile appears after the flexible-silo section below). Concrete stave silos are built from common components designed for high strength and long life. They have the flexibility to have their height increased according to the needs of the farm and the purchasing power of the farmer, or to be completely disassembled and reinstalled somewhere else if no longer needed. Low-oxygen tower silos Low-oxygen silos are designed to keep the contents in a low-oxygen atmosphere at all times, to keep the fermented contents in a high-quality state and to prevent the mold and decay that may occur in the top layers of a stave silo or bunker. Low-oxygen silos are opened directly to the atmosphere only during the initial forage loading, and even the unloader chute is sealed against air infiltration.
It would be expensive to design such a huge structure to be immune to atmospheric pressure changes over time. Instead, the silo structure is open to the atmosphere, but outside air is separated from internal air by large impermeable bags sealed to the silo breather openings. In the warmth of the day, when the silo is heated by the sun, the gas trapped inside the silo expands and the bags "breathe out" and collapse. At night the silo cools, the air inside contracts and the bags "breathe in" and expand again. While the iconic blue Harvestore low-oxygen silos were once very common, the speed of their unloader mechanism was not able to match the output rates of modern bunker silos, and this type of silo went into decline. Unloader repair expenses also severely hurt the Harvestore reputation, because the unloader feed mechanism is located at the bottom of the silo under tons of silage. In the event of cutter chain breakage, it can cost up to US$10,000 to perform repairs. The silo may need to be partially or completely emptied by alternate means, to unbury the broken unloader and retrieve broken components lost in the silage at the bottom of the structure. In 2005 the Harvestore company recognized these issues and worked to develop new unloaders with double the flow rate of previous models, to stay competitive with bunkers, and with far greater unloader chain strength. They are now also using load-sensing, soft-start, variable-frequency-drive motor controllers to reduce the likelihood of mechanism breakage and to control the feeder sweep arm movement. Bunker silos Bunker silos are trenches, usually with concrete walls, that are filled and packed using tractors and loaders. The filled trench is covered with a plastic tarp to make it airtight. These silos are usually unloaded with a tractor and loader. They are inexpensive and especially well suited to very large operations. Bag silos Bag silos are heavy plastic tubes, usually around 8 to 12 ft (2.4 to 3.6 m) in diameter, and of variable length as required for the amount of material to be stored. They are packed using a machine made for the purpose, and sealed on both ends. They are unloaded using a tractor and loader or skid-steer loader. The bag is discarded in sections as it is torn off. Bag silos require little capital investment. They can be used as a temporary measure when growth or harvest conditions require more space, though some farms use them every year. Grain bins A grain bin is typically much shorter than a silo and is used for holding dry matter such as cement or grain. Grain is often dried in a grain dryer before being stored in the bin. Bins may be round or square, but round bins tend to empty more easily due to the lack of corners for the stored material to become wedged and encrusted. The stored material may be powdered, whole seed kernels, or cob corn. Due to the dry nature of the stored material, it tends to be lighter than silage and can be more easily handled by under-floor grain unloaders. To facilitate drying after harvesting, some grain bins contain a hollow perforated or screened central shaft to permit easier air infiltration into the stored grain. Cement storage silos There are different types of cement silos, such as the low-level mobile silo and the static upright cement silo, which are used to hold and discharge cement and other powder materials such as pulverised fuel ash (PFA). The low-level silos are fully mobile, with capacities from 100 to 750 tons. They are simple to transport and easy to set up on site.
These mobile silos generally come equipped with an electronic weighing system with digital display and printer. This allows any quantity of cement or powder discharged from the silo to be controlled, and also provides an accurate indication of what remains inside the silo. The static upright silos have capacities from 200 to 800 tons. These are considered a low-maintenance option for the storage of cement or other powders. Cement silos can be used in conjunction with bin-fed batching plants. Sand and salt silos Sand and salt for winter road maintenance are stored in conical, dome-shaped silos with clear-span truss roofs. These are most common in North America, namely in Canada and the United States. The shape is based on the natural cone formed when granular solids are piled. The dome is made of prefabricated wood panels with shingles, installed on a circular reinforced concrete base. An open canopy entrance allows front-end loaders to fill and retrieve material easily. These silos are usually found along major highways or key primary roads. Plastic silos Plastic silos, also known as hopper bottom tanks, are manufactured through various processes such as injection molding, rotational molding, and blow molding. They are constructed using a wide variety of polyethylene plastics. The silos are lightweight and well suited to small-scale storage for farmers with livestock and grain operations. The lightweight design and cost-effective materials make plastic silos an attractive alternative to traditional steel bins. Unlike fabric silos, which "tend to be prone to grain rot and pests which have left many farmers frustrated", plastic silos are safer and more secure, keeping grain fresh and unspoiled. They can be designed as stationary hopper bottom bins or portable pallet bins. Fabric silos Fabric silos are constructed of a fabric bag suspended within a rigid, structural frame. Polyester-based fabrics are often used for fabrication of the bag material, with specific attention given to fabric pore size. Upper areas of silo fabric are often manufactured with a slightly larger pore size, with the design intent of acting as a vent filter during silo filling. Some designs include metal thread within the fabric, providing a static-conductive path from the surface of the fabric to ground. The frame of a fabric silo is typically constructed of steel. Fabric silos are an attractive option because of their relatively low cost compared to conventional silos. However, when fabric silos are used to store granular or particulate combustible materials, conventional practices prescribed by established industry consensus standards addressing combustible dust hazards cannot be applied without a considerable engineering analysis of the system. Flexible silo storage system Flexible silos are a versatile and cost-effective solution for the storage of bulk powders and granules. Manufactured from Trevira tissue, a tough non-toxic fabric, the silos can handle particle sizes down to 2 microns and can be pneumatically loaded without the need for a dust collector. The 45-degree fabric silo cone flexes freely as the product discharges, enabling the efficient flow of hard-to-handle products such as sugar, flour and calcium carbonate, minimally assisted by a small vibrator fitted to the discharge transition. The Trevira tissue is able to breathe, preventing condensation from forming on the silo's internal walls. This eliminates lumping and caking of the product.
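As promised in the stave-silo discussion above, here is a rough sketch of why wall pressure, and hence hoop spacing, varies with depth. It uses Janssen's equation, a standard result in granular mechanics that is not given in this article; all numbers (bulk density, friction coefficient, lateral pressure ratio, diameter) are invented for illustration and are not design data.

```python
import math

# Janssen's equation for lateral wall pressure in a cylindrical silo.
# Wall friction carries part of the material's weight, so pressure
# saturates with depth instead of rising linearly as in a liquid.
RHO = 700.0    # bulk density of silage, kg/m^3 (assumed)
G = 9.81       # gravitational acceleration, m/s^2
D = 6.0        # silo diameter, m (assumed)
MU = 0.45      # wall friction coefficient (assumed)
K = 0.4        # lateral-to-vertical pressure ratio (assumed)

def lateral_pressure(z):
    """Lateral pressure (Pa) on the wall at depth z metres below the surface."""
    z0 = D / (4.0 * MU * K)                     # characteristic depth, m
    return K * RHO * G * z0 * (1.0 - math.exp(-z / z0))

for z in (2, 5, 10, 20):
    print(f"depth {z:2d} m: {lateral_pressure(z) / 1000:.1f} kPa")
# -> roughly 4.9, 10.3, 16.0 and 20.8 kPa for these assumptions
```

The pressure rises quickly near the top and flattens toward an asymptote below, which is consistent with the practice described above: each hoop lower on the silo must resist a larger outward load, so the hoops are placed progressively closer together toward the base.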
Rigid silos With sizes ranging from 2 m³ to over 1,000 m³, rigid silos cover an extremely wide range of applications, and they can be constructed from various materials. Rigid silos can be provided with more than one vertical partition to compartmentalize them for different grades of product. History The 5th-millennium BC site of Tel Tsaf in the southern Levant contains the earliest known silos. Archaeological ruins and ancient texts show that silos were used in ancient Greece as far back as the late 8th century BC; the term silo is derived from the Greek σιρός (siros), "pit for holding grain". The silo pit, as it has been termed, has been a favorite way of storing grain from time immemorial in Asia. In Turkey and Persia, insurance agents bought stores of wheat or barley while comparatively cheap and stored them in hidden pits against seasons of dearth. In Malta a relatively large stock of wheat was preserved in some hundreds of pits (silos) cut in the rock. A single silo stored from 60 to 80 tons of wheat, which, with proper precautions, would keep in good condition for four years or more. The first modern silo, a wooden and upright one filled with grain, was invented and built in 1873 in Spring Grove, Illinois by Fred Hatch of McHenry County, Illinois, US. Forage silo usage Forage harvesting Forage silo filling is performed using a forage harvester, which may either be self-propelled with an engine and driver's cab, or towed behind a tractor that supplies power through a power take-off (PTO). The harvester contains a drum-shaped series of cutting knives which shear the fibrous plant material into small pieces no more than an inch long, to facilitate mechanized blowing and transport via augers. The finely chopped plant material is then blown by the harvester into a forage wagon which contains an automatic unloading system. Tower filling Tower forage filling is typically performed with a silo blower, which is a very large fan with paddle-shaped blades. Material is fed into a vibrating hopper and is pushed into the blower using a spinning spiral auger. There is commonly a water connection on the blower to add moisture to the plant matter being blown into the silo. The blower may be driven by an electric motor, but it is more common to use a spare tractor instead. A large, slow-moving conveyor chain underneath the silage in the forage wagon moves the pile towards the front, where rows of rotating teeth break up the pile and drop it onto a high-speed transverse conveyor that pours the silage out the side of the wagon into the blower hopper. Bag filling Silo bags are filled using a traveling sled driven from the PTO of a tractor left in neutral, which is gradually pushed forward as the bag is filled. The steering of the tractor controls the direction of bag placement as it fills, but bags are normally laid in a straight line. The bag is loaded using the same forage harvesting methods as the tower, but the forage wagon must be moved progressively forward with the bag loader. The loader uses an array of rotating, cam-shaped spiraled teeth working against large comb-shaped tines to push forage into the bag. The forage is pushed in through a large opening, and as the teeth rotate back out, they pass between the comb tines. The cam-shaped auger teeth essentially wipe the forage off against the steel tines, keeping the forage in the bag. Before filling begins, the entire bag is placed onto the loader as a bunched-up tube folded back on itself in many layers to form a thick pile of plastic.
Because the plastic is minimally elastic, the loader mechanism's filling chute is slightly smaller than the final size of the bag, to accommodate this stack of plastic around the mouth of the loader. The plastic slowly unfurls itself around the edges of the loader as the tube is filled. The contents of the silo bag are under pressure as it is filled, with the pressure controlled by a large brake-shoe pressure regulator holding back two large winch drums on either side of the loader. Cables from the drums extend to the rear of the bag, where a large mesh basket holds the rear end of the bag shut. To prevent mold growth and to ensure an airtight seal during fermentation, the ends of the silo bag tube are gathered, folded, and tied shut to prevent oxygen from entering the bag. Removal of the bag loader can be hazardous to bystanders, since the pressure must be released and the rear end allowed to collapse onto the ground. Tower unloading A silo unloader is a special cylindrical rotating forage pickup device used inside a tower silo. The main operating component of the silo unloader is suspended in the silo from a steel cable on a pulley that is mounted in the top-center of the roof of the silo. The vertical position of the unloader is controlled by an electric winch on the exterior of the silo. For the summer filling of a tower silo, the unloader is winched as high as possible to the top of the silo and put into a parking position. The silo is filled with a silo blower, a very large fan that blows a large volume of pressurized air up a 10-inch tube on the side of the silo. A small amount of water is introduced into the air stream during filling to help lubricate the filling tube. A small adjustable nozzle at the top, controlled by a handle at the base of the silo, directs the silage to fall into the silo on the near, middle, or far side, to facilitate evenly layered loading. Once the silo is completely filled, the top of the exposed silage pile is covered with a large, heavy sheet of silo plastic, which seals out oxygen and permits the entire pile to begin to ferment in the autumn. In the winter, when animals must be kept indoors, the silo plastic is removed, the unloader is lowered down onto the top of the silage pile, and a hinged door is opened on the side of the silo to permit the silage to be blown out. There is an array of these access doors arranged vertically up the side of the silo, with an unloading tube next to the doors that has a series of removable covers down its side. The unloader tube and access doors are normally covered with a large U-shaped shield mounted on the silo, to protect the farmer from wind, snow, and rain while working on the silo. The silo unloader mechanism consists of a pair of counter-rotating toothed augers which rip up the surface of the silage and pull it towards the center of the unloader. The toothed augers rotate in a circle around the center hub, evenly chewing the silage off the surface of the pile. In the center, a large blower assembly picks up the silage and blows it out the silo door, where the silage falls by gravity down the unloader tube to the bottom of the silo, typically into an automated conveyor system. The unloader is typically lowered only a half-inch or so at a time by the operator, and it picks up only a small amount of material before the winch cable becomes taut again and no more material is collected.
The operator then lowers the unloader another half-inch or so and the process repeats. If lowered too far, the unloader can pull up much more material than it can handle, which can overflow and plug up the blower, outlet spout, and unloader tube, resulting in a time-wasting process of climbing up the silo to clear the blockages. Once silage has entered the conveyor system, it can be handled by either manual or automatic distribution systems. The simplest manual distribution system uses a sliding metal platform under the pickup channel. When slid open, the forage drops through the open hole and down a chute into a wagon, wheelbarrow, or open pile. When closed, the forage continues past the opening and onward to other parts of the conveyor. Computer automation and a conveyor running the length of a feeding stall can permit the silage to be automatically dropped from above to each animal, with the amount dispensed customized for each location. Safety Silos are hazardous, and people are killed or injured every year in the process of filling and maintaining them. The machinery used is dangerous, and workers can fall from a tower silo's ladder or work platform. Several fires have occurred over the years. Dangers of loading process Filling a silo requires parking two tractors very close to each other, both running at full power and with live PTO shafts, one powering the silo blower and the other powering a forage wagon unloading fresh-cut forage into the blower. The farmer must continually move around in this highly hazardous environment of spinning shafts and high-speed conveyors to check material flows and adjust speeds, and to start and stop all the equipment between loads. Preparation for filling a silo requires winching the unloader to the top, and any remaining forage at the base that the unloader could not pick up must be removed from the floor of the silo. This job requires the farmer to work directly underneath a machine weighing several tons, suspended fifty feet or more overhead from a small steel cable. Should the unloader fall, the farmer would likely be killed instantly. Dangers of unloading process Unloading also poses its own special hazards, due to the requirement that the farmer regularly climb the silo to close an upper door and open a lower door, moving the unloader chute from door to door in the process. The fermentation of the silage produces methane gas, which over time will outgas and displace the oxygen in the top of the silo. A farmer entering a silo without any precautions can be asphyxiated by the gas, knocked unconscious, and silently suffocate before anyone else knows what has happened. Before anyone attempts to enter the silo, it must be ventilated with fresh air, either by leaving the silo blower attached to the silo at all times and running it when needed, or by a dedicated electric fan system that blows fresh air into the silo. In the event that the unloader mechanism becomes plugged, the farmer must climb the silo and stand directly on the unloader, reaching into the blower spout to dig out the soft silage. After clearing a plug, the forage needs to be forked out into an even layer around the unloader so that the unloader does not immediately dig into the pile and plug itself again. Throughout this process the farmer is standing on or near a machine that could easily kill them in seconds if it were to accidentally start up.
This could happen if someone in the barn were to unknowingly switch on the unloading mechanism while someone is in the silo working on the unloader. Often, when unloading grain from an auger or other opening at the bottom of the silo, another worker will be atop the grain "walking it down", to ensure an even flow of grain out of the silo. Sometimes unstable pockets in the grain will collapse beneath the worker doing the walking; this is called grain entrapment, as the worker can be completely buried in the grain within seconds. Entrapment can also occur in moving grain, or when workers clear large clumps of grain that have become stuck on the side of the silo. This often results in death by suffocation. Dry-material/bin hazards There have also been many cases of bins and the associated ducts and buildings exploding. If the air inside becomes laden with finely granulated particles, such as grain dust, a spark can trigger a dust explosion powerful enough to blow a concrete silo and adjacent buildings apart, usually setting the adjacent grain and building on fire. Sparks are often caused by metal rubbing against metal ducts, or by static electricity produced by dust moving along the ducts when conditions are very dry. The two main problems that necessitate silo cleaning in bins are 'bridging' and 'rat-holing'. Bridging occurs when the material interlaces over the unloading mechanism at the base of the bins and blocks the flow of stored material by gravity into the unloading system. Rat-holing occurs when the material starts to adhere to the side of the bin. This will reduce the operating capacity of a bin as well as lead to cross-contamination of newer material with older material. There are a number of ways to clean a bin, and many of these carry their own risks. However, since the early 1990s acoustic cleaners have become available. These are non-invasive, have minimal risk, and can offer a very cost-effective way to keep a small particle bin clean. Notable silos Henninger Turm, Frankfurt, Germany, before demolition in 2013, had an observation deck and two revolving restaurants, height: 120 metres Swissmill Tower, Zürich, Switzerland, height: 118 metres, the world's tallest silo still in operation. Schapfen-Mill-Tower, Ulm, Germany, height: 115 metres Silo Tower Basel, Basel, Switzerland, has an observation deck, height: 52 metres Quaker Square, Akron, Ohio, United States, is a former set of tower silos that is now a hotel, restaurants and shops Dagon, Haifa, Israel - transformed into a museum of agriculture, a prominent local feature. Silo art "Silo art" is a recent and distinctly Australian art movement involving silos being decorated with huge mural-type paintings covering a wide range of themes. The first silo to be decorated was in Northam, Western Australia, in 2015. The number of examples increased rapidly; the Australian Silo Art Trail came to encompass more than 60 sites. In 2017, the Yarriambiack Shire Council in Victoria sought to trademark the term "silo art trail". Grain-handling company GrainCorp, which had supported 14 silo art projects, opposed the move, saying that the term should not "be owned by anyone, but [be] freely used by the community". IP Australia subsequently upheld the opposition. Old water towers have also been decorated in many regional centres.
In Melbourne, a huge painting of New Zealand Prime Minister Jacinda Ardern embracing a Muslim woman, an image beamed around the world after the 2019 Christchurch mosque attacks, was painted on the Tinning Street silo in the suburb of Brunswick, after funds were raised in a day via crowdfunding. The town of Monto in the North Burnett Region of Queensland has been put on the tourism map as the most northerly silo art installation in Australia. Its "Three Moons" silos depict several stories of the past, including the era of gold mining, cattle mustering and The Dreaming. It also has a mural on an old water tower. Silo cleaning Silo cleaning is a process to maximize the efficiency of storage silos that hold bulk powders or granules. In silos, material is fed through the top and removed from the bottom. Typical silo applications include animal feed, industrial powders, cement, and pharmaceuticals. Free movement of stored materials, on a first-in, first-out basis, is essential in maximizing silo efficiency. The goal of silo efficiency is to ensure that the oldest material is used first and does not contaminate newer, fresher material. There are two major complications in silo efficiency: rat holing and bridging. Rat holing occurs when powders adhere to the sides of silos. Bridging occurs when material blocks the outlet at the silo base. Manual cleaning is the simplest way to clean silos. This entails lowering a worker on a rope to free material inside the silo. Manual cleaning is dangerous due to the release of material and the possible presence of gases. In cases of bridging, an additional danger exists as the exit hole needs to be rodded from underneath, exposing the worker to falling powder. Alternative cleaning methods include: Air blasters are a well-established cleaning method. Air cannons are expensive, however, as limited coverage requires purchase of multiple units. Air cannons are also intrusively noisy and consume large amounts of compressed air. Vibrators are easy to fit into empty silos, but can cause structural damage and contribute to powder compaction. Low friction linings are quiet, but expensive to install and prone to erosion, which can then contaminate the environment or product. Inflatable pads and liners are easy to install in empty silos and can help with side-wall buildup but have no effect on bridging. Inflatable pads and liners are also hard to maintain and can cause compaction. Fluidisation through a one-way membrane can help with compacted material, but such systems are expensive and difficult to install and maintain. These systems can also contribute to mechanical interlocking and bridging. Acoustic cleaners are the latest and possibly safest way to clean silos, as these systems are non-invasive and do not require silos to be emptied. Acoustic cleaning is also a preventative solution. Pneumatic or hydraulic whip machines are portable machines used to "cut" build-up on the walls of silos while being remotely operated from outside of the vessel. Silo cleaning companies provide turnkey silo cleaning services using several different methods, depending on the company.
Technology
Buildings and infrastructure
null
2361675
https://en.wikipedia.org/wiki/New%20Croton%20Dam
New Croton Dam
The New Croton Dam (also known as Cornell Dam) is a dam forming the New Croton Reservoir, both parts of the New York City water supply system. It stretches across the Croton River near Croton-on-Hudson, New York, about north of New York City. Construction began in 1892 and was completed in 1906. Designed by Alphonse Fteley (1837–1903), the masonry dam is broad at its base and high from base to crest. At the time of its completion, it was the tallest dam in the world. It impounds up to of water, a small fraction of the New York City water system's total storage capacity of . History Background The original Croton Dam (Old Croton Dam) was built between 1837 and 1842 to improve New York City's water supply. By 1881, after extensive repairs to the dam, which was high, the Old Croton Reservoir was able to supply about a day to the city via the Old Croton Aqueduct. To meet escalating water needs, the Aqueduct Commission of the City of New York ordered construction of a new Croton system in 1885. Hydraulic engineer James B. Francis was brought in as a consultant for the construction. The proposed dam and reservoir were to cover of land occupied by public and private buildings, six cemeteries, and more than 400 farms. Condemnation disputes led to "protests, lawsuits, and confusion" before payment of claims and the awarding of construction contracts. The work force on the new dam included stonemasons and laborers who had worked on the original dam. John B. Goldsborough, superintendent of excavations and hiring for the project, also recruited stonemasons from southern Italy, who relocated to New York. Construction Construction began in 1892 and was completed in 1906. Building the dam meant diverting the river from its normal path and pumping the riverbed dry. To accomplish this, workers dug a crescent-shaped canal long and wide in the hill on the north side of the river, secured the canal with a masonry retaining wall, and built temporary dams to control the water flow. The initial construction lasted eight years, and extensive modifications and repairs went on for another six. Working conditions were often difficult. A silent film, The Croton Dam Strike, released in 1900, depicted labor–management problems related to the dam's construction. Designed by Alphonse Fteley (1837–1903), the masonry dam is broad at its base and high from base to crest. At the time of its completion, it was the tallest dam in the world. Its foundation extends below the bed of the river, and the dam contains of masonry. The engineers' tablet mounted on the headhouse nearest the spillway lists the spillway length as and the total length of the dam and spillway combined as . New Croton Dam impounds up to of water, a small fraction of the New York City water system's total storage capacity of . Work began in 1892 at a site on the property of A.B. Cornell downstream of the original dam, which was submerged by the new reservoir. New Croton Reservoir was eventually able to supply a day via a new aqueduct that carried water to Jerome Park Reservoir in the north Bronx, New York City. Repair The bridge over the spillway was replaced in 1975 and again in 2005. Also in 2005, in the wake of the September 11 attacks on New York City, the New York City Department of Environmental Protection proposed permanent closure of the road across the top of the dam. Pedestrians and emergency vehicles were allowed to use New Croton Dam Road, but all other traffic was re-routed.
The department made plans to replace temporary vehicle barriers with permanent barriers after completion of the New Croton Dam Rehabilitation Project in 2011. Discharge Data The U.S. Geological Survey provides average daily discharge data for the Croton Dam. The record discharge at the Croton Dam since records began in 1933 was 33,000 cubic feet per second (cfs) on 16 October 1955, following the successive hurricanes Connie and Diane. Trails Croton Gorge Park offers views of the dam from directly downstream. The Old Croton Trail, a popular hiking and biking path that roughly follows the route of the Old Croton Aqueduct, has an endpoint near the base of the dam. Teatown Lake Reservation, a nature preserve, lies nearby, as does Croton Point Park in Croton-on-Hudson.
Technology
Dams
null
2362507
https://en.wikipedia.org/wiki/Uranium%E2%80%93lead%20dating
Uranium–lead dating
Uranium–lead dating, abbreviated U–Pb dating, is one of the oldest and most refined of the radiometric dating schemes. It can be used to date rocks that formed and crystallised from about 1 million years to over 4.5 billion years ago with routine precisions in the 0.1–1 percent range. The method is usually applied to zircon. This mineral incorporates uranium and thorium atoms into its crystal structure, but strongly rejects lead when forming. As a result, newly formed zircon crystals will contain no lead, meaning that any lead found in the mineral is radiogenic. Since the exact rate at which uranium decays into lead is known, the current ratio of lead to uranium in a sample of the mineral can be used to reliably determine its age. The method relies on two separate decay chains, the uranium series from 238U to 206Pb, with a half-life of 4.47 billion years, and the actinium series from 235U to 207Pb, with a half-life of 710 million years. Decay routes Uranium decays to lead via a series of alpha and beta decays, in which 238U and its daughter nuclides undergo a total of eight alpha and six beta decays, whereas 235U and its daughters only experience seven alpha and four beta decays. The existence of two 'parallel' uranium–lead decay routes (238U to 206Pb and 235U to 207Pb) leads to multiple feasible dating techniques within the overall U–Pb system. The term U–Pb dating normally implies the coupled use of both decay schemes in the 'concordia diagram' (see below). However, use of a single decay scheme (usually 238U to 206Pb) leads to the U–Pb isochron dating method, analogous to the rubidium–strontium dating method. Finally, ages can also be determined from the U–Pb system by analysis of Pb isotope ratios alone. This is termed the lead–lead dating method. Clair Cameron Patterson, an American geochemist who pioneered studies of uranium–lead radiometric dating methods, used it to obtain one of the earliest estimates of the age of the Earth in 1956, arriving at 4.550 Gyr ± 70 Myr, a figure that has remained largely unchallenged since. Mineralogy Although zircon (ZrSiO4) is most commonly used, other minerals such as monazite (see: monazite geochronology), titanite, and baddeleyite can also be used. Where crystals such as zircon with uranium and thorium inclusions cannot be obtained, uranium–lead dating techniques have also been applied to other minerals such as calcite/aragonite and other carbonate minerals. These types of minerals often produce lower-precision ages than igneous and metamorphic minerals traditionally used for age dating, but are more commonly available in the geologic record. Mechanism During the alpha decay steps, the zircon crystal experiences radiation damage, associated with each alpha decay. This damage is most concentrated around the parent isotope (U and Th), expelling the daughter isotope (Pb) from its original position in the zircon lattice. In areas with a high concentration of the parent isotope, damage to the crystal lattice is quite extensive, and will often interconnect to form a network of radiation-damaged areas. Fission tracks and micro-cracks within the crystal will further extend this radiation damage network. These fission tracks act as conduits deep within the crystal, providing a method of transport to facilitate the leaching of lead isotopes from the zircon crystal. Computation Under conditions where no lead loss or gain from the outside environment has occurred, the age of the zircon can be calculated by assuming exponential decay of uranium.
That is \(N_{\mathrm{now}} = N_{\mathrm{orig}}\,e^{-\lambda t}\), where \(N_{\mathrm{now}}\) is the number of uranium atoms measured now; \(N_{\mathrm{orig}}\) is the number of uranium atoms originally present, equal to the sum of uranium and lead atoms measured now; \(\lambda\) is the decay constant of uranium; and \(t\) is the age of the zircon, which one wants to determine. This gives \(N_{\mathrm{now}} = (N_{\mathrm{now}} + N_{\mathrm{Pb}})\,e^{-\lambda t}\), which can be written as \(N_{\mathrm{Pb}}/N_{\mathrm{now}} = e^{\lambda t} - 1\). The two more commonly used decay chains of uranium to lead give the following equations: \(^{206}\mathrm{Pb}^{*}/^{238}\mathrm{U} = e^{\lambda_{238}t} - 1\) and \(^{207}\mathrm{Pb}^{*}/^{235}\mathrm{U} = e^{\lambda_{235}t} - 1\). (The notation \(\mathrm{Pb}^{*}\), sometimes used in this context, refers to radiogenic lead. For zircon, the original lead content can be assumed to be zero, and the notation can be ignored.) These are said to yield concordant ages (t from each of the two equations). It is these concordant ages, plotted over a series of time intervals, that result in the concordia line. Loss (leakage) of lead from the sample will result in a discrepancy in the ages determined by each decay scheme. This effect is referred to as discordance and is demonstrated in Figure 1. If a series of zircon samples has lost different amounts of lead, the samples generate a discordant line. The upper intercept of the concordia and the discordia line will reflect the original age of formation, while the lower intercept will reflect the age of the event that led to open-system behavior and therefore the lead loss; although there has been some disagreement regarding the meaning of the lower intercept ages. Undamaged zircon retains the lead generated by radioactive decay of uranium and thorium up to very high temperatures (about 900 °C), though accumulated radiation damage within zones of very high uranium can lower this temperature substantially. Zircon is very chemically inert and resistant to mechanical weathering – a mixed blessing for geochronologists, as zones or even whole crystals can survive melting of their parent rock with their original uranium–lead age intact. Thus, zircon crystals with prolonged and complicated histories can contain zones of dramatically different ages (usually with the oldest zone forming the core, and the youngest zone forming the rim of the crystal), and so are said to demonstrate "inherited characteristics". Unraveling such complexities (which can also exist within other minerals, depending on their maximum lead-retention temperature) generally requires in situ micro-beam analysis using, for example, an ion microprobe (SIMS) or laser ICP-MS.
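The age calculation above is straightforward to carry out numerically. The following is a minimal sketch in Python, assuming the standard decay constants for 238U and 235U; the measured ratios are hypothetical example values, not data from any particular sample.

```python
import math

# Decay constants in 1/year (commonly used Jaffey et al. 1971 values)
LAMBDA_238 = 1.55125e-10  # 238U -> 206Pb
LAMBDA_235 = 9.8485e-10   # 235U -> 207Pb

def age_from_ratio(daughter_parent_ratio, decay_constant):
    """Invert D/P = exp(lambda * t) - 1 to get the age t in years."""
    return math.log(1.0 + daughter_parent_ratio) / decay_constant

# Hypothetical measured radiogenic ratios for a single zircon
t_206_238 = age_from_ratio(0.5, LAMBDA_238)   # from 206Pb/238U
t_207_235 = age_from_ratio(10.5, LAMBDA_235)  # from 207Pb/235U

print(f"206Pb/238U age: {t_206_238 / 1e6:.0f} Myr")
print(f"207Pb/235U age: {t_207_235 / 1e6:.0f} Myr")
# If the two ages agree, they are concordant; a mismatch, as in this
# example, indicates open-system behavior such as lead loss.
```

If the two ages agree within error, the analysis plots on the concordia curve; discordant pairs like the one in this sketch fall on a discordia line whose intercepts are interpreted as described above.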
Physical sciences
Geochronology
Earth science
2363228
https://en.wikipedia.org/wiki/Guitarfish
Guitarfish
The guitarfish, also referred to as shovelnose rays, are a family, Rhinobatidae, of rays. The guitarfish are known for an elongated body with a flattened head and trunk and small, ray-like wings. The combined range of the various species is tropical, subtropical, and warm temperate waters worldwide. Names In Australia and New Zealand, guitarfish are commonly referred to as shovelnose rays or shovelnose sharks. Description Guitarfish have a body form intermediate between those of sharks and rays. The tail has a typical shark-like form, but in many species, the head has a triangular, or guitar-like shape, rather than the disc-shape formed by fusion with the pectoral fins found in other rays. Reproduction Guitarfish can be ovoviviparous; the embryo matures inside an egg within the mother until it is ready to hatch. This is typical of rays. Habitat Guitarfish are bottom feeders that bury themselves in mud or sand and eat worms, crabs, and clams. Some can tolerate salt, fresh, and brackish water. They generally live close to the beach/coastline or in estuaries. Evolution Rays, including guitarfish, belong to the ancient lineage of cartilaginous fishes. Fossil denticles (tooth-like scales in the skin) resembling those of today's chondrichthyans date at least as far back as the Ordovician, with the oldest unambiguous fossils of cartilaginous fish dating from the middle Devonian. A clade within this diverse group, the Neoselachii, emerged by the Triassic, with the best-understood neoselachian fossils dating from the Jurassic. This clade is represented today by sharks, sawfish, rays and skates. Classification There are a number of issues in the taxonomy of Rhinobatidae, and many fish that were once in this family have been moved to their own families. Nelson's 2006 Fishes of the World recognized four genera in this family: Aptychotrema, Rhinobatos, Trygonorrhina, and Zapteryx. Of these, Aptychotrema, Trygonorrhina, and Zapteryx have been reclassified in the family Trygonorrhinidae. Several other taxa once placed in the Rhinobatidae, such as Platyrhinoidis and Rhina, have also been moved to their own families. Recently, the genus Glaucostegus has again become recognized as distinct from Rhinobatos, and now comprises its own family, Glaucostegidae. Rhinobatos has been split into three genera based on genetic and morphological considerations: Rhinobatos, Acroteriobatus and Pseudobatos. Tarsistes is dubious and may be a synonym of Pseudobatos, and other genera formerly included in Rhinobatidae have been moved to Glaucostegidae, Rhinidae and Trygonorrhinidae. A 2021 re-evaluation of almost complete and articulated material from the Konservat-Lagerstätten of Bolca in Italy suggested that †"Rhinobatos" dezignii and †"Rhinobatos" primaevus should be excluded from Rhinobatos and assigned to the new genera †Pseudorhinobatos and †Eorhinobatos, respectively. Genus Acroteriobatus Giltay, 1928 Acroteriobatus andysabini (2021) (Malagasy blue-spotted guitarfish) Acroteriobatus annulatus (J. P. Müller & Henle, 1841) (Lesser guitarfish) Acroteriobatus blochii (J. P. Müller & Henle, 1841) (Bluntnose guitarfish) Acroteriobatus leucospilus (Norman, 1926) (Grayspotted guitarfish) Acroteriobatus ocellatus (Norman, 1926) (Speckled guitarfish) Acroteriobatus omanensis Last, Henderson & Naylor, 2016 (Oman guitarfish) Acroteriobatus salalah (J. E.
Randall & Compagno, 1995) (Salalah guitarfish) Acroteriobatus stehmanni (2021) (Socotra blue-spotted guitarfish) Acroteriobatus variegatus (Nair & Lal Mohan, 1973) (Stripenose guitarfish) Acroteriobatus zanzibarensis (Norman, 1926) (Zanzibar guitarfish) Genus †Eorhinobatos Marramà et al., 2021 †Eorhinobatos primaevus (De Zigno, 1874) Genus Pseudobatos Last, Séret, and Naylor, 2016 Pseudobatos buthi K.M. Rutledge, 2019 (Spadenose guitarfish) Pseudobatos glaucostigmus (D. S. Jordan & C. H. Gilbert, 1883) (Speckled guitarfish) Pseudobatos horkelii (J. P. Müller & Henle, 1841) (Brazilian guitarfish) Pseudobatos lentiginosus (Garman, 1880) (Atlantic guitarfish) Pseudobatos leucorhynchus (Günther, 1867) (Whitesnout guitarfish) Pseudobatos percellens (Walbaum, 1792) (Chola guitarfish) Pseudobatos planiceps (Garman, 1880) (Pacific guitarfish) Pseudobatos prahli (Acero P & Franke, 1995) (Gorgona guitarfish) Pseudobatos productus (Ayres, 1854) (Shovelnose guitarfish) Genus †Pseudorhinobatos Marramà et al., 2021 †Pseudorhinobatos dezignii (Heckel, 1853) Genus Rhinobatos H. F. Linck, 1790 Rhinobatos albomaculatus Norman, 1930 (white-spotted guitarfish) Rhinobatos annandalei Norman, 1926 (Annandale's guitarfish) Rhinobatos borneensis Last, Séret & Naylor, 2016 (Borneo guitarfish) Rhinobatos holcorhynchus Norman, 1922 (slender guitarfish) Rhinobatos hynnicephalus J. Richardson, 1846 (Ringstreaked guitarfish) Rhinobatos irvinei Norman, 1931 (spineback guitarfish) Rhinobatos jimbaranensis Last, W. T. White & Fahmi, 2006 (Jimbaran shovelnose ray) Rhinobatos lionotus Norman, 1926 (smoothback guitarfish) Rhinobatos nudidorsalis Last, Compagno & Nakaya, 2004 (Bareback shovelnose ray) Rhinobatos penggali Last, W. T. White & Fahmi, 2006 (Indonesian shovelnose ray) Rhinobatos punctifer Compagno & Randall, 1987 (spotted guitarfish) Rhinobatos rhinobatos Linnaeus, 1758 (common guitarfish) Rhinobatos sainsburyi Last, 2004 (goldeneye shovelnose ray) Rhinobatos schlegelii J. P. Müller & Henle, 1841 (brown guitarfish) Rhinobatos whitei Last, Corrigan & Naylor, 2014 (Philippine guitarfish) Genus †Myledaphus Cope, 1876 †Myledaphus araucanus Otero, 2019 †Myledaphus bipartitus Cope, 1876
Biology and health sciences
Batoidea
Animals
2366340
https://en.wikipedia.org/wiki/Fishing%20weir
Fishing weir
A fishing weir, fish weir, fishgarth or kiddle is an obstruction placed in tidal waters, or wholly or partially across a river, to direct the passage of, or to trap, fish. A weir may be used to trap marine fish in the intertidal zone as the tide recedes, fish such as salmon as they attempt to swim upstream to breed in a river, or eels as they migrate downstream. Alternatively, fish weirs can be used to channel fish to a particular location, such as to a fish ladder. Weirs were traditionally built from wood or stones. The use of fishing weirs as fish traps probably dates back to before the emergence of modern humans, and weirs have since been used by many societies around the world. In the Philippines, specific indigenous fishing weirs (a version of the ancient Austronesian stone fish weirs) are also known in English as fish corral and barrier net. Etymology The English word 'weir' comes from the Anglo-Saxon wer, one meaning of which is a device to trap fish. Fishing weirs by region Africa A line of stones dating to the Acheulean in Kenya may have been a stone tidal weir in a prehistoric lake, which if true would make this technology older than modern humans. Americas North America In September 2014, researchers from the University of Victoria investigated what may turn out to be a 14,000-year-old fish weir submerged off the coast of Haida Gwaii, British Columbia. In Virginia, the Native Americans built V-shaped stone weirs in the Potomac River and James River. These were described in 1705 in The History and Present State of Virginia, In Four Parts by Robert Beverley Jr. This practice was taken up by the early settlers, but the Maryland General Assembly ordered the weirs to be destroyed on the Potomac in 1768. Between 1768 and 1828, considerable efforts were made to destroy fish weirs that obstructed navigation, and from the mid-1800s, those assumed to be detrimental to sport fishing. In the Back Bay area of Boston, Massachusetts, wooden stake remains of the Boylston Street Fishweir have been documented during excavations for subway tunnels and building foundations. The Boylston Street Fishweir was actually a series of fish weirs built and maintained near the tidal shoreline between 3,700 and 5,200 years ago. Natives in Nova Scotia use weirs that stretch across the entire river to retain shad during their seasonal runs up the Shubenacadie, Nine Mile, and Stewiacke rivers, and use nets to scoop the trapped fish. Various weir patterns, still used today, were developed on tidal waters to retain a variety of species. V-shaped weirs with circular formations to hold the fish during high tides are used on the Bay of Fundy to fish herring, which follow the flow of water. Similar V-shaped weirs are also used in British Columbia to corral salmon to the end of the "V" during the changing of the tides. The Cree of the Hudson Bay Lowlands used weirs consisting of a fence of poles and a trap across fast-flowing rivers. The fish were channelled by the poles up a ramp and into a box-like structure made of poles lashed together. The top of the ramp remained below the surface of the water but slightly above the top of the box so that the flow of the water and the overhang of the ramp stopped the fish from escaping from the box. The fish were then scooped out of the box with a dip net. South America A large series of fish weirs, canals and artificial islands was built by an unknown pre-Columbian culture in the Baures region of Bolivia, part of the Llanos de Moxos.
These earthworks cover an extensive area and appear to have supported a large and dense population around 3000 BCE. Stone fish weirs were in use 6,000 years ago in Chiloé Island off the coast of Chile. Asia and Oceania Tidal stone fish weirs are one of the ancestral fishing technologies of the seafaring Austronesian peoples. They are found throughout regions settled by Austronesians during the Austronesian expansion and are very similar in shape and construction throughout. In some regions they have also been adapted into fish pens or built with more perishable materials like bamboo, brushwood, and netting. They are found in the highest concentrations in Penghu Island in Taiwan, the Philippines, and all throughout Micronesia. They are also prevalent in eastern Indonesia, Melanesia, and Polynesia. Around 500 stone weirs survive in Taiwan, and millions of stone weirs used to exist through all of the islands of Micronesia. They are known by distinct local names in the Visayas Islands of the Philippines, in Chuuk, in Yap, in Hawaii, and in New Zealand, among other places. The oldest known example of a stone fish weir in Taiwan was constructed by the indigenous Taokas people in Miaoli County. Most stone fish weirs are believed to also be ancient, but few studies have been conducted into their antiquity, as their ages are difficult to determine because the weirs have been continually rebuilt in the same location. The technology of tidal stone fish weirs also spread to neighboring regions as Taiwan came under the jurisdiction of China and Imperial Japan in recent centuries. They are known by various local names in Kyushu, the Ryukyu Islands, South Korea (particularly Jeju Island), and Taiwan. The Han Chinese also had separate ancient fish weir techniques, which use bamboo gates or "curtains" in river estuaries. These date back to at least the 7th century in China. Europe In medieval Europe, large fishing weir structures were constructed from wood posts and wattle fences. V-shaped structures in rivers could be very long, and worked by directing fish towards fish traps or nets. Such weirs were frequently the cause of disputes between various classes of river users and tenants of neighbouring land. Basket weir fish traps are shown in medieval illustrations and surviving examples have been found. Basket weirs comprise two wicker cones, one inside the other—easy for fish to get into but difficult to escape. Great Britain In Great Britain the traditional form was one or more rock weirs constructed in tidal races or on a sandy beach, with a small gap that could be blocked by wattle fences when the tide turned to flow out again. Wales Surviving examples, no longer in use, can be seen in the Menai Strait, with the best-preserved examples to be found at Ynys Gored Goch (Red Weir Island) dating back to around 1842. Also surviving are 'goredi' (originally twelve in number) on the beach at Aberarth, Ceredigion. Another ancient example was at Rhos Fynach in North Wales, which survived in use until World War I. The medieval fish weir at Traeth Lligwy, Moelfre, Anglesey, was scheduled as an Ancient Monument in 2002. England Fish weirs were an obstacle to shipping and a threat to fish stocks, for which reasons over the course of history several attempts were made to control their proliferation.
The Magna Carta of 1215 includes a clause embodying the barons' demands for the removal of the king's weirs and others. A statute was passed during the reign of King Edward III (1327–1377) and was reaffirmed by King Edward IV in 1472. A further regulation was enacted under King Henry VIII, apparently at the instigation of Thomas Cromwell, when in 1535 commissioners were appointed in each county to oversee the "putting-down" of weirs. The words of the commission were as follows: All weirs noisome to the passage of ships or boats to the hurt of passages or ways and causeys (i.e. causeways) shall be pulled down and those that be occasion of drowning of any lands or pastures by stopping of waters and also those that are the destruction of the increase of fish, by the discretion of the commissioners, so that if any of the before-mentioned depend or may grow by reason of the same weir then there is no redemption but to pull them down, although the same weirs have stood since 500 years before the Conquest. The king did not exempt himself from the regulation, and by the destruction of royal weirs lost 500 marks in annual income. The Lisle Papers provide a detailed contemporary narrative of the struggle of the owners of the weir at Umberleigh in Devon to be exempted from this 1535 regulation. The Salmon Fishery Act 1861 (24 & 25 Vict. c. 109) (relevant provisions re-enacted since) bans their use except wherever their almost continuous use can be traced to before the Magna Carta (1215). Ireland In Ireland, discoveries of fish traps associated with weirs have been dated to 8,000 years ago. Stone tidal weirs were used around the world; by 1707, 160 such structures, some of which reached 360 metres in length, were in use along the coast of the Shimabara Peninsula of Japan.
Technology
Hunting and fishing
null
2366752
https://en.wikipedia.org/wiki/High-fructose%20corn%20syrup
High-fructose corn syrup
High-fructose corn syrup (HFCS), also known as glucose–fructose, isoglucose and glucose–fructose syrup, is a sweetener made from corn starch. As in the production of conventional corn syrup, the starch is broken down into glucose by enzymes. To make HFCS, the corn syrup is further processed by D-xylose isomerase to convert some of its glucose into fructose. HFCS was first marketed in the early 1970s by the Clinton Corn Processing Company, together with the Japanese Agency of Industrial Science and Technology, where the enzyme was discovered in 1965. As a sweetener, HFCS is often compared to granulated sugar, but its chief manufacturing advantage over sugar is lower cost. "HFCS 42" and "HFCS 55" refer to dry weight fructose compositions of 42% and 55% respectively, the rest being glucose. HFCS 42 is mainly used for processed foods and breakfast cereals, whereas HFCS 55 is used mostly for production of soft drinks. The United States Food and Drug Administration (FDA) states that it is not aware of evidence showing that HFCS is less safe than traditional sweeteners such as sucrose and honey. Uses and exports of HFCS from American producers have grown steadily during the early 21st century. Food In the United States, HFCS is among the sweeteners that have mostly replaced sucrose (table sugar) in the food industry. Factors contributing to the increased use of HFCS in food manufacturing include production quotas of domestic sugar, import tariffs on foreign sugar, and subsidies of U.S. corn, which raise the price of sucrose and reduce that of HFCS, creating a manufacturing-cost advantage for HFCS in sweetener applications. In spite of having a 10% greater fructose content, the relative sweetness of HFCS 55, used most commonly in soft drinks, is comparable to that of sucrose. HFCS provides advantages in food and beverage manufacturing, such as simplicity of formulation, stability, and enabling processing efficiencies. HFCS (or standard corn syrup) is the primary ingredient in most brands of commercial "pancake syrup," as a less expensive substitute for maple syrup. Assays to detect adulteration of sweetened products, such as liquid honey, with HFCS use differential scanning calorimetry and other advanced testing methods. Production Process In the contemporary process, corn is milled to extract corn starch and an "acid-enzyme" process is used, in which the corn-starch solution is acidified to begin breaking up the existing carbohydrates. High-temperature enzymes are added to further metabolize the starch and convert the resulting sugars to fructose. The first enzyme added is alpha-amylase, which breaks the long chains down into shorter sugar chains (oligosaccharides). Glucoamylase is mixed in and converts them to glucose. The resulting solution is filtered to remove protein, then purified using activated carbon. Then the solution is demineralized using ion-exchange resins. That purified solution is then run over immobilized xylose isomerase, which converts the sugars to about 50–52% glucose (with some unconverted oligosaccharides) and 42% fructose (HFCS 42); the product is again demineralized and again purified using activated carbon. Some is processed into HFCS 90 by liquid chromatography, and then mixed with HFCS 42 to form HFCS 55. The enzymes used in the process are made by microbial fermentation. Composition and varieties HFCS is 24% water, the rest being mainly fructose and glucose with 0–5% unprocessed glucose oligomers.
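The blending step described above (mixing HFCS 90 into HFCS 42 to reach HFCS 55) is simple mixture arithmetic. Here is a minimal sketch in Python, assuming an idealized linear blend by dry-weight fructose percentage; the function name and figures are illustrative, not an industrial recipe.

```python
def hfcs90_fraction(target=55.0, low=42.0, high=90.0):
    """Dry-weight fraction x of HFCS 90 needed so that
    high * x + low * (1 - x) == target."""
    return (target - low) / (high - low)

x = hfcs90_fraction()
print(f"HFCS 90 share: {x:.1%}, HFCS 42 share: {1 - x:.1%}")
# -> roughly 27% HFCS 90 blended with 73% HFCS 42 yields HFCS 55
```

Solving the same balance by hand gives 13/48, or about 27%, HFCS 90.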
The most common forms of HFCS used for food and beverage manufacturing contain fructose in either 42% ("HFCS 42") or 55% ("HFCS 55") by dry weight, as described in the U.S. Code of Federal Regulations (21 CFR 184.1866). HFCS 42 (approximately 42% fructose on a dry-weight basis) is used in beverages, processed foods, cereals, and baked goods. HFCS 55 is mostly used in soft drinks. HFCS 70 is used in jelly fillings. Commerce and consumption The global market for HFCS is expected to grow from $5.9 billion in 2019 to a projected $7.6 billion in 2024. China HFCS in China makes up about 20% of sweetener demand. HFCS has gained popularity due to rising prices of sucrose, while selling for a third the price. Production was estimated to reach 4,150,000 tonnes in 2017. About half of total produced HFCS is exported to the Philippines, Indonesia, Vietnam, and India. European Union In the European Union (EU), HFCS is known as isoglucose or glucose–fructose syrup (GFS), which has 20–30% fructose content compared to 42% (HFCS 42) and 55% (HFCS 55) in the United States. While HFCS is produced exclusively with corn in the U.S., manufacturers in the EU use corn and wheat to produce GFS. GFS was once subject to a sugar production quota, which was abolished on 1 October 2017, removing the previous production cap of 720,000 tonnes and allowing production and export without restriction. Use of GFS in soft drinks is limited in the EU because manufacturers do not have a sufficient supply of GFS containing at least 42% fructose content. As a result, soft drinks are primarily sweetened by sucrose, which has a 50% fructose content. Japan In Japan, HFCS is also referred to as 異性化糖 (iseika-to; isomerized sugar). HFCS production arose in Japan after government policies created a rise in the price of sugar. Japanese HFCS is manufactured mostly from imported U.S. corn, and the output is regulated by the government. For the period from 2007 to 2012, HFCS had a 27–30% share of the Japanese sweetener market. Japan consumed approximately 800,000 tonnes of HFCS in 2016. The United States Department of Agriculture states that HFCS is produced in Japan from U.S. corn. Japan imports corn for HFCS production at a level of around 3 million tonnes per year, accounting for roughly 20 percent of its corn imports. Mexico Mexico is the largest importer of U.S. HFCS. HFCS accounts for about 27 percent of total sweetener consumption, with Mexico importing 983,069 tonnes of HFCS in 2018. Mexico's soft drink industry is shifting from sugar to HFCS, which is expected to boost U.S. HFCS exports to Mexico, according to a U.S. Department of Agriculture Foreign Agricultural Service report. On 1 January 2002, Mexico imposed a 20% beverage tax on soft drinks and syrups not sweetened with cane sugar. The United States challenged the tax, appealing to the World Trade Organization (WTO). On 3 March 2006, the WTO ruled in favor of the U.S., citing the tax as discriminatory against U.S. imports of HFCS and not justified under WTO rules. Philippines The Philippines was the largest importer of Chinese HFCS. Imports of HFCS peaked at 373,137 tonnes in 2016. Complaints from domestic sugar producers resulted in a crackdown on Chinese exports. On 1 January 2018, the Philippine government imposed a tax of 12 pesos ($0.24) on drinks sweetened with HFCS versus 6 pesos ($0.12) for drinks sweetened with other sugars.
United States In the United States, HFCS was widely used in food manufacturing from the 1970s through the early 21st century, primarily as a replacement for sucrose because its sweetness was similar to sucrose, it improved manufacturing quality, was easier to use, and was cheaper. Domestic production of HFCS increased from 2.2 million tons in 1980 to a peak of 9.5 million tons in 1999. Although HFCS use is about the same as sucrose use in the United States, more than 90% of the sweetener used in global manufacturing is sucrose. Production of HFCS in the United States was 8.3 million tons in 2017. HFCS is easier to handle than granulated sucrose, although some sucrose is transported as solution. Unlike sucrose, HFCS cannot be hydrolyzed, but the free fructose in HFCS may produce hydroxymethylfurfural when stored at high temperatures; these differences are most prominent in acidic beverages. Soft drink makers such as Coca-Cola and Pepsi continue to use sugar in other nations but transitioned to HFCS for U.S. markets in 1980 before completely switching over in 1984. Large corporations, such as Archer Daniels Midland, lobby for the continuation of government corn subsidies. Consumption of HFCS in the U.S. has declined since its per-person peak in 1999. By 2018, the average American consumed less HFCS than refined cane and beet sugar. This decrease in domestic consumption of HFCS resulted in a push to export the product. In 2014, exports of HFCS were valued at $436 million, a decrease of 21% in one year, with Mexico receiving about 75% of the export volume. In 2010, the Corn Refiners Association petitioned the FDA to call HFCS "corn sugar," but the petition was denied. Vietnam 90% of Vietnam's HFCS imports come from China and South Korea. Imports totalled 89,343 tonnes in 2017. One tonne of HFCS was priced at $398 in 2017, while one tonne of sugar cost $702. HFCS has a zero-cent import tax and no quota, while sugarcane under quota has a 5% tax, and white and raw sugar not under quota have an 85% and 80% tax, respectively. In 2018, the Vietnam Sugarcane and Sugar Association (VSSA) called for government intervention on current tax policies. According to the VSSA, sugar companies face tighter lending policies, which leave the association's member companies at increased risk of bankruptcy. Health Nutrition HFCS is 76% carbohydrates and 24% water, containing no fat, protein, or micronutrients in significant amounts. In a 100-gram reference amount, it supplies 281 calories, while in one tablespoon of 19 grams, it supplies 53 calories. Obesity and metabolic syndrome The role of fructose in metabolic syndrome has been the subject of controversy, but there is no scientific consensus that fructose or HFCS has any impact on cardiometabolic markers when substituted for sucrose. A 2014 systematic review found little evidence for an association between HFCS consumption and liver diseases, enzyme levels or fat content. A 2018 review found that lowering consumption of sugary beverages and fructose products may reduce hepatic fat accumulation, which is associated with non-alcoholic fatty liver disease. In 2018, the American Heart Association recommended that people limit total added sugar (including maltose, sucrose, high-fructose corn syrup, molasses, cane sugar, corn sweetener, raw sugar, syrup, honey, or fruit juice concentrates) in their diets to nine teaspoons per day for men and six for women.
Safety and manufacturing concerns Since 2014, the United States FDA has determined that HFCS is safe (GRAS) as an ingredient for food and beverage manufacturing, and there is no evidence that retail HFCS products differ in safety from those containing alternative nutritive sweeteners. The 2010 Dietary Guidelines for Americans recommended that added sugars should be limited in the diet. One consumer concern about HFCS is that processing of corn is more complex than used for common sugar sources, such as fruit juice concentrates or agave nectar, but all sweetener products derived from raw materials involve similar processing steps of pulping, hydrolysis, enzyme treatment, and filtration, among other common steps of sweetener manufacturing from natural sources. In the contemporary process to make HFCS, an "acid-enzyme" step is used in which the corn starch solution is acidified to digest the existing carbohydrates, then enzymes are added to further metabolize the corn starch and convert the resulting sugars to their constituents of fructose and glucose. Analyses published in 2014 showed that HFCS content of fructose was consistent across samples from 80 randomly selected carbonated beverages sweetened with HFCS. One prior concern in manufacturing was whether HFCS contains reactive carbonyl compounds or advanced glycation end-products evolved during processing. This concern was dismissed, however, with evidence that HFCS poses no dietary risk from these compounds. As late as 2004, some factories manufacturing HFCS used a chlor-alkali corn processing method which, in cases of applying mercury cell technology for digesting corn raw material, left trace residues of mercury in some batches of HFCS. In a 2009 release, The Corn Refiners Association stated that all factories in the American industry for manufacturing HFCS had used mercury-free processing over several previous years, making the prior report outdated. Other Taste difference Most countries, including Mexico, use sucrose, or table sugar, in soft drinks. In the U.S., soft drinks, such as Coca-Cola, are typically made with HFCS 55. HFCS has a sweeter taste than sucrose. Some Americans seek out drinks such as Mexican Coca-Cola in ethnic groceries because they prefer the taste over that of HFCS-sweetened Coca-Cola. Kosher Coca-Cola, sold in the U.S. around the Jewish holiday of Passover, also uses sucrose rather than HFCS. Beekeeping In apiculture in the United States, HFCS is a honey substitute for some managed honey bee colonies during times when nectar is in low supply. However, when HFCS is heated to about , hydroxymethylfurfural, which is toxic to bees, can form from the breakdown of fructose. Although some researchers cite honey substitution with HFCS as one factor among many for colony collapse disorder, there is no evidence that HFCS is the only cause. Compared to hive honey, both HFCS and sucrose caused signs of malnutrition in bees fed with them, apparent in the expression of genes involved in protein metabolism and other processes affecting honey bee health. Public relations There are various public relations concerns with HFCS, including how HFCS products are advertised and labeled as "natural." As a consequence, several companies reverted to manufacturing with sucrose (table sugar) from products that had previously been made with HFCS. In 2010, the Corn Refiners Association applied to allow HFCS to be renamed "corn sugar," but that petition was rejected by the FDA in 2012. 
In August 2016, in a move to please consumers with health concerns, McDonald's announced that it would be replacing all HFCS in their buns with sucrose (table sugar) and would remove preservatives and other artificial additives from its menu items. Marion Gross, senior vice president of McDonald's stated, "We know that they [consumers] don't feel good about high-fructose corn syrup so we're giving them what they're looking for instead." Over the early 21st century, other companies such as Yoplait, Gatorade, and Hershey's also phased out HFCS, replacing it with conventional sugar because consumers perceived sugar to be healthier. Companies such as PepsiCo and Heinz have also released products that use sugar in lieu of HFCS, although they still sell HFCS-sweetened products. History Commercial production of HFCS began in 1964. In the late 1950s, scientists at Clinton Corn Processing Company of Clinton, Iowa, tried to turn glucose from corn starch into fructose, but the process they used was not scalable. In 1965–1970, Yoshiyuki Takasaki, at the Japanese National Institute of Advanced Industrial Science and Technology developed a heat-stable xylose isomerase enzyme from yeast. In 1967, the Clinton Corn Processing Company obtained an exclusive license to manufacture glucose isomerase derived from Streptomyces bacteria and began shipping an early version of HFCS in February 1967. In 1983, the FDA accepted HFCS as "generally recognized as safe," and that decision was reaffirmed in 1996. Prior to the development of the worldwide sugar industry, dietary fructose was limited to only a few items. Milk, meats, and most vegetables, the staples of many early diets, have no fructose, and only 5–10% fructose by weight is found in fruits such as grapes, apples, and blueberries. Most traditional dried fruits, however, contain about 50% fructose. From 1970 to 2000, there was a 25% increase in "added sugars" in the U.S. When recognized as a cheaper, more versatile sweetener, HFCS replaced sucrose as the main sweetener of soft drinks in the United States. Since 1789, the U.S. sugar industry has had trade protection in the form of tariffs on foreign-produced sugar, while subsidies to corn growers cheapen the primary ingredient in HFCS, corn. Accordingly, industrial users looking for cheaper sugar replacements rapidly adopted HFCS in the 1970s.
Technology
Food, water and health
null
16743512
https://en.wikipedia.org/wiki/Fabric%20%28geology%29
Fabric (geology)
In geology, a rock's fabric describes the spatial and geometric configuration of all the elements that make it up. In sedimentary rocks, the fabric developed depends on the depositional environment and can provide information on current directions at the time of deposition. In structural geology, fabrics may provide information on both the orientation and magnitude of the strains that have affected a particular piece of deformed rock. Types of fabric Primary fabric — a fabric created during the original formation of the rock, e.g. a preferred orientation of clast long axes, parallel to the flow direction, in a conglomerate deposited by a fast, waning current. Shape fabric — a fabric that is defined by the preferred orientation of inequant elements within the rock, such as platy- or needle-like mineral grains. It may also be formed by the deformation of originally equant elements such as mineral grains. Crystallographic preferred orientation — in plastically deformed rocks, the constituent minerals commonly display a preferred orientation of their crystal axes as a result of dislocation processes. S-fabric — a planar fabric such as cleavage or foliation; when it forms the dominant fabric in a rock, it may be called an S-tectonite. L-fabric — a linear fabric such as a mineral stretching lineation, where aggregates of recrystallised grains are stretched out along the long axis of the finite strain ellipsoid; where it forms the dominant fabric in a rock, it may be called an L-tectonite. Penetrative fabric — a fabric that is present throughout the rock, generally down to the grain scale, although this also depends on the scale at which the observations take place. Magnetic fabric — the orientation of magnetic particles within a rock sample or in soils, used to determine paleomagnetic history or to quantify tectonic strain.
Physical sciences
Structural geology
Earth science
7647575
https://en.wikipedia.org/wiki/Five-hundred-meter%20Aperture%20Spherical%20Telescope
Five-hundred-meter Aperture Spherical Telescope
The Five-hundred-meter Aperture Spherical Telescope (FAST), nicknamed Tianyan (lit. "Sky's/Heaven's Eye"), is a radio telescope located in the Dawodang depression, a natural basin in Pingtang County, Guizhou, southwest China. FAST has a 500 m diameter dish constructed in a natural depression in the landscape. It is the world's largest filled-aperture radio telescope and the second-largest single-dish aperture, after the sparsely-filled RATAN-600 in Russia. It has a novel design, using an active surface made of 4,450 metal panels which form a moving parabola shape in real time. The cabin containing the feed antenna, suspended on cables above the dish, can move automatically by using winches to steer the instrument to receive signals from different directions. It observes at wavelengths of 10 cm to 4.3 m. Construction of FAST began in 2011. It observed first light in September 2016. After three years of testing and commissioning, it was declared fully operational on 11 January 2020. The telescope made its first discovery, of two new pulsars, in August 2017. The new pulsars PSR J1859-01 and PSR J1931-02, also referred to as FAST pulsar #1 and #2 (FP1 and FP2), were detected on 22 and 25 August 2017; they are 16,000 and 4,100 light years away, respectively. Parkes Observatory in Australia independently confirmed the discoveries on 10 September 2017. By September 2018, FAST had discovered 44 new pulsars, and by 2021, 500. History The telescope was first proposed in 1994. The project was approved by the National Development and Reform Commission (NDRC) in July 2007. A 65-person village was relocated from the valley to make room for the telescope, and an additional 9,110 people living within a five-kilometre radius of the telescope were relocated to create a radio-quiet area. The Chinese government drew on poverty relief funds and bank loans for the relocation of the local residents, while the construction of the telescope itself cost $180 million. On 26 December 2008, a foundation-laying ceremony was held on the construction site. Construction started in March 2011, and the last panel was installed on the morning of 3 July 2016. The final cost exceeded the original budget. Significant difficulties encountered were the site's remote location and poor road access, and the need to add shielding to suppress radio-frequency interference (RFI) from the primary mirror actuators. The actuators were redesigned to meet shielding efficiency requirements and their installation was completed in 2015. Interference from the actuators has not been detected since. Testing and commissioning began with first light on 25 September 2016. The first observations were made without the active primary reflector, configuring it in a fixed shape and using the Earth's rotation to scan the sky. Subsequent early science took place mainly at lower frequencies while the active surface was brought to its design accuracy; longer wavelengths are less sensitive to errors in reflector shape. It took three years to calibrate the various instruments so that the telescope could become fully operational. Local government efforts to develop a tourist industry around the telescope have caused some concern among astronomers worried about nearby mobile telephones acting as sources of RFI. A projected 10 million tourists in 2017 forced officials to weigh the scientific mission against the economic benefits of tourism.
The primary driving force behind the project was Nan Rendong, a researcher with the Chinese National Astronomical Observatory, part of the Chinese Academy of Sciences. He held the positions of chief scientist and chief engineer of the project. He died on 15 September 2017 in Boston due to lung cancer. On 14 June 2022, astronomers working with China's FAST telescope reported the possibility of having detected artificial (presumably alien) signals, but cautioned that further studies are required to determine if some kind of natural radio interference may be the source. More recently, on 18 June 2022, Dan Werthimer, chief scientist for several SETI-related projects, noted, "These signals are from radio interference; they are due to radio pollution from earthlings, not from E.T." Overview FAST has a reflecting surface 500 m in diameter located in a natural sinkhole in the karst rock landscape, focusing radio waves on a receiving antenna in a "feed cabin" suspended above it. The reflector is made of perforated aluminium panels supported by a mesh of steel cables hanging from the rim. FAST's surface is made of 4,450 triangular panels arranged in the form of a geodesic dome. There are 2,225 winches located underneath that make it an active surface, pulling on joints between panels and deforming the flexible steel-cable support into a parabolic antenna aligned with the desired sky direction. Above the reflector is a lightweight feed cabin moved by a cable robot using winch servomechanisms on six support towers. The receiving antennas are mounted below this on a Stewart platform, which provides fine position control and compensates for disturbances like wind motion. This produces a planned pointing precision of 8 arcseconds. The maximum zenith angle is 26.4 degrees when the full 300 m effective illuminated aperture is available without loss, and 40 degrees when the effective illuminated aperture is reduced to 200 m. Although the reflector is 500 m in diameter, only a circle 300 m in diameter, held in the correct parabolic shape and "illuminated" by the receiver, is useful at any one time. The telescope can be pointed to different positions on the sky by illuminating a 300-meter section of the 500-meter aperture. (FAST has a smaller effective aperture than the Jicamarca Radio Observatory, which has a filled aperture of equivalent diameter of 338 m.) Its working frequency ranges from 70 MHz to 3.0 GHz, with the upper limit set by the precision with which the primary can approximate a parabola. It could be improved slightly, but the size of the triangular segments limits the shortest wavelength which can be received. The original plan was to cover the frequency range with 9 receivers. During the construction phase, a commissioning ultra-wide-band receiver covering 260 MHz to 1620 MHz was proposed and built, which produced the first pulsar discovery from FAST. Currently, only the FAST L-band Receiver array of 19 beams (FLAN) is installed; it is operational between 1.05 GHz and 1.45 GHz. The Next Generation Archive System (NGAS), developed by the International Centre for Radio Astronomy Research (ICRAR) in Perth, Australia, and the European Southern Observatory, will store and maintain the large amount of data that the telescope collects. Within a five-kilometre zone around the telescope, tourists are forbidden from using mobile phones and other radio-emitting devices. An expansion has been planned to build an additional 24 radio dishes, each 40 meters in diameter, forming a radio-telescope array within a surrounding area 10 km in diameter.
The expansion is expected to boost the telescope's resolution by a factor of 30. Science mission The FAST website lists the following science objectives of the radio telescope: Large scale neutral hydrogen survey Pulsar observations Leading the international very long baseline interferometry (VLBI) network Detection of interstellar molecules Detecting interstellar communication signals (Search for extraterrestrial intelligence) Pulsar timing arrays The FAST telescope joined the Breakthrough Listen SETI project in October 2016 to search for intelligent extraterrestrial communications in the Universe. In February 2020, scientists announced the first SETI observations with the telescope. China's Global Times reported that the 500-meter (1,600-foot) FAST telescope would be open to the global scientific community starting in April 2021, when applications would be reviewed, becoming effective in August 2021. Foreign scientists are able to submit applications to China's National Astronomical Observatories online. Comparison with Arecibo telescope The basic design of FAST is similar to that of the former Arecibo Telescope. Both designs had reflectors installed in natural hollows within karst limestone, made of perforated aluminium panels with a movable receiver suspended above; and both have an effective aperture smaller than the physical size of the primary. There are, however, significant differences in addition to the size. First, Arecibo's dish was fixed in a spherical shape. Although it was also suspended from steel cables with supports underneath for fine-tuning the shape, these were manually operated and adjusted only during maintenance; two additional suspended reflectors in a Gregorian configuration corrected for the spherical aberration. Second, Arecibo's receiver platform was fixed in place. To support the greater weight of the additional reflectors, the primary support cables were static, with the only motorised portion being three hold-down winches which compensated for thermal expansion. The antennas could move along a rotating arm below the platform; Arecibo was not limited in azimuth, only in zenith angle: the smaller range of motion limited it to viewing objects within 19.7° of the zenith. Third, Arecibo could receive higher frequencies. The finite size of the triangular panels making up FAST's primary reflector limits the accuracy with which it can approximate a parabola, and thus the shortest wavelength it can focus. Arecibo's more rigid design allowed it to maintain sharp focus down to 3 cm wavelength (10 GHz); FAST is limited to 10 cm (3 GHz). Improvements in position control of the secondary might be able to push that to 6 cm (5 GHz), but then the primary reflector becomes a hard limit. Fourth, the FAST dish is significantly deeper, contributing to a wider field of view. Although 64% larger in diameter, FAST has a radius of curvature of 300 m, barely larger than Arecibo's 265 m, so it forms a 113° arc (vs. 70° for Arecibo). Although Arecibo's full 305 m aperture could be used when observing objects at the zenith, this was only possible with the line feed, which had a very narrow frequency range and had been unavailable due to damage since 2017. Most Arecibo observations used the Gregorian feeds, where the effective aperture at zenith was considerably smaller. Fifth, Arecibo's larger secondary platform also housed several transmitters, making it one of the few instruments in the world capable of radar astronomy.
(Planetary radar is also possible at the Jicamarca, Millstone, and Altair observatories.) The NASA-funded Planetary Radar System allowed Arecibo to study solid objects from Mercury to Saturn, and to perform very accurate orbit determination of near-Earth objects, particularly potentially hazardous objects. Arecibo also included several NSF-funded radars for ionospheric studies (ionosondes). Such powerful transmitters are too large and heavy for FAST's small receiver cabin, so it will not be able to participate in planetary defense, although in principle it could serve as a receiver in a bistatic radar system. (Arecibo was used in several multi-static experiments with an auxiliary 100-meter dish, including S-band radar experiments in the stratosphere and ISAR mapping of Venus.)
Deneb
Deneb is a first-magnitude blue supergiant star in the constellation of Cygnus. Deneb is one of the vertices of the asterism known as the Summer Triangle and the "head" of the Northern Cross. It is the brightest star in Cygnus and the 19th-brightest star in the night sky, with an average apparent magnitude of +1.25. Deneb rivals Rigel, a closer blue supergiant, as the most luminous first-magnitude star. However, its distance, and hence its luminosity, is poorly known; its luminosity is somewhere between 55,000 and 196,000 times that of the Sun. Its Bayer designation is α Cygni, which is Latinised to Alpha Cygni, abbreviated to Alpha Cyg or α Cyg. At a distance of 802 parsecs, it is the farthest star from Earth with a magnitude brighter than 2.50. Nomenclature α Cygni (Latinised to Alpha Cygni) is the star's designation given by Johann Bayer in 1603. The traditional name Deneb is derived from the Arabic word for "tail", from the phrase ذنب الدجاجة Dhanab al-Dajājah, or "tail of the hen". The IAU Working Group on Star Names has recognised the name Deneb for this star, and it is entered in their Catalog of Star Names. Denebadigege was used in the Alfonsine Tables; other variants include Deneb Adige, Denebedigege and Arided. This latter name was derived from Al Ridhādh, a name for the constellation. Johann Bayer called it Arrioph, derived from Aridf and Al Ridf, 'the hindmost', or Gallina. The German poet and author Philippus Caesius termed it Os rosae, or Rosemund in German, or Uropygium – the parson's nose. The names Arided and Aridif have fallen out of use. An older traditional name is Arided, from the Arabic ar-ridf 'the one sitting behind the rider' (or just 'the follower'), perhaps referring to the other major stars of Cygnus, which were called al-fawāris 'the riders'. Observation The 19th-brightest star in the night sky, Deneb culminates each year on October 23 at 6 PM and on September 7 at 9 PM, corresponding to late-summer and autumn evenings in the northern hemisphere. It never dips below the horizon at or above 45° north latitude, just grazing the northern horizon at its lowest point at such locations as Minneapolis, Montréal and Turin. In the southern hemisphere, Deneb is not visible south of the 45th parallel south, so it just barely rises above the horizon in South Africa, southern Australia, and northern New Zealand during the southern winter. Deneb is located at the tip of the Northern Cross asterism made up of the brightest stars in Cygnus, the others being Albireo (Beta Cygni), Gamma Cygni, Delta Cygni, and Epsilon Cygni. It also lies at one vertex of the prominent and widely spaced asterism called the Summer Triangle, shared with the first-magnitude stars Vega in the constellation Lyra and Altair in Aquila. This outline of stars is the approximate shape of a right triangle, with Deneb located at one of the acute angles. The spectrum of Alpha Cygni has been observed by astronomers since at least 1888, and by 1910 its variable radial velocity had become apparent. This led to the early suggestion by E. B. Frost that it was a binary star system. In 1935, the work of G. F. Paddock and others established that the star was variable in luminosity, with a dominant period of 11.7 days and possibly other, lower-amplitude periods. By 1954, closer examination of the star's calcium H and K lines showed a stationary core, which indicated that the variable velocity was instead being caused by motion of the star's atmosphere. This variation ranged from +6 to −9 km/s around the star's mean radial velocity.
Other, similar supergiants were found to have variable velocities, with this star being a typical member. Pole star Due to the Earth's axial precession, Deneb will be an approximate pole star (7° off the north celestial pole) at around 9800 AD. The north pole of Mars points to the midpoint of the line connecting Deneb and the star Alderamin. Physical characteristics Deneb's adopted distance from the Earth is around 802 parsecs (about 2,600 light-years), based on the distance to the Cygnus OB7 association. Another distance estimate, using the bolometric magnitude implied by its effective temperature and surface gravity, has also been made. The original derivation of a parallax from measurements by the astrometric satellite Hipparcos gave an uncertain result of 1.01 ± 0.57 mas that was consistent with this distance. However, a 2007 re-analysis gives a much larger parallax, implying a distance barely half the currently accepted value. The controversy over whether the direct Hipparcos measurements can be ignored in favour of a wide range of indirect stellar models and interstellar distance scales is similar to the better-known situation with the Pleiades. Deneb's absolute magnitude is estimated as −8.4, placing it among the visually brightest stars known, with an estimated luminosity of nearly 200,000 times that of the Sun; this is towards the upper end of values published over the past few decades. At the distance implied by the Hipparcos parallax, Deneb's luminosity would instead be near the lower end of the published range, around 55,000 times that of the Sun. Deneb is the most luminous first-magnitude star, that is, of the stars with an apparent magnitude brighter than 1.5. Deneb is also the most distant of the 30 brightest stars, by a factor of almost 2. Based on its temperature and luminosity, and also on direct measurements of its tiny angular diameter (a mere 0.002 seconds of arc), Deneb appears to have a diameter about 100 to 200 times that of the Sun; if placed at the center of the Solar System, Deneb would extend to the orbit of Mercury, or nearly to that of Earth. It is one of the largest white 'A' spectral type stars known. Deneb is a bluish-white star of spectral type A2Ia, classifying it as a blue supergiant, with a surface temperature of 8,500 kelvin. Since 1943, its spectrum has served as one of the stable references by which other stars are classified. Its mass is estimated at 19 times that of the Sun. Stellar winds cause matter to be lost at an average rate 100,000 times the Sun's rate of mass loss, equivalent to about one Earth mass per 500 years.
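These figures can be cross-checked against each other (a consistency check using only values quoted above plus standard constants; it is not taken from the cited sources). The distance modulus gives

$$M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) = 1.25 - 5\log_{10}(80.2) \approx -8.3,$$

in good agreement with the quoted absolute magnitude of −8.4, and the angular diameter combined with the same distance gives

$$D = \theta\,d \approx \frac{0.002''}{206{,}265''/\mathrm{rad}} \times 802\,\mathrm{pc} \approx 2.4\times10^{11}\,\mathrm{m} \approx 170\,D_{\odot},$$

which falls within the quoted range of 100 to 200 solar diameters.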
Evolutionary state Deneb spent much of its early life as an O-type main-sequence star, but it has now exhausted the hydrogen in its core and expanded to become a supergiant. Stars in the mass range of Deneb eventually expand to become the most luminous red supergiants, and within a few million years their cores will collapse, producing a supernova explosion. It is now known that red supergiants up to a certain mass explode as the commonly seen type II-P supernovae, but more massive ones lose their outer layers to become hotter again. Depending on their initial masses and the rate of mass loss, they may explode as yellow hypergiants or luminous blue variables, or they may become Wolf-Rayet stars before exploding in a type Ib or Ic supernova. Identifying whether Deneb is currently evolving towards a red supergiant or is currently evolving bluewards again would place valuable constraints on the classes of stars that explode as red supergiants and those that explode as hotter stars. Stars evolving red-wards for the first time are most likely fusing hydrogen in a shell around a helium core that has not yet grown hot enough to start fusion to carbon and oxygen. Convection has begun dredging up fusion products, but these do not reach the surface. Post-red-supergiant stars are expected to show those fusion products at the surface, due to stronger convection during the red supergiant phase and to loss of the obscuring outer layers of the star. Deneb is thought to be increasing its temperature after a period as a red supergiant, although current models do not exactly reproduce the surface elements shown in its spectrum. Alternatively, it is possible that Deneb has just left the main sequence and is evolving towards a red supergiant phase, which is in agreement with estimates of its current mass, while its spectral composition could be explained by Deneb having been a rapidly rotating star during its main-sequence phase. Variable star Deneb is the prototype of the Alpha Cygni (α Cygni) variable stars, whose small irregular amplitudes and rapid pulsations can cause its magnitude to vary anywhere between 1.21 and 1.29. Its variable velocity was discovered by Lee in 1910, but these stars were not formally recognised as a unique class of variables until the 1985 4th edition of the General Catalogue of Variable Stars. The cause of the pulsations of Alpha Cygni variable stars is not fully understood, but their irregular nature seems to be due to the beating of multiple pulsation periods. Analysis of radial velocities has identified 16 different harmonic pulsation modes, with periods ranging between 6.9 and 100.8 days. A longer period of about 800 days probably also exists. Possible spectroscopic companion Deneb has been reported as a possible single-line spectroscopic binary with a period of about 850 days, the spectral lines from the star suggesting cyclical radial-velocity changes. Later investigations have found no evidence supporting the existence of a companion. Etymology and cultural significance Names similar to Deneb have been given to at least seven different stars, most notably Deneb Kaitos, the brightest star in the constellation of Cetus; Deneb Algedi, the brightest star in Capricornus; and Denebola, the second-brightest star in Leo. All of these names refer to the tail of the animal that the respective constellation represents. In Chinese, a name meaning Celestial Ford refers to an asterism consisting of Deneb, Gamma Cygni, Delta Cygni, 30 Cygni, Nu Cygni, Tau Cygni, Upsilon Cygni, Zeta Cygni and Epsilon Cygni; the Chinese name for Deneb itself derives from this asterism. In the Chinese love story of Qi Xi, Deneb marks the magpie bridge across the Milky Way, which allows the separated lovers Niu Lang (Altair) and Zhi Nü (Vega) to be reunited on one special night of the year in late summer. In other versions of the story, Deneb is a fairy who acts as chaperone when the lovers meet. Namesakes USS Arided was a United States Navy Crater-class cargo ship named after the star. SS Deneb was an Italian merchant vessel that bore this name from 1951 until she was scrapped in 1966.
Wadi
Wadi (alternatively wād; Maghrebi Arabic oued) is the Arabic term traditionally referring to a river valley. In some instances, it may refer to a dry (ephemeral) riverbed that contains water only when heavy rain occurs. Arroyo (Spanish) is used in the Americas for similar landforms. Etymology The term is very widely found in Arabic toponyms. Some Spanish toponyms are derived from Andalusian Arabic, where wādī was used to mean a permanent river, for example: Guadalcanal from wādī al-qanāl ("river of refreshment stalls"), Guadalajara from wādī al-ḥijārah ("river of stones"), or Guadalquivir, from al-wādī al-kabīr ("the great river"). General morphology and processes Wadis are located on gently sloping, nearly flat parts of deserts; commonly they begin on the distal portions of alluvial fans and extend to inland sabkhas or dry lakes. In basin and range topography, wadis trend along basin axes at the terminus of fans. Permanent channels do not exist, owing to the lack of continual water flow. Wadis have braided stream patterns because of the deficiency of water and the abundance of sediment. Water percolates down into the stream bed, causing an abrupt loss of energy and resulting in vast deposition. Wadis may develop dams of sediment that change the stream patterns in the next flash flood. Wind also causes sediment deposition. When wadi sediments are underwater or moist, wind-blown sediments are deposited over them. Thus, wadi sediments contain both wind-laid and water-laid deposits. Sediments and sedimentary structures Wadi sediments may contain a range of material, from gravel to mud, and the sedimentary structures vary widely; wadi sediments are thus the most diverse of all desert environments. Flash floods involve severe energy conditions and can produce a wide range of sedimentary structures, including ripples and common plane beds. Gravels commonly display imbrication, and mud drapes show desiccation cracks. Wind activity also generates sedimentary structures, including large-scale cross-stratification and wedge-shaped cross-sets. A typical wadi sequence consists of alternating units of wind-laid and water-laid sediments. Sediment laid down by water shows a complete fining-upward sequence. Gravels show imbrication. Wind deposits are cross-stratified and covered with mud-cracked deposits. Some horizontal loess may also be present. Hydrological action Modern English usage differentiates wadis from canyons or washes by the action and prevalence of water. Wadis, as drainage courses, are formed by water, but are distinguished from river valleys or gullies in that surface water is intermittent or ephemeral. Wadis are generally dry year-round, except after a rain. The desert environment is characterized by sudden but infrequent heavy rainfall, often resulting in flash floods; crossing wadis at certain times of the year can be dangerous as a result. Wadis tend to be associated with centers of human population because sub-surface water is sometimes available in them. Nomadic and pastoral desert peoples rely on seasonal vegetation found in wadis, even in regions as dry as the Sahara, as they travel in complex transhumance routes. The centrality of wadis to water – and human life – in desert environments gave birth to the distinct sub-field of wadi hydrology in the 1990s. Deposits Deposition in a wadi is rapid because of the sudden loss of stream velocity and seepage of water into the porous sediment. Wadi deposits are thus usually mixed gravels and sands.
These sediments are often altered by eolian processes. Over time, wadi deposits may become "inverted wadis", where former underground water caused vegetation and sediment to fill in the eroded channel, turning previous washes into ridges running through desert regions.
Vostok programme
The Vostok programme (translated as "East") was a Soviet human spaceflight project to put the first Soviet cosmonauts into low Earth orbit and return them safely. Competing with the United States Project Mercury, it succeeded in placing the first human into space, Yuri Gagarin, in a single orbit in Vostok 1 on April 12, 1961. The Vostok capsule was developed from the Zenit spy satellite project, and its launch vehicle was adapted from the existing R-7 Semyorka intercontinental ballistic missile (ICBM) design. The name "Vostok" was treated as classified information until Gagarin's flight was first publicly disclosed to the world press. The programme carried out six crewed spaceflights between 1961 and 1963. The longest flight lasted nearly five days, and the last four were launched in pairs, one day apart. This exceeded Project Mercury's demonstrated capabilities: its longest flight lasted just over 34 hours, and all its missions were flown singly. Vostok was succeeded by two Voskhod programme flights in 1964 and 1965, which used three- and two-man modifications of the Vostok capsule and a larger launch rocket. Background The world's first artificial satellite, Sputnik 1, had been put into orbit by the Soviets in 1957. The next milestone in the history of space exploration would be to put a human in space, and both the Soviets and the Americans wanted to be the first. Cosmonaut selection and training By January 1959, the Soviets had begun preparations for human spaceflight. Physicians from the Soviet Air Force insisted that the potential cosmonaut candidates be qualified Air Force pilots, arguing that they would have relevant skills such as exposure to higher g-forces, as well as ejection-seat experience; the Americans, too, had chosen the Mercury Seven in April 1959, all of whom had aviation backgrounds. The candidates had to be intelligent, comfortable in high-stress situations, and physically fit. The chief designer of the Soviet space program, Sergei Korolev, decided that the cosmonauts must be male, between 25 and 30 years old, no taller than 1.75 meters, and weigh no more than 72 kilograms. The final specifications for cosmonauts were approved in June 1959. By September, interviews with potential cosmonauts had begun. Although the pilots were not told they might be flying into space, one of the physicians in charge of the selection process believed that some pilots had deduced this. Just over 200 candidates made it through the interview process, and by October a series of demanding physical tests was conducted on those remaining, such as exposure to low pressures and a centrifuge test. By the end of 1959, 20 men had been selected. Korolev insisted on having a larger group than NASA's astronaut team of seven. Of these 20, five were outside the desired age range; hence, the age requirement was relaxed. Unlike NASA's astronaut group, this group did not particularly consist of experienced pilots; Belyayev was the most experienced, with 900 flying hours. The Soviet spacecraft were more automated than their American counterparts, so significant piloting experience was not necessary. On January 11, 1960, Soviet Chief Marshal of Aviation Konstantin Vershinin approved plans to establish the Cosmonaut Training Center, whose exclusive purpose would be to prepare the cosmonauts for their upcoming flights; initially the facility would have about 250 staff. Vershinin assigned the already famous aviator Nikolai Kamanin to supervise operations at the facility.
By March, most of the cosmonauts had arrived at the training facility; Vershinin gave a welcome speech on March 7, and those who were present were formally inducted into the cosmonaut group. By mid-June all twenty were permanently stationed at the center. In March the cosmonauts were started on a daily fitness regime, and were taught classes on topics such as rocket space systems, navigation, geophysics, and astronomy. Owing to the initial facility's space limitations, the cosmonauts and staff were relocated to a new facility in Star City (then known as Zelenyy), which has been the home of Russia's cosmonaut training program for over 60 years. The move officially took place on June 29, 1960. Vanguard Six At the Gromov Flight Research Institute, a spacecraft simulator had been built, called the TDK-1. Owing to the inefficiency of training all 20 cosmonauts in the simulator, it was decided they would select six men who would go through accelerated training. This group, which would be known as The Vanguard Six, was decided on May 30, 1960, and initially consisted of Gagarin, Kartashov, Nikolayev, Popovich, Titov, and Varlamov. Alexei Leonov recalls that these six were the shortest of the group of 20. In July, shortly after relocation to Star City, two of the six were replaced on medical grounds. Firstly, during a centrifuge test of 8 g, Kartashov experienced some internal damage, causing minor hemorrhaging on his back. Despite Gagarin's requests for him to stay, the doctors decided to remove Kartashov from the group of six. Later in July, Varlamov was involved in a swimming accident. During a dive into a lake near the training center, he hit his head on the bottom, displacing a cervical vertebra. So by the end of July, the Vanguard Six were: Gagarin, Bykovskiy, Nelyubov, Nikolayev, Popovich, and Titov. By January 1961, these six had all finished parachute and recovery training, as well as three-day regimes in simulators. On January 17, the six participated in their final exams, including time spent in a simulator, and a written test. Based on these results, a commission, supervised by Kamanin, recommended the use of the cosmonauts in the following order: Gagarin, Titov, Nelyubov, Nikolayev, Bykovskiy, Popovich. At this stage Gagarin was the clear favorite to be the first man in space, not only based on the exams, but also among an informal peer evaluation. Missions Vostok 1, the first human spaceflight in April 1961, was preceded by several preparatory flights. In mid-1960, the Soviets learned that the Americans could launch a sub-orbital human spaceflight as early as January 1961. Korolev saw this as an important deadline, and was determined to launch a crewed orbital mission before the Americans launched their human suborbital mission. By April 1960, designers at Sergei Korolev's design bureau, then known as OKB-1, had completed a draft plan for the first Vostok spacecraft, called Vostok 1K. This design would be used for testing purposes; also in their plan was Vostok 2K, a spy satellite that would later become known as Zenit 2, and Vostok 3K, which would be used for all six crewed Vostok missions. Despite the very large geographical size of the Soviet Union, there were obvious limitations to monitoring orbital spaceflights from ground stations within the country. To remedy this, the Soviets stationed about seven naval vessels, or tracking ships, around the world. 
For each ground station or tracking ship, the duration of communications with an orbiting spacecraft was limited to between five and ten minutes. Korabl-Sputnik 1 The first Vostok spacecraft was a variant not designed to be recovered from orbit; the variant was also called Vostok 1KP (or 1P). At Korolev's suggestion, the media would call the spacecraft Korabl-Sputnik, ("Satellite-ship"); the name Vostok was still a secret codename at this point. This first Vostok spacecraft was successfully sent into orbit on May 15, 1960. Owing to a system malfunction, on the spacecraft's 64th orbit the thrusters fired and sent it into an even higher orbit. The orbit eventually decayed, and it re-entered the atmosphere several years later. Vostok 1K The next six launches were all of the Vostok 1K design, equipped with life-support facilities, and planned to be recovered after orbit. The first spacecraft launched on July 28, 1960 carried two space dogs named Chayka and Lisichka. An explosion destroyed the spacecraft shortly after launch, killing both dogs, and the mission was not given a name. The next mission, designated Korabl-Sputnik 2, was launched on August 19, 1960, carrying two more dogs, Belka and Strelka, as well as a variety of other biological specimens such as mice, insects, and strips of human skin. This mission was successful, and Belka and Strelka became the first living beings recovered from orbit. The spacecraft was only the second object ever to have been recovered from orbit, the first being the return capsule of the American Discoverer 13 the previous week. During the mission there was some concern for Belka and Strelka's health, after images of Belka vomiting had been obtained from the onboard cameras. The spacecraft and dogs were recovered following the 26-hour spaceflight, and extensive physiological tests revealed that the dogs were in good health. This represented a significant success for the Vostok programme. The success of Korabl-Sputnik 2 gave the designers confidence to put forward a plan leading to a human spaceflight. A document regarding a plan for the Vostok programme, dated September 10, 1960, and declassified in 1991, was sent to the Central Committee of the Communist Party, and approved by Premier Nikita Khrushchev. This document had been signed by the top leaders in the Soviet defence industry at the time, the most senior being Deputy Chairman Dmitriy Ustinov; this indicated the elevated importance of the document. The plan called for one or two more Vostok 1K flights, followed by two uncrewed Vostok 3K flights, followed by a crewed flight in December 1960. A major setback occurred on October 24, when a rocket explosion killed over 100 people, including Chief Marshal of Artillery Mitrofan Nedelin, in what is now called the Nedelin catastrophe. This was one of the worst disasters in the history of spaceflight. It involved a rocket that was not designed by Korolev, and was not necessary for the Vostok programme; the rocket was by rival designer Mikhail Yangel, intended to be a new generation of intercontinental ballistic missiles. It would be two weeks before work on the Vostok programme continued, and it was realised that the original target of a December crewed launch was unrealistic. On December 1, 1960, the next Vostok 1K spacecraft, called Korabl-Sputnik 3 by the press, was launched. It carried the two dogs Pchyolka and Mushka. After about 24 hours, the engines were intended to fire to begin re-entry, but they fired for less time than had been expected. 
This meant that the spacecraft would enter the atmosphere, but not over Soviet territory. For this reason the self-destruct system was activated, and the spacecraft and the two dogs were destroyed. At the time, the press reported that an incorrect altitude caused the cabin to be destroyed upon re-entry. The next Vostok 1K spacecraft was launched on December 22, 1960, but it was unnamed because it failed to reach orbit. It carried two dogs, named Kometa and Shutka. The third stage of the launch system malfunctioned, and the emergency escape system was activated. The spacecraft landed 3,500 kilometres downrange of the launch site. The resulting rescue operation took several days, in -40 °C conditions. After a few days, the dogs were both recovered alive, and the spacecraft was returned to Moscow a few weeks later. Despite Korolev's desire to announce this failure to the press, the State Commission vetoed the idea. Vostok 3KA The two uncrewed missions immediately preceding the first human flight used the same spacecraft design as in the crewed missions, a design called Vostok 3KA (or 3A). The only differences were that they would carry a single dog into orbit, a life-size mannequin would be strapped into the main ejection seat, and (unlike the crewed missions) they had a self-destruct system. The recent failures of Vostok 1K were not encouraging, but it was decided to proceed with launches of an automated variant of Vostok 3KA, the spacecraft design that would conduct a human spaceflight. The approval of a crewed mission was contingent upon the success of the two automated Vostok 3KA missions. Unlike the previous Vostok 1K flights, the two uncrewed Vostok 3KA flights were planned to last only a single orbit, to imitate the plan for the first human flight. The first of these uncrewed flights, Korabl-Sputnik 4, was launched on March 9, 1961. It carried the dog Chernushka into orbit, as well as a mannequin called Ivan Ivanovich, who wore a functioning SK-1 spacesuit. The dog was contained in a small pressurized sphere, which also contained 80 mice, several guinea pigs, and other biological specimens. Additional mice, guinea pigs, and other specimens were placed within the mannequin. After one orbit, the descent module successfully re-entered the atmosphere, the mannequin was safely ejected, and the dog and other specimens landed separately in the descent module by parachute. The spaceflight lasted 106 minutes, and the dog was recovered alive after landing. The mission was a complete success. On March 23, before the next mission, an accident occurred during training which led to the death of cosmonaut candidate Valentin Bondarenko. He was burned in a fire in an oxygen-rich isolation chamber, and died in a hospital eight hours after the incident. Bondarenko's death was the first known cosmonaut or astronaut fatality. It is not clear whether other cosmonauts were told of his death immediately; the media did not learn of Bondarenko's death – or even of his existence – until many years later, in 1986. Unsubstantiated reports of other cosmonaut deaths created the myth of the lost cosmonaut. The next uncrewed flight, Korabl-Sputnik 5, was launched on March 25, two days after Bondarenko's death. Like the previous Vostok 3KA flight, it lasted for only a single orbit, carried a mannequin and many animals, which included frogs, plants, mice, rats, and a dog, Zvezdochka ("Starlet", or "Little star"). 
This mission was also a complete success, and it was the final step required to gain approval for a crewed mission. The re-entry module of the Korabl-Sputnik 5 spacecraft, also called Vostok 3KA-2, was auctioned at Sotheby's on April 12, 2011, the 50th anniversary of the first human spaceflight, Vostok 1. Evgeny Yurchenko, a Russian investment banker, paid $2,882,500 for the capsule. Crewed flights Cancelled missions One different (1963) and seven original (planned through to April 1966) Vostok flights were originally planned:
Vostok 6A - pair to the Vostok 5 group flight with a female cosmonaut; instead fulfilled as the Vostok 6 flight
Vostok 7 - 8-day high-altitude flight for radiological-biological studies, with natural re-entry from orbit
Vostok 8 - pair to Vostok 9; 10-day group high-altitude flight for extended scientific studies, with natural re-entry from orbit
Vostok 9 - pair to Vostok 8; 10-day group high-altitude flight for extended scientific studies, with natural re-entry from orbit
Vostok 10 - 10-day high-altitude flight for extended scientific studies, with natural re-entry from orbit
Vostok 11 - supplemental flight for extra-vehicular activity tests
Vostok 12 - supplemental flight for extra-vehicular activity tests
Vostok 13 - 10-day high-altitude flight for extended scientific studies, with natural re-entry from orbit
All these original missions were cancelled in early 1964 and their components recycled into the Voskhod programme, which was intended to achieve more Soviet firsts in space.
Platinum group
The platinum-group metals (PGMs), also known as the platinoids, platinides, platidises, platinum group, platinum metals, platinum family or platinum-group elements (PGEs), are six noble, precious metallic elements clustered together in the periodic table. These elements are all transition metals in the d-block (groups 8, 9, and 10, periods 5 and 6). The six platinum-group metals are ruthenium, rhodium, palladium, osmium, iridium, and platinum. They have similar physical and chemical properties, and tend to occur together in the same mineral deposits. However, they can be further subdivided into the iridium-group platinum-group elements (IPGEs: Os, Ir, Ru) and the palladium-group platinum-group elements (PPGEs: Rh, Pt, Pd) based on their behaviour in geological systems. The three elements above the platinum group in the periodic table (iron, nickel and cobalt) are all ferromagnetic; these, together with the lanthanide element gadolinium (at temperatures below 20 °C), are the only known transition metals that display ferromagnetism near room temperature. History Naturally occurring platinum and platinum-rich alloys were known to pre-Columbian Americans for many years. However, even though the metal was used by pre-Columbian peoples, the first European reference to platinum appears in 1557 in the writings of the Italian humanist Julius Caesar Scaliger (1484–1558), as a description of a mysterious metal found in Central American mines between Darién (Panama) and Mexico ("up until now impossible to melt by any of the Spanish arts"). The name platinum is derived from the Spanish word platina ("little silver"), the name given to the metal by Spanish settlers in Colombia, who regarded platinum as an unwanted impurity in the silver they were mining. By 1815, rhodium and palladium had been discovered by William Hyde Wollaston, and iridium and osmium by his close friend and collaborator Smithson Tennant. Properties and uses The platinum metals have many useful catalytic properties. They are highly resistant to wear and tarnish, making platinum, in particular, well suited for fine jewellery. Other distinctive properties include resistance to chemical attack, excellent high-temperature characteristics, high mechanical strength, good ductility, and stable electrical properties. Apart from their application in jewellery, platinum metals are also used in anticancer drugs, industry, dentistry, electronics, and vehicle exhaust catalysts (VECs). VECs contain solid platinum (Pt), palladium (Pd), and rhodium (Rh) and are installed in the exhaust system of vehicles to reduce harmful emissions, such as carbon monoxide (CO), by converting them into less harmful ones. Occurrence Generally, ultramafic and mafic igneous rocks have relatively high PGE trace contents, and granites low ones. Geochemically anomalous traces occur predominantly in chromian spinels and sulfides. Mafic and ultramafic igneous rocks host practically all primary PGM ore of the world. Mafic layered intrusions, including the Bushveld Complex, outweigh by far all other geological settings of platinum deposits. Other economically significant PGE deposits include mafic intrusions related to flood basalts, and ultramafic complexes of the Alaskan-Uralian type. PGM minerals Typical ores for PGMs contain only about 10 g of PGM per ton of ore, so the identity of the particular host minerals is often unknown. Platinum Platinum can occur as a native metal, but it also occurs in various minerals and alloys.
That said, sperrylite (platinum arsenide, PtAs2) ore is by far the most significant source of this metal. A naturally occurring platinum-iridium alloy, platiniridium, is found in the mineral cooperite (platinum sulfide, PtS). Platinum in a native state, often accompanied by small amounts of other platinum metals, is found in alluvial and placer deposits in Colombia, Ontario, the Ural Mountains, and certain western American states. Platinum is also produced commercially as a by-product of nickel ore processing. The huge quantities of nickel ore processed make up for the fact that platinum constitutes only about two parts per million of the ore. South Africa, with vast platinum ore deposits in the Merensky Reef of the Bushveld complex, is the world's largest producer of platinum, followed by Russia. Platinum and palladium are also mined commercially from the Stillwater igneous complex in Montana, USA. The leaders in primary platinum production are South Africa and Russia, followed by Canada, Zimbabwe and the USA. Osmium Osmiridium is a naturally occurring alloy of iridium and osmium found in platinum-bearing river sands in the Ural Mountains and in North and South America. Trace amounts of osmium also exist in nickel-bearing ores found in the Sudbury, Ontario, region along with other platinum-group metals. Even though the quantity of platinum metals found in these ores is small, the large volume of nickel ores processed makes commercial recovery possible. Iridium Metallic iridium is found with platinum and other platinum-group metals in alluvial deposits. Naturally occurring iridium alloys include osmiridium and iridosmine, both of which are mixtures of iridium and osmium. It is recovered commercially as a by-product of nickel mining and processing. Ruthenium Ruthenium is generally found in ores with the other platinum-group metals in the Ural Mountains and in North and South America. Small but commercially important quantities are also found in pentlandite extracted from Sudbury, Ontario, and in pyroxenite deposits in South Africa. Rhodium The industrial extraction of rhodium is complex, because it occurs in ores mixed with other metals such as palladium, silver, platinum, and gold. It is found in platinum ores and obtained as a free, white, inert metal that is very difficult to fuse. The principal sources of this element are located in South Africa, Zimbabwe, the river sands of the Ural Mountains, North and South America, and the copper-nickel sulfide mining area of the Sudbury Basin region. Although the quantity at Sudbury is very small, the large amount of nickel ore processed makes rhodium recovery cost-effective. However, annual world production of this element was only 7 or 8 tons in 2003, and there are very few rhodium minerals. Palladium Palladium is preferentially hosted in sulphide minerals, primarily in pyrrhotite. Palladium is found as a free metal and alloyed with platinum and gold with platinum-group metals in placer deposits of the Ural Mountains of Eurasia, Australia, Ethiopia, and South and North America. However, it is commercially produced from nickel-copper deposits found in South Africa and Ontario, Canada. The huge volume of nickel-copper ore processed makes this extraction profitable in spite of palladium's low concentration in these ores. Production The production of the individual platinum-group metals normally starts from residues of the production of other metals, with a mixture of several of those metals.
Purification typically starts with the anode residues of gold, copper, or nickel production. This results in a very energy-intensive extraction process, which has environmental consequences. Carbon dioxide emissions are expected to rise as a result of increased demand for platinum metals, and mining activity in the Bushveld Igneous Complex is likely to expand because of this; further research is needed to determine the environmental impacts. Classical purification methods exploit differences in chemical reactivity and solubility of several compounds of the metals under extraction. These approaches have yielded to newer technologies that utilize solvent extraction. Separation begins with dissolution of the sample. If aqua regia is used, chloride complexes are produced. Depending on the details of the process, which are often trade secrets, the individual PGMs are obtained as the following compounds: the poorly soluble (NH4)2IrCl6 and (NH4)2PtCl6, PdCl2(NH3)2, the volatile OsO4 and RuO4, and [RhCl(NH3)5]Cl2. Production in nuclear reactors Significant quantities of the three light platinum-group metals – ruthenium, rhodium and palladium – are formed as fission products in nuclear reactors. With escalating prices and increasing global demand, reactor-produced noble metals are emerging as an alternative source. Various reports are available on the possibility of recovering fission noble metals from spent nuclear fuel. Environmental concerns It was previously thought that platinum-group metals had very few negative attributes, given their distinctive properties and their ability to successfully reduce harmful emissions from automobile exhausts. However, even with all the positives of platinum metal use, the negative effects need to be considered for how they might affect the future. For example, metallic Pt is considered chemically unreactive and non-allergenic, so when Pt is emitted from VECs in metallic and oxide forms it is considered relatively safe. However, Pt can solubilise in road dust, enter water sources and the ground, and increase dose rates in animals through bioaccumulation. These impacts were previously not considered; over time, however, the accumulation of platinum-group metals in the environment may actually pose more of a risk than previously thought. Future research is needed to fully grasp the threat of platinum metals, especially since the more internal-combustion cars are driven, the more platinum-metal emissions there are. The bioaccumulation of Pt metals in animals can pose a significant health risk to both humans and biodiversity. Species will tend to become more toxic if their food source is contaminated by the hazardous Pt metals emitted from VECs. This can potentially harm other species, including humans, if we eat these contaminated animals, such as fish. Platinum metals released during the mining and smelting process can also cause significant environmental impacts. In Zimbabwe, a study showed that platinum-group mining caused significant environmental risks, such as pollution of water sources, acidic water drainage, and environmental degradation. Another hazard of Pt is exposure to halogenated Pt salts, which can cause allergic reactions, seen in high rates of asthma and dermatitis. This hazard can sometimes be seen in the production of industrial catalysts, causing workers to have reactions.
Workers removed immediately from further contact with Pt salts showed no evidence of long-term effects; however, continued exposure could lead to health effects. Platinum use in drugs may also need to be re-evaluated, as some of the side effects of these drugs include nausea, hearing loss, and nephrotoxicity. Handling of these drugs by professionals, such as nurses, has also resulted in side effects including chromosome aberrations and hair loss. Therefore, the long-term effects of platinum drug use and exposure need to be evaluated and considered to determine whether these drugs are safe to use in medical care. While exposure to relatively low volumes of platinum-group metal emissions may not have any long-term health effects, there is considerable concern about how the accumulation of Pt metal emissions will impact the environment as well as human health. This is a threat that will need more research to determine safe levels of risk, as well as ways to mitigate potential hazards from platinum-group metals.
Induced demand
In economics, induced demand – related to latent demand and generated demand – is the phenomenon whereby an increase in supply results in a decline in price and an increase in consumption. In other words, as a good or service becomes more readily available and mass produced, its price goes down and consumers are more likely to buy it, meaning that the quantity demanded subsequently increases. This is consistent with the economic model of supply and demand. In transportation planning, induced demand, also called "induced traffic" or consumption of road capacity, has become important in the debate over the expansion of transportation systems, and is often used as an argument against increasing roadway traffic capacity as a cure for congestion. Induced traffic may be a contributing factor to urban sprawl. City planner Jeff Speck has called induced demand "the great intellectual black hole in city planning, the one professional certainty that every thoughtful person seems to acknowledge, yet almost no one is willing to act upon." The inverse effect, known as reduced demand, is also observed. Economics "Induced demand" and other terms were given economic definitions in a 1999 paper by Lee, Klein, and Camus. In the paper, "induced traffic" is defined as a change in traffic by movement along the short-run demand curve. This would include new trips made by existing residents, taken because driving on the road is now faster. Likewise, "induced demand" is defined as a change in traffic by movement along the long-run demand curve. This would include all trips made by new residents who moved to take advantage of the wider road. In transportation systems Definitions According to CityLab: Induced demand is a catch-all term used for a variety of interconnected effects that cause new roads to quickly fill to capacity. In rapidly growing areas where roads were not designed for the current population, there may be significant latent demand for new road capacity, which causes a flood of new drivers to immediately take to the freeway once the new lanes are open, quickly congesting them again. But these individuals were presumably already living nearby; how did they get around before the expansion? They may have taken alternative modes of transport, travelled at off-peak hours, or not made those trips at all. That’s why latent demand can be difficult to disentangle from generated demand—the new traffic that is a direct result of the new capacity. (Some researchers try to isolate generated demand as the sole effect of induced demand.) The technical distinction between the two terms, which are often used interchangeably, is that latent demand is travel that cannot be realised because of constraints. It is thus "pent-up". Induced demand is demand that has been realised, or "generated", by improvements made to transportation infrastructure. Thus, induced demand generates the traffic that had been "pent-up" as latent demand. History Latent demand has been recognised by road traffic professionals for many decades, and was initially referred to as "traffic generation". In the simplest terms, latent demand is demand that exists, but, for any number of reasons, most having to do with human psychology, is suppressed by the inability of the system to handle it. Once additional capacity is added to the network, the demand that had been latent materialises as actual usage. The effect was recognised as early as 1930, when an executive of a St. 
Louis, Missouri, electric railway company told the Transportation Survey Commission that widening streets simply produces more traffic, and heavier congestion. In New York, it was clearly seen in the highway-building program of Robert Moses, the "master builder" of the New York City area. As described by Moses's biographer, Robert Caro, in The Power Broker: During the last two or three years before [the entrance of the United States into World War II], a few planners had ... begun to understand that, without a balanced system [of transportation], roads would not only not alleviate transportation congestion but would aggravate it. Watching Moses open the Triborough Bridge to ease congestion on the Queensborough Bridge, open the Bronx-Whitestone Bridge to ease congestion on the Triborough Bridge and then watching traffic counts on all three bridges mount until all three were as congested as one had been before, planners could hardly avoid the conclusion that "traffic generation" was no longer a theory but a proven fact: the more highways were built to alleviate congestion, the more automobiles would pour into them and congest them and thus force the building of more highways – which would generate more traffic and become congested in their turn in an ever-widening spiral that contained far-reaching implications for the future of New York and of all urban areas. The same effect had been seen earlier with the new parkways that Moses had built on Long Island in the 1930s and 40s, where ... every time a new parkway was built, it quickly became jammed with traffic, but the load on the old parkways was not significantly relieved. Similarly, the building of the Brooklyn–Battery Tunnel failed to ease congestion on the Queens-Midtown Tunnel and the three East River bridges, as Moses had expected it to. By 1942, Moses could no longer ignore the reality that his roads were not alleviating congestion in the way he expected them to, but his answer to the problem was not to invest in mass transit; it was to build even more roads, in a vast program of new and expanded roads, including additional bridges such as the Throgs Neck Bridge and the Verrazano Narrows Bridge. J. J. Leeming, a British road-traffic engineer and county surveyor between 1924 and 1964, described the phenomenon in his 1969 book, Road Accidents: Prevent or Punish?: Motorways and bypasses generate traffic, that is, produce extra traffic, partly by inducing people to travel who would not otherwise have done so by making the new route more convenient than the old, partly by people who go out of their direct route to enjoy the greater convenience of the new road, and partly by people who use the towns bypassed because they are more convenient for shopping and visits when through traffic has been removed. Leeming went on to give an example of the observed effect following the opening of the Doncaster Bypass section of the A1(M) in 1961. By 1998, Donald Chen quoted the British Transport Minister as saying "The fact of the matter is that we cannot tackle our traffic problem by building more roads." In Southern California, a study by the Southern California Association of Governments in 1989 concluded that steps taken to alleviate traffic congestion, such as adding lanes or turning freeways into double-decked roads, would have nothing but a cosmetic effect on the problem.
Also, the University of California at Berkeley published a study of traffic in 30 California counties between 1973 and 1990 which showed that for every 10 percent increase in roadway capacity, traffic increased by 9 percent within four years' time. A 2004 meta-analysis, which took in dozens of previously published studies, confirmed this. It found that: ... on average, a 10 percent increase in lane miles induces an immediate 4 percent increase in vehicle miles travelled, which climbs to 10 percent – the entire new capacity – in a few years. An aphorism among some traffic engineers is "Trying to cure traffic congestion by adding more capacity is like trying to cure obesity by loosening your belt." According to city planner Jeff Speck, the "seminal" text on induced demand is the 1993 book The Elephant in the Bedroom: Automobile Dependence and Denial, written by Stanley I. Hart and Alvin L. Spivak. Price of road travel A journey on a road can be considered as having an associated cost or price (the generalised cost, g), which includes the out-of-pocket cost (e.g. fuel costs and tolls) and the opportunity cost of the time spent travelling, usually calculated as the product of travel time and the value of travellers' time. These cost determinants change often, and all have variable effects on demand for transport, which tends to depend on the reason for travel as well as the method. When road capacity is increased, initially there is more road space per vehicle than before, so congestion is reduced and the time spent travelling falls, reducing the generalised cost of every journey by lowering the second "cost" mentioned in the previous paragraph. Indeed, this reduction in journey times is one of the key justifications for construction of new road capacity. A change in the cost (or price) of travel results in a change in the quantity consumed; factors such as fuel prices are among the most common variables influencing the quantity of transport demanded. This can be explained using simple supply-and-demand theory. Elasticity of transport demand The economic concept of elasticity measures the change in quantity demanded relative to a change in another variable, most commonly price. For roads or highways, supply relates to capacity and the quantity consumed refers to vehicle miles travelled. The size of the increase in quantity consumed depends on the elasticity of demand. The elasticity of demand for transport differs significantly depending on why people are travelling in the first place. The clearest example of inelastic demand in this area is commuting: studies indicate that most people will commute to work regardless of fluctuations in variables such as petrol prices, as it is a required activity for generating income. This exemplifies the fact that activities yielding a high economic benefit, in this case income, tend to be inelastic. By contrast, travel for recreational or social reasons is much more sensitive to price, and demand for recreational travel declines sharply when prices spike. A review of transport research suggests that the elasticity of traffic demand with respect to travel time is around −0.5 in the short term and −1.0 in the long term. This indicates that a 1.0% saving in travel time will generate an additional 0.5% increase in traffic within the first year, while in the longer term a 1.0% saving in travel time will result in a 1.0% increase in traffic volume.
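These elasticities are straightforward to apply numerically. The minimal sketch below is illustrative only: the constant-elasticity form and the function name are assumptions made here, with the −0.5 and −1.0 values taken from the review cited above.

def induced_traffic_pct(time_saving_pct: float, elasticity: float) -> float:
    """Traffic growth (%) implied by a travel-time saving (%), assuming a
    constant-elasticity response: % change in traffic = -elasticity * % saving."""
    return -elasticity * time_saving_pct

SHORT_RUN, LONG_RUN = -0.5, -1.0   # elasticities w.r.t. travel time, per the review
saving = 1.0                        # a 1.0% reduction in travel time

print(f"first year: +{induced_traffic_pct(saving, SHORT_RUN):.1f}% traffic")
print(f"long run:   +{induced_traffic_pct(saving, LONG_RUN):.1f}% traffic")

Run as written, this reproduces the figures above: a 1.0% travel-time saving yields a 0.5% traffic increase in the first year and a 1.0% increase in the long run.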
Sources of induced traffic In the short term, increased travel on new road space can come from one of two sources: diverted travel and induced traffic. Diverted travel occurs when people divert their trip from another road (a change of route) or reschedule their travel to avoid peak-period congestion; if road capacity is expanded, peak congestion is lower and they can travel at the time they prefer. Induced traffic occurs when new automobile trips are generated: when people choose to travel by car instead of public transport, or decide to travel when they otherwise would not have. Shortened travel times can also encourage longer trips, as reduced travel costs encourage people to choose farther destinations. Although this may not increase the number of trips, it increases vehicle miles travelled. In the long term, this effect alters land-use patterns, as people choose homes and workplaces farther away than they would have without the expanded road capacity. These development patterns encourage automobile dependency, which contributes to the high long-term demand elasticities of road expansion. Induced traffic and transport planning Although planners take into account future traffic growth when planning new roads (this often being an apparently reasonable justification for new roads in itself, namely that traffic growth will mean more road capacity is required), this traffic growth is calculated from increases in car ownership and economic activity, and does not take into account traffic induced by the presence of the new road; that is, it is assumed that traffic will grow regardless of whether a road is built or not. In the UK, the idea of induced traffic was used as grounds for protests against government policy of road construction in the 1970s, 1980s and early 1990s, until it became accepted as a given by the government as a result of its own Standing Advisory Committee on Trunk Road Assessment (SACTRA) study of 1994. However, despite the concept of induced traffic now being accepted, it is not always taken into consideration in planning. Studies A 1998 meta-analysis by the Surface Transportation Policy Project, which used data from the Texas A&M Transportation Institute, stated that "Metro areas which invested heavily in road capacity expansion fared no better in easing congestion than metro areas that did not." On the other hand, a comparison of congestion data from 1982 to 2011 by the Texas A&M Transportation Institute suggested that additional roadways reduced the rate of congestion increase: when increases in road capacity were matched to the increased demand, growth in congestion was found to be lower. A study by Robert Cervero, a professor of City and Regional Planning at the University of California, Berkeley, found that "over a six- to eight-year period following freeway expansion, around twenty percent of added capacity is 'preserved,' and around eighty percent gets absorbed or depleted. Half of this absorption is due to external factors, like growing population and income. The other half is due to induced-demand effects, mostly higher speeds but also increased building activities. These represent California experiences from 1980 to 1994. Whether they hold true elsewhere is of course unknown."
Mokhtarian et al. (2002) paired eighteen California state highway segments whose capacities had been improved in the early 1970s with control segments that matched the improved segments with regard to facility type, region, approximate size, and initial volumes and congestion levels. Taking annual data for average daily traffic (ADT) and design-hour-traffic-to-capacity (DTC) ratios over the 21 years 1976–1996, they found the growth rates of the two types of segments to be "statistically and practically indistinguishable, suggesting that the capacity expansions, in and of themselves, had a negligible effect on traffic growth". Policy implications When induced traffic demand is evaluated theoretically, consideration is mainly given to the amount of traffic that will arise from a certain scenario. In real-world applications, policymakers must weigh the benefits of new infrastructure against its potential negative impacts on the environment, public health, and social equity. Carbon emissions have become a primary concern for policymakers in recent times and continue to be a consideration in infrastructure planning. An example of this is the proposed expansion of Heathrow Airport, where it was hoped that additional runways would spur economic growth within the UK by increasing both the number and the frequency of direct flights. The expansion proposals posed climate concerns and prompted studies into their environmental viability. The government estimated that the expansion plans would create 210.8 Mt (million tons) of CO2 annually. In addition, approximately 700 homes, a church, and eight listed buildings would have to be destroyed to make way for the project. In 2020, the Court of Appeal ruled the expansion plans unlawful owing to the ministers' failure to take into account the government's commitments on climate change. In contrast to such negative externalities, Bogotá, Colombia, has been recognized as a success story in managing induced demand for transportation by investing in new bike infrastructure. The city's first bike path was established in 1974, and heavy investment in the late 1990s resulted in over 300 kilometers of bike lanes and dedicated bike paths. This infrastructure has been credited with reducing traffic congestion by encouraging more people to bike for transport. Less traffic in turn leads to lower emissions, improved air quality and healthier lifestyles for residents. In addition, the city has implemented further policies such as a bike-sharing program, bike-friendly streets and education campaigns to promote biking as a healthy and sustainable mode of transportation. Criticism Critics of induced demand arguments generally accept their premise, but argue against their interpretation. Steven Polzin, former director of the Center for Urban Transportation Research and former Senior Advisor at the US Department of Transportation, argues that most forms of induced demand are actually good things and that, owing to changing transportation trends, past data cannot be applied to present circumstances. Specifically, he argues: One type of induced demand is simply keeping up with population growth. This is a good thing. Another is traffic moving out of neighborhoods and onto newly expanded freeways. This is a very good thing. Another is people adjusting the timing of trips to their desired timing, thus improving business efficiency and quality of life, both good things. Another is shifting transportation from non-auto transport to auto transport.
Polzin does not argue that this is good, but rather that it is irrelevant (at least in a US context), as non-auto transport is such a small fraction of the total that it cannot meaningfully induce demand anymore (unlike in the past). By contrast, going in reverse would require unprecedented growth rates in public transport systems even just to keep up with population growth. Another is people taking trips to places that they wouldn't have gone before, such as shopping in new places or living further from work. Beyond arguments that this implies improved quality of life, while this appears to have been a major driver of induced demand in the past, it ignores trends: from 1980 to 2015, increases in US road capacity did not even keep up with population growth, yet vehicle miles per capita doubled – a detachment between capacity growth and demand. But since the late 2000s, vehicle miles per capita have stagnated, and growing trends of telecommuting and e-commerce are likely to apply further downward pressure. That is, people do not drive further to shop or work if they are shopping or working from home either way. As personal road travel declines, commercial and service travel increases; this travel is not sensitive to road capacity and is not readily shifted to alternate modes of transportation. Rather than limiting demand by reducing road capacity, Polzin argues for limiting demand via highway pricing, such as managed lanes, toll highways, congestion pricing or cordon pricing, as this provides a revenue stream which can (among other things) subsidize public transportation. Similar arguments have also been made by libertarian transportation policy analyst Randal O'Toole, economist William L. Anderson, transportation journalist and Market Urbanist director Scott Beyer, Professor of City and Regional Planning Robert Cervero, studies such as those from WSP and Rand Europe, and numerous others. Film-induced demand Film-induced demand, also referred to as film-induced tourism, is a relatively recent form of cultural tourism in which destinations featured in media such as television and film receive an increase in tourist visits. This is supported by several regression analyses suggesting a high correlation between destinations taking a proactive approach to encouraging producers and studios to film at their location, and the tourism success of the area after a film's release. This is consistent with induced demand theory: when supply increases, in the form of media exposure for areas not previously regarded as tourist hotspots, the number of visitors increases, even though most of these new visitors would not otherwise have visited. This is exemplified by a Travelsat Competitive Index study indicating that in 2017 alone, approximately 80 million tourists chose to travel to a destination based primarily on its appearance in a television series or film – a figure that has doubled since 2015. Reduced demand (the inverse effect) Just as increasing road capacity reduces the cost of travel and thus increases demand, the reverse is also observed: decreasing road capacity increases the cost of travel, so demand is reduced. This observation, for which there is much empirical evidence, has been called disappearing traffic, traffic evaporation, traffic suppression, or, more generally, dissuaded demand. So the closure of a road or reduction in its capacity (e.g.
reducing the number of available lanes) will result in the adjustment of traveler behavior to compensate – for example, people might stop making particular trips to patronize local businesses, condense multiple trips into one, re-time their trips to a less congested time, use online shopping with free shipping, or switch to public transport, carpooling, walking, bicycling or smaller motor vehicles less affected by road diets, such as motorcycles, depending upon the values of those trips or of the schedule delay they experience. Studies In 1994, the UK advisory committee SACTRA carried out a major review of the effect of increasing road capacity for trunk roads and motorways only, and reported that the evidence suggested such increases often resulted in substantial increases in the volume of traffic. Following this, London Transport and the Department of the Environment, Transport and the Regions commissioned a study to see if the reverse also occurred, namely that when road capacity was reduced, there would be a reduction in traffic. This follow-up study was carried out by Sally Cairns, Carmen Hass-Klau and Phil Goodwin, with an annex by Ryuichi Kitamura, Toshiyuki Yamamoto and Satoshi Fujii, and published as a book in 1998. A third study was carried out by Sally Cairns, Steve Atkins and Phil Goodwin, and published in the journal Municipal Engineer in 2002. The 1998 study referred to about 150 sources of evidence, of which the most important were about 60 case studies in the UK, Germany, Austria, Switzerland, Italy, the Netherlands, Sweden, Norway, the US, Canada, Tasmania and Japan. They included major town-centre traffic schemes creating pedestrian areas closed to traffic, bus priority measures (especially bus lanes), bridge and road closures for maintenance, and closures due to natural disasters, mostly earthquakes. The 2002 study added some extra case studies, including some involving cycle lanes. The annex by Kitamura and his colleagues reported a detailed study of the effects of the Hanshin-Awaji earthquake in Japan. Taking the results as a whole, there was an average reduction of 41% in the traffic flows on the roads whose capacity had been reduced, of which rather less than half could be detected as reappearing on alternative routes. Thus, on average, about 25% of the traffic disappeared. Analysis of surveys and traffic counts indicated that the disappearance was accounted for by between 15 and 20 different behavioural responses, including changing to other modes of transport, changing to other destinations, a reduction in the frequency of trips, and car-sharing. There was a large variation around these average results, with the biggest effects seen in large-scale pedestrianisation in German town centres, and the smallest seen in small-scale temporary closures with good alternative routes and in small reductions in capacity on uncongested streets. In a few cases, there was actually an increase in the volume of traffic, notably in towns which had closed some town-centre roads at the same time as opening a new bypass. Cairns et al. concluded that: The European Union has produced a manual titled "Reclaiming city streets for people" that presents case studies and methodologies for traffic evaporation in urban areas. Real-world examples An early example of the reduced demand effect was described by Jane Jacobs in her classic 1961 book The Death and Life of Great American Cities.
Jacobs and others convinced New York City to close the street that split Greenwich Village's Washington Square Park in two, and also not to widen the surrounding streets to handle the extra traffic they were expected to carry because of the closure. The city's traffic engineers expected the result to be chaos, but the extra traffic never appeared: drivers instead avoided the area entirely. Two widely known examples of reduced demand occurred in San Francisco, California, and in Manhattan, New York City, where, respectively, the Embarcadero Freeway and the lower portion of the elevated West Side Highway were torn down after sections of them collapsed. Concerns were expressed that the traffic which had used these highways would overwhelm local streets, but, in fact, the traffic, instead of being displaced, for the most part disappeared entirely. A New York State Department of Transportation study showed that 93% of the traffic which had used the West Side Highway was not displaced, but simply vanished. After these examples, other highways, including portions of Harbor Drive in Portland, Oregon, the Park East Freeway in Milwaukee, Wisconsin, the Central Freeway in San Francisco, and the Cheonggyecheon Freeway in Seoul, South Korea, were torn down, with the same effect observed. The same argument is made for converting roads previously open to vehicular traffic into pedestrian areas, with a positive impact on the environment and congestion, as in the example of the central area of Florence, Italy. In New York City, after Mayor Michael Bloomberg's plan for congestion pricing in Manhattan was rejected by the New York State Assembly, portions of Broadway at Times Square, Herald Square and Madison Square were converted into pedestrian plazas, and traffic lanes in other areas were taken out of service in favor of protected bike lanes, reducing the convenience of using Broadway as a through-route. As a result, traffic on Broadway was reduced, and the speed of traffic in the area lessened. Another measure instituted was the replacement of through-lanes on some of Manhattan's north–south avenues with dedicated left-turn lanes and protected bike lanes, reducing the avenues' carrying capacity. The Bloomberg administration was able to put these changes into effect because they did not require approval from the state legislature. Despite the success of the Broadway pedestrian plazas in Manhattan, some pedestrian malls in the US, in which all traffic is removed from shopping streets, have not been successful. Areas with sufficient population density or pedestrian traffic are more likely to pursue this path successfully. Of the approximately 200 pedestrian malls created in the US from the 1970s on, only about 30 remained as of 2012, and many of these became poorer areas of their cities, as lack of accessibility caused commercial property values to decline. The exceptions, including the Third Street Promenade in Santa Monica, California, and 16th Street in Denver, Colorado, indicate that conversion of shopping streets to pedestrian malls can be successful. Some of the failed pedestrian malls have improved by allowing limited automobile traffic to return. Pedestrian zones are common across cities and towns in Europe.
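The unit long-run elasticity cited at the start of this article, together with the traffic-evaporation averages reported by Cairns et al., can be illustrated with a small numerical sketch. The following Python snippet is illustrative only: it assumes a constant-elasticity demand relationship, and the input volumes and travel times are invented examples rather than data from any of the studies cited above.

```python
def traffic_volume(base_volume, old_time, new_time, elasticity=1.0):
    """Constant-elasticity demand: volume scales with travel time
    raised to the negative elasticity."""
    return base_volume * (new_time / old_time) ** -elasticity

# A 1.0% travel-time saving with unit long-run elasticity:
base = 100_000                                              # vehicles/day (illustrative)
after = traffic_volume(base, old_time=30.0, new_time=29.7)  # 1% faster trip
print(f"Induced volume: {after:,.0f} vehicles/day "
      f"({(after / base - 1) * 100:+.1f}%)")                # ~ +1.0%

# Reduced demand, using the averages reported by Cairns et al.:
reduction_on_road = 0.41   # flow lost on the capacity-reduced road
reappearing_share = 0.40   # just under half detected on other routes
disappeared = reduction_on_road * (1 - reappearing_share)
print(f"Traffic disappearing from the network: ~{disappeared:.0%}")  # ~25%
```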
Technology
Basics_7
null
395186
https://en.wikipedia.org/wiki/Dry%20lake
Dry lake
A dry lake bed, also known as a playa, is a basin or depression that formerly contained a standing surface water body, which disappeared when evaporation processes exceeded recharge. If the floor of a dry lake is covered by deposits of alkaline compounds, it is known as an alkali flat. If covered with salt, it is known as a salt flat. Terminology If its basin is primarily salt, then a dry lake bed is called a salt pan, pan, or salt flat (the latter being a remnant of a salt lake). Hardpan is the dry terminus of an internally drained basin in a dry climate, a designation typically used in the Great Basin of the western United States. Another term for dry lake bed is playa. The Spanish word playa literally means "beach". Dry lakes are known by this name in some parts of Mexico and the western United States. The term is used, for example, on the Llano Estacado and other parts of the Southern High Plains, and is commonly used to refer to paleolake sediments in the Sahara, such as those of Lake Ptolemy. In South America, the usual term for a dry lake bed is salar or salina, Spanish for salt pan. Pan is the term used in most of South Africa. These range from the small round highveld pans typical of the Chrissiesmeer area to the extensive pans of the Northern Cape province. Terms used in Australia include salt pans (where evaporite minerals are present) and clay pans. In Arabic, a salt flat is called a sabkha (also spelled sabkhah, subkha or sebkha) or shott (chott). In Central Asia, a similar "cracked mud" salt flat is known as a takyr. In Iran, salt flats are called kavir. Formation A dry lake is formed when water from rain or other sources, such as intersection with a water table, flows into a dry depression in the landscape, creating a pond or lake. If the total annual evaporation rate exceeds the total annual inflow, the depression will eventually become dry again, forming a dry lake. Salts originally dissolved in the water precipitate out and are left behind, gradually building up over time. A dry lake appears as a flat bed of clay, generally encrusted with precipitated salts. These evaporite minerals are a concentration of weathering products such as sodium carbonate, borax, and other salts. In deserts, a dry lake may be found in an area ringed by bajadas. Dry lakes typically form in semi-arid to arid regions of the world. The largest concentration of dry lakes (nearly 22,000) is in the southern High Plains of Texas and eastern New Mexico. Most dry lakes are small. However, Salar de Uyuni in Bolivia, near Potosí, the largest salt flat in the world, covers 4,085 square miles (10,582 square km) ("Uyuni Salt Flat", Encyclopædia Britannica). Many dry lakes contain shallow water during the rainy season, especially during wet years. If the layer of water is thin and is moved around the dry lake bed by wind, an exceedingly hard and smooth surface may develop. Thicker layers of water may result in a "cracked-mud" surface and teepee-structure desiccation features. If there is very little water, dunes can form. The Racetrack Playa, located in Death Valley, California, features a geological phenomenon known as "sailing stones", which leave "racetrack" imprints as they slowly move across the surface without human or animal intervention. These rocks have recently been filmed in motion by the Scripps Institution of Oceanography at the University of California, San Diego, and their movement is due to a perfect coincidence of events.
First, the playa has to fill with water, which must be deep enough to form floating ice during winter, yet shallow enough that the rocks remain exposed. When the temperature drops at night, this pond freezes into thin sheets of "windowpane" ice, which must be thick enough to maintain strength but thin enough to move freely. Finally, when the sun comes out, the ice melts and cracks into floating panels; these are blown across the playa by light winds, propelling the rocks in front of them. The stones move only once every two or three years, and most tracks last for three or four years. Ecology While a dry lake bed is itself typically devoid of vegetation, it is commonly ringed by shadscale, saltbush and other salt-tolerant plants that provide critical winter fodder for livestock and other herbivores. In southwest Idaho and parts of Nevada and Utah, a number of rare species occur nowhere else but in the inhospitable environment of seasonally flooded playas. A new species of giant fairy shrimp was found in 2006. Although a large predatory species, it evaded detection because of the murkiness of the playa's water, caused by winds and a fine clay load. This shrimp species is able to regenerate using tiny, undetectable cysts that can remain in a dry lake bed for years until conditions are optimal for hatching. Lepidium davisii is another rare species, a perennial plant whose habitat is restricted to playas in southern Idaho and northern Nevada. Far from major rivers or lakes, playas are often the only water available to wildlife in the desert. Antelope and other wildlife gather there after rainstorms to drink. Threats to dry lakes include pollution from concentrated animal feeding operations such as cattle feedlots and dairies; erosion; fertilizer, pesticide and sediment runoff from farms; and overgrazing. A non-native shrub that has been used for rangeland restoration in the west, Kochia prostrata, also poses a significant threat to playas and their associated rare species, as it is capable of crowding out native vegetation and draining a playa's standing water through its root growth. Human use The extremely flat, smooth, and hard surfaces of dry lake beds make them ideal for fast motor vehicles and motorcycles. Large dry lakes are excellent spots for pursuing land speed records, as the smoothness of the surface allows low-clearance vehicles to travel very fast without risk of disruption by surface irregularities, and the path traveled has no obstacles to avoid. The dry lake beds at Bonneville Salt Flats in Utah and Black Rock Desert in Nevada have both been used for setting land speed records. Lake Eyre and Lake Gairdner in South Australia have also been used for various land speed record attempts. Dry lake beds that rarely fill with water are sometimes used as locations for air bases for similar reasons. Examples include Groom Lake at Area 51 in Nevada and Edwards Air Force Base (known initially as Muroc Dry Lake) in California. Brines from the subsurface of dry lakes are often exploited for valuable minerals in solution (see, for example, Searles Dry Lake and lithium resources). Under United States law, a "playa lake" may be considered an isolated wetland and may be eligible to enroll in the wetlands component of the Conservation Reserve Program, enacted in the 2002 farm bill (P.L. 107–171, Sec. 2101). The Burning Man event takes place in a playa in the Black Rock Desert in western Nevada every year.
Fangfang Yao et al. (2023), at the University of Virginia, reported that more than half of the world's large lakes are drying up. They assessed almost 2,000 large lakes using satellite measurements combined with climate and hydrological models, and found that unsustainable human use, changes in rainfall and run-off, sedimentation, and rising temperatures have driven lake levels down globally, with 53% of lakes showing a decline from 1992 to 2020.
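The water-balance condition described under Formation above (a basin dries out when annual evaporation exceeds annual inflow) can be sketched as a toy yearly simulation. This is a minimal illustration, not a hydrological model; the function name and all input numbers are invented for the example.

```python
def years_until_dry(volume, inflow, evaporation, max_years=1000):
    """Step a closed basin forward one year at a time until it empties.
    All quantities share the same unit (e.g. million cubic metres)."""
    years = 0
    while volume > 0 and years < max_years:
        volume += inflow - evaporation  # net annual water balance
        years += 1
    return years if volume <= 0 else None

# Evaporation exceeding recharge slowly empties the basin:
print(years_until_dry(volume=50.0, inflow=8.0, evaporation=12.0))  # 13 years
```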
Physical sciences
Hydrology
null
395286
https://en.wikipedia.org/wiki/Aizoaceae
Aizoaceae
The Aizoaceae, or fig-marigold family, is a large family of dicotyledonous flowering plants containing 135 genera and about 1,800 species. Several genera are commonly known as 'ice plants' or 'carpet weeds'. The Aizoaceae are also referred to as vygies in South Africa. Some of the unusual Southern African genera, such as Conophytum, Lithops, Titanopsis and Pleiospilos (among others), resemble gemstones, rocks or pebbles, and are sometimes referred to as 'living stones' or 'mesembs' (short for mesembryanthemums). Description The family Aizoaceae is widely recognised by taxonomists. It once went by the botanical name "Ficoidaceae", now disallowed. The APG II system of 2003 (unchanged from the APG system of 1998) also recognizes the family, and assigns it to the order Caryophyllales in the clade core eudicots. The APG II system also classes the former families Mesembryanthemaceae Fenzl, Sesuviaceae Horan. and Tetragoniaceae Link under the family Aizoaceae. The common Afrikaans name "vygie", meaning "small fig", refers to the fruiting capsule, which resembles the true fig. Glistening epidermal bladder cells give the family its common name "ice plants". Most species (96%, 1,782 species in 132 genera) in this family are endemic to arid or semiarid parts of Southern Africa in the Succulent Karoo. Much of the Aizoaceae's diversity is found in the Greater Cape Floristic Region, which is the most plant-diverse temperate region in the world. A few species are found in Australia and the Central Pacific area. Most fig-marigolds are herbaceous, rarely somewhat woody, with sympodial growth and stems either erect or prostrate. Leaves are simple, opposite or alternate, and more or less succulent, with entire (or rarely toothed) margins. Flowers are perfect in most species (but unisexual in some), actinomorphic, and appear singly or in few-flowered cymes developing from the leaf axils. Sepals typically number five (3–8) and are more or less connate (fused) below. True petals are absent; however, some species have numerous linear petals derived from staminodes. The seed capsules have one to numerous seeds per cell and are often hygrochastic, dispersing seeds by "jet action" when wet. Evolution The radiation of the Aizoaceae, specifically the subfamily Ruschioideae, was one of the most recent among the angiosperms, occurring 1.13–6.49 Mya. It is also one of the fastest radiations ever described in the angiosperms, with a diversification rate of about 4.4 species per million years. This diversification was roughly contemporaneous with major radiations in two other succulent lineages, Cactaceae and Agave. The family includes many species that use crassulacean acid metabolism as a pathway for carbon fixation. Some species in the subfamily Sesuvioideae instead use C4 carbon fixation, which might have evolved multiple times in the group. Taxonomy Because of the hyperdiversity of the Aizoaceae and the young age of the clade, many generic and species boundaries are uncertain. Subfamily Acrosanthoideae Genera: Acrosanthes Eckl. & Zeyh. Subfamily Aizooideae Genera: Aizoanthemopsis Klak Aizoanthemum Dinter ex Friedrich Aizoon L. Gunniopsis Pax Tetragonia L.
Subfamily Mesembryanthemoideae Genera: Subfamily Ruschioideae Genera: Tribe Ruschieae Acrodon N.E.Br Aloinopsis Schwantes Amphibolia L.Bolus ex A.G.J.Herre Antegibbaeum Schwantes ex C.Weber Antimima N.E.Br Arenifera Herre, synonym of Mesembryanthemum Argyroderma N.E.Br Astridia Dinter Bergeranthus Schwantes Bijlia N.E.Br, synonym of Pleiospilos Braunsia Schwantes Brianhuntleya Chess. et al. Carpobrotus N.E.Br × Carruanthophyllum (Carruanthus × Machairophyllum) Carruanthus (Schwantes) Schwantes Cephalophyllum N.E.Br Cerochlamys N.E.Br Chasmatophyllum Dinter & Schwantes Cheiridopsis N.E.Br Circandra N.E.Br Conophytum N.E.Br Corpuscularia Schwantes, synonym of Delosperma Cylindrophyllum Schwantes Delosperma N.E.Br Dicrocaulon N.E.Br Didymaotus N.E.Br Dinteranthus Schwantes Diplosoma Schwantes Disphyma N.E.Br Dracophilus (Schwantes) Dinter & Schwantes Drosanthemum Schwantes Eberlanzia Schwantes Ebracteola Dinter & Schwantes Ectotropis N.E.Br, synonym of Delosperma Enarganthe N.E.Br Erepsia N.E.Br Esterhuysenia L.Bolus Faucaria Schwantes Fenestraria N.E.Br Frithia N.E.Br Gibbaeum Haw. ex N.E.Br Glottiphyllum Haw. ex N.E.Br Hallianthus H.E.K.Hartmann Hereroa (Schwantes) Dinter & Schwantes Ihlenfeldtia H.E.K.Hartmann, synonym of Cheiridopsis Imitaria N.E.Br, synonym of Gibbaeum Jacobsenia L.Bolus & Schwantes Jensenobotrya A.G.J.Herre Jordaaniella H.E.K.Hartmann Juttadinteria Schwantes Khadia N.E.Br Lampranthus N.E.Br Lapidaria (Dinter & Schwantes) N.E.Br. Leipoldtia L.Bolus Lemonanthemum Klak Lithops N.E.Br Machairophyllum Schwantes Malephora N.E.Br Malotigena Niederle Marlothistella Schwantes Mestoklema N.E.Br. ex Glen Meyerophytum Schwantes Mitrophyllum Schwantes Monilaria (Schwantes) Schwantes Mossia N.E.Br Muiria N.E.Br Namaquanthus L.Bolus Namibia (Schwantes) Schwantes Nananthus N.E.Br Nelia Schwantes Neohenricia L.Bolus Octopoma N.E.Br Odontophorus N.E.Br, synonym of Cheiridopsis Oophytum N.E.Br Orthopterum L.Bolus Oscularia Schwantes Ottosonderia L.Bolus Phiambolia Klak Pleiospilos N.E.Br Polymita N.E.Br, synonym of Schlechteranthus Psammophora Dinter & Schwantes Rabiea N.E.Br Rhinephyllum N.E.Br Rhombophyllum (Schwantes) Schwantes Roosia van Jaarsv. Ruschia Schwantes Ruschianthemum Friedrich, synonym of Stoeberia Ruschianthus L.Bolus Sarcozona J.M.Black Schlechteranthus Schwantes Schwantesia Dinter Scopelogena L.Bolus Smicrostigma N.E.Br Stayneria L.Bolus Stoeberia Dinter & Schwantes Stomatium Schwantes Tanquana H.E.K.Hartmann & Liede Titanopsis Schwantes Trichodiadema Schwantes Vanheerdea L.Bolus ex H.E.K.Hartmann Vanzijlia L.Bolus Vlokia S.A.Hammer Wooleya L.Bolus Zeuktophyllum N.E.Br Subfamily Sesuvioideae This subfamily includes a number of species. Genera: Anisostigma Schinz, synonym of Tetragonia Sesuvium L. Trianthema L. Tribulocarpus S.Moore Zaleya Burm.f. Unplaced genera include Hammeria and Peersia. Uses Several genera are cultivated. Lithops, or "living stones", are popular as novelty house plants because of their stone-like appearance. Some species are edible: Carpobrotus edulis (Hottentot fig, highway ice plant) has edible leaves and fruit; Mesembryanthemum crystallinum has edible leaves; and Tetragonia tetragonoides ("New Zealand spinach") is grown as a garden plant in somewhat dry climates and used as an alternative to spinach in upscale salads. C. edulis was introduced to California in the early 1900s to stabilize soil along railroad tracks and has become invasive.
In southern California, ice plants are sometimes used as firebreaks; however, they do burn if not carefully maintained.
Biology and health sciences
Caryophyllales
Plants
395375
https://en.wikipedia.org/wiki/Activated%20carbon
Activated carbon
Activated carbon, also called activated charcoal, is a form of carbon commonly used to filter contaminants from water and air, among many other uses. It is processed (activated) to have small, low-volume pores that greatly increase the surface area available for adsorption or chemical reactions. (Adsorption, not to be confused with absorption, is a process where atoms or molecules adhere to a surface). The pores can be thought of as a microscopic "sponge" structure. Activation is analogous to making popcorn from dried corn kernels: popcorn is light, fluffy, and its kernels have a high surface-area-to-volume ratio. Activated is sometimes replaced by active. Because it is so porous on a microscopic scale, one gram of activated carbon has a surface area of over , as determined by gas adsorption. For charcoal, the equivalent figure before activation is about . A useful activation level may be obtained solely from high surface area. Further chemical treatment often enhances adsorption properties. Activated carbon is usually derived from waste products such as coconut husks; waste from paper mills has been studied as a source. These bulk sources are converted into charcoal before being activated. When derived from coal, it is referred to as activated coal. Activated coke is derived from coke. Uses Activated carbon is used in methane and hydrogen storage, air purification, capacitive deionization, supercapacitive swing adsorption, solvent recovery, decaffeination, gold purification, metal extraction, water purification, medicine, sewage treatment, air filters in respirators, filters in compressed air, teeth whitening, production of hydrogen chloride, edible electronics, and many other applications. Industrial One major industrial application involves use of activated carbon in metal finishing for purification of electroplating solutions. For example, it is the main purification technique for removing organic impurities from bright nickel plating solutions. A variety of organic chemicals are added to plating solutions for improving their deposit qualities and for enhancing properties like brightness, smoothness, ductility, etc. Due to passage of direct current and electrolytic reactions of anodic oxidation and cathodic reduction, organic additives generate unwanted breakdown products in solution. Their excessive build up can adversely affect plating quality and physical properties of deposited metal. Activated carbon treatment removes such impurities and restores plating performance to the desired level. Medical Activated carbon is used to treat poisonings and overdoses following oral ingestion. Tablets or capsules of activated carbon are used in many countries as an over-the-counter drug to treat diarrhea, indigestion, and flatulence. However, activated charcoal shows no effect on intestinal gas and diarrhea, is ordinarily medically ineffective if poisoning resulted from ingestion of corrosive agents, boric acid, or petroleum products, and is particularly ineffective against poisonings of strong acids or bases, cyanide, iron, lithium, arsenic, methanol, ethanol, or ethylene glycol. Activated carbon will not prevent these chemicals from being absorbed into the human body. It is on the World Health Organization's List of Essential Medicines. Incorrect application (e.g. into the lungs) results in pulmonary aspiration, which can sometimes be fatal if immediate medical treatment is not initiated. 
Analytical chemistry Activated carbon, in 50% w/w combination with celite, is used as a stationary phase in low-pressure chromatographic separation of carbohydrates (mono-, di-, and trisaccharides) using ethanol solutions (5–50%) as the mobile phase in analytical or preparative protocols. Activated carbon is useful for extracting direct oral anticoagulants (DOACs) such as dabigatran, apixaban, rivaroxaban and edoxaban from blood plasma samples. For this purpose it has been made into "minitablets", each containing 5 mg of activated carbon, for treating 1 ml samples of DOAC-containing plasma. Since this activated carbon has no effect on blood clotting factors, heparin or most other anticoagulants, this allows a plasma sample to be analyzed for abnormalities otherwise affected by the DOACs. Environmental Carbon adsorption has numerous applications in removing pollutants from air or water streams, both in the field and in industrial processes, such as: spill cleanup; groundwater remediation; drinking water filtration; wastewater treatment; air purification; capture of volatile organic compounds from painting, dry cleaning, gasoline dispensing operations, and other processes; and recovery of volatile organic compounds (SRU, solvent recovery unit; SRP, solvent recovery plant; SRS, solvent recovery system) from flexible packaging, converting, coating, and other processes. During early implementation of the 1974 Safe Drinking Water Act in the US, EPA officials developed a rule that proposed requiring drinking water treatment systems to use granular activated carbon. Because of its high cost, the so-called GAC rule encountered strong opposition across the country from the water supply industry, including the largest water utilities in California; hence, the agency set the rule aside. Activated carbon filtration is an effective water treatment method due to its multi-functional nature. Specific types of activated carbon filtration methods and equipment are indicated, depending upon the contaminants involved. Activated carbon is also used for the measurement of radon concentration in air. Biomass waste-derived activated carbons have also been used successfully for the removal of caffeine and paracetamol from water. Agricultural Activated carbon (charcoal) is an allowed substance used by organic farmers in both livestock production and winemaking. In livestock production it is used as a pesticide, animal feed additive, processing aid, nonagricultural ingredient and disinfectant. In organic winemaking, activated carbon is allowed for use as a processing agent to adsorb brown color pigments from white grape concentrates. It is sometimes used as biochar. Distilled alcoholic beverage purification Activated carbon filters (AC filters) can be used to filter vodka and whiskey of organic impurities which can affect color, taste, and odor. Passing an organically impure vodka through an activated carbon filter at the proper flow rate will result in vodka with an identical alcohol content and significantly increased organic purity, as judged by odor and taste. Fuel storage Research is underway testing various activated carbons' ability to store natural gas and hydrogen gas. The porous material acts like a sponge for different types of gases. The gas is attracted to the carbon material via van der Waals forces. Some carbons have been able to achieve binding energies of 5–10 kJ per mol.
The gas may then be desorbed when subjected to higher temperatures and either combusted to do work or, in the case of hydrogen gas, extracted for use in a hydrogen fuel cell. Gas storage in activated carbons is an appealing storage method because the gas can be stored in a low-pressure, low-mass, low-volume environment that would be much more feasible than bulky on-board pressure tanks in vehicles. The United States Department of Energy has specified certain goals to be achieved in the area of research and development of nanoporous carbon materials. All of the goals are yet to be satisfied, but numerous institutions, including the ALL-CRAFT program, continue to conduct work in this field. Gas purification Filters with activated carbon are usually used in compressed air and gas purification to remove oil vapors, odor, and other hydrocarbons from the air. The most common designs use a one-stage or two-stage filtration principle in which activated carbon is embedded inside the filter media. Activated carbon filters are used to retain radioactive gases within the air vacuumed from a nuclear boiling water reactor turbine condenser. The large charcoal beds adsorb these gases and retain them while they rapidly decay to nonradioactive solid species. The solids are trapped in the charcoal particles, while the filtered air passes through. Chemical purification Activated carbon is commonly used on the laboratory scale to purify solutions of organic molecules containing unwanted colored organic impurities. Filtration over activated carbon is used in large-scale fine chemical and pharmaceutical processes for the same purpose. The carbon is either mixed with the solution and then filtered off, or immobilized in a filter. Mercury scrubbing Activated carbon, often infused with sulfur or iodine, is widely used to trap mercury emissions from coal-fired power stations, medical incinerators, and from natural gas at the wellhead. However, despite its effectiveness, activated carbon is expensive to use. Since it is often not recycled, the mercury-laden activated carbon presents a disposal dilemma. If the activated carbon contains less than 260 ppm mercury, United States federal regulations allow it to be stabilized (for example, trapped in concrete) for landfilling. However, waste containing greater than 260 ppm is considered to be in the high-mercury subcategory and is banned from landfilling (Land-Ban Rule). This material is now accumulating in warehouses and in deep abandoned mines at an estimated rate of 100 tons per year. The problem of disposal of mercury-laden activated carbon is not unique to the United States. In the Netherlands, this mercury is largely recovered and the activated carbon is disposed of by complete burning, forming carbon dioxide. Food additive Activated, food-grade charcoal became a food trend in 2016, being used as an additive to impart a "slightly smoky" taste and a dark coloring to products including hot dogs, ice cream, pizza bases, and bagels. People taking medication, including birth control pills and antidepressants, are advised to avoid novelty foods or drinks that use activated charcoal coloring, since it can render the medication ineffective. Smoking filtration Activated charcoal is used in smoking filters to reduce the tar and other combustion products present in smoke; it has been found to reduce the toxicants in tobacco smoke, in particular the free radicals.
Structure of activated carbon The structure of activated carbon has long been a subject of debate. In a book published in 2006, Harry Marsh and Francisco Rodríguez-Reinoso considered more than 15 models for the structure, without coming to a definite conclusion about which was correct. Recent work using aberration-corrected transmission electron microscopy has suggested that activated carbons may have a structure related to that of the fullerenes, with pentagonal and heptagonal carbon rings. Production Activated carbon is carbon produced from carbonaceous source materials such as bamboo, coconut husk, willow peat, wood, coir, lignite, coal, and petroleum pitch. It can be produced (activated) by one of the following processes: Physical activation: The source material is developed into activated carbon using hot gases, after which air is introduced to burn out the gases, creating a graded, screened and de-dusted form of activated carbon. This is generally done by using one or more of the following processes: Carbonization: Material with carbon content is pyrolyzed at temperatures in the range 600–900 °C, usually in an inert atmosphere with gases such as argon or nitrogen. Activation/oxidation: Raw material or carbonized material is exposed to oxidizing atmospheres (oxygen or steam) at temperatures above 250 °C, usually in the range of 600–1200 °C; in one reported procedure, activation is performed by heating the sample for 1 h in a muffle furnace at 450 °C in the presence of air. Chemical activation: The carbon material is impregnated with certain chemicals, typically an acid, strong base, or salt (phosphoric acid 25%, potassium hydroxide 5%, sodium hydroxide 5%, potassium carbonate 5%, calcium chloride 25%, and zinc chloride 25%). The carbon is then subjected to high temperatures (250–600 °C). The temperature is believed to activate the carbon at this stage by forcing the material to open up and develop more microscopic pores. Chemical activation is preferred to physical activation owing to the lower temperatures, better quality consistency, and shorter time needed for activating the material. The Dutch company Norit NV, part of the Cabot Corporation, is the largest producer of activated carbon in the world. Haycarb, a Sri Lankan coconut-shell-based company, controls 16% of the global market share. Classification Activated carbons are complex products which are difficult to classify on the basis of their behaviour, surface characteristics and other fundamental criteria. However, some broad classification is made for general purposes based on their size, preparation methods, and industrial applications. Powdered activated carbon (PAC) Normally, activated carbons (R 1) are made in particulate form as powders or fine granules less than 1.0 mm in size, with an average diameter between 0.15 and 0.25 mm. Thus they present a large surface-to-volume ratio with a small diffusion distance. Activated carbon (R 1) is defined as the activated carbon particles retained on a 50-mesh sieve (0.297 mm). Powdered activated carbon (PAC) is finer material, made up of crushed or ground carbon particles, 95–100% of which will pass through a designated mesh sieve. The ASTM classifies particles passing through an 80-mesh sieve (0.177 mm) and smaller as PAC. It is not common to use PAC in a dedicated vessel, due to the high head loss that would occur. Instead, PAC is generally added directly to other process units, such as raw water intakes, rapid mix basins, clarifiers, and gravity filters.
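The sieve-based definitions above lend themselves to a simple classifier. The following Python sketch applies the ASTM PAC cut-off (passing a 0.177 mm, 80-mesh sieve) and the 50-mesh (0.297 mm) retention criterion for granular material quoted above; the function name and the handling of intermediate sizes are illustrative assumptions, not part of any standard.

```python
def classify_carbon_particle(diameter_mm: float) -> str:
    """Rough size classification of an activated carbon particle,
    using the sieve sizes quoted in the text above."""
    if diameter_mm <= 0.177:       # passes an 80-mesh sieve -> powdered
        return "PAC (powdered)"
    if diameter_mm >= 0.297:       # retained on a 50-mesh sieve -> granular
        return "granular (R 1)"
    return "intermediate (passes 50-mesh, retained on 80-mesh)"

for d in (0.10, 0.25, 0.84):
    print(f"{d:.2f} mm -> {classify_carbon_particle(d)}")
```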
Granular activated carbon (GAC) Granular activated carbon (GAC) has a relatively larger particle size compared to powdered activated carbon and consequently presents a smaller external surface. Diffusion of the adsorbate is thus an important factor. These carbons are suitable for adsorption of gases and vapors, because gaseous substances diffuse rapidly. Granulated carbons are used for air filtration and water treatment, as well as for general deodorization and separation of components in flow systems and in rapid mix basins. GAC can be obtained in either granular or extruded form. GAC is designated by sizes such as 8×20, 20×40, or 8×30 for liquid-phase applications and 4×6, 4×8 or 4×10 for vapor-phase applications. A 20×40 carbon is made of particles that will pass through a U.S. Standard Mesh Size No. 20 sieve (0.84 mm) (generally specified as 85% passing) but be retained on a U.S. Standard Mesh Size No. 40 sieve (0.42 mm) (generally specified as 95% retained). AWWA (1992) B604 uses the 50-mesh sieve (0.297 mm) as the minimum GAC size. The most popular aqueous-phase carbons are the 12×40 and 8×30 sizes because they have a good balance of size, surface area, and head-loss characteristics. Extruded activated carbon (EAC) Extruded activated carbon (EAC) combines powdered activated carbon with a binder; the two are fused together and extruded into cylindrical activated carbon blocks with diameters from 0.8 to 130 mm. These are mainly used for gas-phase applications because of their low pressure drop, high mechanical strength and low dust content. EAC is also sold as the CTO filter (Chlorine, Taste, Odor). Bead activated carbon (BAC) Bead activated carbon (BAC) is made from petroleum pitch and supplied in diameters from approximately 0.35 to 0.80 mm. Like EAC, it is noted for its low pressure drop, high mechanical strength and low dust content, but with a smaller grain size. Its spherical shape makes it preferred for fluidized-bed applications such as water filtration. Impregnated carbon Impregnated carbons are porous carbons containing several types of inorganic impregnates such as iodine and silver. Carbons impregnated with cations such as aluminium, manganese, zinc, iron, lithium, and calcium have also been prepared for specific applications in air pollution control, especially in museums and galleries. Due to its antimicrobial and antiseptic properties, silver-loaded activated carbon is used as an adsorbent for purification of domestic water. Drinking water can be obtained from natural water by treating the natural water with a mixture of activated carbon and aluminium hydroxide (Al(OH)3), a flocculating agent. Impregnated carbons are also used for the adsorption of hydrogen sulfide (H2S) and thiols. Adsorption capacities for H2S as high as 50% by weight have been reported. Polymer-coated carbon Porous carbon can be coated with a biocompatible polymer to give a smooth and permeable coat without blocking the pores. The resulting carbon is useful for hemoperfusion, a treatment technique in which large volumes of the patient's blood are passed over an adsorbent substance in order to remove toxic substances from the blood. Woven carbon Technical rayon fiber can be processed into activated carbon cloth for carbon filtering. The adsorption capacity of activated cloth is greater than that of activated charcoal (BET surface area: 500–1500 m2/g, pore volume: 0.3–0.8 cm3/g).
Thanks to the different forms of activated material, it can be used in a wide range of applications (supercapacitors, odor absorbers, the CBRN defense industry, etc.). Properties A gram of activated carbon can have a surface area in excess of , with being readily achievable. Carbon aerogels, while more expensive, have even higher surface areas, and are used in special applications. Under an electron microscope, the high-surface-area structures of activated carbon are revealed. Individual particles are intensely convoluted and display various kinds of porosity; there may be many areas where flat surfaces of graphite-like material run parallel to each other, separated by only a few nanometres or so. These micropores provide superb conditions for adsorption to occur, since adsorbing material can interact with many surfaces simultaneously. Tests of adsorption behaviour are usually done with nitrogen gas at 77 K under high vacuum, but in everyday terms activated carbon is perfectly capable of producing the equivalent, by adsorption from its environment, of liquid water from steam at a pressure of 1/10,000 of an atmosphere. James Dewar, the scientist after whom the Dewar (vacuum flask) is named, spent much time studying activated carbon and published a paper regarding its adsorption capacity with regard to gases. In this paper, he reported that cooling the carbon to liquid nitrogen temperatures allowed it to adsorb significant quantities of numerous air gases, among others, that could then be recollected by simply allowing the carbon to warm again, and that coconut-based carbon was superior for the effect. He used oxygen as an example: activated carbon would typically adsorb the atmospheric concentration (21%) under standard conditions, but would release over 80% oxygen if the carbon was first cooled to low temperatures. Physically, activated carbon binds materials by van der Waals force or London dispersion force. Activated carbon does not bind well to certain chemicals, including alcohols, diols, strong acids and bases, metals and most inorganics, such as lithium, sodium, iron, lead, arsenic, fluorine, and boric acid. Activated carbon adsorbs iodine very well. The iodine capacity, in mg/g (ASTM D28 Standard Method test), may be used as an indication of total surface area. Carbon monoxide is not well adsorbed by activated carbon. This should be of particular concern to those using the material in filters for respirators, fume hoods, or other gas control systems, because the gas is undetectable to the human senses, toxic to the metabolism, and neurotoxic. Substantial lists of the common industrial and agricultural gases adsorbed by activated carbon can be found online. Activated carbon can be used as a substrate for the application of various chemicals to improve the adsorptive capacity for some inorganic (and problematic organic) compounds such as hydrogen sulfide (H2S), ammonia (NH3), formaldehyde (HCHO), mercury (Hg) and radioactive iodine-131 (131I). This property is known as chemisorption. Iodine number Many carbons preferentially adsorb small molecules. The iodine number is the most fundamental parameter used to characterize activated carbon performance. It is a measure of activity level (a higher number indicates a higher degree of activation), often reported in mg/g (typical range 500–1200 mg/g). It is a measure of the micropore content of the activated carbon (pores of 0 to 20 Å, or up to 2 nm) by adsorption of iodine from solution, and is equivalent to a surface area of between 900 and 1100 m2/g.
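As a rough illustration of what the iodine number measures, the following Python sketch computes milligrams of iodine adsorbed per gram of carbon from before-and-after solution concentrations. It is a simplified mass balance: the standard procedure additionally corrects the result to a residual filtrate normality of 0.02 N, which is omitted here, and all input values are invented examples.

```python
IODINE_EQ_WT = 126.9  # g per equivalent of iodine (atomic weight of I)

def iodine_number(n_initial, n_residual, volume_l, carbon_g):
    """mg of iodine adsorbed per gram of carbon (simplified mass balance).
    n_initial, n_residual: iodine solution normalities (eq/L) before/after."""
    adsorbed_mg = (n_initial - n_residual) * volume_l * IODINE_EQ_WT * 1000
    return adsorbed_mg / carbon_g

# 100 mL of 0.10 N iodine treated with 1.0 g of carbon, 0.02 N left over:
print(f"{iodine_number(0.10, 0.02, 0.100, 1.0):.0f} mg/g")  # ~1015 mg/g
```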
The iodine number is the standard measure for liquid-phase applications. It is defined as the milligrams of iodine adsorbed by one gram of carbon when the iodine concentration in the residual filtrate is 0.02 normal (i.e. 0.02 N). Basically, the iodine number is a measure of the iodine adsorbed in the pores and, as such, is an indication of the pore volume available in the activated carbon of interest. Typically, water-treatment carbons have iodine numbers ranging from 600 to 1100. Frequently, this parameter is used to determine the degree of exhaustion of a carbon in use. However, this practice should be viewed with caution, as chemical interactions with the adsorbate may affect the iodine uptake, giving false results. Thus, the use of the iodine number as a measure of the degree of exhaustion of a carbon bed can only be recommended if it has been shown to be free of chemical interactions with adsorbates and if an experimental correlation between iodine number and the degree of exhaustion has been determined for the particular application. Molasses Some carbons are more adept at adsorbing large molecules. The molasses number, or molasses efficiency, is a measure of the mesopore content of the activated carbon (pores greater than 20 Å, or larger than 2 nm) by adsorption of molasses from solution. A high molasses number indicates a high adsorption of big molecules (range 95–600). Caramel dp (decolorizing performance) is similar to the molasses number. Molasses efficiency is reported as a percentage (range 40%–185%) and parallels the molasses number (600 = 185%, 425 = 85%). The European molasses number (range 525–110) is inversely related to the North American molasses number. The molasses number is a measure of the degree of decolorization of a standard molasses solution that has been diluted and standardized against standardized activated carbon. Due to the size of color bodies, the molasses number represents the potential pore volume available for larger adsorbing species. As all of the pore volume may not be available for adsorption in a particular wastewater application, and as some of the adsorbate may enter smaller pores, it is not a good measure of the worth of a particular activated carbon for a specific application. Frequently, this parameter is useful in evaluating a series of active carbons for their rates of adsorption. Given two active carbons with similar pore volumes for adsorption, the one having the higher molasses number will usually have larger feeder pores, resulting in more efficient transfer of adsorbate into the adsorption space. Tannin Tannins are a mixture of large and medium-size molecules. Carbons with a combination of macropores and mesopores adsorb tannins. The ability of a carbon to adsorb tannins is reported in parts per million concentration (range 200 ppm–362 ppm). Methylene blue Some carbons have a mesopore (20 Å to 50 Å, or 2 to 5 nm) structure which adsorbs medium-size molecules, such as the dye methylene blue. Methylene blue adsorption is reported in g/100 g (range 11–28 g/100 g). Dechlorination Some carbons are evaluated based on the dechlorination half-value length, which measures the chlorine-removal efficiency of activated carbon. The dechlorination half-value length is the depth of carbon required to reduce the chlorine concentration by 50%. A lower half-value length indicates superior performance.
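Because each half-value length of carbon removes half of the remaining chlorine, the residual concentration falls geometrically with bed depth. The following Python sketch works out the depth needed to hit a target residual; it assumes the idealized halving behaviour holds throughout the bed, and all numbers are illustrative.

```python
import math

def bed_depth_for_target(c_in, c_target, half_value_len_cm):
    """Carbon bed depth (cm) needed to reduce chlorine from c_in to
    c_target, assuming each half-value length halves the concentration."""
    halvings = math.log2(c_in / c_target)
    return halvings * half_value_len_cm

# Reduce 2.0 mg/L free chlorine to 0.05 mg/L with a 5 cm half-value length:
print(f"{bed_depth_for_target(2.0, 0.05, 5.0):.1f} cm")  # ~26.6 cm
```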
Apparent density The solid or skeletal density of activated carbons will typically range between 2000 and 2100 kg/m3 (125–130 lbs per cubic foot). However, a large part of an activated carbon sample will consist of air space between particles, and the actual or apparent density will therefore be lower, typically 400 to 500 kg/m3 (25–31 lbs per cubic foot). Higher density provides greater volume activity and normally indicates better-quality activated carbon. ASTM D2854-09 (2014) is used to determine the apparent density of activated carbon. Hardness/abrasion number The hardness or abrasion number is a measure of the activated carbon's resistance to attrition. It is an important indicator of the ability of activated carbon to maintain its physical integrity and withstand frictional forces. There are large differences in the hardness of activated carbons, depending on the raw material and activity levels (porosity). Ash content Ash reduces the overall activity of activated carbon and reduces the efficiency of reactivation. The amount is exclusively dependent on the base raw material used to produce the activated carbon (e.g., coconut, wood, coal). Metal oxides (such as Fe2O3) can leach out of activated carbon, resulting in discoloration. Acid- or water-soluble ash content is more significant than total ash content. Soluble ash content can be very important for aquarists, as ferric oxide can promote algal growth. A carbon with a low soluble ash content should be used for marine, freshwater fish and reef tanks to avoid heavy metal poisoning and excess plant or algal growth. ASTM D2866 (Standard Method test) is used to determine the ash content of activated carbon. Carbon tetrachloride activity This is a measurement of the porosity of an activated carbon by the adsorption of saturated carbon tetrachloride vapour. Particle size distribution The finer the particle size of an activated carbon, the better the access to the surface area and the faster the rate of adsorption kinetics. In vapour-phase systems this needs to be weighed against pressure drop, which will affect energy cost. Careful consideration of particle size distribution can provide significant operating benefits. However, in the case of using activated carbon for adsorption of minerals such as gold, the particle size should be in the range of . Activated carbon with a particle size less than 1 mm would not be suitable for elution (the stripping of mineral from an activated carbon). Modification of properties and reactivity Acid-base, oxidation-reduction and specific adsorption characteristics are strongly dependent on the composition of the surface functional groups. The surface of conventional activated carbon is reactive, being capable of oxidation by atmospheric oxygen, oxygen plasma, steam, and also carbon dioxide and ozone. Oxidation in the liquid phase is caused by a wide range of reagents (HNO3, H2O2, KMnO4). Through the formation of a large number of basic and acidic groups on the surface of oxidized carbon, its sorption and other properties can differ significantly from those of the unmodified forms. Activated carbon can be nitrogenated by natural products or polymers, or by processing the carbon with nitrogenating reagents. Activated carbon can also interact with chlorine, bromine and fluorine. The surface of activated carbon, like that of other carbon materials, can be fluoroalkylated by treatment with (per)fluoropolyether peroxide in a liquid phase, or with a wide range of fluoroorganic substances by the CVD method. Such materials combine high hydrophobicity and chemical stability with electrical and thermal conductivity, and can be used as electrode material for supercapacitors.
Sulfonic acid functional groups can be attached to activated carbon to give "starbons", which can be used to selectively catalyse the esterification of fatty acids. Formation of such activated carbons from halogenated precursors gives a more effective catalyst, which is thought to be a result of remaining halogens improving stability. The synthesis of activated carbon with chemically grafted superacid sites (–CF2SO3H) has been reported. Some of the chemical properties of activated carbon have been attributed to the presence of the surface-active carbon double bond. The Polanyi adsorption theory is a popular method for analyzing the adsorption of various organic substances to the surface. Examples of adsorption Heterogeneous catalysis The most commonly encountered form of chemisorption in industry occurs when a solid catalyst interacts with a gaseous feedstock, the reactant(s). The adsorption of reactant(s) to the catalyst surface creates a chemical bond, altering the electron density around the reactant molecule and allowing it to undergo reactions that would not normally be available to it. Reactivation and regeneration The reactivation or regeneration of activated carbons involves restoring the adsorptive capacity of saturated activated carbon by desorbing the contaminants adsorbed on the activated carbon surface. Thermal reactivation The most common regeneration technique employed in industrial processes is thermal reactivation. The thermal regeneration process generally follows three steps: drying of the adsorbent; high-temperature desorption and decomposition under an inert atmosphere; and residual organic gasification by a non-oxidising gas (steam or carbon dioxide) at elevated temperatures. The heat-treatment stage utilises the exothermic nature of adsorption and results in desorption, partial cracking and polymerization of the adsorbed organics. The final step aims to remove the charred organic residue formed in the porous structure in the previous stage and re-expose the porous carbon structure, regenerating its original surface characteristics. After treatment, the adsorption column can be reused. Per adsorption-thermal regeneration cycle, between 5 and 15 wt% of the carbon bed is burnt off, resulting in a loss of adsorptive capacity. Thermal regeneration is a high-energy process due to the high required temperatures, making it both energetically and commercially expensive. Plants that rely on thermal regeneration of activated carbon have to be of a certain size before it is economically viable to have regeneration facilities onsite. As a result, it is common for smaller waste-treatment sites to ship their activated carbon cores to specialised facilities for regeneration. Other regeneration techniques Current concerns with the high-energy and high-cost nature of thermal regeneration of activated carbon have encouraged research into alternative regeneration methods to reduce the environmental impact of such processes. Though several of the regeneration techniques cited have remained areas of purely academic research, some alternatives to thermal regeneration systems have been employed in industry.
Current alternative regeneration methods include: TSA (thermal swing adsorption) and/or PSA (pressure swing adsorption) processes, through convection (heat transfer) using steam, "hot" inert gas (typically nitrogen heated to 150–250 °C (302–482 °F)), or vacuum (T+VSA or TVSA, combining the TSA and VSA processes) for in situ regeneration; MWR (microwave regeneration); chemical and solvent regeneration; microbial regeneration; electrochemical regeneration; ultrasonic regeneration; and wet air oxidation.
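Since each thermal regeneration cycle burns off roughly 5–15 wt% of the bed, the remaining carbon shrinks geometrically with repeated cycles. The short Python sketch below illustrates this compounding loss; the 10% per-cycle loss and the 50% replacement threshold are invented example figures, not industry specifications.

```python
def cycles_until_replacement(loss_per_cycle=0.10, threshold=0.50):
    """Number of regeneration cycles before the remaining bed mass
    falls below a make-up threshold (fractions of the original bed)."""
    remaining, cycles = 1.0, 0
    while remaining >= threshold:
        remaining *= 1.0 - loss_per_cycle  # burn-off on each cycle
        cycles += 1
    return cycles, remaining

cycles, left = cycles_until_replacement()
print(f"After {cycles} cycles only {left:.0%} of the bed remains")  # 7 cycles, ~48%
```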
Physical sciences
Group 14
Chemistry
395846
https://en.wikipedia.org/wiki/Mycobacterium
Mycobacterium
Mycobacterium is a genus of over 190 species in the phylum Actinomycetota, assigned its own family, Mycobacteriaceae. This genus includes pathogens known to cause serious diseases in mammals, including tuberculosis (M. tuberculosis) and leprosy (M. leprae) in humans. The Greek prefix myco- means 'fungus', alluding to this genus' mold-like colony surfaces. Because the genus has cell walls with a waxy, lipid-rich outer layer containing high concentrations of mycolic acid, acid-fast staining is used to identify its members: unlike other cell types, they resist decolorization by acids. Mycobacterial species are generally aerobic, non-motile, and capable of growing with minimal nutrition. The genus is divided based on each species' pigment production and growth rate. While most Mycobacterium species are non-pathogenic, the genus' characteristic complex cell wall contributes to evasion of host defenses. Microbiology Morphology Mycobacteria are aerobic rods, 0.2–0.6 μm wide and 1.0–10 μm long. They are generally non-motile, except for the species Mycobacterium marinum, which has been shown to be motile within macrophages. Mycobacteria possess capsules, and most do not form endospores. M. marinum and perhaps M. bovis have been reported to sporulate; however, this has been contested by further research. The distinguishing characteristic of all Mycobacterium species is a thick, hydrophobic, mycolic acid-rich cell wall made of peptidoglycan and arabinogalactan, with these unique components offering targets for new tuberculosis drugs. Physiology Many Mycobacterium species readily grow with minimal nutrients, using ammonia and/or amino acids as nitrogen sources and glycerol as a carbon source in the presence of mineral salts. Temperatures for optimal growth vary between species and media conditions, ranging from 25 to 45 °C. Most Mycobacterium species, including most clinically relevant species, can be cultured on blood agar. However, some species grow very slowly due to extremely long reproductive cycles; M. leprae, for example, requires 12 days per division cycle, compared to 20 minutes for some E. coli strains. Ecology Although widespread across aquatic and terrestrial environments, most mycobacteria, other than the pathogenic Mycobacterium tuberculosis and M. leprae, do not cause disease unless they enter skin lesions of those with pulmonary and/or immune dysfunction. Through biofilm formation, cell wall resistance to chlorine, and association with amoebas, mycobacteria can survive a variety of environmental stressors. The agar media used for most water testing do not support the growth of mycobacteria, allowing them to go undetected in municipal and hospital systems. Genomics Hundreds of Mycobacterium genomes have been completely sequenced. The genome sizes of mycobacteria range from relatively small ones (e.g. in M. leprae) to quite large ones, such as that of M. vulneris, which encodes 6,653 proteins, more than the roughly 6,000 proteins of eukaryotic yeast. Pathogenicity Mycobacterium tuberculosis complex Mycobacterium tuberculosis can remain latent in human hosts for decades after an initial infection, allowing it to continue infecting others. It has been estimated that a third of the world population has latent tuberculosis (TB). M. tuberculosis has many virulence factors, which can be divided across lipid and fatty acid metabolism, cell envelope proteins, macrophage inhibitors, kinase proteins, proteases, metal-transporter proteins, and gene expression regulators. Several lineages, such as M. t. var.
bovis (bovine TB) were considered separate species in the M. tuberculosis complex until they were merged into the main species in 2018. Leprosy The development of leprosy is caused by infection with either Mycobacterium leprae or Mycobacterium lepromatosis, two closely related bacteria. Roughly 200,000 new cases of infection are reported each year, and 80% of new cases are reported in Brazil, India, and Indonesia. M. leprae infection localizes within the skin macrophages and Schwann cells found in peripheral nerve tissue. Nontuberculous mycobacteria Nontuberculous mycobacteria (NTM), which exclude M. tuberculosis, M. leprae, and M. lepromatosis, can infect mammalian hosts. These bacteria are referred to as "atypical mycobacteria." Although person-to-person transmission is rare, transmission of M. abscessus has been observed between patients with cystic fibrosis. The four primary diseases observed in humans are chronic pulmonary disease, disseminated disease in immunocompromised patients, skin and soft tissue infections, and superficial lymphadenitis. 80–90% of recorded NTM infections manifest as pulmonary diseases. M. abscessus is the most virulent rapidly-growing mycobacterium (RGM), as well as the leading cause of RGM pulmonary infections. Although it has traditionally been viewed as an opportunistic pathogen like other NTMs, analysis of various virulence factors (VFs) has shifted this view to that of a true pathogen. This is due to the presence of known mycobacterial VFs and other non-mycobacterial VFs found in other prokaryotic pathogens. Virulence factors Mycobacteria have cell walls with peptidoglycan, arabinogalactan, and mycolic acid; a waxy outer mycomembrane of mycolic acid; and an outermost capsule of glucans and secreted proteins for virulence. They constantly remodel these layers to survive in stressful environments and avoid host immune defenses. This cell wall structure results in colony surfaces resembling fungi, leading to the genus' use of the Greek prefix myco-. This unique structure makes penicillins ineffective, instead requiring a multi-drug antibiotic treatment of isoniazid to inhibit mycolic acid synthesis, rifampicin to interfere with transcription, ethambutol to hinder arabinogalactan synthesis, and pyrazinamide to impede coenzyme A synthesis. History Mycobacteria have historically been categorized through phenotypic testing, such as the Runyon classification of analyzing growth rate and production of yellow/orange carotenoid pigments. Group I contains photochromogens (pigment production induced by light), Group II comprises scotochromogens (constitutive pigment production), and the non-chromogens of Groups III and IV have a pale yellow/tan pigment, regardless of light exposure. Group IV species are "rapidly-growing" mycobacteria, in contrast to the "slowly-growing" Group III species, because samples grow into visible colonies in less than seven days. Because the International Code of Nomenclature of Prokaryotes (ICNP) currently recognizes 195 Mycobacterium species, classification and identification systems now rely on DNA sequencing and computational phylogenetics. The major disease-causing groups are the M. tuberculosis complex (tuberculosis), M. avium complex (mycobacterium avium-intracellulare infection), M. leprae and M. lepromatosis (leprosy), and M. abscessus (chronic lung infection). Microbiologist Enrico Tortoli has constructed a phylogenetic tree of the genus' key species based on the earlier genetic sequencing of Rogall et al.
(1990), alongside new phylogenetic trees based on Tortoli's 2017 sequencing of 148 Mycobacterium species: Proposed division of the genus Gupta et al. have proposed dividing Mycobacterium into five genera, based on an analysis of 150 species in this genus. Due to controversy over complicating clinical diagnoses and treatment, all of the renamed species have retained their original identity in the Mycobacterium genus as a valid taxonomic synonym: Mycobacterium based on the Slowly-Growing Tuberculosis-Simiae clade Mycobacteroides based on the Rapidly-Growing Abscessus-Chelonae clade Mycolicibacillus based on the Slowly-Growing Triviale clade Mycolicibacter based on the Slowly-Growing Terrae clade Mycolicibacterium based on the Rapidly-Growing Fortuitum-Vaccae clade Diagnosis The two most common methods for visualizing these acid-fast bacilli as bright red against a blue background are the Ziehl-Neelsen stain and the modified Kinyoun stain. Fite's stain is used to color M. leprae cells pink against a blue background. Rapid modified auramine O fluorescent staining binds specifically to slowly-growing mycobacteria, staining them yellow against a dark background. Newer methods include Gomori methenamine silver staining and periodic acid–Schiff staining to color Mycobacterium avium complex (MAC) cells black and pink, respectively. While some mycobacteria can take up to eight weeks to grow visible colonies from a cultured sample, most clinically relevant species will grow within the first four weeks, allowing physicians to consider alternative causes if negative readings continue past the first month. Growth media include Löwenstein–Jensen medium and the mycobacteria growth indicator tube (MGIT). Mycobacteriophages Mycobacteria can be infected by mycobacteriophages, a class of viruses with high specificity for their targets. By hijacking the cellular machinery of mycobacteria to produce additional phages, such viruses can be used in phage therapy for eukaryotic hosts, since the phages die out alongside the mycobacteria and cannot infect eukaryotic cells. Since only some mycobacteriophages are capable of penetrating the M. tuberculosis membrane, the viral DNA may be delivered through artificial liposomes, because bacteria take up, transcribe, and translate foreign DNA into proteins. Mycosides Mycosides are glycolipids isolated from Mycobacterium species, with Mycoside A found in photochromogenic strains, Mycoside B in bovine strains, and Mycoside C in avian strains. Different forms of Mycoside C have varying success as receptors for inactivating mycobacteriophages. Replacement of the gene encoding mycocerosic acid synthase in M. bovis prevents formation of mycosides.
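As a rough arithmetic check of the growth-rate comparison made earlier (about 20 minutes per division for fast-growing E. coli strains versus 12 days for M. leprae), the following sketch assumes – purely for illustration – that a visible colony requires about 10^8 cells:

import math

# Rough arithmetic for the growth-rate comparison in the text:
# ~20 min per division for fast E. coli strains vs ~12 days for M. leprae.
# The 1e8-cell threshold for a visible colony is an illustrative assumption.
generations = math.log2(1e8)            # ~26.6 doublings from a single cell

ecoli_hours = generations * 20 / 60     # ~9 hours: an overnight colony
leprae_days = generations * 12          # ~320 days: far beyond the
                                        # eight-week culture window
print(f"E. coli: ~{ecoli_hours:.0f} h; M. leprae: ~{leprae_days:.0f} days")

The orders of magnitude explain why slow growers need weeks of incubation, and why a species dividing every 12 days is, in practice, not culturable on plates at all.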
Biology and health sciences
Gram-positive bacteria
Plants
396022
https://en.wikipedia.org/wiki/Euler%20equations%20%28fluid%20dynamics%29
Euler equations (fluid dynamics)
In fluid dynamics, the Euler equations are a set of partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity. The Euler equations can be applied to incompressible and compressible flows. The incompressible Euler equations consist of Cauchy equations for conservation of mass and balance of momentum, together with the incompressibility condition that the flow velocity is divergence-free. The compressible Euler equations consist of equations for conservation of mass, balance of momentum, and balance of energy, together with a suitable constitutive equation for the specific energy density of the fluid. Historically, only the equations of conservation of mass and balance of momentum were derived by Euler. However, fluid dynamics literature often refers to the full set of the compressible Euler equations – including the energy equation – as "the compressible Euler equations". The mathematical characters of the incompressible and compressible Euler equations are rather different. For constant fluid density, the incompressible equations can be written as a quasilinear advection equation for the fluid velocity together with an elliptic Poisson's equation for the pressure. On the other hand, the compressible Euler equations form a quasilinear hyperbolic system of conservation equations. The Euler equations can be formulated in a "convective form" (also called the "Lagrangian form") or a "conservation form" (also called the "Eulerian form"). The convective form emphasizes changes to the state in a frame of reference moving with the fluid. The conservation form emphasizes the mathematical interpretation of the equations as conservation equations for a control volume fixed in space (which is useful from a numerical point of view). History The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", published in Mémoires de l'Académie des Sciences de Berlin in 1757 (although Euler had previously presented his work to the Berlin Academy in 1752). Prior work included contributions from the Bernoulli family as well as from Jean le Rond d'Alembert. The Euler equations were among the first partial differential equations to be written down, after the wave equation. In Euler's original work, the system of equations consisted of the momentum and continuity equations, and thus was underdetermined except in the case of an incompressible flow. An additional equation, which was called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. During the second half of the 19th century, it was found that the equation related to the balance of energy must always be retained for compressible flows, while the adiabatic condition is a consequence of the fundamental laws in the case of smooth solutions. With the discovery of the special theory of relativity, the concepts of energy density, momentum density, and stress were unified into the concept of the stress–energy tensor, and energy and momentum were likewise unified into a single concept, the energy–momentum vector.
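In the usual notation – ρ the mass density, u the flow velocity, p the pressure and E the total energy per unit volume – and neglecting external body forces, the compressible system referred to above can be stated in conservation form as:

\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho \mathbf{u}) = 0,\\
&\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla\cdot\bigl(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}\bigr) = \mathbf{0},\\
&\frac{\partial E}{\partial t} + \nabla\cdot\bigl((E + p)\,\mathbf{u}\bigr) = 0.
\end{aligned}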
Incompressible Euler equations with constant and uniform density In convective form (i.e., the form with the convective operator made explicit in the momentum equation), the incompressible Euler equations in the case of density constant in time and uniform in space are:

\frac{D\mathbf{u}}{Dt} = -\nabla w + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0,

where: \mathbf{u} is the flow velocity vector, with components u_1, \dots, u_N in an N-dimensional space; \frac{D\varphi}{Dt} = \frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla\varphi, for a generic function (or field) \varphi, denotes its material derivative in time with respect to the advective field \mathbf{u}; \nabla w is the gradient of the specific (in the sense of per unit mass) thermodynamic work, the internal source term; \nabla\cdot\mathbf{u} is the flow velocity divergence; and \mathbf{g} represents body accelerations (per unit mass) acting on the continuum, for example gravity, inertial accelerations, electric field acceleration, and so on. The first equation is the Euler momentum equation with uniform density (for this equation the density could also be non-constant in time). By expanding the material derivative, the equations become:

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla w + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0.

In fact, for a flow with uniform density \rho the following identity holds:

\nabla w = \nabla\!\left(\frac{p}{\rho}\right) = \frac{1}{\rho}\nabla p,

where p is the mechanical pressure. The second equation is the incompressible constraint, stating that the flow velocity is a solenoidal field (the order of the equations is not causal, but underlines the fact that the incompressible constraint is not a degenerate form of the continuity equation, but rather of the energy equation, as will become clear in the following). Notably, the continuity equation would also be required in this incompressible case as an additional third equation if the density varies in time or in space. For example, with density nonuniform in space but constant in time, the continuity equation to be added to the above set would correspond to:

\mathbf{u}\cdot\nabla\rho = 0.

So the case of constant and uniform density is the only one not requiring the continuity equation as an additional equation, regardless of the presence or absence of the incompressible constraint. In fact, the case of incompressible Euler equations with constant and uniform density discussed here is a toy model featuring only two simplified equations, so it is ideal for didactical purposes even if of limited physical relevance. The equations above thus represent respectively conservation of mass (1 scalar equation) and momentum (1 vector equation containing N scalar components, where N is the physical dimension of the space of interest). Flow velocity and pressure are the so-called physical variables. In a coordinate system given by x_1, \dots, x_N, the velocity and external force vectors \mathbf{u} and \mathbf{g} have components u_i and g_i, respectively. Then the equations may be expressed in subscript notation as:

\frac{\partial u_i}{\partial t} + \sum_{j=1}^{N} \frac{\partial}{\partial x_j}\left(u_i u_j + w\,\delta_{ij}\right) = g_i, \qquad \sum_{i=1}^{N} \frac{\partial u_i}{\partial x_i} = 0,

where the i and j subscripts label the N-dimensional space components, and \delta_{ij} is the Kronecker delta. The use of Einstein notation (where the sum is implied by repeated indices instead of sigma notation) is also frequent. Properties Although Euler first presented these equations in 1755, many fundamental questions or concepts about them remain unanswered. In three space dimensions, in certain simplified scenarios, the Euler equations produce singularities. Smooth solutions of the free equations (in the sense of without source term: \mathbf{g} = 0) satisfy the conservation of specific kinetic energy:

\frac{\partial}{\partial t}\left(\frac{u^2}{2}\right) + \nabla\cdot\left[\left(\frac{u^2}{2} + w\right)\mathbf{u}\right] = 0.

In the one-dimensional case without the source term (both pressure gradient and external force), the momentum equation becomes the inviscid Burgers' equation:

\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = 0.

This model equation gives many insights into Euler equations. Nondimensionalisation In order to make the equations dimensionless, a characteristic length r_0 and a characteristic velocity u_0 need to be defined.
These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained:

u^* = \frac{u}{u_0}, \qquad r^* = \frac{r}{r_0}, \qquad t^* = \frac{u_0}{r_0}\,t, \qquad p^* = \frac{w}{u_0^2},

and of the field unit vector:

\hat{\mathbf{g}} = \frac{\mathbf{g}}{g}.

Substitution of these inverse relations in the Euler equations, defining the Froude number \mathrm{Fr} = u_0^2/(g\,r_0), yields (omitting the asterisks):

\frac{D\mathbf{u}}{Dt} = -\nabla p + \frac{1}{\mathrm{Fr}}\,\hat{\mathbf{g}}, \qquad \nabla\cdot\mathbf{u} = 0.

Euler equations in the Froude limit (no external field) are named free equations and are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory. Conservation form The conservation form emphasizes the mathematical properties of the Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are some advantages in using the conserved variables. This gives rise to a large class of numerical methods called conservative methods. The free Euler equations are conservative, in the sense that they are equivalent to a conservation equation:

\frac{\partial \mathbf{y}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{0},

or simply in Einstein notation:

\frac{\partial y_j}{\partial t} + \frac{\partial f_{ij}}{\partial x_i} = 0,

where the conservation quantity \mathbf{y} in this case is a vector, and \mathbf{F} is a flux matrix. This can be simply proved. At last the Euler equations can be recast into the particular equation: Spatial dimensions For certain problems, especially when used to analyze compressible flow in a duct or in case the flow is cylindrically or spherically symmetric, the one-dimensional Euler equations are a useful first approximation. Generally, the Euler equations are solved by Riemann's method of characteristics. This involves finding curves in the plane of independent variables (i.e., x and t) along which partial differential equations (PDEs) degenerate into ordinary differential equations (ODEs). Numerical solutions of the Euler equations rely heavily on the method of characteristics. Incompressible Euler equations In convective form the incompressible Euler equations in the case of density variable in space are:

\frac{D\rho}{Dt} = 0, \qquad \frac{D\mathbf{u}}{Dt} = -\frac{\nabla p}{\rho} + \mathbf{g}, \qquad \nabla\cdot\mathbf{u} = 0,

where the additional variables are: \rho, the fluid mass density, and p, the pressure. The first equation, which is the new one, is the incompressible continuity equation. In fact the general continuity equation would be:

\frac{\partial \rho}{\partial t} + \mathbf{u}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{u} = 0,

but here the last term is identically zero for the incompressibility constraint. Conservation form The incompressible Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:

\mathbf{y} = \begin{pmatrix} \rho \\ \rho\,\mathbf{u} \end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix} \rho\,\mathbf{u} \\ \rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I} \end{pmatrix}.

Here \mathbf{y} has length N + 1 and \mathbf{F} has size N(N + 1). In general (not only in the Froude limit) Euler equations are expressible as:

\frac{\partial \mathbf{y}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{s}.

Conservation variables The variables for the equations in conservation form are not yet optimised. In fact we could define:

\mathbf{j} = \rho\,\mathbf{u},

where \mathbf{j} is the momentum density, a conservation variable, and

\mathbf{f} = \rho\,\mathbf{g},

where \mathbf{f} is the force density, a conservation variable. Euler equations In differential convective form, the compressible (and most general) Euler equations can be written shortly with the material derivative notation:

\frac{D\rho}{Dt} = -\rho\,\nabla\cdot\mathbf{u}, \qquad \frac{D\mathbf{u}}{Dt} = -\frac{\nabla p}{\rho} + \mathbf{g}, \qquad \frac{De}{Dt} = -\frac{p}{\rho}\,\nabla\cdot\mathbf{u},

where the additional variable here is e, the specific internal energy (internal energy per unit mass). The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in the variable internal energy allows one to understand the link with the incompressible case, but it is not in the simplest form. Mass density, flow velocity and pressure are the so-called convective variables (or physical variables, or Lagrangian variables), while mass density, momentum density and total energy density are the so-called conserved variables (also called Eulerian, or mathematical variables).
If one expands the material derivative, the equations above become:

\frac{\partial \rho}{\partial t} + \mathbf{u}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{\nabla p}{\rho} = \mathbf{g}, \qquad \frac{\partial e}{\partial t} + \mathbf{u}\cdot\nabla e + \frac{p}{\rho}\,\nabla\cdot\mathbf{u} = 0.

Incompressible constraint (revisited) Coming back to the incompressible case, it now becomes apparent that the incompressible constraint typical of the former cases is actually a particular form, valid for incompressible flows, of the energy equation, and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation:

\frac{De}{Dt} = 0.

Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, also in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows. Based on the mass conservation equation, one can put this equation in the conservation form:

\frac{\partial (\rho e)}{\partial t} + \nabla\cdot(\rho e\,\mathbf{u}) = 0,

meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy. Enthalpy conservation Since by definition the specific enthalpy is:

h = e + \frac{p}{\rho},

the material derivative of the specific internal energy can be expressed as:

\frac{De}{Dt} = \frac{Dh}{Dt} - \frac{1}{\rho}\frac{Dp}{Dt} + \frac{p}{\rho^2}\frac{D\rho}{Dt}.

Then, by substituting the mass equation in this expression, and the result in the energy equation, one obtains the enthalpy expression of the Euler energy equation:

\frac{Dh}{Dt} = \frac{1}{\rho}\frac{Dp}{Dt}.

In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure. Thermodynamics of ideal fluids In thermodynamics the independent variables are the specific volume v and the specific entropy s, while the specific energy e is a function of state of these two variables. For a thermodynamic fluid, the compressible Euler equations are consequently best written as:

\frac{Dv}{Dt} = v\,\nabla\cdot\mathbf{u}, \qquad \frac{D\mathbf{u}}{Dt} = -v\,\nabla p + \mathbf{g}, \qquad \frac{Ds}{Dt} = 0,

where: v is the specific volume, \mathbf{u} is the flow velocity vector, and s is the specific entropy. In the general case, and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, also in a time-dependent flow. Based on the mass conservation equation, one can put this equation in the conservation form:

\frac{\partial (\rho s)}{\partial t} + \nabla\cdot(\rho s\,\mathbf{u}) = 0,

meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy. On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e. of the specific internal energy as a function of the two variables specific volume and specific entropy:

e = e(v, s).

The fundamental equation of state contains all the thermodynamic information about the system (Callen, 1985), exactly like the couple of a thermal equation of state together with a caloric equation of state. Conservation form The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively:

\mathbf{y} = \begin{pmatrix} \rho \\ \mathbf{j} \\ E^t \end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix} \mathbf{j}^T \\ \dfrac{\mathbf{j}\otimes\mathbf{j}}{\rho} + p\,\mathbf{I} \\ \left(E^t + p\right)\dfrac{\mathbf{j}^T}{\rho} \end{pmatrix},

where: \mathbf{j} = \rho\,\mathbf{u} is the momentum density, a conservation variable, and E^t = \rho e + \tfrac{1}{2}\rho u^2 is the total energy density (total energy per unit volume). Here \mathbf{y} has length N + 2 and \mathbf{F} has size N(N + 2). In general (not only in the Froude limit) Euler equations are expressible as:

\frac{\partial \mathbf{y}}{\partial t} + \nabla\cdot\mathbf{F} = \begin{pmatrix} 0 \\ \mathbf{f} \\ \mathbf{u}\cdot\mathbf{f} \end{pmatrix},

where \mathbf{f} = \rho\,\mathbf{g} is the force density, a conservation variable. We remark that the Euler equations, even when conservative (no external field, Froude limit), have no Riemann invariants in general.
Some further assumptions are required to find them. However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is equivalent to the conservation equation:

\frac{\partial (\rho s)}{\partial t} + \nabla\cdot(\rho s\,\mathbf{u}) = 0.

Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as:

\frac{\partial}{\partial t}\begin{pmatrix} \rho \\ \mathbf{j} \\ \rho s \end{pmatrix} + \nabla\cdot\begin{pmatrix} \mathbf{j}^T \\ \dfrac{\mathbf{j}\otimes\mathbf{j}}{\rho} + p\,\mathbf{I} \\ s\,\mathbf{j}^T \end{pmatrix} = \begin{pmatrix} 0 \\ \mathbf{f} \\ 0 \end{pmatrix},

where \rho s is the entropy density, a thermodynamic conservation variable. Another possible form for the energy equation, being particularly useful for isobaric processes, is:

\frac{\partial H^t}{\partial t} + \nabla\cdot\left(H^t\,\mathbf{u}\right) = \frac{\partial p}{\partial t},

where H^t = E^t + p is the total enthalpy density. Quasilinear form and characteristic equations Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting (approximate) solutions to the Riemann problem. In regions where the state vector \mathbf{y} varies smoothly, the equations in conservative form can be put in quasilinear form:

\frac{\partial \mathbf{y}}{\partial t} + \mathbf{A}_i\,\frac{\partial \mathbf{y}}{\partial x_i} = \mathbf{0},

where \mathbf{A}_i are called the flux Jacobians, defined as the matrices:

\mathbf{A}_i(\mathbf{y}) = \frac{\partial \mathbf{f}_i(\mathbf{y})}{\partial \mathbf{y}}.

These Jacobians do not exist where the state variables are discontinuous, as at contact discontinuities or shocks. Characteristic equations The compressible Euler equations can be decoupled into a set of N+2 wave equations that describe sound in an Eulerian continuum if they are expressed in characteristic variables instead of conserved variables. In fact the tensor \mathbf{A} is always diagonalizable. If the eigenvalues are all real (the case of Euler equations) the system is defined hyperbolic, and physically the eigenvalues represent the speeds of propagation of information. If they are all distinct, the system is defined strictly hyperbolic (this will be proved to be the case of the one-dimensional Euler equations). Furthermore, diagonalisation of the compressible Euler equations is easier when the energy equation is expressed in the variable entropy (i.e. with equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case. If \mathbf{p}_i is the right eigenvector of the matrix \mathbf{A} corresponding to the eigenvalue \lambda_i, by building the projection matrix:

\mathbf{P} = \left(\mathbf{p}_1, \mathbf{p}_2, \dots, \mathbf{p}_{N+2}\right),

one can finally find the characteristic variables as:

\mathbf{w} = \mathbf{P}^{-1}\mathbf{y}.

Since \mathbf{A} is constant, multiplying the original 1-D equation in flux-Jacobian form with \mathbf{P}^{-1} yields the characteristic equations:

\frac{\partial w_i}{\partial t} + \lambda_i\,\frac{\partial w_i}{\partial x} = 0.

The original equations have been decoupled into N+2 characteristic equations, each describing a simple wave, with the eigenvalues being the wave speeds. The variables w_i are called the characteristic variables and are a subset of the conservative variables. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is:

w_i(x, t) = w_i(x - \lambda_i t, 0).

Then the solution in terms of the original conservative variables is obtained by transforming back, \mathbf{y} = \mathbf{P}\mathbf{w}; this computation can be made explicit as the linear combination of the eigenvectors:

\mathbf{y}(x, t) = \sum_i w_i(x - \lambda_i t, 0)\,\mathbf{p}_i.

Now it becomes apparent that the characteristic variables act as weights in the linear combination of the Jacobian eigenvectors. The solution can be seen as a superposition of waves, each of which is advected independently without change in shape. Each i-th wave has shape w_i \mathbf{p}_i and speed of propagation \lambda_i. In the following we show a very simple example of this solution procedure.
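Before the analytical example, the procedure can also be illustrated numerically. The sketch below uses an illustrative constant-coefficient 2×2 hyperbolic matrix (an acoustic-type toy system, not a specific fluid state): it advects each characteristic variable at its own eigenvalue speed and maps the result back to the conservative variables.

import numpy as np

# Constant-coefficient linearized 1-D system y_t + A y_x = 0.
# A is an illustrative 2x2 hyperbolic matrix with wave speeds +a and -a.
a = 1.0
A = np.array([[0.0, a],
              [a, 0.0]])                  # eigenvalues +a, -a: strictly hyperbolic

lam, P = np.linalg.eig(A)                 # columns of P are right eigenvectors
Pinv = np.linalg.inv(P)

def solve(y0, x, t):
    """Exact solution: each characteristic variable w_i = (P^-1 y)_i is
    advected at its own speed lam_i, then mapped back via y = P w."""
    w = np.stack([(Pinv @ y0(x - lam[i] * t))[i] for i in range(len(lam))])
    return P @ w

# Example: a Gaussian pulse in the first variable splits into two waves.
y0 = lambda xs: np.stack([np.exp(-xs**2), np.zeros_like(xs)])
x = np.linspace(-5, 5, 11)
print(solve(y0, x, t=2.0))

The same decomposition underlies characteristic-based solvers: the eigenvectors decide the wave shapes, the eigenvalues their propagation speeds.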
Waves in 1D inviscid, nonconductive thermodynamic fluid If one considers the Euler equations for a thermodynamic fluid with the two further assumptions of one spatial dimension and free flow (no external field: g = 0), and if one defines the vector of variables:

\mathbf{y} = \begin{pmatrix} v \\ u \\ s \end{pmatrix},

recalling that v is the specific volume, u the flow speed, and s the specific entropy, the corresponding Jacobian matrix is:

\mathbf{A} = \begin{pmatrix} u & -v & 0 \\ v\,\dfrac{\partial p}{\partial v} & u & v\,\dfrac{\partial p}{\partial s} \\ 0 & 0 & u \end{pmatrix}.

At first one must find the eigenvalues of this matrix by solving the characteristic equation:

\det(\mathbf{A} - \lambda\,\mathbf{I}) = 0,

that is explicitly:

\det\begin{pmatrix} u - \lambda & -v & 0 \\ v\,\dfrac{\partial p}{\partial v} & u - \lambda & v\,\dfrac{\partial p}{\partial s} \\ 0 & 0 & u - \lambda \end{pmatrix} = 0.

This determinant is very simple: the fastest computation starts on the last row, since it has the highest number of zero elements. Now by computing the 2×2 determinant:

(u - \lambda)\left[(u - \lambda)^2 + v^2\,\frac{\partial p}{\partial v}\right] = 0,

by defining the parameter:

a(v, s) \equiv v\,\sqrt{-\frac{\partial p}{\partial v}},

or equivalently in mechanical variables, as:

a(\rho, p) \equiv \sqrt{\frac{\partial p}{\partial \rho}}.

This parameter is always real according to the second law of thermodynamics. In fact the second law of thermodynamics can be expressed by several postulates. The most elementary of them in mathematical terms is the statement of convexity of the fundamental equation of state, i.e. that the Hessian matrix of the specific energy expressed as a function of specific volume and specific entropy:

\begin{pmatrix} \dfrac{\partial^2 e}{\partial v^2} & \dfrac{\partial^2 e}{\partial v\,\partial s} \\ \dfrac{\partial^2 e}{\partial v\,\partial s} & \dfrac{\partial^2 e}{\partial s^2} \end{pmatrix},

is positive definite. This statement corresponds to the two conditions:

\frac{\partial^2 e}{\partial v^2} > 0, \qquad \frac{\partial^2 e}{\partial v^2}\,\frac{\partial^2 e}{\partial s^2} - \left(\frac{\partial^2 e}{\partial v\,\partial s}\right)^{\!2} > 0.

The first condition is the one ensuring the parameter a is defined real. The characteristic equation finally results:

(u - \lambda)\left[(u - \lambda)^2 - a^2\right] = 0.

That has three real solutions:

\lambda_1 = u - a, \qquad \lambda_2 = u, \qquad \lambda_3 = u + a.

Then the matrix has three real eigenvalues, all distinct: the 1D Euler equations are a strictly hyperbolic system. At this point one should determine the three eigenvectors: each one is obtained by substituting one eigenvalue in the eigenvalue equation and then solving it. By substituting the first eigenvalue λ1 one obtains:

\begin{pmatrix} a & -v & 0 \\ v\,\dfrac{\partial p}{\partial v} & a & v\,\dfrac{\partial p}{\partial s} \\ 0 & 0 & a \end{pmatrix} \begin{pmatrix} v_1 \\ u_1 \\ s_1 \end{pmatrix} = \mathbf{0}.

Based on the third equation, which simply has solution s1 = 0, the system reduces to:

a\,v_1 - v\,u_1 = 0, \qquad v\,\frac{\partial p}{\partial v}\,v_1 + a\,u_1 = 0.

The two equations are redundant as usual; then the eigenvector is defined up to a multiplying constant. We choose as right eigenvector:

\mathbf{p}_1 = \begin{pmatrix} v \\ a \\ 0 \end{pmatrix}.

The other two eigenvectors can be found with an analogous procedure as:

\mathbf{p}_2 = \begin{pmatrix} \dfrac{\partial p}{\partial s} \\ 0 \\ -\dfrac{\partial p}{\partial v} \end{pmatrix}, \qquad \mathbf{p}_3 = \begin{pmatrix} v \\ -a \\ 0 \end{pmatrix}.

Then the projection matrix can be built:

\mathbf{P} = \begin{pmatrix} v & \dfrac{\partial p}{\partial s} & v \\ a & 0 & -a \\ 0 & -\dfrac{\partial p}{\partial v} & 0 \end{pmatrix}.

Finally it becomes apparent that the real parameter a previously defined is the speed of propagation of the information characteristic of the hyperbolic system made of Euler equations, i.e. it is the wave speed. It remains to be shown that the sound speed corresponds to the particular case of an isentropic transformation. Compressibility and sound speed Sound speed is defined as the wave speed of an isentropic transformation:

a(\rho, p) \equiv \sqrt{\left(\frac{\partial p}{\partial \rho}\right)_{\!s}};

by the definition of the isentropic compressibility:

K_s \equiv \rho\left(\frac{\partial p}{\partial \rho}\right)_{\!s},

the sound speed always results as the square root of the ratio between the isentropic compressibility and the density:

a = \sqrt{\frac{K_s}{\rho}}.

Ideal gas The sound speed in an ideal gas depends only on its temperature:

a = \sqrt{\gamma\,\frac{R}{m}\,T}.

Since the specific enthalpy in an ideal gas is proportional to its temperature:

h = c_p\,T = \frac{\gamma}{\gamma - 1}\,\frac{R}{m}\,T,

the sound speed in an ideal gas can also be made dependent only on its specific enthalpy:

a = \sqrt{(\gamma - 1)\,h}.

Bernoulli's theorem for steady inviscid flow Bernoulli's theorem is a direct consequence of the Euler equations. Incompressible case and Lamb's form The vector calculus identity of the cross product of a curl holds:

\mathbf{u}\times(\nabla\times\mathbf{u}) = \nabla_{\mathbf{u}}(\mathbf{u}\cdot\mathbf{u}) - (\mathbf{u}\cdot\nabla)\mathbf{u} = \tfrac{1}{2}\nabla\!\left(u^2\right) - (\mathbf{u}\cdot\nabla)\mathbf{u},

where the Feynman subscript notation \nabla_{\mathbf{u}} is used, which means the subscripted gradient operates only on the factor \mathbf{u}.
Lamb, in his famous classical book Hydrodynamics (1895), still in print, used this identity to change the convective term of the flow velocity into rotational form; the Euler momentum equation in Lamb's form becomes:

\frac{\partial \mathbf{u}}{\partial t} + (\nabla\times\mathbf{u})\times\mathbf{u} + \nabla\!\left(\frac{u^2}{2}\right) = -\frac{\nabla p}{\rho} + \mathbf{g}.

Now, based on the other identity:

\nabla\!\left(\frac{p}{\rho}\right) = \frac{\nabla p}{\rho} - \frac{p}{\rho^2}\,\nabla\rho,

the Euler momentum equation assumes a form that is optimal to demonstrate Bernoulli's theorem for steady flows. In fact, in case of an external conservative field, by defining its potential φ:

\mathbf{g} = -\nabla\phi.

In case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes:

(\nabla\times\mathbf{u})\times\mathbf{u} + \nabla\!\left(\frac{u^2}{2} + \phi\right) = -\frac{\nabla p}{\rho}.

And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears because its result is always perpendicular to the velocity:

\mathbf{u}\cdot\nabla\!\left(\frac{u^2}{2} + \phi\right) = -\frac{\mathbf{u}\cdot\nabla p}{\rho}.

In the steady incompressible case the mass equation is simply:

\mathbf{u}\cdot\nabla\rho = 0;

that is, mass conservation for a steady incompressible flow states that the density along a streamline is constant. Then the Euler momentum equation in the steady incompressible case becomes:

\mathbf{u}\cdot\nabla\!\left(\frac{u^2}{2} + \phi + \frac{p}{\rho}\right) = 0.

The convenience of defining the total head for an inviscid liquid flow is now apparent:

b_l \equiv \frac{u^2}{2} + \phi + \frac{p}{\rho},

which may be simply written as:

\mathbf{u}\cdot\nabla b_l = 0.

That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant. Compressible case In the most general steady (compressible) case the mass equation in conservation form is:

\nabla\cdot(\rho\,\mathbf{u}) = 0.

Therefore, the previous expression is rather:

\mathbf{u}\cdot\nabla\!\left(\frac{u^2}{2} + \phi\right) = -\frac{\mathbf{u}\cdot\nabla p}{\rho}.

The right-hand side appears in the energy equation in convective form, which in the steady state reads:

\mathbf{u}\cdot\nabla\!\left(e + \frac{p}{\rho}\right) = \frac{\mathbf{u}\cdot\nabla p}{\rho}.

The energy equation therefore becomes:

\mathbf{u}\cdot\nabla\!\left(e + \frac{p}{\rho} + \frac{u^2}{2} + \phi\right) = 0,

so that the internal specific energy now features in the head. Since the external field potential is usually small compared to the other terms, it is convenient to group the latter ones in the total enthalpy:

h^t \equiv e + \frac{p}{\rho} + \frac{u^2}{2},

and the Bernoulli invariant for an inviscid gas flow is:

b_g \equiv h^t + \phi,

which can be written as:

\mathbf{u}\cdot\nabla b_g = 0.

That is, the energy balance for a steady inviscid flow in an external conservative field states that the sum of the total enthalpy and the external potential is constant along a streamline. In the usual case of small potential field, simply:

\mathbf{u}\cdot\nabla h^t = 0.

Friedmann form and Crocco form By substituting the pressure gradient with the entropy and enthalpy gradients, according to the first law of thermodynamics in the enthalpy form:

\frac{\nabla p}{\rho} = \nabla h - T\,\nabla s,

in the convective form of the Euler momentum equation, one arrives at:

\frac{D\mathbf{u}}{Dt} = T\,\nabla s - \nabla h + \mathbf{g}.

Friedmann deduced this equation for the particular case of a perfect gas and published it in 1922. However, this equation is general for an inviscid nonconductive fluid and no equation of state is implicit in it. On the other hand, by substituting the enthalpy form of the first law of thermodynamics in the rotational form of the Euler momentum equation, one obtains:

\frac{\partial \mathbf{u}}{\partial t} + (\nabla\times\mathbf{u})\times\mathbf{u} + \nabla\!\left(\frac{u^2}{2}\right) = T\,\nabla s - \nabla h + \mathbf{g},

and by defining the specific total enthalpy:

h^t = h + \frac{u^2}{2},

one arrives at the Crocco–Vazsonyi form (Crocco, 1937) of the Euler momentum equation:

\frac{\partial \mathbf{u}}{\partial t} + (\nabla\times\mathbf{u})\times\mathbf{u} - T\,\nabla s + \nabla h^t = \mathbf{g}.

In the steady case the two variables entropy and total enthalpy are particularly useful, since the free Euler equations can be recast into Crocco's form:

(\nabla\times\mathbf{u})\times\mathbf{u} - T\,\nabla s + \nabla h^t = \mathbf{0}, \qquad \mathbf{u}\cdot\nabla s = 0, \qquad \mathbf{u}\cdot\nabla h^t = 0.

Finally, if the flow is also isothermal:

T\,\nabla s = \nabla(T s),

by defining the specific total Gibbs free energy:

g^t \equiv h^t - T s,

Crocco's form can be reduced to:

(\nabla\times\mathbf{u})\times\mathbf{u} + \nabla g^t = \mathbf{0}.

From these relationships one deduces that the specific total free energy is uniform in a steady, irrotational, isothermal, isentropic, inviscid flow. Discontinuities The Euler equations are quasilinear hyperbolic equations and their general solutions are waves. Under certain assumptions they can be simplified, leading to the Burgers equation.
Much like the familiar oceanic waves, waves described by the Euler equations 'break' and so-called shock waves are formed; this is a nonlinear effect and represents the solution becoming multi-valued. Physically this represents a breakdown of the assumptions that led to the formulation of the differential equations, and to extract further information from the equations we must go back to the more fundamental integral form. Then, weak solutions are formulated by working with 'jumps' (discontinuities) in the flow quantities – density, velocity, pressure, entropy – using the Rankine–Hugoniot equations. Physical quantities are rarely discontinuous; in real flows, these discontinuities are smoothed out by viscosity and by heat transfer. (See Navier–Stokes equations.) Shock propagation is studied – among many other fields – in aerodynamics and rocket propulsion, where sufficiently fast flows occur. To properly compute the continuum quantities in discontinuous zones (for example shock waves or boundary layers) from the local forms (all the above forms are local forms, since the variables being described are typical of one point in the space considered, i.e. they are local variables) of the Euler equations through finite difference methods, generally far too many space points and time steps would be necessary for the memory of computers now and in the near future. In these cases it is mandatory to avoid the local forms of the conservation equations, passing to some weak forms, such as the finite volume one. Rankine–Hugoniot equations Starting from the simplest case, one considers a steady free conservation equation in conservation form in the space domain:

\nabla\cdot\mathbf{F} = \mathbf{0},

where in general F is the flux matrix. By integrating this local equation over a fixed volume Vm, it becomes:

\int_{V_m} \nabla\cdot\mathbf{F}\; dV = \mathbf{0}.

Then, based on the divergence theorem, we can transform this integral into a boundary integral of the flux:

\oint_{\partial V_m} \mathbf{F}\,\hat{n}\; dS = \mathbf{0}.

This global form simply states that there is no net flux of a conserved quantity passing through a region, in the steady case without source. In 1D the volume reduces to an interval, its boundary being its extrema; then the divergence theorem reduces to the fundamental theorem of calculus:

\int_{x_1}^{x_2} \frac{d\mathbf{F}}{dx}\; dx = \mathbf{F}(x_2) - \mathbf{F}(x_1) = \mathbf{0},

that is the simple finite difference equation, known as the jump relation:

\Delta\mathbf{F} = \mathbf{0}.

That can be made explicit as:

\mathbf{F}_2 - \mathbf{F}_1 = \mathbf{0},

where the notation employed is:

\Delta a = a_2 - a_1.

Or, if one performs an indefinite integral:

\mathbf{F} = \text{const}.

On the other hand, a transient conservation equation:

\frac{\partial \mathbf{y}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{0},

brings to a jump relation:

v_s\,\Delta\mathbf{y} = \Delta\mathbf{F},

where v_s is the speed of propagation of the discontinuity. For the one-dimensional Euler equations the conservation variables and the flux are the vectors:

\mathbf{y} = \begin{pmatrix} \rho \\ j \\ E^t \end{pmatrix}, \qquad \mathbf{F} = \begin{pmatrix} j \\ v\,j^2 + p \\ v\,j\,(E^t + p) \end{pmatrix},

where: v is the specific volume and j is the mass flux. In the one-dimensional case the corresponding jump relations, called the Rankine–Hugoniot equations, are:

v_s\,\Delta\rho = \Delta j, \qquad v_s\,\Delta j = \Delta\!\left(v\,j^2 + p\right), \qquad v_s\,\Delta E^t = \Delta\!\left(v\,j\,(E^t + p)\right).

In the steady one-dimensional case they become simply:

\Delta j = 0, \qquad \Delta\!\left(v\,j^2 + p\right) = 0, \qquad \Delta\!\left(v\,j\,(E^t + p)\right) = 0.

Thanks to the mass difference equation, the energy difference equation can be simplified without any restriction:

\Delta h^t = 0,

where h^t is the specific total enthalpy. These are usually expressed in the convective variables:

\Delta(\rho u) = 0, \qquad \Delta\!\left(\rho u^2 + p\right) = 0, \qquad \Delta\!\left(e + \frac{p}{\rho} + \frac{u^2}{2}\right) = 0,

where: u is the flow speed and e is the specific internal energy. The energy equation is an integral form of the Bernoulli equation in the compressible case. The former mass and momentum equations by substitution lead to the Rayleigh equation:

\frac{\Delta p}{\Delta v} = -j^2.

Since the second term is a constant, the Rayleigh equation always describes a simple line in the pressure–volume plane, not dependent on any equation of state, i.e. the Rayleigh line. By substitution in the Rankine–Hugoniot equations, that can also be made explicit as:

p_2 - p_1 = -j^2\,(v_2 - v_1).

One can also obtain the kinetic equation and the Hugoniot equation; a numerical check of these jump relations for an ideal gas is sketched below.
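The following sketch is the promised check: for a steady normal shock in an ideal gas with γ = 1.4 and an assumed upstream Mach number of 2 (illustrative values), the classical normal-shock relations produce a downstream state whose mass, momentum and energy fluxes match the upstream ones, as the jump relations require.

import numpy as np

# Numerical check of the Rankine-Hugoniot jump relations across a steady
# normal shock in an ideal gas, in the frame of the shock.
# gamma, the upstream state and the Mach number are illustrative choices.
g = 1.4
rho1, p1, M1 = 1.0, 1.0, 2.0
a1 = np.sqrt(g * p1 / rho1)       # upstream sound speed
u1 = M1 * a1                      # upstream speed in the shock frame

# Classical normal-shock relations for the downstream state:
p2 = p1 * (1 + 2 * g / (g + 1) * (M1**2 - 1))
rho2 = rho1 * (g + 1) * M1**2 / ((g - 1) * M1**2 + 2)
u2 = u1 * rho1 / rho2             # from mass conservation

def fluxes(rho, u, p):
    """Mass, momentum and energy fluxes appearing in the jump relations."""
    E = p / (g - 1) + 0.5 * rho * u**2      # total energy density
    return np.array([rho * u, rho * u**2 + p, (E + p) * u])

print(fluxes(rho1, u1, p1))       # the two lines agree to rounding:
print(fluxes(rho2, u2, p2))       # the jumps satisfy Rankine-Hugoniot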
The analytical passages are not shown here for brevity. The kinetic equation and the Hugoniot equation are respectively:

\frac{u_2^2 - u_1^2}{2} = \frac{(p_1 - p_2)(v_1 + v_2)}{2}, \qquad e_2 - e_1 = \frac{(p_1 + p_2)(v_1 - v_2)}{2}.

The Hugoniot equation, coupled with the fundamental equation of state of the material:

e = e(v, s),

describes in general, in the pressure–volume plane, a curve passing through the conditions (v0, p0), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered. It is also customary to define a Hugoniot function:

\mathcal{H}(v, s) \equiv e(v, s) - e_0 + \frac{1}{2}\bigl(p(v, s) + p_0\bigr)(v - v_0),

which vanishes on the Hugoniot curve, allowing one to quantify deviations from the Hugoniot equation, similarly to the previous definition of the hydraulic head, useful for the deviations from the Bernoulli equation. Finite volume form On the other hand, by integrating a generic conservation equation:

\frac{\partial \mathbf{y}}{\partial t} + \nabla\cdot\mathbf{F} = \mathbf{s},

on a fixed volume Vm, and then based on the divergence theorem, it becomes:

\frac{d}{dt}\int_{V_m} \mathbf{y}\; dV + \oint_{\partial V_m} \mathbf{F}\,\hat{n}\; dS = \int_{V_m} \mathbf{s}\; dV.

By integrating this equation also over a time interval:

\int_{V_m} \mathbf{y}(t_{n+1})\; dV - \int_{V_m} \mathbf{y}(t_n)\; dV + \int_{t_n}^{t_{n+1}} \oint_{\partial V_m} \mathbf{F}\,\hat{n}\; dS\, dt = \int_{t_n}^{t_{n+1}} \int_{V_m} \mathbf{s}\; dV\, dt.

Now, by defining the node conserved quantity:

\mathbf{y}_m \equiv \frac{1}{V_m}\int_{V_m} \mathbf{y}\; dV,

we deduce the finite volume form:

\mathbf{y}_m(t_{n+1}) = \mathbf{y}_m(t_n) - \frac{1}{V_m}\int_{t_n}^{t_{n+1}} \oint_{\partial V_m} \mathbf{F}\,\hat{n}\; dS\, dt + \frac{1}{V_m}\int_{t_n}^{t_{n+1}} \int_{V_m} \mathbf{s}\; dV\, dt.

In particular, for the Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution:

u = \frac{j}{\rho}, \qquad e = \frac{E^t}{\rho} - \frac{u^2}{2}.

Then the explicit finite volume expressions of the original convective variables are:

u_m = \frac{j_m}{\rho_m}, \qquad e_m = \frac{E^t_m}{\rho_m} - \frac{u_m^2}{2}.

Constraints It has been shown that the Euler equations are not a complete set of equations: they require some additional constraints to admit a unique solution. These are the equations of state of the material considered. To be consistent with thermodynamics these equations of state should satisfy the two laws of thermodynamics. On the other hand, by definition non-equilibrium systems are described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on Euler equations. Ideal polytropic gas For an ideal polytropic gas the fundamental equation of state is:

e(v, s) = c_v\,T_0\left(\frac{v_0}{v}\right)^{\gamma - 1} \exp\!\left(\frac{s - s_0}{c_v}\right), \qquad c_v = \frac{R}{m(\gamma - 1)},

where e is the specific energy, v is the specific volume, s is the specific entropy, m is the molecular mass, γ here is considered a constant (polytropic process), and can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics. From this equation one can derive the equation for pressure by its thermodynamic definition:

p(v, s) = -\frac{\partial e}{\partial v} = (\gamma - 1)\,\frac{e}{v}.

By inverting it one arrives at the mechanical equation of state:

e(v, p) = \frac{p\,v}{\gamma - 1}.

Then for an ideal gas the compressible Euler equations can be simply expressed in the mechanical or primitive variables specific volume, flow velocity and pressure, by taking the set of the equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. At last, in convective form they result:

\frac{Dv}{Dt} = v\,\nabla\cdot\mathbf{u}, \qquad \frac{D\mathbf{u}}{Dt} = -v\,\nabla p + \mathbf{g}, \qquad \frac{Dp}{Dt} = -\gamma\,p\,\nabla\cdot\mathbf{u},

and in one-dimensional quasilinear form they result:

\frac{\partial \mathbf{y}}{\partial t} + \mathbf{A}\,\frac{\partial \mathbf{y}}{\partial x} = \mathbf{0},

where the vector variable is:

\mathbf{y} = \begin{pmatrix} v \\ u \\ p \end{pmatrix},

and the corresponding Jacobian matrix is:

\mathbf{A} = \begin{pmatrix} u & -v & 0 \\ 0 & u & v \\ 0 & \gamma p & u \end{pmatrix}.

Steady flow in material coordinates In the case of steady flow, it is convenient to choose the Frenet–Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation:

(\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{\nabla p}{\rho},

where u, p and ρ denote the flow velocity, the pressure and the density, respectively. Let (\mathbf{t}, \mathbf{n}, \mathbf{b}) be a Frenet–Serret orthonormal basis which consists of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. Since a streamline is a curve that is tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be described as follows:

(\mathbf{u}\cdot\nabla)\mathbf{u} = u\,\frac{\partial u}{\partial s}\,\mathbf{t} + \frac{u^2}{R}\,\mathbf{n},

where s is the arc length along the streamline and R is the radius of curvature of the streamline.
Therefore, the momentum part of the Euler equations for a steady flow is found to have a simple form:

u\,\frac{\partial u}{\partial s} = -\frac{1}{\rho}\frac{\partial p}{\partial s}, \qquad \frac{u^2}{R} = -\frac{1}{\rho}\frac{\partial p}{\partial n}, \qquad 0 = -\frac{1}{\rho}\frac{\partial p}{\partial b}.

For barotropic flow, with \rho = \rho(p), Bernoulli's equation is derived from the first equation:

\frac{\partial}{\partial s}\left(\frac{u^2}{2} + \int \frac{dp}{\rho(p)}\right) = 0.

The second equation expresses that, in the case the streamline is curved, there should exist a pressure gradient normal to the streamline, because the centripetal acceleration of the fluid parcel is only generated by the normal pressure gradient. The third equation expresses that pressure is constant along the binormal axis. Streamline curvature theorem Let r be the distance from the center of curvature of the streamline; then the second equation is written as follows:

\frac{\partial p}{\partial r} = \frac{\rho\,u^2}{r} > 0,

where \partial/\partial r denotes differentiation in the radial direction, away from the center of curvature. This equation states: In a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure. Although this relationship between the pressure field and flow curvature is very useful, it doesn't have a name in the English-language scientific literature. Japanese fluid-dynamicists call the relationship the "streamline curvature theorem". This "theorem" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines. This also is a way to intuitively explain why airfoils generate lift forces. Exact solutions All potential flow solutions are also solutions of the Euler equations, and in particular the incompressible Euler equations when the potential is harmonic. Solutions to the Euler equations with vorticity are: parallel shear flows – where the flow is unidirectional, and the flow velocity only varies in the cross-flow directions, e.g. in a Cartesian coordinate system (x, y, z) the flow is for instance in the x-direction, with the only non-zero velocity component u_x(y, z) dependent only on y and z and not on x; and the Arnold–Beltrami–Childress flow – an exact solution of the incompressible Euler equations. Two solutions of the three-dimensional Euler equations with cylindrical symmetry have been presented by Gibbon, Moore and Stuart in 2003. These two solutions have infinite energy; they blow up everywhere in space in finite time.
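The parallel shear flows just mentioned can be checked symbolically. The following sketch (using the sympy library, with an arbitrary cross-flow profile f(y, z)) verifies that such a velocity field is divergence-free and that its convective term vanishes, so the incompressible Euler equations are satisfied with constant pressure and no external field:

import sympy as sp

# Parallel shear flow: u = (f(y, z), 0, 0) with f an arbitrary profile.
x, y, z = sp.symbols('x y z')
f = sp.Function('f')(y, z)
u = sp.Matrix([f, 0, 0])

coords = (x, y, z)
# Divergence of u: only df/dx contributes, which is identically zero.
div_u = sum(sp.diff(u[i], coords[i]) for i in range(3))
# Convective term (u . grad) u, component by component.
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], coords[j]) for j in range(3))
                  for i in range(3)])
print(div_u)    # 0
print(conv.T)   # Matrix([[0, 0, 0]])

With both terms zero, the steady momentum equation reduces to grad p = 0, so any constant pressure completes the exact solution.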
Physical sciences
Fluid mechanics
Physics
396275
https://en.wikipedia.org/wiki/Wilderness
Wilderness
Wilderness or wildlands (usually in the plural) are Earth's natural environments that have not been significantly modified by human activity, or any nonurbanized land not under extensive agricultural cultivation. The term has traditionally referred to terrestrial environments, though growing attention is being placed on marine wilderness. Recent maps of wilderness suggest it covers roughly one-quarter of Earth's terrestrial surface, but is being rapidly degraded by human activity. Even less wilderness remains in the ocean, with only 13.2% free from intense human activity. Some governments establish protection for wilderness areas by law not only to preserve what already exists, but also to promote and advance a natural expression and development. These can be set up in preserves, conservation preserves, national forests, national parks and even in urban areas along rivers, gulches or otherwise undeveloped areas. Often these areas are considered important for the survival of certain species, biodiversity, ecological studies, conservation, solitude and recreation. They may also preserve historic genetic traits and provide habitat for wild flora and fauna that may be difficult to recreate in zoos, arboretums or laboratories. History Ancient times and Middle Ages From a visual arts perspective, nature and wildness have been important subjects in various epochs of world history. An early tradition of landscape art emerged in the Tang Dynasty (618–907). The tradition of representing nature as it is became one of the aims of Chinese painting and was a significant influence in Asian art. Artists in the tradition of Shan shui (lit. mountain-water-picture) learned to depict mountains and rivers "from the perspective of nature as a whole and on the basis of their understanding of the laws of nature… as if seen through the eyes of a bird". In the 13th century, Shih Erh Chi recommended avoiding painting "scenes lacking any places made inaccessible by nature". For most of human history, the greater part of Earth's terrain was wilderness, and human attention was concentrated on settled areas. The first known laws to protect parts of nature date back to the Babylonian Empire and the Chinese Empire. Ashoka, the Great Mauryan King, defined the first laws in the world to protect flora and fauna in the Edicts of Ashoka, around the 3rd century B.C. In the Middle Ages, the Kings of England initiated one of the world's first conscious efforts to protect natural areas. They were motivated by a desire to be able to hunt wild animals in private hunting preserves rather than by a desire to protect wilderness. Nevertheless, in order to have animals to hunt, they would have to protect wildlife from subsistence hunting and the land from villagers gathering firewood. Similar measures were introduced in other European countries. However, in European cultures throughout the Middle Ages, wilderness was generally not regarded as worth protecting; rather, it was judged strongly negatively, as a dangerous place and as a moral counter-world to the realm of culture and godly life. "While archaic nature religions oriented themselves towards nature, in medieval Christendom this orientation was replaced by one towards divine law. The divine was no longer to be found in nature; instead, uncultivated nature became a site of the sinister and the demonic. It was considered corrupted by the Fall (natura lapsa), becoming a vale of tears in which humans were doomed to live out their existence.
Thus, for example, mountains were interpreted [e.g., by Thomas Burnet] as ruins of a once flat earth destroyed by the Flood, with the seas as the remains of that Flood." "If paradise was early man's greatest good, wilderness, as its antipode, was his greatest evil." 15th to 19th century Wilderness was viewed by colonists as evil in its resistance to their control. The puritanical view of wilderness meant that in order for colonists to be able to live in North America, they had to destroy the wilderness to make way for their 'civilized' society. Wilderness was considered to be the root of the colonists' problems, so to make the problems go away, wilderness needed to be destroyed. One of the first steps in doing this was to get rid of trees in order to clear the land. Military metaphors describing the wilderness as the "enemy" were used, and settler expansion was phrased as "[conquering] the wilderness". In relation to the wilderness, Native Americans were viewed as savages. The relationship between Native Americans and the land was something colonists did not understand and did not try to understand. This mutually beneficial relationship was quite different from the colonists' view of the land solely in terms of how it could benefit them, expressed in a constant battle to beat the land and other living organisms into submission. The colonists' belief that the land was only something to be used was based in Christian ideas: if the earth and animals and plants were created by a Christian God for human use, then cultivation by the colonists was their God-given goal. However, the idea that what European colonists saw upon arriving in North America was pristine and devoid of humans is untrue, owing to the existence of Native Americans. The land was shaped by Native Americans through practices such as fires. Burning happened frequently and in a controlled manner. The landscapes seen in the US today are very different from the way things looked before colonists came. Fire could be used to maintain sources of food and of materials for cords and baskets. One of the main roles of frequent fires was to prevent the out-of-control fires which are becoming more and more common. The idea of wilderness having intrinsic value emerged in the Western world in the 19th century. British artists John Constable and J. M. W. Turner turned their attention to capturing the beauty of the natural world in their paintings. Prior to that, paintings had been primarily of religious scenes or of human beings. William Wordsworth's poetry described the wonder of the natural world, which had formerly been viewed as a threatening place. Increasingly, the valuing of nature became an aspect of Western culture. By the mid-19th century, in Germany, "Scientific Conservation", as it was called, advocated "the efficient utilization of natural resources through the application of science and technology". Concepts of forest management based on the German approach were applied in other parts of the world, but with varying degrees of success. Over the course of the 19th century wilderness came to be viewed not as a place to fear but as a place to enjoy and protect; hence came the conservation movement in the latter half of the 19th century. Rivers were rafted and mountains were climbed solely for the sake of recreation, not to determine their geographical context. In 1861, following intense lobbying by artists of the Barbizon school, the French Waters and Forests Military Agency established an "artistic reserve" in Fontainebleau State Forest.
With a total of 1,097 hectares, it is thought to be the first nature reserve in the world. Modern conservation Global conservation became an issue at the time of the dissolution of the British Empire in Africa in the late 1940s. The British established great wildlife preserves there. As before, this interest in conservation had an economic motive: in this case, big game hunting. Nevertheless, this led to growing recognition in the 1950s and the early 1960s of the need to protect large spaces for wildlife conservation worldwide. The World Wildlife Fund (WWF), founded in 1961, grew to be one of the largest conservation organizations in the world. Early conservationists advocated the creation of a legal mechanism by which boundaries could be set on human activities in order to preserve natural and unique lands for the enjoyment and use of future generations. This profound shift in wilderness thought reached a pinnacle in the US with the passage of the Wilderness Act of 1964, which allowed parts of U.S. National Forests to be designated as "wilderness preserves". Similar acts, such as the 1975 Eastern Wilderness Areas Act, followed. Nevertheless, initiatives for wilderness conservation continue to increase. There are a growing number of projects to protect tropical rainforests through conservation initiatives. There are also large-scale projects to conserve wilderness regions, such as Canada's Boreal Forest Conservation Framework. The Framework calls for conservation of 50 percent of the 6,000,000 square kilometres of boreal forest in Canada's north. In addition to the World Wildlife Fund, organizations such as the Wildlife Conservation Society, the WILD Foundation, The Nature Conservancy, Conservation International, The Wilderness Society (United States) and many others are active in such conservation efforts. The 21st century has seen another slight shift in wilderness thought and theory. It is now understood that simply drawing lines around a piece of land and declaring it a wilderness does not necessarily make it a wilderness. All landscapes are intricately connected, and what happens outside a wilderness certainly affects what happens inside it. For example, air pollution from Los Angeles and the California Central Valley affects Kern Canyon and Sequoia National Park. The national park has miles of "wilderness", but the air is filled with pollution from the valley. This gives rise to the paradox of what a wilderness really is – a key issue in 21st-century wilderness thought. National parks The creation of national parks, beginning in the 19th century, preserved some especially attractive and notable areas, but the pursuits of commerce, lifestyle, and recreation combined with increases in human population have continued to result in human modification of relatively untouched areas. Such human activity often negatively impacts native flora and fauna. As such, to better protect critical habitats and preserve low-impact recreational opportunities, legal concepts of "wilderness" were established in many countries, beginning with the United States (see below). The first National Park was Yellowstone, which was signed into law by U.S. President Ulysses S. Grant on 1 March 1872. The Act of Dedication declared Yellowstone a land "hereby reserved and withdrawn from settlement, occupancy, or sale under the laws of the United States, and dedicated and set apart as a public park or pleasuring ground for the benefit and enjoyment of the people."
When national parks were established in an area, the Native Americans who had been living there were forcibly removed, so that visitors to the park could see nature without humans present. National parks are seen as areas untouched by humans, when in reality humans existed in these spaces until settler colonists came in and forced them off their lands in order to create the national parks. The concept glorifies the idea that before settlers came, the US was an uninhabited landscape. This erases the reality of Native Americans, their relationship with the land, and the role they had in shaping the landscape. Such erasure suggests there were areas of the US which were historically unoccupied, once again erasing the existence of Native Americans and their relationship to the land. In the case of Yellowstone, the Grand Canyon, and Yosemite, the 'preservation' of these lands by the US government was what caused the Native Americans who lived in the areas to be systematically removed. Historian Mark David Spence has shown that the case of Glacier National Park and the Blackfeet people who live there is a perfect example of such erasure. The Blackfeet people had specifically designated rights to the area, but the 1910 Glacier National Park act voided those rights. The act of 'preserving' the land was specifically linked to the exclusion of the Blackfeet people. The continued resistance of the Blackfeet people has provided documentation of the importance of the area to many different tribes. The world's second national park, the Royal National Park, located just 32 km to the south of Sydney, Australia, was established in 1879. The U.S. concept of national parks soon caught on in Canada, which created Banff National Park in 1885, at the same time as the transcontinental Canadian Pacific Railway was being built. The creation of this and other parks showed a growing appreciation of wild nature, but also an economic reality. The railways wanted to entice people to travel west. Parks such as Banff and Yellowstone gained favor as the railroads advertised travel to "the great wild spaces" of North America. When outdoorsman Teddy Roosevelt became president of the United States, he began to enlarge the U.S. National Parks system and established the National Forest system. By the 1920s, travel across North America by train to experience the "wilderness" (often viewing it only through windows) had become very popular. This led to the commercialization of some of Canada's National Parks with the building of great hotels such as the Banff Springs Hotel and Chateau Lake Louise. Despite their similar name, national parks in England and Wales are quite different from national parks in many other countries. Unlike in most other countries, in England and Wales designation as a national park may include substantial settlements and human land uses which are often integral parts of the landscape, and land within a national park remains largely in private ownership. Each park is operated by its own national park authority. The United States philosophy of wilderness preservation through National Parks has been attempted in other countries. However, people living in those countries have different ideas surrounding wilderness than people in the United States; thus, the US concept of wilderness can be damaging in other areas of the world. India is more densely populated and has been settled for a long time. There are complex relationships between agricultural communities and the wilderness.
An example of this is the Project Tiger parks in India. By claiming areas as no longer used by humans, the land moves from the hands of poor people to rich people. Having designated tiger reserves is only possible by displacing poor people, who were not involved in the planning of the areas. This situation places the ideal of wilderness above the already existing relationships between people and the land they live on. Placing an imperialistic ideal of nature onto a different country puts the desire to reestablish wilderness above the lives of those who live by working the land. Conservation and preservation in 20th century United States By the late 19th century, it had become clear that in many countries wild areas had either disappeared or were in danger of disappearing. This realization gave rise to the conservation movement in the United States, partly through the efforts of writers and activists such as John Burroughs, Aldo Leopold, and John Muir, and politicians such as U.S. President Teddy Roosevelt. The idea of protecting nature for nature's sake began to gain more recognition in the 1930s, with American writers like Aldo Leopold calling for a "land ethic" and urging wilderness protection. It had become increasingly clear that wild spaces were disappearing rapidly and that decisive action was needed to save them. Wilderness preservation is central to deep ecology, a philosophy that believes in an inherent worth of all living beings, regardless of their instrumental utility to human needs. Two different groups had emerged within the US environmental movement by the early 20th century: the conservationists and the preservationists. The initial consensus among conservationists split into "utilitarian conservationists", later referred to simply as conservationists, and "aesthetic conservationists", or preservationists. The main representative of the former was Gifford Pinchot, first Chief of the United States Forest Service; they focused on the proper use of nature, whereas the preservationists sought the protection of nature from use. Put another way, conservation sought to regulate human use while preservation sought to eliminate human impact altogether. The management of US public lands during the 1960s and 70s reflected these dual visions, with conservationists dominating the Forest Service, and preservationists the Park Service. Formal wilderness designations International The World Conservation Union (IUCN) classifies wilderness at two levels, 1a (strict nature reserves) and 1b (wilderness areas). There have been recent calls for the World Heritage Convention to better protect wilderness and to include the word wilderness in its selection criteria for Natural Heritage Sites. Forty-eight countries have wilderness areas established via legislative designation as IUCN protected area management Category 1b sites that do not overlap with any other IUCN designation. They are: Australia, Austria, Bahamas, Bangladesh, Bermuda, Bosnia and Herzegovina, Botswana, Canada, Cayman Islands, Costa Rica, Croatia, Cuba, Czech Republic, Democratic Republic of Congo, Denmark, Dominican Republic, Equatorial Guinea, Estonia, Finland, French Guiana, Greenland, Iceland, India, Indonesia, Japan, Latvia, Liechtenstein, Luxembourg, Malta, Marshall Islands, Mexico, Mongolia, Nepal, New Zealand, Norway, Northern Mariana Islands, Portugal, Seychelles, Serbia, Singapore, Slovakia, Slovenia, Spain, Sri Lanka, Sweden, Tanzania, United States of America, and Zimbabwe.
At publication, there are 2,992 marine and terrestrial wilderness areas registered with the IUCN as solely Category 1b sites. Twenty-two other countries have wilderness areas. These wilderness areas are established via administrative designation or wilderness zones within protected areas. Whereas the above listing contains countries with wilderness exclusively designated as Category 1b sites, some of the below-listed countries contain protected areas with multiple management categories including Category 1b. They are: Argentina, Bhutan, Brazil, Chile, Honduras, Germany, Italy, Kenya, Malaysia, Namibia, Nepal, Pakistan, Panama, Peru, Philippines, the Russian Federation, South Africa, Switzerland, Uganda, Ukraine, the United Kingdom of Great Britain and Northern Ireland, Venezuela, and Zambia. Germany The German National Strategy on Biological Diversity aims to establish wilderness areas on 2% of its terrestrial territory by 2020 (7,140 km2). However, protected wilderness areas in Germany currently cover only 0.6% of the total terrestrial area. In the absence of pristine landscapes, Germany counts national parks (IUCN Category II) as wilderness areas. The government counts the whole area of the 16 national parks as wilderness, which means that the managed parts are also included in the "existing" 0.6%. There is no doubt that Germany will miss its own time-bound quantitative goals, and some critics also point to poor designation practice: findings of disturbance ecology – according to which process-based nature conservation and the 2% target could be further qualified by more targeted area designation, pre-treatment and introduction of megaherbivores – are widely neglected. Since 2019 the government has supported purchases of land to be designated as wilderness with 10 million euros annually. The German minimum size for wilderness candidate sites is normally 10 km2. In some cases (e.g. swamps) the minimum size is 5 km2. Finland There are twelve wilderness areas in the Sami native region in northern Finnish Lapland. They are intended both to preserve the wilderness character of the areas and to further the traditional livelihood of the Sami people. This means, for example, that reindeer husbandry, hunting, and taking wood for household use are permitted. As the population is very sparse, this generally poses little threat to nature. Large-scale reindeer husbandry influences the ecosystem, but the act on wilderness areas introduces no change to it. The World Commission on Protected Areas (WCPA) classifies the areas as "VI Protected area with sustainable use of natural resources". France Since 1861, the French Waters and Forests Military Agency (Administration des Eaux et Forêts) has given strong protection to what was called the "artistic reserve" in Fontainebleau State Forest. With a total of 1,097 hectares, it is known as the first nature reserve in the world. Later, in the 1950s, Integral Biological Reserves (Réserves Biologiques Intégrales, RBI) were created, dedicated to ecosystem evolution free of human intervention, in contrast to Managed Biological Reserves (Réserves Biologiques Dirigées, RBD), where specific management is applied to conserve vulnerable species or threatened habitats. Integral Biological Reserves occur in French State Forests or City Forests and are therefore managed by the National Forests Office.
In such reserves, all logging is forbidden, except for the elimination of exotic species and safety work along existing tracks in or at the edge of a reserve, to protect visitors from falling trees. At the end of 2014, there were 60 Integral Biological Reserves in French state forests for a total area of 111,082 hectares and 10 in city forests for a total of 2,835 hectares. Greece In Greece there are parks called "ethniki drimoi" (εθνικοί δρυμοί, national forests) that are under the protection of the Greek government. Such parks include Olympus, Parnassos and Parnitha National Parks. New Zealand There are seven wilderness areas in New Zealand as defined by the National Parks Act 1980 and the Conservation Act 1987 that fall well within the IUCN definition. Wilderness areas cannot have any human intervention, and indigenous species may only be re-introduced into an area if doing so is compatible with conservation management strategies. In New Zealand, wilderness areas are remote blocks of land with high natural character, generally over 400 km2 in size. The Conservation Act 1987 prohibits access by vehicles and livestock and the construction of tracks and buildings, and protects all indigenous natural resources. Three wilderness areas are currently recognised, all on the West Coast: Adams Wilderness Area, Hooker/Landsborough Wilderness Area and Paparoa Wilderness Area. United States In the United States, a wilderness area is an area of federal land set aside by an act of Congress. It is typically at least 5,000 acres (about 8 sq mi or 20 km2) in size. Human activities in wilderness areas are restricted to scientific study and non-mechanized recreation; horses are permitted, but mechanized vehicles and equipment, such as cars and bicycles, are not. The United States was one of the first countries to officially designate land as "wilderness" through the Wilderness Act of 1964. The Wilderness Act is an important part of wilderness designation because it created the legal definition of wilderness and established the National Wilderness Preservation System. The Wilderness Act defines wilderness as "an area where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain." Wilderness designation helps preserve the natural state of the land and protects flora and fauna by prohibiting development and providing for non-mechanized recreation only. The first administratively protected wilderness area in the United States was the Gila National Forest. In 1922, Aldo Leopold, then a ranking member of the U.S. Forest Service, proposed a new management strategy for the Gila National Forest. His proposal was adopted in 1924, and 750,000 acres of the Gila National Forest became the Gila Wilderness. The Great Swamp in New Jersey was the first formally designated wilderness refuge in the United States. It was declared a wildlife refuge on 3 November 1960. In 1966 it was declared a National Natural Landmark and, in 1968, it was given wilderness status. Properties in the swamp had been acquired by a small group of residents of the area, who donated the assembled properties to the federal government as a park for perpetual protection. Today the refuge's lands lie within thirty miles of Manhattan. 
While wilderness designations were originally granted by an act of Congress for federal land that retained a "primeval character", meaning that it had not suffered from human habitation or development, the Eastern Wilderness Act of 1975 extended the protection of the NWPS to areas in the eastern states that were not initially considered for inclusion in the Wilderness Act. This act allowed lands that did not meet the constraints of size, roadlessness, or human impact to be designated as wilderness areas, under the belief that they could be returned to a "primeval" state through preservation. The land designated as wilderness in the United States accounts for 4.82% of the country's total land area; however, 54% of that amount is found in Alaska (recreation and development in Alaskan wilderness is often less restricted), while only 2.58% of the contiguous United States is designated as wilderness. As of 2023 there are 806 designated wilderness areas in the United States, ranging in size from Florida's tiny Pelican Island to Alaska's vast Wrangell-Saint Elias. Western Australia In Western Australia, a wilderness area is an area that has a wilderness quality rating of 12 or greater and meets a minimum size threshold of 80 km2 in temperate areas or 200 km2 in arid and tropical areas. A wilderness area is gazetted under section 62(1)(a) of the Conservation and Land Management Act 1984 by the Minister on any land that is vested in the Conservation Commission of Western Australia. International movement At the forefront of the international wilderness movement has been The WILD Foundation, its founder Ian Player, and its network of sister and partner organizations around the globe. The pioneering World Wilderness Congress in 1977 introduced the wilderness concept as an issue of international importance and began the process of defining the term in biological and social contexts. Today, this work is continued by many international groups who still look to the World Wilderness Congress as the international venue for wilderness and to The WILD Foundation network for wilderness tools and action. The WILD Foundation also publishes the standard references for wilderness professionals and others involved in the issues: Wilderness Management: Stewardship and Protection of Resources and Values; the International Journal of Wilderness; A Handbook on International Wilderness Law and Policy; and Protecting Wild Nature on Native Lands. These are the backbone of information and management tools for international wilderness issues. The Wilderness Specialist Group within the World Commission on Protected Areas (WTF/WCPA) of the International Union for Conservation of Nature (IUCN) plays a critical role in defining legal and management guidelines for wilderness at the international level and is also a clearing-house for information on wilderness issues. The IUCN Protected Areas Classification System defines wilderness as "A large area of unmodified or slightly modified land, and/or sea retaining its natural character and influence, without permanent or significant habitation, which is protected and managed so as to preserve its natural condition (Category 1b)." The WILD Foundation founded the WTF/WCPA in 2002 and remains its co-chair. Extent The most recent efforts to map wilderness show that less than one quarter (~23%) of the world's wilderness area now remains, and that there have been catastrophic declines in wilderness extent over the last two decades. 
Over 3 million square kilometers (10 percent) of wilderness were converted to human land uses. The Amazon and Congo rain forests suffered the greatest losses. Human pressure is extending into almost every corner of the planet, and the loss of wilderness could have serious implications for biodiversity conservation. According to an earlier study, Wilderness: Earth's Last Wild Places, carried out by Conservation International, 46% of the world's land mass is wilderness. For purposes of that report, wilderness was defined as an area retaining 70% or more of its original vegetation, covering at least a minimum threshold area, and having fewer than five people per square kilometer. However, an IUCN/UNEP report published in 2003 found that only 10.9% of the world's land mass is currently a Category 1 protected area, that is, either a strict nature reserve (5.5%) or protected wilderness (5.4%). Such areas remain relatively untouched by humans. There are, of course, large tracts of land in national parks and other protected areas that would also qualify as wilderness. However, many protected areas have some degree of human modification or activity, so a definitive estimate of true wilderness is difficult. The Wildlife Conservation Society generated a "human footprint" using a number of indicators whose absence indicates wildness: human population density, human access via roads and rivers, human infrastructure for agriculture and settlements, and the presence of industrial power (lights visible from space). The society estimates that 26% of the Earth's land mass falls into the category of "last of the wild". The wildest regions of the world include the Arctic tundra, the Siberian taiga, the Amazon rainforest, the Tibetan Plateau, the Australian Outback, and deserts such as the Sahara and the Gobi. However, since the 1970s, numerous geoglyphs have been discovered on deforested land in the Amazon rainforest, leading to claims about Pre-Columbian civilizations. The BBC's Unnatural Histories claimed that the Amazon rainforest, rather than being a pristine wilderness, has been shaped by man for at least 11,000 years through practices such as forest gardening and terra preta. The percentage of land area designated wilderness does not necessarily reflect a measure of its biodiversity. Of the last natural wilderness areas, the taiga, which is mostly wilderness, represents 11% of the total land mass in the Northern Hemisphere; tropical rainforests represent a further 7% of the world's land base. Estimates of the Earth's remaining wilderness underscore the rate at which these lands are being developed, with dramatic declines in biodiversity as a consequence. Critique The American concept of wilderness has been criticized by some nature writers. For example, William Cronon writes that what he calls a wilderness ethic or cult may "teach us to be dismissive or even contemptuous of such humble places and experiences", and that "wilderness tends to privilege some parts of nature at the expense of others", using as an example "the mighty canyon more inspiring than the humble marsh." This is most clearly visible in the fact that nearly all U.S. national parks preserve spectacular canyons and mountains; it was not until the 1940s that a swamp, the Everglades, became a national park. In the mid-20th century national parks started to protect biodiversity, not simply attractive scenery. 
Cronon also believes the passion to save wilderness "poses a serious threat to responsible environmentalism" and writes that it allows people to "give ourselves permission to evade responsibility for the lives we actually lead... to the extent that we live in an urban-industrial civilization but at the same time pretend to ourselves that our real home is in the wilderness". Michael Pollan has argued that the wilderness ethic leads people to dismiss areas whose wildness is less than absolute. In his book Second Nature, Pollan writes that "once a landscape is no longer 'virgin' it is typically written off as fallen, lost to nature, irredeemable." Another challenge to the conventional notion of wilderness comes from Robert Winkler in his book Going Wild: Adventures with Birds in the Suburban Wilderness. "On walks in the unpeopled parts of the suburbs," Winkler writes, "I’ve witnessed the same wild creatures, struggles for survival, and natural beauty that we associate with true wilderness." Attempts have been made, as in the Pennsylvania Scenic Rivers Act, to distinguish "wild" from various levels of human influence: in the Act, "wild rivers" are "not impounded", "usually not accessible except by trail", and their watersheds and shorelines are "essentially primitive". Another source of criticism is that the criteria for wilderness designation are vague and open to interpretation. For example, the Wilderness Act states that wilderness must be roadless, and the definition given for roadless is "the absence of roads which have been improved and maintained by mechanical means to insure relatively regular and continuous use". However, sub-definitions have since been added that have, in essence, made this standard unclear and open to interpretation, and some are drawn narrowly to exclude existing roads. Coming from a different direction, some criticism from the deep ecology movement argues against conflating "wilderness" with "wilderness reservations", viewing the latter term as an oxymoron: by allowing the law, a human construct, to define nature, it unavoidably voids the very freedom and independence from human control that defines wilderness. True wilderness requires the ability of life to undergo speciation with as little interference from humanity as possible. The anthropologist and scholar of wilderness Layla Abdel-Rahim argues that it is necessary to understand the principles that govern the economies of mutual aid and diversification in wilderness from a non-anthropocentric perspective. Others have criticized the American concept of wilderness as rooted in white supremacy, ignoring Native American perspectives on the natural environment and excluding people of color from narratives about human interactions with the environment. Many early conservationists, such as Madison Grant, were also heavily involved in the eugenics movement. Grant, who worked alongside President Theodore Roosevelt to create the Bronx Zoo, also wrote The Passing of the Great Race, a book on eugenics that was later praised by Adolf Hitler. Grant is also known to have featured Ota Benga, a Mbuti man from Central Africa, in the Bronx Zoo monkey house exhibit. John Muir, another important figure in the early conservation movement, referred to African-Americans as "making a great deal of noise and doing little work", and compared Native Americans to unclean animals who did not belong in the wilderness. Environmental history professor Miles A. 
Powell of Nanyang Technological University has argued that much of the early conservation movement was deeply tied to and inspired by a desire to preserve the Nordic race. Prakash Kashwan, a political science professor at the University of Connecticut who specializes in environmental policy and environmental justice, argues that the racist ideas of many early conservationists created a narrative of wilderness that has led to "fortress conservation" policies that have driven Native Americans off their land. Kashwan has proposed conservation practices that would allow Indigenous people to continue using the land, as a more just and more effective alternative to fortress conservation. The idea that the natural world is primarily made up of remote wilderness areas has also been criticized as classist, with environmental sociologist Dorceta Taylor arguing that it turns experiencing wilderness into a privilege, as working-class people are often unable to afford transportation to wilderness areas. She further argues that, due to poverty and lack of access to transportation caused by systemic racism, this perception is also rooted in racism. Human–nature dichotomy Another critique of wilderness is that it perpetuates the human–nature dichotomy. The idea that nature and humans are separate entities can be traced back to European colonial views. To European settlers, land was an inherited right to be used for profit, whereas native groups saw their relationship with the land in a more holistic way; they were nonetheless eventually subjected to European property systems. Colonists from Europe saw the American landscape as wild, savage, and dark, and thus in need of taming to make it safe and habitable. Once cleared and settled, these areas were depicted as "Eden itself". Yet the native peoples of those lands saw "wilderness" as arising when the connection between humans and nature is broken; for native communities, human intervention was a part of their ecological practices. There is a historical belief not only that wilderness must be tamed to be protected, but also that humans need to be outside of it. Clearing certain areas for conservation, such as national parks, involved the removal of native communities from their land. Some authors have come to describe this type of conservation as "conservation-far", where humans and nature are kept separate. The other end of the conservation spectrum would then be "conservation-near", which would mimic native ecological practices by integrating humans into the care of nature. Most scientists and conservationists agree that no place on earth is completely untouched by humanity, whether due to past occupation by indigenous people or through global processes such as climate change and pollution. Activities on the margins of specific wilderness areas, such as fire suppression and the interruption of animal migration, also affect the interior of wildernesses.
Physical sciences
Earth science basics: General
Earth science
396320
https://en.wikipedia.org/wiki/Matrix%20mechanics
Matrix mechanics
Matrix mechanics is a formulation of quantum mechanics created by Werner Heisenberg, Max Born, and Pascual Jordan in 1925. It was the first conceptually autonomous and logically consistent formulation of quantum mechanics. Its account of quantum jumps supplanted the Bohr model's electron orbits. It did so by interpreting the physical properties of particles as matrices that evolve in time. It is equivalent to the Schrödinger wave formulation of quantum mechanics, as manifest in Dirac's bra–ket notation. In some contrast to the wave formulation, it produces spectra of (mostly energy) operators by purely algebraic, ladder operator methods. Relying on these methods, Wolfgang Pauli derived the hydrogen atom spectrum in 1926, before the development of wave mechanics. Development of matrix mechanics In 1925, Werner Heisenberg, Max Born, and Pascual Jordan formulated the matrix mechanics representation of quantum mechanics. Epiphany at Helgoland In 1925 Werner Heisenberg was working in Göttingen on the problem of calculating the spectral lines of hydrogen. By May 1925 he began trying to describe atomic systems by observables only. On June 7, after weeks of failing to alleviate his hay fever with aspirin and cocaine, Heisenberg left for the pollen-free North Sea island of Helgoland. While there, in between climbing and memorizing poems from Goethe's West-östlicher Diwan, he continued to ponder the spectral issue and eventually realised that adopting non-commuting observables might solve the problem. He later wrote: It was about three o'clock at night when the final result of the calculation lay before me. At first I was deeply shaken. I was so excited that I could not think of sleep. So I left the house and awaited the sunrise on the top of a rock. The three fundamental papers After Heisenberg returned to Göttingen, he showed Wolfgang Pauli his calculations, commenting at one point: Everything is still vague and unclear to me, but it seems as if the electrons will no more move on orbits. On July 9 Heisenberg gave the paper containing his calculations to Max Born, saying that "he had written a crazy paper and did not dare to send it in for publication, and that Born should read it and advise him" prior to publication. Heisenberg then departed for a while, leaving Born to analyse the paper. In the paper, Heisenberg formulated quantum theory without sharp electron orbits. Hendrik Kramers had earlier calculated the relative intensities of spectral lines in the Sommerfeld model by interpreting the Fourier coefficients of the orbits as intensities. But his answer, like all other calculations in the old quantum theory, was only correct for large orbits. Heisenberg, after a collaboration with Kramers, began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation, so in quantum mechanics the magnitude of the matrix elements of the position operator was the intensity of radiation in the bright-line spectrum. The quantities in Heisenberg's formulation were the classical position and momentum, but now they were no longer sharply defined. 
Each quantity was represented by a collection of Fourier coefficients with two indices, corresponding to the initial and final states. When Born read the paper, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices, which he had learned from his study under Jakob Rosanes at Breslau University. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; the paper was received for publication just 60 days after Heisenberg's paper. A follow-on paper was submitted for publication before the end of the year by all three authors. (A brief review of Born's role in the development of the matrix mechanics formulation of quantum mechanics, along with a discussion of the key formula involving the non-commutativity of the probability amplitudes, can be found in an article by Jeremy Bernstein. A detailed historical and technical account can be found in Mehra and Rechenberg's book The Historical Development of Quantum Theory. Volume 3. The Formulation of Matrix Mechanics and Its Modifications 1925–1926.) Up until this time, matrices were seldom used by physicists; they were considered to belong to the realm of pure mathematics. Gustav Mie had used them in a paper on electrodynamics in 1912, and Born had used them in his work on the lattice theory of crystals in 1921. While matrices were used in these cases, the algebra of matrices with their multiplication did not enter the picture as it did in the matrix formulation of quantum mechanics. Born, however, had learned matrix algebra from Rosanes, as already noted, but Born had also learned Hilbert's theory of integral equations and quadratic forms for an infinite number of variables, as was apparent from a citation by Born of Hilbert's work Grundzüge einer allgemeinen Theorie der Linearen Integralgleichungen, published in 1912. Jordan, too, was well equipped for the task. For a number of years, he had been an assistant to Richard Courant at Göttingen in the preparation of Courant and David Hilbert's book Methoden der mathematischen Physik I, which was published in 1924. This book, fortuitously, contained a great many of the mathematical tools necessary for the continued development of quantum mechanics. In 1926, John von Neumann became assistant to David Hilbert, and he would coin the term Hilbert space to describe the algebra and analysis which were used in the development of quantum mechanics. A linchpin contribution to this formulation was achieved in Dirac's reinterpretation/synthesis paper of 1925, which invented the language and framework usually employed today, in full display of the noncommutative structure of the entire construction. Heisenberg's reasoning Before matrix mechanics, the old quantum theory described the motion of a particle by a classical orbit, with well-defined position and momentum $X(t)$, $P(t)$, with the restriction that the time integral over one period $T$ of the momentum times the velocity must be a positive integer multiple of the Planck constant: $$\oint P\,\dot{X}\,dt = \oint P\,dX = nh.$$ While this restriction correctly selects orbits with more or less the right energy values $E_n$, the old quantum mechanical formalism did not describe time-dependent processes, such as the emission or absorption of radiation. When a classical particle is weakly coupled to a radiation field, so that the radiative damping can be neglected, it will emit radiation in a pattern that repeats itself every orbital period. 
The frequencies that make up the outgoing wave are then integer multiples of the orbital frequency, and this is a reflection of the fact that $X(t)$ is periodic, so that its Fourier representation has frequencies $2\pi n/T$ only: $$X(t) = \sum_{n=-\infty}^{\infty} e^{2\pi i n t/T}\,X_n.$$ The coefficients $X_n$ are complex numbers. The ones with negative frequencies must be the complex conjugates of the ones with positive frequencies, so that $X(t)$ will always be real: $$X_n = X_{-n}^{*}.$$ A quantum mechanical particle, on the other hand, cannot emit radiation continuously; it can only emit photons. Assuming that the quantum particle started in orbit number $n$, emitted a photon, then ended up in orbit number $m$, the energy of the photon is $E_n - E_m$, which means that its frequency is $(E_n - E_m)/h$. For large $n$ and $m$, but with $n - m$ relatively small, these are the classical frequencies, by Bohr's correspondence principle: $$E_n - E_m \approx \frac{h\,(n - m)}{T}.$$ In the formula above, $T$ is the classical period of either orbit $n$ or orbit $m$, since the difference between them is higher order in $h$. But for small $n$ and $m$, or if $n - m$ is large, the frequencies are not integer multiples of any single frequency. Since the frequencies that the particle emits are the same as the frequencies in the Fourier description of its motion, this suggests that something in the time-dependent description of the particle is oscillating with frequency $(E_n - E_m)/h$. Heisenberg called this quantity $X_{nm}$, and demanded that it should reduce to the classical Fourier coefficients in the classical limit. For large values of $n$ and $m$ but with $n - m$ relatively small, $X_{nm}$ is the $(n - m)$th Fourier coefficient of the classical motion at orbit $n$. Since $X_{nm}$ has opposite frequency to $X_{mn}$, the condition that $X$ is real becomes $$X_{nm} = X_{mn}^{*}.$$ By definition, $X_{nm}$ only has the frequency $(E_n - E_m)/h$, so its time evolution is simple: $$X_{nm}(t) = e^{i(E_n - E_m)t/\hbar}\,X_{nm}(0).$$ This is the original form of Heisenberg's equation of motion. Given two arrays $X_{nm}$ and $P_{nm}$ describing two physical quantities, Heisenberg could form a new array of the same type by combining the terms $X_{nk}P_{km}$, which also oscillate with the right frequency. Since the Fourier coefficients of the product of two quantities are the convolution of the Fourier coefficients of each one separately, the correspondence with Fourier series allowed Heisenberg to deduce the rule by which the arrays should be multiplied: $$(XP)_{nm} = \sum_{k} X_{nk}\,P_{km}.$$ Born pointed out that this is the law of matrix multiplication, so that the position, the momentum, the energy, all the observable quantities in the theory, are interpreted as matrices. Under this multiplication rule, the product depends on the order: $XP$ is different from $PX$. The $X$ matrix is a complete description of the motion of a quantum mechanical particle. Because the frequencies in the quantum motion are not multiples of a common frequency, the matrix elements cannot be interpreted as the Fourier coefficients of a sharp classical trajectory. Nevertheless, as matrices, $X(t)$ and $P(t)$ satisfy the classical equations of motion; also see Ehrenfest's theorem, below. 
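The closure of this multiplication rule under time evolution can be checked numerically. The following is a minimal NumPy sketch, not from the original article; the level count and the mock energy values are illustrative assumptions. Attaching Heisenberg's phase $e^{i(E_n - E_m)t/\hbar}$ to each element, the product of two evolved arrays equals the evolved product, so products of observables oscillate with the right frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5                                        # number of mock energy levels
E = np.sort(rng.uniform(0.0, 10.0, N))      # mock energies, hbar = 1
X0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
X0 = X0 + X0.conj().T                        # Hermitian: X_nm = X_mn*
P0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
P0 = P0 + P0.conj().T

def at_time(A, t):
    """Attach Heisenberg's phase exp(i (E_n - E_m) t) to each element."""
    return A * np.exp(1j * np.subtract.outer(E, E) * t)

t = 0.37
lhs = at_time(X0, t) @ at_time(P0, t)        # product of the evolved arrays
rhs = at_time(X0 @ P0, t)                    # evolution of the product
print(np.allclose(lhs, rhs))                 # True
```

This closure is precisely what allowed Born to recognize Heisenberg's combination rule as matrix multiplication.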
Matrix basics When it was introduced by Werner Heisenberg, Max Born and Pascual Jordan in 1925, matrix mechanics was not immediately accepted and was a source of controversy, at first. Schrödinger's later introduction of wave mechanics was greatly favored. Part of the reason was that Heisenberg's formulation was in an odd mathematical language for the time, while Schrödinger's formulation was based on familiar wave equations. But there was also a deeper sociological reason. Quantum mechanics had been developing by two paths, one led by Einstein, who emphasized the wave–particle duality he proposed for photons, and the other led by Bohr, which emphasized the discrete energy states and quantum jumps that Bohr discovered. De Broglie had reproduced the discrete energy states within Einstein's framework – the quantum condition is the standing wave condition – and this gave hope to those in the Einstein school that all the discrete aspects of quantum mechanics would be subsumed into a continuous wave mechanics. Matrix mechanics, on the other hand, came from the Bohr school, which was concerned with discrete energy states and quantum jumps. Bohr's followers did not appreciate physical models that pictured electrons as waves, or as anything at all. They preferred to focus on the quantities that were directly connected to experiments. In atomic physics, spectroscopy gave observational data on atomic transitions arising from the interactions of atoms with light quanta. The Bohr school required that only those quantities that were in principle measurable by spectroscopy should appear in the theory. These quantities include the energy levels and their intensities, but they do not include the exact location of a particle in its Bohr orbit. It is very hard to imagine an experiment that could determine whether an electron in the ground state of a hydrogen atom is to the right or to the left of the nucleus. It was a deep conviction that such questions did not have an answer. The matrix formulation was built on the premise that all physical observables are represented by matrices, whose elements are indexed by two different energy levels. The set of eigenvalues of the matrix were eventually understood to be the set of all possible values that the observable can have. Since Heisenberg's matrices are Hermitian, the eigenvalues are real. If an observable is measured and the result is a certain eigenvalue, the corresponding eigenvector is the state of the system immediately after the measurement. The act of measurement in matrix mechanics collapses the state of the system. If one measures two observables simultaneously, the state of the system collapses to a common eigenvector of the two observables. Since most matrices don't have any eigenvectors in common, most observables can never be measured precisely at the same time. This is the uncertainty principle. If two matrices share their eigenvectors, they can be simultaneously diagonalized. In the basis where they are both diagonal, it is clear that their product does not depend on their order, because multiplication of diagonal matrices is just multiplication of numbers. The uncertainty principle, by contrast, is an expression of the fact that two matrices $A$ and $B$ do not always commute, i.e., that $AB - BA$ does not necessarily equal 0. The fundamental commutation relation of matrix mechanics, $$XP - PX = i\hbar\,I,$$ implies then that there are no states that simultaneously have a definite position and momentum. This principle of uncertainty holds for many other pairs of observables as well. For example, the energy does not commute with the position either, so it is impossible to precisely determine the position and energy of an electron in an atom. Nobel Prize In 1928, Albert Einstein nominated Heisenberg, Born, and Jordan for the Nobel Prize in Physics. The announcement of the Nobel Prize in Physics for 1932 was delayed until November 1933. It was then announced that Heisenberg had won the Prize for 1932 "for the creation of quantum mechanics, the application of which has, inter alia, led to the discovery of the allotropic forms of hydrogen" and that Erwin Schrödinger and Paul Adrien Maurice Dirac shared the 1933 Prize "for the discovery of new productive forms of atomic theory". 
It might well be asked why Born was not awarded the Prize in 1932, along with Heisenberg, and Bernstein proffers speculations on this matter. One of them relates to Jordan joining the Nazi Party on May 1, 1933, and becoming a stormtrooper. Jordan's Party affiliations and Jordan's links to Born may well have affected Born's chance at the Prize at that time. Bernstein further notes that when Born finally won the Prize in 1954, Jordan was still alive, while the Prize was awarded for the statistical interpretation of quantum mechanics, attributable to Born alone. Heisenberg's reactions to Born, on Heisenberg's receiving the Prize for 1932 and on Born's receiving it in 1954, are also instructive in evaluating whether Born should have shared the Prize with Heisenberg. On November 25, 1933, Born received a letter from Heisenberg in which he said he had been delayed in writing due to a "bad conscience" that he alone had received the Prize "for work done in Göttingen in collaboration – you, Jordan and I". Heisenberg went on to say that Born and Jordan's contribution to quantum mechanics cannot be changed by "a wrong decision from the outside". In 1954, Heisenberg wrote an article honoring Max Planck for his insight in 1900. In the article, Heisenberg credited Born and Jordan for the final mathematical formulation of matrix mechanics, and he went on to stress how great their contributions were to quantum mechanics, which were not "adequately acknowledged in the public eye". Mathematical development Once Heisenberg introduced the matrices for $X$ and $P$, he could find their matrix elements in special cases by guesswork, guided by the correspondence principle. Since the matrix elements are the quantum mechanical analogs of Fourier coefficients of the classical orbits, the simplest case is the harmonic oscillator, where the classical position and momentum, $X(t)$ and $P(t)$, are sinusoidal. Harmonic oscillator In units where the mass and frequency of the oscillator are equal to one (see nondimensionalization), the energy of the oscillator is $$H = \tfrac{1}{2}\left(P^2 + X^2\right).$$ The level sets of $H$ are the clockwise orbits, and they are nested circles in phase space. The classical orbit with energy $E$ is $$X(t) = \sqrt{2E}\,\cos t, \qquad P(t) = -\sqrt{2E}\,\sin t.$$ The old quantum condition dictates that the integral of $P\,dX$ over an orbit, which is the area of the circle in phase space, must be an integer multiple of the Planck constant. The area of the circle of radius $\sqrt{2E}$ is $2\pi E$. So $$E = \frac{nh}{2\pi} = n\hbar,$$ or, in natural units where $\hbar = 1$, the energy is an integer. The Fourier components of $X(t)$ and $P(t)$ are simple, and more so if they are combined into the quantities $$A(t) = X(t) + iP(t) = \sqrt{2E}\,e^{-it}, \qquad A^{\dagger}(t) = X(t) - iP(t) = \sqrt{2E}\,e^{it}.$$ Both $A$ and $A^{\dagger}$ have only a single frequency, and $X$ and $P$ can be recovered from their sum and difference. Since $A(t)$ has a classical Fourier series with only the lowest frequency, and the matrix element $A_{mn}$ is the $(m-n)$th Fourier coefficient of the classical orbit, the matrix for $A$ is nonzero only on the line just above the diagonal, where it is equal to $\sqrt{2E_n}$. The matrix for $A^{\dagger}$ is likewise only nonzero on the line below the diagonal, with the same elements. Thus, from $A$ and $A^{\dagger}$, reconstruction via $X = (A + A^{\dagger})/2$ and $P = (A - A^{\dagger})/2i$ yields $$X = \sqrt{\frac{\hbar}{2}}\begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & \cdots \\ \sqrt{1} & 0 & \sqrt{2} & 0 & \cdots \\ 0 & \sqrt{2} & 0 & \sqrt{3} & \cdots \\ 0 & 0 & \sqrt{3} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$ and $$P = i\sqrt{\frac{\hbar}{2}}\begin{pmatrix} 0 & -\sqrt{1} & 0 & 0 & \cdots \\ \sqrt{1} & 0 & -\sqrt{2} & 0 & \cdots \\ 0 & \sqrt{2} & 0 & -\sqrt{3} & \cdots \\ 0 & 0 & \sqrt{3} & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$ which, up to the choice of units, are the Heisenberg matrices for the harmonic oscillator. Both matrices are Hermitian, since they are constructed from the Fourier coefficients of real quantities. Finding $X(t)$ and $P(t)$ is direct, since they are quantum Fourier coefficients, so they evolve simply with time: $$A_{nm}(t) = A_{nm}(0)\,e^{-it}, \qquad A^{\dagger}_{nm}(t) = A^{\dagger}_{nm}(0)\,e^{it}.$$ The matrix product of $X$ and $P$ is not Hermitian, but has a real and an imaginary part. The real part is one half the symmetric expression $\tfrac{1}{2}(XP + PX)$, while the imaginary part is proportional to the commutator $$[X, P] = XP - PX.$$ It is simple to verify explicitly that, in the case of the harmonic oscillator, $XP - PX$ is $i\hbar$ multiplied by the identity. It is likewise simple to verify that the matrix $$H = \tfrac{1}{2}\left(X^2 + P^2\right)$$ is a diagonal matrix, with eigenvalues $\hbar\left(n + \tfrac{1}{2}\right)$. 
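These matrices can be checked numerically. The sketch below is not part of the original article; the truncation size N is an assumption, and truncating the infinite matrices spoils the relations only in the last row and column. It builds the Heisenberg matrices in units $\hbar = m = \omega = 1$ and verifies both the commutator and the diagonal energies:

```python
import numpy as np

N = 40
s = np.sqrt(np.arange(1.0, N))          # sqrt(1), sqrt(2), ..., sqrt(N-1)
A = np.diag(s, 1)                        # lowering operator: A|n> = sqrt(n)|n-1>
X = (A + A.T) / np.sqrt(2)               # X = (A + A^dagger)/sqrt(2)
P = (A - A.T) / (1j * np.sqrt(2))        # P = (A - A^dagger)/(i sqrt(2))

C = X @ P - P @ X                        # should be i*hbar*I away from the edge
print(np.allclose(C[:-1, :-1], 1j * np.eye(N - 1)))   # True

H = (X @ X + P @ P) / 2
print(np.diag(H)[:5].real)               # [0.5 1.5 2.5 3.5 4.5] = n + 1/2
```

The only deviation from $[X, P] = i\hbar$ appears in the final diagonal entry, an artifact of cutting off the infinite ladder.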
Conservation of energy The harmonic oscillator is an important case. Finding the matrices is easier than determining the general conditions from these special forms. For this reason, Heisenberg investigated the anharmonic oscillator, with Hamiltonian $$H = \tfrac{1}{2}P^2 + \tfrac{1}{2}X^2 + \epsilon X^3.$$ In this case, the $X$ and $P$ matrices are no longer simple off-diagonal matrices, since the corresponding classical orbits are slightly squashed and displaced, so that they have Fourier coefficients at every classical frequency. To determine the matrix elements, Heisenberg required that the classical equations of motion be obeyed as matrix equations: $$\frac{dX}{dt} = P, \qquad \frac{dP}{dt} = -X - 3\epsilon X^2.$$ He noticed that if this could be done, then $H$, considered as a matrix function of $X$ and $P$, will have zero time derivative, $dH/dt = 0$, where products of non-commuting matrices are symmetrized using the anticommutator $\{A, B\} = AB + BA$. Given that all the off-diagonal elements oscillate with a nonzero frequency, $H$ being constant implies that $H$ is diagonal. It was clear to Heisenberg that in this system, and indeed in an arbitrary quantum system, the energy could be exactly conserved, a very encouraging sign. The process of emission and absorption of photons seemed to demand that the conservation of energy will hold at best on average. If a wave containing exactly one photon passes over some atoms, and one of them absorbs it, that atom needs to tell the others that they can't absorb the photon anymore. But if the atoms are far apart, any signal cannot reach the other atoms in time, and they might end up absorbing the same photon anyway and dissipating the energy to the environment. When the signal reached them, the other atoms would have to somehow recall that energy. This paradox led Bohr, Kramers and Slater to abandon exact conservation of energy. Heisenberg's formalism, when extended to include the electromagnetic field, was obviously going to sidestep this problem, a hint that the interpretation of the theory will involve wavefunction collapse. Differentiation trick — canonical commutation relations Demanding that the classical equations of motion are preserved is not a strong enough condition to determine the matrix elements. The Planck constant does not appear in the classical equations, so that the matrices could be constructed for many different values of $\hbar$ and still satisfy the equations of motion, but with different energy levels. So, in order to implement his program, Heisenberg needed to use the old quantum condition to fix the energy levels, then fill in the matrices with Fourier coefficients of the classical equations, then alter the matrix coefficients and the energy levels slightly to make sure the classical equations are satisfied. This is clearly not satisfactory. The old quantum conditions refer to the area enclosed by the sharp classical orbits, which do not exist in the new formalism. The most important thing that Heisenberg discovered is how to translate the old quantum condition into a simple statement in matrix mechanics. To do this, he investigated the action integral as a matrix quantity, $$J = \frac{1}{2\pi}\oint P\,dX$$ (normalized here so that the old quantum condition reads $J = n\hbar$). There are several problems with this integral, all stemming from the incompatibility of the matrix formalism with the old picture of orbits. Which period should be used? Semiclassically, it should be either $T_n$ or $T_m$, but the difference is order $\hbar$, and an answer to order $\hbar$ is sought. 
The quantum condition tells us that $J$ is $n\hbar$ on the diagonal, so the fact that $J$ is classically constant tells us that the off-diagonal elements are zero. His crucial insight was to differentiate the quantum condition with respect to $n$. This idea only makes complete sense in the classical limit, where $n$ is not an integer but the continuous action variable $J$, but Heisenberg performed analogous manipulations with matrices, where the intermediate expressions are sometimes discrete differences and sometimes derivatives. In the following discussion, for the sake of clarity, the differentiation will be performed on the classical variables, and the transition to matrix mechanics will be done afterwards, guided by the correspondence principle. In the classical setting, the derivative is the derivative with respect to $J$ of the integral which defines $J$, so it is tautologically equal to 1: $$\frac{dJ}{dJ} = \frac{1}{2\pi}\frac{d}{dJ}\oint P\,dX = \frac{1}{2\pi}\oint\left(\frac{\partial P}{\partial J}\,dX - \frac{\partial X}{\partial J}\,dP\right) = 1,$$ where the derivatives $\partial P/\partial J$ and $\partial X/\partial J$ should be interpreted as differences with respect to $J$ at corresponding times on nearby orbits, exactly what would be obtained if the Fourier coefficients of the orbital motion were differentiated. (These derivatives are symplectically orthogonal in phase space to the time derivatives $dP/dt$ and $dX/dt$.) The final expression is clarified by introducing the variable canonically conjugate to $J$, which is called the angle variable $\theta$: the derivative with respect to time is a derivative with respect to $\theta$, up to a factor of $2\pi/T$: $$\frac{dJ}{dJ} = \frac{1}{2\pi}\int_0^{2\pi}\left(\frac{\partial P}{\partial J}\frac{\partial X}{\partial \theta} - \frac{\partial X}{\partial J}\frac{\partial P}{\partial \theta}\right)d\theta = 1.$$ So the quantum condition integral is the average value over one cycle of the Poisson bracket of $X$ and $P$. An analogous differentiation of the Fourier series of $P\,dX$ demonstrates that the off-diagonal elements of the Poisson bracket are all zero. The Poisson bracket of two canonically conjugate variables, such as $X$ and $P$, is the constant value 1, so this integral really is the average value of 1; so it is 1, as we knew all along, because it is $dJ/dJ$ after all. But Heisenberg, Born and Jordan, unlike Dirac, were not familiar with the theory of Poisson brackets, so, for them, the differentiation effectively evaluated $\{X, P\}$ in $(J, \theta)$ coordinates. The Poisson bracket, unlike the action integral, does have a simple translation to matrix mechanics – it normally corresponds to the imaginary part of the product of two variables, the commutator. To see this, examine the (antisymmetrized) product of two matrices $A$ and $B$ in the correspondence limit, where the matrix elements are slowly varying functions of the index, keeping in mind that the answer is zero classically. In the correspondence limit, when indices $m$, $n$ are large and nearby, while $k$, $r$ are small, the rate of change of the matrix elements in the diagonal direction is the matrix element of the derivative of the corresponding classical quantity. So it is possible to shift any matrix element diagonally through the correspondence: $$A_{(m+r)(n+r)} \approx A_{mn} + r\,\hbar\left(\frac{dA}{dJ}\right)_{m-n},$$ where the right-hand side is really only the $(m-n)$th Fourier component of $dA/dJ$ at the orbit near $m$, to this semiclassical order, not a full well-defined matrix. The semiclassical time derivative of a matrix element is obtained, up to a factor of $i$, by multiplying by the distance from the diagonal, $$\frac{dA_{mn}}{dt} = \frac{i\,(E_m - E_n)}{\hbar}\,A_{mn} \approx i\,(m - n)\,\omega\,A_{mn},$$ since the coefficient $A_{mn}$ is semiclassically the $(m - n)$th Fourier coefficient of the $m$th classical orbit. The imaginary part of the product of $A$ and $B$ can be evaluated by shifting the matrix elements around so as to reproduce the classical answer, which is zero. The leading nonzero residual is then given entirely by the shifting. 
Since all the matrix elements are at indices which have a small distance from the large index position $(m, m)$, it helps to introduce two temporary notations: $A[r, k] \equiv A_{(m+r)(m+k)}$ for the matrices, and $(dA/dJ)[r]$ for the $r$th Fourier components of classical quantities. Flipping the summation variable in the first sum from $r$ to $k - r$, the matrix element of the antisymmetrized product becomes a sum in which the principal (classical) part manifestly cancels. The leading quantum part, neglecting the higher-order product of derivatives in the residual expression, is then equal to $i\hbar$ times the $k$th classical Fourier component of the Poisson bracket, so that, finally, $$AB - BA \approx i\hbar\,\{A, B\}_{\mathrm{PB}}.$$ Heisenberg's original differentiation trick was eventually extended to a full semiclassical derivation of the quantum condition, in collaboration with Born and Jordan. Once they were able to establish that $$XP - PX = i\hbar,$$ this condition replaced and extended the old quantization rule, allowing the matrix elements of $P$ and $X$ for an arbitrary system to be determined simply from the form of the Hamiltonian. The new quantization rule was assumed to be universally true, even though the derivation from the old quantum theory required semiclassical reasoning. (A full quantum treatment, however, for more elaborate arguments of the brackets, was appreciated in the 1940s to amount to extending Poisson brackets to Moyal brackets.) State vectors and the Heisenberg equation To make the transition to standard quantum mechanics, the most important further addition was the quantum state vector, now written $|\psi\rangle$, which is the vector that the matrices act on. Without the state vector, it is not clear which particular motion the Heisenberg matrices are describing, since they include all the motions somewhere. The interpretation of the state vector, whose components are written $\psi_m$, was furnished by Born. This interpretation is statistical: the result of a measurement of the physical quantity corresponding to the matrix $A$ is random, with an average value equal to $$\langle A \rangle = \sum_{mn} \psi_m^{*}\,A_{mn}\,\psi_n.$$ Alternatively, and equivalently, the state vector gives the probability amplitude $\psi_n$ for the quantum system to be in the energy state $n$. Once the state vector was introduced, matrix mechanics could be rotated to any basis, where the $H$ matrix need no longer be diagonal. The Heisenberg equation of motion in its original form states that $A_{mn}$ evolves in time like a Fourier component, $$A_{mn}(t) = e^{i(E_m - E_n)t/\hbar}\,A_{mn}(0),$$ which can be recast in differential form $$\frac{dA_{mn}}{dt} = \frac{i}{\hbar}\,(E_m - E_n)\,A_{mn},$$ and it can be restated so that it is true in an arbitrary basis, by noting that the $H$ matrix is diagonal with diagonal values $E_m$: $$i\hbar\,\frac{dA}{dt} = AH - HA = [A, H].$$ This is now a matrix equation, so it holds in any basis. This is the modern form of the Heisenberg equation of motion. Its formal solution is: $$A(t) = e^{iHt/\hbar}\,A(0)\,e^{-iHt/\hbar}.$$ All these forms of the equation of motion above say the same thing, that $A(t)$ is equivalent to $A(0)$ through a basis rotation by the unitary matrix $e^{iHt/\hbar}$, a systematic picture elucidated by Dirac in his bra–ket notation. Conversely, by rotating the basis for the state vector at each time by $e^{iHt/\hbar}$, the time dependence in the matrices can be undone. The matrices are now time independent, but the state vector rotates: $$|\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle, \qquad i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H\,|\psi(t)\rangle.$$ This is the Schrödinger equation for the state vector, and this time-dependent change of basis amounts to transformation to the Schrödinger picture. In quantum mechanics in the Heisenberg picture the state vector $|\psi\rangle$ does not change with time, while an observable $A$ satisfies the Heisenberg equation of motion, $$\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A] + \frac{\partial A}{\partial t}.$$ The extra term is for operators which have an explicit time dependence, in addition to the time dependence from the unitary evolution discussed. 
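The equivalence of the two pictures asserted here is easy to verify numerically. The following sketch is an illustration only, not from the original article; it uses a random Hermitian $H$ and observable $A$, and it assumes SciPy's matrix exponential is available. It checks that $\langle\psi(t)|A|\psi(t)\rangle$ computed in the Schrödinger picture matches $\langle\psi|A(t)|\psi\rangle$ in the Heisenberg picture, with $A(t) = e^{iHt/\hbar} A\, e^{-iHt/\hbar}$ and $\hbar = 1$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 6
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)); H = H + H.conj().T
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)); A = A + A.conj().T
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

for t in (0.0, 0.5, 1.7):
    U = expm(-1j * H * t)                       # Schroedinger evolution operator
    schrodinger = (U @ psi).conj() @ A @ (U @ psi)
    A_t = U.conj().T @ A @ U                    # Heisenberg operator A(t)
    heisenberg = psi.conj() @ A_t @ psi
    print(np.isclose(schrodinger, heisenberg))  # True at every t
```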
The Heisenberg picture does not distinguish time from space, so it is better suited to relativistic theories than the Schrödinger equation. Moreover, the similarity to classical physics is more manifest: the Hamiltonian equations of motion for classical mechanics are recovered by replacing the commutator above by the Poisson bracket (see also below). By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture must be unitarily equivalent, as detailed below. Further results Matrix mechanics rapidly developed into modern quantum mechanics, and gave interesting physical results on the spectra of atoms. Wave mechanics Jordan noted that the commutation relations ensure that $P$ acts as a differential operator. The operator identity $$[P, X^2] = [P, X]\,X + X\,[P, X] = -2i\hbar\,X$$ allows the evaluation of the commutator of $P$ with any power of $X$, and it implies that $$[P, X^n] = -i\hbar\,n\,X^{n-1},$$ which, together with linearity, implies that a $P$-commutator effectively differentiates any analytic matrix function of $X$. Assuming limits are defined sensibly, this extends to arbitrary functions, but the extension need not be made explicit until a certain degree of mathematical rigor is required. Since $X$ is a Hermitian matrix, it should be diagonalizable, and it will be clear from the eventual form of $P$ that every real number can be an eigenvalue. This makes some of the mathematics subtle, since there is a separate eigenvector for every point in space. In the basis where $X$ is diagonal, an arbitrary state can be written as a superposition of states $|x\rangle$ with eigenvalues $x$, $$|\psi\rangle = \int \psi(x)\,|x\rangle\,dx,$$ so that $\psi(x) = \langle x|\psi\rangle$, and the operator $X$ multiplies each eigenvector by $x$: $$X\,\psi(x) = x\,\psi(x).$$ Define a linear operator $D$ which differentiates $\psi$, $$D\,\psi(x) = \frac{d\psi}{dx},$$ and note that $$DX - XD = 1,$$ so that the operator $-i\hbar D$ obeys the same commutation relation as $P$. Thus, the difference between $P$ and $-i\hbar D$ must commute with $X$, so it may be simultaneously diagonalized with $X$: its value acting on any eigenstate of $X$ is some function $f$ of the eigenvalue $x$. This function must be real, because both $P$ and $-i\hbar D$ are Hermitian. Rotating each state by a phase $\phi(x)$, that is, redefining the phase of the wavefunction, $$\psi(x) \to e^{i\phi(x)/\hbar}\,\psi(x),$$ the operator $-i\hbar D$ is redefined by an amount, $$-i\hbar D \to -i\hbar D - \phi'(x),$$ which means that, in the rotated basis, $P$ is equal to $-i\hbar D$. Hence, there is always a basis for the eigenvalues of $X$ where the action of $P$ on any wavefunction is known: $$P\,\psi(x) = -i\hbar\,\frac{d\psi}{dx},$$ and the Hamiltonian in this basis is a linear differential operator on the state-vector components, $$H\,\psi = -\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + V(x)\,\psi.$$ Thus, the equation of motion for the state vector is but a celebrated differential equation, $$i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi.$$ Since $D$ is a differential operator, in order for it to be sensibly defined, there must be eigenvalues of $X$ which neighbor every given value. This suggests that the only possibility is that the space of all eigenvalues of $X$ is all real numbers, and that $P$ is $-i\hbar D$, up to a phase rotation. To make this rigorous requires a sensible discussion of the limiting space of functions, and in this space this is the Stone–von Neumann theorem: any operators $X$ and $P$ which obey the commutation relations can be made to act on a space of wavefunctions, with $P$ a derivative operator. This implies that a Schrödinger picture is always available. 
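Jordan's observation can be illustrated on a discretized line. The sketch below is a finite-difference approximation, not from the original article; the grid size and the Gaussian test state are assumptions. It represents $X$ as multiplication by $x$ and $P$ as $-i\hbar\,d/dx$ via central differences, and checks that $[X, P]\,\psi = i\hbar\,\psi$ away from the grid boundary, with $\hbar = 1$:

```python
import numpy as np

n, L = 1000, 10.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
psi = np.exp(-x**2)                       # smooth test state, tiny at the edges

def P(f):
    """Central-difference approximation of -i d/dx."""
    return -1j * np.gradient(f, dx)

commutator = x * P(psi) - P(x * psi)      # (XP - PX) acting on psi
print(np.allclose(commutator[5:-5], 1j * psi[5:-5], atol=1e-3))  # True
```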
Matrix mechanics easily extends to many degrees of freedom in a natural way. Each degree of freedom has a separate $X$ operator and a separate effective differential operator $P$, and the wavefunction is a function of all the possible eigenvalues of the independent commuting $X$ variables. In particular, this means that a system of $N$ interacting particles in 3 dimensions is described by one vector whose components, in a basis where all the $X$ are diagonal, form a mathematical function of $3N$-dimensional space describing all their possible positions, effectively a much bigger collection of values than the mere collection of $N$ three-dimensional wavefunctions in one physical space. Schrödinger came to the same conclusion independently, and eventually proved the equivalence of his own formalism to Heisenberg's. Since the wavefunction is a property of the whole system, not of any one part, the description in quantum mechanics is not entirely local. The description of several quantum particles has them correlated, or entangled. This entanglement leads to strange correlations between distant particles which violate the classical Bell's inequality. Even if the particles can only be in just two positions, the wavefunction for $N$ particles requires $2^N$ complex numbers, one for each total configuration of positions. This is exponentially many numbers in $N$, so simulating quantum mechanics on a computer requires exponential resources. Conversely, this suggests that it might be possible to find quantum systems of size $N$ which physically compute the answers to problems which classically require $2^N$ bits to solve. This is the aspiration behind quantum computing. Ehrenfest theorem For time-independent operators $X$ and $P$, $\partial A/\partial t = 0$, so the Heisenberg equation above reduces to: $$i\hbar\,\frac{dA}{dt} = [A, H],$$ where the square brackets denote the commutator. For a Hamiltonian which is $$H = \frac{P^2}{2m} + V(X),$$ the $X$ and $P$ operators satisfy: $$\frac{dX}{dt} = \frac{P}{m}, \qquad \frac{dP}{dt} = -\nabla V(X),$$ where the first is classically the velocity, and the second is classically the force, or potential gradient. These reproduce Hamilton's form of Newton's laws of motion. In the Heisenberg picture, the $X$ and $P$ operators satisfy the classical equations of motion. Taking the expectation value of both sides of the equation shows that, in any state $|\psi\rangle$: $$m\,\frac{d}{dt}\langle X\rangle = \langle P\rangle, \qquad \frac{d}{dt}\langle P\rangle = -\langle\nabla V(X)\rangle.$$ So Newton's laws are exactly obeyed by the expected values of the operators in any given state. This is Ehrenfest's theorem, which is an obvious corollary of the Heisenberg equations of motion, but is less trivial in the Schrödinger picture, where Ehrenfest discovered it. 
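Ehrenfest's theorem can also be observed directly in a numerical evolution. The sketch below is a grid discretization of the oscillator with $\hbar = m = \omega = 1$; the grid, the displaced Gaussian initial state, and the finite-difference momentum are all assumptions, so the equality holds only to discretization accuracy. It checks that $d\langle X\rangle/dt = \langle P\rangle$ along the motion:

```python
import numpy as np

n, L = 300, 10.0
x = np.linspace(-L, L, n); dx = x[1] - x[0]
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2     # -(1/2) d^2/dx^2
H = T + np.diag(0.5 * x**2)                          # harmonic oscillator
E, V = np.linalg.eigh(H)

psi0 = np.exp(-(x - 2.0)**2 / 2)
psi0 = psi0 / np.linalg.norm(psi0)
c = V.T @ psi0                                       # energy-basis coefficients

def evolve(t):
    return V @ (np.exp(-1j * E * t) * c)

dt = 1e-3
for t in (0.3, 1.1):
    up, um, u = evolve(t + dt), evolve(t - dt), evolve(t)
    dX_dt = ((up.conj() * x) @ up - (um.conj() * x) @ um).real / (2 * dt)
    P_avg = (u.conj() @ (-1j * np.gradient(u, dx))).real
    print(np.isclose(dX_dt, P_avg, atol=1e-2))        # True: d<X>/dt = <P>
```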
Transformation theory In classical mechanics, a canonical transformation of phase space coordinates is one which preserves the structure of the Poisson brackets. The new variables $X'$, $P'$ have the same Poisson brackets with each other as the original variables $X$, $P$. Time evolution is a canonical transformation, since the phase space at any time is just as good a choice of variables as the phase space at any other time. The Hamiltonian flow is the canonical transformation: $$X \to X(t), \qquad P \to P(t).$$ Since the Hamiltonian can be an arbitrary function of $X$ and $P$, there are such infinitesimal canonical transformations corresponding to every classical quantity $G$, where $G$ serves as the Hamiltonian to generate a flow of points in phase space for an increment of time $s$. For a general function $A(X, P)$ on phase space, its infinitesimal change at every step $ds$ under this map is $$dA = \{A, G\}\,ds.$$ The quantity $G$ is called the infinitesimal generator of the canonical transformation. In quantum mechanics, the quantum analog $G$ is now a Hermitian matrix, and the equations of motion are given by commutators: $$dA = \frac{i}{\hbar}\,[G, A]\,ds.$$ The infinitesimal canonical motions can be formally integrated, just as the Heisenberg equation of motion was integrated: $$A' = U^{\dagger}\,A\,U, \qquad U = e^{iGs/\hbar},$$ where $s$ is an arbitrary parameter. The definition of a quantum canonical transformation is thus an arbitrary unitary change of basis on the space of all state vectors. $U$ is an arbitrary unitary matrix, a complex rotation in phase space: $$U^{\dagger} = U^{-1}.$$ These transformations leave the sum of the absolute square of the wavefunction components invariant, while they take states which are multiples of each other (including states which are imaginary multiples of each other) to states which are the same multiple of each other. The interpretation of the matrices is that they act as generators of motions on the space of states. For example, the motion generated by $P$ can be found by solving the Heisenberg equation of motion using $P$ as a Hamiltonian: $$dX = \frac{i}{\hbar}\,[P, X]\,ds = ds.$$ These are translations of the matrix $X$ by a multiple of the identity matrix: $$X \to X + s\,I.$$ This is the interpretation of the derivative operator $D$: $$e^{sD}\,\psi(x) = \psi(x + s);$$ the exponential of a derivative operator is a translation (so Lagrange's shift operator). The $X$ operator likewise generates translations in $P$. The Hamiltonian generates translations in time, the angular momentum generates rotations in physical space, and the operator $X^2 + P^2$ generates rotations in phase space. When a transformation, like a rotation in physical space, commutes with the Hamiltonian, the transformation is called a symmetry (behind a degeneracy) of the Hamiltonian – the Hamiltonian expressed in terms of rotated coordinates is the same as the original Hamiltonian. This means that the change in the Hamiltonian under the infinitesimal symmetry generator $L$ vanishes: $$dH = \frac{i}{\hbar}\,[L, H]\,ds = 0.$$ It then follows that the change in the generator under time translation also vanishes, $$\frac{dL}{dt} = \frac{i}{\hbar}\,[H, L] = 0,$$ so that the matrix $L$ is constant in time: it is conserved. The one-to-one association of infinitesimal symmetry generators and conservation laws was discovered by Emmy Noether for classical mechanics, where the commutators are Poisson brackets, but the quantum-mechanical reasoning is identical. In quantum mechanics, any unitary symmetry transformation yields a conservation law, since if the matrix $U$ has the property that $$U^{-1}\,H\,U = H,$$ it follows that $$H\,U = U\,H,$$ and that the time derivative of $U$ is zero – it is conserved. The eigenvalues of unitary matrices are pure phases, so that the value of a unitary conserved quantity is a complex number of unit magnitude, not a real number. Another way of saying this is that a unitary matrix is the exponential of $i$ times a Hermitian matrix, so that the additive conserved real quantity, the phase, is only well-defined up to an integer multiple of $2\pi$. Only when the unitary symmetry matrix is part of a family that comes arbitrarily close to the identity are the conserved real quantities single-valued, and then the demand that they are conserved becomes a much more exacting constraint. Symmetries which can be continuously connected to the identity are called continuous, and translations, rotations, and boosts are examples. Symmetries which cannot be continuously connected to the identity are discrete, and the operation of space-inversion, or parity, and charge conjugation are examples. The interpretation of the matrices as generators of canonical transformations is due to Paul Dirac. The correspondence between symmetries and matrices was shown by Eugene Wigner to be complete, if antiunitary matrices which describe symmetries which include time-reversal are included. 
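The link between symmetries and conservation laws is easy to demonstrate numerically. In the sketch below, which is not from the original article, the Hamiltonian and the generator are random Hermitian matrices built to commute by construction; a generator $G$ with $[G, H] = 0$ then yields an expectation value $\langle G\rangle$ that is constant under time evolution:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 6
# Make H and G commute by diagonalizing both in the same random unitary basis.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
H = Q @ np.diag(rng.normal(size=n)) @ Q.conj().T     # Hermitian Hamiltonian
G = Q @ np.diag(rng.normal(size=n)) @ Q.conj().T     # Hermitian symmetry generator

U = expm(1j * G)                                     # finite symmetry transformation
print(np.allclose(U @ U.conj().T, np.eye(n)))        # True: U is unitary
print(np.allclose(U @ H, H @ U))                     # True: U commutes with H

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
g0 = (psi.conj() @ G @ psi).real
for t in (0.7, 2.3):
    psi_t = expm(-1j * H * t) @ psi                  # hbar = 1
    print(np.isclose((psi_t.conj() @ G @ psi_t).real, g0))   # True: <G> conserved
```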
Selection rules It was physically clear to Heisenberg that the absolute squares of the matrix elements of $X$, which are the Fourier coefficients of the oscillation, would yield the rate of emission of electromagnetic radiation. In the classical limit of large orbits, if a charge with position $X(t)$ and charge $q$ is oscillating next to an equal and opposite charge at position 0, the instantaneous dipole moment is $q\,X(t)$, and the time variation of this moment translates directly into the space-time variation of the vector potential, which yields nested outgoing spherical waves. For atoms, the wavelength of the emitted light is about 10,000 times the atomic radius, and the dipole moment is the only contribution to the radiative field, while all other details of the atomic charge distribution can be ignored. Ignoring back-reaction, the power radiated in each outgoing mode is a sum of separate contributions from the square of each independent time Fourier mode of the dipole moment. Now, in Heisenberg's representation, the Fourier coefficients of the dipole moment are the matrix elements of $X$. This correspondence allowed Heisenberg to provide the rule for the transition intensities, the fraction of the time that, starting from an initial state $i$, a photon is emitted and the atom jumps to a final state $j$. This then allowed the magnitude of the matrix elements to be interpreted statistically: they give the intensity of the spectral lines, the probability for quantum jumps from the emission of dipole radiation. Since the transition rates are given by the matrix elements of $X$, wherever $X_{ij}$ is zero, the corresponding transition should be absent. These were called the selection rules, which were a puzzle until the advent of matrix mechanics. An arbitrary state of the hydrogen atom, ignoring spin, is labelled by $|n;\ell,m\rangle$, where the value of $\ell$ is a measure of the total orbital angular momentum and $m$ is its $z$-component, which defines the orbit orientation. The components of the angular momentum pseudovector are $$L_x = Y P_z - Z P_y, \qquad L_y = Z P_x - X P_z, \qquad L_z = X P_y - Y P_x,$$ where the products in this expression are independent of order and real, because different components of $X$ and $P$ commute. The commutation relations of $L_z$ with all three coordinate matrices $X$, $Y$, $Z$ (or with any vector) are easy to find: $$[L_z, X] = i\hbar\,Y, \qquad [L_z, Y] = -i\hbar\,X, \qquad [L_z, Z] = 0,$$ which confirms that the operator $L_z$ generates rotations between the three components of the vector of coordinate matrices. From this, the commutator of $L_z$ and the coordinate matrices $X$, $Y$, $Z$ can be read off. This means that the quantities $X + iY$ and $X - iY$ have a simple commutation rule: $$[L_z,\,X + iY] = \hbar\,(X + iY), \qquad [L_z,\,X - iY] = -\hbar\,(X - iY).$$ Just like the matrix elements of $X + iP$ and $X - iP$ for the harmonic oscillator Hamiltonian, this commutation law implies that these operators only have certain off-diagonal matrix elements in states of definite $m$, meaning that the matrix $(X + iY)$ takes an eigenvector of $L_z$ with eigenvalue $m$ to an eigenvector with eigenvalue $m + 1$. Similarly, $(X - iY)$ decreases $m$ by one unit, while $Z$ does not change the value of $m$. So, in a basis of states where $L^2$ and $L_z$ have definite values, the matrix elements of any of the three components of the position are zero, except when $m$ is the same or changes by one unit. This places a constraint on the change in total angular momentum. Any state can be rotated so that its angular momentum is in the $z$-direction as much as possible, where $m = \ell$. The matrix element of the position acting on $|n;\ell,\ell\rangle$ can only produce values of $m$ which are bigger by one unit, so that if the coordinates are rotated so that the final state is $|n';\ell',\ell'\rangle$, the value of $\ell'$ can be at most one bigger than the biggest value of $\ell$ that occurs in the initial state. So $\ell'$ is at most $\ell + 1$. The matrix elements vanish for $\ell' > \ell + 1$, and the reverse matrix element is determined by Hermiticity, so these vanish also when $\ell' < \ell - 1$: dipole transitions are forbidden with a change in angular momentum of more than one unit. 
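A selection rule of exactly this kind can be computed numerically in a simpler setting. The sketch below is not from the original article; it uses the harmonic oscillator, where the analogous rule is $\Delta n = \pm 1$ for the position operator, as an easily checked stand-in for the angular-momentum rule. It diagonalizes a grid Hamiltonian and shows that $X_{ij}$ vanishes except between neighboring levels:

```python
import numpy as np

n, L = 400, 12.0
x = np.linspace(-L, L, n); dx = x[1] - x[0]
T = (np.diag(np.full(n, 1.0)) - np.diag(np.full(n - 1, 0.5), 1)
     - np.diag(np.full(n - 1, 0.5), -1)) / dx**2     # -(1/2) d^2/dx^2
H = T + np.diag(0.5 * x**2)                          # oscillator, hbar = m = 1
E, V = np.linalg.eigh(H)

X = V.T @ np.diag(x) @ V                             # position in the energy basis
np.set_printoptions(precision=3, suppress=True)
print(np.abs(X[:6, :6]))    # only the |i - j| = 1 entries are nonzero
```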
Sum rules The Heisenberg equation of motion determines the matrix elements of $P$ in the Heisenberg basis from the matrix elements of $X$, $P_{nk} = im(E_n - E_k)X_{nk}$ (taking $\hbar = 1$, with $m$ the particle mass), which turns the diagonal part of the canonical commutation relation $[X, P] = i$ into a sum rule for the magnitude of the matrix elements: $\sum_k 2m(E_k - E_n)|X_{nk}|^2 = 1$. This yields a relation for the sum of the spectroscopic intensities to and from any given state, although to be absolutely correct, contributions from the radiative capture probability for unbound scattering states must be included in the sum: $\sum_k f_{nk} = 1$, where $f_{nk} = 2m(E_k - E_n)|X_{nk}|^2$ are the oscillator strengths (the Thomas–Reiche–Kuhn sum rule).
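The sum rule can be verified for the harmonic oscillator treated earlier in the article. A minimal sketch, not part of the original text, assuming units $\hbar = m = \omega = 1$ and a truncated number basis:

```python
import numpy as np

N = 40                            # basis truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)  # annihilation operator in the number basis
X = (a + a.T) / np.sqrt(2)        # position matrix with hbar = m = omega = 1
E = n + 0.5                       # oscillator energy levels

# Oscillator strengths f_{0k} = 2 (E_k - E_0) |X_{k0}|^2 from the ground state.
f = 2 * (E - E[0]) * np.abs(X[:, 0])**2
print(f.sum())                    # -> 1.0, the Thomas-Reiche-Kuhn sum rule
```

Only the $k = 1$ term contributes for the oscillator; in multi-level atoms the unit total is shared among many lines, which is why the rule constrains spectroscopic intensities.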
Physical sciences
Quantum mechanics
Physics
396345
https://en.wikipedia.org/wiki/Falkland%20Islands%20wolf
Falkland Islands wolf
The Falkland Islands wolf (Dusicyon australis), also known as the warrah and occasionally as the Falkland Islands dog, Falkland Islands fox, warrah fox, or Antarctic wolf, was the only native land mammal of the Falkland Islands. This endemic canid became extinct in 1876, the first known canid to have become extinct in historical times. Traditionally, it had been supposed that the most closely related genus was Lycalopex, including the culpeo, which has been introduced to the Falkland Islands in modern times. A 2009 cladistic analysis of DNA identified the Falkland Islands wolf's closest living relative as the maned wolf (Chrysocyon brachyurus), an unusually long-legged, fox-like South American canid, from which it separated about 6.7 million years ago. However, the Falkland Islands wolf diverged from its mainland ancestor Dusicyon avus very recently, around 16,000 years ago. Dusicyon avus persisted on the South American mainland until around 400 years ago. The Falkland Islands wolf existed on both West and East Falkland, but Charles Darwin was uncertain if they were differentiated varieties or subspecies. Its fur had a tawny colour and the tip of the tail was white. Its diet is unknown but, owing to the absence of native rodents on the Falklands, probably consisted of ground-nesting birds such as geese and penguins, seal pups, and insects, as well as seashore scavenging. It has sometimes been said that it may have lived in burrows. History The first recorded sighting was by Capt. John Strong in 1690. Captain Strong took one on his ship, but during the voyage back to Europe it became frightened by the firing of the ship's cannon and jumped overboard. Louis Antoine de Bougainville, who established the first settlement in the Falkland Islands, termed it a loup-renard ("wolf-fox"). The name "warrah" is an anglicised approximation of the term aguará (meaning "fox" in Guaraní, a Native American language), because of its similarity to the maned wolf (aguará guazú). When Charles Darwin visited the islands in 1833 he found the species present in both West and East Falkland and tame. However, at the time of his visit the animal was already very rare on East Falkland, and even on West Falkland its numbers were declining rapidly. By 1865, it was no longer found on the eastern part of East Falkland. Darwin predicted that the animal would join the dodo among the extinct within "a very few years." It was hunted for its valuable fur, and settlers, regarding the wolf as a threat to their sheep, poisoned it. However, the belief that the Falkland Islands wolf was a threat to sheep was probably due to the sheep mistaking the Falkland Islands wolves for dogs (especially at night) and, in terror, running into bogs and swamps, where they became lost. There were no forests for the animal to hide in, and it had no fear of humans; it was possible to lure the animal with a chunk of meat held in one hand, and kill it with a knife or stick held in the other. However, it would defend itself occasionally if it needed to, as Admiral George Grey noted when a party landed on West Falkland at Port Edgar on 17 December 1836. A live wolf was taken to London Zoo, England in 1868. Another "Antarctic wolf" arrived in 1870. Neither animal survived long. Only a dozen or so museum specimens exist today. In 1880, after the animal had become extinct, Thomas Huxley classified it as related to the coyote. In 1914, Oldfield Thomas moved it to the genus Dusicyon, with the culpeo and other South American foxes.
(These other canids have since been removed to Lycalopex.) Darwin's description Darwin, writing about his 1834 visit to the Falklands in his Journal and Remarks (The Voyage of the Beagle), gave an account of Canis antarcticus. Biogeography and evolution Darwin's comments When organising his notes on the last stage of the Beagle expedition, Darwin wrote of his growing suspicions that the differences between the various Galápagos Islands mockingbirds and tortoises, as well as the possible dissimilarity of West Falkland and East Falkland Islands wolves, were but variants that differed depending on which island they came from. The word "would" was added after this passage was first written, suggesting a cautious qualification of his initial bold statement. He later wrote that such facts "seemed to me to throw some light on the origin of species". Related species A DNA analysis and a study of comparative brain anatomy suggest that the closest living relative of the Falkland Islands wolf is the South American maned wolf. Their most recent common ancestor was estimated to have lived some 6 million years ago and was close to the most recent common ancestor of all South American canids, Eucyon or a close relative. It would seem that the lineages of the maned wolf and the Falkland Islands wolf separated in North America; canids did not appear in South America until roughly 3 million years ago in a paleozoogeographical event called the Great American Biotic Interchange, in which the continents of North and South America were newly connected by the formation of the Isthmus of Panama. However, no fossil from North America can be assigned to the Falkland Islands wolf or its immediate ancestors. Dusicyon avus, known from fossils from southern South America as recent as 400 years ago, was the closest known relative of the Falkland Islands wolf. In terms of skull shape and feeding habits, the animal was an opportunistic predator, more like a jackal. Biogeographical isolation on the Falklands The route by which the Falkland Islands wolf was established in the islands was unknown for a long time, as the islands have never been connected to the mainland and there are no other native land mammals. No other oceanic island as remote as the Falklands has a native canid; the island fox of California in the US and Darwin's fox of Chile both inhabit islands much closer to a continent. Berta and other authors suggest that it was unlikely that the wolf's ancestors could have survived the last Ice Age on the Falklands and they must therefore have arrived later, within the last ten thousand years, crossing a wide expanse of the South Atlantic. Its close relative, Dusicyon avus, did survive in South America until a few thousand years ago, but swimming such a distance or even drifting on a floating log would appear effectively impossible for the wolf. A study by a University of Maine team in 2021 reported evidence of potential visitation to the islands by indigenous South Americans before the Age of Discovery. The authors speculated that the ancestors of the wolf could have been domesticated and brought with the visitors. The oldest known remains of Falkland Islands wolves date to approximately 3396–3752 years Before Present, found at Spring Point Farm in West Falkland, the only place in the Falkland Islands where subfossil bones of the wolf have been found. The scarcity of remains is likely due to the acidic peaty soil of most of the Falklands, which rapidly degrades bones.
Genetics DNA of the extinct mainland relative, D. avus, analyzed in 2013 suggests that its genetic history diverged from the Falkland Islands wolf only some 16,000 years ago, during the last glacial phase. This is strong evidence that the ancestors of the wolf were isolated on the islands only since the last glacial maximum. A 2009 analysis of mitochondrial DNA from five museum specimens of the Falkland Islands wolf indicated that they had multiple mitochondrial haplotypes whose most recent common ancestor lived about 330,000 years ago, giving some idea of the genetic diversity of the founding population. Ice Age land bridge An Ice Age land bridge or ice connection between the Falkland Islands and South America, enabling the species' ancestors to traverse the gap, has long been suggested. There was never a true land bridge between the islands and South America, but submarine terraces have been found on the Argentine coastal shelf, formed by low sea-stands during the last glacial phase. This suggests that there was a shallow strait as narrow as 20 km, which may have frozen completely at times. It is possible that the founding population of the wolf crossed on this ice bridge during the last Ice Age. The absence of other mainland mammals on the islands might be due to the difficulty of an ice crossing. In culture Locations named after the wolf include Fox Bay, a bay and settlement on West Falkland, and the Warrah River, West Falkland.
Biology and health sciences
Canines
Animals
396550
https://en.wikipedia.org/wiki/Cargo
Cargo
In transportation, freight refers to goods conveyed by land, water or air, while cargo refers specifically to freight when conveyed via water or air. In economics, freight refers to goods transported at a freight rate for commercial gain. The term cargo is also used in the case of goods in the cold chain, because the perishable inventory is always in transit towards a final end-use, even when it is held in cold storage or other similar climate-controlled facilities, including warehouses. Multi-modal container units, designed as reusable carriers to facilitate unit load handling of the goods contained, are also referred to as cargo, especially by shipping lines and logistics operators. When empty containers are shipped, each unit is documented as cargo, and when goods are stored within, the contents are termed containerized cargo. Similarly, aircraft ULD boxes are also documented as cargo, with an associated packing list of the items contained within. Description Marine Seaport terminals handle a wide range of maritime cargoes. Break bulk / general cargo are goods that are handled and stowed piecemeal to some degree, as opposed to cargo in bulk or modern shipping containers. Such cargo is typically bundled in batches for hoisting, either with cargo nets, slings, or crates, or stacked on trays, pallets, or skids; at best (and today mostly) it is lifted directly into and out of a vessel's holds, but otherwise onto and off its deck, by cranes or derricks present on the dock or on the ship itself. If hoisted on deck instead of straight into the hold, liftable or rolling unit loads, like bags, barrels/vats, boxes, cartons and crates, then have to be man-handled and stowed competently by stevedores. Securing break bulk and general freight inside a vessel includes the use of dunnage. Where no hoisting equipment was available, break bulk was previously man-carried on and off the ship over a plank, or passed along a human chain. Since the 1960s, the volume of break bulk cargo has enormously declined worldwide in favour of mass adoption of containers. Bulk cargo, such as salt, oil, tallow, and scrap metal, is usually defined as commodities that are neither on pallets nor in containers. Bulk cargoes are not handled as individual pieces, the way heavy-lift and project cargo are. Alumina, grain, gypsum, logs, and wood chips, for instance, are bulk cargoes. Bulk cargo is classified as liquid or dry. Air Air cargo refers to any goods shipped by air, whereas air freight refers specifically to goods transported in the cargo hold of a dedicated cargo plane. Aircraft were first used to carry mail as cargo in 1911. Eventually manufacturers started designing aircraft for other types of freight as well. There are many commercial aircraft suitable for carrying cargo, such as the Boeing 747 and the An‑124, which was purpose-built for easy conversion into a cargo aircraft. Such large aircraft employ standardized quick-loading containers known as unit load devices (ULDs), comparable to ISO containers on cargo ships. ULDs can be stowed in the lower decks (front and rear) of several wide-body aircraft, and on the main deck of some narrow-bodies. Some dedicated cargo planes have a large opening front for loading. Air freight shipments are very similar to LTL shipments in terms of size and packaging requirements. However, air freight or air cargo shipments typically need to move much faster than ground freight. While shipments move faster than standard LTL, air shipments do not always actually move by air.
Air shipments may be booked directly with the carriers, through brokers or with online marketplace services. In the US, there are certain restrictions on cargo moving via air freight on passenger aircraft, most notably the transport of rechargeable lithium-ion battery shipments. Shippers in the US must be approved and be "known" in the Known Shipper Management System before their shipments can be tendered on passenger aircraft. Rail Trains are capable of transporting a large number of containers that come from shipping ports. Trains are also used to transport water, cement, grain, steel, wood and coal. They are used because they can carry a large amount and generally have a direct route to the destination. Under the right circumstances, freight transport by rail is more economical and energy efficient than by road, mainly when carried in bulk or over long distances. The main disadvantage of rail freight is its lack of flexibility. For this reason, rail has lost much of the freight business to road transport. Rail freight is often subject to transshipment costs, since it must be transferred from one mode of transportation to another. Practices such as containerization aim at minimizing these costs. When transporting point-to-point bulk loads such as cement or grain, with specialised bulk handling facilities at the rail sidings, the rail mode of transport remains the most convenient and preferred option. Many governments encourage shippers to use rail rather than road transport because of rail's lower environmental impact. Road Many firms, such as Parcelforce, FedEx, and R+L Carriers, transport all types of cargo by road. Delivering everything from letters to houses to cargo containers, these firms offer fast, sometimes same-day, delivery. A good example of road cargo is food, as supermarkets require deliveries daily to replenish their shelves with goods. Retailers and manufacturers of all kinds rely upon delivery trucks, be they full size semi trucks or smaller delivery vans. These smaller road haulage companies constantly strive for the best routes and prices to ship out their products. Indeed, the level of commercial freight transported by smaller businesses is often a good barometer of healthy economic development, as these vehicles move literally anything, including the parcels and mail carried by couriers. Less-than-truckload freight Less than truckload (LTL) cargo is the first category of freight shipment, representing the majority of freight shipments and the majority of business-to-business (B2B) shipments. LTL shipments are also often referred to as motor freight and the carriers involved are referred to as motor carriers. LTL shipments vary widely in weight, and the average single piece of LTL freight is roughly the size of a standard pallet. Long freight and/or large freight are subject to extreme length and cubic capacity surcharges. Trailers used in LTL service come in a range of lengths; shorter trailers are the standard for city deliveries and are used most in tight and residential environments. The shipments are usually palletized, stretch [shrink]-wrapped and packaged for a mixed-freight environment. Unlike express or parcel, LTL shippers must provide their own packaging, as carriers do not provide any packaging supplies or assistance. However, circumstances may require crating or other substantial packaging.
Truckload freight In the United States, shipments larger than about 15,000 pounds (6,800 kg) are typically classified as truckload (TL) freight. This is because it is more efficient and economical for a large shipment to have exclusive use of one larger trailer rather than share space on a smaller LTL trailer. By the Federal Bridge Gross Weight Formula, the total weight of a loaded truck (tractor and trailer, 5-axle rig) cannot exceed 80,000 pounds (36,000 kg) in the United States. In ordinary circumstances, long-haul equipment will weigh about 35,000 pounds (16,000 kg), leaving about 45,000 pounds (20,000 kg) of freight capacity. Similarly, a load is limited to the space available in the trailer, normally 48 or 53 feet (14.6 or 16.2 m) long, 102 inches (2.6 m) wide, and about 13 feet 6 inches (4.1 m) high overall. While express, parcel and LTL shipments are always intermingled with other shipments on a single piece of equipment and are typically reloaded across multiple pieces of equipment during their transport, TL shipments usually travel as the only shipment on a trailer. In fact, TL shipments usually deliver on exactly the same trailer as they are picked up on.
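The Federal Bridge Gross Weight Formula mentioned above is one line of arithmetic: W = 500(LN/(N − 1) + 12N + 36), where W is the permitted gross weight in pounds, L the spacing in feet between the outer axles, and N the number of axles. A minimal sketch, not from the original text (the example axle spacing is illustrative):

```python
def bridge_formula_limit(axle_spacing_ft: float, num_axles: int) -> float:
    """Federal Bridge Gross Weight Formula, capped at the 80,000 lb federal limit.

    W = 500 * (L*N/(N-1) + 12*N + 36), in pounds.
    """
    L, N = axle_spacing_ft, num_axles
    w = 500 * (L * N / (N - 1) + 12 * N + 36)
    return min(w, 80_000)

# A typical 5-axle tractor-trailer with 53 ft between the outer axles:
print(bridge_formula_limit(53, 5))  # 80000 (the raw formula gives 81,125, capped)
```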
Shipment categories Freight is typically categorized by: the type of item being carried (for example, a kettle could fit into the category "household goods"); how large the shipment is, in terms of both item size and quantity; and how long the item for delivery will be in transit. Household goods (HHG) include furniture, art and similar items. Express: Very small business or personal items like envelopes are considered overnight express or express letter shipments. These shipments are rarely over a few kilograms or pounds and almost always travel in the carrier's own packaging. Express shipments almost always travel some distance by air. An envelope may go coast to coast in the United States overnight or it may take several days, depending on the service options and prices chosen by the shipper. Parcel: Larger items like small boxes are considered parcels or ground shipments. These shipments are small, with no single piece of the shipment weighing more than a person can readily handle. Parcel shipments are always boxed, sometimes in the shipper's packaging and sometimes in carrier-provided packaging. Service levels are again variable, but most ground shipments move a fixed distance per day. Depending on the package's origin, it can travel from coast to coast in the United States in about four days. Parcel shipments rarely travel by air and typically move via road and rail. Parcels represent the majority of business-to-consumer (B2C) shipments. Freight: Beyond HHG, express, and parcel shipments, movements are termed freight shipments. Shipping costs An LTL shipper often realizes savings by utilizing a freight broker, online marketplace or another intermediary, instead of contracting directly with a trucking company. Brokers can shop the marketplace and obtain lower rates than most smaller shippers can obtain directly. In the LTL marketplace, intermediaries typically receive 50% to 80% discounts from published rates, whereas a small shipper may only be offered a 5% to 30% discount by the carrier. Intermediaries are licensed by the DOT and are required to provide proof of insurance. Truckload (TL) carriers usually charge a rate per kilometre or mile. The rate varies depending on the distance, geographic location of the delivery, items being shipped, equipment type required, and service times required. TL shipments usually receive a variety of surcharges very similar to those described for LTL shipments above. There are thousands more small carriers in the TL market than in the LTL market. Therefore, the use of transportation intermediaries or brokers is widespread. Another cost-saving method is facilitating pickups or deliveries at the carrier's terminals. Carriers or intermediaries can provide shippers with the address and phone number for the closest shipping terminal to the origin and/or destination. By doing this, shippers avoid any accessorial fees that might normally be charged for liftgate, residential pickup/delivery, inside pickup/delivery, or notifications/appointments. Shipping experts optimize their service and costs by sampling rates from several carriers, brokers and online marketplaces. When obtaining rates from different providers, shippers may find a wide range in the pricing offered. If a shipper in the United States uses a broker, freight forwarder or another transportation intermediary, it is common for the shipper to receive a copy of the carrier's Federal Operating Authority. Freight brokers and intermediaries are also required by Federal Law to be licensed by the Federal Highway Administration. Experienced shippers avoid unlicensed brokers and forwarders because a broker operating outside the law, without a Federal Operating License, leaves the shipper with no protection in case of a problem. Also, shippers typically ask for a copy of the broker's insurance certificate and any specific insurance that applies to the shipment. Overall, shipping costs have fallen over the past decades. A further drop in shipping costs in the future might be realized through the application of improved 3D printing technologies. Security concerns Governments are very concerned with cargo shipment, as it may bring security risks to a country. Therefore, many governments have enacted rules and regulations, administered by a customs agency, for the handling of cargo to minimize risks of terrorism and other crime. Governments are mainly concerned with cargo entering through a country's borders. The United States has been one of the leaders in securing cargo, seeing it as a matter of national security. Since the terrorist attacks of September 11, 2001, attention has focused on the security of the more than 6 million cargo containers that enter United States ports each year. The US government's response to this threat includes the Container Security Initiative (CSI), a program intended to help increase security for containerized cargo shipped to the United States from around the world. Europe is also focusing on this issue, with several EU-funded projects underway. Stabilization Many ways and materials are available to stabilize and secure cargo in various modes of transport. Conventional load securing methods and materials such as steel strapping and plastic/wood blocking and bracing have been used for decades and are still widely used. Present load-securing methods offer several other options, including polyester strapping and lashing, synthetic webbings and dunnage bags, also known as airbags or inflatable bags. Practical advice on stabilization is given in the International Guidelines on Safe Load Securing for Road Transport.
Technology
Basics_11
null
21035
https://en.wikipedia.org/wiki/Migraine
Migraine
Migraine is a genetically influenced complex neurological disorder characterized by episodes of moderate-to-severe headache, most often unilateral and generally associated with nausea and light and sound sensitivity. Other characterizing symptoms may include vomiting, cognitive dysfunction, allodynia, and dizziness. Exacerbation of headache symptoms during physical activity is another distinguishing feature. Up to one-third of people with migraine experience aura, a premonitory period of sensory disturbance widely accepted to be caused by cortical spreading depression at the onset of a migraine attack. Although primarily considered to be a headache disorder, migraine is highly heterogeneous in its clinical presentation and is better thought of as a spectrum disease rather than a distinct clinical entity. Disease burden can range from episodic discrete attacks to chronic disease. Migraine is believed to be caused by a mixture of environmental and genetic factors that influence the excitation and inhibition of nerve cells in the brain. An incomplete "vascular hypothesis" postulated that the aura of migraine is produced by vasoconstriction and the headache of migraine is produced by vasodilation. However, the vasoconstrictive mechanism has been disproven, and the role of vasodilation in migraine pathophysiology is uncertain. The accepted hypothesis suggests that multiple primary neuronal impairments lead to a series of intracranial and extracranial changes, triggering a physiological cascade that leads to migraine symptomatology. Initial recommended treatment for acute attacks is with over-the-counter analgesics (pain medication) such as ibuprofen and paracetamol (acetaminophen) for headache, antiemetics (anti-nausea medication) for nausea, and the avoidance of migraine triggers. Specific medications such as triptans, ergotamines, or calcitonin gene-related peptide (CGRP) receptor antagonists may be used in those experiencing headaches that do not respond to the over-the-counter pain medications. For people who experience four or more attacks per month, or who could otherwise benefit from prevention, prophylactic medication is recommended. Commonly prescribed prophylactic medications include beta blockers like propranolol, anticonvulsants like sodium valproate, antidepressants like amitriptyline, and other off-label classes of medications. Preventive medications inhibit migraine pathophysiology through various mechanisms, such as blocking calcium and sodium channels, blocking gap junctions, and inhibiting matrix metalloproteinases, among other mechanisms. Non-pharmacological preventive therapies include nutritional supplementation, dietary interventions, sleep improvement, and aerobic exercise. In 2018, erenumab, the first medication of a new class of drugs specifically designed for migraine prevention, the calcitonin gene-related peptide (CGRP) receptor antagonists, was approved by the FDA. As of July 2023, the FDA has approved eight drugs that act on the CGRP system for use in the treatment of migraine. Globally, approximately 15% of people are affected by migraine. In the Global Burden of Disease Study, conducted in 2010, migraine ranked as the third-most prevalent disorder in the world. It most often starts at puberty and is worst during middle age, and it is one of the most common causes of disability. Signs and symptoms Migraine typically presents with self-limited, recurrent severe headache associated with autonomic symptoms.
About 15–30% of people living with migraine experience episodes with aura, and they also frequently experience episodes without aura. The severity of the pain, duration of the headache, and frequency of attacks are variable. A migraine attack lasting longer than 72 hours is termed status migrainosus. There are four possible phases to a migraine attack, although not all the phases are necessarily experienced: the prodrome, which occurs hours or days before the headache; the aura, which immediately precedes the headache; the pain phase, also known as the headache phase; and the postdrome, the effects experienced following the end of a migraine attack. Migraine is associated with major depression, bipolar disorder, anxiety disorders, and obsessive–compulsive disorder. These psychiatric disorders are approximately 2–5 times more common in people with migraine without aura, and 3–10 times more common in people with migraine with aura. Prodrome phase Prodromal or premonitory symptoms occur in about 60% of those with migraine, with an onset that can range from two hours to two days before the start of pain or the aura. These symptoms may include a wide variety of phenomena, including altered mood, irritability, depression or euphoria, fatigue, craving for certain food(s), stiff muscles (especially in the neck), constipation or diarrhea, and sensitivity to smells or noise. This may occur in those with either migraine with aura or migraine without aura. Neuroimaging indicates the limbic system and hypothalamus as the origin of prodromal symptoms in migraine. Aura phase Aura is a transient focal neurological phenomenon that occurs before or during the headache. Aura appears gradually over a number of minutes (usually occurring over 5–60 minutes) and generally lasts less than 60 minutes. Symptoms can be visual, sensory or motoric in nature, and many people experience more than one. Visual effects occur most frequently: they occur in up to 99% of cases and in more than 50% of cases are not accompanied by sensory or motor effects. If any symptom remains after 60 minutes, the state is known as persistent aura. Visual disturbances often consist of a scintillating scotoma (an area of partial alteration in the field of vision which flickers and may interfere with a person's ability to read or drive). These typically start near the center of vision and then spread out to the sides with zigzagging lines which have been described as looking like fortifications or walls of a castle. Usually the lines are in black and white but some people also see colored lines. Some people lose part of their field of vision, known as hemianopsia, while others experience blurring. Sensory auras are the second most common type; they occur in 30–40% of people with auras. Often a feeling of pins-and-needles begins on one side in the hand and arm and spreads to the nose–mouth area on the same side. Numbness usually occurs after the tingling has passed, with a loss of position sense. Other symptoms of the aura phase can include speech or language disturbances, world spinning, and less commonly motor problems. Motor symptoms indicate that this is a hemiplegic migraine, and weakness often lasts longer than one hour, unlike other auras. Auditory hallucinations or delusions have also been described. Pain phase Classically the headache is unilateral, throbbing, and moderate to severe in intensity. It usually comes on gradually and is aggravated by physical activity during a migraine attack.
However, the effects of physical activity on migraine are complex, and some researchers have concluded that, while exercise can trigger migraine attacks, regular exercise may have a prophylactic effect and decrease frequency of attacks. The feeling of pulsating pain is not in phase with the pulse. In more than 40% of cases, however, the pain may be bilateral (both sides of the head), and neck pain is commonly associated with it. Bilateral pain is particularly common in those who have migraine without aura. Less commonly, pain may occur primarily in the back or top of the head. The pain usually lasts 4 to 72 hours in adults; however, in young children it frequently lasts less than 1 hour. The frequency of attacks is variable, from a few in a lifetime to several a week, with the average being about one a month. The pain is frequently accompanied by nausea, vomiting, sensitivity to light, sensitivity to sound, sensitivity to smells, fatigue, and irritability. Many thus seek a dark and quiet room. In a basilar migraine, a migraine with neurological symptoms related to the brain stem or with neurological symptoms on both sides of the body, common effects include a sense of the world spinning, light-headedness, and confusion. Nausea occurs in almost 90% of people, and vomiting occurs in about one-third. Other symptoms may include blurred vision, nasal stuffiness, diarrhea, frequent urination, pallor, or sweating. Swelling or tenderness of the scalp may occur, as can neck stiffness. Associated symptoms are less common in the elderly. Silent migraine Sometimes, aura occurs without a subsequent headache. This is known in modern classification as a typical aura without headache, or acephalgic migraine in previous classification, or commonly as a silent migraine. However, silent migraine can still produce debilitating symptoms, with visual disturbance, vision loss in half of both eyes, alterations in color perception, and other sensory problems, like sensitivity to light, sound, and odors. It can last from 15 to 30 minutes, usually no longer than 60 minutes, and it can recur or appear as an isolated event. Postdrome The migraine postdrome could be defined as that constellation of symptoms occurring once the acute headache has settled. Many report a sore feeling in the area where the migraine was, and some report impaired thinking for a few days after the headache has passed. The person may feel tired or "hung over" and have head pain, cognitive difficulties, gastrointestinal symptoms, mood changes, and weakness. According to one summary, "Some people feel unusually refreshed or euphoric after an attack, whereas others note depression and malaise." Cause The underlying cause of migraine is unknown. However, it is believed to be related to a mix of environmental and genetic factors. Migraine runs in families in about two-thirds of cases and rarely occurs due to a single gene defect. While migraine attacks were once believed to be more common in those of high intelligence, this does not appear to be true. A number of psychological conditions are associated, including depression, anxiety, and bipolar disorder. The success of surgical migraine treatment by decompression of extracranial sensory nerves adjacent to vessels suggests that people with migraine may have an anatomical predisposition to neurovascular compression, which may be caused by both intracranial and extracranial vasodilation due to migraine triggers.
This, along with the existence of numerous cranial neural interconnections, may explain the multiple cranial nerve involvement and consequent diversity of migraine symptoms. Genetics Studies of twins indicate a 34–51% genetic influence on the likelihood of developing migraine. This genetic relationship is stronger for migraine with aura than for migraine without aura. It is clear from family and population studies that migraine is a complex disorder, where numerous genetic risk variants exist, and where each variant increases the risk of migraine marginally. It is also known that having several of these risk variants increases the risk by a small to moderate amount. Single gene disorders that result in migraine are rare. One of these is known as familial hemiplegic migraine, a type of migraine with aura, which is inherited in an autosomal dominant fashion. Four genes have been shown to be involved in familial hemiplegic migraine. Three of these genes are involved in ion transport. The fourth is the axonal protein PRRT2, associated with the exocytosis complex. Another genetic disorder associated with migraine is CADASIL syndrome or cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy. One meta-analysis found a protective effect from angiotensin converting enzyme polymorphisms on migraine. The TRPM8 gene, which codes for a cation channel, has been linked to migraine. The common forms of migraine are polygenic, where common variants of numerous genes contribute to the predisposition for migraine. These genes fall into three categories: those increasing the risk of migraine in general, those specific to migraine with aura, and those specific to migraine without aura. Three of these genes, CALCA, CALCB, and HTR1F, are already targets for migraine-specific treatments. Five genes, PALMD, ABO, LRRK2, CACNA1A, and PRRT2, confer risk specific to migraine with aura, and 13 genes are specific to migraine without aura. By accumulating the genetic risk of the common variants into a so-called polygenic risk score, it is possible to assess, for example, the likely treatment response to triptans.
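The polygenic risk score mentioned above is additive arithmetic: each variant contributes its effect-size weight multiplied by the number of risk alleles carried. A minimal sketch for illustration only; the variant identifiers and weights below are invented, not real migraine loci.

```python
# Hypothetical effect-size weights per risk allele (invented for illustration).
VARIANT_WEIGHTS = {"rs0000001": 0.12, "rs0000002": 0.05, "rs0000003": 0.30}

def polygenic_risk_score(genotype: dict) -> float:
    """genotype maps variant ID -> number of risk alleles carried (0, 1, or 2)."""
    return sum(w * genotype.get(v, 0) for v, w in VARIANT_WEIGHTS.items())

person = {"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}
print(polygenic_risk_score(person))  # 2*0.12 + 0*0.05 + 1*0.30 = 0.54
```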
Triggers Migraine may be induced by triggers, with some reporting it as an influence in a minority of cases and others the majority. Many things such as fatigue, certain foods, alcohol, and weather have been labeled as triggers; however, the strength and significance of these relationships are uncertain. Most people with migraine report experiencing triggers. Symptoms may start up to 24 hours after a trigger. Physiological aspects Common triggers quoted are stress, hunger, and fatigue (these equally contribute to tension headaches). Psychological stress has been reported as a factor by 50–80% of people. Migraine has also been associated with post-traumatic stress disorder and abuse. Migraine episodes are more likely to occur around menstruation. Other hormonal influences, such as menarche, oral contraceptive use, pregnancy, perimenopause, and menopause, also play a role. These hormonal influences seem to play a greater role in migraine without aura. Migraine episodes typically do not occur during the second and third trimesters of pregnancy, or following menopause. Dietary aspects Between 12% and 60% of people report foods as triggers. There are many reports that tyramine – which is naturally present in chocolate, alcoholic beverages, most cheeses, processed meats, and other foods – can trigger migraine symptoms in some individuals. Monosodium glutamate (MSG) has been reported as a trigger for migraine, but a systematic review concluded that "a causal relationship between MSG and headache has not been proven... It would seem premature to conclude that the MSG present in food causes headache". Environmental aspects A 2009 review on potential triggers in the indoor and outdoor environment concluded that while there were insufficient studies to confirm environmental factors as causing migraine, "migraineurs worldwide consistently report similar environmental triggers ... such as barometric pressure change, bright sunlight, flickering lights, air quality and odors". Pathophysiology Migraine is believed by some to be primarily a neurological disorder, while others believe it to be a neurovascular disorder in which blood vessels play the key role, although the evidence does not fully support either view. Others believe both are likely important. One theory is related to increased excitability of the cerebral cortex and abnormal control of pain neurons in the trigeminal nucleus of the brainstem. Sensitization of trigeminal pathways is a key pathophysiological phenomenon in migraine. It is debatable whether sensitization starts in the periphery or in the brain. Aura Cortical spreading depression, or spreading depression according to Leão, is a burst of neuronal activity followed by a period of inactivity, which is seen in those with migraine with aura. There are a number of explanations for its occurrence, including activation of NMDA receptors leading to calcium entering the cell. After the burst of activity, the blood flow to the cerebral cortex in the area affected is decreased for two to six hours. It is believed that when depolarization travels down the underside of the brain, nerves that sense pain in the head and neck are triggered. Pain The exact mechanism of the head pain which occurs during a migraine episode is unknown. Some evidence supports a primary role for central nervous system structures (such as the brainstem and diencephalon), while other data support the role of peripheral activation (such as via the sensory nerves that surround blood vessels of the head and neck). The potential candidate vessels include dural arteries, pial arteries and extracranial arteries such as those of the scalp. The role of vasodilatation of the extracranial arteries, in particular, is believed to be significant. Neuromodulators Adenosine, a neuromodulator, may be involved. Released after the progressive cleavage of adenosine triphosphate (ATP), adenosine acts on adenosine receptors to put the body and brain in a low activity state by dilating blood vessels and slowing the heart rate, such as before and during the early stages of sleep. Adenosine levels have been found to be high during migraine attacks. Caffeine's role as an inhibitor of adenosine may explain its effect in reducing migraine. Low levels of the neurotransmitter serotonin, also known as 5-hydroxytryptamine (5-HT), are also believed to be involved. Calcitonin gene-related peptides (CGRPs) have been found to play a role in the pathogenesis of the pain associated with migraine, as their levels become elevated during an attack. Diagnosis The diagnosis of a migraine is based on signs and symptoms. Neuroimaging tests are not necessary to diagnose migraine, but may be used to find other causes of headaches in those whose examination and history do not confirm a migraine diagnosis. It is believed that a substantial number of people with the condition remain undiagnosed.
The diagnosis of migraine without aura, according to the International Headache Society, can be made according to the "5, 4, 3, 2, 1 criteria", which are as follows: five or more attacks (for migraine with aura, two attacks are sufficient for diagnosis); four hours to three days in duration; two or more of the following: unilateral (affecting one side of the head), pulsating, moderate or severe pain intensity, worsened by or causing avoidance of routine physical activity; and one or more of the following: nausea and/or vomiting, or sensitivity to both light (photophobia) and sound (phonophobia). If someone experiences two of the following: photophobia, nausea, or inability to work or study for a day, the diagnosis is more likely. In those with four out of five of the following: pulsating headache, duration of 4–72 hours, pain on one side of the head, nausea, or symptoms that interfere with the person's life, the probability that this is a migraine attack is 92%. In those with fewer than three of these symptoms, the probability is 17%.
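Because the "5, 4, 3, 2, 1" screen is a checklist, it can be written as a small rule-based function. The sketch below is illustrative only, not a diagnostic tool; the feature names are invented for this example.

```python
# Illustrative encoding of the "5, 4, 3, 2, 1" screen for migraine without aura.
PAIN_FEATURES = {"unilateral", "pulsating", "moderate_or_severe", "worse_with_activity"}
ASSOCIATED = {"nausea_or_vomiting", "photophobia_and_phonophobia"}

def meets_screen(num_attacks: int, duration_hours: float,
                 pain_features: set, associated: set) -> bool:
    return (num_attacks >= 5                             # five or more attacks
            and 4 <= duration_hours <= 72                # four hours to three days
            and len(pain_features & PAIN_FEATURES) >= 2  # two or more pain features
            and len(associated & ASSOCIATED) >= 1)       # one or more associated

print(meets_screen(6, 24, {"unilateral", "pulsating"}, {"nausea_or_vomiting"}))  # True
```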
Classification Migraine was first comprehensively classified in 1988. The International Headache Society updated their classification of headaches in 2004. A third version was published in 2018. According to this classification, migraine is a primary headache disorder along with tension-type headaches and cluster headaches, among others. Migraine is divided into six subclasses (some of which include further subdivisions): Migraine without aura, or "common migraine", involves migraine headaches that are not accompanied by aura. Migraine with aura, or "classic migraine", usually involves migraine headaches accompanied by aura. Less commonly, aura can occur without a headache, or with a nonmigraine headache. Two other varieties are familial hemiplegic migraine and sporadic hemiplegic migraine, in which a person has migraine with aura and with accompanying motor weakness. If a close relative has had the same condition, it is called "familial", otherwise it is called "sporadic". Another variety is basilar-type migraine, where a headache and aura are accompanied by difficulty speaking, world spinning, ringing in ears, or a number of other brainstem-related symptoms, but not motor weakness. This type was initially believed to be due to spasms of the basilar artery, the artery that supplies the brainstem. Now that this mechanism is not believed to be primary, the symptomatic term migraine with brainstem aura (MBA) is preferred. Retinal migraine (which is distinct from visual or optical migraine) involves migraine headaches accompanied by visual disturbances or even temporary blindness in one eye. Childhood periodic syndromes that are commonly precursors of migraine include cyclical vomiting (occasional intense periods of vomiting), abdominal migraine (abdominal pain, usually accompanied by nausea), and benign paroxysmal vertigo of childhood (occasional attacks of vertigo). Complications of migraine describe migraine headaches and/or auras that are unusually long or unusually frequent, or associated with a seizure or brain lesion. Probable migraine describes conditions that have some characteristics of migraine, but where there is not enough evidence to diagnose it as migraine with certainty (in the presence of concurrent medication overuse). Chronic migraine is a complication of migraine, and is a headache that fulfills diagnostic criteria for migraine headache and occurs over a greater time interval, specifically on 15 or more days per month for longer than 3 months. Abdominal migraine The diagnosis of abdominal migraine is controversial. Some evidence indicates that recurrent episodes of abdominal pain in the absence of a headache may be a type of migraine or are at least a precursor to migraine attacks. These episodes of pain may or may not follow a migraine-like prodrome and typically last minutes to hours. They often occur in those with either a personal or family history of typical migraine. Other syndromes that are believed to be precursors include cyclical vomiting syndrome and benign paroxysmal vertigo of childhood. Differential diagnosis Other conditions that can cause similar symptoms to a migraine headache include temporal arteritis, cluster headaches, acute glaucoma, meningitis and subarachnoid hemorrhage. Temporal arteritis typically occurs in people over 50 years old and presents with tenderness over the temple; cluster headache presents with one-sided nasal stuffiness, tears, and severe pain around the orbits; acute glaucoma is associated with vision problems; meningitis with fevers; and subarachnoid hemorrhage with a very fast onset. Tension headaches typically occur on both sides, are not pounding, and are less disabling. Those with stable headaches that meet criteria for migraine should not receive neuroimaging to look for other intracranial disease. This requires that other concerning findings, such as papilledema (swelling of the optic disc), are not present. People with migraine are not at an increased risk of having another cause for severe headaches. Management Management of migraine includes prevention of migraine attacks and rescue treatment. There are three main aspects of treatment: trigger avoidance, acute (abortive) control, and preventive (prophylactic) control. Modern approaches to migraine management emphasize personalized care that considers individual patient needs. Lifestyle modifications, such as managing triggers and addressing comorbidities, form the foundation of treatment. Behavioral techniques and supplements like magnesium and riboflavin can serve as supportive options for some individuals. Behavioral techniques that may reduce the frequency of migraines include cognitive behavioral therapy (CBT), relaxation training, and mindfulness-based therapies, although the evidence supporting these therapies is limited. Acute treatments, including NSAIDs and triptans, are most effective when administered early in an attack, while preventive medications are recommended for those experiencing frequent or severe migraines. Proven preventive options include beta blockers, topiramate, and CGRP inhibitors like erenumab and galcanezumab, which have demonstrated significant efficacy in clinical studies. The European Consensus Statement provides a framework for diagnosis and management, emphasizing the importance of accurate assessment, patient education, and consistent adherence to prescribed treatments. Innovative oral medications for migraine, such as gepants and ditans, are emerging as alternatives for patients who cannot use traditional options. Prognosis "Migraine exists on a continuum of different attack frequencies and associated levels of disability." For those with occasional, episodic migraine, a "proper combination of drugs for prevention and treatment of migraine attacks" can limit the disease's impact on patients' personal and professional lives. But fewer than half of people with migraine seek medical care, and more than half go undiagnosed and undertreated.
"Responsive prevention and treatment of migraine is incredibly important" because evidence shows "an increased sensitivity after each successive attack, eventually leading to chronic daily migraine in some individuals." Repeated migraine results in "reorganization of brain circuitry", causing "profound functional as well as structural changes in the brain." "One of the most important problems in clinical migraine is the progression from an intermittent, self-limited inconvenience to a life-changing disorder of chronic pain, sensory amplification, and autonomic and affective disruption. This progression, sometimes termed chronification in the migraine literature, is common, affecting 3% of migraineurs in a given year, such that 8% of migraineurs have chronic migraine in any given year." Brain imagery reveals that the electrophysiological changes seen during an attack become permanent in people with chronic migraine; "thus, from an electrophysiological point of view, chronic migraine indeed resembles a never-ending migraine attack." Severe migraine ranks in the highest category of disability, according to the World Health Organization, which uses objective metrics to determine disability burden for the authoritative annual Global Burden of Disease report. The report classifies severe migraine alongside severe depression, active psychosis, quadriplegia, and terminal-stage cancer. Migraine with aura appears to be a risk factor for ischemic stroke doubling the risk. Being a young adult, being female, using hormonal birth control, and smoking further increases this risk. There also appears to be an association with cervical artery dissection. Migraine without aura does not appear to be a factor. The relationship with heart problems is inconclusive with a single study supporting an association. Migraine does not appear to increase the risk of death from stroke or heart disease. Preventative therapy of migraine in those with migraine with aura may prevent associated strokes. People with migraine, particularly women, may develop higher than average numbers of white matter brain lesions of unclear significance. Epidemiology Migraine is common, with around 33% of women and 18% of men affected at some point in their lifetime. Onset can be at any age, but prevalence rises sharply around puberty, and remains high until declining after age 50. Before puberty, boys and girls are equally impacted, with around 5% of children experiencing migraine attacks. From puberty onwards, women experience migraine attacks at greater rates than men. From age 30 to 50, up to 4 times as many women experience migraine attacks as men., this is most pronounced in migraine without aura. Worldwide, migraine affects nearly 15% or approximately one billion people. In the United States, about 6% of men and 18% of women experience a migraine attack in a given year, with a lifetime risk of about 18% and 43% respectively. In Europe, migraine affects 12–28% of people at some point in their lives with about 6–15% of adult men and 14–35% of adult women getting at least one attack yearly. Rates of migraine are slightly lower in Asia and Africa than in Western countries. Chronic migraine occurs in approximately 1.4–2.2% of the population. During perimenopause symptoms often get worse before decreasing in severity. While symptoms resolve in about two-thirds of the elderly, in 3–10% they persist. History An early description consistent with migraine is contained in the Ebers Papyrus, written around 1500 BCE in ancient Egypt. 
The word migraine is from the Greek ἡμικρᾱνίᾱ (hēmikrāníā), 'pain in half of the head', from ἡμι- (hēmi-), 'half' and κρᾱνίον (krāníon), 'skull'. In 200 BCE, writings from the Hippocratic school of medicine described the visual aura that can precede the headache and a partial relief occurring through vomiting. A second-century description by Aretaeus of Cappadocia divided headaches into three types: cephalalgia, cephalea, and heterocrania. Galen of Pergamon used the term hemicrania (half-head), from which the word migraine was eventually derived. He also proposed that the pain arose from the meninges and blood vessels of the head. Migraine was first divided into the two types now in use – migraine with aura (migraine ophthalmique) and migraine without aura (migraine vulgaire) – in 1887 by Louis Hyacinthe Thomas, a French librarian. The mystical visions of Hildegard von Bingen, which she described as "reflections of the living light", are consistent with the visual aura experienced during migraine attacks. Trepanation, the deliberate drilling of holes into a skull, was practiced as early as 7,000 BCE. While sometimes people survived, many would have died from the procedure due to infection. It was believed to work via "letting evil spirits escape". William Harvey recommended trepanation as a treatment for migraine in the 17th century. The association between trepanation and headaches in ancient history may simply be a myth or unfounded speculation that originated several centuries later. In 1913, the world-famous American physician William Osler misinterpreted the French anthropologist and physician Paul Broca's words about a set of children's skulls from the Neolithic age that he found during the 1870s. These skulls presented no evident signs of fractures that could justify this complex surgery for mere medical reasons. Trepanation was probably born of superstitions, to remove "confined demons" inside the head, or to create healing or fortune talismans with the bone fragments removed from the skulls of the patients. However, Osler wanted to make Broca's theory more palatable to his modern audiences, and explained that trepanation procedures were used for mild conditions such as "infantile convulsions headache and various cerebral diseases believed to be caused by confined demons." While many treatments for migraine have been attempted, it was not until 1868 that use of a substance which eventually turned out to be effective began. This substance was the fungus ergot, from which ergotamine was isolated in 1918 and first used to treat migraine in 1925. Methysergide was developed in 1959, and the first triptan, sumatriptan, was developed in 1988. During the 20th century, with better study design, effective preventive measures were found and confirmed. Society and culture Migraine is a significant source of both medical costs and lost productivity. It has been estimated that migraine is the most costly neurological disorder in the European Community, costing more than €27 billion per year. In the United States, direct costs have been estimated at $17 billion, while indirect costs – such as missed or decreased ability to work – are estimated at $15 billion. Nearly a tenth of the direct cost is due to the cost of triptans. In those who do attend work during a migraine attack, effectiveness is decreased by around a third. Negative impacts also frequently occur for a person's family.
Research Prevention mechanisms Transcranial magnetic stimulation shows promise, as does transcutaneous supraorbital nerve stimulation. There is preliminary evidence that a ketogenic diet may help prevent episodic and long-term migraine. Sex dependency Statistical data indicate that women may be more prone to migraine, with incidence three times higher among women than among men. The Society for Women's Health Research has also mentioned hormonal influences, mainly estrogen, as having a considerable role in provoking migraine pain. Studies and research related to the sex dependencies of migraine are still ongoing, and definitive conclusions have yet to be reached.
Biology and health sciences
Non-infectious disease
null
21061
https://en.wikipedia.org/wiki/Mica
Mica
Micas are a group of silicate minerals whose outstanding physical characteristic is that individual mica crystals can easily be split into fragile elastic plates. This characteristic is described as perfect basal cleavage. Mica is common in igneous and metamorphic rock and is occasionally found as small flakes in sedimentary rock. It is particularly prominent in many granites, pegmatites, and schists, and "books" (large individual crystals) of mica several feet across have been found in some pegmatites. Micas are used in products such as drywalls, paints, and fillers, especially in parts for automobiles, roofing, and in electronics. The mineral is used in cosmetics and food to add "shimmer" or "frost". Properties and structure The mica group comprises 37 phyllosilicate minerals. All crystallize in the monoclinic system, with a tendency towards pseudohexagonal crystals, and are similar in structure but vary in chemical composition. Micas are translucent to opaque with a distinct vitreous or pearly luster, and different mica minerals display colors ranging from white to green or red to black. Deposits of mica tend to have a flaky or platy appearance. The crystal structure of mica is described as TOT-c, meaning that it is composed of parallel TOT layers weakly bonded to each other by cations (c). The TOT layers in turn consist of two tetrahedral sheets (T) strongly bonded to the two faces of a single octahedral sheet (O). The relatively weak ionic bonding between TOT layers gives mica its perfect basal cleavage. The tetrahedral sheets consist of silica tetrahedra, each silicon ion surrounded by four oxygen ions. In most micas, one in four silicon ions is replaced by an aluminium ion, while aluminium ions replace half the silicon ions in brittle micas. The tetrahedra share three of their four oxygen ions with neighbouring tetrahedra to produce a hexagonal sheet. The remaining oxygen ion (the apical oxygen ion) is available to bond with the octahedral sheet. The octahedral sheet can be dioctahedral or trioctahedral. A trioctahedral sheet has the structure of a sheet of the mineral brucite, with magnesium or ferrous iron being the most common cation. A dioctahedral sheet has the structure and (typically) the composition of a gibbsite sheet, with aluminium being the cation. Apical oxygens take the place of some of the hydroxyl ions that would be present in a brucite or gibbsite sheet, bonding the tetrahedral sheets tightly to the octahedral sheet. Tetrahedral sheets have a strong negative charge, since their bulk composition is AlSi3O10^5−. The octahedral sheet has a positive charge, since its bulk composition is Al2(OH)2^4+ (for a dioctahedral sheet with the apical sites vacant) or M3(OH)2^4+ (for a trioctahedral sheet with the apical sites vacant; M represents a divalent ion such as ferrous iron or magnesium). The combined TOT layer has a residual negative charge, since its bulk composition is Al2(AlSi3O10)(OH)2^− or M3(AlSi3O10)(OH)2^−. The remaining negative charge of the TOT layer is neutralized by the interlayer cations (typically sodium, potassium, or calcium ions). Because the hexagons in the T and O sheets are slightly different in size, the sheets are slightly distorted when they bond into a TOT layer. This breaks the hexagonal symmetry and reduces it to monoclinic symmetry. However, the original hexagonal symmetry is discernible in the pseudohexagonal character of mica crystals. The short-range order of K+ ions on cleaved muscovite mica has been resolved.
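The charge bookkeeping in the structural description above can be reproduced with a few lines of arithmetic over formal ionic charges. A minimal sketch, not part of the original text:

```python
# Formal ionic charges used in the sheet-charge bookkeeping above.
CHARGE = {"Si": +4, "Al": +3, "Mg": +2, "K": +1, "O": -2, "OH": -1}

def net_charge(composition: dict) -> int:
    """Sum formal charges for a composition given as {species: count}."""
    return sum(CHARGE[sp] * n for sp, n in composition.items())

print(net_charge({"Al": 1, "Si": 3, "O": 10}))           # -5: tetrahedral AlSi3O10
print(net_charge({"Al": 2, "OH": 2}))                    # +4: dioctahedral Al2(OH)2
print(net_charge({"Al": 3, "Si": 3, "O": 10, "OH": 2}))  # -1: TOT layer, balanced by K+
```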
Classification Chemically, micas can be given the general formula X2Y4–6Z8O20(OH, F)4, in which X is K, Na, or Ca or, less commonly, Ba, Rb, or Cs; Y is Al, Mg, or Fe or, less commonly, Mn, Cr, Ti, Li, etc.; Z is chiefly Si or Al, but may also include Fe3+ or Ti. Structurally, micas can be classed as dioctahedral (Y = 4) and trioctahedral (Y = 6). If the X ion is K or Na, the mica is a common mica, whereas if the X ion is Ca, the mica is classed as a brittle mica. Dioctahedral micas include the common micas muscovite and paragonite and the brittle mica margarite. Trioctahedral micas include the common micas biotite, lepidolite, phlogopite, and zinnwaldite, and the brittle mica clintonite. Interlayer-deficient micas Very fine-grained micas, which typically show more variation in ion and water content, are informally termed "clay micas". They include: hydro-muscovite, with H3O+ along with K in the X site; illite, with a K deficiency in the X site and correspondingly more Si in the Z site; and phengite, with Mg or Fe2+ substituting for Al in the Y site and a corresponding increase in Si in the Z site. Sericite is the name given to very fine, ragged grains and aggregates of white (colorless) micas. Occurrence and production Mica is widely distributed and occurs in igneous, metamorphic, and sedimentary regimes. Large crystals of mica used for various applications are typically mined from granitic pegmatites. The largest documented single crystal of mica (phlogopite) was found in the Lacey Mine, Ontario, Canada; it measured about 10 m × 4.3 m × 4.3 m and weighed about 330 tonnes. Similar-sized crystals were also found in Karelia, Russia. Scrap and flake mica is produced all over the world. In 2010, the major producers were Russia (100,000 tonnes), Finland (68,000 t), the United States (53,000 t), South Korea (50,000 t), France (20,000 t), and Canada (15,000 t). The total global production was 350,000 t, although no reliable data were available for China. Most sheet mica was produced in India (3,500 t) and Russia (1,500 t). Flake mica comes from several sources: from the metamorphic rock called schist, as a byproduct of processing feldspar and kaolin resources, from placer deposits, and from pegmatites. Sheet mica is considerably less abundant than flake and scrap mica, and is occasionally recovered from mining scrap and flake mica. The most important sources of sheet mica are pegmatite deposits. Sheet mica prices vary with grade and can range from less than $1 per kilogram for low-quality mica to more than $2,000 per kilogram for the highest quality. In Madagascar and India, it is also mined artisanally, in poor working conditions and with the help of child labour. Uses The commercially important micas are muscovite and phlogopite, which are used in a variety of applications. Useful properties Mica's value is based on its unique physical properties: the crystalline structure of mica forms layers that can be split or delaminated into thin sheets, usually causing foliation in rocks. These sheets are chemically inert, dielectric, elastic, flexible, hydrophilic, insulating, lightweight, platy, reflective, refractive, resilient, and range in opacity from transparent to opaque. Mica is stable when exposed to electricity, light, moisture, and extreme temperatures.
It has superior electrical properties as an insulator and as a dielectric, and can support an electrostatic field while dissipating minimal energy in the form of heat; it can be split very thin (0.025 to 0.125 millimeters or thinner) while maintaining its electrical properties, has a high dielectric breakdown strength, is thermally stable to about 500 °C, and is resistant to corona discharge. Muscovite, the principal mica used by the electrical industry, is used in capacitors that are ideal for high-frequency and radio-frequency applications. Phlogopite mica remains stable at higher temperatures (to about 900 °C) and is used in applications in which a combination of high-heat stability and electrical properties is required. Muscovite and phlogopite are used in sheet and ground forms. Ground mica The leading use of dry-ground mica in the US is in joint compound for filling and finishing seams and blemishes in gypsum wallboard (drywall). The mica acts as a filler and extender, provides a smooth consistency, improves the workability of the compound, and provides resistance to cracking. In 2008, joint compounds accounted for 54% of dry-ground mica consumption. In the paint industry, ground mica is used as a pigment extender that also facilitates suspension, reduces chalking, prevents shrinking and shearing of the paint film, increases the resistance of the paint film to water penetration and weathering, and brightens the tone of colored pigments. Mica also promotes paint adhesion in aqueous and oleoresinous formulations. Consumption of dry-ground mica in paint, the second-ranked use, accounted for 22% of the dry-ground mica used in 2008. Ground mica is used in the well-drilling industry as an additive to drilling fluids. The coarsely ground mica flakes help prevent the loss of circulation by sealing porous sections of the drill hole. Well-drilling muds accounted for 15% of dry-ground mica use in 2008. The plastics industry used dry-ground mica as an extender and filler, especially in parts for automobiles, as lightweight insulation to suppress sound and vibration. Mica is used in plastic automobile fascias and fenders as a reinforcing material, providing improved mechanical properties and increased dimensional stability, stiffness, and strength. Mica-reinforced plastics also have high-heat dimensional stability, reduced warpage, and the best surface properties of any filled plastic composite. In 2008, consumption of dry-ground mica in plastic applications accounted for 2% of the market. The rubber industry used ground mica as an inert filler and mold-release compound in the manufacture of molded rubber products such as tires and roofing. The platy texture acts as an anti-blocking, anti-sticking agent. Rubber mold lubricant accounted for 1.5% of the dry-ground mica used in 2008. As a rubber additive, mica reduces gas permeation and improves resiliency. Dry-ground mica is used in the production of rolled roofing and asphalt shingles, where it serves as a surface coating to prevent sticking of adjacent surfaces. The coating is not absorbed by freshly manufactured roofing because mica's platy structure is unaffected by the acid in asphalt or by weather conditions. Mica is used in decorative coatings on wallpaper, concrete, stucco, and tile surfaces. It also is used as an ingredient in flux coatings on welding rods, in some special greases, and as coatings for core and mold-release compounds, facing agents, and mold washes in foundry applications.
Dry-ground phlogopite mica is used in automotive brake linings and clutch plates to reduce noise and vibration (as an asbestos substitute); as sound-absorbing insulation for coatings and polymer systems; in reinforcing additives for polymers to increase strength and stiffness and to improve stability to heat, chemicals, and ultraviolet (UV) radiation; in heat shields and temperature insulation; as an industrial coating additive to decrease the permeability of moisture and hydrocarbons; and in polar polymer formulations to increase the strength of epoxies, nylons, and polyesters. Paints and cosmetics Wet-ground mica, which retains the brilliance of its cleavage faces, is used primarily in pearlescent paints by the automotive industry. Many metallic-looking pigments are composed of a substrate of mica coated with another mineral, usually titanium dioxide (TiO2). The resultant pigment produces a reflective color depending on the thickness of the coating. These products are used to produce automobile paint, shimmery plastic containers, and high-quality inks used in advertising and security applications. In the cosmetics industry, its reflective and refractive properties make mica an important ingredient in blushes, eye liner, eye shadow, foundation, hair and body glitter, lipstick, lip gloss, mascara, moisturizing lotions, and nail polish. Some brands of toothpaste include powdered white mica. This acts as a mild abrasive to aid the polishing of the tooth surface and also adds a cosmetically pleasing, glittery shimmer to the paste. Mica is added to latex balloons to provide a colored, shiny surface. Built-up mica Muscovite and phlogopite splittings can be fabricated into various built-up mica products, also known as micanite. Produced by mechanized or hand setting of overlapping splittings and alternate layers of binders and splittings, built-up mica is used primarily as an electrical insulation material. Mica insulation is used in high-temperature and fire-resistant power cables in aluminium plants, blast furnaces, critical wiring circuits (for example, defence systems, fire and security alarm systems, and surveillance systems), heaters and boilers, lumber kilns, metal smelters, and tanks and furnace wiring. Specific high-temperature mica-insulated wires and cables are rated to work for up to 15 minutes in molten aluminium, glass, and steel. Major products are bonding materials; flexible, heater, molding, and segment plates; mica paper; and tape. Flexible plate is used in electric motor and generator armatures, field coil insulation, and magnet and commutator core insulation. Mica consumption in flexible plates was about 21 tonnes in 2008 in the US. A heater plate is used where high-temperature insulation is required. A molding plate is sheet mica from which V-rings are cut and stamped for use in insulating the copper segments from the steel shaft ends of a commutator. Molding plate is also fabricated into tubes and rings for insulation in armatures, motor starters, and transformers. Segment plate acts as insulation between the copper commutator segments of direct-current universal motors and generators. Phlogopite built-up mica is preferred because it wears at the same rate as the copper segments. Although muscovite has a greater resistance to wear, it causes uneven ridges that may interfere with the operation of a motor or generator. Consumption of segment plates was about 149 t in 2008 in the US.
Some types of built-up mica have bonded splittings reinforced with cloth, glass, linen, muslin, plastic, silk, or special paper. These products are very flexible and are produced in wide, continuous sheets that are shipped rolled, cut into ribbons or tapes, or trimmed to specified dimensions. Built-up mica products may also be corrugated or reinforced by multiple layering. In 2008, about 351 t of built-up mica was consumed in the US, mostly for segment plates (42%) and molding plates (19%). Sheet mica Sheet mica is a versatile and durable material widely used in electrical and thermal insulation applications. It exhibits excellent electrical properties, heat resistance, and chemical stability. Technical-grade sheet mica is used in electrical components, electronics, atomic force microscopy, and as window sheets. Other uses include diaphragms for oxygen-breathing equipment, marker dials for navigation compasses, optical filters, pyrometers, thermal regulators, stove and kerosene heater windows, radiation aperture covers for microwave ovens, and micathermic heater elements. Mica is birefringent and is therefore commonly used to make quarter- and half-wave plates. Specialized applications for sheet mica are found in aerospace components in air-, ground-, and sea-launched missile systems, laser devices, medical electronics, and radar systems. Mica is mechanically stable in micrometer-thin sheets which are relatively transparent to radiation (such as alpha particles) while being impervious to most gases. It is therefore used as a window on radiation detectors such as Geiger–Müller tubes. In 2008, mica splittings represented the largest part of the sheet mica industry in the United States. Consumption of muscovite and phlogopite splittings was about 308 t in 2008. Muscovite splittings from India accounted for essentially all US consumption, with the remainder imported primarily from Madagascar. Small squared pieces of sheet mica are also used in the traditional Japanese Kōdō ceremony to burn incense: a burning piece of coal is placed inside a cone made of white ash. The sheet of mica is placed on top, acting as a separator between the heat source and the incense, to spread the fragrance without burning it. Electrical and electronic Sheet mica is used principally in the electronic and electrical industries. Its usefulness in these applications is derived from its unique electrical and thermal properties and its mechanical properties, which allow it to be cut, punched, stamped, and machined to close tolerances. Specifically, mica is unusual in that it is a good electrical insulator at the same time as being a good thermal conductor. The leading use of block mica is as an electrical insulator in electronic equipment. High-quality block mica is processed to line the gauge glasses of high-pressure steam boilers because of its flexibility, transparency, and resistance to heat and chemical attack. Only high-quality muscovite film mica, which is variously called India ruby mica or ruby muscovite mica, is used as a dielectric in capacitors. The highest quality mica film is used to manufacture capacitors for calibration standards. The next lower grade is used in transmitting capacitors. Receiving capacitors use a slightly lower grade of high-quality muscovite. Mica sheets are used to provide structure for heating wire (such as Kanthal or Nichrome) in heating elements and can withstand high operating temperatures.
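The quarter- and half-wave plates mentioned above exploit mica's birefringence; the required sheet thickness follows from standard waveplate optics (the numerical values below are illustrative assumptions, not a specification for any particular mica grade):

```latex
\Gamma = \frac{2\pi\,\Delta n\,d}{\lambda},
\qquad
d_{\lambda/4} = \frac{\lambda}{4\,\Delta n}
\approx \frac{560\,\mathrm{nm}}{4 \times 0.0045}
\approx 31\,\mathrm{\mu m}
```

Here Γ is the retardance, Δn the birefringence in the cleavage plane, d the sheet thickness, and λ the wavelength; the ease of cleaving mica to a uniform thickness of a few tens of micrometers is what makes it practical for this use.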
Single-ended, self-starting lamps are insulated with a mica disc and contained in a borosilicate glass gas-discharge tube (arc tube) and a metal cap. These include the sodium-vapor lamps used in street lighting. Atomic force microscopy Another use of mica is as a substrate in the production of ultra-flat, thin-film surfaces, e.g. gold surfaces. Although the deposited film surface is still rough due to deposition kinetics, the back side of the film at the mica–film interface is ultra-flat once the film is removed from the substrate. Freshly cleaved mica surfaces have been used as clean imaging substrates in atomic force microscopy, enabling, for example, the imaging of bismuth films, plasma glycoproteins, membrane bilayers, and DNA molecules. Peepholes Thin transparent sheets of mica were used for peepholes in boilers, lanterns, stoves, and kerosene heaters because they were less likely to shatter than glass when exposed to extreme temperature gradients. Such peepholes were also fitted in horse-drawn carriages and early 20th-century cars, where they were called isinglass curtains. Etymology The word mica is derived from the Latin word mica, meaning "a crumb", and probably influenced by micare, "to glitter". Early history Human use of mica dates back to prehistoric times. Mica was known to ancient Indian, Egyptian, Greek, Roman, and Chinese civilizations, as well as the Aztec civilization of the New World. The earliest use of mica has been found in cave paintings created during the Upper Paleolithic period (40,000 BC to 10,000 BC). The first hues were red (iron oxide, hematite, or red ochre) and black (manganese dioxide, pyrolusite), though black from juniper or pine carbons has also been discovered. White from kaolin or mica was used occasionally. A few kilometers northeast of Mexico City stands the ancient site of Teotihuacan. Mica was found in the noble palace complex known as the "Viking Group" during an excavation led by Pedro Armillas between 1942 and 1944. Later, a second deposit was located in the Xalla Complex, another palatial structure east of the Street of the Dead. A claim that mica was found within the Pyramid of the Sun originates with Peter Tompkins's book Mysteries of the Mexican Pyramids, but it has not been verified. Natural mica was, and still is, used by the Taos and Picuris Pueblo Indians in north-central New Mexico to make pottery. The pottery is made from weathered Precambrian mica schist and has flecks of mica throughout the vessels. Tewa Pueblo pottery is made by coating the clay with mica to provide a dense, glittery micaceous finish over the entire object. Mica flakes (called abrak in Urdu and written as ابرک) are also used in Pakistan to embellish women's summer clothes, especially dupattas (long lightweight scarves, often colorful and matching the dress). Thin mica flakes are added to a hot starch-water solution, and the dupatta is dipped in this mixture for 3–5 minutes. Then it is hung to air-dry. Mica powder Throughout the ages, fine powders of mica have been used for various purposes, including decoration. Powdered mica glitter is used to decorate traditional water clay pots in India, Pakistan, and Bangladesh; it is also used on traditional Pueblo pottery, though not restricted to water pots in this case. The gulal and abir (colored powders) used by North Indian Hindus during the festive season of Holi contain fine crystals of mica to create a sparkling effect.
The Padmanabhapuram Palace, near Trivandrum in India, has colored mica windows. Mica powder is also used as a decoration in traditional Japanese woodblock printmaking: in the kirazuri technique, it is applied to wet ink with gelatin as a thickener, and once dry it sparkles and reflects light. Earlier examples are found among paper decorations, with a high point in the Nishi Honganji Collection of Thirty-Six Poets, codices of illuminated manuscripts produced in and after 1112 CE. For metallic glitter, ukiyo-e prints employed a very thick mica solution, with or without color pigments, stencilled onto details such as hairpins, sword blades, or fish scales. The soil around Nishio in central Japan is rich in mica deposits, which were already being mined in the Nara period. Yatsuomote ware is a type of local Japanese pottery from the area. After an incident at Mount Yatsuomote, a small bell was offered to soothe the kami. Katō Kumazō started a local tradition in which small ceramic zodiac bells (きらら鈴) were made from local mica kneaded into the clay; after firing in the kiln, such a bell makes a pleasing sound when rung. Medicine Ayurveda, the Hindu system of ancient medicine prevalent in India, includes the purification and processing of mica in preparing Abhraka bhasma, which is claimed as a treatment for diseases of the respiratory and digestive tracts. Health impact Mica dust in the workplace is regarded as a hazardous substance for respiratory exposure above certain concentrations. United States The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for mica exposure in the workplace at 20 million particles per cubic foot (706,720,000 particles per cubic meter) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 3 mg/m3 respirable exposure over an 8-hour workday. At levels of 1,500 mg/m3, mica is immediately dangerous to life and health. Substitutes Some lightweight aggregates, such as diatomite, perlite, and vermiculite, may be substituted for ground mica when used as filler. Ground synthetic fluorophlogopite, a fluorine-rich mica, may replace natural ground mica for uses that require the thermal and electrical properties of mica. Many materials can be substituted for mica in numerous electrical, electronic, and insulation uses. Substitutes include acrylate polymers, cellulose acetate, fiberglass, fishpaper, nylon, phenolics, polycarbonate, polyester, styrene, vinyl-PVC, and vulcanized fiber. Mica paper made from scrap mica can be substituted for sheet mica in electrical and insulation applications.
Physical sciences
Mineralogy
null
21062
https://en.wikipedia.org/wiki/Muscovite
Muscovite
Muscovite (also known as common mica, isinglass, or potash mica) is a hydrated phyllosilicate mineral of aluminium and potassium with formula KAl2(AlSi3O10)(F,OH)2, or (KF)2(Al2O3)3(SiO2)6(H2O). It has a highly perfect basal cleavage yielding remarkably thin laminae (sheets) which are often highly elastic. Sheets of muscovite 5 meters × 3 meters (16.5 feet × 10 feet) have been found in Nellore, India. Muscovite has a Mohs hardness of 2–2.25 parallel to the [001] face, 4 perpendicular to [001], and a specific gravity of 2.76–3. It can be colorless or tinted through grays, violet, or red, and can be transparent or translucent. It is anisotropic and has high birefringence. Its crystal system is monoclinic. The green, chromium-rich variety is called fuchsite; mariposite is also a chromium-rich type of muscovite. Muscovite is the most common mica, found in granites, pegmatites, gneisses, and schists, and in contact metamorphic rocks or as a secondary mineral resulting from the alteration of topaz, feldspar, kyanite, etc. It is characteristic of peraluminous rock, in which the content of aluminium is relatively high. In pegmatites, it is often found in immense sheets that are commercially valuable. Muscovite is in demand for the manufacture of fireproofing and insulating materials and to some extent as a lubricant. Naming The name muscovite comes from Muscovy-glass, a name given to the mineral in Elizabethan England due to its use in medieval Russia (Muscovy) as a cheaper alternative to glass in windows. This usage became widely known in England during the sixteenth century, with its first mention appearing in letters by George Turberville, the secretary of England's ambassador to the Russian tsar Ivan the Terrible, in 1568. Distinguishing characteristics Micas are distinguished from other minerals by their pseudohexagonal crystal shape and their perfect cleavage, which allows the crystals to be pulled apart into very thin elastic sheets. Pyrophyllite and talc are softer than micas and have a greasy feel, while chlorite is green in color and its cleavage sheets are inelastic. The other common mica mineral, biotite, is almost always much darker in color than muscovite. Paragonite can be difficult to distinguish from muscovite but is much less common, though it is likely mistaken for muscovite often enough that it may be more common than is generally appreciated. Muscovite mica from Brazil is red due to trivalent manganese (Mn3+). Composition and structure Like all mica minerals, muscovite is a phyllosilicate (sheet silicate) mineral with a TOT-c structure. In other words, a crystal of muscovite consists of layers (TOT) bonded to each other by potassium cations (c). Each layer is composed of three sheets. The outer sheets ('T' or tetrahedral sheets) consist of silicon-oxygen tetrahedra and aluminium-oxygen tetrahedra, with three of the oxygen anions of each tetrahedron shared with neighboring tetrahedra to form a hexagonal sheet. The fourth oxygen anion in each tetrahedral sheet is called an apical oxygen anion. There are three silicon cations for each aluminium cation, but the arrangement of aluminium and silicon cations is largely disordered. The middle octahedral (O) sheet consists of aluminium cations that are each surrounded by six oxygen or hydroxide anions forming an octahedron, with the octahedra sharing anions to form a hexagonal sheet similar to the tetrahedral sheets.
The apical oxygen anions of the outer T sheets face inwards and are shared by the octahedral sheet, binding the sheets firmly together. The relatively strong binding between oxygen anions and aluminium and silicon cations within a layer, compared with the weaker binding of potassium cations between layers, gives muscovite its perfect basal cleavage. In muscovite, alternate layers are slightly offset from each other, so that the structure repeats every two layers. This is called the 2M1 polytype of the general mica structure. The formula for muscovite is typically given as KAl2(AlSi3O10)(OH)2, but it is common for small amounts of other elements to substitute for the main constituents. Alkali metals such as sodium, rubidium, and caesium substitute for potassium; magnesium, iron, lithium, chromium, titanium, or vanadium can substitute for aluminium in the octahedral sheet; fluorine or chlorine can substitute for hydroxide; and the ratio of aluminium to silicon in the tetrahedral sheets can change to maintain charge balance where necessary (as when magnesium cations, with a charge of +2, substitute for aluminium ions, with a charge of +3). Up to 10% of the potassium may be replaced by sodium, and up to 20% of the hydroxide by fluorine. Chlorine rarely replaces more than 1% of the hydroxide. Muscovite in which the mole fraction of silicon is greater than that of aluminium, and in which magnesium or iron replaces some of the aluminium to maintain charge balance, is called phengite. Chromium-rich and vanadium-rich muscovite are known respectively as fuchsite and roscoelite. Uses Muscovite can be cleaved into very thin transparent sheets that can substitute for glass, particularly for high-temperature applications such as industrial furnace or oven windows. It is also used in the manufacture of a wide variety of electronics and as a filler in paints, plastic, and wallboard. It lends a silky luster to wallpaper. It is also used in tire manufacture as a mold-release agent, in drilling mud, and in various cosmetics for its luster.
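The coupled substitution toward phengite described above preserves the layer charge; in standard mineralogical notation it is the Tschermak exchange (shown here for orientation, not as part of the original article):

```latex
\mathrm{Al^{3+}_{(oct)}} + \mathrm{Al^{3+}_{(tet)}}
\;\rightleftharpoons\;
\mathrm{(Mg,Fe)^{2+}_{(oct)}} + \mathrm{Si^{4+}_{(tet)}}
```

Each divalent cation entering the octahedral sheet is paired with one extra Si4+ in the tetrahedral sheet, so both sides of the exchange carry the same total charge of +6 and the TOT layer charge is unchanged.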
Physical sciences
Silicate minerals
Earth science
21113
https://en.wikipedia.org/wiki/Napster
Napster
Napster was an American peer-to-peer (P2P) file sharing application primarily associated with digital audio file distribution. Founded by Shawn Fanning and Sean Parker, the platform originally launched on June 1, 1999. Audio shared on the service was typically encoded in the MP3 format. As the software became popular, the company encountered legal difficulties over copyright infringement. Napster ceased operations in 2001 after losing multiple lawsuits and filed for bankruptcy in June 2002. The P2P model employed by Napster involved a centralized database that indexed a complete list of all songs being shared from connected clients. While effective, the service could not function without the central database, which was hosted by Napster and eventually forced to shut down. Following Napster's demise, alternative decentralized methods of P2P file-sharing emerged, including LimeWire, Gnutella, Freenet, FastTrack, and BitTorrent. Napster's assets were eventually acquired by Roxio, and it re-emerged as an online music store commonly known as Napster 2.0. Best Buy later purchased the service and merged it with its Rhapsody streaming service on December 1, 2011. In 2016, the original branding was restored when Rhapsody was renamed Napster. In 2022, the Napster streaming service was acquired by two Web3 companies, Hivemind and Algorand. Jon Vlassopulos was appointed as CEO. Origin Napster was founded by Shawn Fanning and Sean Parker. Initially, Napster was envisioned by Fanning as an independent peer-to-peer file sharing service. The service operated between June 1999 and July 2001. Its technology enabled people to easily share their MP3 files with other participants. Although the original service was shut down by court order, the Napster brand survived after the company's assets were liquidated and purchased by other companies through bankruptcy proceedings. History Although there were already networks that facilitated the distribution of files across the Internet, such as IRC, Hotline, and Usenet, Napster specialized in MP3 files of music and had a user-friendly interface. At its peak, the Napster service had about 80 million registered users. Napster made it relatively easy for music enthusiasts to download copies of songs that were otherwise difficult to obtain, such as older songs, unreleased recordings, studio recordings, and songs from concert bootleg recordings. Napster paved the way for streaming media services and transformed music into a public good for a brief time. High-speed networks in college dormitories became overloaded, with as much as 61% of external network traffic consisting of MP3 file transfers. Many colleges blocked its use for this reason, even before concerns about liability for facilitating copyright violations on campus. Macintosh version The service and software program began as Windows-only. However, in 2000, Black Hole Media wrote a Macintosh client called Macster. Macster was later bought by Napster and designated the official Mac Napster client ("Napster for the Mac"), at which point the Macster name was discontinued. Even before the acquisition of Macster, the Macintosh community had a variety of independently developed Napster clients. The most notable was the open source client called MacStar, released by Squirrel Software in early 2000, and Rapster, released by Overcaster Family in Brazil. 
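The centralized-index model described at the start of this article is simple enough to sketch in a few lines of code. The following is a toy illustration with hypothetical names; the real protocol additionally handled logins, file transfers, and chat, none of which is modeled here:

```python
from collections import defaultdict

class CentralIndex:
    """Toy model of Napster's central database: it maps song titles to the
    peers currently sharing them, but never stores or transfers audio itself."""

    def __init__(self):
        self.index = defaultdict(set)  # song title -> set of peer addresses

    def register(self, peer, songs):
        # A client connects and announces the MP3 files it is sharing.
        for song in songs:
            self.index[song].add(peer)

    def unregister(self, peer):
        # When a client disconnects, its entries must be purged.
        for peers in self.index.values():
            peers.discard(peer)

    def search(self, song):
        # Searches hit the central server; downloads then go peer-to-peer.
        return sorted(self.index.get(song, set()))

# Usage: two peers announce their files, a third searches, then downloads
# directly from whichever peer the index returned.
index = CentralIndex()
index.register("peer-a:6699", ["i disappear.mp3", "kid a.mp3"])
index.register("peer-b:6699", ["kid a.mp3"])
print(index.search("kid a.mp3"))  # ['peer-a:6699', 'peer-b:6699']
```

The sketch makes the single point of failure obvious: if the index host disappears, search, and with it the whole network, stops working. This is why the injunction against Napster's server was sufficient to shut the service down, and why the later systems named above decentralized the index.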
The release of MacStar's source code paved the way for third-party Napster clients across all computing platforms, giving users advertisement-free music distribution options. Legal challenges Heavy metal band Metallica discovered that a demo of their song "I Disappear" had been circulating across the network before it was released. This led to it being played on several radio stations across the United States, which alerted Metallica to the fact that their entire back catalogue of studio material was also available. On April 13, 2000, they filed a lawsuit against Napster. A month later, rapper and producer Dr. Dre, who shared a litigator and legal firm with Metallica, filed a similar lawsuit after Napster refused his written request to remove his works from its service. Separately, Metallica and Dr. Dre later delivered to Napster thousands of usernames of people who they believed were pirating their songs. In March 2001, Napster settled both suits, after being shut down by the Ninth Circuit Court of Appeals in a separate lawsuit brought by several major record labels (see below). In 2000, Madonna's single "Music" leaked onto the web and Napster prior to its commercial release, causing widespread media coverage. Verified Napster use peaked at 26.4 million users worldwide in February 2001. In 2000, the American musical recording company A&M Records, along with several other recording companies, through the Recording Industry Association of America (RIAA), sued Napster (A&M Records, Inc. v. Napster, Inc.) on grounds of contributory and vicarious copyright infringement under the US Digital Millennium Copyright Act (DMCA). Napster faced the following allegations from the music industry: that its users were directly violating the plaintiffs' copyrights; that Napster was responsible for contributory infringement of the plaintiffs' copyrights; and that Napster was responsible for vicarious infringement of the plaintiffs' copyrights. Napster lost the case in the District Court but then appealed to the U.S. Court of Appeals for the Ninth Circuit. Although it was clear that Napster could have commercially significant non-infringing uses, the Ninth Circuit upheld the District Court's decision. Immediately after, the District Court ordered Napster to keep track of the activities of its network and to restrict access to infringing material when informed of that material's location. Napster was not able to comply and thus had to close down its service in July 2001. In 2002, Napster announced that it had filed for bankruptcy and sold its assets to a third party. In a 2018 Rolling Stone article, Kirk Hammett of Metallica upheld the band's opinion that suing Napster was the "right" thing to do. Promotional power Along with the accusations that Napster was hurting the sales of the record industry, some felt just the opposite: that file trading on Napster stimulated, rather than hurt, sales. Some evidence may have come in July 2000, when tracks from English rock band Radiohead's album Kid A found their way to Napster three weeks before the album's release. Unlike Madonna, Dr. Dre, or Metallica, Radiohead had never hit the top 20 in the US. Furthermore, Kid A was an album without any singles released, and it received relatively little radio airplay. By the time of the album's release, the album was estimated to have been downloaded for free by millions of people worldwide, and in October 2000 Kid A captured the number one spot on the Billboard 200 sales chart in its debut week.
According to Richard Menta of MP3 Newswire, the effect of Napster in this instance was isolated from other elements that could be credited for driving sales, and the album's unexpected success suggested that Napster was a good promotional tool for music. Since 2000, many musical artists, particularly those not signed to major labels and without access to traditional mass media outlets such as radio and television, have said that Napster and successive Internet file-sharing networks have helped get their music heard, spread word of mouth, and may have improved their sales in the long term. One such musician to publicly defend Napster as a promotional tool for independent artists was DJ Xealot, who became directly involved in the 2000 A&M Records Lawsuit. Chuck D from Public Enemy also came out and publicly supported Napster. Lawsuit Napster's facilitation of the transfer of copyrighted material was objected to by the Recording Industry Association of America (RIAA), which filed a lawsuit against the service on December 6, 1999. The legal action, while intended to shut down the service, brought it a great deal of publicity and an influx of millions of new users, many of whom were college students. After a failed appeal to the Ninth Circuit Court, an injunction was issued on March 5, 2001, ordering Napster to prevent the trading of copyrighted music on its network. Lawrence Lessig claimed, however, that this decision made little sense from the perspective of copyright protection: "When Napster told the district court that it had developed a technology to block the transfer of 99.4 percent of identified infringing material, the district court told counsel for Napster 99.4 percent was not good enough. Napster had to push the infringements 'down to zero.' If 99.4 percent is not good enough," Lessig concluded, "then this is a war on file-sharing technologies, not a war on copyright infringement." Shutdown On July 11, 2001, Napster shut down its entire network to comply with the injunction. On September 24, 2001, the case was partially settled. Napster agreed to pay music creators and copyright owners a $26 million settlement for past, unauthorized uses of music, and as an advance against future licensing royalties of $10 million. To pay those fees, Napster attempted to convert its free service into a subscription system, and thus traffic to Napster was reduced. A prototype solution was tested in 2002: the Napster 3.0 Alpha, using the ".nap" secure file format from PlayMedia Systems and audio fingerprinting technology licensed from Relatable. Napster 3.0 was, according to many former Napster employees, ready to deploy, but it had significant trouble obtaining licenses to distribute major-label music. On May 17, 2002, Napster announced that its assets would be acquired by German media firm Bertelsmann for $85 million to transform Napster into an online music subscription service. The two companies had been collaborating since the middle of 2000 when Bertelsmann became the first major label to drop its copyright lawsuit against Napster. Pursuant to the terms of the acquisition agreement, on June 3 Napster filed for Chapter 11 protection under United States bankruptcy laws. On September 3, 2002, an American bankruptcy judge blocked the sale to Bertelsmann and forced Napster to liquidate its assets. Reuse of name Napster's brand and logos were acquired at a bankruptcy auction by Roxio which used them to re-brand the Pressplay music service as Napster 2.0. 
In September 2008, Napster was purchased by US electronics retailer Best Buy for US$121 million. On December 1, 2011, pursuant to a deal with Best Buy, Napster merged with Rhapsody, with Best Buy receiving a minority stake in Rhapsody. On July 14, 2016, Rhapsody phased out the Rhapsody brand in favor of Napster and has since branded its service internationally as Napster, expanding into other markets by providing on-demand music as a service to other brands, such as the iHeartRadio app and its All Access subscription service, which offers subscribers on-demand music as well as premium radio. On August 25, 2020, Napster was sold to the virtual reality concerts company MelodyVR. On May 10, 2022, Napster was sold to Hivemind and Algorand. The investor consortium also includes ATC Management, BH Digital, G20 Ventures, SkyBridge, RSE Ventures, Arrington Capital, Borderless Capital, and others. Media Several books document the experiences of people working at Napster, including Joseph Menn's "All the Rave: The Rise and Fall of Shawn Fanning's Napster"; John Alderman's "Sonic Boom: Napster, MP3, and the New Pioneers of Music"; and Steve Knopper's "Appetite for Self-Destruction: The Spectacular Crash of the Record Industry in the Digital Age". The 2003 film The Italian Job features Napster co-founder Shawn Fanning in a cameo as himself, lending credence to one character's fictional backstory as the original "Napster". The 2010 film The Social Network features Napster co-founder Sean Parker (played by Justin Timberlake) in the rise of the popular website Facebook. The 2013 film Downloaded is a documentary about sharing media on the Internet and includes the history of Napster. The 2024 film How Music Got Free, a documentary based on the non-fiction book of the same name, covers file sharing on the Internet, with mentions of Napster and other applications.
Technology
Internet
null
21120
https://en.wikipedia.org/wiki/Neuron
Neuron
A neuron, neurone, or nerve cell is an excitable cell that fires electric signals called action potentials across a neural network in the nervous system. They are located in the brain and spinal cord and help to receive and conduct impulses. Neurons communicate with other cells via synapses, which are specialized connections that commonly use minute amounts of chemical neurotransmitters to pass the electric signal from the presynaptic neuron to the target cell through the synaptic gap. Neurons are the main components of nervous tissue in all animals except sponges and placozoans. Plants and fungi do not have nerve cells. Molecular evidence suggests that the ability to generate electric signals first appeared in evolution some 700 to 800 million years ago, during the Tonian period. Predecessors of neurons were the peptidergic secretory cells. They eventually gained new gene modules which enabled cells to create post-synaptic scaffolds and ion channels that generate fast electrical signals. The ability to generate electric signals was a key innovation in the evolution of the nervous system. Neurons are typically classified into three types based on their function. Sensory neurons respond to stimuli such as touch, sound, or light that affect the cells of the sensory organs, and they send signals to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. When multiple neurons are functionally connected together, they form what is called a neural circuit. A neuron contains all the structures of other cells, such as a nucleus, mitochondria, and Golgi bodies, but has additional unique structures such as an axon and dendrites. The soma is a compact structure, and the axon and dendrites are filaments extruding from the soma. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock and travels for as far as 1 meter in humans or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axons. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated. Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to the dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite. The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to the maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma. In most cases, neurons are generated by neural stem cells during brain development and childhood. Neurogenesis largely ceases during adulthood in most areas of the brain.
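The all-or-nothing behavior described above is often approximated in computational neuroscience by simple threshold models. Below is a minimal leaky integrate-and-fire sketch; the model and all parameter values are illustrative conventions, not measurements from any particular neuron:

```python
def simulate_lif(input_current, dt=0.1e-3, tau=20e-3, r_m=10e6,
                 v_rest=-70e-3, v_thresh=-55e-3, v_reset=-75e-3):
    """Leaky integrate-and-fire neuron.

    The membrane voltage leaks toward the resting potential, is driven by
    the input current, and emits a spike whenever it crosses threshold,
    after which it is reset: the all-or-nothing action potential in
    miniature. All defaults are illustrative, not measured values.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Euler step of dV/dt = (-(V - V_rest) + R_m * I) / tau
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:                 # threshold crossing: fire a spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after the action potential
    return spike_times

# A constant 2 nA input for 100 ms yields a regular spike train.
spikes = simulate_lif([2e-9] * 1000)
print([round(t * 1e3, 1) for t in spikes])  # spike times in milliseconds
```

Stronger inputs drive the voltage to threshold sooner, so this toy neuron converts input intensity into firing rate, one simple version of the coding ideas discussed later in the article.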
Nervous system Neurons are the primary components of the nervous system, along with the glial cells that give them structural and metabolic support. The nervous system is made up of the central nervous system, which includes the brain and spinal cord, and the peripheral nervous system, which includes the autonomic, enteric and somatic nervous systems. In vertebrates, the majority of neurons belong to the central nervous system, but some reside in peripheral ganglia, and many sensory neurons are situated in sensory organs such as the retina and cochlea. Axons may bundle into nerve fascicles that make up the nerves in the peripheral nervous system (like strands of wire that make up a cable). In the central nervous system bundles of axons are called nerve tracts. Anatomy and histology Neurons are highly specialized for the processing and transmission of cellular signals. Given the diversity of functions performed in different parts of the nervous system, there is a wide variety in their shape, size, and electrochemical properties. For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter. The soma is the body of the neuron. As it contains the nucleus, most protein synthesis occurs here. The nucleus can range from 3 to 18 micrometers in diameter. The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure are referred to metaphorically as a dendritic tree. This is where the majority of input to the neuron occurs via the dendritic spine. The axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon primarily carries nerve signals away from the soma and carries some types of information back to it. Many neurons have only one axon, but this axon may—and usually will—undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock also has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon. In electrophysiological terms, it has the most negative threshold potential. While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons. The axon terminal is found at the end of the axon farthest from the soma and contains synapses. Synaptic boutons are specialized structures where neurotransmitter chemicals are released to communicate with target neurons. In addition to synaptic boutons at the axon terminal, a neuron may have en passant boutons, which are located along the length of the axon. The accepted view of the neuron attributes dedicated functions to its various anatomical components; however, dendrites and axons often act in ways contrary to their so-called main function. Axons and dendrites in the central nervous system are typically only about one micrometer thick, while some in the peripheral nervous system are much thicker. The soma is usually about 10–25 micrometers in diameter and often is not much larger than the cell nucleus it contains. The longest axon of a human motor neuron can be over a meter long, reaching from the base of the spine to the toes. Sensory neurons can have axons that run from the toes to the posterior column of the spinal cord, over 1.5 meters in adults. 
Giraffes have single axons several meters in length running along the entire length of their necks. Much of what is known about axonal function comes from studying the squid giant axon, an ideal experimental preparation because of its relatively immense size (0.5–1 millimeter thick, several centimeters long). Fully differentiated neurons are permanently postmitotic; however, stem cells present in the adult brain may regenerate functional neurons throughout the life of an organism (see neurogenesis). Astrocytes are star-shaped glial cells that have been observed to turn into neurons by virtue of their stem cell-like characteristic of pluripotency. Membrane Like all animal cells, the cell body of every neuron is enclosed by a plasma membrane, a bilayer of lipid molecules with many types of protein structures embedded in it. A lipid bilayer is a powerful electrical insulator, but in neurons, many of the protein structures embedded in the membrane are electrically active. These include ion channels that permit electrically charged ions to flow across the membrane and ion pumps that chemically transport ions from one side of the membrane to the other. Most ion channels are permeable only to specific types of ions. Some ion channels are voltage gated, meaning that they can be switched between open and closed states by altering the voltage difference across the membrane. Others are chemically gated, meaning that they can be switched between open and closed states by interactions with chemicals that diffuse through the extracellular fluid. The main ions involved are sodium, potassium, chloride, and calcium. The interactions between ion channels and ion pumps produce a voltage difference across the membrane, typically a bit less than 1/10 of a volt at baseline. This voltage has two functions: first, it provides a power source for an assortment of voltage-dependent protein machinery that is embedded in the membrane; second, it provides a basis for electrical signal transmission between different parts of the membrane. Histology and internal structure Numerous microscopic clumps called Nissl bodies (or Nissl substance) are seen when nerve cell bodies are stained with a basophilic ("base-loving") dye. These structures consist of rough endoplasmic reticulum and associated ribosomal RNA. Named after German psychiatrist and neuropathologist Franz Nissl (1860–1919), they are involved in protein synthesis, and their prominence can be explained by the fact that nerve cells are very metabolically active. Basophilic dyes such as aniline or (weakly) hematoxylin highlight negatively charged components, and so bind to the phosphate backbone of the ribosomal RNA. The cell body of a neuron is supported by a complex mesh of structural proteins called neurofilaments, which together with neurotubules (neuronal microtubules) are assembled into larger neurofibrils. Some neurons also contain pigment granules, such as neuromelanin (a brownish-black pigment that is a byproduct of the synthesis of catecholamines) and lipofuscin (a yellowish-brown pigment), both of which accumulate with age. Other structural proteins that are important for neuronal function are actin and the tubulin of microtubules. Class III β-tubulin is found almost exclusively in neurons. Actin is predominantly found at the tips of axons and dendrites during neuronal development. There the actin dynamics can be modulated via an interplay with microtubules. There are different internal structural characteristics between axons and dendrites.
Typical axons seldom contain ribosomes, except some in the initial segment. Dendrites contain granular endoplasmic reticulum or ribosomes, in diminishing amounts as the distance from the cell body increases. Classification Neurons vary in shape and size and can be classified by their morphology and function. The anatomist Camillo Golgi grouped neurons into two types: type I, with long axons used to move signals over long distances, and type II, with short axons, which can often be confused with dendrites. Type I cells can be further classified by the location of the soma. The basic morphology of type I neurons, represented by spinal motor neurons, consists of a cell body called the soma and a long thin axon covered by a myelin sheath. The dendritic tree wraps around the cell body and receives signals from other neurons. The end of the axon has branching axon terminals that release neurotransmitters into a gap called the synaptic cleft between the terminals and the dendrites of the next neuron. Structural classification Polarity Most neurons can be anatomically characterized as: Unipolar: a single process. Unipolar cells are exclusively sensory neurons. Their dendrites receive sensory information, sometimes directly from the stimulus itself. The cell bodies of unipolar neurons are always found in ganglia. Sensory reception is a peripheral function, so the cell body is in the periphery, though closer to the CNS in a ganglion. The axon projects from the dendrite endings, past the cell body in a ganglion, and into the central nervous system. Bipolar: one axon and one dendrite. They are found mainly in the olfactory epithelium and as part of the retina. Multipolar: one axon and two or more dendrites. Golgi I: neurons with long-projecting axonal processes; examples are pyramidal cells, Purkinje cells, and anterior horn cells. Golgi II: neurons whose axonal process projects locally; the best example is the granule cell. Anaxonic: neurons in which the axon cannot be distinguished from the dendrite(s). Pseudounipolar: one process which then serves as both an axon and a dendrite. Other Some unique neuronal types can be identified according to their location in the nervous system and distinct shape. Some examples are: basket cells, interneurons that form a dense plexus of terminals around the soma of target cells, found in the cortex and cerebellum; Betz cells, large motor neurons in the primary motor cortex; Lugaro cells, interneurons of the cerebellum; medium spiny neurons, most neurons in the corpus striatum; Purkinje cells, huge neurons in the cerebellum, a type of Golgi I multipolar neuron; pyramidal cells, neurons with triangular somata, a type of Golgi I neuron; rosehip cells, unique human inhibitory neurons that interconnect with pyramidal cells; Renshaw cells, neurons with both ends linked to alpha motor neurons; unipolar brush cells, interneurons with a unique dendrite ending in a brush-like tuft; granule cells, a type of Golgi II neuron; anterior horn cells, motor neurons located in the spinal cord; and spindle cells, interneurons that connect widely separated areas of the brain. Functional classification Direction Afferent neurons convey information from tissues and organs into the central nervous system and are also called sensory neurons. Efferent neurons (motor neurons) transmit signals from the central nervous system to effector cells. Interneurons connect neurons within specific regions of the central nervous system.
Afferent and efferent also refer generally to neurons that, respectively, bring information to or send information from the brain. Action on other neurons A neuron affects other neurons by releasing a neurotransmitter that binds to chemical receptors. The effect on the postsynaptic neuron is determined by the type of receptor that is activated, not by the presynaptic neuron or by the neurotransmitter. A neurotransmitter can be thought of as a key, and a receptor as a lock: the same neurotransmitter can activate multiple types of receptors. Receptors can be classified broadly as excitatory (causing an increase in firing rate), inhibitory (causing a decrease in firing rate), or modulatory (causing long-lasting effects not directly related to firing rate). The two most common (90%+) neurotransmitters in the brain, glutamate and GABA, have largely consistent actions. Glutamate acts on several types of receptors, with effects that are excitatory at ionotropic receptors and modulatory at metabotropic receptors. Similarly, GABA acts on several types of receptors, but all of them have inhibitory effects (in adult animals, at least). Because of this consistency, it is common for neuroscientists to refer to cells that release glutamate as "excitatory neurons", and cells that release GABA as "inhibitory neurons". Some other types of neurons have consistent effects, for example, "excitatory" motor neurons in the spinal cord that release acetylcholine, and "inhibitory" spinal neurons that release glycine. The distinction between excitatory and inhibitory neurotransmitters is not absolute. Rather, it depends on the class of chemical receptors present on the postsynaptic neuron. In principle, a single neuron, releasing a single neurotransmitter, can have excitatory effects on some targets, inhibitory effects on others, and modulatory effects on others still. For example, photoreceptor cells in the retina constantly release the neurotransmitter glutamate in the absence of light. So-called OFF bipolar cells are, like most neurons, excited by the released glutamate. However, neighboring target neurons called ON bipolar cells are instead inhibited by glutamate, because they lack typical ionotropic glutamate receptors and instead express a class of inhibitory metabotropic glutamate receptors. When light is present, the photoreceptors cease releasing glutamate, which relieves the ON bipolar cells from inhibition, activating them; this simultaneously removes the excitation from the OFF bipolar cells, silencing them. It is possible to identify the type of inhibitory effect a presynaptic neuron will have on a postsynaptic neuron, based on the proteins the presynaptic neuron expresses. Parvalbumin-expressing neurons typically dampen the output signal of the postsynaptic neuron in the visual cortex, whereas somatostatin-expressing neurons typically block dendritic inputs to the postsynaptic neuron. Discharge patterns Neurons have intrinsic electroresponsive properties, such as intrinsic transmembrane voltage oscillatory patterns, so they can be classified according to their electrophysiological characteristics: Tonic or regular spiking: some neurons are constantly (tonically) active, typically firing at a constant frequency; an example is the interneurons of the neostriatum. Phasic or bursting: neurons that fire in bursts are called phasic. Fast-spiking:
some neurons are notable for their high firing rates, for example, some types of cortical inhibitory interneurons, cells in the globus pallidus, and retinal ganglion cells. Neurotransmitter Neurotransmitters are chemical messengers passed from one neuron to another neuron or to a muscle cell or gland cell. Cholinergic neurons – acetylcholine. Acetylcholine is released from presynaptic neurons into the synaptic cleft. It acts as a ligand for both ligand-gated ion channels and metabotropic (GPCR) muscarinic receptors. Nicotinic receptors are pentameric ligand-gated ion channels composed of alpha and beta subunits that bind nicotine. Ligand binding opens the channel, causing an influx of Na+ and depolarization, and increases the probability of presynaptic neurotransmitter release. Acetylcholine is synthesized from choline and acetyl coenzyme A. Adrenergic neurons – noradrenaline. Noradrenaline (norepinephrine) is released from most postganglionic neurons in the sympathetic nervous system onto two sets of GPCRs: alpha adrenoceptors and beta adrenoceptors. Noradrenaline is one of the three common catecholamine neurotransmitters, and the most prevalent of them in the peripheral nervous system; as with other catecholamines, it is synthesized from tyrosine. GABAergic neurons – gamma-aminobutyric acid. GABA is one of the two main inhibitory neurotransmitters in the central nervous system (CNS), the other being glycine. GABA has a function analogous to that of ACh, gating anion channels that allow Cl− ions to enter the postsynaptic neuron. Cl− causes hyperpolarization within the neuron, decreasing the probability of an action potential firing as the voltage becomes more negative (for an action potential to fire, a positive voltage threshold must be reached). GABA is synthesized from glutamate by the enzyme glutamate decarboxylase. Glutamatergic neurons – glutamate. Glutamate is one of two primary excitatory amino acid neurotransmitters, along with aspartate. Glutamate receptors fall into four categories, three of which are ligand-gated ion channels and one of which is a G-protein-coupled receptor (often referred to as a GPCR). AMPA and kainate receptors function as cation channels permeable to Na+, mediating fast excitatory synaptic transmission. NMDA receptors are another type of cation channel, more permeable to Ca2+. The function of NMDA receptors depends on glycine binding as a co-agonist; NMDA receptors do not function without both ligands present. Metabotropic glutamate receptors (GPCRs) modulate synaptic transmission and postsynaptic excitability. Glutamate can cause excitotoxicity when blood flow to the brain is interrupted, resulting in brain damage. When blood flow is suppressed, glutamate is released from presynaptic neurons, causing greater NMDA and AMPA receptor activation than normal, leading to elevated Ca2+ and Na+ entry into the postsynaptic neuron and cell damage. Glutamate is synthesized from the amino acid glutamine by the enzyme glutaminase. Dopaminergic neurons – dopamine. Dopamine is a neurotransmitter that acts on D1-type (D1 and D5) Gs-coupled receptors, which increase cAMP and PKA, and D2-type (D2, D3, and D4) receptors, which activate Gi-coupled signaling that decreases cAMP and PKA. Dopamine is connected to mood and behavior and modulates both pre- and postsynaptic neurotransmission. Loss of dopamine neurons in the substantia nigra has been linked to Parkinson's disease. Dopamine is synthesized from the amino acid tyrosine.
Tyrosine is converted into levodopa (L-DOPA) by tyrosine hydroxylase, and levodopa is then converted into dopamine by aromatic amino acid decarboxylase. Serotonergic neurons—serotonin. Serotonin (5-hydroxytryptamine, 5-HT) can have excitatory or inhibitory effects. Of its four 5-HT receptor classes, three are GPCRs and one is a ligand-gated cation channel. Serotonin is synthesized from tryptophan by tryptophan hydroxylase and then converted by a decarboxylase. A lack of 5-HT at postsynaptic neurons has been linked to depression. Drugs that block the presynaptic serotonin transporter are used for treatment, such as Prozac and Zoloft. Purinergic neurons—ATP. ATP is a neurotransmitter acting at both ligand-gated ion channels (P2X receptors) and GPCRs (P2Y receptors). ATP is, however, best known as a cotransmitter. Such purinergic signaling can also be mediated by other purines like adenosine, which acts particularly at adenosine (P1) receptors. Histaminergic neurons—histamine. Histamine is a monoamine neurotransmitter and neuromodulator. Histamine-producing neurons are found in the tuberomammillary nucleus of the hypothalamus. Histamine is involved in arousal and regulating sleep/wake behaviors. Multimodal classification Since 2012 there has been a push from the cellular and computational neuroscience community to come up with a universal classification of neurons that will apply to all neurons in the brain as well as across species. This is done by considering the three essential qualities of all neurons: electrophysiology, morphology, and the individual transcriptome of the cells. Besides being universal, this classification has the advantage of being able to classify astrocytes as well. A method called patch-sequencing, in which all three qualities can be measured at once, is used extensively by the Allen Institute for Brain Science. In 2023, a comprehensive cell atlas of the adult and developing human brain at the transcriptional, epigenetic, and functional levels was created through an international collaboration of researchers using the most cutting-edge molecular biology approaches. Connectivity Neurons communicate with each other via synapses, where the axon terminal of one cell contacts another neuron's dendrite, soma, or, less commonly, axon. Neurons such as Purkinje cells in the cerebellum can have over 1000 dendritic branches, making connections with tens of thousands of other cells; other neurons, such as the magnocellular neurons of the supraoptic nucleus, have only one or two dendrites, each of which receives thousands of synapses. Synapses can be excitatory or inhibitory, either increasing or decreasing activity in the target neuron, respectively. Some neurons also communicate via electrical synapses, which are direct, electrically conductive junctions between cells. When an action potential reaches the axon terminal, it opens voltage-gated calcium channels, allowing calcium ions to enter the terminal. Calcium causes synaptic vesicles filled with neurotransmitter molecules to fuse with the membrane, releasing their contents into the synaptic cleft. The neurotransmitters diffuse across the synaptic cleft and activate receptors on the postsynaptic neuron. High cytosolic calcium in the axon terminal triggers mitochondrial calcium uptake, which, in turn, activates mitochondrial energy metabolism to produce ATP to support continuous neurotransmission. An autapse is a synapse in which a neuron's axon connects to its own dendrites. The human brain has some 8.6 × 10^10 (eighty-six billion) neurons.
Each neuron has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 × 10^14 synapses (100 to 500 trillion). Nonelectrochemical signaling Beyond electrical and chemical signaling, studies suggest neurons in healthy human brains can also communicate through force generated by the enlargement of dendritic spines and through the transfer of proteins – transneuronally transported proteins (TNTPs). Neurons can also be modulated by input from the environment and by hormones released from other parts of the organism, which may themselves be influenced more or less directly by neurons. This also applies to neurotrophins such as BDNF. The gut microbiome is also connected with the brain. Neurons also communicate with microglia, the brain's main immune cells, via specialized contact sites called "somatic junctions". These connections enable microglia to constantly monitor and regulate neuronal functions, and exert neuroprotection when needed. Mechanisms for propagating action potentials In 1937 John Zachary Young suggested that the squid giant axon could be used to study neuronal electrical properties. It is larger than but similar to human neurons, making it easier to study. By inserting electrodes into the squid giant axon, researchers were able to make accurate measurements of the membrane potential. The cell membrane of the axon and soma contains voltage-gated ion channels that allow the neuron to generate and propagate an electrical signal (an action potential). Some neurons also generate subthreshold membrane potential oscillations. These signals are generated and propagated by charge-carrying ions including sodium (Na+), potassium (K+), chloride (Cl−), and calcium (Ca2+). Several stimuli can activate a neuron leading to electrical activity, including pressure, stretch, chemical transmitters, and changes in the electric potential across the cell membrane. Stimuli cause specific ion channels within the cell membrane to open, leading to a flow of ions through the cell membrane, changing the membrane potential. Neurons must maintain the specific electrical properties that define their neuron type. Thin neurons and axons require less metabolic expense to produce and carry action potentials, but thicker axons convey impulses more rapidly. To minimize metabolic expense while maintaining rapid conduction, many neurons have insulating sheaths of myelin around their axons. The sheaths are formed by glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral nervous system. The sheath enables action potentials to travel faster than in unmyelinated axons of the same diameter, whilst using less energy. The myelin sheath in peripheral nerves normally runs along the axon in sections about 1 mm long, punctuated by unsheathed nodes of Ranvier, which contain a high density of voltage-gated ion channels. Multiple sclerosis is a neurological disorder that results from the demyelination of axons in the central nervous system. Some neurons do not generate action potentials but instead generate a graded electrical signal, which in turn causes graded neurotransmitter release. Such non-spiking neurons tend to be sensory neurons or interneurons, because they cannot carry signals long distances.
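The role of each of these ions can be made concrete with the Nernst equation, E = (RT/zF) ln([out]/[in]), which gives the equilibrium potential a single permeant ion would impose on the membrane. The following is a minimal sketch, assuming typical textbook-style concentrations for a mammalian neuron rather than measurements from any particular cell.

```python
# Nernst equation: E = (R*T / z*F) * ln([out]/[in]), the potential at
# which an ion's diffusion and electrical forces balance.  The ion
# concentrations below are illustrative round numbers, not data.
from math import log

R, F, T = 8.314, 96485.0, 310.0   # J/(mol K), C/mol, body temperature (K)

def nernst(z, c_out, c_in):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * log(c_out / c_in)

ions = {  # name: (valence, extracellular mM, intracellular mM)
    "K+":   ( 1,   5.0, 140.0),
    "Na+":  ( 1, 145.0,  12.0),
    "Cl-":  (-1, 110.0,  10.0),
    "Ca2+": ( 2,   2.0,  1e-4),
}
for name, (z, out, inside) in ions.items():
    print(f"E_{name} = {nernst(z, out, inside):+.1f} mV")
```

Running this gives roughly −89 mV for K+ and +67 mV for Na+, which is why opening potassium channels hyperpolarizes a neuron while opening sodium channels drives the depolarizing upstroke of the action potential.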
Neural coding Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses, and the relationships among the electrical activities of the neurons within the ensemble. It is thought that neurons can encode both digital and analog information. All-or-none principle The conduction of nerve impulses is an example of an all-or-none response. In other words, if a neuron responds at all, then it must respond completely. Greater intensity of stimulation, such as a brighter image or a louder sound, does not produce a stronger signal, but can increase firing frequency. Receptors respond in different ways to stimuli. Slowly adapting or tonic receptors respond to a steady stimulus and produce a steady rate of firing. Tonic receptors most often respond to increased stimulus intensity by increasing their firing frequency, usually as a power function of stimulus plotted against impulses per second. This can be likened to an intrinsic property of light, where greater intensity of a specific frequency (color) requires more photons, as the photons cannot become "stronger" for a specific frequency. Other receptor types include quickly adapting or phasic receptors, where firing decreases or stops with a steady stimulus; an example is the skin, which, when touched, causes neurons to fire, but if the object maintains even pressure, the neurons stop firing. The neurons of the skin and muscles that are responsive to pressure and vibration have filtering accessory structures that aid their function. The pacinian corpuscle is one such structure. It has concentric layers like an onion, which form around the axon terminal. When pressure is applied and the corpuscle is deformed, mechanical stimulus is transferred to the axon, which fires. If the pressure is steady, the stimulus ends; thus, these neurons typically respond with a transient depolarization during the initial deformation and again when the pressure is removed, which causes the corpuscle to change shape again. Other types of adaptation are important in extending the function of several other neurons.
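As a toy illustration of that power-function relationship, the sketch below maps stimulus intensity to firing frequency. The constant, exponent, and rate cap are invented round numbers; real exponents vary by receptor type.

```python
# Firing rate of a tonic receptor as a power function of stimulus
# intensity, f = k * S**n.  All constants here are illustrative.
def firing_rate(stimulus, k=10.0, n=0.6, max_rate=500.0):
    """Impulses per second for a stimulus in arbitrary intensity units;
    capped because the refractory period bounds the maximum rate."""
    return min(k * stimulus ** n, max_rate)

for s in (1, 2, 4, 8, 16):
    print(f"intensity {s:2d} -> {firing_rate(s):6.1f} impulses/s")
# Doubling the stimulus multiplies the rate by only 2**0.6 (about 1.5):
# stronger stimuli raise firing frequency, never spike amplitude.
```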
Etymology and spelling The German anatomist Heinrich Wilhelm Waldeyer introduced the term neuron in 1891, based on the ancient Greek νεῦρον neuron 'sinew, cord, nerve'. The word was adopted in French with the spelling neurone. That spelling was also used by many writers in English, but has now become rare in American usage and uncommon in British usage. Some previous works used nerve cell (cellule nervose), as adopted in Camillo Golgi's 1873 paper on the discovery of the silver staining technique used to visualize nervous tissue under light microscopy. History The neuron's place as the primary functional unit of the nervous system was first recognized in the late 19th century through the work of the Spanish anatomist Santiago Ramón y Cajal. To make the structure of individual neurons visible, Ramón y Cajal improved a silver staining process that had been developed by Camillo Golgi. The improved process involves a technique called "double impregnation" and is still in use. In 1888 Ramón y Cajal published a paper about the bird cerebellum. In this paper, he stated that he could not find evidence for anastomosis between axons and dendrites and called each nervous element "an autonomous canton." This became known as the neuron doctrine, one of the central tenets of modern neuroscience. In 1891, the German anatomist Heinrich Wilhelm Waldeyer wrote a highly influential review of the neuron doctrine in which he introduced the term neuron to describe the anatomical and physiological unit of the nervous system. The silver impregnation stains are a useful method for neuroanatomical investigations because, for reasons unknown, they stain only a small percentage of cells in a tissue, exposing the complete microstructure of individual neurons without much overlap from other cells. Neuron doctrine The neuron doctrine is the now fundamental idea that neurons are the basic structural and functional units of the nervous system. The theory was put forward by Santiago Ramón y Cajal in the late 19th century. It held that neurons are discrete cells (not connected in a meshwork), acting as metabolically distinct units. Later discoveries yielded refinements to the doctrine. For example, glial cells, which are non-neuronal, play an essential role in information processing. Also, electrical synapses are more common than previously thought, comprising direct, cytoplasmic connections between neurons; in fact, neurons can form even tighter couplings: the squid giant axon arises from the fusion of multiple axons. Ramón y Cajal also postulated the Law of Dynamic Polarization, which states that a neuron receives signals at its dendrites and cell body and transmits them, as action potentials, along the axon in one direction: away from the cell body. The Law of Dynamic Polarization has important exceptions; dendrites can serve as synaptic output sites of neurons, and axons can receive synaptic inputs. Compartmental modelling of neurons Although neurons are often described as "fundamental units" of the brain, they perform internal computations. Neurons integrate input within dendrites, and this complexity is lost in models that assume neurons to be a fundamental unit. Dendritic branches can be modeled as spatial compartments, whose activity is related to passive membrane properties, but may also be different depending on input from synapses. Compartmental modelling of dendrites is especially helpful for understanding the behavior of neurons that are too small to record with electrodes, as is the case for Drosophila melanogaster.
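A minimal sketch of such a compartmental model, reduced to just two passive compartments (a soma and one dendritic compartment) with invented round-number parameters, shows how dendritic input is attenuated on its way to the soma.

```python
# Two-compartment passive model: soma and dendrite, each a leaky RC
# circuit, coupled by an axial conductance.  Units: mV, ms, nF, uS, nA.
# All parameter values are illustrative, not fits to any real neuron.
def simulate(I_dend=1.0, T=200.0, dt=0.01):
    C = 1.0         # membrane capacitance of each compartment (nF)
    g_leak = 0.05   # leak conductance (uS)
    E_leak = -65.0  # leak reversal potential (mV)
    g_axial = 0.02  # axial coupling between compartments (uS)
    v_soma = v_dend = E_leak
    for _ in range(int(T / dt)):
        axial = g_axial * (v_dend - v_soma)   # current flowing soma-ward
        v_soma += dt / C * (g_leak * (E_leak - v_soma) + axial)
        v_dend += dt / C * (g_leak * (E_leak - v_dend) - axial + I_dend)
    return v_soma, v_dend

v_s, v_d = simulate()
print(f"dendrite {v_d:.1f} mV, soma {v_s:.1f} mV")
# The injected current depolarizes the dendrite far more than the soma:
# passive attenuation of dendritic input on its way to the cell body.
```

Real compartmental models chain dozens or hundreds of such compartments along the reconstructed dendritic tree, but the same coupled leaky-integrator update applies to each one.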
Neurons in the brain The number of neurons in the brain varies dramatically from species to species. In a human, there are an estimated 10–20 billion neurons in the cerebral cortex and 55–70 billion neurons in the cerebellum. By contrast, the nematode worm Caenorhabditis elegans has just 302 neurons, making it an ideal model organism as scientists have been able to map all of its neurons. The fruit fly Drosophila melanogaster, a common subject in biological experiments, has around 100,000 neurons and exhibits many complex behaviors. Many properties of neurons, from the type of neurotransmitters used to ion channel composition, are maintained across species, allowing scientists to study processes occurring in more complex organisms in much simpler experimental systems. Neurological disorders Charcot–Marie–Tooth disease (CMT) is a heterogeneous inherited disorder of nerves (neuropathy) that is characterized by loss of muscle tissue and touch sensation, predominantly in the feet and legs, extending to the hands and arms in advanced stages. Presently incurable, this disease is one of the most common inherited neurological disorders, affecting 36 in 100,000 people. Alzheimer's disease (AD), also known simply as Alzheimer's, is a neurodegenerative disease characterized by progressive cognitive deterioration, together with declining activities of daily living and neuropsychiatric symptoms or behavioral changes. The most striking early symptom is loss of short-term memory (amnesia), which usually manifests as minor forgetfulness that becomes steadily more pronounced with illness progression, with relative preservation of older memories. As the disorder progresses, cognitive (intellectual) impairment extends to the domains of language (aphasia), skilled movements (apraxia), and recognition (agnosia), and functions such as decision-making and planning become impaired. Parkinson's disease (PD), also known as Parkinson's, is a degenerative disorder of the central nervous system that often impairs motor skills and speech. Parkinson's disease belongs to a group of conditions called movement disorders. It is characterized by muscle rigidity, tremor, a slowing of physical movement (bradykinesia), and in extreme cases, a loss of physical movement (akinesia). The primary symptoms are the results of decreased stimulation of the motor cortex by the basal ganglia, normally caused by the insufficient formation and action of dopamine, which is produced in the dopaminergic neurons of the brain. Secondary symptoms may include high-level cognitive dysfunction and subtle language problems. PD is both chronic and progressive. Myasthenia gravis is a neuromuscular disease leading to fluctuating muscle weakness and fatigability during simple activities. Weakness is typically caused by circulating antibodies that block acetylcholine receptors at the postsynaptic neuromuscular junction, inhibiting the stimulative effect of the neurotransmitter acetylcholine. Myasthenia is treated with immunosuppressants, cholinesterase inhibitors, and, in selected cases, thymectomy. Demyelination Demyelination is a process characterized by the gradual loss of the myelin sheath enveloping nerve fibers. When myelin deteriorates, signal conduction along nerves can be significantly impaired or lost, and the nerve eventually withers. Demyelination may affect both central and peripheral nervous systems, contributing to various neurological disorders such as multiple sclerosis, Guillain-Barré syndrome, and chronic inflammatory demyelinating polyneuropathy. Although demyelination is often caused by an autoimmune reaction, it may also be caused by viral infections, metabolic disorders, trauma, and some medications. Axonal degeneration Although most injury responses include calcium influx signaling to promote resealing of severed parts, axonal injuries initially lead to acute axonal degeneration, which is the rapid separation of the proximal and distal ends, occurring within 30 minutes of injury. Degeneration follows with swelling of the axolemma, and eventually leads to bead-like formation. Granular disintegration of the axonal cytoskeleton and inner organelles occurs after axolemma degradation. Early changes include accumulation of mitochondria in the paranodal regions at the site of injury. The endoplasmic reticulum degrades and mitochondria swell up and eventually disintegrate. The disintegration is dependent on ubiquitin and calpain proteases (caused by the influx of calcium ions), suggesting that axonal degeneration is an active process that produces complete fragmentation. The process takes roughly 24 hours in the PNS and longer in the CNS.
The signaling pathways leading to axolemma degeneration are unknown. Development Neurons develop through the process of neurogenesis, in which neural stem cells divide to produce differentiated neurons. Once fully differentiated, they are no longer capable of undergoing mitosis. Neurogenesis primarily occurs during embryonic development. Neurons initially develop from the neural tube in the embryo. The neural tube has three layers – a ventricular zone, an intermediate zone, and a marginal zone. The ventricular zone surrounds the tube's central canal and becomes the ependyma. Dividing cells of the ventricular zone form the intermediate zone, which stretches to the outermost layer of the neural tube, called the pial layer. The gray matter of the brain is derived from the intermediate zone. The extensions of the neurons in the intermediate zone make up the marginal zone, which, when myelinated, becomes the brain's white matter. Differentiation of the neurons is ordered by their size: large motor neurons differentiate first, while smaller sensory neurons, together with glial cells, differentiate at birth. Adult neurogenesis can occur, and studies of the age of human neurons suggest that this process occurs only for a minority of cells and that the vast majority of neurons in the neocortex form before birth and persist without replacement. The extent to which adult neurogenesis exists in humans, and its contribution to cognition, are controversial, with conflicting reports published in 2018. The body contains a variety of stem cell types that can differentiate into neurons. Researchers found a way to transform human skin cells into nerve cells using transdifferentiation, in which "cells are forced to adopt new identities". During neurogenesis in the mammalian brain, progenitor and stem cells progress from proliferative divisions to differentiative divisions. This progression leads to the neurons and glia that populate cortical layers. Epigenetic modifications play a key role in regulating gene expression in differentiating neural stem cells, and are critical for cell fate determination in the developing and adult mammalian brain. Epigenetic modifications include DNA cytosine methylation to form 5-methylcytosine and 5-methylcytosine demethylation. DNA cytosine methylation is catalyzed by DNA methyltransferases (DNMTs). Methylcytosine demethylation is catalyzed in several stages by TET enzymes that carry out oxidative reactions (e.g. 5-methylcytosine to 5-hydroxymethylcytosine) and enzymes of the DNA base excision repair (BER) pathway. At different stages of mammalian nervous system development, two DNA repair processes are employed in the repair of DNA double-strand breaks. These pathways are homologous recombinational repair, used in proliferating neural precursor cells, and non-homologous end joining, used mainly at later developmental stages. Intercellular communication between developing neurons and microglia is also indispensable for proper neurogenesis and brain development. Nerve regeneration Peripheral axons can regrow if they are severed, but one neuron cannot be functionally replaced by one of another type (Llinás' law).
Noble gas
The noble gases (historically the inert gases, sometimes referred to as aerogens) are the members of group 18 of the periodic table: helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), radon (Rn) and, in some cases, oganesson (Og). Under standard conditions, the first six of these elements are odorless, colorless, monatomic gases with very low chemical reactivity and cryogenic boiling points. The properties of the seventh, unstable element, Og, are uncertain. The intermolecular force between noble gas atoms is the very weak London dispersion force, so their boiling points are all cryogenic. The noble gases' inertness, or tendency not to react with other chemical substances, results from their electron configuration: their outer shell of valence electrons is "full", giving them little tendency to participate in chemical reactions. Only a few hundred noble gas compounds are known to exist. The inertness of noble gases makes them useful whenever chemical reactions are unwanted. For example, argon is used as a shielding gas in welding and as a filler gas in incandescent light bulbs. Helium is used to provide buoyancy in blimps and balloons. Helium and neon are also used as refrigerants due to their low boiling points. Industrial quantities of the noble gases, except for radon, are obtained by separating them from air using the methods of liquefaction of gases and fractional distillation. Helium is also a byproduct of the mining of natural gas. Radon is usually isolated from the radioactive decay of dissolved radium, thorium, or uranium compounds. The seventh member of group 18 is oganesson, an unstable synthetic element whose chemistry is still uncertain because only five very short-lived atoms (t1/2 = 0.69 ms) have ever been synthesized. IUPAC uses the term "noble gas" interchangeably with "group 18" and thus includes oganesson; however, due to relativistic effects, oganesson is predicted to be a solid under standard conditions and reactive enough not to qualify functionally as "noble". History Noble gas is translated from the German noun Edelgas, first used in 1900 by Hugo Erdmann to indicate their extremely low level of reactivity. The name makes an analogy to the term "noble metals", which also have low reactivity. The noble gases have also been referred to as inert gases, but this label is deprecated as many noble gas compounds are now known. Rare gases is another term that was used, but this is also inaccurate because argon forms a fairly considerable part (0.94% by volume, 1.3% by mass) of the Earth's atmosphere due to decay of radioactive potassium-40. Pierre Janssen and Joseph Norman Lockyer had discovered a new element on 18 August 1868 while looking at the chromosphere of the Sun, and named it helium after the Greek word for the Sun (ἥλιος, hḗlios). No chemical analysis was possible at the time, but helium was later found to be a noble gas. Before them, in 1784, the English chemist and physicist Henry Cavendish had discovered that air contains a small proportion of a substance less reactive than nitrogen. A century later, in 1895, Lord Rayleigh discovered that samples of nitrogen from the air were of a different density than nitrogen resulting from chemical reactions.
Along with Scottish scientist William Ramsay at University College, London, Lord Rayleigh theorized that the nitrogen extracted from air was mixed with another gas, leading to an experiment that successfully isolated a new element, argon, named from the Greek word ἀργός (argós, "idle" or "lazy"). With this discovery, they realized an entire class of gases was missing from the periodic table. During his search for argon, Ramsay also managed to isolate helium for the first time while heating cleveite, a mineral. In 1902, having accepted the evidence for the elements helium and argon, Dmitri Mendeleev included these noble gases as group 0 in his arrangement of the elements, which would later become the periodic table. Ramsay continued his search for these gases using the method of fractional distillation to separate liquid air into several components. In 1898, he discovered the elements krypton, neon, and xenon, and named them after the Greek words κρυπτός (kryptós, "hidden"), νέος (néos, "new"), and ξένος (xénos, "stranger"), respectively. Radon was first identified in 1898 by Friedrich Ernst Dorn, and was named radium emanation, but was not considered a noble gas until 1904 when its characteristics were found to be similar to those of other noble gases. Rayleigh and Ramsay received the 1904 Nobel Prizes in Physics and in Chemistry, respectively, for their discovery of the noble gases; in the words of J. E. Cederblom, then president of the Royal Swedish Academy of Sciences, "the discovery of an entirely new group of elements, of which no single representative had been known with any certainty, is something utterly unique in the history of chemistry, being intrinsically an advance in science of peculiar significance". The discovery of the noble gases aided in the development of a general understanding of atomic structure. In 1895, French chemist Henri Moissan attempted to form a reaction between fluorine, the most electronegative element, and argon, one of the noble gases, but failed. Scientists were unable to prepare compounds of argon until the end of the 20th century, but these attempts helped to develop new theories of atomic structure. Learning from these experiments, Danish physicist Niels Bohr proposed in 1913 that the electrons in atoms are arranged in shells surrounding the nucleus, and that for all noble gases except helium the outermost shell always contains eight electrons. In 1916, Gilbert N. Lewis formulated the octet rule, which concluded that an octet of electrons in the outer shell was the most stable arrangement for any atom; this arrangement caused them to be unreactive with other elements since they did not require any more electrons to complete their outer shell. In 1962, Neil Bartlett discovered the first chemical compound of a noble gas, xenon hexafluoroplatinate. Compounds of other noble gases were discovered soon after: in 1962 for radon, radon difluoride (RnF2), which was identified by radiotracer techniques, and in 1963 for krypton, krypton difluoride (KrF2). The first stable compound of argon was reported in 2000 when argon fluorohydride (HArF) was formed at extremely low temperature. In October 2006, scientists from the Joint Institute for Nuclear Research and Lawrence Livermore National Laboratory successfully synthesized oganesson, the seventh element in group 18, by bombarding californium with calcium. Physical and atomic properties The noble gases have weak interatomic forces, and consequently have very low melting and boiling points.
They are all monatomic gases under standard conditions, including elements with larger atomic masses than many normally solid elements. Helium has several unique qualities when compared with other elements: its boiling point at 1 atm is lower than that of any other known substance; it is the only element known to exhibit superfluidity; and it is the only element that cannot be solidified by cooling at atmospheric pressure (an effect explained by quantum mechanics, as its zero point energy is too high to permit freezing) – substantial pressure must be applied at temperatures near absolute zero to convert it to a solid, while a far greater pressure is required at room temperature. The noble gases up to xenon have multiple stable isotopes; krypton and xenon also have naturally occurring radioisotopes, namely 78Kr, 124Xe, and 136Xe, all of which have very long half-lives (> 10^21 years) and can undergo double electron capture or double beta decay. Radon has no stable isotopes; its longest-lived isotope, 222Rn, has a half-life of 3.8 days and decays to form helium and polonium, which ultimately decays to lead. Oganesson also has no stable isotopes, and its only known isotope 294Og is very short-lived (half-life 0.7 ms). Melting and boiling points increase going down the group. The noble gas atoms, like atoms in most groups, increase steadily in atomic radius from one period to the next due to the increasing number of electrons. The size of the atom is related to several properties. For example, the ionization potential decreases with an increasing radius because the valence electrons in the larger noble gases are farther away from the nucleus and are therefore not held as tightly by the atom. Noble gases have the largest ionization potential among the elements of each period, which reflects the stability of their electron configuration and is related to their relative lack of chemical reactivity. Some of the heavier noble gases, however, have ionization potentials small enough to be comparable to those of other elements and molecules. It was the insight that xenon has an ionization potential similar to that of the oxygen molecule that led Bartlett to attempt oxidizing xenon using platinum hexafluoride, an oxidizing agent known to be strong enough to react with oxygen. Noble gases cannot accept an electron to form stable anions; that is, they have a negative electron affinity. The macroscopic physical properties of the noble gases are dominated by the weak van der Waals forces between the atoms. The attractive force increases with the size of the atom as a result of the increase in polarizability and the decrease in ionization potential. This results in systematic group trends: as one goes down group 18, the atomic radius increases, and with it the interatomic forces increase, resulting in an increasing melting point, boiling point, enthalpy of vaporization, and solubility. The increase in density is due to the increase in atomic mass. The noble gases are nearly ideal gases under standard conditions, but their deviations from the ideal gas law provided important clues for the study of intermolecular interactions. The Lennard-Jones potential, often used to model intermolecular interactions, was deduced in 1924 by John Lennard-Jones from experimental data on argon before the development of quantum mechanics provided the tools for understanding intermolecular forces from first principles.
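That potential has the closed form V(r) = 4ε[(σ/r)^12 − (σ/r)^6]. The following is a minimal sketch, assuming commonly quoted approximate literature values for argon (ε/kB ≈ 120 K, σ ≈ 3.4 Å); treat the numbers as rough rather than definitive.

```python
# Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
# evaluated for argon.  eps and sigma are approximate literature values.
K_B = 1.380649e-23                     # Boltzmann constant, J/K

def lj(r, eps=120.0 * K_B, sigma=3.4e-10):
    """Pair interaction energy in joules at separation r (metres)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0) * 3.4e-10   # analytic location of the minimum
print(f"well position r_min = {r_min * 1e10:.2f} angstrom")
print(f"well depth          = {lj(r_min) / K_B:.1f} K (in units of kB)")
# The well is only ~120 K deep, which is why thermal motion at room
# temperature easily overcomes it and argon liquefies only cryogenically.
```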
The theoretical analysis of these interactions became tractable because the noble gases are monatomic and the atoms spherical, which means that the interaction between the atoms is independent of direction, or isotropic. Chemical properties The noble gases are colorless, odorless, tasteless, and nonflammable under standard conditions. They were once labeled group 0 in the periodic table because it was believed they had a valence of zero, meaning their atoms cannot combine with those of other elements to form compounds. However, it was later discovered that some do indeed form compounds, causing this label to fall into disuse. Electron configuration Like other groups, the members of this family show patterns in their electron configuration, especially in the outermost shells, resulting in trends in chemical behavior: the noble gases have full valence electron shells. Valence electrons are the outermost electrons of an atom and are normally the only electrons that participate in chemical bonding. Atoms with full valence electron shells are extremely stable and therefore do not tend to form chemical bonds and have little tendency to gain or lose electrons. However, the electrons of heavier noble gases such as radon are held less firmly by the electromagnetic force than those of lighter noble gases such as helium, making it easier to remove outer electrons from the heavy noble gases. As a result of a full shell, the noble gases can be used in conjunction with the electron configuration notation to form the noble gas notation. To do this, the nearest noble gas that precedes the element in question is written first, and then the electron configuration is continued from that point forward. For example, the electron configuration of phosphorus is 1s2 2s2 2p6 3s2 3p3, while the noble gas notation is [Ne] 3s2 3p3. This more compact notation makes it easier to identify elements, and is shorter than writing out the full notation of atomic orbitals.
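The rule is mechanical enough to code. The sketch below fills subshells in simple Madelung (aufbau) order, which ignores the handful of transition-metal exceptions such as chromium and copper, and then substitutes the largest preceding noble-gas core.

```python
# Noble gas notation from simple aufbau filling.  This is a naive
# sketch: it uses strict Madelung order, so exceptional configurations
# (Cr, Cu, etc.) are not reproduced.
ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d",
         "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAP = {"s": 2, "p": 6, "d": 10, "f": 14}
NOBLE = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def configuration(z):
    """Fill subshells in Madelung order until z electrons are placed."""
    conf, left = [], z
    for sub in ORDER:
        if left == 0:
            break
        n = min(left, CAP[sub[-1]])
        conf.append((sub, n))
        left -= n
    return conf

def noble_gas_notation(z):
    """Replace the largest noble-gas core that precedes element z."""
    core = max((k for k in NOBLE if k < z), default=0)
    if core == 0:
        return " ".join(f"{s}{n}" for s, n in configuration(z))
    tail = configuration(z)[len(configuration(core)):]
    return f"[{NOBLE[core]}] " + " ".join(f"{s}{n}" for s, n in tail)

print(noble_gas_notation(15))   # phosphorus -> [Ne] 3s2 3p3
```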
The noble gases cross the boundary between blocks—helium is an s-element whereas the rest of the members are p-elements—which is unusual among the IUPAC groups. All other IUPAC groups contain elements from one block each. This causes some inconsistencies in trends across the table, and on those grounds some chemists have proposed that helium should be moved to group 2 to be with other s2 elements, but this change has not generally been adopted. Compounds The noble gases show extremely low chemical reactivity; consequently, only a few hundred noble gas compounds have been formed. Neutral compounds in which helium and neon are involved in chemical bonds have not been formed (although some helium-containing ions exist and there is some theoretical evidence for a few neutral helium-containing ones), while xenon, krypton, and argon have shown only minor reactivity. The reactivity follows the order Ne < He < Ar < Kr < Xe < Rn ≪ Og. In 1933, Linus Pauling predicted that the heavier noble gases could form compounds with fluorine and oxygen. He predicted the existence of krypton hexafluoride (KrF6) and xenon hexafluoride (XeF6), speculated that xenon octafluoride (XeF8) might exist as an unstable compound, and suggested that xenic acid could form perxenate salts. These predictions were shown to be generally accurate, except that XeF8 is now thought to be both thermodynamically and kinetically unstable. Xenon compounds are the most numerous of the noble gas compounds that have been formed. Most of them have the xenon atom in the oxidation state of +2, +4, +6, or +8 bonded to highly electronegative atoms such as fluorine or oxygen, as in xenon difluoride (XeF2), xenon tetrafluoride (XeF4), xenon hexafluoride (XeF6), xenon tetroxide (XeO4), and sodium perxenate (Na4XeO6). Xenon reacts with fluorine to form numerous xenon fluorides according to the following equations:
Xe + F2 → XeF2
Xe + 2F2 → XeF4
Xe + 3F2 → XeF6
Some of these compounds have found use in chemical synthesis as oxidizing agents; XeF2, in particular, is commercially available and can be used as a fluorinating agent. As of 2007, about five hundred compounds of xenon bonded to other elements have been identified, including organoxenon compounds (containing xenon bonded to carbon), and xenon bonded to nitrogen, chlorine, gold, mercury, and xenon itself. Compounds of xenon bound to boron, hydrogen, bromine, iodine, beryllium, sulphur, titanium, copper, and silver have also been observed, but only at low temperatures in noble gas matrices, or in supersonic noble gas jets. Radon is more reactive than xenon, and forms chemical bonds more easily than xenon does. However, due to the high radioactivity and short half-life of radon isotopes, only a few fluorides and oxides of radon have been formed in practice. Radon goes further towards metallic behavior than xenon; the difluoride RnF2 is highly ionic, and cationic Rn2+ is formed in halogen fluoride solutions. For this reason, kinetic hindrance makes it difficult to oxidize radon beyond the +2 state. Only tracer experiments appear to have succeeded in doing so, probably forming RnF4, RnF6, and RnO3. Krypton is less reactive than xenon, but several compounds have been reported with krypton in the oxidation state of +2. Krypton difluoride is the most notable and easily characterized. Under extreme conditions, krypton reacts with fluorine to form KrF2 according to the following equation:
Kr + F2 → KrF2
Compounds in which krypton forms a single bond to nitrogen and oxygen have also been characterized, but are only stable at very low temperatures. Krypton atoms chemically bound to other nonmetals (hydrogen, chlorine, carbon) as well as some late transition metals (copper, silver, gold) have also been observed, but only either at low temperatures in noble gas matrices, or in supersonic noble gas jets. Similar conditions were used to obtain the first few compounds of argon in 2000, such as argon fluorohydride (HArF), and some bound to the late transition metals copper, silver, and gold. As of 2007, no stable neutral molecules involving covalently bound helium or neon are known. Extrapolation from periodic trends predicts that oganesson should be the most reactive of the noble gases; more sophisticated theoretical treatments indicate greater reactivity than such extrapolations suggest, to the point where the applicability of the descriptor "noble gas" has been questioned. Oganesson is expected to be rather like silicon or tin in group 14: a reactive element with a common +4 and a less common +2 state, which at room temperature and pressure is not a gas but rather a solid semiconductor. Empirical or experimental testing will be required to validate these predictions. (On the other hand, flerovium, despite being in group 14, is predicted to be unusually volatile, which suggests noble gas-like properties.) The noble gases—including helium—can form stable molecular ions in the gas phase. The simplest is the helium hydride molecular ion, HeH+, discovered in 1925.
Because it is composed of the two most abundant elements in the universe, hydrogen and helium, it was believed to occur naturally in the interstellar medium, and it was finally detected in April 2019 using the airborne SOFIA telescope. In addition to these ions, there are many known neutral excimers of the noble gases. These are compounds such as ArF and KrF that are stable only when in an excited electronic state; some of them find application in excimer lasers. In addition to the compounds where a noble gas atom is involved in a covalent bond, noble gases also form non-covalent compounds. The clathrates, first described in 1949, consist of a noble gas atom trapped within cavities of crystal lattices of certain organic and inorganic substances. The essential condition for their formation is that the guest (noble gas) atoms must be of appropriate size to fit in the cavities of the host crystal lattice. For instance, argon, krypton, and xenon form clathrates with hydroquinone, but helium and neon do not because they are too small or insufficiently polarizable to be retained. Neon, argon, krypton, and xenon also form clathrate hydrates, where the noble gas is trapped in ice. Noble gases can form endohedral fullerene compounds, in which the noble gas atom is trapped inside a fullerene molecule. In 1993, it was discovered that when C60, a spherical molecule consisting of 60 carbon atoms, is exposed to noble gases at high pressure, complexes such as He@C60 can be formed (the @ notation indicates that the He atom is contained inside the C60 cage but not covalently bound to it). As of 2008, endohedral complexes with helium, neon, argon, krypton, and xenon have been created. These compounds have found use in the study of the structure and reactivity of fullerenes by means of the nuclear magnetic resonance of the noble gas atom. Noble gas compounds such as xenon difluoride (XeF2) are considered to be hypervalent because they violate the octet rule. Bonding in such compounds can be explained using a three-center four-electron bond model. This model, first proposed in 1951, considers bonding of three collinear atoms. For example, bonding in XeF2 is described by a set of three molecular orbitals (MOs) derived from p-orbitals on each atom. Bonding results from the combination of a filled p-orbital from Xe with one half-filled p-orbital from each F atom, resulting in a filled bonding orbital, a filled non-bonding orbital, and an empty antibonding orbital. The highest occupied molecular orbital is localized on the two terminal atoms. This represents a localization of charge that is facilitated by the high electronegativity of fluorine. The chemistry of the heavier noble gases, krypton and xenon, is well established. The chemistry of the lighter ones, argon and helium, is still at an early stage, while a neon compound is yet to be identified. Occurrence and production The abundances of the noble gases in the universe decrease as their atomic numbers increase. Helium is the most common element in the universe after hydrogen, with a mass fraction of about 24%. Most of the helium in the universe was formed during Big Bang nucleosynthesis, but the amount of helium is steadily increasing due to the fusion of hydrogen in stellar nucleosynthesis (and, to a very slight degree, the alpha decay of heavy elements). Abundances on Earth follow different trends; for example, helium is only the third most abundant noble gas in the atmosphere.
The reason is that there is no primordial helium in the atmosphere; due to the small mass of the atom, helium cannot be retained by the Earth's gravitational field. Helium on Earth comes from the alpha decay of heavy elements such as uranium and thorium found in the Earth's crust, and tends to accumulate in natural gas deposits. The abundance of argon, on the other hand, is increased as a result of the beta decay of potassium-40, also found in the Earth's crust, to form argon-40, which is the most abundant isotope of argon on Earth despite being relatively rare in the Solar System. This process is the basis for the potassium-argon dating method. Xenon has an unexpectedly low abundance in the atmosphere, in what has been called the missing xenon problem; one theory is that the missing xenon may be trapped in minerals inside the Earth's crust. After the discovery of xenon dioxide, research showed that Xe can substitute for Si in quartz. Radon is formed in the lithosphere by the alpha decay of radium. It can seep into buildings through cracks in their foundation and accumulate in areas that are not well ventilated. Due to its high radioactivity, radon presents a significant health hazard; it is implicated in an estimated 21,000 lung cancer deaths per year in the United States alone. Oganesson does not occur in nature and is instead created artificially by scientists. For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain up to 7% helium. Neon, argon, krypton, and xenon are obtained from air using the methods of liquefaction of gases, to convert elements to a liquid state, and fractional distillation, to separate mixtures into component parts. Helium is typically produced by separating it from natural gas, and radon is isolated from the radioactive decay of radium compounds. The prices of the noble gases are influenced by their natural abundance, with argon being the cheapest and xenon the most expensive. Biological chemistry None of the elements in this group has any biological importance. Applications Noble gases have very low boiling and melting points, which makes them useful as cryogenic refrigerants. In particular, liquid helium, which boils at about 4.2 K (−269 °C), is used for superconducting magnets, such as those needed in nuclear magnetic resonance imaging and nuclear magnetic resonance. Liquid neon, although it does not reach temperatures as low as liquid helium, also finds use in cryogenics because it has over 40 times more refrigerating capacity than liquid helium and over three times more than liquid hydrogen. Helium is used as a component of breathing gases to replace nitrogen, due to its low solubility in fluids, especially in lipids. Gases are absorbed by the blood and body tissues when under pressure, as in scuba diving, and nitrogen causes an anesthetic effect known as nitrogen narcosis. Due to its reduced solubility, little helium is taken into cell membranes, and when helium is used to replace part of the breathing mixture, such as in trimix or heliox, a decrease in the narcotic effect of the gas at depth is obtained. Helium's reduced solubility offers further advantages for the condition known as decompression sickness, or the bends. The reduced amount of dissolved gas in the body means that fewer gas bubbles form during the decrease in pressure of the ascent.
Another noble gas, argon, is considered the best option for use as a drysuit inflation gas for scuba diving. Helium is also used as a filling gas in nuclear fuel rods for nuclear reactors. Since the Hindenburg disaster in 1937, helium has replaced hydrogen as a lifting gas in blimps and balloons: despite an 8.6% decrease in buoyancy compared to hydrogen, helium is not combustible. In many applications, the noble gases are used to provide an inert atmosphere. Argon is used in the synthesis of air-sensitive compounds that are also sensitive to nitrogen. Solid argon is also used for the study of very unstable compounds, such as reactive intermediates, by trapping them in an inert matrix at very low temperatures. Helium is used as the carrier medium in gas chromatography, as a filler gas for thermometers, and in devices for measuring radiation, such as the Geiger counter and the bubble chamber. Helium and argon are both commonly used to shield welding arcs and the surrounding base metal from the atmosphere during welding and cutting, as well as in other metallurgical processes and in the production of silicon for the semiconductor industry. Noble gases are commonly used in lighting because of their lack of chemical reactivity. Argon, mixed with nitrogen, is used as a filler gas for incandescent light bulbs. Krypton is used in high-performance light bulbs, which have higher color temperatures and greater efficiency, because it reduces the rate of evaporation of the filament more than argon; halogen lamps, in particular, use krypton mixed with small amounts of compounds of iodine or bromine. The noble gases glow in distinctive colors when used inside gas-discharge lamps, such as "neon lights". These lights are named after neon but often contain other gases and phosphors, which add various hues to the orange-red color of neon. Xenon is commonly used in xenon arc lamps, which, due to their nearly continuous spectrum that resembles daylight, find application in film projectors and as automobile headlamps. The noble gases are used in excimer lasers, which are based on short-lived electronically excited molecules known as excimers. The excimers used for lasers may be noble gas dimers such as Ar2, Kr2 or Xe2, or more commonly, the noble gas is combined with a halogen in excimers such as ArF, KrF, XeF, or XeCl. These lasers produce ultraviolet light, which, due to its short wavelength (193 nm for ArF and 248 nm for KrF), allows for high-precision imaging. Excimer lasers have many industrial, medical, and scientific applications. They are used for microlithography and microfabrication, which are essential for integrated circuit manufacture, and for laser surgery, including laser angioplasty and eye surgery. Some noble gases have direct application in medicine. Helium is sometimes used to improve the ease of breathing of people with asthma. Xenon is used as an anesthetic because of its high solubility in lipids, which makes it more potent than the usual nitrous oxide, and because it is readily eliminated from the body, resulting in faster recovery. Xenon finds application in medical imaging of the lungs through hyperpolarized MRI. Radon, which is highly radioactive and is only available in minute amounts, is used in radiotherapy. Noble gases, particularly xenon, are predominantly used in ion engines due to their inertness. Since ion engines are not driven by chemical reactions, chemically inert fuels are desired to prevent unwanted reaction between the fuel and anything else on the engine.
Oganesson is too unstable to work with and has no known application other than research. Noble gases in Earth science applications The relative isotopic abundances of noble gases serve as an important geochemical tracing tool in earth science. They can unravel the Earth's degassing history and its effects on the surrounding environment (e.g., atmospheric composition). Due to their inert nature and low abundances, changes in noble gas concentrations and isotopic ratios can be used to resolve and quantify the processes influencing their current signatures across geological settings. Helium Helium has two abundant isotopes: helium-3, which is primordial with high abundance in the Earth's core and mantle, and helium-4, which originates from decay of radionuclides (232Th, 235,238U) abundant in the Earth's crust. Isotopic ratios of helium are represented by the RA value, expressed relative to the ratio measured in air (3He/4He = 1.39 × 10^−6). Volatiles that originate from the Earth's crust have 0.02–0.05 RA, which indicates an enrichment of helium-4. Volatiles that originate from deeper sources, such as the subcontinental lithospheric mantle (SCLM), have 6.1 ± 0.9 RA, and mid-ocean ridge basalts (MORB) display higher values (8 ± 1 RA). Mantle plume samples have even higher values (> 8 RA). The solar wind, which represents an unmodified primordial signature, is reported to have ~330 RA.
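As a rough illustration of how these RA values are used, the sketch below converts an absolute 3He/4He measurement to RA units and matches it against the endmember ranges quoted above. The sample value is invented, and real source ranges overlap, so this is only a rough guide rather than an actual classification scheme.

```python
# Express a measured 3He/4He ratio in RA units (relative to air,
# 3He/4He_air = 1.39e-6) and compare with the endmembers quoted above.
AIR_RATIO = 1.39e-6

def to_ra(he3_he4):
    """Convert an absolute 3He/4He ratio to multiples of the air value."""
    return he3_he4 / AIR_RATIO

def rough_source(ra):
    if ra <= 0.05:
        return "crustal (radiogenic 4He-enriched)"
    if 5.2 <= ra <= 7.0:
        return "subcontinental lithospheric mantle (SCLM)"
    if 7.0 < ra <= 9.0:
        return "MORB-like upper mantle"
    if ra > 9.0:
        return "mantle plume (primordial 3He-enriched)"
    return "mixed or intermediate source"

sample = 1.1e-5                        # hypothetical geothermal gas sample
ra = to_ra(sample)
print(f"{ra:.1f} RA -> {rough_source(ra)}")   # ~7.9 RA, MORB-like
```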
Neon Neon has three main stable isotopes: 20Ne, 21Ne, and 22Ne, with 20Ne produced by cosmic nucleogenic reactions, causing high abundance in the atmosphere. 21Ne and 22Ne are produced in the Earth's crust as a result of interactions of alpha particles and neutrons with light elements such as 18O, 19F, and 24,25Mg. The neon ratios (20Ne/22Ne and 21Ne/22Ne) are systematically used to discern the heterogeneity in the Earth's mantle and volatile sources. Complementing He isotope data, neon isotope data additionally provide insight into the thermal evolution of Earth's systems. Argon Argon has three stable isotopes: 36Ar, 38Ar, and 40Ar. 36Ar and 38Ar are primordial, with their inventory in the Earth's crust dependent on the equilibration of meteoric water with crustal fluids. This explains the huge inventory of 36Ar in the atmosphere. Production of these two isotopes (36Ar and 38Ar) is negligible within the Earth's crust: only limited concentrations of 38Ar can be produced by the interaction of alpha particles from the decay of 235,238U and 232Th with light elements (37Cl and 41K), while 36Ar is continuously produced by beta decay of 36Cl. 40Ar is a product of radiogenic decay of 40K. Different endmember values for 40Ar/36Ar have been reported: air = 295.5, MORB = 40,000, and crust = 3,000. Krypton Krypton has several isotopes, with 78,80,82Kr being primordial, while 83,84,86Kr result from spontaneous fission of 244Pu and radiogenic decay of 238U. The geochemical signature of krypton isotopes in mantle reservoirs resembles that of the modern atmosphere and preserves a solar-like primordial signature. Krypton isotopes have been used to decipher the mechanism of volatile delivery to the Earth system, which had great implications for the evolution of Earth's volatiles (such as nitrogen and oxygen) and the emergence of life. This is largely due to a clear distinction among the krypton isotope signatures of various sources such as chondritic material, the solar wind, and comets. Xenon Xenon has nine isotopes, most of which are produced by radiogenic decay. The noble gases krypton and xenon require pristine, robust geochemical sampling protocols to avoid atmospheric contamination. Furthermore, sophisticated instrumentation is required to resolve mass peaks among many isotopes with small mass differences during analysis. Sampling of noble gases Noble gas measurements can be obtained from sources like volcanic vents, springs, and geothermal wells following specific sampling protocols. The classic sampling protocols include the following. Copper tubes - These are standard refrigeration copper tubes, cut to ~10 cm³ with a 3/8" outer diameter, and are used for sampling volatile discharges by connecting an inverted funnel to the tube via Tygon tubing, ensuring one-way inflow and preventing air contamination. Their malleability allows for cold welding or pinching off to seal the ends after sufficient flushing with the sample. Giggenbach bottles - Giggenbach bottles are evacuated glass flasks with a Teflon stopcock, used for sampling and storing gases. They require pre-evacuation before sampling, as noble gases accumulate in the headspace. These bottles were first invented and distributed by Werner F. Giggenbach, a German chemist. Analysis of noble gases Noble gases have numerous isotopes and subtle mass variations that require high-precision detection systems. Originally, scientists used magnetic sector mass spectrometry, which is time-consuming and has low sensitivity due to "peak jumping mode". Multiple-collector mass spectrometers, like quadrupole mass spectrometers (QMS), enable simultaneous detection of isotopes, improving sensitivity and throughput. Before analysis, sample preparation is essential due to the low abundance of noble gases, involving extraction and purification systems. Extraction allows liberation of noble gases from their carrier (major phase; fluid or solid), while purification removes impurities and improves the concentration per unit sample volume. Cryogenic traps are used for sequential analysis without peak interference by stepwise temperature increases. Research labs have successfully developed miniaturized field-based mass spectrometers, such as the portable mass spectrometer (miniRuedi), which can analyze noble gases with an analytical uncertainty of 1–3% using low-cost vacuum systems and quadrupole mass analyzers. Discharge color The color of gas discharge emission depends on several factors, including the discharge parameters (local value of current density and electric field, temperature, etc.), the purity of the gas (even a small fraction of certain gases can affect color), and the material of the discharge tube envelope, which can suppress the UV and blue components (as in tubes made of thick household glass).
Natural selection
Natural selection is the differential survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularised the term "natural selection", contrasting it with artificial selection, which is intentional, whereas natural selection is not. Variation of traits, both genotypic and phenotypic, exists within all populations of organisms. However, some traits are more likely to facilitate survival and reproductive success. Thus, these traits are passed on to the next generation. These traits can also become more common within a population if the environment that favours these traits remains fixed. If new traits become more favored due to changes in a specific niche, microevolution occurs. If new traits become more favored due to changes in the broader environment, macroevolution occurs. Sometimes, new species can arise, especially if these new traits are radically different from the traits possessed by their predecessors. The likelihood of these traits being 'selected' and passed down is determined by many factors. Some are likely to be passed down because they adapt well to their environments. Others are passed down because these traits are actively preferred by mating partners, which is known as sexual selection. Females may also prefer traits that confer the lowest cost to their reproductive health, which is known as fecundity selection. Natural selection is a cornerstone of modern biology. The concept, published by Darwin and Alfred Russel Wallace in a joint presentation of papers in 1858, was elaborated in Darwin's influential 1859 book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. He described natural selection as analogous to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favoured for reproduction. The concept of natural selection originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, science had yet to develop modern theories of genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical genetics formed the modern synthesis of the mid-20th century. The addition of molecular genetics has led to evolutionary developmental biology, which explains evolution at the molecular level. While genotypes can slowly change by random genetic drift, natural selection remains the primary explanation for adaptive evolution.
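The logic of differential survival and reproduction can be made concrete with a minimal simulation: one heritable variant with a small fitness advantage s, whose frequency p follows p' = p(1 + s) / (1 + ps) each generation. The 5% advantage and starting frequency below are arbitrary illustrative choices, not measurements of any real population.

```python
# Deterministic haploid selection: a variant A with relative fitness
# 1 + s versus 1 for the alternative a.  Each generation,
# p' = p(1 + s) / (p(1 + s) + (1 - p)) = p(1 + s) / (1 + p*s).
def generations(p=0.01, s=0.05, n=400):
    freqs = [p]
    for _ in range(n):
        p = p * (1 + s) / (1 + p * s)   # selection updates the frequency
        freqs.append(p)
    return freqs

f = generations()
for g in (0, 100, 200, 300, 400):
    print(f"generation {g:3d}: frequency of A = {f[g]:.3f}")
# Even a small heritable advantage, compounded over generations,
# carries the favoured variant from rarity to near fixation.
```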
As quoted in Darwin's 1872 edition of The Origin of Species, Aristotle considered whether different forms (e.g., of teeth) might have appeared accidentally, with only the useful forms surviving. Aristotle rejected this possibility in the next paragraph, however, making clear with the phrase "either invariably or normally come about" that he was talking about the development of animals as embryos, not the origin of species. The struggle for existence was later described by the Islamic writer Al-Jahiz in the 9th century, particularly in the context of top-down population regulation, but not in reference to individual variation or natural selection. At the turn of the 16th century, Leonardo da Vinci collected a set of fossils of ammonites as well as other biological material. He reasoned extensively in his writings that the shapes of animals are not given once and for all by the "upper power" but instead are generated naturally in different forms and then selected for reproduction by their compatibility with the environment. The more recent classical arguments were reintroduced in the 18th century by Pierre Louis Maupertuis and others, including Darwin's grandfather, Erasmus Darwin. Until the early 19th century, the prevailing view in Western societies was that differences between individuals of a species were uninteresting departures from their Platonic ideals (or typus) of created kinds. However, the theory of uniformitarianism in geology promoted the idea that simple, weak forces could act continuously over long periods of time to produce radical changes in the Earth's landscape. The success of this theory raised awareness of the vast scale of geological time and made plausible the idea that tiny, virtually imperceptible changes in successive generations could produce consequences on the scale of differences between species. The early 19th-century zoologist Jean-Baptiste Lamarck suggested the inheritance of acquired characteristics as a mechanism for evolutionary change; adaptive traits acquired by an organism during its lifetime could be inherited by that organism's progeny, eventually causing transmutation of species. This theory, Lamarckism, was an influence on the Soviet biologist Trofim Lysenko's ill-fated antagonism to mainstream genetic theory as late as the mid-20th century. Between 1835 and 1837, the zoologist Edward Blyth worked on the area of variation, artificial selection, and how a similar process occurs in nature. Darwin acknowledged Blyth's ideas in the first chapter on variation of On the Origin of Species. Darwin's theory In 1859, Charles Darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. He defined natural selection as the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. If the variations are heritable, then differential reproductive success leads to the evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species.
Darwin's ideas were inspired by the observations that he had made on the second voyage of HMS Beagle (1831–1836), and by the work of a political economist, Thomas Robert Malthus, who, in An Essay on the Principle of Population (1798), noted that population (if unchecked) increases exponentially, whereas the food supply grows only arithmetically; thus, inevitable limitations of resources would have demographic implications, leading to a "struggle for existence". When Darwin read Malthus in 1838, he was already primed by his work as a naturalist to appreciate the "struggle for existence" in nature. It struck him that as population outgrew resources, "favourable variations would tend to be preserved, and unfavourable ones to be destroyed. The result of this would be the formation of new species." Once he had this hypothesis, Darwin was meticulous about gathering and refining evidence of consilience to meet standards of methodology before making his scientific theory public. He was in the process of writing his "big book" to present his research when the naturalist Alfred Russel Wallace independently conceived of the principle and described it in an essay he sent to Darwin to forward to Charles Lyell. Lyell and Joseph Dalton Hooker decided to present Wallace's essay together with unpublished writings that Darwin had sent to fellow naturalists, and On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection was read to the Linnean Society of London in July 1858, announcing co-discovery of the principle. Darwin published a detailed account of his evidence and conclusions in On the Origin of Species in 1859. In later editions Darwin acknowledged that earlier writers, such as William Charles Wells in 1813 and Patrick Matthew in 1831, had proposed similar basic ideas. However, they had not developed their ideas, or presented evidence to persuade others that the concept was useful. Darwin thought of natural selection by analogy to how farmers select crops or livestock for breeding, which he called "artificial selection"; in his early manuscripts he referred to a "Nature" which would do the selection. At the time, other mechanisms of evolution such as evolution by genetic drift were not yet explicitly formulated, and Darwin believed that selection was likely only part of the story: "I am convinced that Natural Selection has been the main but not exclusive means of modification." In a letter to Charles Lyell in September 1860, Darwin regretted the use of the term "Natural Selection", preferring the term "Natural Preservation". For Darwin and his contemporaries, natural selection was in essence synonymous with evolution by natural selection. After the publication of On the Origin of Species, educated people generally accepted that evolution had occurred in some form. However, natural selection remained controversial as a mechanism, partly because it was perceived to be too weak to explain the range of observed characteristics of living organisms, and partly because even supporters of evolution balked at its "unguided" and non-progressive nature, a response that has been characterised as the single most significant impediment to the idea's acceptance. However, some thinkers enthusiastically embraced natural selection; after reading Darwin, Herbert Spencer introduced the phrase survival of the fittest, which became a popular summary of the theory.
The fifth edition of On the Origin of Species, published in 1869, included Spencer's phrase as an alternative to natural selection, with credit given: "But the expression often used by Mr. Herbert Spencer of the Survival of the Fittest is more accurate, and is sometimes equally convenient." Although the phrase is still often used by non-biologists, modern biologists avoid it because it is tautological if "fittest" is read to mean "functionally superior" and is applied to individuals rather than considered as an averaged quantity over populations. The modern synthesis Natural selection relies crucially on the idea of heredity, but it was developed before the basic concepts of genetics. Although the Moravian monk Gregor Mendel, the father of modern genetics, was a contemporary of Darwin's, his work lay in obscurity, only being rediscovered in 1900. With the early 20th-century integration of evolution with Mendel's laws of inheritance, the so-called modern synthesis, scientists generally came to accept natural selection. The synthesis grew from advances in different fields. Ronald Fisher developed the required mathematical language and wrote The Genetical Theory of Natural Selection (1930). J. B. S. Haldane introduced the concept of the "cost" of natural selection. Sewall Wright elucidated the nature of selection and adaptation. In his book Genetics and the Origin of Species (1937), Theodosius Dobzhansky established the idea that mutation, once seen as a rival to selection, actually supplied the raw material for natural selection by creating genetic diversity. Ernst Mayr recognised the key importance of reproductive isolation for speciation in his Systematics and the Origin of Species (1942). W. D. Hamilton conceived of kin selection in 1964. This synthesis cemented natural selection as the foundation of evolutionary theory, where it remains today. A second synthesis A second synthesis was brought about at the end of the 20th century by advances in molecular genetics, creating the field of evolutionary developmental biology ("evo-devo"), which seeks to explain the evolution of form in terms of the genetic regulatory programs which control the development of the embryo at molecular level. Natural selection is here understood to act on embryonic development to change the morphology of the adult body. Terminology The term natural selection is most often defined to operate on heritable traits, because these directly participate in evolution. However, natural selection is "blind" in the sense that changes in phenotype can give a reproductive advantage regardless of whether or not the trait is heritable. Following Darwin's primary usage, the term is used to refer both to the evolutionary consequence of blind selection and to its mechanisms. It is sometimes helpful to explicitly distinguish between selection's mechanisms and its effects; when this distinction is important, scientists define "(phenotypic) natural selection" specifically as "those mechanisms that contribute to the selection of individuals that reproduce", without regard to whether the basis of the selection is heritable. Traits that cause greater reproductive success of an organism are said to be selected for, while those that reduce success are selected against. Mechanism Heritable variation, differential reproduction Natural variation occurs among the individuals of any population of organisms.
Some differences may improve an individual's chances of surviving and reproducing such that its lifetime reproductive rate is increased, which means that it leaves more offspring. If the traits that give these individuals a reproductive advantage are also heritable, that is, passed from parent to offspring, then there will be differential reproduction, that is, a slightly higher proportion of individuals bearing the advantageous traits in the next generation. Even if the reproductive advantage is very slight, over many generations any advantageous heritable trait becomes predominant in the population. In this way the natural environment of an organism "selects for" traits that confer a reproductive advantage, causing evolutionary change, as Darwin described. This gives the appearance of purpose, but in natural selection there is no intentional choice. Artificial selection is purposive where natural selection is not, though biologists often use teleological language to describe it. The peppered moth exists in both light and dark colours in Great Britain, but during the Industrial Revolution, many of the trees on which the moths rested became blackened by soot, giving the dark-coloured moths an advantage in hiding from predators. This gave dark-coloured moths a better chance of surviving to produce dark-coloured offspring, and in just fifty years from the first dark moth being caught, nearly all of the moths in industrial Manchester were dark. The balance was reversed by the effect of the Clean Air Act 1956, and the dark moths became rare again, demonstrating the influence of natural selection on peppered moth evolution. A recent study, using image analysis and avian vision models, shows that pale individuals more closely match lichen backgrounds than dark morphs do, and for the first time quantitatively links the moths' camouflage to their predation risk. Fitness The concept of fitness is central to natural selection. In broad terms, individuals that are more "fit" have better potential for survival, as in the well-known phrase "survival of the fittest", but the precise meaning of the term is much more subtle. Modern evolutionary theory defines fitness not by how long an organism lives, but by how successful it is at reproducing. If an organism lives half as long as others of its species, but has twice as many offspring surviving to adulthood, its genes become more common in the adult population of the next generation. Though natural selection acts on individuals, the effects of chance mean that fitness can only really be defined "on average" for the individuals within a population. The fitness of a particular genotype corresponds to the average effect on all individuals with that genotype. A distinction must be made between the concept of "survival of the fittest" and "improvement in fitness". "Survival of the fittest" does not give an "improvement in fitness"; it only represents the removal of the less fit variants from a population. A mathematical example of "survival of the fittest" is given by Haldane in his paper "The Cost of Natural Selection". Haldane called this process "substitution"; in biology it is more commonly called "fixation". This is correctly described by the differential survival and reproduction of individuals due to differences in phenotype. On the other hand, "improvement in fitness" is not dependent on the differential survival and reproduction of individuals due to differences in phenotype; it is dependent on the absolute survival of the particular variant.
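To make the genotype-averaging in that definition of fitness concrete, the following is a toy sketch with invented offspring counts (the genotype labels and numbers are illustrative assumptions, not data from the studies above): fitness is estimated as mean reproductive success, irrespective of lifespan.

```python
# Toy sketch: fitness as the average reproductive success of a genotype,
# not the longevity of any one individual. All numbers are invented.
from statistics import mean

# offspring surviving to adulthood for a few individuals of each genotype
offspring = {
    "short-lived, fecund": [4, 5, 3, 4],   # lives half as long, breeds more
    "long-lived, frugal":  [2, 2, 3, 1],
}

for genotype, counts in offspring.items():
    print(f"{genotype}: mean fitness = {mean(counts):.2f}")
# The short-lived genotype has the higher mean reproductive success, so its
# alleles become more common despite the shorter individual lifespan.
```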
The probability of a beneficial mutation occurring on some member of a population depends on the total number of replications of that variant. The mathematics of "improvement in fitness" was described by Kleinman. An empirical example of "improvement in fitness" is given by the Kishony Mega-plate experiment. In this experiment, "improvement in fitness" depends on the number of replications of the particular variant for a new variant to appear that is capable of growing in the next higher drug concentration region. Fixation or substitution is not required for this "improvement in fitness". On the other hand, "improvement in fitness" can occur in an environment where "survival of the fittest" is also acting. Richard Lenski's classic E. coli long-term evolution experiment is an example of adaptation in a competitive environment ("improvement in fitness" during "survival of the fittest"). The probability of a beneficial mutation occurring on some member of the lineage to give improved fitness is slowed by the competition. The variant which is a candidate for a beneficial mutation in this limited carrying-capacity environment must first out-compete the "less fit" variants in order to accumulate the requisite number of replications for there to be a reasonable probability of that beneficial mutation occurring. Competition In biology, competition is an interaction between organisms in which the fitness of one is lowered by the presence of another. This may be because both rely on a limited supply of a resource such as food, water, or territory. Competition may be within or between species, and may be direct or indirect. Species less suited to compete should in theory either adapt or die out, since competition plays a powerful role in natural selection, but according to the "room to roam" theory it may be less important than expansion among larger clades. Competition is modelled by r/K selection theory, which is based on Robert MacArthur and E. O. Wilson's work on island biogeography. In this theory, selective pressures drive evolution in one of two stereotyped directions: r- or K-selection. These terms, r and K, can be illustrated in a logistic model of population dynamics, dN/dt = rN(1 − N/K), where r is the growth rate of the population (N), and K is the carrying capacity of its local environmental setting. Typically, r-selected species exploit empty niches, and produce many offspring, each with a relatively low probability of surviving to adulthood. In contrast, K-selected species are strong competitors in crowded niches, and invest more heavily in much fewer offspring, each with a relatively high probability of surviving to adulthood. Classification Natural selection can act on any heritable phenotypic trait, and selective pressure can be produced by any aspect of the environment, including sexual selection and competition with members of the same or other species. However, this does not imply that natural selection is always directional and results in adaptive evolution; natural selection often results in the maintenance of the status quo by eliminating less fit variants. Selection can be classified in several different ways, such as by its effect on a trait, on genetic diversity, by the life cycle stage where it acts, by the unit of selection, or by the resource being competed for. By effect on a trait Selection has different effects on traits. Stabilizing selection acts to hold a trait at a stable optimum, and in the simplest case all deviations from this optimum are selectively disadvantageous.
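To illustrate the logistic model just given, here is a minimal numerical sketch (the parameter values and function name are arbitrary choices for illustration, not empirical estimates):

```python
# Minimal sketch: discrete-time (Euler) integration of the logistic model
# dN/dt = r*N*(1 - N/K). Parameter values are illustrative only.

def logistic_trajectory(r, K, N0, steps=100, dt=0.1):
    """Return population sizes over time under logistic growth."""
    N, out = N0, []
    for _ in range(steps):
        N += r * N * (1 - N / K) * dt  # Euler step of the logistic ODE
        out.append(N)
    return out

# An "r-selected" strategy: rapid growth into an empty niche (high r).
fast = logistic_trajectory(r=2.0, K=1000, N0=10)
# A "K-selected" strategy: slow growth towards carrying capacity (low r).
slow = logistic_trajectory(r=0.2, K=1000, N0=10)
print(round(fast[-1]), round(slow[-1]))
# The high-r population is already near K = 1000; the low-r one is still climbing.
```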
Directional selection favours trait values at one extreme, shifting the population mean in that direction. The uncommon disruptive selection also acts during transition periods when the current mode is sub-optimal, but alters the trait in more than one direction. In particular, if the trait is quantitative and univariate then both higher and lower trait levels are favoured. Disruptive selection can be a precursor to speciation. (A small simulation contrasting directional and stabilizing selection appears at the end of this passage.) By effect on genetic diversity Alternatively, selection can be divided according to its effect on genetic diversity. Purifying or negative selection acts to remove genetic variation from the population (and is opposed by de novo mutation, which introduces new variation). In contrast, balancing selection acts to maintain genetic variation in a population, even in the absence of de novo mutation, by negative frequency-dependent selection. One mechanism for this is heterozygote advantage, where individuals with two different alleles have a selective advantage over individuals with just one allele. The polymorphism at the human ABO blood group locus has been explained in this way. By life cycle stage Another option is to classify selection by the life cycle stage at which it acts. Some biologists recognise just two types: viability (or survival) selection, which acts to increase an organism's probability of survival, and fecundity (or fertility or reproductive) selection, which acts to increase the rate of reproduction, given survival. Others split the life cycle into further components of selection. Thus viability and survival selection may be defined separately and respectively as acting to improve the probability of survival before and after reproductive age is reached, while fecundity selection may be split into additional sub-components including sexual selection, gametic selection, acting on gamete survival, and compatibility selection, acting on zygote formation. By unit of selection Selection can also be classified by the level or unit of selection. Individual selection acts on the individual, in the sense that adaptations are "for" the benefit of the individual, and result from selection among individuals. Gene selection acts directly at the level of the gene. In kin selection and intragenomic conflict, gene-level selection provides a more apt explanation of the underlying process. Group selection, if it occurs, acts on groups of organisms, on the assumption that groups replicate and mutate in an analogous way to genes and individuals. There is an ongoing debate over the degree to which group selection occurs in nature. By resource being competed for Finally, selection can be classified according to the resource being competed for. Sexual selection results from competition for mates. Sexual selection typically proceeds via fecundity selection, sometimes at the expense of viability. Ecological selection is natural selection via any means other than sexual selection, such as kin selection, competition, and infanticide. Following Darwin, natural selection is sometimes defined as ecological selection, in which case sexual selection is considered a separate mechanism. Sexual selection as first articulated by Darwin (using the example of the peacock's tail) refers specifically to competition for mates, which can be intrasexual, between individuals of the same sex, that is male–male competition, or intersexual, where one sex chooses mates, most often with males displaying and females choosing. However, in some species, mate choice is primarily by males, as in some fishes of the family Syngnathidae.
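As referenced in the trait-effect classification above, here is a small simulation contrasting directional and stabilizing selection on a normally distributed quantitative trait (the trait distribution, optimum at zero, and 50% survival rule are illustrative assumptions):

```python
# Illustrative sketch: one round of selection on a quantitative trait,
# comparing directional selection (keep the upper half) with stabilizing
# selection (keep the half nearest the optimum). Assumptions, not data.
import random
import statistics

random.seed(1)
population = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Directional: only trait values above the median survive to reproduce.
cutoff = statistics.median(population)
directional = [t for t in population if t > cutoff]

# Stabilizing: only the half of the population closest to the optimum (0).
by_distance = sorted(population, key=lambda t: abs(t))
stabilizing = by_distance[: len(population) // 2]

print(f"directional survivors: mean={statistics.mean(directional):+.2f}, "
      f"sd={statistics.stdev(directional):.2f}")
print(f"stabilizing survivors: mean={statistics.mean(stabilizing):+.2f}, "
      f"sd={statistics.stdev(stabilizing):.2f}")
# Directional selection shifts the mean; stabilizing selection leaves the
# mean near the optimum but reduces the variance.
```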
Phenotypic traits can be displayed in one sex and desired in the other sex, causing a positive feedback loop called a Fisherian runaway, for example, the extravagant plumage of some male birds such as the peacock. An alternative theory, also proposed by Ronald Fisher in 1930, is the sexy son hypothesis: mothers that choose promiscuous fathers tend to have promiscuous sons, and thereby larger numbers of grandchildren. Aggression between members of the same sex is sometimes associated with very distinctive features, such as the antlers of stags, which are used in combat with other stags. More generally, intrasexual selection is often associated with sexual dimorphism, including differences in body size between males and females of a species. Arms races Natural selection is seen in action in the development of antibiotic resistance in microorganisms. Since the discovery of penicillin in 1928, antibiotics have been used to fight bacterial diseases. The widespread misuse of antibiotics has selected for microbial resistance to antibiotics in clinical use, to the point that the methicillin-resistant Staphylococcus aureus (MRSA) has been described as a "superbug" because of the threat it poses to health and its relative invulnerability to existing drugs. Response strategies typically include the use of different, stronger antibiotics; however, new strains of MRSA have recently emerged that are resistant even to these drugs. This is an evolutionary arms race, in which bacteria develop strains less susceptible to antibiotics, while medical researchers attempt to develop new antibiotics that can kill them. A similar situation occurs with pesticide resistance in plants and insects. Arms races are not necessarily induced by humans; a well-documented example involves the spread of a gene in the butterfly Hypolimnas bolina suppressing male-killing activity by Wolbachia bacteria parasites on the island of Samoa, where the spread of the gene is known to have occurred over a period of just five years. Evolution by means of natural selection A prerequisite for natural selection to result in adaptive evolution, novel traits and speciation is the presence of heritable genetic variation that results in fitness differences. Genetic variation is the result of mutations, genetic recombinations and alterations in the karyotype (the number, shape, size and internal arrangement of the chromosomes). Any of these changes might have an effect that is highly advantageous or highly disadvantageous, but large effects are rare. In the past, most changes in the genetic material were considered neutral or close to neutral because they occurred in noncoding DNA or resulted in a synonymous substitution. However, many mutations in non-coding DNA have deleterious effects. Although both mutation rates and average fitness effects of mutations are dependent on the organism, a majority of mutations in humans are slightly deleterious. Some mutations occur in "toolkit" or regulatory genes. Changes in these often have large effects on the phenotype of the individual because they regulate the function of many other genes. Most, but not all, mutations in regulatory genes result in non-viable embryos. Some nonlethal regulatory mutations occur in HOX genes in humans, which can result in a cervical rib or polydactyly, an increase in the number of fingers or toes. When such mutations result in a higher fitness, natural selection favours these phenotypes and the novel trait spreads in the population.
Established traits are not immutable; traits that have high fitness in one environmental context may be much less fit if environmental conditions change. In the absence of natural selection to preserve such a trait, it becomes more variable and deteriorates over time, possibly resulting in a vestigial manifestation of the trait, also called evolutionary baggage. In many circumstances, the apparently vestigial structure may retain a limited functionality, or may be co-opted for other advantageous traits in a phenomenon known as preadaptation. A famous example of a vestigial structure, the eye of the blind mole-rat, is believed to retain function in photoperiod perception. Speciation Speciation requires a degree of reproductive isolation, that is, a reduction in gene flow. However, it is intrinsic to the concept of a species that hybrids are selected against, opposing the evolution of reproductive isolation, a problem that was recognised by Darwin. The problem does not occur in allopatric speciation with geographically separated populations, which can diverge with different sets of mutations. E. B. Poulton realized in 1903 that reproductive isolation could evolve through divergence, if each lineage acquired a different, incompatible allele of the same gene. Selection against the heterozygote would then directly create reproductive isolation, leading to the Bateson–Dobzhansky–Muller model, further elaborated by H. Allen Orr and Sergey Gavrilets. With reinforcement, however, natural selection can favour an increase in pre-zygotic isolation, influencing the process of speciation directly. Genetic basis Genotype and phenotype Natural selection acts on an organism's phenotype, or physical characteristics. Phenotype is determined by an organism's genetic make-up (genotype) and the environment in which the organism lives. When different organisms in a population possess different versions of a gene for a certain trait, each of these versions is known as an allele. It is this genetic variation that underlies differences in phenotype. An example is the ABO blood type antigens in humans, where three alleles govern the phenotype. Some traits are governed by only a single gene, but most traits are influenced by the interactions of many genes. A variation in one of the many genes that contributes to a trait may have only a small effect on the phenotype; together, these genes can produce a continuum of possible phenotypic values. Directionality of selection When some component of a trait is heritable, selection alters the frequencies of the different alleles, or variants of the gene that produces the variants of the trait. Selection can be divided into three classes, on the basis of its effect on allele frequencies: directional, stabilizing, and disruptive selection. Directional selection occurs when an allele has a greater fitness than others, so that it increases in frequency, gaining an increasing share in the population; a worked numerical sketch follows below. This process can continue until the allele is fixed and the entire population shares the fitter phenotype. Far more common is stabilizing (purifying) selection, which lowers the frequency of alleles that have a deleterious effect on the phenotype, that is, produce organisms of lower fitness. This process can continue until the allele is eliminated from the population. Stabilizing selection conserves functional genetic features, such as protein-coding genes or regulatory sequences, over time by selective pressure against deleterious variants.
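The worked sketch of the directional case mentioned above uses the standard one-locus selection recurrence p' = p(p·wAA + q·wAa)/w̄; the fitness values below are invented for illustration.

```python
# Sketch of directional selection at a single diploid locus.
# Fitness values are arbitrary illustrations, not measured quantities.

def next_allele_freq(p, w_AA, w_Aa, w_aa):
    """Standard one-locus selection recurrence for allele A."""
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean population fitness
    return p * (p*w_AA + q*w_Aa) / w_bar       # frequency of A after selection

p = 0.01  # a rare beneficial allele
for generation in range(500):
    p = next_allele_freq(p, w_AA=1.10, w_Aa=1.05, w_aa=1.00)
print(f"frequency of A after 500 generations: {p:.4f}")  # approaches 1 (fixation)
```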
Disruptive (or diversifying) selection is selection favouring extreme trait values over intermediate trait values. Disruptive selection may cause sympatric speciation through niche partitioning. Some forms of balancing selection do not result in fixation, but maintain an allele at intermediate frequencies in a population. This can occur in diploid species (with pairs of chromosomes) when heterozygous individuals (with just one copy of the allele) have a higher fitness than homozygous individuals (with two copies). This is called heterozygote advantage or over-dominance, of which the best-known example is the resistance to malaria in humans heterozygous for sickle-cell anaemia; a numerical sketch of this equilibrium appears below. Maintenance of allelic variation can also occur through disruptive or diversifying selection, which favours genotypes that depart from the average in either direction (that is, the opposite of over-dominance), and can result in a bimodal distribution of trait values. Finally, balancing selection can occur through frequency-dependent selection, where the fitness of one particular phenotype depends on the distribution of other phenotypes in the population. The principles of game theory have been applied to understand the fitness distributions in these situations, particularly in the study of kin selection and the evolution of reciprocal altruism. Selection, genetic variation, and drift A portion of all genetic variation is functionally neutral, producing no phenotypic effect or significant difference in fitness. Motoo Kimura's neutral theory of molecular evolution by genetic drift proposes that this variation accounts for a large fraction of observed genetic diversity. Neutral events can radically reduce genetic variation through population bottlenecks, which among other things can cause the founder effect in initially small new populations. When genetic variation does not result in differences in fitness, selection cannot directly affect the frequency of such variation. As a result, the genetic variation at those sites is higher than at sites where variation does influence fitness. However, after a period with no new mutations, the genetic variation at these sites is eliminated due to genetic drift. Natural selection reduces genetic variation by eliminating maladapted individuals, and consequently the mutations that caused the maladaptation. At the same time, new mutations occur, resulting in a mutation–selection balance. The exact outcome of the two processes depends both on the rate at which new mutations occur and on the strength of the natural selection, which is a function of how unfavourable the mutation proves to be. Genetic linkage occurs when the loci of two alleles are close on a chromosome. During the formation of gametes, recombination reshuffles the alleles. The chance that such a reshuffle occurs between two alleles is inversely related to the distance between them. Selective sweeps occur when an allele becomes more common in a population as a result of positive selection. As the prevalence of one allele increases, closely linked alleles can also become more common by "genetic hitchhiking", whether they are neutral or even slightly deleterious. A strong selective sweep results in a region of the genome where the positively selected haplotype (the allele and its neighbours) is in essence the only one that exists in the population. Selective sweeps can be detected by measuring linkage disequilibrium, or whether a given haplotype is overrepresented in the population.
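And the numerical sketch of heterozygote advantage promised above: with fitnesses wAA = 1 − s, wAa = 1, waa = 1 − t (the s and t values here are invented), the same one-locus recurrence settles at the classic balanced-polymorphism equilibrium p* = t/(s + t) rather than fixing either allele.

```python
# Sketch of balancing selection via heterozygote advantage (over-dominance).
# With fitnesses w_AA = 1-s, w_Aa = 1, w_aa = 1-t, the stable equilibrium
# is p* = t/(s+t); the illustrative s and t below are not empirical values.

def next_allele_freq(p, w_AA, w_Aa, w_aa):
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa
    return p * (p*w_AA + q*w_Aa) / w_bar

s, t = 0.20, 0.10          # selection against the two homozygotes
p = 0.95                   # start far from equilibrium
for _ in range(1000):
    p = next_allele_freq(p, w_AA=1-s, w_Aa=1.0, w_aa=1-t)
print(f"simulated equilibrium: {p:.3f}, predicted t/(s+t): {t/(s+t):.3f}")
```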
Since a selective sweep also results in selection of neighbouring alleles, the presence of a block of strong linkage disequilibrium might indicate a 'recent' selective sweep near the centre of the block. Background selection is the opposite of a selective sweep. If a specific site experiences strong and persistent purifying selection, linked variation tends to be weeded out along with it, producing a region in the genome of low overall variability. Because background selection is a result of deleterious new mutations, which can occur randomly in any haplotype, it does not produce clear blocks of linkage disequilibrium, although with low recombination it can still lead to slightly negative linkage disequilibrium overall. Impact Darwin's ideas, along with those of Adam Smith and Karl Marx, had a profound influence on 19th-century thought, including his radical claim that "elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner" evolved from the simplest forms of life by a few simple principles. This inspired some of Darwin's most ardent supporters, and provoked the strongest opposition. Natural selection had the power, according to Stephen Jay Gould, to "dethrone some of the deepest and most traditional comforts of Western thought", such as the belief that humans have a special place in the world. In the words of the philosopher Daniel Dennett, "Darwin's dangerous idea" of evolution by natural selection is a "universal acid", which cannot be kept restricted to any vessel or container, as it soon leaks out, working its way into ever-wider surroundings. Thus, in recent decades, the concept of natural selection has spread from evolutionary biology to other disciplines, including evolutionary computation, quantum Darwinism, evolutionary economics, evolutionary epistemology, evolutionary psychology, and cosmological natural selection. This unlimited applicability has been called universal Darwinism. Origin of life How life originated from inorganic matter remains an unresolved problem in biology. One prominent hypothesis is that life first appeared in the form of short self-replicating RNA polymers. On this view, life may have come into existence when RNA chains first experienced the basic conditions, as conceived by Charles Darwin, for natural selection to operate. These conditions are: heritability, variation of type, and competition for limited resources. The fitness of an early RNA replicator would likely have been a function of adaptive capacities that were intrinsic (i.e., determined by the nucleotide sequence) and the availability of resources. The three primary adaptive capacities could logically have been: (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type), (2) the capacity to avoid decay, and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations (including those configurations with ribozyme activity) of the RNA replicators that, in turn, would have been encoded in their individual nucleotide sequences. Cell and molecular biology In 1881, the embryologist Wilhelm Roux published Der Kampf der Theile im Organismus (The Struggle of Parts in the Organism), in which he suggested that the development of an organism results from a Darwinian competition between the parts of the embryo, occurring at all levels, from molecules to organs.
In recent years, a modern version of this theory has been proposed by Jean-Jacques Kupiec. According to this cellular Darwinism, random variation at the molecular level generates diversity in cell types, whereas cell interactions impose a characteristic order on the developing embryo. Social and psychological theory The social implications of the theory of evolution by natural selection also became the source of continuing controversy. Friedrich Engels, a German political philosopher and co-originator of the ideology of communism, wrote in 1872 that "Darwin did not know what a bitter satire he wrote on mankind, and especially on his countrymen, when he showed that free competition, the struggle for existence, which the economists celebrate as the highest historical achievement, is the normal state of the animal kingdom." The interpretation of natural selection by Herbert Spencer and the eugenics advocate Francis Galton as necessarily progressive, leading to supposed advances in intelligence and civilisation, became a justification for colonialism, eugenics, and social Darwinism. For example, in 1940, Konrad Lorenz, in writings that he subsequently disowned, used the theory as a justification for policies of the Nazi state. He wrote "... selection for toughness, heroism, and social utility ... must be accomplished by some human institution, if mankind, in default of selective factors, is not to be ruined by domestication-induced degeneracy. The racial idea as the basis of our state has already accomplished much in this respect." Others have developed ideas that human societies and culture evolve by mechanisms analogous to those that apply to evolution of species. More recently, work among anthropologists and psychologists has led to the development of sociobiology and later of evolutionary psychology, a field that attempts to explain features of human psychology in terms of adaptation to the ancestral environment. The most prominent example of evolutionary psychology, notably advanced in the early work of Noam Chomsky and later by Steven Pinker, is the hypothesis that the human brain has adapted to acquire the grammatical rules of natural language. Other aspects of human behaviour and social structures, from specific cultural norms such as incest avoidance to broader patterns such as gender roles, have been hypothesised to have similar origins as adaptations to the early environment in which modern humans evolved. By analogy to the action of natural selection on genes, the concept of memes, "units of cultural transmission" or culture's equivalents of genes undergoing selection and recombination, has arisen, first described in this form by Richard Dawkins in 1976 and subsequently expanded upon by philosophers such as Daniel Dennett as explanations for complex cultural activities, including human consciousness. Information and systems theory In 1922, Alfred J. Lotka proposed that natural selection might be understood as a physical principle that could be described in terms of the use of energy by a system, a concept later developed by Howard T. Odum as the maximum power principle in thermodynamics, whereby evolutionary systems with selective advantage maximise the rate of useful energy transformation. The principles of natural selection have inspired a variety of computational techniques, such as "soft" artificial life, that simulate selective processes and can be highly efficient in 'adapting' entities to an environment defined by a specified fitness function.
For example, a class of heuristic optimisation algorithms known as genetic algorithms, pioneered by John Henry Holland in the 1970s and expanded upon by David E. Goldberg, identify optimal solutions by simulated reproduction and mutation of a population of solutions defined by an initial probability distribution. Such algorithms are particularly useful when applied to problems whose energy landscape is very rough or has many local minima; a toy sketch of such an algorithm appears at the end of this entry. In fiction Darwinian evolution by natural selection is pervasive in literature, whether taken optimistically in terms of how humanity may evolve towards perfection, or pessimistically in terms of the dire consequences of the interaction of human nature and the struggle for survival. Among the major responses is Samuel Butler's pessimistic Erewhon (1872; "nowhere", written mostly backwards). In 1893, H. G. Wells imagined "The Man of the Year Million", transformed by natural selection into a being with a huge head and eyes and a shrunken body.
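The toy genetic-algorithm sketch referenced above: the bit-string encoding, fitness function, and parameter values are arbitrary illustrations rather than Holland's or Goldberg's specific formulations.

```python
# Toy genetic algorithm: evolve bit-strings towards an arbitrary target.
# All parameters are illustrative; real GAs tune these per problem.
import random

random.seed(0)
TARGET = [1] * 20                                  # hypothetical optimum

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    # Selection: the fitter half of the population reproduces.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(50)]

best = max(population, key=fitness)
print(fitness(best))  # typically reaches the maximum of 20 within 100 generations
```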
Biology and health sciences
Biology
null
21170
https://en.wikipedia.org/wiki/Numeral%20system
Numeral system
A numeral system is a writing system for expressing numbers; that is, a mathematical notation for representing numbers of a given set, using digits or other symbols in a consistent manner. The same sequence of symbols may represent different numbers in different numeral systems. For example, "11" represents the number eleven in the decimal or base-10 numeral system (today, the most common system globally), the number three in the binary or base-2 numeral system (used in modern computers), and the number two in the unary numeral system (used in tallying scores). The number the numeral represents is called its value. Not all number systems can represent the same set of numbers; for example, Roman numerals cannot represent the number zero. Ideally, a numeral system will: represent a useful set of numbers (e.g. all integers, or rational numbers); give every number represented a unique representation (or at least a standard representation); and reflect the algebraic and arithmetic structure of the numbers. For example, the usual decimal representation gives every nonzero natural number a unique representation as a finite sequence of digits, beginning with a non-zero digit. Numeral systems are sometimes called number systems, but that name is ambiguous, as it could refer to different systems of numbers, such as the system of real numbers, the system of complex numbers, various hypercomplex number systems, the system of p-adic numbers, etc. Such systems are, however, not the topic of this article. History The first true written positional numeral system is considered to be the Hindu–Arabic numeral system. This system was established by the 7th century in India, but was not yet in its modern form because the use of the digit zero had not yet been widely accepted. Instead of a zero, the digits were sometimes marked with dots to indicate their significance, or a space was used as a placeholder. The first widely acknowledged use of zero was in 876. The original numerals were very similar to the modern ones, even down to the glyphs used to represent digits. By the 13th century, Western Arabic numerals were accepted in European mathematical circles (Fibonacci used them in his Liber Abaci). They began to enter common use in the 15th century. By the end of the 20th century virtually all non-computerized calculations in the world were done with Arabic numerals, which have replaced native numeral systems in most cultures. Other historical numeral systems using digits The exact age of the Maya numerals is unclear, but it is possible that it is older than the Hindu–Arabic system. The system was vigesimal (base 20), so it has twenty digits. The Mayas used a shell symbol to represent zero. Numerals were written vertically, with the ones place at the bottom. The Mayas had no equivalent of the modern decimal separator, so their system could not represent fractions. The Thai numeral system is identical to the Hindu–Arabic numeral system except for the symbols used to represent digits. The use of these digits is less common in Thailand than it once was, but they are still used alongside Arabic numerals. The rod numerals, the written forms of counting rods once used by Chinese and Japanese mathematicians, are a decimal positional system used for performing decimal calculations. Rods were placed on a counting board and slid forwards or backwards to change the decimal place.
The Sūnzĭ Suànjīng, a mathematical treatise dated to between the 3rd and 5th centuries AD, provides detailed instructions for the system, which is thought to have been in use since at least the 4th century BC. Zero was not initially treated as a number, but as a vacant position. Later sources introduced conventions for the expression of zero and negative numbers. The use of a round symbol for zero is first attested in the Mathematical Treatise in Nine Sections of 1247 AD. The origin of this symbol is unknown; it may have been produced by modifying a square symbol. The Suzhou numerals, a descendant of rod numerals, are still used today for some commercial purposes. Main numeral systems The most commonly used system of numerals is decimal. Indian mathematicians are credited with developing the integer version, the Hindu–Arabic numeral system. Aryabhata of Kusumapura developed the place-value notation in the 5th century and a century later Brahmagupta introduced the symbol for zero. The system slowly spread to other surrounding regions like Arabia due to their commercial and military activities with India. Middle-Eastern mathematicians extended the system to include negative powers of 10 (fractions), as recorded in a treatise by Syrian mathematician Abu'l-Hasan al-Uqlidisi in 952–953, and the decimal point notation was introduced by Sind ibn Ali, who also wrote the earliest treatise on Arabic numerals. The Hindu–Arabic numeral system then spread to Europe through merchant trade, and the digits used in Europe are called Arabic numerals, as they were learned from the Arabs. The simplest numeral system is the unary numeral system, in which every natural number is represented by a corresponding number of symbols. If the symbol / is chosen, for example, then the number seven would be represented by ///////. Tally marks represent one such system still in common use. The unary system is only useful for small numbers, although it plays an important role in theoretical computer science. Elias gamma coding, which is commonly used in data compression, expresses arbitrary-sized numbers by using unary to indicate the length of a binary numeral. The unary notation can be abbreviated by introducing different symbols for certain new values. Very commonly, these values are powers of 10; so for instance, if / stands for one, − for ten and + for 100, then the number 304 can be compactly represented as +++ //// and the number 123 as + − − /// without any need for zero. This is called sign-value notation. The ancient Egyptian numeral system was of this type, and the Roman numeral system was a modification of this idea. More useful still are systems which employ special abbreviations for repetitions of symbols; for example, using the first nine letters of the alphabet for these abbreviations, with A standing for "one occurrence", B "two occurrences", and so on, one could then write C+ D/ for the number 304 (the number of these abbreviations is sometimes called the base of the system). This system is used when writing Chinese numerals and other East Asian numerals based on Chinese. The number system of the English language is of this type ("three hundred [and] four"), as are those of other spoken languages, regardless of what written systems they have adopted. However, many languages use mixtures of bases, and other features, for instance 79 in French is soixante dix-neuf (sixty-ten-nine) and in Welsh is pedwar ar bymtheg a thrigain (four on fifteen and sixty) or (somewhat archaic) pedwar ugain namyn un (four twenties minus one).
In English, one could say "four score less one", as in the famous Gettysburg Address representing "87 years ago" as "four score and seven years ago". More elegant is a positional system, also known as place-value notation. The positional systems are classified by their base or radix, which is the number of symbols called digits used by the system. In base 10, ten different digits 0, ..., 9 are used and the position of a digit is used to signify the power of ten that the digit is to be multiplied with, as in 304 = 3×100 + 0×10 + 4×1, or more precisely 3×10² + 0×10¹ + 4×10⁰. Zero, which is not needed in the other systems, is of crucial importance here, in order to be able to "skip" a power. The Hindu–Arabic numeral system, which originated in India and is now used throughout the world, is a positional base 10 system. Arithmetic is much easier in positional systems than in the earlier additive ones; furthermore, additive systems need a large number of different symbols for the different powers of 10; a positional system needs only ten different symbols (assuming that it uses base 10). The positional decimal system is presently universally used in human writing. The base 1000 is also used (albeit not universally), by grouping the digits and considering a sequence of three decimal digits as a single digit. This is the meaning of the common notation 1,000,234,567 used for very large numbers. In computers, the main numeral systems are based on the positional system in base 2 (binary numeral system), with two binary digits, 0 and 1. Positional systems obtained by grouping binary digits by three (octal numeral system) or four (hexadecimal numeral system) are commonly used. For very large integers, bases 2³² or 2⁶⁴ (grouping binary digits by 32 or 64, the length of the machine word) are used, as, for example, in GMP. In certain biological systems, the unary coding system is employed. Unary numerals are used in the neural circuits responsible for birdsong production. The nucleus in the brain of the songbirds that plays a part in both the learning and the production of bird song is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as space coding, which is an efficient strategy for biological circuits due to its inherent simplicity and robustness. The numerals used when writing numbers with digits or symbols can be divided into two types that might be called the arithmetic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and the geometric numerals (1, 10, 100, 1000, 10000 ...), respectively. The sign-value systems use only the geometric numerals and the positional systems use only the arithmetic numerals. A sign-value system does not need arithmetic numerals because they are made by repetition (except for the Ionic system), and a positional system does not need geometric numerals because they are made by position. However, the spoken language uses both arithmetic and geometric numerals. In some areas of computer science, a modified base k positional system is used, called bijective numeration, with digits 1, 2, ..., k (k ≥ 1), and zero being represented by an empty string. This establishes a bijection between the set of all such digit-strings and the set of non-negative integers, avoiding the non-uniqueness caused by leading zeros. Bijective base-k numeration is also called k-adic notation, not to be confused with p-adic numbers. Bijective base 1 is the same as unary.
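A short sketch of the bijective base-k scheme just described (the function names are my own; the digit k plays the role that zero plays in ordinary positional notation):

```python
# Bijective base-k numeration: digits run 1..k and zero is the empty string,
# so every non-negative integer has exactly one representation.

def to_bijective(n, k=10):
    """Encode n >= 0 as a bijective base-k digit list (empty list for 0)."""
    digits = []
    while n > 0:
        n, r = divmod(n - 1, k)   # shift by 1 so remainders land in 1..k
        digits.append(r + 1)
    return digits[::-1]

def from_bijective(digits, k=10):
    value = 0
    for d in digits:              # usual positional accumulation, digits 1..k
        value = value * k + d
    return value

for n in [0, 9, 10, 11, 100]:
    enc = to_bijective(n)
    assert from_bijective(enc) == n
    print(n, enc)
# In bijective base 10 the digit 10 (often written A) stands in for zero:
# 10 -> [10] and 100 -> [9, 10], so leading-zero ambiguity cannot arise.
```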
Positional systems in detail In a positional base b numeral system (with b a natural number greater than 1 known as the radix or base of the system), b basic symbols (or digits) corresponding to the first b natural numbers including zero are used. To generate the rest of the numerals, the position of the symbol in the figure is used. The symbol in the last position has its own value, and as it moves to the left its value is multiplied by b. For example, in the decimal system (base 10), the numeral 4327 means 4×10³ + 3×10² + 2×10¹ + 7×10⁰, noting that 10⁰ = 1. In general, if b is the base, one writes a number in the numeral system of base b by expressing it in the form aₙbⁿ + aₙ₋₁bⁿ⁻¹ + ... + a₁b¹ + a₀b⁰ and writing the enumerated digits in descending order. The digits are natural numbers between 0 and b − 1, inclusive. If a text (such as this one) discusses multiple bases, and if ambiguity exists, the base (itself represented in base 10) is added in subscript to the right of the number, like this: numberbase. Unless specified by context, numbers without subscript are considered to be decimal. By using a dot to divide the digits into two groups, one can also write fractions in the positional system. For example, the base 2 numeral 10.11 denotes 1×2¹ + 0×2⁰ + 1×2⁻¹ + 1×2⁻² = 2.75. In general, numbers in the base b system are of the form aₙaₙ₋₁...a₁a₀.c₁c₂c₃..., with value aₙbⁿ + ... + a₁b¹ + a₀b⁰ + c₁b⁻¹ + c₂b⁻² + c₃b⁻³ + ... The numbers bᵏ and b⁻ᵏ are the weights of the corresponding digits. The position k is the logarithm of the corresponding weight w, that is k = log_b(w). The highest used position is close to the order of magnitude of the number. The number of tally marks required in the unary numeral system for describing the weight would have been w. In the positional system, the number of digits required to describe it is only k + 1 = log_b(w) + 1, for k ≥ 0. For example, to describe the weight 1000 then four digits are needed because log₁₀(1000) + 1 = 3 + 1. The number of digits required to describe the position is log_b(k) + 1 = log_b(log_b(w)) + 1 (in positions 1, 10, 100, ... only for simplicity in the decimal example). A number has a terminating or repeating expansion if and only if it is rational; this does not depend on the base. A number that terminates in one base may repeat in another (thus 1/10 = 0.1₁₀ = 0.000110011001...₂, with "0011" repeating). An irrational number stays aperiodic (with an infinite number of non-repeating digits) in all integral bases. Thus, for example in base 2, π can be written as the aperiodic 11.001001000011111...₂. Putting overscores, n̄, or dots, ṅ, above the common digits is a convention used to represent repeating rational expansions. Thus: 14/11 = 1.272727272727... = 1.2̅7̅, or 321.3217878787878... = 321.3217̅8̅. If b = p is a prime number, one can define base-p numerals whose expansion to the left never stops; these are called the p-adic numbers. It is also possible to define a variation of base b in which digits may be positive or negative; this is called a signed-digit representation. Generalized variable-length integers More general is using a mixed radix notation (here written little-endian), like a₀a₁a₂ for a₀ + a₁b₁ + a₂b₁b₂, etc. This is used in Punycode, one aspect of which is the representation of a sequence of non-negative integers of arbitrary size in the form of a sequence without delimiters, of "digits" from a collection of 36: a–z and 0–9, representing 0–25 and 26–35 respectively. There are also so-called threshold values (tₙ) which are fixed for every position in the number. A digit (in a given position in the number) that is lower than its corresponding threshold value means that it is the most-significant digit, hence in the string this is the end of the number, and the next symbol (if present) is the least-significant digit of the next number.
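The positional principle above reduces to two small routines, sketched here for integers (the helper names are my own): repeated division by the base peels off digits, and Horner's rule re-applies the weights bᵏ.

```python
# Positional notation: digits d_k with weights b**k. Repeated division by
# the base b peels off digits from least to most significant.

def to_digits(n, b):
    """Base-b digit list of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1]

def from_digits(digits, b):
    """Evaluate sum of digit * b**position via Horner's rule."""
    value = 0
    for d in digits:
        value = value * b + d
    return value

print(to_digits(4327, 10))        # [4, 3, 2, 7]
print(to_digits(4327, 2))         # the binary digits of 4327
assert from_digits(to_digits(4327, 16), 16) == 4327
```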
For example, if the threshold value for the first digit is b (i.e. 1) then a (i.e. 0) marks the end of the number (it has just one digit), so in numbers of more than one digit, the first-digit range is only b–9 (i.e. 1–35), therefore the weight b₁ is 35 instead of 36. More generally, if tₙ is the threshold for the n-th digit, it is easy to show that bₙ₊₁ = bₙ(36 − tₙ). Suppose the threshold values for the second and third digits are c (i.e. 2); then the second-digit range is a–b (i.e. 0–1) with the second digit being most significant, while the range is c–9 (i.e. 2–35) in the presence of a third digit. Generally, for any n, the weight of the (n + 1)-th digit is the weight of the previous one times (36 − threshold of the n-th digit). So the weight of the second symbol is 36 − t₁ = 35, and the weight of the third symbol is 35 × (36 − t₂) = 35 × 34 = 1190. So we have the following sequence of the numbers with at most 3 digits: a (0), ba (1), ca (2), ..., 9a (35), bb (36), cb (37), ..., 9b (70), bca (71), ..., 99a (1260), bcb (1261), ..., 99b (2450). Unlike a regular n-based numeral system, there are numbers like 9b (= 70) where the symbols 9 and b each contribute 35 to the value; yet the representation is unique because ac and aca are not allowed – the first a would terminate each of these numbers. The flexibility in choosing threshold values allows optimization of the number of digits depending on the frequency of occurrence of numbers of various sizes. The case with all threshold values equal to 1 corresponds to bijective numeration, where the zeros correspond to separators of numbers with digits which are non-zero.
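A sketch of decoding a single number under this threshold scheme, using the a–9 symbol values (0–35) from the passage (the function and its structure are my own illustration, loosely modelled on the Punycode-style procedure rather than any specific implementation):

```python
# Decode one variable-length integer written with per-position thresholds,
# as in the a..9 (0..35) example above: a digit below its threshold is the
# most-significant digit and ends the number.

SYMBOLS = "abcdefghijklmnopqrstuvwxyz0123456789"   # values 0..35

def decode(text, thresholds, base=36):
    """Return (value, remaining_text) for the leading number in text."""
    value, weight = 0, 1
    for position, ch in enumerate(text):
        digit = SYMBOLS.index(ch)
        value += digit * weight
        t = thresholds[position] if position < len(thresholds) else thresholds[-1]
        if digit < t:                      # below threshold: number ends here
            return value, text[position + 1:]
        weight *= base - t                 # weight of the next position
    raise ValueError("unterminated number")

# Thresholds from the passage: 1 for the first digit, 2 afterwards.
for s in ["a", "ba", "9a", "bb", "bca", "99a", "bcb", "99b"]:
    print(s, decode(s, thresholds=[1, 2])[0])
# prints 0, 1, 35, 36, 71, 1260, 1261, 2450, matching the sequence above.
```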
Mathematics
Basics
null
21175
https://en.wikipedia.org/wiki/Nitrogen
Nitrogen
Nitrogen is a chemical element; it has symbol N and atomic number 7. Nitrogen is a nonmetal and the lightest member of group 15 of the periodic table, often called the pnictogens. It is a common element in the universe, estimated at seventh in total abundance in the Milky Way and the Solar System. At standard temperature and pressure, two atoms of the element bond to form N2, a colourless and odourless diatomic gas. N2 forms about 78% of Earth's atmosphere, making it the most abundant chemical species in air. Because of the volatility of nitrogen compounds, nitrogen is relatively rare in the solid parts of the Earth. It was first discovered and isolated by Scottish physician Daniel Rutherford in 1772 and independently by Carl Wilhelm Scheele and Henry Cavendish at about the same time. The name was suggested by French chemist Jean-Antoine-Claude Chaptal in 1790 when it was found that nitrogen was present in nitric acid and nitrates. Antoine Lavoisier suggested instead the name azote, from the Greek for "no life", as it is an asphyxiant gas; this name is used in a number of languages, and appears in the English names of some nitrogen compounds such as hydrazine, azides and azo compounds. Elemental nitrogen is usually produced from air by pressure swing adsorption technology. About 2/3 of commercially produced elemental nitrogen is used as an inert (oxygen-free) gas for commercial uses such as food packaging, and much of the rest is used as liquid nitrogen in cryogenic applications. Many industrially important compounds, such as ammonia, nitric acid, organic nitrates (propellants and explosives), and cyanides, contain nitrogen. The extremely strong triple bond in elemental nitrogen (N≡N), the second strongest bond in any diatomic molecule after carbon monoxide (CO), dominates nitrogen chemistry. This causes difficulty for both organisms and industry in converting N2 into useful compounds, but at the same time it means that burning, exploding, or decomposing nitrogen compounds to form nitrogen gas releases large amounts of often useful energy. Synthetically produced ammonia and nitrates are key industrial fertilisers, and fertiliser nitrates are key pollutants in the eutrophication of water systems. Apart from its use in fertilisers and energy stores, nitrogen is a constituent of organic compounds as diverse as aramids used in high-strength fabric and cyanoacrylate used in superglue. Nitrogen occurs in all organisms, primarily in amino acids (and thus proteins), in the nucleic acids (DNA and RNA) and in the energy transfer molecule adenosine triphosphate. The human body contains about 3% nitrogen by mass, the fourth most abundant element in the body after oxygen, carbon, and hydrogen. The nitrogen cycle describes the movement of the element from the air, into the biosphere and organic compounds, then back into the atmosphere. Nitrogen is a constituent of every major pharmacological drug class, including antibiotics. Many drugs are mimics or prodrugs of natural nitrogen-containing signal molecules: for example, the organic nitrates nitroglycerin and nitroprusside control blood pressure by metabolising into nitric oxide. Many notable nitrogen-containing drugs, such as the natural caffeine and morphine or the synthetic amphetamines, act on receptors of animal neurotransmitters. History Nitrogen compounds have a very long history, ammonium chloride having been known to Herodotus. They were well known by the Middle Ages.
Alchemists knew nitric acid as aqua fortis (strong water), as well as other nitrogen compounds such as ammonium salts and nitrate salts. The mixture of nitric and hydrochloric acids was known as aqua regia (royal water), celebrated for its ability to dissolve gold, the king of metals. The discovery of nitrogen is attributed to the Scottish physician Daniel Rutherford in 1772, who called it noxious air. Though he did not recognise it as an entirely different chemical substance, he clearly distinguished it from Joseph Black's "fixed air", or carbon dioxide. The fact that there was a component of air that does not support combustion was clear to Rutherford, although he was not aware that it was an element. Nitrogen was also studied at about the same time by Carl Wilhelm Scheele, Henry Cavendish, and Joseph Priestley, who referred to it as burnt air or phlogisticated air. French chemist Antoine Lavoisier referred to nitrogen gas as "mephitic air" or azote, from the Greek word ἀζωτικός (azotikos), "no life", because it is asphyxiant. In an atmosphere of pure nitrogen, animals died and flames were extinguished. Though Lavoisier's name was not accepted in English since it was pointed out that all gases but oxygen are either asphyxiant or outright toxic, it is used in many languages (French, Italian, Portuguese, Polish, Russian, Albanian, Turkish, etc.; the German Stickstoff similarly refers to the same characteristic, viz. ersticken "to choke or suffocate") and still remains in English in the common names of many nitrogen compounds, such as hydrazine and compounds of the azide ion. Finally, it led to the name "pnictogens" for the group headed by nitrogen, from the Greek πνίγειν "to choke". The English word nitrogen (1794) entered the language from the French nitrogène, coined in 1790 by French chemist Jean-Antoine Chaptal (1756–1832), from the French nitre (potassium nitrate, also called saltpetre) and the French suffix -gène, "producing", from the Greek -γενής (-genes, "begotten"). Chaptal's meaning was that nitrogen is the essential part of nitric acid, which in turn was produced from nitre. In earlier times, nitre had been confused with Egyptian "natron" (sodium carbonate) – called νίτρον (nitron) in Greek – which, despite the name, contained no nitrate. The earliest military, industrial, and agricultural applications of nitrogen compounds used saltpetre (sodium nitrate or potassium nitrate), most notably in gunpowder, and later as fertiliser. In 1910, Lord Rayleigh discovered that an electrical discharge in nitrogen gas produced "active nitrogen", a monatomic allotrope of nitrogen. The "whirling cloud of brilliant yellow light" produced by his apparatus reacted with mercury to produce explosive mercury nitride. For a long time, sources of nitrogen compounds were limited. Natural sources originated either from biology or deposits of nitrates produced by atmospheric reactions. Nitrogen fixation by industrial processes like the Frank–Caro process (1895–1899) and Haber–Bosch process (1908–1913) eased this shortage of nitrogen compounds, to the extent that half of global food production now relies on synthetic nitrogen fertilisers. At the same time, use of the Ostwald process (1902) to produce nitrates from industrial nitrogen fixation allowed the large-scale industrial production of nitrates as feedstock in the manufacture of explosives in the World Wars of the 20th century. Properties Atomic A nitrogen atom has seven electrons. In the ground state, they are arranged in the electron configuration 1s² 2s² 2p³, with one electron in each of the three 2p orbitals.
It, therefore, has five valence electrons in the 2s and 2p orbitals, three of which (the p-electrons) are unpaired. It has one of the highest electronegativities among the elements (3.04 on the Pauling scale), exceeded only by chlorine (3.16), oxygen (3.44), and fluorine (3.98). (The light noble gases, helium, neon, and argon, would presumably also be more electronegative, and in fact are on the Allen scale.) Following periodic trends, its single-bond covalent radius of 71 pm is smaller than those of boron (84 pm) and carbon (76 pm), while it is larger than those of oxygen (66 pm) and fluorine (57 pm). The nitride anion, N3−, is much larger at 146 pm, similar to that of the oxide (O2−: 140 pm) and fluoride (F−: 133 pm) anions. The first three ionisation energies of nitrogen are 1.402, 2.856, and 4.577 MJ·mol−1, and the sum of the fourth and fifth is about 16.9 MJ·mol−1. Due to these very high figures, nitrogen has no simple cationic chemistry. The lack of radial nodes in the 2p subshell is directly responsible for many of the anomalous properties of the first row of the p-block, especially in nitrogen, oxygen, and fluorine. The 2p subshell is very small and has a very similar radius to the 2s shell, facilitating orbital hybridisation. It also results in very large electrostatic forces of attraction between the nucleus and the valence electrons in the 2s and 2p shells, resulting in very high electronegativities. Hypervalency is almost unknown in the 2p elements for the same reason, because the high electronegativity makes it difficult for a small nitrogen atom to be a central atom in an electron-rich three-center four-electron bond since it would tend to attract the electrons strongly to itself. Thus, despite nitrogen's position at the head of group 15 in the periodic table, its chemistry shows huge differences from that of its heavier congeners phosphorus, arsenic, antimony, and bismuth. Nitrogen may be usefully compared to its horizontal neighbours carbon and oxygen as well as its vertical neighbours in the pnictogen column, phosphorus, arsenic, antimony, and bismuth. Although each period 2 element from lithium to oxygen shows some similarities to the period 3 element in the next group (from magnesium to chlorine; these are known as diagonal relationships), the degree of similarity drops off abruptly past the boron–silicon pair. The similarities of nitrogen to sulfur are mostly limited to sulfur nitride ring compounds when both elements are the only ones present. Nitrogen does not share the proclivity of carbon for catenation. Like carbon, nitrogen tends to form ionic or metallic compounds with metals. Nitrogen forms an extensive series of nitrides with carbon, including those with chain-, graphitic-, and fullerenic-like structures. It resembles oxygen with its high electronegativity and concomitant capability for hydrogen bonding and the ability to form coordination complexes by donating its lone pairs of electrons. There are some parallels between the chemistry of ammonia NH3 and water H2O. For example, both compounds can be protonated to give NH4+ and H3O+, or deprotonated to give NH2− and OH−, and all of these species can be isolated in solid compounds. Nitrogen shares with both its horizontal neighbours a preference for forming multiple bonds, typically with carbon, oxygen, or other nitrogen atoms, through pπ–pπ interactions.
Thus, for example, nitrogen occurs as diatomic molecules and therefore has very much lower melting (−210 °C) and boiling points (−196 °C) than the rest of its group, as the N2 molecules are only held together by weak van der Waals interactions and there are very few electrons available to create significant instantaneous dipoles. This is not possible for its vertical neighbours; thus, the nitrogen oxides, nitrites, nitrates, nitro-, nitroso-, azo-, and diazo-compounds, azides, cyanates, thiocyanates, and imino-derivatives find no echo with phosphorus, arsenic, antimony, or bismuth. By the same token, however, the complexity of the phosphorus oxoacids finds no echo with nitrogen. Setting aside their differences, nitrogen and phosphorus form an extensive series of compounds with one another; these have chain, ring, and cage structures. Isotopes Nitrogen has two stable isotopes: 14N and 15N. The first is much more common, making up 99.634% of natural nitrogen, and the second (which is slightly heavier) makes up the remaining 0.366%. This leads to an atomic weight of around 14.007 u. Both of these stable isotopes are produced in the CNO cycle in stars, but 14N is more common as its proton capture is the rate-limiting step. 14N is one of the five stable odd–odd nuclides (a nuclide having an odd number of protons and neutrons); the other four are 2H, 6Li, 10B, and 180mTa. The relative abundance of 14N and 15N is practically constant in the atmosphere but can vary elsewhere, due to natural isotopic fractionation from biological redox reactions and the evaporation of natural ammonia or nitric acid. Biologically mediated reactions (e.g., assimilation, nitrification, and denitrification) strongly control nitrogen dynamics in the soil. These reactions typically result in 15N enrichment of the substrate and depletion of the product. The heavy isotope 15N was first discovered by S. M. Naudé in 1929, and soon after heavy isotopes of the neighbouring elements oxygen and carbon were discovered. 15N has one of the lowest thermal neutron capture cross-sections of all isotopes. It is frequently used in nuclear magnetic resonance (NMR) spectroscopy to determine the structures of nitrogen-containing molecules, due to its fractional nuclear spin of one-half, which offers advantages for NMR such as narrower line width. 14N, though also theoretically usable, has an integer nuclear spin of one and thus has a quadrupole moment that leads to wider and less useful spectra. 15N NMR nevertheless has complications not encountered in the more common 1H and 13C NMR spectroscopy. The low natural abundance of 15N (0.36%) significantly reduces sensitivity, a problem which is only exacerbated by its low gyromagnetic ratio (only 10.14% that of 1H). As a result, the signal-to-noise ratio for 1H is about 300 times that for 15N at the same magnetic field strength. This may be somewhat alleviated by isotopic enrichment of 15N by chemical exchange or fractional distillation. 15N-enriched compounds have the advantage that under standard conditions, they do not undergo chemical exchange of their nitrogen atoms with atmospheric nitrogen, unlike compounds with labelled hydrogen, carbon, and oxygen isotopes that must be kept away from the atmosphere. The 15N:14N ratio is commonly used in stable isotope analysis in the fields of geochemistry, hydrology, paleoclimatology and paleoceanography, where it is called δ15N.
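As a brief aside not spelled out in the text, the δ15N notation is conventionally defined as the per-mil deviation of a sample's 15N/14N ratio from that of atmospheric N2, the standard; in LaTeX form:

    \delta^{15}\mathrm{N} = \left( \frac{(^{15}\mathrm{N}/^{14}\mathrm{N})_{\text{sample}}}{(^{15}\mathrm{N}/^{14}\mathrm{N})_{\text{air}}} - 1 \right) \times 1000

expressed in parts per thousand (‰), so a positive δ15N indicates enrichment in the heavy isotope relative to air.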
Of the thirteen other isotopes produced synthetically, ranging from 9N to 23N, 13N has a half-life of ten minutes and the remaining isotopes have half-lives of less than eight seconds. Given the half-life difference, 13N is the most important nitrogen radioisotope, being long-lived enough to use in positron emission tomography (PET), although its half-life is still short and thus it must be produced at the site of use, for example in a cyclotron via proton bombardment of 16O, producing 13N and an alpha particle. The radioisotope 16N is the dominant radionuclide in the coolant of pressurised water reactors or boiling water reactors during normal operation. It is produced from 16O (in water) via an (n,p) reaction, in which the 16O atom captures a neutron and expels a proton. It has a short half-life of about 7.1 s, but its decay back to 16O produces high-energy gamma radiation (5 to 7 MeV). Because of this, access to the primary coolant piping in a pressurised water reactor must be restricted during reactor power operation. It is a sensitive and immediate indicator of leaks from the primary coolant system to the secondary steam cycle and is the primary means of detection for such leaks. Allotropes Atomic nitrogen, also known as active nitrogen, is highly reactive, being a triradical with three unpaired electrons. Free nitrogen atoms easily react with most elements to form nitrides, and even when two free nitrogen atoms collide to produce an excited N2 molecule, they may release so much energy on collision with even such stable molecules as carbon dioxide and water as to cause homolytic fission into radicals such as CO and O or OH and H. Atomic nitrogen is prepared by passing an electric discharge through nitrogen gas at 0.1–2 mmHg, which produces atomic nitrogen along with a peach-yellow emission that fades slowly as an afterglow for several minutes even after the discharge terminates. Given the great reactivity of atomic nitrogen, elemental nitrogen usually occurs as molecular N2, dinitrogen. This molecule is a colourless, odourless, and tasteless diamagnetic gas at standard conditions: it melts at −210 °C and boils at −196 °C. Dinitrogen is mostly unreactive at room temperature, but it will nevertheless react with lithium metal and some transition metal complexes. This is due to its bonding, which is unique among the diatomic elements at standard conditions in that it has an N≡N triple bond. Triple bonds have short bond lengths (in this case, 109.76 pm) and high dissociation energies (in this case, 945.41 kJ/mol), and are thus very strong, explaining dinitrogen's low level of chemical reactivity. Other nitrogen oligomers and polymers may be possible. If they could be synthesised, they might have potential applications as materials with a very high energy density, which could be used as powerful propellants or explosives. Under extremely high pressures (1.1 million atm) and high temperatures (2000 K), as produced in a diamond anvil cell, nitrogen polymerises into the single-bonded cubic gauche crystal structure. This structure is similar to that of diamond, and both have extremely strong covalent bonds, resulting in its nickname "nitrogen diamond". At atmospheric pressure, molecular nitrogen condenses (liquefies) at 77 K (−195.79 °C) and freezes at 63 K (−210.01 °C) into the beta hexagonal close-packed crystal allotropic form. Below 35.4 K (−237.6 °C) nitrogen assumes the cubic crystal allotropic form (called the alpha phase).
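Returning to the 13N radioisotope discussed earlier in this section, the following Python sketch, an illustration rather than sourced data beyond the roughly ten-minute half-life quoted above, shows why on-site production is essential for PET work:

    import math

    HALF_LIFE_MIN = 10.0          # approximate 13N half-life quoted in the text

    def fraction_remaining(minutes):
        # Exponential decay: N(t)/N0 = 2^(-t / half-life)
        return math.exp(-math.log(2) * minutes / HALF_LIFE_MIN)

    print(f"{fraction_remaining(60):.3%}")  # after one hour: about 1.6% remains

Six half-lives elapse in a single hour, which is why the isotope is generated in a cyclotron at the point of use rather than shipped.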
Liquid nitrogen, a colourless fluid resembling water in appearance, but with 80.8% of the density (the density of liquid nitrogen at its boiling point is 0.808 g/mL), is a common cryogen. Solid nitrogen has many crystalline modifications. It forms a significant dynamic surface coverage on Pluto and on outer moons of the Solar System such as Triton. Even at the low temperatures of solid nitrogen it is fairly volatile and can sublime to form an atmosphere, or condense back into nitrogen frost. Solid nitrogen is mechanically very weak and flows in the form of glaciers, and on Triton geysers of nitrogen gas come from the polar ice cap region. Chemistry and compounds Dinitrogen complexes The first example of a dinitrogen complex to be discovered was [Ru(NH3)5(N2)]2+, and soon many other such complexes were discovered. These complexes, in which a nitrogen molecule donates at least one lone pair of electrons to a central metal cation, illustrate how N2 might bind to the metal(s) in nitrogenase and the catalyst for the Haber process: these processes involving dinitrogen activation are vitally important in biology and in the production of fertilisers. Dinitrogen is able to coordinate to metals in five different ways. The better-characterised ways are the end-on M←N≡N (η1) and M←N≡N→M (μ, bis-η1), in which the lone pairs on the nitrogen atoms are donated to the metal cation. The less well-characterised ways involve dinitrogen donating electron pairs from the triple bond, either as a bridging ligand to two metal cations (μ, bis-η2) or to just one (η2). The fifth and unique method involves triple-coordination as a bridging ligand, donating all three electron pairs from the triple bond (μ3-N2). A few complexes feature multiple N2 ligands and some feature N2 bonded in multiple ways. Since N2 is isoelectronic with carbon monoxide (CO) and acetylene (C2H2), the bonding in dinitrogen complexes is closely allied to that in carbonyl compounds, although N2 is a weaker σ-donor and π-acceptor than CO. Theoretical studies show that σ donation is a more important factor allowing the formation of the M–N bond than π back-donation, which mostly only weakens the N–N bond, and end-on (η1) donation is more readily accomplished than side-on (η2) donation. Today, dinitrogen complexes are known for almost all the transition metals, accounting for several hundred compounds. They are normally prepared by three methods:
Replacing labile ligands such as H2O, H−, or CO directly by nitrogen: these are often reversible reactions that proceed under mild conditions.
Reducing metal complexes in the presence of a suitable co-ligand in excess under nitrogen gas. A common choice includes replacing chloride ligands with dimethylphenylphosphine (PMe2Ph) to make up for the smaller number of nitrogen ligands attached to the original chlorine ligands.
Converting a ligand with N–N bonds, such as hydrazine or azide, directly into a dinitrogen ligand.
Occasionally the N≡N bond may be formed directly within a metal complex, for example by directly reacting coordinated ammonia (NH3) with nitrous acid (HNO2), but this is not generally applicable. Most dinitrogen complexes have colours within the range white-yellow-orange-red-brown; a few exceptions are known, such as the blue [{Ti(η5-C5H5)2}2-(N2)].
Nitrides, azides, and nitrido complexes Nitrogen bonds to almost all the elements in the periodic table except the first two noble gases, helium and neon, and some of the very short-lived elements after bismuth, creating an immense variety of binary compounds with varying properties and applications. Many binary compounds are known: with the exception of the nitrogen hydrides, oxides, and fluorides, these are typically called nitrides. Many stoichiometric phases are usually present for most elements (e.g. MnN, Mn6N5, Mn3N2, Mn2N, Mn4N, and MnxN for 9.2 < x < 25.3). They may be classified as "salt-like" (mostly ionic), covalent, "diamond-like", and metallic (or interstitial), although this classification has limitations generally stemming from the continuity of bonding types instead of the discrete and separate types that it implies. They are normally prepared by directly reacting a metal with nitrogen or ammonia (sometimes after heating), or by thermal decomposition of metal amides:
3 Ca + N2 → Ca3N2
3 Mg + 2 NH3 → Mg3N2 + 3 H2 (at 900 °C)
3 Zn(NH2)2 → Zn3N2 + 4 NH3
Many variants on these processes are possible. The most ionic of these nitrides are those of the alkali metals and alkaline earth metals, Li3N (Na, K, Rb, and Cs do not form stable nitrides for steric reasons) and M3N2 (M = Be, Mg, Ca, Sr, Ba). These can formally be thought of as salts of the N3− anion, although charge separation is not actually complete even for these highly electropositive elements. However, the alkali metal azides NaN3 and KN3, featuring the linear azide anion N3−, are well-known, as are Sr(N3)2 and Ba(N3)2. Azides of the B-subgroup metals (those in groups 11 through 16) are much less ionic, have more complicated structures, and detonate readily when shocked. Many covalent binary nitrides are known. Examples include cyanogen ((CN)2), triphosphorus pentanitride (P3N5), disulfur dinitride (S2N2), and tetrasulfur tetranitride (S4N4). The essentially covalent silicon nitride (Si3N4) and germanium nitride (Ge3N4) are also known: silicon nitride, in particular, would make a promising ceramic if not for the difficulty of working with and sintering it. In particular, the group 13 nitrides, most of which are promising semiconductors, are isoelectronic with graphite, diamond, and silicon carbide and have similar structures: their bonding changes from covalent to partially ionic to metallic as the group is descended. In particular, since the B–N unit is isoelectronic to C–C, and carbon is essentially intermediate in size between boron and nitrogen, much of organic chemistry finds an echo in boron–nitrogen chemistry, such as in borazine ("inorganic benzene"). Nevertheless, the analogy is not exact due to the ease of nucleophilic attack at boron due to its deficiency in electrons, which is not possible in a wholly carbon-containing ring. The largest category of nitrides comprises the interstitial nitrides of formulae MN, M2N, and M4N (although variable composition is perfectly possible), where the small nitrogen atoms are positioned in the gaps in a metallic cubic or hexagonal close-packed lattice. They are opaque, very hard, and chemically inert, melting only at very high temperatures (generally over 2500 °C). They have a metallic lustre and conduct electricity as do metals. They hydrolyse only very slowly to give ammonia or nitrogen. The nitride anion (N3−) is the strongest π donor known among ligands (the second-strongest is O2−).
Nitrido complexes are generally made by the thermal decomposition of azides or by deprotonating ammonia, and they usually involve a terminal {≡N}3− group. The linear azide anion (N3−), being isoelectronic with nitrous oxide, carbon dioxide, and cyanate, forms many coordination complexes. Further catenation is rare, although an anion isoelectronic with carbonate and nitrate is known. Hydrides Industrially, ammonia (NH3) is the most important compound of nitrogen and is prepared in larger amounts than any other compound because it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to food and fertilisers. It is a colourless alkaline gas with a characteristic pungent smell. The presence of hydrogen bonding has very significant effects on ammonia, conferring on it its high melting (−78 °C) and boiling (−33 °C) points. As a liquid, it is a very good solvent with a high heat of vaporisation (enabling it to be used in vacuum flasks), that also has a low viscosity and electrical conductivity and high dielectric constant, and is less dense than water. However, the hydrogen bonding in NH3 is weaker than that in H2O due to the lower electronegativity of nitrogen compared to oxygen and the presence of only one lone pair in NH3 rather than two in H2O. It is a weak base in aqueous solution (pKb 4.74); its conjugate acid is ammonium, NH4+. It can also act as an extremely weak acid, losing a proton to produce the amide anion, NH2−. It thus undergoes self-dissociation, similar to water, to produce ammonium and amide. Ammonia burns in air or oxygen, though not readily, to produce nitrogen gas; it burns in fluorine with a greenish-yellow flame to give nitrogen trifluoride. Reactions with the other nonmetals are very complex and tend to lead to a mixture of products. Ammonia reacts on heating with metals to give nitrides. Many other binary nitrogen hydrides are known, but the most important are hydrazine (N2H4) and hydrogen azide (HN3). Although it is not a nitrogen hydride, hydroxylamine (NH2OH) is similar in properties and structure to ammonia and hydrazine as well. Hydrazine is a fuming, colourless liquid that smells similar to ammonia. Its physical properties are very similar to those of water (melting point 2.0 °C, boiling point 113.5 °C, density 1.00 g/cm3). Despite it being an endothermic compound, it is kinetically stable. It burns quickly and completely in air very exothermically to give nitrogen and water vapour. It is a very useful and versatile reducing agent and is a weaker base than ammonia. It is also commonly used as a rocket fuel. Hydrazine is generally made by reaction of ammonia with alkaline sodium hypochlorite in the presence of gelatin or glue:
NH3 + OCl− → NH2Cl + OH−
NH2Cl + NH3 → N2H5+ + Cl− (slow)
N2H5+ + OH− → N2H4 + H2O (fast)
(The attacks by hydroxide and ammonia may be reversed, thus passing through the intermediate NHCl− instead.) The reason for adding gelatin is that it removes metal ions such as Cu2+ that catalyse the destruction of hydrazine by reaction with monochloramine (NH2Cl) to produce ammonium chloride and nitrogen. Hydrogen azide (HN3) was first produced in 1890 by the oxidation of aqueous hydrazine by nitrous acid. It is very explosive and even dilute solutions can be dangerous. It has a disagreeable and irritating smell and is a potentially lethal (but not cumulative) poison. It may be considered the conjugate acid of the azide anion, and is similarly analogous to the hydrohalic acids.
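As a worked illustration of the pKb of 4.74 quoted above, the following Python sketch applies the standard weak-base approximation [OH−] ≈ √(Kb·C); the 0.1 M concentration and the 25 °C assumption (Kw = 10⁻¹⁴) are illustrative choices, not figures from the text:

    import math

    Kb = 10 ** -4.74               # from the pKb quoted above, ~1.8e-5

    def ammonia_pH(c_mol_per_L):
        oh = math.sqrt(Kb * c_mol_per_L)   # weak-base approximation, valid for Kb << C
        pOH = -math.log10(oh)
        return 14.0 - pOH                  # pH = 14 - pOH at 25 degrees C

    print(round(ammonia_pH(0.1), 2))       # ~11.13 for a 0.1 M solution

The result, a pH of about 11, matches ammonia's character as a weak base.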
Halides and oxohalides All four simple nitrogen trihalides are known. A few mixed halides and hydrohalides are known, but are mostly unstable; examples include NClF2, NCl2F, NBrF2, NF2H, NFH2, NCl2H, and NClH2. Nitrogen trifluoride (NF3, first prepared in 1928) is a colourless and odourless gas that is thermodynamically stable, and most readily produced by the electrolysis of molten ammonium fluoride dissolved in anhydrous hydrogen fluoride. Like carbon tetrafluoride, it is not at all reactive and is stable in water or dilute aqueous acids or alkalis. Only when heated does it act as a fluorinating agent, and it reacts with copper, arsenic, antimony, and bismuth on contact at high temperatures to give tetrafluorohydrazine (N2F4). The cations NF4+ and N2F3+ are also known (the latter from reacting tetrafluorohydrazine with strong fluoride acceptors such as arsenic pentafluoride), as is ONF3, which has aroused interest due to the short N–O distance implying partial double bonding and the highly polar and long N–F bond. Tetrafluorohydrazine, unlike hydrazine itself, can dissociate at room temperature and above to give the radical NF2•. Fluorine azide (FN3) is very explosive and thermally unstable. Dinitrogen difluoride (N2F2) exists as thermally interconvertible cis and trans isomers, and was first found as a product of the thermal decomposition of FN3. Nitrogen trichloride (NCl3) is a dense, volatile, and explosive liquid whose physical properties are similar to those of carbon tetrachloride, although one difference is that NCl3 is easily hydrolysed by water while CCl4 is not. It was first synthesised in 1811 by Pierre Louis Dulong, who lost three fingers and an eye to its explosive tendencies. As a dilute gas it is less dangerous and is thus used industrially to bleach and sterilise flour. Nitrogen tribromide (NBr3), first prepared in 1975, is a deep red, temperature-sensitive, volatile solid that is explosive even at −100 °C. Nitrogen triiodide (NI3) is still more unstable and was only prepared in 1990. Its adduct with ammonia, which was known earlier, is very shock-sensitive: it can be set off by the touch of a feather, shifting air currents, or even alpha particles. For this reason, small amounts of nitrogen triiodide are sometimes synthesised as a demonstration to high school chemistry students or as an act of "chemical magic". Chlorine azide (ClN3) and bromine azide (BrN3) are extremely sensitive and explosive. Two series of nitrogen oxohalides are known: the nitrosyl halides (XNO) and the nitryl halides (XNO2). The first are very reactive gases that can be made by directly halogenating nitric oxide. Nitrosyl fluoride (NOF) is colourless and a vigorous fluorinating agent. Nitrosyl chloride (NOCl) behaves in much the same way and has often been used as an ionising solvent. Nitrosyl bromide (NOBr) is red. The reactions of the nitryl halides are mostly similar: nitryl fluoride (FNO2) and nitryl chloride (ClNO2) are likewise reactive gases and vigorous halogenating agents. Oxides Nitrogen forms nine molecular oxides, some of which were the first gases to be identified: N2O (nitrous oxide), NO (nitric oxide), N2O3 (dinitrogen trioxide), NO2 (nitrogen dioxide), N2O4 (dinitrogen tetroxide), N2O5 (dinitrogen pentoxide), N4O (nitrosylazide), and N(NO2)3 (trinitramide). All are thermally unstable towards decomposition to their elements. One other possible oxide that has not yet been synthesised is oxatetrazole (N4O), an aromatic ring.
Nitrous oxide (N2O), better known as laughing gas, is made by thermal decomposition of molten ammonium nitrate at 250 °C. This is a redox reaction and thus nitric oxide and nitrogen are also produced as byproducts. It is mostly used as a propellant and aerating agent for sprayed canned whipped cream, and was formerly commonly used as an anaesthetic. Despite appearances, it cannot be considered to be the anhydride of hyponitrous acid (H2N2O2) because that acid is not produced by the dissolution of nitrous oxide in water. It is rather unreactive (not reacting with the halogens, the alkali metals, or ozone at room temperature, although reactivity increases upon heating) and has the unsymmetrical structure N–N–O (N≡N+–O− ↔ −N=N+=O): above 600 °C it dissociates by breaking the weaker N–O bond. Nitric oxide (NO) is the simplest stable molecule with an odd number of electrons. In mammals, including humans, it is an important cellular signalling molecule involved in many physiological and pathological processes. It is formed by catalytic oxidation of ammonia. It is a colourless paramagnetic gas that, being thermodynamically unstable, decomposes to nitrogen and oxygen gas at 1100–1200 °C. Its bonding is similar to that in nitrogen, but one extra electron is added to a π* antibonding orbital and thus the bond order has been reduced to approximately 2.5; hence dimerisation to O=N–N=O is unfavourable except below the boiling point (where the cis isomer is more stable) because it does not actually increase the total bond order and because the unpaired electron is delocalised across the NO molecule, granting it stability. There is also evidence for the asymmetric red dimer O=N–O=N when nitric oxide is condensed with polar molecules. It reacts with oxygen to give brown nitrogen dioxide and with halogens to give nitrosyl halides. It also reacts with transition metal compounds to give nitrosyl complexes, most of which are deeply coloured. Blue dinitrogen trioxide (N2O3) is only available as a solid because it rapidly dissociates above its melting point to give nitric oxide, nitrogen dioxide (NO2), and dinitrogen tetroxide (N2O4). The latter two compounds are somewhat difficult to study individually because of the equilibrium between them, although sometimes dinitrogen tetroxide can react by heterolytic fission to nitrosonium and nitrate in a medium with high dielectric constant. Nitrogen dioxide is an acrid, corrosive brown gas. Both compounds may be easily prepared by decomposing a dry metal nitrate. Both react with water to form nitric acid. Dinitrogen tetroxide is very useful for the preparation of anhydrous metal nitrates and nitrato complexes, and it became the storable oxidiser of choice for many rockets in both the United States and USSR by the late 1950s. This is because it is a hypergolic propellant in combination with a hydrazine-based rocket fuel and can be easily stored since it is liquid at room temperature. The thermally unstable and very reactive dinitrogen pentoxide (N2O5) is the anhydride of nitric acid, and can be made from it by dehydration with phosphorus pentoxide. It is of interest for the preparation of explosives. It is a deliquescent, colourless crystalline solid that is sensitive to light. In the solid state it is ionic with structure [NO2]+[NO3]−; as a gas and in solution it is molecular O2N–O–NO2. Hydration to nitric acid comes readily, as does analogous reaction with hydrogen peroxide giving peroxonitric acid (HOONO2). It is a violent oxidising agent.
Gaseous dinitrogen pentoxide decomposes as follows:
N2O5 ⇌ NO2 + NO3 → NO2 + O2 + NO
N2O5 + NO ⇌ 3 NO2
Oxoacids, oxoanions, and oxoacid salts Many nitrogen oxoacids are known, though most of them are unstable as pure compounds and are known only as aqueous solutions or as salts. Hyponitrous acid (H2N2O2) is a weak diprotic acid with the structure HON=NOH (pKa1 6.9, pKa2 11.6). Acidic solutions are quite stable but above pH 4 base-catalysed decomposition occurs via [HONNO]− to nitrous oxide and the hydroxide anion. Hyponitrites (involving the [ON=NO]2− anion) are stable to reducing agents and more commonly act as reducing agents themselves. They are an intermediate step in the oxidation of ammonia to nitrite, which occurs in the nitrogen cycle. Hyponitrite can act as a bridging or chelating bidentate ligand. Nitrous acid (HNO2) is not known as a pure compound, but is a common component in gaseous equilibria and is an important aqueous reagent: its aqueous solutions may be made from acidifying cool aqueous nitrite (NO2−, bent) solutions, although already at room temperature disproportionation to nitrate and nitric oxide is significant. It is a weak acid with pKa 3.35 at 18 °C. Nitrites may be titrimetrically analysed by their oxidation to nitrate by permanganate. They are readily reduced to nitrous oxide and nitric oxide by sulfur dioxide, to hyponitrous acid with tin(II), and to ammonia with hydrogen sulfide. Salts of hydrazinium (N2H5+) react with nitrous acid to produce azides which further react to give nitrous oxide and nitrogen. Sodium nitrite is mildly toxic in concentrations above 100 mg/kg, but small amounts are often used to cure meat and as a preservative to avoid bacterial spoilage. It is also used to synthesise hydroxylamine and to diazotise primary aromatic amines as follows:
ArNH2 + HNO2 + HCl → [ArNN]Cl + 2 H2O
Nitrite is also a common ligand that can coordinate in five ways. The most common are nitro (bonded from the nitrogen) and nitrito (bonded from an oxygen). Nitro-nitrito isomerism is common, where the nitrito form is usually less stable. Nitric acid (HNO3) is by far the most important and the most stable of the nitrogen oxoacids. It is one of the three most used acids (the other two being sulfuric acid and hydrochloric acid) and was first discovered by alchemists in the 13th century. It is made by the catalytic oxidation of ammonia to nitric oxide, which is oxidised to nitrogen dioxide, and then dissolved in water to give concentrated nitric acid. In the United States of America, over seven million tonnes of nitric acid are produced every year, most of which is used for nitrate production for fertilisers and explosives, among other uses. Anhydrous nitric acid may be made by distilling concentrated nitric acid with phosphorus pentoxide at low pressure in glass apparatus in the dark. It can only be made in the solid state, because upon melting it spontaneously decomposes to nitrogen dioxide, and liquid nitric acid undergoes self-ionisation to a larger extent than any other covalent liquid as follows:
2 HNO3 ⇌ H2O + [NO2]+ + [NO3]−
Two hydrates, HNO3·H2O and HNO3·3H2O, are known that can be crystallised. It is a strong acid and concentrated solutions are strong oxidising agents, though gold, platinum, rhodium, and iridium are immune to attack. A 3:1 mixture of concentrated hydrochloric acid and nitric acid, called aqua regia, is still stronger and successfully dissolves gold and platinum, because free chlorine and nitrosyl chloride are formed and chloride anions can form strong complexes.
In concentrated sulfuric acid, nitric acid is protonated to form nitronium, which can act as an electrophile for aromatic nitration:
HNO3 + 2 H2SO4 ⇌ [NO2]+ + H3O+ + 2 HSO4−
The thermal stabilities of nitrates (involving the trigonal planar NO3− anion) depend on the basicity of the metal, and so do the products of decomposition (thermolysis), which can vary between the nitrite (for example, sodium), the oxide (potassium and lead), or even the metal itself (silver) depending on their relative stabilities. Nitrate is also a common ligand with many modes of coordination. Finally, although orthonitric acid (H3NO4), which would be analogous to orthophosphoric acid, does not exist, the tetrahedral orthonitrate anion NO43− is known in its sodium and potassium salts:
NaNO3 + Na2O → Na3NO4 (in a silver crucible, at 300 °C for 7 days)
These white crystalline salts are very sensitive to water vapour and carbon dioxide in the air:
Na3NO4 + H2O + CO2 → NaNO3 + NaOH + NaHCO3
Despite its limited chemistry, the orthonitrate anion is interesting from a structural point of view due to its regular tetrahedral shape and the short N–O bond lengths, implying significant polar character to the bonding. Organic nitrogen compounds Nitrogen is one of the most important elements in organic chemistry. Many organic functional groups involve a carbon–nitrogen bond, such as amides (RCONR2), amines (R3N), imines (RC(=NR)R), imides ((RCO)2NR), azides (RN3), azo compounds (RN2R), cyanates (ROCN), isocyanates (RNCO), nitrates (RONO2), nitriles (RCN), isonitriles (RNC), nitrites (RONO), nitro compounds (RNO2), nitroso compounds (RNO), oximes (RC(=NOH)R), and pyridine derivatives. C–N bonds are strongly polarised towards nitrogen. In these compounds, nitrogen is usually trivalent (though it can be tetravalent in quaternary ammonium salts, R4N+), with a lone pair that can confer basicity on the compound by being coordinated to a proton. This may be offset by other factors: for example, amides are not basic because the lone pair is delocalised into a double bond (though they may act as bases at very low pH, being protonated at the oxygen), and pyrrole is not basic because the lone pair is delocalised as part of an aromatic ring. The amount of nitrogen in a chemical substance can be determined by the Kjeldahl method. In particular, nitrogen is an essential component of nucleic acids, amino acids and thus proteins, and the energy-carrying molecule adenosine triphosphate and is thus vital to all life on Earth. Occurrence Nitrogen is the most common pure element in the earth, making up 78.1% of the volume of the atmosphere (75.5% by mass), around 3.89 million gigatonnes. Despite this, it is not very abundant in Earth's crust, making up somewhere around 19 parts per million of this, on par with niobium, gallium, and lithium. (This represents 300,000 to a million gigatonnes of nitrogen, depending on the mass of the crust.) The only important nitrogen minerals are nitre (potassium nitrate, saltpetre) and soda nitre (sodium nitrate, Chilean saltpetre). However, these have not been an important source of nitrates since the 1920s, when the industrial synthesis of ammonia and nitric acid became common. Nitrogen compounds constantly interchange between the atmosphere and living organisms. Nitrogen must first be processed, or "fixed", into a plant-usable form, usually ammonia.
Some nitrogen fixation is done by lightning strikes producing the nitrogen oxides, but most is done by diazotrophic bacteria through enzymes known as nitrogenases (although today industrial nitrogen fixation to ammonia is also significant). When the ammonia is taken up by plants, it is used to synthesise proteins. These plants are then digested by animals who use the nitrogen compounds to synthesise their proteins and excrete nitrogen-bearing waste. Finally, these organisms die and decompose, undergoing bacterial and environmental oxidation and denitrification, returning free dinitrogen to the atmosphere. Industrial nitrogen fixation by the Haber process is mostly used as fertiliser, although excess nitrogen-bearing waste, when leached, leads to eutrophication of freshwater and the creation of marine dead zones, as nitrogen-driven bacterial growth depletes water oxygen to the point that all higher organisms die. Furthermore, nitrous oxide, which is produced during denitrification, attacks the atmospheric ozone layer. Many saltwater fish manufacture large amounts of trimethylamine oxide to protect them from the high osmotic effects of their environment; conversion of this compound to dimethylamine is responsible for the early odour of saltwater fish that is no longer fresh. In animals, the free radical nitric oxide (derived from an amino acid) serves as an important regulatory molecule for circulation. Nitric oxide's rapid reaction with water in animals results in the production of its metabolite nitrite. Animal metabolism of nitrogen in proteins, in general, results in the excretion of urea, while animal metabolism of nucleic acids results in the excretion of urea and uric acid. The characteristic odour of animal flesh decay is caused by the creation of long-chain, nitrogen-containing amines, such as putrescine and cadaverine, which are breakdown products of the amino acids ornithine and lysine, respectively, in decaying proteins. Production Nitrogen gas is an industrial gas produced by the fractional distillation of liquid air, or by mechanical means using gaseous air (pressurised reverse osmosis membrane or pressure swing adsorption). Nitrogen gas generators using membranes or pressure swing adsorption (PSA) are typically more cost and energy efficient than bulk-delivered nitrogen. Commercial nitrogen is often a byproduct of air-processing for industrial concentration of oxygen for steelmaking and other purposes. When supplied compressed in cylinders it is often called OFN (oxygen-free nitrogen). Commercial-grade nitrogen already contains at most 20 ppm oxygen, and specially purified grades containing at most 2 ppm oxygen and 10 ppm argon are also available. In a chemical laboratory, it is prepared by treating an aqueous solution of ammonium chloride with sodium nitrite:
NH4Cl + NaNO2 → N2 + NaCl + 2 H2O
Small amounts of the impurities NO and HNO3 are also formed in this reaction. The impurities can be removed by passing the gas through aqueous sulfuric acid containing potassium dichromate. It can also be obtained by the thermal decomposition of ammonium dichromate:
3 (NH4)2Cr2O7 → 2 N2 + 9 H2O + 3 Cr2O3 + 2 NH3 + 3/2 O2
Very pure nitrogen can be prepared by the thermal decomposition of barium azide or sodium azide:
2 NaN3 → 2 Na + 3 N2
Applications The applications of nitrogen compounds are naturally extremely varied due to the huge size of this class: hence, only applications of pure nitrogen itself will be considered here.
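Before moving on, here is a rough worked example of the laboratory-scale preparation above (one mole of N2 per mole of sodium nitrite), sketched in Python; the 10 g sample mass and the ideal-gas molar volume at 25 °C and 1 atm are illustrative assumptions, not figures from the text:

    M_NANO2 = 68.995          # g/mol for NaNO2 (Na 22.990 + N 14.007 + 2 x O 15.999)
    MOLAR_VOLUME = 24.465     # L/mol for an ideal gas at 25 degrees C and 1 atm

    def n2_volume_litres(grams_nano2):
        moles = grams_nano2 / M_NANO2     # 1:1 mole ratio NaNO2 : N2
        return moles * MOLAR_VOLUME

    print(round(n2_volume_litres(10.0), 2))   # ~3.55 L of N2 from 10 g of NaNO2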
Two-thirds (2/3) of nitrogen produced by industry is sold as gas and the remaining one-third (1/3) as a liquid. Gas The gas is mostly used as a low-reactivity safe atmosphere wherever the oxygen in the air would pose a fire, explosion, or oxidising hazard. Some examples include:
As a modified atmosphere, pure or mixed with carbon dioxide, to nitrogenate and preserve the freshness of packaged or bulk foods (by delaying rancidity and other forms of oxidative damage). Pure nitrogen as food additive is labelled in the European Union with the E number E941.
In incandescent light bulbs as an inexpensive alternative to argon.
In fire suppression systems for information technology (IT) equipment.
In the manufacture of stainless steel.
In the case-hardening of steel by nitriding.
In some aircraft fuel systems to reduce fire hazard (see inerting system).
To inflate race car and aircraft tires, reducing the problems of inconsistent expansion and contraction caused by moisture and oxygen in natural air.
Nitrogen is commonly used during sample preparation in chemical analysis. It is used to concentrate and reduce the volume of liquid samples. Directing a pressurised stream of nitrogen gas perpendicular to the surface of the liquid causes the solvent to evaporate while leaving the solute(s) and un-evaporated solvent behind. Nitrogen can be used as a replacement for, or in combination with, carbon dioxide to pressurise kegs of some beers, particularly stouts and British ales, due to the smaller bubbles it produces, which makes the dispensed beer smoother and headier. A pressure-sensitive nitrogen capsule known commonly as a "widget" allows nitrogen-charged beers to be packaged in cans and bottles. Nitrogen tanks are also replacing carbon dioxide as the main power source for paintball guns. Nitrogen must be kept at a higher pressure than CO2, making N2 tanks heavier and more expensive. Equipment Some construction equipment uses pressurised nitrogen gas to help the hydraulic system provide extra power to devices such as hydraulic hammers. Nitrogen gas, formed from the decomposition of sodium azide, is used for the inflation of airbags. Execution As nitrogen is an asphyxiant gas in itself, some jurisdictions have considered asphyxiation by inhalation of pure nitrogen as a means of capital punishment (as a substitute for lethal injection). In January 2024, Kenneth Eugene Smith became the first person executed by nitrogen asphyxiation. Liquid Liquid nitrogen is a cryogenic liquid which looks like water. When insulated in proper containers such as dewar flasks, it can be transported and stored with a low rate of evaporative loss. Like dry ice, the main use of liquid nitrogen is for cooling to low temperatures. It is used in the cryopreservation of biological materials such as blood and reproductive cells (sperm and eggs). It is used in cryotherapy to remove cysts and warts on the skin by freezing them. It is used in laboratory cold traps, and in cryopumps to obtain lower pressures in vacuum pumped systems. It is used to cool heat-sensitive electronics such as infrared detectors and X-ray detectors. Other uses include freeze-grinding and machining materials that are soft or rubbery at room temperature, shrink-fitting and assembling engineering components, and more generally to attain very low temperatures where necessary.
Because of its low cost, liquid nitrogen is often used for cooling even when such low temperatures are not strictly necessary, such as refrigeration of food, freeze-branding livestock, freezing pipes to halt flow when valves are not present, and consolidating unstable soil by freezing whenever excavation is going on underneath. Safety Gas Although nitrogen is non-toxic, when released into an enclosed space it can displace oxygen, and therefore presents an asphyxiation hazard. This may happen with few warning symptoms, since the human carotid body is a relatively poor and slow low-oxygen (hypoxia) sensing system. An example occurred shortly before the launch of the first Space Shuttle mission on March 19, 1981, when two technicians died from asphyxiation after they walked into a space located in the Space Shuttle's mobile launcher platform that was pressurised with pure nitrogen as a precaution against fire. When inhaled at high partial pressures (more than about 4 bar, encountered at depths below about 30 m in scuba diving), nitrogen is an anaesthetic agent, causing nitrogen narcosis, a temporary state of mental impairment similar to nitrous oxide intoxication. Nitrogen dissolves in the blood and body fats. Rapid decompression (as when divers ascend too quickly or astronauts decompress too quickly from cabin pressure to spacesuit pressure) can lead to a potentially fatal condition called decompression sickness (formerly known as caisson sickness or the bends), when nitrogen bubbles form in the bloodstream, nerves, joints, and other sensitive or vital areas. Bubbles from other "inert" gases (gases other than carbon dioxide and oxygen) cause the same effects, so replacement of nitrogen in breathing gases may prevent nitrogen narcosis, but does not prevent decompression sickness. Liquid As a cryogenic liquid, liquid nitrogen can be dangerous by causing cold burns on contact, although the Leidenfrost effect provides protection for very short exposure (about one second). Ingestion of liquid nitrogen can cause severe internal damage. For example, in 2012, a young woman in England had to have her stomach removed after ingesting a cocktail made with liquid nitrogen. Because the liquid-to-gas expansion ratio of nitrogen is 1:694 at 20 °C, a tremendous amount of force can be generated if liquid nitrogen is rapidly vaporised in an enclosed space. In an incident on January 12, 2006, at Texas A&M University, the pressure-relief devices of a tank of liquid nitrogen were malfunctioning and later sealed. As a result of the subsequent pressure buildup, the tank failed catastrophically. The force of the explosion was sufficient to propel the tank through the ceiling immediately above it, shatter a reinforced concrete beam immediately below it, and blow the walls of the laboratory 0.1–0.2 m off their foundations. Liquid nitrogen readily evaporates to form gaseous nitrogen, and hence the precautions associated with gaseous nitrogen also apply to liquid nitrogen. For example, oxygen sensors are sometimes used as a safety precaution when working with liquid nitrogen to alert workers of gas spills into a confined space. Vessels containing liquid nitrogen can condense oxygen from air. The liquid in such a vessel becomes increasingly enriched in oxygen (boiling point −183 °C, higher than that of nitrogen) as the nitrogen evaporates, and can cause violent oxidation of organic material. 
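A back-of-envelope Python sketch of the 1:694 expansion ratio quoted above shows the scale of the pressure hazard; it treats the vapour as an ideal gas at constant temperature via Boyle's law, and the vessel size is an arbitrary assumption (real failures are further complicated by heat flow and vessel strength):

    EXPANSION_RATIO = 694     # 1 L of liquid -> ~694 L of gas at 20 degrees C, 1 atm

    def sealed_vessel_pressure_atm(litres_liquid, vessel_volume_litres):
        # Compress the expanded gas volume back into the vessel: P2 ~ V_gas / V_vessel
        gas_volume = litres_liquid * EXPANSION_RATIO
        return gas_volume / vessel_volume_litres

    print(round(sealed_vessel_pressure_atm(1.0, 10.0)))   # ~69 atm in a sealed 10 L vessel

Even this crude estimate, roughly 69 atmospheres from a single litre of trapped liquid, makes clear why pressure-relief devices are essential.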
Oxygen deficiency monitors Oxygen deficiency monitors are used to measure levels of oxygen in confined spaces and at any place where nitrogen gas or liquid is stored or used. In the event of a nitrogen leak and a decrease in oxygen to a pre-set alarm level, an oxygen deficiency monitor can be programmed to set off audible and visual alarms, thereby providing notification of the possible impending danger. Most commonly, monitors are set to alert personnel when the oxygen level drops below 19.5%. OSHA specifies that a hazardous atmosphere may include one where the oxygen concentration is below 19.5% or above 23.5%. Oxygen deficiency monitors can be fixed (mounted to a wall and hard-wired into the building's power supply, or simply plugged into a power outlet) or portable (hand-held or wearable).
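A toy Python sketch of the alarm logic described above, using the 19.5% and 23.5% bounds quoted from OSHA; actual monitors, their setpoints, and their interfaces vary by manufacturer, so this is illustrative only:

    LOW_ALARM = 19.5    # % O2: oxygen-deficient atmosphere
    HIGH_ALARM = 23.5   # % O2: oxygen-enriched atmosphere

    def check_oxygen(percent_o2):
        if percent_o2 < LOW_ALARM:
            return "ALARM: oxygen deficiency (possible nitrogen leak)"
        if percent_o2 > HIGH_ALARM:
            return "ALARM: oxygen enrichment"
        return "OK"

    print(check_oxygen(18.2))   # -> ALARM: oxygen deficiency (possible nitrogen leak)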
Physical sciences
Chemistry
null
21226
https://en.wikipedia.org/wiki/Neurology
Neurology
Neurology (from Greek νεῦρον (neuron), "string, nerve", and the suffix -logia, "study of") is the branch of medicine dealing with the diagnosis and treatment of all categories of conditions and disease involving the nervous system, which comprises the brain, the spinal cord and the peripheral nerves. Neurological practice relies heavily on the field of neuroscience, the scientific study of the nervous system, using various techniques of neurotherapy. A neurologist is a physician specializing in neurology and trained to investigate, diagnose and treat neurological disorders. Neurologists diagnose and treat myriad neurologic conditions, including stroke, epilepsy, movement disorders such as Parkinson's disease, brain infections, autoimmune neurologic disorders such as multiple sclerosis, sleep disorders, brain injury, headache disorders like migraine, tumors of the brain and dementias such as Alzheimer's disease. Neurologists may also have roles in clinical research, clinical trials, and basic or translational research. Neurology is a nonsurgical specialty; its corresponding surgical specialty is neurosurgery. History The academic discipline began between the 15th and 16th centuries with the work and research of many neurologists such as Thomas Willis, Robert Whytt, Matthew Baillie, Charles Bell, Moritz Heinrich Romberg, Duchenne de Boulogne, William A. Hammond, Jean-Martin Charcot, C. Miller Fisher and John Hughlings Jackson. Neo-Latin neurologia appeared in various texts from 1610 denoting an anatomical focus on the nerves (variably understood as vessels), and was most notably used by Willis, who preferred Greek νευρολογία. Training In the United States and Canada, neurologists are physicians who have completed a postgraduate training period known as residency specializing in neurology after graduation from medical school. This additional training period typically lasts four years, with the first year devoted to training in internal medicine. On average, neurologists complete a total of eight to ten years of training. This includes four years of medical school, four years of residency and an optional one to two years of fellowship. While neurologists may treat general neurologic conditions, some neurologists go on to receive additional training focusing on a particular subspecialty in the field of neurology. These training programs are called fellowships, and are one to three years in duration. Subspecialties in the United States include brain injury medicine, clinical neurophysiology, epilepsy, neurodevelopmental disabilities, neuromuscular medicine, pain medicine, sleep medicine, neurocritical care, vascular neurology (stroke), behavioral neurology, headache, neuroimmunology and infectious disease, movement disorders, neuroimaging, neurooncology, and neurorehabilitation. In Germany, a compulsory year of psychiatry must be done to complete a residency of neurology. In the United Kingdom and Ireland, neurology is a subspecialty of general (internal) medicine. After five years of medical school and two years as a Foundation Trainee, an aspiring neurologist must pass the examination for Membership of the Royal College of Physicians (or the Irish equivalent) and complete two years of core medical training before entering specialist training in neurology. Up to the 1960s, some intending to become neurologists would also spend two years working in psychiatric units before obtaining a diploma in psychological medicine.
However, that was uncommon and, now that the MRCPsych takes three years to obtain, would no longer be practical. A period of research is essential, and obtaining a higher degree aids career progression. Many found that progression was eased after an attachment to the Institute of Neurology at Queen Square, London. Some neurologists enter the field of rehabilitation medicine (known as physiatry in the US) to specialise in neurological rehabilitation, which may include stroke medicine, as well as traumatic brain injuries. Physical examination During a neurological examination, the neurologist reviews the patient's health history with special attention to the patient's neurologic complaints. The patient then undergoes a neurological exam. Typically, the exam tests mental status, function of the cranial nerves (including vision), strength, coordination, reflexes, sensation and gait. This information helps the neurologist determine whether the problem exists in the nervous system and establish its clinical localization. Localization of the pathology is the key process by which neurologists develop their differential diagnosis. Further tests may be needed to confirm a diagnosis and ultimately guide therapy and appropriate management. Useful adjunct imaging studies in neurology include CT scanning and MRI. Other tests used to assess muscle and nerve function include nerve conduction studies and electromyography. Clinical tasks Neurologists examine patients who are referred to them by other physicians in both the inpatient and outpatient settings. Neurologists begin their interactions with patients by taking a comprehensive medical history, and then performing a physical examination focusing on evaluating the nervous system. Components of the neurological examination include assessment of the patient's cognitive function, cranial nerves, motor strength, sensation, reflexes, coordination, and gait. In some instances, neurologists may order additional diagnostic tests as part of the evaluation. Commonly employed tests in neurology include imaging studies such as computed axial tomography (CAT) scans, magnetic resonance imaging (MRI), and ultrasound of major blood vessels of the head and neck. Neurophysiologic studies, including electroencephalography (EEG), needle electromyography (EMG), nerve conduction studies (NCSs) and evoked potentials are also commonly ordered. Neurologists frequently perform lumbar punctures to assess characteristics of a patient's cerebrospinal fluid. Advances in genetic testing have made genetic testing an important tool in the classification of inherited neuromuscular disease and diagnosis of many other neurogenetic diseases. The role of genetic influences on the development of acquired neurologic diseases is an active area of research. Neurotherapy Neurotherapy involves systemic targeted delivery of an energy stimulus or chemical agents to a specific neurological zone in the body. Some of the commonly encountered conditions treated by neurologists include headaches, radiculopathy, neuropathy, stroke, dementia, seizures and epilepsy, Alzheimer's disease, attention deficit/hyperactivity disorder, Parkinson's disease, Tourette's syndrome, multiple sclerosis, head trauma, sleep disorders, neuromuscular diseases, and various infections and tumors of the nervous system. Neurologists are also asked to evaluate unresponsive patients on life support to confirm brain death. Treatment options vary depending on the neurological problem.
They can include referring the patient to a physiotherapist, prescribing medications, or recommending a surgical procedure. Some neurologists specialize in certain parts of the nervous system or in specific procedures. For example, clinical neurophysiologists specialize in the use of EEG and intraoperative monitoring to diagnose certain neurological disorders. Other neurologists specialize in the use of electrodiagnostic medicine studies – needle EMG and NCSs. In the US, physicians do not typically specialize in all the aspects of clinical neurophysiology – i.e. sleep, EEG, EMG, and NCSs. The American Board of Clinical Neurophysiology certifies US physicians in general clinical neurophysiology, epilepsy, and intraoperative monitoring. The American Board of Electrodiagnostic Medicine certifies US physicians in electrodiagnostic medicine and certifies technologists in nerve-conduction studies. Sleep medicine is a subspecialty field in the US under several medical specialties including anesthesiology, internal medicine, family medicine, and neurology. Neurosurgery is a distinct specialty that involves a different training path and emphasizes the surgical treatment of neurological disorders. Also, many nonmedical doctors, those with doctoral degrees (usually PhDs) in subjects such as biology and chemistry, study and research the nervous system. Working in laboratories in universities, hospitals, and private companies, these neuroscientists perform clinical and laboratory experiments and tests to learn more about the nervous system and find cures or new treatments for diseases and disorders. A great deal of overlap occurs between neuroscience and neurology. Many neurologists work in academic training hospitals, where they conduct research as neuroscientists in addition to treating patients and teaching neurology to medical students. General caseload Neurologists are responsible for the diagnosis, treatment, and management of all the conditions mentioned above. When surgical or endovascular intervention is required, the neurologist may refer the patient to a neurosurgeon or an interventional neuroradiologist. In some countries, additional legal responsibilities of a neurologist may include making a finding of brain death when it is suspected that a patient has died. Neurologists frequently care for people with hereditary (genetic) diseases when the major manifestations are neurological, as is frequently the case. Lumbar punctures are frequently performed by neurologists. Some neurologists may develop an interest in particular subfields, such as stroke, dementia, movement disorders, neurointensive care, headaches, epilepsy, sleep disorders, chronic pain management, multiple sclerosis, or neuromuscular diseases. Overlapping areas Some overlap also occurs with other specialties, varying from country to country and even within a local geographic area. Acute head trauma is most often treated by neurosurgeons, whereas sequelae of head trauma may be treated by neurologists or specialists in rehabilitation medicine. Although stroke cases have been traditionally managed by internal medicine or hospitalists, the emergence of vascular neurology and interventional neuroradiology has created a demand for stroke specialists. The establishment of Joint Commission-certified stroke centers has increased the role of neurologists in stroke care in many primary, as well as tertiary, hospitals. Some cases of nervous system infectious diseases are treated by infectious disease specialists. 
Most cases of headache, at least the less severe ones, are diagnosed and treated primarily by general practitioners. Likewise, most cases of sciatica are treated by general practitioners, though they may be referred to neurologists or surgeons (neurosurgeons or orthopedic surgeons). Sleep disorders are also treated by pulmonologists and psychiatrists. Cerebral palsy is initially treated by pediatricians, but care may be transferred to an adult neurologist after the patient reaches a certain age. Physical medicine and rehabilitation physicians may treat patients with neuromuscular diseases with electrodiagnostic studies (needle EMG and nerve-conduction studies) and other diagnostic tools. In the United Kingdom and other countries, many of the conditions encountered by older patients such as movement disorders, including Parkinson's disease, stroke, dementia, or gait disorders, are managed predominantly by specialists in geriatric medicine. Clinical neuropsychologists are often called upon to evaluate brain-behavior relationships for the purpose of assisting with differential diagnosis, planning rehabilitation strategies, documenting cognitive strengths and weaknesses, and measuring change over time (e.g., for identifying abnormal aging or tracking the progression of a dementia). Relationship to clinical neurophysiology In some countries such as the United States and Germany, neurologists may subspecialize in clinical neurophysiology, the field responsible for EEG and intraoperative monitoring, or in electrodiagnostic medicine – nerve conduction studies, EMG, and evoked potentials. In other countries, this is an autonomous specialty (e.g., United Kingdom, Sweden, Spain). Overlap with psychiatry In the past, prior to the advent of more advanced diagnostic techniques such as MRI, some neurologists considered psychiatry and neurology to overlap. Although mental illnesses are believed by many to be neurological disorders affecting the central nervous system, traditionally they are classified separately, and treated by psychiatrists. In a 2002 review article in the American Journal of Psychiatry, Professor Joseph B. Martin, Dean of Harvard Medical School and a neurologist by training, wrote, "the separation of the two categories is arbitrary, often influenced by beliefs rather than proven scientific observations. And the fact that the brain and mind are one makes the separation artificial anyway". Neurological disorders often have psychiatric manifestations, such as post-stroke depression, depression and dementia associated with Parkinson's disease, mood and cognitive dysfunctions in Alzheimer's disease, and Huntington disease, to name a few. Hence, the sharp distinction between neurology and psychiatry is not always on a biological basis. The dominance of psychoanalytic theory in the first three-quarters of the 20th century has since been largely replaced by a focus on pharmacology. Despite the shift to a medical model, brain science has not advanced to a point where scientists or clinicians can point to readily discernible pathological lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder. Neurological enhancement The emerging field of neurological enhancement highlights the potential of therapies to improve such things as workplace efficacy, attention in school, and overall happiness in personal lives. However, this field has also given rise to questions about neuroethics. 
Neurological UX Neurological UX is a specialised branch of web accessibility focusing on designing for individuals with neurological conditions such as ADHD, dyslexia, and autism. Coined by Gareth Slinn in his book NeurologicalUX, this field aims to create inclusive digital experiences that reduce anxiety, enhance readability, and improve usability for diverse cognitive needs. It emphasises thoughtful design choices like accessible colour themes, simplified navigation, and adaptable interfaces to accommodate varying neurological profiles.
Biology and health sciences
Fields of medicine
null
21245
https://en.wikipedia.org/wiki/Neuroscience
Neuroscience
Neuroscience is the scientific study of the nervous system (the brain, spinal cord, and peripheral nervous system), its functions, and its disorders. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences. The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. The techniques used by neuroscientists have expanded enormously, from molecular and cellular studies of individual neurons to imaging of sensory, motor and cognitive tasks in the brain. History The earliest study of the nervous system dates to ancient Egypt. Trepanation, the surgical practice of either drilling or scraping a hole into the skull for the purpose of curing head injuries or mental disorders, or relieving cranial pressure, was first recorded during the Neolithic period. Manuscripts dating to 1700 BC indicate that the Egyptians had some knowledge about symptoms of brain damage. Early views on the function of the brain regarded it to be a "cranial stuffing" of sorts. In Egypt, from the late Middle Kingdom onwards, the brain was regularly removed in preparation for mummification. It was believed at the time that the heart was the seat of intelligence. According to Herodotus, the first step of mummification was to "take a crooked piece of iron, and with it draw out the brain through the nostrils, thus getting rid of a portion, while the skull is cleared of the rest by rinsing with drugs." The view that the heart was the source of consciousness was not challenged until the time of the Greek physician Hippocrates. He believed that the brain was not only involved with sensation—since most specialized organs (e.g., eyes, ears, tongue) are located in the head near the brain—but was also the seat of intelligence. Plato also speculated that the brain was the seat of the rational part of the soul. Aristotle, however, believed the heart was the center of intelligence and that the brain regulated the amount of heat from the heart. This view was generally accepted until the Roman physician Galen, a follower of Hippocrates and physician to Roman gladiators, observed that his patients lost their mental faculties when they had sustained damage to their brains. Abulcasis, Averroes, Avicenna, Avenzoar, and Maimonides, active in the Medieval Muslim world, described a number of medical problems related to the brain. In Renaissance Europe, Vesalius (1514–1564), René Descartes (1596–1650), Thomas Willis (1621–1675) and Jan Swammerdam (1637–1680) also made several contributions to neuroscience. Luigi Galvani's pioneering work in the late 1700s set the stage for studying the electrical excitability of muscles and neurons. In 1843 Emil du Bois-Reymond demonstrated the electrical nature of the nerve signal, whose speed Hermann von Helmholtz proceeded to measure, and in 1875 Richard Caton found electrical phenomena in the cerebral hemispheres of rabbits and monkeys. Adolf Beck published in 1890 similar observations of spontaneous electrical activity of the brain of rabbits and dogs. 
Studies of the brain became more sophisticated after the invention of the microscope and the development of a staining procedure by Camillo Golgi during the late 1890s. The procedure used a silver chromate salt to reveal the intricate structures of individual neurons. His technique was used by Santiago Ramón y Cajal and led to the formation of the neuron doctrine, the hypothesis that the functional unit of the brain is the neuron. Golgi and Ramón y Cajal shared the Nobel Prize in Physiology or Medicine in 1906 for their extensive observations, descriptions, and categorizations of neurons throughout the brain. In parallel with this research, in 1815 Jean Pierre Flourens induced localized lesions of the brain in living animals to observe their effects on motricity, sensibility and behavior. Work with brain-damaged patients by Marc Dax in 1836 and Paul Broca in 1865 suggested that certain regions of the brain were responsible for certain functions. At the time, these findings were seen as a confirmation of Franz Joseph Gall's theory that language was localized and that certain psychological functions were localized in specific areas of the cerebral cortex. The localization of function hypothesis was supported by observations of epileptic patients conducted by John Hughlings Jackson, who correctly inferred the organization of the motor cortex by watching the progression of seizures through the body. Carl Wernicke further developed the theory of the specialization of specific brain structures in language comprehension and production. Modern research using neuroimaging techniques still relies on the anatomical definitions of the Brodmann cerebral cytoarchitectonic map (cytoarchitecture refers to the study of cell structure) from this era, continuing to show that distinct areas of the cortex are activated in the execution of specific tasks. During the 20th century, neuroscience began to be recognized as a distinct academic discipline in its own right, rather than as studies of the nervous system within other disciplines. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field. Rioch originated the integration of basic anatomical and physiological research with clinical psychiatry at the Walter Reed Army Institute of Research, starting in the 1950s. During the same period, Schmitt established a neuroscience research program within the Biology Department at the Massachusetts Institute of Technology, bringing together biology, chemistry, physics, and mathematics. The first freestanding neuroscience department (then called Psychobiology) was founded in 1964 at the University of California, Irvine by James L. McGaugh. This was followed by the Department of Neurobiology at Harvard Medical School, which was founded in 1966 by Stephen Kuffler. In the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain. He summarized his findings in a 1950 book called The Cerebral Cortex of Man. Wilder Penfield and his co-investigators Edwin Boldrey and Theodore Rasmussen are considered to be the originators of the cortical homunculus. The understanding of neurons and of nervous system function became increasingly precise and molecular during the 20th century. 
For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for the transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley, in what is called the FitzHugh–Nagumo model (a common form of its equations is sketched at the end of this passage). In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981 Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. As a result of the increasing interest in the nervous system, several prominent neuroscience organizations have been formed to provide a forum for all neuroscientists during the 20th century. For example, the International Brain Research Organization was founded in 1961, the International Society for Neurochemistry in 1963, the European Brain and Behaviour Society in 1968, and the Society for Neuroscience in 1969. Recently, the application of neuroscience research results has also given rise to applied disciplines such as neuroeconomics, neuroeducation, neuroethics, and neurolaw. Over time, brain research has gone through philosophical, experimental, and theoretical phases, with work on neural implants and brain simulation predicted to be important in the future. Modern neuroscience The scientific study of the nervous system increased significantly during the second half of the twentieth century, principally due to advances in molecular biology, electrophysiology, and computational neuroscience. This has allowed neuroscientists to study the nervous system in all its aspects: how it is structured, how it works, how it develops, how it malfunctions, and how it can be changed. For example, it has become possible to understand, in much detail, the complex processes occurring within a single neuron. Neurons are cells specialized for communication. They are able to communicate with neurons and other cell types through specialized junctions called synapses, at which electrical or electrochemical signals can be transmitted from one cell to another. Many neurons extrude a long thin filament of axoplasm called an axon, which may extend to distant parts of the body and are capable of rapidly carrying electrical signals, influencing the activity of other neurons, muscles, or glands at their termination points. A nervous system emerges from the assemblage of neurons that are connected to each other in neural circuits and networks. The vertebrate nervous system can be split into two parts: the central nervous system (defined as the brain and spinal cord), and the peripheral nervous system. In many species—including all vertebrates—the nervous system is the most complex organ system in the body, with most of the complexity residing in the brain. The human brain alone contains around one hundred billion neurons and one hundred trillion synapses; it consists of thousands of distinguishable substructures, connected to each other in synaptic networks whose intricacies have only begun to be unraveled. At least one out of three of the approximately 20,000 genes belonging to the human genome is expressed mainly in the brain. 
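The FitzHugh–Nagumo model mentioned above is often quoted in the following form; notation varies across textbooks, and this is one common parameterization rather than the notation of the original 1961–1962 papers:

\[ \frac{dv}{dt} = v - \frac{v^3}{3} - w + I_{\text{ext}}, \qquad \frac{dw}{dt} = \varepsilon\,(v + a - b\,w), \]

where v is a fast voltage-like variable, w is a slow recovery variable, I_ext is the injected current, and a, b, and ε are fitted constants. The model trades the biophysical detail of the four-variable Hodgkin–Huxley system for a two-variable system whose excitability can be analyzed in the phase plane.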
Due to the high degree of plasticity of the human brain, the structure of its synapses and their resulting functions change throughout life. Making sense of the nervous system's dynamic complexity is a formidable research challenge. Ultimately, neuroscientists would like to understand every aspect of the nervous system, including how it works, how it develops, how it malfunctions, and how it can be altered or repaired. Analysis of the nervous system is therefore performed at multiple levels, ranging from the molecular and cellular levels to the systems and cognitive levels. The specific topics that form the main focus of research change over time, driven by an ever-expanding base of knowledge and the availability of increasingly sophisticated technical methods. Improvements in technology have been the primary drivers of progress: developments in electron microscopy, computer science, electronics, functional neuroimaging, and genetics and genomics have all been major contributors. Advances in the classification of brain cells have been enabled by electrophysiological recording, single-cell genetic sequencing, and high-quality microscopy, which have been combined into a single method pipeline called patch-sequencing, in which all three methods are simultaneously applied using miniature tools. The efficiency of this method and the large amounts of data it generates have allowed researchers to make some general conclusions about cell types; for example, that the human and mouse brains have different versions of fundamentally the same cell types. Molecular and cellular neuroscience Basic questions addressed in molecular neuroscience include the mechanisms by which neurons express and respond to molecular signals and how axons form complex connectivity patterns. At this level, tools from molecular biology and genetics are used to understand how neurons develop and how genetic changes affect biological functions. The morphology, molecular identity, and physiological characteristics of neurons and how they relate to different types of behavior are also of considerable interest. Questions addressed in cellular neuroscience include the mechanisms of how neurons process signals physiologically and electrochemically. These questions include how signals are processed by neurites and somas and how neurotransmitters and electrical signals are used to process information in a neuron. Neurites are thin extensions from a neuronal cell body, consisting of dendrites (specialized to receive synaptic inputs from other neurons) and axons (specialized to conduct nerve impulses called action potentials). Somas are the cell bodies of the neurons and contain the nucleus. Another major area of cellular neuroscience is the investigation of the development of the nervous system. Questions include the patterning and regionalization of the nervous system, axonal and dendritic development, trophic interactions, synapse formation and the implication of fractones in neural stem cells, differentiation of neurons and glia (neurogenesis and gliogenesis), and neuronal migration. Computational neurogenetic modeling is concerned with the development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes, on the cellular level (computational neurogenetic modeling, or CNGM, can also be used to model neural systems). 
Neural circuits and systems Systems neuroscience research centers on the structural and functional architecture of the developing human brain, and the functions of large-scale brain networks, or functionally connected systems within the brain. Alongside brain development, systems neuroscience also focuses on how the structure and function of the brain enables or restricts the processing of sensory information, using learned mental models of the world, to motivate behavior. Questions in systems neuroscience include how neural circuits are formed and used anatomically and physiologically to produce functions such as reflexes, multisensory integration, motor coordination, circadian rhythms, emotional responses, learning, and memory. In other words, this area of research studies how connections are made and morphed in the brain, and the effect this has on human sensation, movement, attention, inhibitory control, decision-making, reasoning, memory formation, reward, and emotion regulation. Specific areas of interest for the field include observations of how the structure of neural circuits affects skill acquisition, how specialized regions of the brain develop and change (neuroplasticity), and the development of brain atlases, or wiring diagrams of individual developing brains. The related fields of neuroethology and neuropsychology address the question of how neural substrates underlie specific animal and human behaviors. Neuroendocrinology and psychoneuroimmunology examine interactions between the nervous system and the endocrine and immune systems, respectively. Despite many advancements, the way that networks of neurons perform complex cognitive processes and behaviors is still poorly understood. Cognitive and behavioral neuroscience Cognitive neuroscience addresses the questions of how psychological functions are produced by neural circuitry. The emergence of powerful new measurement techniques such as neuroimaging (e.g., fMRI, PET, SPECT), EEG, MEG, electrophysiology, optogenetics and human genetic analysis, combined with sophisticated experimental techniques from cognitive psychology, allows neuroscientists and psychologists to address abstract questions such as how cognition and emotion are mapped to specific neural substrates. Although many studies hold a reductionist stance looking for the neurobiological basis of cognitive phenomena, recent research shows that there is an interplay between neuroscientific findings and conceptual research, soliciting and integrating both perspectives. For example, neuroscience research on empathy has prompted an interdisciplinary debate involving philosophy, psychology and psychopathology. Moreover, the neuroscientific identification of multiple memory systems related to different brain areas has challenged the idea of memory as a literal reproduction of the past, supporting a view of memory as a generative, constructive and dynamic process. Neuroscience is also allied with the social and behavioral sciences, as well as with nascent interdisciplinary fields. Examples of such alliances include neuroeconomics, decision theory, social neuroscience, and neuromarketing to address complex questions about interactions of the brain with its environment. A study into consumer responses, for example, uses EEG to investigate neural correlates associated with narrative transportation into stories about energy efficiency. 
Computational neuroscience Questions in computational neuroscience can span a wide range of levels of traditional analysis, such as development, structure, and cognitive functions of the brain. Research in this field utilizes mathematical models, theoretical analysis, and computer simulation to describe and verify biologically plausible neurons and nervous systems. For example, biological neuron models are mathematical descriptions of spiking neurons which can be used to describe both the behavior of single neurons as well as the dynamics of neural networks. Computational neuroscience is often referred to as theoretical neuroscience. Neuroscience and medicine Clinical neuroscience Neurology, psychiatry, neurosurgery, psychosurgery, anesthesiology and pain medicine, neuropathology, neuroradiology, ophthalmology, otolaryngology, clinical neurophysiology, addiction medicine, and sleep medicine are some medical specialties that specifically address the diseases of the nervous system. These terms also refer to clinical disciplines involving diagnosis and treatment of these diseases. Neurology works with diseases of the central and peripheral nervous systems, such as amyotrophic lateral sclerosis (ALS) and stroke, and their medical treatment. Psychiatry focuses on affective, behavioral, cognitive, and perceptual disorders. Anesthesiology focuses on perception of pain, and pharmacologic alteration of consciousness. Neuropathology focuses upon the classification and underlying pathogenic mechanisms of central and peripheral nervous system and muscle diseases, with an emphasis on morphologic, microscopic, and chemically observable alterations. Neurosurgery and psychosurgery work primarily with surgical treatment of diseases of the central and peripheral nervous systems. Neuroscience underlies the development of various neurotherapy methods to treat diseases of the nervous system. Translational research Recently, the boundaries between various specialties have blurred, as they are all influenced by basic research in neuroscience. For example, brain imaging enables objective biological insight into mental illnesses, which can lead to faster diagnosis, more accurate prognosis, and improved monitoring of patient progress over time. Integrative neuroscience describes the effort to combine models and information from multiple levels of research to develop a coherent model of the nervous system. For example, brain imaging coupled with physiological numerical models and theories of fundamental mechanisms may shed light on psychiatric disorders. Another important area of translational research is brain–computer interfaces (BCIs), or machines that are able to communicate and influence the brain. They are currently being researched for their potential to repair neural systems and restore certain cognitive functions. However, some ethical considerations have to be dealt with before they are accepted. Major branches Modern neuroscience education and research activities can be very roughly categorized into the following major branches, based on the subject and scale of the system in examination as well as distinct experimental or curricular approaches. Individual neuroscientists, however, often work on questions that span several distinct subfields. 
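To make the notion of a biological neuron model described under computational neuroscience above concrete, the following sketch simulates a leaky integrate-and-fire neuron, one of the simplest spiking models; all parameter values here are arbitrary illustrative choices, not constants from any particular study.

```python
import numpy as np

def simulate_lif(i_ext=1.5, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + i_ext.

    Returns the membrane-potential trace and the spike times.
    Units are arbitrary; parameters are illustrative only.
    """
    n_steps = int(t_max / dt)
    v = v_rest
    trace = np.empty(n_steps)
    spike_times = []
    for step in range(n_steps):
        # Forward-Euler step of the membrane equation
        v += (dt / tau) * (-(v - v_rest) + i_ext)
        if v >= v_thresh:              # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # and reset the membrane potential
        trace[step] = v
    return trace, spike_times

trace, spikes = simulate_lif()
print(f"{len(spikes)} spikes; first spike at t = {spikes[0]:.1f} (arbitrary units)")
```

With a constant input above threshold, the model fires regularly; richer biological neuron models such as Hodgkin–Huxley or FitzHugh–Nagumo replace the simple threshold-and-reset rule with continuous membrane dynamics.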
Careers in neuroscience Bachelor's Level Master's Level Advanced Degree Neuroscience organizations The largest professional neuroscience organization is the Society for Neuroscience (SFN), which is based in the United States but includes many members from other countries. Since its founding in 1969 the SFN has grown steadily: as of 2010 it recorded 40,290 members from 83 countries. Annual meetings, held each year in a different American city, draw attendance from researchers, postdoctoral fellows, graduate students, and undergraduates, as well as educational institutions, funding agencies, publishers, and hundreds of businesses that supply products used in research. Other major organizations devoted to neuroscience include the International Brain Research Organization (IBRO), which holds its meetings in a country from a different part of the world each year, and the Federation of European Neuroscience Societies (FENS), which holds a meeting in a different European city every two years. FENS comprises a set of 32 national-level organizations, including the British Neuroscience Association, the German Neuroscience Society (Neurowissenschaftliche Gesellschaft), and the French Société des Neurosciences. The first National Honor Society in Neuroscience, Nu Rho Psi, was founded in 2006. Numerous youth neuroscience societies which support undergraduates, graduates and early career researchers also exist, such as Simply Neuroscience and Project Encephalon. In 2013, the BRAIN Initiative was announced in the US. The International Brain Initiative was created in 2017, currently comprising more than seven national-level brain research initiatives (US, Europe, Allen Institute, Japan, China, Australia, Canada, Korea, and Israel) spanning four continents. Public education and outreach In addition to conducting traditional research in laboratory settings, neuroscientists have also been involved in the promotion of awareness and knowledge about the nervous system among the general public and government officials. Such promotions have been done by both individual neuroscientists and large organizations. For example, individual neuroscientists have promoted neuroscience education among young students by organizing the International Brain Bee, which is an academic competition for high school or secondary school students worldwide. In the United States, large organizations such as the Society for Neuroscience have promoted neuroscience education by developing a primer called Brain Facts, collaborating with public school teachers to develop Neuroscience Core Concepts for K-12 teachers and students, and cosponsoring a campaign with the Dana Foundation called Brain Awareness Week to increase public awareness about the progress and benefits of brain research. In Canada, the Canadian Institutes of Health Research's (CIHR) Canadian National Brain Bee is held annually at McMaster University. Neuroscience educators formed a Faculty for Undergraduate Neuroscience (FUN) in 1992 to share best practices and provide travel awards for undergraduates presenting at Society for Neuroscience meetings. Neuroscientists have also collaborated with other education experts to study and refine educational techniques to optimize learning among students, an emerging field called educational neuroscience. Federal agencies in the United States, such as the National Institutes of Health (NIH) and National Science Foundation (NSF), have also funded research that pertains to best practices in teaching and learning of neuroscience concepts. 
Engineering applications of neuroscience Neuromorphic computer chips Neuromorphic engineering is a branch of neuroscience that deals with creating functional physical models of neurons for the purposes of useful computation. The emergent computational properties of neuromorphic computers are fundamentally different from those of conventional computers in the sense that they are complex systems, and that the computational components are interrelated with no central processor. One example of such a computer is the SpiNNaker supercomputer. Sensors can also be made smart with neuromorphic technology; an example of this is the event camera. BrainScaleS (brain-inspired multiscale computation in neuromorphic hybrid systems) is a hybrid analog neuromorphic supercomputer located at Heidelberg University in Germany. It was developed as part of the Human Brain Project's neuromorphic computing platform and is the complement to the SpiNNaker supercomputer, which is based on digital technology. The architecture used in BrainScaleS mimics biological neurons and their connections on a physical level; additionally, since the components are made of silicon, these model neurons operate on average 864 times faster than their biological counterparts (24 hours of real time corresponds to 100 seconds in the machine simulation). Recent advances in neuromorphic microchip technology have led a group of scientists to create an artificial neuron that could replace real neurons in disease. Nobel prizes related to neuroscience
Biology and health sciences
Basics
null
21272
https://en.wikipedia.org/wiki/Neutron
Neutron
The neutron is a subatomic particle, symbol n or n0, that has no electric charge, and a mass slightly greater than that of a proton. Protons and neutrons constitute the nuclei of atoms. Since protons and neutrons behave similarly within the nucleus, they are both referred to as nucleons. Nucleons have a mass of approximately one atomic mass unit, or dalton (symbol: Da). Their properties and interactions are described by nuclear physics. Protons and neutrons are not elementary particles; each is composed of three quarks. The chemical properties of an atom are mostly determined by the configuration of electrons that orbit the atom's heavy nucleus. The electron configuration is determined by the charge of the nucleus, which is determined by the number of protons, or atomic number. The number of neutrons is the neutron number. Neutrons do not affect the electron configuration. Atoms of a chemical element that differ only in neutron number are called isotopes. For example, carbon, with atomic number 6, has an abundant isotope carbon-12 with 6 neutrons and a rare isotope carbon-13 with 7 neutrons. Some elements occur in nature with only one stable isotope, such as fluorine. Other elements occur with many stable isotopes, such as tin with ten stable isotopes, or with no stable isotope, such as technetium. The properties of an atomic nucleus depend on both atomic and neutron numbers. With their positive charge, the protons within the nucleus are repelled by the long-range electromagnetic force, but the much stronger, short-range nuclear force binds the nucleons closely together. Neutrons are required for the stability of nuclei, with the exception of the single-proton hydrogen nucleus. Neutrons are produced copiously in nuclear fission and fusion. They are a primary contributor to the nucleosynthesis of chemical elements within stars through fission, fusion, and neutron capture processes. The neutron is essential to the production of nuclear power. In the decade after the neutron was discovered by James Chadwick in 1932, neutrons were used to induce many different types of nuclear transmutations. With the discovery of nuclear fission in 1938, it was quickly realized that, if a fission event produced neutrons, each of these neutrons might cause further fission events, in a cascade known as a nuclear chain reaction. These events and findings led to the first self-sustaining nuclear reactor (Chicago Pile-1, 1942) and the first nuclear weapon (Trinity, 1945). Dedicated neutron sources like neutron generators, research reactors and spallation sources produce free neutrons for use in irradiation and in neutron scattering experiments. A free neutron spontaneously decays to a proton, an electron, and an antineutrino, with a mean lifetime of about 15 minutes. Free neutrons do not directly ionize atoms, but they do indirectly cause ionizing radiation, so they can be a biological hazard, depending on dose. A small natural "neutron background" flux of free neutrons exists on Earth, caused by cosmic ray showers, and by the natural radioactivity of spontaneously fissionable elements in the Earth's crust. Neutrons in an atomic nucleus An atomic nucleus is formed by a number of protons, Z (the atomic number), and a number of neutrons, N (the neutron number), bound together by the nuclear force. Protons and neutrons each have a mass of approximately one dalton. The atomic number determines the chemical properties of the atom, and the neutron number determines the isotope or nuclide. 
The terms isotope and nuclide are often used synonymously, but they refer to chemical and nuclear properties, respectively. Isotopes are nuclides with the same atomic number, but different neutron number. Nuclides with the same neutron number, but different atomic number, are called isotones. The atomic mass number, A, is equal to the sum of atomic and neutron numbers. Nuclides with the same atomic mass number, but different atomic and neutron numbers, are called isobars. The mass of a nucleus is always slightly less than the sum of its proton and neutron masses: the difference in mass represents the mass equivalent to nuclear binding energy, the energy which would need to be added to take the nucleus apart. The nucleus of the most common isotope of the hydrogen atom (with the chemical symbol 1H) is a lone proton. The nuclei of the heavy hydrogen isotopes deuterium (D or 2H) and tritium (T or 3H) contain one proton bound to one and two neutrons, respectively. All other types of atomic nuclei are composed of two or more protons and various numbers of neutrons. The most common nuclide of the common chemical element lead, 208Pb, has 82 protons and 126 neutrons, for example. The table of nuclides comprises all the known nuclides. Even though it is not a chemical element, the neutron is included in this table. Protons and neutrons behave almost identically under the influence of the nuclear force within the nucleus. They are therefore both referred to collectively as nucleons. The concept of isospin, in which the proton and neutron are viewed as two quantum states of the same particle, is used to model the interactions of nucleons by the nuclear or weak forces. Nuclear energy Because of the strength of the nuclear force at short distances, the nuclear energy binding nucleons is many orders of magnitude greater than the electromagnetic energy binding electrons in atoms. In nuclear fission, the absorption of a neutron by some heavy nuclides (such as uranium-235) can cause the nuclide to become unstable and break into lighter nuclides and additional neutrons. The positively charged light nuclides, or "fission fragments", then repel each other, releasing electromagnetic potential energy. If this reaction occurs within a mass of fissile material, the additional neutrons cause additional fission events, inducing a cascade known as a nuclear chain reaction. For a given mass of fissile material, such nuclear reactions release energy that is approximately ten million times that from an equivalent mass of a conventional chemical explosive. Ultimately, the ability of the nuclear force to store energy arising from the electromagnetic repulsion of nuclear components is the basis for most of the energy that makes nuclear reactors or bombs possible; most of the energy released from fission is the kinetic energy of the fission fragments. Beta decay Neutrons and protons within a nucleus behave similarly and can exchange their identities by similar reactions. These reactions are a form of radioactive decay known as beta decay. Beta decay, in which neutrons decay to protons, or vice versa, is governed by the weak force, and it requires the emission or absorption of electrons and neutrinos, or their antiparticles. The neutron and proton decay reactions are n → p + e− + ν̄e and p → n + e+ + νe, where p, e−, and ν̄e denote the proton, electron and electron anti-neutrino decay products, and where n, e+, and νe denote the neutron, positron and electron neutrino decay products. 
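The energy released in the neutron decay shown above can be checked from the particle rest energies (rounded values): m_n ≈ 939.565 MeV/c², m_p ≈ 938.272 MeV/c², m_e ≈ 0.511 MeV/c²:

\[ Q = (m_n - m_p - m_e)\,c^2 \approx 939.565 - 938.272 - 0.511 = 0.782\ \text{MeV}, \]

an energy shared among the decay products, with the antineutrino's rest mass negligibly small.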
The electron and positron produced in these reactions are historically known as beta particles, denoted β− or β+ respectively, lending the name to the decay process. In these reactions, the original particle is not composed of the product particles; rather, the product particles are created at the instant of the reaction. The "free" neutron "Free" neutrons or protons are nucleons that exist independently, free of any nucleus. The free neutron has a mass of about 1.675×10−27 kg, or 939.57 MeV/c2. This mass is equal to about 1.008665 Da, or roughly 1,839 electron masses. The neutron has a mean-square radius of about 0.8×10−15 m, or 0.8 fm, and it is a spin-½ fermion. The neutron has no measurable electric charge. With its positive electric charge, the proton is directly influenced by electric fields, whereas the neutron is unaffected by electric fields. The neutron has a magnetic moment, however, so it is influenced by magnetic fields. The specific properties of the neutron are described below in the Intrinsic properties section. Outside the nucleus, free neutrons undergo beta decay with a mean lifetime of about 14 minutes, 38 seconds, corresponding to a half-life of about 10 minutes, 11 seconds. The mass of the neutron is greater than that of the proton by about 1.293 MeV/c2 (about 0.14%), hence the neutron's mass provides energy sufficient for the creation of the proton, electron, and anti-neutrino. In the decay process, the proton, electron, and electron anti-neutrino conserve the energy, charge, and lepton number of the neutron. The electron can acquire a kinetic energy up to about 0.782 MeV. Still unexplained, different experimental methods for measuring the neutron's lifetime, the "bottle" and "beam" methods, produce different values for it. The "bottle" method employs "cold" neutrons trapped in a bottle, while the "beam" method employs energetic neutrons in a particle beam. The measurements by the two methods have not been converging with time. The lifetime from the bottle method is presently 877.75 s, which is about 10 seconds below the value from the beam method of 887.7 s. A small fraction (about one per thousand) of free neutrons decay with the same products, but add an extra particle in the form of an emitted gamma ray: n → p + e− + ν̄e + γ. Called a "radiative decay mode" of the neutron, the gamma ray may be thought of as resulting from an "internal bremsstrahlung" that arises from the electromagnetic interaction of the emitted beta particle with the proton. A smaller fraction (about four per million) of free neutrons decay in so-called "two-body (neutron) decays", in which a proton, electron and antineutrino are produced as usual, but the electron fails to gain the necessary energy to escape the proton (the ionization energy of hydrogen), and therefore simply remains bound to it, forming a neutral hydrogen atom (one of the "two bodies"). In this type of free neutron decay, almost all of the neutron decay energy is carried off by the antineutrino (the other "body"). (The hydrogen atom recoils with a speed of only about (decay energy)/(hydrogen rest energy) times the speed of light, or about 250 km/s.) 
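The mean lifetime and half-life quoted above are related by the standard exponential-decay identity

\[ t_{1/2} = \tau \ln 2 \approx 878\ \text{s} \times 0.6931 \approx 609\ \text{s}, \]

that is, a mean lifetime of 14 minutes, 38 seconds corresponds to a half-life of a little over 10 minutes, consistent with the figures given.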
Heavy nuclei carry a large positive charge, hence they require "extra" neutrons to be stable. While a free neutron is unstable and a free proton is stable, within nuclei neutrons are often stable and protons are sometimes unstable. When bound within a nucleus, nucleons can decay by the beta decay process. The neutrons and protons in a nucleus form a quantum mechanical system according to the nuclear shell model. Protons and neutrons of a nuclide are organized into discrete hierarchical energy levels with unique quantum numbers. Nucleon decay within a nucleus can occur if allowed by basic energy conservation and quantum mechanical constraints. The decay products, that is, the emitted particles, carry away the energy excess as a nucleon falls from one quantum state to one with less energy, while the neutron (or proton) changes to a proton (or neutron). For a neutron to decay, the resulting proton requires an available state at lower energy than the initial neutron state. In stable nuclei the possible lower energy states are all filled, meaning each state is occupied by a pair of protons, one with spin up, another with spin down. When all available proton states are filled, the Pauli exclusion principle disallows the decay of a neutron to a proton. The situation is similar to electrons of an atom, where electrons that occupy distinct atomic orbitals are prevented by the exclusion principle from decaying to lower, already-occupied, energy states. The stability of matter is a consequence of these constraints. The decay of a neutron within a nuclide is illustrated by the decay of the carbon isotope carbon-14, which has 6 protons and 8 neutrons. With its excess of neutrons, this isotope decays by beta decay to nitrogen-14 (7 protons, 7 neutrons), a process with a half-life of about 5,730 years. Nitrogen-14 is stable. "Beta decay" reactions can also occur by the capture of a lepton by the nucleon. The transformation of a proton to a neutron inside a nucleus is possible through electron capture: p + e− → n + νe. A rarer reaction, inverse beta decay, involves the capture of a neutrino by a nucleon. Rarer still, positron capture by neutrons can occur in the high-temperature environment of stars. Competition of beta decay types Three types of beta decay in competition are illustrated by the single isotope copper-64 (29 protons, 35 neutrons), which has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide is almost equally likely to undergo proton decay (by positron emission, 18%, or by electron capture, 43%; both forming 64Ni) or neutron decay (by electron emission, 39%; forming 64Zn). The neutron in elementary particle physics – the Standard Model Within the theoretical framework of the Standard Model for particle physics, a neutron comprises two down quarks with charge −1/3 e and one up quark with charge +2/3 e. The neutron is therefore a composite particle classified as a hadron. The neutron is also classified as a baryon, because it is composed of three valence quarks. The finite size of the neutron and its magnetic moment both indicate that the neutron is a composite, rather than elementary, particle. The quarks of the neutron are held together by the strong force, mediated by gluons. The nuclear force results from secondary effects of the more fundamental strong force. 
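The neutron's zero net charge and the proton's unit charge follow directly from these quark assignments:

\[ q_n = 2\left(-\tfrac{1}{3}e\right) + \tfrac{2}{3}e = 0, \qquad q_p = 2\left(\tfrac{2}{3}e\right) - \tfrac{1}{3}e = +e. \]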
The only possible decay mode for the neutron that obeys the conservation law for the baryon number is for one of the neutron's quarks to change flavour (through the Cabibbo–Kobayashi–Maskawa matrix) via the weak interaction. The decay of one of the neutron's down quarks into a lighter up quark can be achieved by the emission of a W boson. By this process, the Standard Model description of beta decay, the neutron decays into a proton (which contains one down and two up quarks), an electron, and an electron antineutrino. The decay of the proton to a neutron occurs similarly through the weak force. The decay of one of the proton's up quarks into a down quark can be achieved by the emission of a W boson. The proton decays into a neutron, a positron, and an electron neutrino. This reaction can only occur within an atomic nucleus which has a quantum state at lower energy available for the created neutron. Discovery The story of the discovery of the neutron and its properties is central to the extraordinary developments in atomic physics that occurred in the first half of the 20th century, leading ultimately to the atomic bomb in 1945. In the 1911 Rutherford model, the atom consisted of a small positively charged massive nucleus surrounded by a much larger cloud of negatively charged electrons. In 1920, Ernest Rutherford suggested that the nucleus consisted of positive protons and neutrally charged particles, thought to be a proton and an electron bound in some way. Electrons were assumed to reside within the nucleus because it was known that beta radiation consisted of electrons emitted from the nucleus. About the time Rutherford suggested the neutral proton-electron composite, several other publications appeared making similar suggestions, and in 1921 the American chemist W. D. Harkins first named the hypothetical particle a "neutron". The name derives from the Latin root neutralis (neuter) and the Greek suffix -on (a suffix used in the names of subatomic particles, as in electron and proton).
Physical sciences
Physics
null
21273
https://en.wikipedia.org/wiki/Neon
Neon
Neon is a chemical element; it has the symbol Ne and atomic number 10. It is the second noble gas in the periodic table. Neon is a colorless, odorless, inert monatomic gas under standard conditions, with approximately two-thirds the density of air. Neon was discovered in 1898 alongside krypton and xenon, identified as one of the three remaining rare inert elements in dry air after the removal of nitrogen, oxygen, argon, and carbon dioxide. Its discovery was marked by the distinctive bright red emission spectrum it exhibited, leading to its immediate recognition as a new element. The name neon originates from the Greek word νέον (néon), the neuter singular form of νέος (néos), meaning 'new'. Neon is a chemically inert gas; its known compounds are primarily ionic molecules or fragile molecules held together by van der Waals forces. The synthesis of most neon in the cosmos resulted from the nuclear fusion within stars of oxygen and helium through the alpha-capture process. Despite its abundant presence in the universe and Solar System—ranking fifth in cosmic abundance following hydrogen, helium, oxygen, and carbon—neon is comparatively scarce on Earth. It constitutes about 18.2 ppm of Earth's atmospheric volume and a lesser fraction in the Earth's crust. The high volatility of neon and its inability to form compounds that would anchor it to solids explain its limited presence on Earth and the inner terrestrial planets. Neon's high volatility facilitated its escape from planetesimals under the warmth of the early Solar System's nascent Sun. Neon's notable applications include its use in low-voltage neon glow lamps, high-voltage discharge tubes, and neon advertising signs, where it emits a distinct reddish-orange glow. This same red emission line is responsible for the characteristic red light of helium–neon lasers. Although neon has some applications in plasma tubes and as a refrigerant, its commercial uses are relatively limited. It is primarily obtained through the fractional distillation of liquid air, making it significantly more expensive than helium due to air being its sole source. History Neon was discovered in 1898 by the British chemists Sir William Ramsay (1852–1916) and Morris Travers (1872–1961) in London. Neon was discovered when Ramsay chilled a sample of air until it became a liquid, then warmed the liquid and captured the gases as they boiled off. The gases nitrogen, oxygen, and argon had been identified, but the remaining gases were isolated in roughly their order of abundance, in a six-week period beginning at the end of May 1898. The first remaining gas to be identified was krypton; the next, after krypton had been removed, was a gas which gave a brilliant red light under spectroscopic discharge. This gas, identified in June, was named "neon", the Greek analogue of the Latin novum ('new'), suggested by Ramsay's son. The characteristic brilliant red-orange color emitted by gaseous neon when excited electrically was noted immediately. Travers later wrote: "the blaze of crimson light from the tube told its own story and was a sight to dwell upon and never forget." A second gas was also reported along with neon, having approximately the same density as argon but with a different spectrum – Ramsay and Travers named it metargon. However, subsequent spectroscopic analysis revealed it to be argon contaminated with carbon monoxide. Finally, the same team discovered xenon by the same process, in September 1898. 
Neon's scarcity precluded its prompt application for lighting along the lines of Moore tubes, which used nitrogen and which were commercialized in the early 1900s. After 1902, Georges Claude's company Air Liquide produced industrial quantities of neon as a byproduct of his air-liquefaction business. In December 1910 Claude demonstrated modern neon lighting based on a sealed tube of neon. Claude tried briefly to sell neon tubes for indoor domestic lighting, due to their intensity, but the market failed because homeowners objected to the color. In 1912, Claude's associate began selling neon discharge tubes as eye-catching advertising signs and was instantly more successful. Neon tubes were introduced to the U.S. in 1923 with two large neon signs bought by a Los Angeles Packard car dealership. The glow and arresting red color made neon advertising completely different from the competition. The intense color and vibrancy of neon equated with American society at the time, suggesting a "century of progress" and transforming cities into sensational new environments filled with radiating advertisements and "electro-graphic architecture". Neon played a role in the basic understanding of the nature of atoms in 1913, when J. J. Thomson, as part of his exploration into the composition of canal rays, channeled streams of neon ions through a magnetic and an electric field and measured the deflection of the streams with a photographic plate. Thomson observed two separate patches of light on the photographic plate (see image), which suggested two different parabolas of deflection. Thomson eventually concluded that some of the atoms in the neon gas were of higher mass than the rest. Though not understood at the time by Thomson, this was the first discovery of isotopes of stable atoms. Thomson's device was a crude version of the instrument we now term a mass spectrometer. Isotopes Neon has three stable isotopes: 20Ne (90.48%), 21Ne (0.27%) and 22Ne (9.25%). 21Ne and 22Ne are partly primordial and partly nucleogenic (i.e. made by nuclear reactions of other nuclides with neutrons or other particles in the environment) and their variations in natural abundance are well understood. In contrast, 20Ne (the chief primordial isotope made in stellar nucleosynthesis) is not known to be nucleogenic or radiogenic, except from the decay of oxygen-20, which is produced in very rare cases of cluster decay by thorium-228. The causes of the variation of 20Ne in the Earth have thus been hotly debated. The principal nuclear reactions generating nucleogenic neon isotopes start from 24Mg and 25Mg, which produce 21Ne and 22Ne respectively, after neutron capture and immediate emission of an alpha particle. The neutrons that produce the reactions are mostly produced by secondary spallation reactions from alpha particles, in turn derived from uranium-series decay chains. The net result yields a trend towards lower 20Ne/22Ne and higher 21Ne/22Ne ratios observed in uranium-rich rocks such as granites. In addition, isotopic analysis of exposed terrestrial rocks has demonstrated the cosmogenic (cosmic ray) production of 21Ne. This isotope is generated by spallation reactions on magnesium, sodium, silicon, and aluminium. By analyzing all three isotopes, the cosmogenic component can be resolved from magmatic neon and nucleogenic neon. This suggests that neon will be a useful tool in determining cosmic exposure ages of surface rocks and meteorites. 
Neon in solar wind contains a higher proportion of 20Ne than nucleogenic and cosmogenic sources. Neon content observed in samples of volcanic gases and diamonds is also enriched in 20Ne, suggesting a primordial, possibly solar origin. Characteristics Neon is the second-lightest noble gas, after helium. Like other noble gases, neon is colorless and odorless. It glows reddish-orange in a vacuum discharge tube. It has over 40 times the refrigerating capacity (per unit volume) of liquid helium and three times that of liquid hydrogen. In most applications it is a less expensive refrigerant than helium. Although helium has the higher ionization energy, neon is theorized to be the least reactive of all the elements, even less reactive than helium. Neon plasma has the most intense light discharge at normal voltages and currents of all the noble gases. The average color of this light to the human eye is red-orange due to many lines in this range; it also contains a strong green line, which is hidden unless the visual components are dispersed by a spectroscope. Occurrence Stable isotopes of neon are produced in stars. Neon's most abundant isotope 20Ne (90.48%) is created by the nuclear fusion of two carbon nuclei in the carbon-burning process of stellar nucleosynthesis. This requires temperatures above 500 megakelvins, which occur in the cores of stars of more than 8 solar masses. Neon is abundant on a universal scale; it is the fifth most abundant chemical element in the universe by mass, after hydrogen, helium, oxygen, and carbon (see chemical element). Its relative rarity on Earth, like that of helium, is due to its relative lightness, high vapor pressure at very low temperatures, and chemical inertness, all properties which tend to keep it from being trapped in the condensing gas and dust clouds that formed the smaller and warmer solid planets like Earth. Neon is monatomic, making it lighter than the molecules of diatomic nitrogen and oxygen which form the bulk of Earth's atmosphere; a balloon filled with neon will rise in air, albeit more slowly than a helium balloon. Neon's abundance in the universe is about 1 part in 750 by mass; in the Sun and presumably in its proto-solar system nebula, about 1 part in 600. The Galileo spacecraft atmospheric entry probe found that in the upper atmosphere of Jupiter, the abundance of neon is reduced (depleted) by about a factor of 10, to a level of 1 part in 6,000 by mass. This may indicate that the ice-planetesimals that brought neon into Jupiter from the outer solar system formed in a region that was too warm to retain the neon atmospheric component (abundances of heavier inert gases on Jupiter are several times that found in the Sun), or that neon is selectively sequestered in the planet's interior. Neon comprises 1 part in 55,000 in the Earth's atmosphere, or 18.2 ppm by volume (this is about the same as the molecule or mole fraction), or 1 part in 79,000 of air by mass. It comprises a smaller fraction in the crust. It is industrially produced by cryogenic fractional distillation of liquefied air. On 17 August 2015, based on studies with the Lunar Atmosphere and Dust Environment Explorer (LADEE) spacecraft, NASA scientists reported the detection of neon in the exosphere of the Moon. Chemistry Neon is the first p-block noble gas and the first element with a true octet of electrons. It is inert: as is the case with its lighter analog, helium, no strongly bound neutral molecules containing neon have been identified. 
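The buoyancy comparison above is a simple molar-mass calculation: at equal temperature and pressure, gas density is proportional to molar mass, so

\[ \frac{\rho_{\mathrm{Ne}}}{\rho_{\mathrm{air}}} \approx \frac{20.18}{28.97} \approx 0.70, \qquad \frac{\rho_{\mathrm{He}}}{\rho_{\mathrm{air}}} \approx \frac{4.00}{28.97} \approx 0.14, \]

which is the origin of the roughly two-thirds figure for neon's density relative to air and of the weaker lift of a neon balloon compared with a helium one.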
An example of a neon compound is Cr(CO)5Ne, which contains a very weak Ne–Cr bond. The ions [NeAr]+, [NeH]+, and [HeNe]+ have been observed from optical and mass spectrometric studies. Solid neon clathrate hydrate was produced from water ice and neon gas at pressures of 350–480 MPa and temperatures of about −30 °C. Ne atoms are not bonded to water and can move freely through this material. They can be extracted by placing the clathrate into a vacuum chamber for several days, yielding ice XVI, the least dense crystalline form of water. The familiar Pauling electronegativity scale relies upon chemical bond energies, but such values have obviously not been measured for inert helium and neon. The Allen electronegativity scale, which relies only upon (measurable) atomic energies, identifies neon as the most electronegative element, closely followed by fluorine and helium. The triple point temperature of neon (24.5561 K) is a defining fixed point in the International Temperature Scale of 1990. Production Neon is produced from air in cryogenic air-separation plants. A gas-phase mixture mainly of nitrogen, neon, helium, and hydrogen is withdrawn from the main condenser at the top of the high-pressure air-separation column and fed to the bottom of a side column for rectification of the neon. It can then be further purified from helium by bringing it into contact with activated charcoal. Hydrogen is removed from the neon by adding oxygen, which converts it to water that is then condensed out. One pound of pure neon can be produced from the processing of 88,000 pounds of the gas-phase mixture. Before the 2022 escalation of Russia's war against Ukraine, about 70% of the global neon supply was purified in Ukraine from crude neon obtained as a by-product of steel production in Russia. The company Iceblick, with plants in Odesa and Moscow, supplies 65% of the world's production of neon, as well as 15% of the krypton and xenon. 2022 shortage Global neon prices jumped by about 600% after the 2014 Russian annexation of Crimea, spurring some chip manufacturers to start shifting away from Russian and Ukrainian suppliers and toward suppliers in China. The 2022 Russian invasion of Ukraine also shut down two companies in Ukraine that produced about half of the global supply: Cryoin Engineering and Inhaz, located in Odesa and Mariupol, respectively. The closure was predicted to exacerbate the COVID-19 chip shortage, which may further shift neon production to China. Applications Lighting and signage Two quite different kinds of neon lighting are in common use. Neon glow lamps are generally tiny, with most operating between 100 and 250 volts. They have been widely used as power-on indicators and in circuit-testing equipment, but light-emitting diodes (LEDs) now dominate in those applications. These simple neon devices were the forerunners of plasma displays and plasma television screens. Neon signs typically operate at much higher voltages (2–15 kilovolts), and the luminous tubes are commonly meters long. The glass tubing is often formed into shapes and letters for signage, as well as architectural and artistic applications. In neon signs, neon produces an unmistakable bright reddish-orange light when electric current passes through it under low pressure. Although tube lights with other colors are often called "neon", they use different noble gases or varied colors of fluorescent lighting; for example, argon produces a lavender or blue hue. As of 2012, over one hundred colors were available. 
Other Neon is used in vacuum tubes, high-voltage indicators, lightning arresters, wavemeter tubes, television tubes, and helium–neon lasers. Gas mixtures that include high-purity neon are used in lasers for photolithography in semiconductor device fabrication. Liquefied neon is commercially used as a cryogenic refrigerant in applications not requiring the lower temperature range attainable with the more extreme liquid helium refrigeration.
Nickel
Nickel is a chemical element; it has symbol Ni and atomic number 28. It is a silvery-white lustrous metal with a slight golden tinge. Nickel is a hard and ductile transition metal. Pure nickel is chemically reactive, but large pieces are slow to react with air under standard conditions because a passivation layer of nickel oxide forms on the surface that prevents further corrosion. Even so, pure native nickel is found in Earth's crust only in tiny amounts, usually in ultramafic rocks, and in the interiors of larger nickel–iron meteorites that were not exposed to oxygen when outside Earth's atmosphere. Meteoric nickel is found in combination with iron, a reflection of the origin of those elements as major end products of supernova nucleosynthesis. An iron–nickel mixture is thought to compose Earth's outer and inner cores. Use of nickel (as natural meteoric nickel–iron alloy) has been traced as far back as 3500 BCE. Nickel was first isolated and classified as an element in 1751 by Axel Fredrik Cronstedt, who initially mistook the ore for a copper mineral, in the cobalt mines of Los, Hälsingland, Sweden. The element's name comes from a mischievous sprite of German miner mythology, Nickel (similar to Old Nick). Nickel minerals can be green, like copper ores, and were known as kupfernickel – Nickel's copper – because they produced no copper. Although most nickel in the earth's crust exists as oxides, economically more important nickel ores are sulfides, especially pentlandite. Major production sites include Sulawesi, Indonesia, the Sudbury region, Canada (which is thought to be of meteoric origin), New Caledonia in the Pacific, Western Australia, and Norilsk, Russia. Nickel is one of four elements (the others are iron, cobalt, and gadolinium) that are ferromagnetic at about room temperature. Alnico permanent magnets based partly on nickel are of intermediate strength between iron-based permanent magnets and rare-earth magnets. The metal is used chiefly in alloys and corrosion-resistant plating. About 68% of world production is used in stainless steel. A further 10% is used for nickel-based and copper-based alloys, 9% for plating, 7% for alloy steels, 3% in foundries, and 4% in other applications such as in rechargeable batteries, including those in electric vehicles (EVs). Nickel is widely used in coins, though nickel-plated objects sometimes provoke nickel allergy. As a compound, nickel has a number of niche chemical manufacturing uses, such as a catalyst for hydrogenation, cathodes for rechargeable batteries, pigments and metal surface treatments. Nickel is an essential nutrient for some microorganisms and plants that have enzymes with nickel as an active site. Properties Atomic and physical properties Nickel is a silvery-white metal with a slight golden tinge that takes a high polish. It is one of only four elements that are ferromagnetic at or near room temperature; the others are iron, cobalt and gadolinium. Its Curie temperature is about 355 °C (628 K), meaning that bulk nickel is non-magnetic above this temperature. The unit cell of nickel is face-centered cubic; it has a lattice parameter of 0.352 nm, giving an atomic radius of 0.124 nm. This crystal structure is stable to pressures of at least 70 GPa. Nickel is hard, malleable and ductile, and has a relatively high electrical and thermal conductivity for transition metals. The high compressive strength of 34 GPa, predicted for ideal crystals, is never obtained in the real bulk material due to formation and movement of dislocations.
However, it has been reached in Ni nanoparticles. Electron configuration dispute Nickel has two atomic electron configurations, [Ar] 3d8 4s2 and [Ar] 3d9 4s1, which are very close in energy; [Ar] denotes the complete argon core structure. There is some disagreement on which configuration has the lower energy. Chemistry textbooks quote nickel's electron configuration as [Ar] 4s2 3d8, also written [Ar] 3d8 4s2. This configuration agrees with the Madelung energy ordering rule, which predicts that 4s is filled before 3d. It is supported by the experimental fact that the lowest energy state of the nickel atom is a 3d8 4s2 energy level, specifically the 3d8(3F) 4s2 3F, J = 4 level. However, each of these two configurations splits into several energy levels due to fine structure, and the two sets of energy levels overlap. The average energy of states with [Ar] 3d9 4s1 is actually lower than the average energy of states with [Ar] 3d8 4s2. Therefore, the research literature on atomic calculations quotes the ground state configuration as [Ar] 3d9 4s1. Isotopes The isotopes of nickel range in atomic weight from 48 u (48Ni) to 82 u (82Ni). Natural nickel is composed of five stable isotopes, 58Ni, 60Ni, 61Ni, 62Ni, and 64Ni, of which 58Ni is the most abundant (68.077% natural abundance). Nickel-62 has the highest binding energy per nucleon of any nuclide: 8.7946 MeV/nucleon. Its binding energy is greater than both 56Fe and 58Fe, more abundant nuclides often incorrectly cited as having the highest binding energy. Though this would seem to predict nickel as the most abundant heavy element in the universe, the high rate of photodisintegration of nickel in stellar interiors causes iron to be by far the most abundant. Nickel-60 is the daughter product of the extinct radionuclide 60Fe (half-life 2.6 million years). Due to the long half-life of 60Fe, its persistence in materials in the Solar System may generate observable variations in the isotopic composition of 60Ni. Therefore, the abundance of 60Ni in extraterrestrial material may give insight into the origin of the Solar System and its early history. At least 26 nickel radioisotopes have been characterized; the most stable are 59Ni with a half-life of 76,000 years, 63Ni (100 years), and 56Ni (6 days). All other radioisotopes have half-lives less than 60 hours, and most of these have half-lives less than 30 seconds. This element also has one meta state. Radioactive nickel-56 is produced by the silicon burning process and later set free in large amounts in type Ia supernovae. The shape of the light curve of these supernovae at intermediate to late times corresponds to the decay via electron capture of nickel-56 to cobalt-56 and ultimately to iron-56. Nickel-59 is a long-lived cosmogenic radionuclide with a half-life of 76,000 years. 59Ni has found many applications in isotope geology; it has been used to date the terrestrial age of meteorites and to determine abundances of extraterrestrial dust in ice and sediment. Nickel-78, with a half-life of 110 milliseconds, is believed to be an important isotope in supernova nucleosynthesis of elements heavier than iron. 48Ni, discovered in 1999, is the most proton-rich heavy element isotope known. With 28 protons and 20 neutrons, 48Ni is "doubly magic", as is 78Ni with 28 protons and 50 neutrons. Both are therefore unusually stable for nuclei with so large a proton–neutron imbalance. Nickel-63 is a contaminant found in the support structure of nuclear reactors. It is produced through neutron capture by nickel-62. Small amounts have also been found near nuclear weapon test sites in the South Pacific.
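The 8.7946 MeV/nucleon figure for 62Ni can be reproduced from the nuclear mass defect. A minimal Python sketch, assuming standard textbook values for the atomic masses and the u-to-MeV conversion (none of these constants appear in this article):

    # Binding energy per nucleon of 62Ni from the mass defect.
    # Atomic (not bare-nuclear) masses are used, so electron masses cancel.
    U_TO_MEV = 931.494    # MeV per atomic mass unit
    M_H1 = 1.0078250      # u, hydrogen-1 atom
    M_N = 1.0086649       # u, free neutron
    M_NI62 = 61.9283449   # u, nickel-62 atom
    Z, N = 28, 34         # protons and neutrons in 62Ni

    mass_defect = Z * M_H1 + N * M_N - M_NI62      # in u
    binding = mass_defect * U_TO_MEV / (Z + N)     # MeV per nucleon
    print(f"{binding:.4f} MeV/nucleon")            # ~8.795, matching the text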
Occurrence Nickel ores are classified as oxides or sulfides. Oxides include laterite, where the principal mineral mixtures are nickeliferous limonite, (Fe,Ni)O(OH), and garnierite (a mixture of various hydrous nickel and nickel-rich silicates). Nickel sulfides commonly exist as solid solutions with iron in minerals such as pentlandite and pyrrhotite, with the formulas Fe9−xNixS8 and Fe7−xNixS6, respectively. Other common Ni-containing minerals are millerite and the arsenide niccolite. Identified land-based resources throughout the world averaging 1% nickel or greater comprise at least 130 million tons of nickel (about double the known reserves). About 60% is in laterites and 40% in sulfide deposits. On geophysical evidence, most of the nickel on Earth is believed to be in Earth's outer and inner cores. Kamacite and taenite are naturally occurring alloys of iron and nickel. For kamacite, the alloy is usually in the proportion of 90:10 to 95:5, though impurities (such as cobalt or carbon) may be present. Taenite is 20% to 65% nickel. Both alloys are also found in nickel–iron meteorites. Nickel in meteorites was first detected in 1799 by Joseph-Louis Proust, a French chemist who then worked in Spain. Proust analyzed samples of the meteorite from Campo del Cielo (Argentina), which had been obtained in 1783 by Miguel Rubín de Celis, discovering the presence in them of nickel (about 10%) along with iron. Compounds The most common oxidation state of nickel is +2, but compounds of nickel in the 0, +1, and +3 states are well known, and more exotic oxidation states have also been characterized. Nickel(0) Nickel tetracarbonyl (Ni(CO)4), discovered by Ludwig Mond, is a volatile, highly toxic liquid at room temperature. On heating, the complex decomposes back to nickel and carbon monoxide: Ni(CO)4 ⇌ Ni + 4 CO. This behavior is exploited in the Mond process for purifying nickel, as described below. The related nickel(0) complex bis(cyclooctadiene)nickel(0) is a useful catalyst in organonickel chemistry because the cyclooctadiene (or cod) ligands are easily displaced. Nickel(I) Nickel(I) complexes are uncommon, but one example is the tetrahedral complex NiBr(PPh3)3. Many nickel(I) complexes have Ni–Ni bonding, such as the dark red diamagnetic K4[Ni2(CN)6], prepared by reduction of K2[Ni(CN)4] with sodium amalgam. This compound is oxidized in water, liberating H2. It is thought that the nickel(I) oxidation state is important to nickel-containing enzymes, such as [NiFe]-hydrogenase, which catalyzes the reversible reduction of protons to H2. Nickel(II) Nickel(II) forms compounds with all common anions, including sulfide, sulfate, carbonate, hydroxide, carboxylates, and halides. Nickel(II) sulfate is produced in large amounts by dissolving nickel metal or oxides in sulfuric acid, forming both a hexa- and heptahydrate useful for electroplating nickel. Common salts of nickel, such as chloride, nitrate, and sulfate, dissolve in water to give green solutions of the metal aquo complex [Ni(H2O)6]2+. The four halides form nickel(II) compounds, which are solids with structures featuring octahedral Ni centres. Nickel(II) chloride is most common, and its behavior is illustrative of the other halides. Nickel(II) chloride is made by dissolving nickel or its oxide in hydrochloric acid. It is usually found as the green hexahydrate, whose formula is usually written NiCl2·6H2O. When dissolved in water, this salt forms the metal aquo complex [Ni(H2O)6]2+. Dehydration of the hexahydrate gives yellow anhydrous NiCl2. Some tetracoordinate nickel(II) complexes, e.g. bis(triphenylphosphine)nickel chloride, exist in both tetrahedral and square planar geometries.
The tetrahedral complexes are paramagnetic; the square planar complexes are diamagnetic. In this equilibrium between magnetic forms, and in forming octahedral complexes, they contrast with the divalent complexes of the heavier group 10 metals, palladium(II) and platinum(II), which form only square-planar geometry. Nickelocene has an electron count of 20. Many chemical reactions of nickelocene tend to yield 18-electron products. Nickel(III) and (IV) Many Ni(III) compounds are known. Ni(III) forms simple salts with fluoride or oxide ions. Ni(III) can be stabilized by σ-donor ligands such as thiols and organophosphines. Ni(III) occurs in nickel oxide hydroxide, which is used as the cathode in many rechargeable batteries, including nickel–cadmium, nickel–iron, nickel–hydrogen, and nickel–metal hydride, and is used by certain manufacturers in Li-ion batteries. Ni(IV) remains a rare oxidation state, and very few Ni(IV) compounds are known; one example is the mixed oxide BaNiO3. History Unintentional use of nickel can be traced back as far as 3500 BCE. Bronzes from what is now Syria have been found to contain as much as 2% nickel. Some ancient Chinese manuscripts suggest that "white copper" (cupronickel, known as baitong) was used there in 1700–1400 BCE. This Paktong white copper was exported to Britain as early as the 17th century, but the nickel content of this alloy was not discovered until 1822. Coins of nickel-copper alloy were minted by the Bactrian kings Agathocles, Euthydemus II, and Pantaleon in the 2nd century BCE, possibly out of the Chinese cupronickel. In medieval Germany, a metallic yellow mineral was found in the Ore Mountains that resembled copper ore. But when miners were unable to get any copper from it, they blamed a mischievous sprite of German mythology, Nickel (similar to Old Nick), for besetting the copper. They called this ore Kupfernickel, from the German Kupfer, 'copper'. This ore is now known as the mineral nickeline (formerly niccolite), a nickel arsenide. In 1751, Baron Axel Fredrik Cronstedt tried to extract copper from kupfernickel at a cobalt mine in the village of Los, Sweden, and instead produced a white metal that he named nickel after the spirit that had given its name to the mineral. In modern German, Kupfernickel or Kupfer-Nickel designates the alloy cupronickel. Originally, the only source for nickel was the rare Kupfernickel. Beginning in 1824, nickel was obtained as a byproduct of cobalt blue production. The first large-scale smelting of nickel began in Norway in 1848 from nickel-rich pyrrhotite. The introduction of nickel in steel production in 1889 increased the demand for nickel; the nickel deposits of New Caledonia, discovered in 1865, provided most of the world's supply between 1875 and 1915. The discovery of the large deposits in the Sudbury Basin in Canada in 1883, in Norilsk-Talnakh in Russia in 1920, and in the Merensky Reef in South Africa in 1924 made large-scale nickel production possible. Coinage Aside from the aforementioned Bactrian coins, nickel was not a component of coins until the mid-19th century. Canada 99.9% nickel five-cent coins were struck in Canada (the world's largest nickel producer at the time) during non-war years from 1922 to 1981; the metal content made these coins magnetic. During the war years 1942–1945, most or all nickel was removed from Canadian and US coins to save it for making armor. Canada used 99.9% nickel from 1968 in its higher-value coins until 2000. Switzerland Coins of nearly pure nickel were first used in 1881 in Switzerland.
United Kingdom Birmingham forged nickel coins for trading in Malaysia. United States In the United States, the term "nickel" or "nick" originally applied to the copper-nickel Flying Eagle cent, which replaced copper with 12% nickel in 1857–58, then the Indian Head cent of the same alloy from 1859 to 1864. Still later, in 1865, the term designated the three-cent nickel, with nickel increased to 25%. In 1866, the five-cent shield nickel (25% nickel, 75% copper) appropriated the designation, which has been used ever since for the subsequent 5-cent pieces. This alloy proportion is not ferromagnetic. The US nickel coin contains 1.25 grams of nickel, which at the April 2007 price was worth 6.5 cents, along with 3.75 grams of copper worth about 3 cents, for a total metal value of more than 9 cents. Since the face value of a nickel is 5 cents, this made it an attractive target for melting by people wanting to sell the metals at a profit. The United States Mint, anticipating this practice, implemented new interim rules on December 14, 2006, subject to public comment for 30 days, which criminalized the melting and export of cents and nickels. Violators can be punished with a fine of up to $10,000 and/or a maximum of five years in prison. As of September 19, 2013, the melt value of a US nickel (copper and nickel included) was $0.045 (90% of the face value). Current use In the 21st century, the high price of nickel has led to some replacement of the metal in coins around the world. Coins still made with nickel alloys include one- and two-euro coins, 5¢, 10¢, 25¢, 50¢, and $1 U.S. coins, and 20p, 50p, £1, and £2 UK coins. From 2012 on, the nickel alloy used for 5p and 10p UK coins was replaced with nickel-plated steel. This ignited a public controversy regarding the problems of people with nickel allergy. World production An estimated 3.6 million tonnes (t) of nickel per year are mined worldwide; Indonesia (1,800,000 t), the Philippines (400,000 t), New Caledonia (France) (230,000 t), Russia (200,000 t), Canada (180,000 t) and Australia (160,000 t) are the largest producers as of 2023. The largest nickel deposits in non-Russian Europe are in Finland and Greece. Identified land-based sources averaging at least 1% nickel contain at least 130 million tonnes of nickel. About 60% is in laterites and 40% is in sulfide deposits. Also, extensive nickel sources are found in the depths of the Pacific Ocean, especially in an area called the Clarion Clipperton Zone, in the form of polymetallic nodules peppering the seafloor at 3.5–6 km below sea level. These nodules contain a range of metals, including rare-earth elements, and are estimated to be 1.7% nickel. With advances in science and engineering, regulation is currently being set in place by the International Seabed Authority to ensure that these nodules are collected in an environmentally conscientious manner while adhering to the United Nations Sustainable Development Goals. The one place in the United States where nickel has been profitably mined is Riddle, Oregon, with several square miles of nickel-bearing garnierite surface deposits. The mine closed in 1987. The Eagle mine project is a new nickel mine in Michigan's Upper Peninsula. Construction was completed in 2013, and operations began in the third quarter of 2014. In the first full year of operation, the Eagle Mine produced 18,000 t.
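The coin melt-value arithmetic in the Coinage section above is easy to reproduce. A minimal Python sketch; the nickel price is the April 2007 figure quoted in the Market value section below, while the copper price is an assumed, roughly era-appropriate placeholder rather than a sourced number:

    # Melt value of a US nickel: 5.000 g of 75% Cu / 25% Ni cupronickel.
    NI_GRAMS, CU_GRAMS = 1.25, 3.75
    NI_USD_PER_TONNE = 52_300   # April 2007 peak, quoted below
    CU_USD_PER_TONNE = 8_000    # assumption, for illustration only

    value_usd = (NI_GRAMS * NI_USD_PER_TONNE + CU_GRAMS * CU_USD_PER_TONNE) / 1e6
    print(f"metal value: {100 * value_usd:.1f} cents vs 5 cent face value")  # ~9.5

With these inputs the coin's metal content comes to roughly 9.5 cents, consistent with the "more than 9 cents" figure quoted above.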
Production Nickel is obtained through extractive metallurgy: it is extracted from ore by conventional roasting and reduction processes that yield metal of greater than 75% purity. In many stainless steel applications, 75% pure nickel can be used without further purification, depending on the impurities. Traditionally, most sulfide ores are processed using pyrometallurgical techniques to produce a matte for further refining; hydrometallurgical techniques are also used. Most sulfide deposits have traditionally been processed by concentration through a froth flotation process followed by pyrometallurgical extraction. The nickel matte is further processed with the Sherritt-Gordon process. First, copper is removed by adding hydrogen sulfide, leaving a concentrate of cobalt and nickel. Then, solvent extraction is used to separate the cobalt and nickel, with the final nickel content greater than 86%. A second common refining process is leaching the metal matte into a nickel salt solution, followed by electrowinning the nickel from solution by plating it onto a cathode as electrolytic nickel. Mond process The purest metal is obtained from nickel oxide by the Mond process, which gives a purity of over 99.99%. The process was patented by Ludwig Mond and has been in industrial use since before the beginning of the 20th century. In this process, nickel is treated with carbon monoxide in the presence of a sulfur catalyst at around 40–80 °C to form nickel carbonyl. In a similar reaction with iron, iron pentacarbonyl can form, though this reaction is slow. If necessary, the nickel may be separated from it by distillation. Dicobalt octacarbonyl is also formed in nickel distillation as a by-product, but it decomposes to tetracobalt dodecacarbonyl at the reaction temperature to give a non-volatile solid. Nickel is obtained from nickel carbonyl by one of two processes. It may be passed through a large chamber at high temperatures in which tens of thousands of nickel spheres (pellets) are constantly stirred. The carbonyl decomposes and deposits pure nickel onto the spheres. In the alternate process, nickel carbonyl is decomposed in a smaller chamber at 230 °C to create a fine nickel powder. The byproduct carbon monoxide is recirculated and reused. The highly pure nickel product is known as "carbonyl nickel". Market value The market price of nickel surged throughout 2006 and the early months of 2007; at its April 2007 peak, the metal was trading at US$52,300/tonne, or $1.47/oz. The price later fell dramatically, to $11,000/tonne, or $0.31/oz. During the 2022 Russian invasion of Ukraine, worries about sanctions on Russian nickel exports triggered a short squeeze, causing the price of nickel to quadruple in just two days, reaching US$100,000 per tonne. The London Metal Exchange cancelled contracts worth $3.9 billion and suspended nickel trading for over a week. Analyst Andy Home argued that such price shocks are exacerbated by the purity requirements imposed by metal markets: only Grade I (99.8% pure) metal can be used as a commodity on the exchanges, but most of the world's supply is either in ferro-nickel alloys or lower-grade purities. Applications Global use of nickel is currently 68% in stainless steel, 10% in nonferrous alloys, 9% in electroplating, 7% in alloy steel, 3% in foundries, and 4% in other uses (including batteries). Nickel is used in many recognizable industrial and consumer products, including stainless steel, alnico magnets, coinage, rechargeable batteries (e.g. nickel–iron), electric guitar strings, microphone capsules, plating on plumbing fixtures, and special alloys such as permalloy, elinvar, and invar. It is used for plating and as a green tint in glass.
Nickel is preeminently an alloy metal, and its chief use is in nickel steels and nickel cast irons, in which it typically increases the tensile strength, toughness, and elastic limit. It is widely used in many other alloys, including nickel brasses and bronzes and alloys with copper, chromium, aluminium, lead, cobalt, silver, and gold (Inconel, Incoloy, Monel, Nimonic). Nickel is traditionally used for kris production in Southeast Asia. Because nickel is resistant to corrosion, it was occasionally used as a substitute for decorative silver. Nickel was also occasionally used in some countries after 1859 as a cheap coinage metal (see above), but in the later years of the 20th century, it was replaced by cheaper stainless steel (i.e., iron) alloys, except in the United States and Canada. Nickel is an excellent alloying agent for certain precious metals and is used in the fire assay as a collector of platinum group elements (PGE). As such, nickel can fully collect all six PGEs from ores, and can partially collect gold. High-throughput nickel mines may also do PGE recovery (mainly platinum and palladium); examples are Norilsk, Russia and the Sudbury Basin, Canada. Nickel foam or nickel mesh is used in gas diffusion electrodes for alkaline fuel cells. Nickel and its alloys are often used as catalysts for hydrogenation reactions. Raney nickel, a finely divided nickel-aluminium alloy, is one common form, though related Raney-type catalysts are also used. Nickel is naturally magnetostrictive: in the presence of a magnetic field, the material undergoes a small change in length. The magnetostriction of nickel is on the order of 50 ppm and is negative, indicating that it contracts. Nickel is used as a binder in the cemented tungsten carbide or hardmetal industry, in proportions of 6% to 12% by weight. Nickel makes the tungsten carbide magnetic and adds corrosion resistance to the cemented parts, though the hardness is less than that of parts with a cobalt binder. Nickel-63, with a half-life of 100.1 years, is useful in krytron devices as a beta particle (high-speed electron) emitter to make ionization by the keep-alive electrode more reliable. It is also being investigated as a power source for betavoltaic batteries. Around 27% of all nickel production is used for engineering, 10% for building and construction, 14% for tubular products, 20% for metal goods, 14% for transport, 11% for electronic goods, and 5% for other uses. Raney nickel is widely used for hydrogenation of unsaturated oils to make margarine, and substandard margarine and leftover oil may contain nickel as a contaminant. Forte et al. found that type 2 diabetic patients have 0.89 ng/mL of Ni in the blood, relative to 0.77 ng/mL in control subjects. Nickel titanium is an alloy of roughly equal atomic percentages of its constituent metals which exhibits two closely related and unique properties: the shape memory effect and superelasticity.
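The 100.1-year half-life of nickel-63 quoted above is what makes it attractive as a long-lived emitter: its activity falls off only slowly over a device's service life. A minimal Python sketch of the standard exponential-decay arithmetic, using no inputs beyond that half-life:

    import math

    HALF_LIFE_Y = 100.1                     # years, nickel-63 (from the text)
    lam = math.log(2) / HALF_LIFE_Y         # decay constant, per year

    for years in (1, 10, 50):               # plausible device lifetimes
        remaining = math.exp(-lam * years)  # fraction of 63Ni (and activity) left
        print(f"after {years:>2} y: {100 * remaining:.1f}% of the activity remains")

This prints roughly 99.3%, 93.3%, and 70.7%, i.e. even after 50 years most of the emitter is still active.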
Biological role It was not recognized until the 1970s, but nickel is known to play an important role in the biology of some plants, bacteria, archaea, and fungi. Nickel enzymes such as urease are considered virulence factors in some organisms. Urease catalyzes the hydrolysis of urea to form ammonia and carbamate. NiFe hydrogenases can catalyze the oxidation of H2 to form protons and electrons, and also the reverse reaction, the reduction of protons to form hydrogen gas. A nickel-tetrapyrrole coenzyme, cofactor F430, is present in methyl coenzyme M reductase, which can catalyze the formation of methane, or the reverse reaction, in methanogenic archaea (with the nickel in the +1 oxidation state). One of the carbon monoxide dehydrogenase enzymes consists of an Fe-Ni-S cluster. Other nickel-bearing enzymes include a rare bacterial class of superoxide dismutase and glyoxalase I enzymes in bacteria and several eukaryotic trypanosomal parasites (in other organisms, including yeast and mammals, this enzyme contains divalent zinc). Dietary nickel may affect human health through infections by nickel-dependent bacteria, but nickel may also be an essential nutrient for bacteria living in the large intestine, in effect functioning as a prebiotic. The US Institute of Medicine has not confirmed that nickel is an essential nutrient for humans, so neither a Recommended Dietary Allowance (RDA) nor an Adequate Intake has been established. The tolerable upper intake level of dietary nickel is 1 mg/day as soluble nickel salts. Estimated dietary intake is 70 to 100 μg/day; less than 10% is absorbed. What is absorbed is excreted in urine. Relatively large amounts of nickel – comparable to the estimated average ingestion above – leach into food cooked in stainless steel. For example, the amount of nickel leached after 10 cooking cycles into one serving of tomato sauce averages 88 μg. Nickel released from Siberian Traps volcanic eruptions is suspected of helping the growth of Methanosarcina, a genus of euryarchaeote archaea that produced methane in the Permian–Triassic extinction event, the biggest known mass extinction. Toxicity The major source of nickel exposure is oral consumption, as nickel is essential to plants. Typical background concentrations of nickel do not exceed 20 ng/m3 in air, 100 mg/kg in soil, 10 mg/kg in vegetation, 10 μg/L in freshwater and 1 μg/L in seawater. Environmental concentrations may be increased by human pollution. For example, nickel-plated faucets may contaminate water and soil; mining and smelting may dump nickel into wastewater; nickel–steel alloy cookware and nickel-pigmented dishes may release nickel into food. Air may be polluted by nickel ore refining and fossil fuel combustion. Humans may absorb nickel directly from tobacco smoke and skin contact with jewelry, shampoos, detergents, and coins. A less common form of chronic exposure is through hemodialysis, as traces of nickel ions may be absorbed into the plasma from the chelating action of albumin. The average daily exposure is not a threat to human health. Most nickel absorbed by humans is removed by the kidneys and passed out of the body through urine, or is eliminated through the gastrointestinal tract without being absorbed. Nickel is not a cumulative poison, but larger doses or chronic inhalation exposure may be toxic, even carcinogenic, and constitute an occupational hazard. Nickel compounds are classified as human carcinogens based on increased respiratory cancer risks observed in epidemiological studies of sulfidic ore refinery workers. This is supported by the positive results of the NTP bioassays with Ni sub-sulfide and Ni oxide in rats and mice. The human and animal data consistently indicate a lack of carcinogenicity via the oral route of exposure and limit the carcinogenicity of nickel compounds to respiratory tumours after inhalation.
Nickel metal is classified as a suspect carcinogen; there is consistency between the absence of increased respiratory cancer risks in workers predominantly exposed to metallic nickel and the lack of respiratory tumours in a rat lifetime inhalation carcinogenicity study with nickel metal powder. In the rodent inhalation studies with various nickel compounds and nickel metal, increased lung inflammations with and without bronchial lymph node hyperplasia or fibrosis were observed. In rat studies, oral ingestion of water-soluble nickel salts can trigger perinatal mortality in pregnant animals. Whether these effects are relevant to humans is unclear, as epidemiological studies of highly exposed female workers have not shown adverse developmental toxicity effects. People can be exposed to nickel in the workplace by inhalation, ingestion, and contact with skin or eyes. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for the workplace at 1 mg/m3 per 8-hour workday, excluding nickel carbonyl. The National Institute for Occupational Safety and Health (NIOSH) sets the recommended exposure limit (REL) at 0.015 mg/m3 per 8-hour workday. At 10 mg/m3, nickel is immediately dangerous to life and health. Nickel carbonyl is an extremely toxic gas. The toxicity of metal carbonyls is a function of both the toxicity of the metal and the off-gassing of carbon monoxide from the carbonyl functional groups; nickel carbonyl is also explosive in air. Sensitized persons may show a skin contact allergy to nickel known as contact dermatitis. Highly sensitized persons may also react to foods with high nickel content. Patients with pompholyx may also be sensitive to nickel. Nickel is the top confirmed contact allergen worldwide, partly due to its use in jewelry for pierced ears. Nickel allergies affecting pierced ears are often marked by itchy, red skin. Many earrings are now made without nickel or with low-release nickel to address this problem. The amount allowed in products that contact human skin is now regulated by the European Union. In 2002, researchers found that the nickel released by 1 and 2 euro coins far exceeded those standards. This is believed to be due to a galvanic reaction. Nickel was voted Allergen of the Year in 2008 by the American Contact Dermatitis Society. In August 2015, the American Academy of Dermatology adopted a position statement on the safety of nickel: "Estimates suggest that contact dermatitis, which includes nickel sensitization, accounts for approximately $1.918 billion and affects nearly 72.29 million people." Reports show that both the nickel-induced activation of hypoxia-inducible factor (HIF-1) and the up-regulation of hypoxia-inducible genes are caused by depletion of intracellular ascorbate. The addition of ascorbate to the culture medium increased the intracellular ascorbate level and reversed both the metal-induced stabilization of HIF-1α and the HIF-1α-dependent gene expression. Nickel in popular culture In the second Oz book, The Marvelous Land of Oz (by L. Frank Baum, published by Reilly & Britton, 1904), the Tin Woodman states that he has had his tin body nickel-plated. He is thereafter very careful not to allow his nickel plating to get scratched, nicked, or marred.
Niobium
Niobium is a chemical element; it has symbol Nb (formerly columbium, Cb) and atomic number 41. It is a light grey, crystalline, and ductile transition metal. Pure niobium has a Mohs hardness rating similar to pure titanium, and it has a ductility similar to iron. Niobium oxidizes in Earth's atmosphere very slowly, hence its application in jewelry as a hypoallergenic alternative to nickel. Niobium is often found in the minerals pyrochlore and columbite. Its name comes from Greek mythology: Niobe, daughter of Tantalus, the namesake of tantalum. The name reflects the great similarity between the two elements in their physical and chemical properties, which makes them difficult to distinguish. English chemist Charles Hatchett reported a new element similar to tantalum in 1801 and named it columbium. In 1809, English chemist William Hyde Wollaston wrongly concluded that tantalum and columbium were identical. German chemist Heinrich Rose determined in 1846 that tantalum ores contain a second element, which he named niobium. In 1864 and 1865, a series of scientific findings clarified that niobium and columbium were the same element (as distinguished from tantalum), and for a century both names were used interchangeably. Niobium was officially adopted as the name of the element in 1949, but the name columbium remains in current use in metallurgy in the United States. It was not until the early 20th century that niobium was first used commercially. Niobium is an important addition to high-strength low-alloy steels. Brazil is the leading producer of niobium and ferroniobium, an alloy of 60–70% niobium with iron. Niobium is used mostly in alloys, the largest part in special steel such as that used in gas pipelines. Although these alloys contain a maximum of 0.1% niobium, the small percentage of niobium enhances the strength of the steel by scavenging carbide and nitride. The temperature stability of niobium-containing superalloys is important for its use in jet and rocket engines. Niobium is used in various superconducting materials. These alloys, also containing titanium and tin, are widely used in the superconducting magnets of MRI scanners. Other applications of niobium include welding, the nuclear industries, electronics, optics, numismatics, and jewelry. In the last two applications, the low toxicity and iridescence produced by anodization are highly desired properties. Niobium is considered a technology-critical element. History Niobium was identified by English chemist Charles Hatchett in 1801. He found a new element in a mineral sample that had been sent to England from Connecticut, United States in 1734 by John Winthrop FRS (grandson of John Winthrop the Younger), and named the mineral "columbite" and the new element "columbium" after Columbia, the poetic name for the United States. The columbium discovered by Hatchett was probably a mixture of the new element with tantalum. Subsequently, there was considerable confusion over the difference between columbium (niobium) and the closely related tantalum. In 1809, English chemist William Hyde Wollaston compared the oxides derived from both columbium—columbite, with a density of 5.918 g/cm3—and tantalum—tantalite, with a density over 8 g/cm3—and concluded that the two oxides, despite the significant difference in density, were identical; thus he kept the name tantalum.
This conclusion was disputed in 1846 by German chemist Heinrich Rose, who argued that there were two different elements in the tantalite sample, and named them after children of Tantalus: niobium (from Niobe) and pelopium (from Pelops). This confusion arose from the minimal observed differences between tantalum and niobium. The claimed new elements pelopium, ilmenium, and dianium were in fact identical to niobium or mixtures of niobium and tantalum. The differences between tantalum and niobium were unequivocally demonstrated in 1864 by Christian Wilhelm Blomstrand and Henri Étienne Sainte-Claire Deville, as well as by Louis J. Troost, who determined the formulas of some of the compounds in 1865, and finally by Swiss chemist Jean Charles Galissard de Marignac in 1866, who all proved that there were only two elements. Articles on ilmenium continued to appear until 1871. De Marignac was the first to prepare the metal in 1864, when he reduced niobium chloride by heating it in an atmosphere of hydrogen. Although de Marignac was able to produce tantalum-free niobium on a larger scale by 1866, it was not until the early 20th century that niobium was used in incandescent lamp filaments, the first commercial application. This use quickly became obsolete through the replacement of niobium with tungsten, which has a higher melting point. That niobium improves the strength of steel was first discovered in the 1920s, and this application remains its predominant use. In 1961, the American physicist Eugene Kunzler and coworkers at Bell Labs discovered that niobium–tin continues to exhibit superconductivity in the presence of strong electric currents and magnetic fields, making it the first material to support the high currents and fields necessary for useful high-power magnets and electrical power machinery. This discovery enabled—two decades later—the production of long multi-strand cables wound into coils to create large, powerful electromagnets for rotating machinery, particle accelerators, and particle detectors. Naming the element Columbium (symbol Cb) was the name originally given by Hatchett upon his discovery of the metal in 1801. The name reflected that the type specimen of the ore came from the United States of America (Columbia). This name remained in use in American journals—the last paper published by the American Chemical Society with columbium in its title dates from 1953—while niobium was used in Europe. To end this confusion, the name niobium was chosen for element 41 at the 15th Conference of the Union of Chemistry in Amsterdam in 1949. A year later this name was officially adopted by the International Union of Pure and Applied Chemistry (IUPAC) after 100 years of controversy, despite the chronological precedence of the name columbium. This was a compromise of sorts; the IUPAC accepted tungsten instead of wolfram in deference to North American usage, and niobium instead of columbium in deference to European usage. While many US chemical societies and government organizations typically use the official IUPAC name, some metallurgists and metal societies still use the original American name, "columbium". Characteristics Physical Niobium is a lustrous, grey, ductile, paramagnetic metal in group 5 of the periodic table, with an electron configuration in the outermost shells that is atypical for group 5. Similarly atypical configurations occur in the neighborhood of ruthenium (44) and rhodium (45).
Although it is thought to have a body-centered cubic crystal structure from absolute zero to its melting point, high-resolution measurements of the thermal expansion along the three crystallographic axes reveal anisotropies which are inconsistent with a cubic structure. Therefore, further research and discovery in this area is expected. Niobium becomes a superconductor at cryogenic temperatures. At atmospheric pressure, it has the highest critical temperature of the elemental superconductors, at 9.2 K. Niobium has the greatest magnetic penetration depth of any element. In addition, it is one of the three elemental Type II superconductors, along with vanadium and technetium. The superconductive properties are strongly dependent on the purity of the niobium metal. When very pure, it is comparatively soft and ductile, but impurities make it harder. The metal has a low capture cross-section for thermal neutrons; thus it is used in the nuclear industries where neutron-transparent structures are desired. Chemical The metal takes on a bluish tinge when exposed to air at room temperature for extended periods. Despite a high melting point in elemental form (2,468 °C), it is less dense than other refractory metals. Furthermore, it is corrosion-resistant, exhibits superconductivity properties, and forms dielectric oxide layers. Niobium is slightly less electropositive and more compact than its predecessor in the periodic table, zirconium, whereas it is virtually identical in size to the heavier tantalum atoms, as a result of the lanthanide contraction. As a result, niobium's chemical properties are very similar to those of tantalum, which appears directly below niobium in the periodic table. Although its corrosion resistance is not as outstanding as that of tantalum, the lower price and greater availability make niobium attractive for less demanding applications, such as vat linings in chemical plants. Isotopes Almost all of the niobium in Earth's crust is the one stable isotope, 93Nb. By 2003, at least 32 radioisotopes had been synthesized, ranging in atomic mass from 81 to 113. The most stable of these is 92Nb, with a half-life of 34.7 million years. Trace 92Nb has been detected in refined samples of terrestrial niobium and may originate from bombardment by cosmic ray muons in Earth's crust. One of the least stable niobium isotopes is 113Nb, with an estimated half-life of 30 milliseconds. Isotopes lighter than the stable 93Nb tend to undergo β+ decay, and those that are heavier tend to undergo β− decay, with some exceptions: a few of the lightest isotopes also have minor β+-delayed proton emission decay paths, and some nuclides decay by electron capture and positron emission, or by both β+ and β− decay. At least 25 nuclear isomers have been described, ranging in atomic mass from 84 to 104; only three nuclides within this range lack isomers. The most stable of niobium's isomers is 93mNb, with a half-life of 16.13 years; the least stable has a half-life of 103 ns. All of niobium's isomers decay by isomeric transition or beta decay, except for one that also has a minor electron capture branch.
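The β+/β− pattern just described is a general consequence of where an isotope sits relative to the valley of stability, and it can be expressed as a crude rule of thumb. A minimal Python sketch (illustrative only; real decay modes have the exceptions noted above):

    def likely_decay_mode(mass_number: int, stable_mass: int = 93) -> str:
        """Crude rule from the text: isotopes lighter than the stable
        nuclide are proton-rich and tend toward beta-plus decay or
        electron capture; heavier ones are neutron-rich and tend
        toward beta-minus decay. Exceptions exist."""
        if mass_number < stable_mass:
            return "beta-plus / electron capture"
        if mass_number > stable_mass:
            return "beta-minus"
        return "stable"

    for a in (89, 92, 93, 95, 104):
        print(f"{a}Nb: {likely_decay_mode(a)}")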
Occurrence Niobium is estimated to be the 33rd most abundant element in the Earth's crust, at 20 ppm. Some believe that the abundance on Earth is much greater, and that the element's high density has concentrated it in Earth's core. The free element is not found in nature, but niobium occurs in combination with other elements in minerals. Minerals that contain niobium often also contain tantalum; examples include columbite ((Fe,Mn)Nb2O6) and columbite–tantalite (or coltan, (Fe,Mn)(Nb,Ta)2O6). Columbite–tantalite minerals (the most common species being columbite-(Fe) and tantalite-(Fe), where "-(Fe)" is the Levinson suffix indicating the prevalence of iron over other elements such as manganese) are most usually found as accessory minerals in pegmatite intrusions and in alkaline intrusive rocks. Less common are the niobates of calcium, uranium, thorium and the rare earth elements. Examples of such niobates are pyrochlore (now a group name, a relatively common example being fluorcalciopyrochlore) and euxenite (correctly named euxenite-(Y)). Large deposits of niobium have been found associated with carbonatites (carbonate-silicate igneous rocks) and as a constituent of pyrochlore. The three largest currently mined deposits of pyrochlore, two in Brazil and one in Canada, were found in the 1950s, and are still the major producers of niobium mineral concentrates. The largest deposit is hosted within a carbonatite intrusion in Araxá, state of Minas Gerais, Brazil, owned by CBMM (Companhia Brasileira de Metalurgia e Mineração); the other active Brazilian deposit is located near Catalão, state of Goiás, and owned by China Molybdenum, also hosted within a carbonatite intrusion. Together, those two mines produce about 88% of the world's supply. Brazil also has a large but still unexploited deposit near São Gabriel da Cachoeira, state of Amazonas, as well as a few smaller deposits, notably in the state of Roraima. The third largest producer of niobium is the carbonatite-hosted Niobec mine, in Saint-Honoré, near Chicoutimi, Quebec, Canada, owned by Magris Resources. It produces between 7% and 10% of the world's supply. Production After the separation from the other minerals, the mixed oxides of tantalum and niobium are obtained. The first step in the processing is the reaction of the oxides with hydrofluoric acid: Ta2O5 + 14 HF → 2 H2[TaF7] + 5 H2O and Nb2O5 + 10 HF → 2 H2[NbOF5] + 3 H2O. The first industrial-scale separation, developed by Swiss chemist de Marignac, exploits the differing solubilities of the complex niobium and tantalum fluorides, dipotassium oxypentafluoroniobate monohydrate (K2[NbOF5]·H2O) and dipotassium heptafluorotantalate (K2[TaF7]), in water. Newer processes use the liquid extraction of the fluorides from aqueous solution by organic solvents like cyclohexanone. The complex niobium and tantalum fluorides are extracted separately from the organic solvent with water and either precipitated by the addition of potassium fluoride to produce a potassium fluoride complex, or precipitated with ammonia as the hydroxide, which is calcined to the pentoxide: H2[NbOF5] + 5 NH4OH → Nb(OH)5 + 5 NH4F + H2O, followed by 2 Nb(OH)5 → Nb2O5 + 5 H2O. Several methods are used for the reduction to metallic niobium. The electrolysis of a molten mixture of K2[NbOF5] and sodium chloride is one; the other is the reduction of the fluoride with sodium. With this method, a relatively high-purity niobium can be obtained. In large-scale production, Nb2O5 is reduced with hydrogen or carbon. In the aluminothermic reaction, a mixture of iron oxide and niobium oxide is reacted with aluminium: 3 Nb2O5 + Fe2O3 + 12 Al → 6 Nb + 2 Fe + 6 Al2O3. Small amounts of oxidizers like sodium nitrate are added to enhance the reaction. The result is aluminium oxide and ferroniobium, an alloy of iron and niobium used in steel production. Ferroniobium contains between 60 and 70% niobium. Without iron oxide, the aluminothermic process is used to produce niobium. Further purification is necessary to reach the grade for superconductive alloys. Electron beam melting under vacuum is the method used by the two major distributors of niobium. In recent years, CBMM from Brazil has controlled 85 percent of the world's niobium production.
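The balanced aluminothermic equation above fixes the proportions of the charge. A minimal Python sketch of the stoichiometry, assuming standard molar masses and an arbitrary 1 kg batch of Nb2O5 purely for illustration:

    # Stoichiometry of 3 Nb2O5 + Fe2O3 + 12 Al -> 6 Nb + 2 Fe + 6 Al2O3
    M_NB2O5, M_FE2O3, M_AL, M_NB = 265.81, 159.69, 26.98, 92.91  # g/mol

    m_charge = 1000.0                  # g of Nb2O5 in the charge (illustrative)
    n = m_charge / M_NB2O5             # mol of Nb2O5
    print(f"Al needed : {n * 12 / 3 * M_AL:.0f} g")    # 12 mol Al per 3 mol Nb2O5
    print(f"Fe2O3 fed : {n * 1 / 3 * M_FE2O3:.0f} g")
    print(f"Nb yielded: {n * 6 / 3 * M_NB:.0f} g")

With these numbers, 1 kg of pentoxide consumes roughly 0.4 kg of aluminium and yields about 0.7 kg of niobium metal (as ferroniobium when iron oxide is included).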
The United States Geological Survey estimates that production increased from 38,700 tonnes in 2005 to 44,500 tonnes in 2006. Worldwide resources are estimated to be 4.4 million tonnes. During the ten-year period between 1995 and 2005, production more than doubled, starting from 17,800 tonnes in 1995. Between 2009 and 2011, production was stable at 63,000 tonnes per year, with a slight decrease in 2012 to only 50,000 tonnes per year. Lesser amounts are found in Malawi's Kanyika Deposit (Kanyika mine). Compounds In many ways, niobium is similar to tantalum and zirconium. It reacts with most nonmetals at high temperatures: with fluorine at room temperature, with chlorine at 150 °C and hydrogen at 200 °C, and with nitrogen at 400 °C, with products that are frequently interstitial and nonstoichiometric. The metal begins to oxidize in air at 200 °C. It resists corrosion by acids, including aqua regia and hydrochloric, sulfuric, nitric and phosphoric acids. Niobium is attacked by hot concentrated sulfuric acid, hydrofluoric acid and hydrofluoric/nitric acid mixtures. It is also attacked by hot, saturated alkali metal hydroxide solutions. Although niobium exhibits all of the formal oxidation states from +5 to −1, the most common compounds have niobium in the +5 state. Characteristically, compounds in oxidation states less than 5+ display Nb–Nb bonding. In aqueous solutions, niobium only exhibits the +5 oxidation state. It is also readily prone to hydrolysis and is barely soluble in dilute solutions of hydrochloric, sulfuric, nitric and phosphoric acids due to the precipitation of hydrous Nb oxide. Nb(V) is also slightly soluble in alkaline media due to the formation of soluble polyoxoniobate species. Oxides, niobates and sulfides Niobium forms oxides in the oxidation states +5 (Nb2O5), +4 (NbO2), and the rarer oxidation state +2 (NbO). Most common is the pentoxide, the precursor to almost all niobium compounds and alloys. Niobates are generated by dissolving the pentoxide in basic hydroxide solutions or by melting it in alkali metal oxides. Examples are lithium niobate (LiNbO3) and lanthanum niobate (LaNbO4). Lithium niobate has a trigonally distorted perovskite-like structure, whereas lanthanum niobate contains lone [NbO4]3− ions. The layered niobium sulfide (NbS2) is also known. Materials can be coated with a thin film of niobium(V) oxide by chemical vapor deposition or atomic layer deposition processes, produced by the thermal decomposition of niobium(V) ethoxide above 350 °C. Halides Niobium forms halides in the oxidation states of +5 and +4, as well as diverse substoichiometric compounds. The pentahalides (NbX5) feature octahedral Nb centres. Niobium pentafluoride (NbF5) is a white solid with a melting point of 79.0 °C, and niobium pentachloride (NbCl5) is yellow, with a melting point of 203.4 °C. Both are hydrolyzed to give oxides and oxyhalides, such as NbOCl3. The pentachloride is a versatile reagent used to generate organometallic compounds, such as niobocene dichloride ((C5H5)2NbCl2). The tetrahalides (NbX4) are dark-coloured polymers with Nb–Nb bonds; for example, the black hygroscopic niobium tetrafluoride (NbF4) and dark violet niobium tetrachloride (NbCl4). Anionic halide compounds of niobium are well known, owing in part to the Lewis acidity of the pentahalides. The most important is [NbF7]2−, an intermediate in the separation of Nb and Ta from the ores. This heptafluoride tends to form the oxopentafluoride more readily than does the tantalum compound.
Other halide complexes include octahedral [NbCl6]−, formed by the reaction Nb2Cl10 + 2 Cl− → 2 [NbCl6]−. As with other early transition metals, a variety of reduced halide cluster ions is known, the prime example being [Nb6Cl18]4−. Nitrides and carbides Other binary compounds of niobium include niobium nitride (NbN), which becomes a superconductor at low temperatures and is used in detectors for infrared light. The main niobium carbide is NbC, an extremely hard, refractory, ceramic material, commercially used in cutting tool bits. Applications Out of 44,500 tonnes of niobium mined in 2006, an estimated 90% was used in high-grade structural steel. The second-largest application is superalloys. Niobium alloy superconductors and electronic components account for a very small share of the world production. Steel production Niobium is an effective microalloying element for steel, within which it forms niobium carbide and niobium nitride. These compounds improve grain refining, retardation of recrystallization, and precipitation hardening of the steel. These effects in turn increase the toughness, strength, formability, and weldability. Within microalloyed stainless steels, the niobium content is a small (less than 0.1%) but important addition to high-strength low-alloy steels that are widely used structurally in modern automobiles. Niobium is sometimes used in considerably higher quantities for highly wear-resistant machine components and knives, as high as 3% in Crucible CPM S110V stainless steel. These same niobium alloys are often used in pipeline construction. Superalloys Quantities of niobium are used in nickel-, cobalt-, and iron-based superalloys in proportions as great as 6.5% for such applications as jet engine components, gas turbines, rocket subassemblies, turbocharger systems, and heat-resisting and combustion equipment. Niobium precipitates a hardening γ''-phase within the grain structure of the superalloy. One example superalloy is Inconel 718, consisting of roughly 50% nickel, 18.6% chromium, 18.5% iron, 5% niobium, 3.1% molybdenum, 0.9% titanium, and 0.4% aluminium. These superalloys were used, for example, in advanced airframe systems for the Gemini program. Another niobium alloy was used for the nozzle of the Apollo Service Module. Because niobium is oxidized at temperatures above 400 °C, a protective coating is necessary for these applications to prevent the alloy from becoming brittle. Niobium-based alloys C-103 alloy was developed in the early 1960s jointly by the Wah Chang Corporation and Boeing Co. DuPont, Union Carbide Corp., General Electric Co. and several other companies were developing Nb-base alloys simultaneously, largely driven by the Cold War and the Space Race. C-103 is composed of 89% niobium, 10% hafnium and 1% titanium, and is used for liquid-rocket thruster nozzles, such as the descent engine of the Apollo Lunar Modules. The reactivity of niobium with oxygen requires it to be worked in a vacuum or inert atmosphere, which significantly increases the cost and difficulty of production. Vacuum arc remelting (VAR) and electron beam melting (EBM), novel processes at the time, enabled the development of niobium and other reactive metals. The project that yielded C-103 began in 1959 with as many as 256 experimental niobium alloys in the "C-series" (C arising possibly from columbium) that could be melted as buttons and rolled into sheet. Wah Chang Corporation had an inventory of hafnium, refined from nuclear-grade zirconium alloys, that it wanted to put to commercial use.
The 103rd experimental composition of the C-series alloys, Nb-10Hf-1Ti, had the best combination of formability and high-temperature properties. Wah Chang fabricated the first 500 lb heat of C-103 in 1961, ingot to sheet, using EBM and VAR. The intended applications included turbine engines and liquid metal heat exchangers. Competing niobium alloys from that era included FS85 (Nb-10W-28Ta-1Zr) from Fansteel Metallurgical Corp., Cb129Y (Nb-10W-10Hf-0.2Y) from Wah Chang and Boeing, Cb752 (Nb-10W-2.5Zr) from Union Carbide, and Nb1Zr from Superior Tube Co. The nozzle of the Merlin Vacuum series of engines developed by SpaceX for the upper stage of its Falcon 9 rocket is made from a niobium alloy. Niobium-based superalloys are also used to produce components for hypersonic missile systems. Superconducting magnets Niobium–germanium (Nb3Ge), niobium–tin (Nb3Sn), and the niobium–titanium alloys are used as type II superconductor wire for superconducting magnets. These superconducting magnets are used in magnetic resonance imaging and nuclear magnetic resonance instruments, as well as in particle accelerators. For example, the Large Hadron Collider uses 600 tons of superconducting strands, while the International Thermonuclear Experimental Reactor uses an estimated 600 tonnes of Nb3Sn strands and 250 tonnes of NbTi strands. In 1992 alone, more than US$1 billion worth of clinical magnetic resonance imaging systems were constructed with niobium–titanium wire. Other superconductors The superconducting radio frequency (SRF) cavities used in the free-electron lasers FLASH (a result of the cancelled TESLA linear accelerator project) and XFEL are made from pure niobium. A cryomodule team at Fermilab used the same SRF technology from the FLASH project to develop 1.3 GHz nine-cell SRF cavities made from pure niobium. The cavities will be used in the linear particle accelerator of the International Linear Collider. The same technology will be used in LCLS-II at SLAC National Accelerator Laboratory and PIP-II at Fermilab. The high sensitivity of superconducting niobium nitride bolometers makes them an ideal detector for electromagnetic radiation in the THz frequency band. These detectors were tested at the Submillimeter Telescope, the South Pole Telescope, the Receiver Lab Telescope, and at APEX, and are now used in the HIFI instrument on board the Herschel Space Observatory. Other uses Electroceramics Lithium niobate, which is a ferroelectric, is used extensively in mobile telephones and optical modulators, and for the manufacture of surface acoustic wave devices. It belongs to the ABO3 structure ferroelectrics, like lithium tantalate and barium titanate. Niobium capacitors are available as an alternative to tantalum capacitors, but tantalum capacitors still predominate. Niobium is added to glass to obtain a higher refractive index, making possible thinner and lighter corrective glasses. Hypoallergenic applications: medicine and jewelry Niobium and some niobium alloys are physiologically inert and hypoallergenic. For this reason, niobium is used in prosthetics and implant devices, such as pacemakers. Niobium treated with sodium hydroxide forms a porous layer that aids osseointegration. Like titanium, tantalum, and aluminium, niobium can be heated and anodized ("reactive metal anodization") to produce a wide array of iridescent colours for jewelry, where its hypoallergenic property is highly desirable. Numismatics Niobium is used as a precious metal in commemorative coins, often with silver or gold.
For example, Austria produced a series of silver niobium euro coins starting in 2003; the colour in these coins is created by the diffraction of light by a thin anodized oxide layer. By 2012, ten coins were available, showing a broad variety of colours in the centre of the coin: blue, green, brown, purple, violet, or yellow. Two more examples are the 2004 Austrian €25 150-Year Semmering Alpine Railway commemorative coin and the 2006 Austrian €25 European Satellite Navigation commemorative coin. The Austrian mint produced for Latvia a similar series of coins starting in 2004, with one following in 2007. In 2011, the Royal Canadian Mint started production of a $5 sterling silver and niobium coin named Hunter's Moon, in which the niobium was selectively oxidized, creating unique finishes where no two coins are exactly alike. Other The arc-tube seals of high-pressure sodium vapor lamps are made from niobium, sometimes alloyed with 1% of zirconium; niobium has a very similar coefficient of thermal expansion to the sintered alumina arc tube ceramic, a translucent material which resists chemical attack or reduction by the hot liquid sodium and sodium vapour contained inside the operating lamp. Niobium is used in arc welding rods for some stabilized grades of stainless steel and in anodes for cathodic protection systems on some water tanks, which are then usually plated with platinum. Niobium is used to make the high-voltage wire of the solar corona particle receptor module of the Parker Solar Probe. Niobium is a constituent of a lightfast, chemically stable inorganic yellow pigment that has the trade name NTP Yellow. It is a niobium sulfur tin zinc oxide, a pyrochlore, produced via high-temperature calcination. The pigment is also known as pigment yellow 227, commonly listed as PY 227 or PY227. Niobium is employed in the atomic energy industry for its high-temperature and corrosion resistance, as well as its stability under radiation. It is used in nuclear reactors for components like fuel rods and reactor cores. Precautions Niobium has no known biological role. While niobium dust is an eye and skin irritant and a potential fire hazard, elemental niobium on a larger scale is physiologically inert (and thus hypoallergenic) and harmless. It is often used in jewelry and has been tested for use in some medical implants. Short- and long-term exposure to niobates and niobium chloride, two water-soluble chemicals, has been tested in rats. Rats treated with a single injection of niobium pentachloride or niobates show a median lethal dose (LD50) between 10 and 100 mg/kg. For oral administration the toxicity is lower; a study with rats yielded an LD50 after seven days of 940 mg/kg.
Neodymium
Neodymium is a chemical element; it has symbol Nd and atomic number 60. It is the fourth member of the lanthanide series and is considered to be one of the rare-earth metals. It is a hard, slightly malleable, silvery metal that quickly tarnishes in air and moisture. When oxidized, neodymium reacts quickly, producing pink, purple/blue and yellow compounds in the +2, +3 and +4 oxidation states. It is generally regarded as having one of the most complex spectra of the elements. Neodymium was discovered in 1885 by the Austrian chemist Carl Auer von Welsbach, who also discovered praseodymium. Neodymium is present in significant quantities in the minerals monazite and bastnäsite. Neodymium is not found naturally in metallic form or unmixed with other lanthanides, and it is usually refined for general use. Neodymium is fairly common—about as common as cobalt, nickel, or copper—and is widely distributed in the Earth's crust. Most of the world's commercial neodymium is mined in China, as is the case with many other rare-earth metals. Neodymium compounds were first commercially used as glass dyes in 1927 and remain a popular additive. The color of neodymium compounds comes from the Nd3+ ion and is often a reddish-purple. This color changes with the type of lighting because of the interaction of the sharp light absorption bands of neodymium with ambient light enriched with the sharp visible emission bands of mercury, trivalent europium or terbium. Glasses that have been doped with neodymium are used in lasers that emit infrared with wavelengths between 1047 and 1062 nanometers. These lasers have been used in extremely high-power applications, such as in inertial confinement fusion. Neodymium is also used with various other substrate crystals, such as yttrium aluminium garnet in the Nd:YAG laser. Neodymium alloys are used to make high-strength neodymium magnets, which are powerful permanent magnets. These magnets are widely used in products like microphones, professional loudspeakers, in-ear headphones, high-performance hobby DC electric motors, and computer hard disks, where low magnet mass (or volume) or strong magnetic fields are required. Larger neodymium magnets are used in electric motors with a high power-to-weight ratio (e.g., in hybrid cars) and generators (e.g., aircraft and wind turbine electric generators). Physical properties Metallic neodymium has a bright, silvery metallic luster. Neodymium commonly exists in two allotropic forms, with a transformation from a double hexagonal to a body-centered cubic structure taking place at about 863 °C. Neodymium, like most of the lanthanides, is paramagnetic at room temperature. It becomes an antiferromagnet upon cooling below about 20 K. Below this transition temperature it exhibits a set of complex magnetic phases that have long spin relaxation times and spin glass behavior. Neodymium is a rare-earth metal that was present in the classical mischmetal at a concentration of about 18%. To make neodymium magnets it is alloyed with iron, which is a ferromagnet. Electron configuration Neodymium is the fourth member of the lanthanide series. In the periodic table, it appears between the lanthanides praseodymium to its left and the radioactive element promethium to its right, and above the actinide uranium. Its 60 electrons are arranged in the configuration [Xe]4f46s2, of which the four 4f and two 6s electrons are valence.
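Written out explicitly, the stepwise removal of those valence electrons gives the following ground-state configurations (a worked illustration using the standard assignments; the +2 and +4 states are the rarer ones noted below):

Nd: [Xe] 4f4 6s2
Nd2+: [Xe] 4f4
Nd3+: [Xe] 4f3
Nd4+: [Xe] 4f2

The 6s electrons are removed first, after which the 4f electrons are stripped one at a time, which is why the +3 state dominates, as discussed next.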
Like most other metals in the lanthanide series, neodymium usually uses only three electrons as valence electrons, as the remaining 4f electrons are too strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Neodymium can still lose a fourth electron because it comes early in the lanthanides, where the nuclear charge is still low enough and the 4f subshell energy high enough to allow the removal of further valence electrons. Chemical properties Neodymium has a melting point of 1024 °C and a boiling point of 3074 °C. Like other lanthanides, it usually has the oxidation state +3, but can also occur in the +2 and +4 oxidation states and even, in very rare conditions, 0. Neodymium metal quickly oxidizes at ambient conditions, forming an oxide layer, like iron rust, that can spall off and expose the metal to further oxidation; a centimeter-sized sample of neodymium corrodes completely in about a year. Salts of Nd3+ are generally soluble in water. Like its neighbor praseodymium, it readily burns at about 150 °C to form neodymium(III) oxide; the oxide then peels off, exposing the bulk metal to further oxidation: 4 Nd + 3 O2 → 2 Nd2O3 Neodymium is an electropositive element, and it reacts slowly with cold water, or quickly with hot water, to form neodymium(III) hydroxide: 2 Nd (s) + 6 H2O (l) → 2 Nd(OH)3 (aq) + 3 H2 (g) Neodymium metal reacts vigorously with all the stable halogens: 2 Nd + 3 F2 → 2 NdF3 (a violet substance); 2 Nd + 3 Cl2 → 2 NdCl3 (a mauve substance); 2 Nd + 3 Br2 → 2 NdBr3 (a violet substance); 2 Nd + 3 I2 → 2 NdI3 (a green substance). Neodymium dissolves readily in dilute sulfuric acid to form solutions that contain the lilac Nd(III) ion. These exist as [Nd(OH2)9]3+ complexes: 2 Nd (s) + 3 H2SO4 (aq) → 2 Nd3+ (aq) + 3 SO4²⁻ (aq) + 3 H2 (g) Compounds Some of the most important neodymium compounds include the halides NdF3, NdCl2, NdCl3, NdBr3, NdI2 and NdI3; the oxide Nd2O3; the hydroxide Nd(OH)3; the carbonate Nd2(CO3)3; the sulfate Nd2(SO4)3; the acetate Nd(CH3COO)3; and the neodymium magnet alloy Nd2Fe14B. Some neodymium compounds vary in color under different types of lighting. Organoneodymium compounds Organoneodymium compounds are compounds that have a neodymium–carbon bond. These compounds are similar to those of the other lanthanides, characterized by an inability to undergo π backbonding. They are thus mostly restricted to the largely ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. Isotopes Naturally occurring neodymium (60Nd) is composed of five stable isotopes—142Nd, 143Nd, 145Nd, 146Nd and 148Nd, with 142Nd being the most abundant (27.2% of the natural abundance)—and two radioisotopes with extremely long half-lives, 144Nd (alpha decay with a half-life (t1/2) of about 2.29×10¹⁵ years) and 150Nd (double beta decay, t1/2 on the order of 10¹⁹ years). In all, 35 radioisotopes of neodymium have been detected, with the most stable radioisotopes being the naturally occurring ones: 144Nd and 150Nd. All of the remaining radioactive isotopes have half-lives that are shorter than twelve days, and the majority of these have half-lives that are shorter than 70 seconds; the most stable artificial isotope is 147Nd with a half-life of 10.98 days. Neodymium also has 15 known metastable isotopes, the most stable ones being 139mNd (t1/2 = 5.5 hours), 135mNd (t1/2 = 5.5 minutes) and 133m1Nd (t1/2 ~70 seconds). The primary decay modes before the most abundant stable isotope, 142Nd, are electron capture and positron decay, and the primary mode after is beta minus decay.
The primary decay products before 142Nd are praseodymium isotopes, and the primary products after 142Nd are promethium isotopes. Four of the five stable isotopes are only observationally stable, which means that they are expected to undergo radioactive decay, though with half-lives long enough to be considered stable for practical purposes. Additionally, some observationally stable isotopes of samarium are predicted to decay to isotopes of neodymium. Neodymium isotopes are used in various scientific applications. 142Nd has been used for the production of short-lived isotopes of thulium and ytterbium. 146Nd has been suggested for the production of 147Pm, which is a source of radioactive power. Several neodymium isotopes have been used for the production of other promethium isotopes. The decay from 147Sm (t1/2 = 1.06×10¹¹ years) to the stable 143Nd allows for samarium–neodymium dating. 150Nd has also been used to study double beta decay. History In 1751, the Swedish mineralogist Axel Fredrik Cronstedt discovered a heavy mineral from the mine at Bastnäs, later named cerite. Thirty years later, fifteen-year-old Wilhelm Hisinger, a member of the family owning the mine, sent a sample to Carl Wilhelm Scheele, who did not find any new elements within. In 1803, after Hisinger had become an ironmaster, he returned to the mineral with Jöns Jacob Berzelius and isolated a new oxide, which they named ceria after the dwarf planet Ceres, which had been discovered two years earlier. Ceria was simultaneously and independently isolated in Germany by Martin Heinrich Klaproth. Between 1839 and 1843, ceria was shown to be a mixture of oxides by the Swedish surgeon and chemist Carl Gustaf Mosander, who lived in the same house as Berzelius; he separated out two other oxides, which he named lanthana and didymia. He partially decomposed a sample of cerium nitrate by roasting it in air and then treating the resulting oxide with dilute nitric acid. The metals that formed these oxides were thus named lanthanum and didymium. Didymium was later shown not to be a single element when it was split into two elements, praseodymium and neodymium, by Carl Auer von Welsbach in Vienna in 1885. Von Welsbach confirmed the separation by spectroscopic analysis, but the products were of relatively low purity. Pure neodymium was first isolated in 1925. The name neodymium is derived from the Greek words neos (νέος), new, and didymos (διδύμος), twin. Double nitrate crystallization was the means of commercial neodymium purification until the 1950s. Lindsay Chemical Division was the first to commercialize large-scale ion-exchange purification of neodymium. Starting in the 1950s, high-purity (>99%) neodymium was primarily obtained through an ion-exchange process from monazite, a mineral rich in rare-earth elements. The metal is obtained through electrolysis of its halide salts. Currently, most neodymium is extracted from bastnäsite and purified by solvent extraction. Ion-exchange purification is used for the highest purities (typically >99.99%). Since then, neodymium glass technology has improved, owing to the improved purity of commercially available neodymium oxide and to the advancement of glass technology in general. Early methods of separating the lanthanides depended on fractional crystallization, which did not allow for the isolation of high-purity neodymium until the aforementioned ion-exchange methods were developed after World War II.
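The samarium–neodymium dating mentioned above rests on simple decay arithmetic: the present 143Nd/144Nd ratio of a sample grows from an initial ratio in proportion to its 147Sm/144Nd ratio. A minimal Python sketch of the model-age calculation, assuming the 1.06×10¹¹-year half-life quoted above; the sample ratios in the example are hypothetical and chosen purely for illustration:

import math

HALF_LIFE_SM147 = 1.06e11                      # years
DECAY_CONST = math.log(2) / HALF_LIFE_SM147    # per year

def sm_nd_model_age(nd143_nd144, sm147_nd144, initial_nd143_nd144):
    # 143/144(now) = 143/144(initial) + 147/144 * (exp(lambda * t) - 1)
    growth = (nd143_nd144 - initial_nd143_nd144) / sm147_nd144
    return math.log(1.0 + growth) / DECAY_CONST

age = sm_nd_model_age(0.5125, 0.1967, 0.5067)   # hypothetical whole-rock values
print(f"model age ~ {age / 1e9:.1f} billion years")

Because the half-life is so long, even billion-year ages correspond to tiny shifts in the isotope ratio, which is why the method demands very precise mass spectrometry.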
Occurrence and production Occurrence Neodymium is rarely found in nature as a free element, instead occurring in ores such as monazite and bastnäsite (which are mineral groups rather than single minerals) that contain small amounts of all rare-earth elements. Neodymium is rarely dominant in these minerals, with exceptions such as monazite-(Nd) and kozoite-(Nd). The main mining areas are in China, the United States, Brazil, India, Sri Lanka, and Australia. The Nd3+ ion is similar in size to ions of the early lanthanides of the cerium group (those from lanthanum to samarium and europium). As a result, it tends to occur along with them in phosphate, silicate and carbonate minerals, such as monazite (M(III)PO4) and bastnäsite (M(III)CO3F), where M refers to all the rare-earth metals except scandium and the radioactive promethium (mostly Ce, La, and Y, with somewhat less Pr and Nd). Bastnäsite is usually lacking in thorium and the heavy lanthanides, and the purification of the light lanthanides from it is less involved than from monazite. The ore, after being crushed and ground, is first treated with hot concentrated sulfuric acid, which liberates carbon dioxide, hydrogen fluoride, and silicon tetrafluoride. The product is then dried and leached with water, leaving the early lanthanide ions, including lanthanum, in solution. In space Neodymium's per-particle abundance in the Solar System is 0.083 ppb (parts per billion). This figure is about two thirds of that of platinum, but two and a half times that of mercury, and nearly five times that of gold. The lanthanides are rare in the Solar System at large and are much more abundant in the Earth's crust. In the Earth's crust Neodymium is classified as a lithophile under the Goldschmidt classification, meaning that it is generally found combined with oxygen. Although it belongs to the rare-earth metals, neodymium is not rare at all. Its abundance in the Earth's crust is about 41 mg/kg. It is similar in abundance to lanthanum. Production The world's production of neodymium was about 7,000 tons in 2004. The bulk of current production is from China. Historically, the Chinese government imposed strategic material controls on the element, causing large fluctuations in prices. The uncertainty of pricing and availability has caused companies (particularly Japanese ones) to create permanent magnets and associated electric motors with fewer rare-earth metals; however, so far they have been unable to eliminate the need for neodymium. According to the US Geological Survey, Greenland holds the largest reserves of undeveloped rare-earth deposits, particularly neodymium. Mining interests clash with native populations at those sites, due to the release of radioactive substances, mainly thorium, during the mining process. Neodymium is typically 10–18% of the rare-earth content of commercial deposits of the light rare-earth-element minerals bastnäsite and monazite. Because neodymium compounds are the most strongly colored of the trivalent lanthanides, neodymium can occasionally dominate the coloration of rare-earth minerals when competing chromophores are absent. It usually gives a pink coloration. Outstanding examples of this include monazite crystals from the tin deposits in Llallagua, Bolivia; ancylite from Mont Saint-Hilaire, Quebec, Canada; and lanthanite from Lower Saucon Township, Pennsylvania. As with neodymium glasses, such minerals change their colors under differing lighting conditions.
The absorption bands of neodymium interact with the visible emission spectrum of mercury vapor, with the unfiltered shortwave UV light causing neodymium-containing minerals to reflect a distinctive green color. This can be observed with monazite-containing sands or bastnäsite-containing ore. The demand for mineral resources, such as rare-earth elements (including neodymium) and other critical materials, has been rapidly increasing owing to the growing human population and industrial development. Recently, the requirement for a low-carbon society has led to a significant demand for energy-saving technologies such as batteries, high-efficiency motors, renewable energy sources, and fuel cells. Among these technologies, permanent magnets are often used to fabricate high-efficiency motors, with neodymium-iron-boron magnets (Nd2Fe14B sintered and bonded magnets; hereinafter referred to as NdFeB magnets) being the main type of permanent magnet in the market since their invention. NdFeB magnets are used in hybrid electric vehicles, plug-in hybrid electric vehicles, electric vehicles, fuel cell vehicles, wind turbines, home appliances, computers, and many small consumer electronic devices. Furthermore, they are indispensable for energy savings. Toward achieving the objectives of the Paris Agreement, the demand for NdFeB magnets is expected to increase significantly in the future. Applications Magnets Neodymium magnets (an alloy, Nd2Fe14B) are the strongest permanent magnets known. A neodymium magnet of a few tens of grams can lift a thousand times its own weight, and can snap together with enough force to break bones. These magnets are cheaper, lighter, and stronger than samarium–cobalt magnets. However, they are not superior in every aspect, as neodymium-based magnets lose their magnetism at lower temperatures and tend to corrode, while samarium–cobalt magnets do not. Neodymium magnets appear in products such as microphones, professional loudspeakers, headphones, guitar and bass guitar pick-ups, and computer hard disks, where low mass, small volume, or strong magnetic fields are required. Neodymium is used in the electric motors of hybrid and electric automobiles and in the electricity generators of some designs of commercial wind turbines (only wind turbines with "permanent magnet" generators use neodymium). For example, the drive electric motors of each Toyota Prius require about 1 kg of neodymium per vehicle. Glass Neodymium glass (Nd:glass) is produced by the inclusion of neodymium oxide (Nd2O3) in the glass melt. In daylight or incandescent light neodymium glass appears lavender, but it appears pale blue under fluorescent lighting. Neodymium may be used to color glass in shades ranging from pure violet through wine-red and warm gray. The first commercial use of purified neodymium was in glass coloration, starting with experiments by Leo Moser in November 1927. The resulting "Alexandrite" glass remains a signature color of the Moser glassworks to this day. Neodymium glass was widely emulated in the early 1930s by American glasshouses, most notably Heisey, Fostoria ("wisteria"), Cambridge ("heatherbloom"), and Steuben ("wisteria"), and elsewhere (e.g. Lalique, in France, or Murano). Tiffin's "twilight" remained in production from about 1950 to 1980. Current sources include glassmakers in the Czech Republic, the United States, and China.
The sharp absorption bands of neodymium cause the glass color to change under different lighting conditions, being reddish-purple under daylight or yellow incandescent light, blue under white fluorescent lighting, and greenish under trichromatic lighting. In combination with gold or selenium, red colors are produced. Since neodymium coloration depends upon "forbidden" f-f transitions deep within the atom, there is relatively little influence on the color from the chemical environment, so the color is impervious to the thermal history of the glass. However, for the best color, iron-containing impurities need to be minimized in the silica used to make the glass. The same forbidden nature of the f-f transitions makes rare-earth colorants less intense than those provided by most d-transition elements, so more has to be used in a glass to achieve the desired color intensity. The original Moser recipe used about 5% of neodymium oxide in the glass melt, a sufficient quantity such that Moser referred to these as being "rare-earth doped" glasses. Being a strong base, that level of neodymium would have affected the melting properties of the glass, and the lime content of the glass might have needed adjustment. Light transmitted through neodymium glasses shows unusually sharp absorption bands; the glass is used in astronomical work to produce sharp bands by which spectral lines may be calibrated. Another application is the creation of selective astronomical filters to reduce the effect of light pollution from sodium and fluorescent lighting while passing other colours, especially dark red hydrogen-alpha emission from nebulae. Neodymium is also used to remove the green color caused by iron contaminants from glass. Neodymium is a component of "didymium" (referring to a mixture of salts of neodymium and praseodymium) used for coloring glass to make welder's and glass-blower's goggles; the sharp absorption bands obliterate the strong sodium emission at 589 nm. The similar absorption of the yellow mercury emission line at 578 nm is the principal cause of the blue color observed for neodymium glass under traditional white-fluorescent lighting. Neodymium and didymium glass are used in color-enhancing filters in indoor photography, particularly in filtering out the yellow hues from incandescent lighting. Similarly, neodymium glass is becoming widely used more directly in incandescent light bulbs. These lamps contain neodymium in the glass to filter out yellow light, resulting in a whiter light which is more like sunlight. During World War I, didymium mirrors were reportedly used to transmit Morse code across battlefields. Similar to its use in glasses, neodymium salts are used as a colorant for enamels. Lasers Certain transparent materials with a small concentration of neodymium ions can be used in lasers as gain media for infrared wavelengths (1054–1064 nm), e.g. Nd:YAG (yttrium aluminium garnet), Nd:YAP (yttrium aluminium perovskite), Nd:YLF (yttrium lithium fluoride), Nd:YVO4 (yttrium orthovanadate), and Nd:glass. Neodymium-doped crystals (typically Nd:YVO4) generate high-powered infrared laser beams which are converted to green laser light in commercial DPSS hand-held lasers and laser pointers. The trivalent neodymium ion Nd3+ was the first of the rare-earth lanthanide ions to be used for the generation of laser radiation. The Nd:CaWO4 laser was developed in 1961. Historically, it was the third laser which was put into operation (the first was ruby, the second the U3+:CaF2 laser).
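Many of the wavelengths quoted in this section are simple integer fractions (harmonics) of the Nd3+ fundamental, which sits near 1064 nm in YAG and near 1053 nm in the phosphate glasses used in fusion lasers. A quick Python sketch of that arithmetic (the two host wavelengths are commonly cited approximations, not figures from this article):

# Harmonic wavelengths for two common Nd3+ laser hosts.
for name, fundamental_nm in (("Nd:YAG", 1064.0), ("Nd:glass", 1053.0)):
    harmonics = [fundamental_nm / n for n in (1, 2, 3)]
    print(name + ": " + ", ".join(f"{w:.0f} nm" for w in harmonics))

This reproduces the familiar 532 nm green of frequency-doubled laser pointers and the 351 nm third harmonic of Nd:glass used in the fusion devices discussed below.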
Over the years the neodymium laser became one of the most used lasers for application purposes. The success of the Nd3+ ion lies in the structure of its energy levels and in the spectroscopic properties suitable for the generation of laser radiation. In 1964 Geusic et al. demonstrated the operation of the neodymium ion in the YAG matrix Y3Al5O12. It is a four-level laser with a low threshold and with excellent mechanical and temperature properties. For optical pumping of this material it is possible to use non-coherent flashlamp radiation or a coherent diode beam. The current laser at the UK Atomic Weapons Establishment (AWE), the HELEN (High Energy Laser Embodying Neodymium) 1-terawatt neodymium-glass laser, can access the midpoints of pressure and temperature regions and is used to acquire data for modeling on how density, temperature, and pressure interact inside warheads. HELEN can create plasmas of around 10⁶ K, from which opacity and transmission of radiation are measured. Neodymium glass solid-state lasers are used in extremely high power (terawatt scale), high energy (megajoule) multiple-beam systems for inertial confinement fusion. Nd:glass lasers are usually frequency tripled to the third harmonic at 351 nm in laser fusion devices. Other Other applications of neodymium include: Neodymium has an unusually large specific heat capacity at liquid-helium temperatures, so is useful in cryocoolers. Neodymium acetate can be used as a standard contrasting agent in electron microscopy (a substitute for the radioactive and toxic uranyl acetate). Probably because of similarities to Ca2+, Nd3+ has been reported to promote plant growth. Rare-earth element compounds are frequently used in China as fertilizer. Samarium–neodymium dating is useful for determining the age relationships of rocks and meteorites. Neodymium isotopes recorded in marine sediments are used to reconstruct changes in past ocean circulation. Biological role and precautions The early lanthanides, including neodymium, as well as lanthanum, cerium and praseodymium, have been found to be essential to some methanotrophic bacteria living in volcanic mudpots, such as Methylacidiphilum fumariolicum. Neodymium is not otherwise known to have a biological role in any other organisms. Neodymium metal dust is combustible and therefore an explosion hazard. Neodymium compounds, as with all rare-earth metals, are of low to moderate toxicity; however, their toxicity has not been thoroughly investigated. Ingested neodymium salts are regarded as more toxic if they are soluble than if they are insoluble. Neodymium dust and salts are very irritating to the eyes and mucous membranes, and moderately irritating to skin. Breathing the dust can cause lung embolisms, and accumulated exposure damages the liver. Neodymium also acts as an anticoagulant, especially when given intravenously. Neodymium magnets have been tested for medical uses such as magnetic braces and bone repair, but biocompatibility issues have prevented widespread application. Commercially available magnets made from neodymium are exceptionally strong and can attract each other from large distances. If not handled carefully, they come together very quickly and forcefully, causing injuries. There is at least one documented case of a person losing a fingertip when two magnets he was using snapped together from 50 cm away. Another risk of these powerful magnets is that if more than one magnet is ingested, they can pinch soft tissues in the gastrointestinal tract.
This has led to an estimated 1,700 emergency room visits and necessitated the recall of the Buckyballs line of toys, which were construction sets of small neodymium magnets.
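As a rough order-of-magnitude check on the strength figures quoted under Applications (a magnet of a few tens of grams lifting a thousand times its own weight), the ideal pull force of a magnet held flat against thick steel can be estimated as B²A/(2μ₀). A minimal Python sketch with a hypothetical disc magnet and an assumed pole-face flux density of 1.2 T (both are assumptions for illustration, not values from this article):

import math

MU0 = 4 * math.pi * 1e-7                # vacuum permeability, H/m
B = 1.2                                 # assumed flux density at the pole face, T
DIAMETER, THICKNESS = 0.010, 0.005      # hypothetical disc magnet, metres
DENSITY = 7500                          # approximate density of sintered Nd2Fe14B, kg/m^3

area = math.pi * (DIAMETER / 2) ** 2
pull = B ** 2 * area / (2 * MU0)            # ideal pull force, newtons
weight = DENSITY * area * THICKNESS * 9.81  # the magnet's own weight, newtons
print(f"pull ~ {pull:.0f} N, roughly {pull / weight:.0f} times its own weight")

The result, several tens of newtons for a roughly three-gram magnet, is consistent with the thousand-fold figure, though real pull forces depend strongly on surface finish, air gaps, and the thickness of the steel.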
Neptunium
Neptunium is a chemical element; it has symbol Np and atomic number 93. A radioactive actinide metal, neptunium is the first transuranic element. It is named after Neptune, the planet beyond Uranus in the Solar System, just as uranium is named after the planet Uranus. A neptunium atom has 93 protons and 93 electrons, of which seven are valence electrons. Neptunium metal is silvery and tarnishes when exposed to air. The element occurs in three allotropic forms and it normally exhibits five oxidation states, ranging from +3 to +7. Like all actinides, it is radioactive, poisonous, pyrophoric, and capable of accumulating in bones, which makes the handling of neptunium dangerous. Although many false claims of its discovery were made over the years, the element was first synthesized by Edwin McMillan and Philip H. Abelson at the Berkeley Radiation Laboratory in 1940. Since then, most neptunium has been and still is produced by neutron irradiation of uranium in nuclear reactors. The vast majority is generated as a by-product in conventional nuclear power reactors. While neptunium itself has no commercial uses at present, it is used as a precursor for the formation of plutonium-238, which is in turn used in radioisotope thermoelectric generators to provide electricity for spacecraft. Neptunium has also been used in detectors of high-energy neutrons. The longest-lived isotope of neptunium, neptunium-237, is a by-product of nuclear reactors and plutonium production. This isotope, and the isotope neptunium-239, are also found in trace amounts in uranium ores due to neutron capture reactions and beta decay. Characteristics Physical Neptunium is a hard, silvery, ductile, radioactive actinide metal. In the periodic table, it is located to the right of the actinide uranium, to the left of the actinide plutonium and below the lanthanide promethium. Neptunium is a hard metal, having a bulk modulus of 118 GPa, comparable to that of manganese. Neptunium metal is similar to uranium in terms of physical workability. When exposed to air at normal temperatures, it forms a thin oxide layer. This reaction proceeds more rapidly as the temperature increases. Neptunium melts at about 639 °C: this low melting point, a property the metal shares with the neighboring element plutonium (which has melting point 639.4 °C), is due to the hybridization of the 5f and 6d orbitals and the formation of directional bonds in the metal. The boiling point of neptunium is not empirically known and the usually given value of 4174 °C is extrapolated from the vapor pressure of the element. If accurate, this would give neptunium the largest liquid range of any element (about 3535 K between its melting and boiling points). Neptunium is found in at least three allotropes. Some claims of a fourth allotrope have been made, but they are so far not proven. This multiplicity of allotropes is common among the actinides. The crystal structures of neptunium, protactinium, uranium, and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d transition metals. α-neptunium takes on an orthorhombic structure, resembling a highly distorted body-centered cubic structure. Each neptunium atom is coordinated to four others and the Np–Np bond lengths are 260 pm. It is the densest of all the actinides and the fifth-densest of all naturally occurring elements, behind only rhenium, platinum, iridium, and osmium.
α-neptunium has semimetallic properties, such as strong covalent bonding and a high electrical resistivity, and its metallic physical properties are closer to those of the metalloids than the true metals. Some allotropes of the other actinides also exhibit similar behaviour, though to a lesser degree. The densities of different isotopes of neptunium in the alpha phase are expected to be observably different: α-235Np should have density 20.303 g/cm3; α-236Np, density 20.389 g/cm3; α-237Np, density 20.476 g/cm3. β-neptunium takes on a distorted tetragonal close-packed structure. Four atoms of neptunium make up a unit cell, and the Np–Np bond lengths are 276 pm. γ-neptunium has a body-centered cubic structure and has Np–Np bond length of 297 pm. The γ form becomes less stable with increased pressure, though the melting point of neptunium also increases with pressure. The β-Np/γ-Np/liquid triple point occurs at 725 °C and 3200 MPa. Alloys Due to the presence of valence 5f electrons, neptunium and its alloys exhibit a very interesting magnetic behavior, like many other actinides. These can range from the itinerant band-like character characteristic of the transition metals to the local moment behavior typical of scandium, yttrium, and the lanthanides. This stems from 5f-orbital hybridization with the orbitals of the metal ligands, and the fact that the 5f orbital is relativistically destabilized and extends outwards. For example, pure neptunium is paramagnetic, NpAl3 is ferromagnetic, NpGe3 has no magnetic ordering, and NpSn3 may be a heavy fermion material. Investigations are underway regarding alloys of neptunium with uranium, americium, plutonium, zirconium, and iron, so as to recycle long-lived waste isotopes such as neptunium-237 into shorter-lived isotopes more useful as nuclear fuel. One neptunium-based superconductor alloy has been discovered with formula NpPd5Al2. This occurrence in neptunium compounds is somewhat surprising because they often exhibit strong magnetism, which usually destroys superconductivity. The alloy has a tetragonal structure with a superconductivity transition temperature of −268.3 °C (4.9 K). Chemical Neptunium has five ionic oxidation states ranging from +3 to +7 when forming chemical compounds, which can be simultaneously observed in solutions. It is the heaviest actinide that can lose all its valence electrons in a stable compound. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds. Atomic A neptunium atom has 93 electrons, arranged in the configuration [Rn] 5f4 6d1 7s2. This differs from the configuration expected by the Aufbau principle in that one electron is in the 6d subshell instead of being as expected in the 5f subshell. This is because of the similarity of the electron energies of the 5f, 6d, and 7s subshells. In forming compounds and ions, all the valence electrons may be lost, leaving behind an inert core of inner electrons with the electron configuration of the noble gas radon; more commonly, only some of the valence electrons will be lost. The electron configuration for the tripositive ion Np3+ is [Rn] 5f4, with the outermost 7s and 6d electrons lost first: this is exactly analogous to neptunium's lanthanide homolog promethium, and conforms to the trend set by the other actinides with their [Rn] 5fn electron configurations in the tripositive state. 
The first ionization potential of neptunium was measured to be at most (6.19 ± 0.12) eV in 1974, based on the assumption that the 7s electrons would ionize before 5f and 6d; more recent measurements have refined this to 6.2657 eV. Isotopes Twenty-four neptunium radioisotopes have been characterized, with the most stable being 237Np with a half-life of 2.14 million years, 236Np with a half-life of 154,000 years, and 235Np with a half-life of 396.1 days. All of the remaining radioactive isotopes have half-lives that are less than 4.5 days, and the majority of these have half-lives that are less than 50 minutes. This element also has at least four metastable states, with the most stable being 236mNp with a half-life of 22.5 hours. The isotopes of neptunium range in atomic weight from 219.032 u (219Np) to 244.068 u (244Np), though 221Np has not yet been reported. Most of the isotopes that are lighter than the most stable one, 237Np, decay primarily by electron capture, although a sizable number, most notably 229Np and 230Np, also exhibit various levels of decay via alpha emission to become protactinium. 237Np itself, being the beta-stable isobar of mass number 237, decays almost exclusively by alpha emission into 233Pa, with very rare (occurring only about once in trillions of decays) spontaneous fission and cluster decay (emission of 30Mg to form 207Tl). All of the known isotopes except one that are heavier than this decay exclusively via beta emission. The lone exception, 240mNp, exhibits a rare (>0.12%) decay by isomeric transition in addition to beta emission. 237Np eventually decays to form bismuth-209 and thallium-205, unlike most other common heavy nuclei which decay into isotopes of lead. This decay chain is known as the neptunium series. This decay chain had long been extinct on Earth due to the short half-lives of all of its isotopes above bismuth-209, but is now being resurrected thanks to artificial production of neptunium on the tonne scale. The isotopes neptunium-235, -236, and -237 are predicted to be fissile; only neptunium-237's fissionability has been experimentally shown, with the critical mass being about 60 kg, only about 10 kg more than that of the commonly used uranium-235. Calculated values of the critical masses of neptunium-235, -236, and -237 respectively are 66.2 kg, 6.79 kg, and 63.6 kg: the neptunium-236 value is even lower than that of plutonium-239. In particular, 236Np also has a low neutron cross section. Despite this, a neptunium atomic bomb has never been built: uranium and plutonium have lower critical masses than 235Np and 237Np, and 236Np is difficult to purify as it is not found in quantity in spent nuclear fuel and is nearly impossible to separate in any significant quantities from 237Np. Occurrence The longest-lived isotope of neptunium, 237Np, has a half-life of 2.14 million years, which is less than a two-thousandth of the age of the Earth. Therefore, any primordial neptunium would have decayed in the distant past. After only about 80 million years, the concentration of even the longest-lived isotope, 237Np, would have been reduced to less than one-trillionth (10⁻¹²) of its original amount. Thus neptunium is present in nature only in negligible amounts produced as intermediate decay products of other isotopes. Trace amounts of the neptunium isotopes neptunium-237 and -239 are found naturally as decay products from transmutation reactions in uranium ores.
239Np and 237Np are the most common of these isotopes; they are directly formed from neutron capture by uranium-238 atoms. These neutrons come from the spontaneous fission of uranium-238, naturally neutron-induced fission of uranium-235, cosmic ray spallation of nuclei, and light elements absorbing alpha particles and emitting a neutron. The half-life of 239Np is very short, although the detection of its much longer-lived daughter 239Pu in nature in 1951 definitively established its natural occurrence. In 1952, 237Np was identified and isolated from concentrates of uranium ore from the Belgian Congo: in these minerals, the ratio of neptunium-237 to uranium is less than or equal to about 10−12 to 1. Additionally, 240Np must also occur as an intermediate decay product of 244Pu, which has been detected in meteorite dust in marine sediments on Earth. Most neptunium (and plutonium) now encountered in the environment is due to atmospheric nuclear explosions that took place between the detonation of the first atomic bomb in 1945 and the ratification of the Partial Nuclear Test Ban Treaty in 1963. The total amount of neptunium released by these explosions and the few atmospheric tests that have been carried out since 1963 is estimated to be around 2500 kg. The overwhelming majority of this is composed of the long-lived isotopes 236Np and 237Np since even the moderately long-lived 235Np (half-life 396 days) would have decayed to less than one-billionth (10−9) its original concentration over the intervening decades. An additional very small amount of neptunium, produced by neutron irradiation of natural uranium in nuclear reactor cooling water, is released when the water is discharged into rivers or lakes. The concentration of 237Np in seawater is approximately 6.5 × 10−5 millibecquerels per liter: this concentration is between 0.1% and 1% that of plutonium. Once released in the surface environment, in contact with atmospheric oxygen, neptunium generally oxidizes fairly quickly, usually to the +4 or +5 state. Regardless of its oxidation state, the element exhibits much greater mobility than the other actinides, largely due to its ability to readily form aqueous solutions with various other elements. In one study comparing the diffusion rates of neptunium(V), plutonium(IV), and americium(III) in sandstone and limestone, neptunium penetrated more than ten times as well as the other elements. Np(V) will also react efficiently in pH levels greater than 5.5 if there are no carbonates present and in these conditions it has also been observed to readily bond with quartz. It has also been observed to bond well with goethite, ferric oxide colloids, and several clays including kaolinite and smectite. Np(V) does not bond as readily to soil particles in mildly acidic conditions as its fellow actinides americium and curium by nearly an order of magnitude. This behavior enables it to migrate rapidly through the soil while in solution without becoming fixed in place, contributing further to its mobility. Np(V) is also readily absorbed by concrete, which because of the element's radioactivity is a consideration that must be addressed when building nuclear waste storage facilities. When absorbed in concrete, it is reduced to Np(IV) in a relatively short period of time. Np(V) is also reduced by humic acids if they are present on the surface of goethite, hematite, and magnetite. 
Np(IV) is less mobile and is efficiently adsorbed by tuff, granodiorite, and bentonite, although uptake by the latter is most pronounced in mildly acidic conditions. It also exhibits a strong tendency to bind to colloidal particulates, an effect that is enhanced in surface soil with high clay content. This behavior provides an additional aid to the element's observed high mobility. History Background and early claims When the first periodic table of the elements was published by Dmitri Mendeleev in the early 1870s, it showed a placeholder dash after uranium, as it did in several other places for then-undiscovered elements. Other subsequent tables of known elements, including a 1913 publication of the known radioactive isotopes by Kasimir Fajans, also showed an empty place after uranium, element 92. Up to and after the discovery of the final component of the atomic nucleus, the neutron, in 1932, most scientists did not seriously consider the possibility of elements heavier than uranium. While nuclear theory at the time did not explicitly prohibit their existence, there was little evidence to suggest that they existed. However, the discovery of induced radioactivity by Irène and Frédéric Joliot-Curie in late 1933 opened up an entirely new method of researching the elements and inspired a small group of Italian scientists led by Enrico Fermi to begin a series of experiments involving neutron bombardment. Although the Joliot-Curies' experiment involved bombarding a sample of 27Al with alpha particles to produce the radioactive 30P, Fermi realized that using neutrons, which have no electrical charge, would most likely produce even better results than the positively charged alpha particles. Accordingly, in March 1934 he began systematically subjecting all of the then-known elements to neutron bombardment to determine whether others could also be induced to radioactivity. After several months of work, Fermi's group had tentatively determined that lighter elements would disperse the energy of the captured neutron by emitting a proton or alpha particle and heavier elements would generally accomplish the same by emitting a gamma ray. This latter behavior would later be shown to result in the beta decay of a neutron into a proton, thus moving the resulting isotope one place up the periodic table. When Fermi's team bombarded uranium, they observed this behavior as well, which strongly suggested that the resulting isotope had an atomic number of 93. Fermi was initially reluctant to publicize such a claim, but after his team observed several unknown half-lives in the uranium bombardment products that did not match those of any known isotope, he published a paper entitled Possible Production of Elements of Atomic Number Higher than 92 in June 1934. For element 93, he proposed the name ausenium (atomic symbol Ao), after the Greek name Ausonia for Italy. Several theoretical objections to the claims of Fermi's paper were quickly raised; in particular, the exact process that took place when an atom captured a neutron was not well understood at the time. This, and Fermi's accidental discovery three months later that nuclear reactions could be induced by slow neutrons, cast further doubt in the minds of many scientists, notably Aristid von Grosse and Ida Noddack, that the experiment was creating element 93.
While von Grosse's claim that Fermi was actually producing protactinium (element 91) was quickly tested and disproved, Noddack's proposal that the uranium had been shattered into two or more much smaller fragments was simply ignored by most because existing nuclear theory did not include a way for this to be possible. Fermi and his team maintained that they were in fact synthesizing a new element, but the issue remained unresolved for several years. Although the many different and unknown radioactive half-lives in the experiment's results showed that several nuclear reactions were occurring, Fermi's group could not prove that element 93 was being produced unless they could isolate it chemically. They and many other scientists attempted to accomplish this, including Otto Hahn and Lise Meitner, who were among the best radiochemists in the world at the time and supporters of Fermi's claim, but they all failed. Much later, it was determined that the main reason for this failure was that the predictions of element 93's chemical properties were based on a periodic table which lacked the actinide series. This arrangement placed protactinium below tantalum, uranium below tungsten, and further suggested that element 93, at that point referred to as eka-rhenium, should be similar to the group 7 elements, including manganese and rhenium. Thorium, protactinium, and uranium, with their dominant oxidation states of +4, +5, and +6 respectively, fooled scientists into thinking they belonged below hafnium, tantalum, and tungsten, rather than below the lanthanide series, which was at the time viewed as a fluke, and whose members all have dominant +3 states; neptunium, on the other hand, has a much rarer, more unstable +7 state, with +4 and +5 being the most stable. Upon finding that plutonium and the other transuranic elements also have dominant +3 and +4 states, along with the discovery of the f-block, the actinide series was firmly established. While the question of whether Fermi's experiment had produced element 93 was stalemated, two additional claims of the discovery of the element appeared, although unlike Fermi, both claimants reported observing it in nature. The first of these claims was by Czech engineer Odolen Koblic in 1934, when he extracted a small amount of material from the wash water of heated pitchblende. He proposed the name bohemium for the element, but after analysis the sample turned out to be a mixture of tungsten and vanadium. The second claim came in 1938 from Romanian physicist Horia Hulubei and French chemist Yvette Cauchois, who reported discovering the new element via spectroscopy in minerals. They named their element sequanium, but the claim was discounted because the prevailing theory at the time was that if it existed at all, element 93 would not exist naturally. However, as neptunium does in fact occur in nature in trace amounts, as demonstrated when it was found in uranium ore in 1952, it is possible that Hulubei and Cauchois did in fact observe neptunium. Although by 1938 some scientists, including Niels Bohr, were still reluctant to accept that Fermi had actually produced a new element, he was nevertheless awarded the Nobel Prize in Physics in November 1938 "for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons".
A month later, the almost totally unexpected discovery of nuclear fission by Hahn, Meitner, and Otto Frisch put an end to the possibility that Fermi had discovered element 93, because most of the unknown half-lives that had been observed by Fermi's team were rapidly identified as those of fission products. Perhaps the closest of all attempts to produce the missing element 93 was that conducted by the Japanese physicist Yoshio Nishina working with the chemist Kenjiro Kimura in 1940, just before the outbreak of the Pacific War in 1941: they bombarded 238U with fast neutrons. However, while slow neutrons tend to induce neutron capture through an (n, γ) reaction, fast neutrons tend to induce a "knock-out" (n, 2n) reaction, where one neutron is added and two more are removed, resulting in the net loss of a neutron. Nishina and Kimura, having tested this technique on 232Th and successfully produced the known 231Th and its long-lived beta decay daughter 231Pa (both occurring in the natural decay chain of 235U), therefore correctly assigned the new 6.75-day half-life activity they observed to the new isotope 237U. They confirmed that this isotope was also a beta emitter and must hence decay to the then-unknown nuclide of element 93 with mass number 237 (now known as 237Np). They attempted to isolate this nuclide by carrying it with its supposed lighter congener rhenium, but no beta or alpha decay was observed from the rhenium-containing fraction: Nishina and Kimura thus correctly speculated that the half-life of this nuclide, like that of 231Pa, was very long and hence its activity would be so weak as to be unmeasurable by their equipment. This concluded the last, and closest, of the unsuccessful searches for transuranic elements. Discovery As research on nuclear fission progressed in early 1939, Edwin McMillan at the Berkeley Radiation Laboratory of the University of California, Berkeley decided to run an experiment bombarding uranium using the powerful 60-inch (1.52 m) cyclotron that had recently been built at the university. The purpose was to separate the various fission products produced by the bombardment by exploiting the enormous force that the fragments gain from their mutual electrical repulsion after fissioning. Although he did not discover anything of note from this, McMillan did observe two new beta decay half-lives in the uranium trioxide target itself, which meant that whatever was producing the radioactivity had not violently repelled each other like normal fission products. He quickly realized that one of the half-lives closely matched the known 23-minute decay period of uranium-239, but the other half-life of 2.3 days was unknown. McMillan took the results of his experiment to chemist and fellow Berkeley professor Emilio Segrè to attempt to isolate the source of the radioactivity. Both scientists began their work using the prevailing theory that element 93 would have similar chemistry to rhenium, but Segrè rapidly determined that McMillan's sample was not at all similar to rhenium. Instead, when he reacted it with hydrogen fluoride (HF) with a strong oxidizing agent present, it behaved much like members of the rare earths. Since these elements comprise a large percentage of fission products, Segrè and McMillan decided that the half-life must have been simply another fission product, titling the paper "An Unsuccessful Search for Transuranium Elements". However, as more information about fission became available, the possibility that the fragments of nuclear fission could still have been present in the target became more remote.
McMillan and several scientists, including Philip H. Abelson, attempted again to determine what was producing the unknown half-life. In early 1940, McMillan realized that his 1939 experiment with Segrè had failed to test the chemical reactions of the radioactive source with sufficient rigor. In a new experiment, McMillan tried subjecting the unknown substance to HF in the presence of a reducing agent, something he had not done before. This reaction resulted in the sample precipitating with the HF, an action that definitively ruled out the possibility that the unknown substance was a rare-earth metal. Shortly after this, Abelson, who had received his graduate degree from the university, visited Berkeley for a short vacation and McMillan asked the more able chemist to assist with the separation of the experiment's results. Abelson very quickly observed that whatever was producing the 2.3-day half-life did not have chemistry like any known element and was actually more similar to uranium than to a rare-earth metal. This discovery finally allowed the source to be isolated and later, in 1945, led to the classification of the actinide series. As a final step, McMillan and Abelson prepared a much larger sample of bombarded uranium that had a prominent 23-minute half-life from 239U and demonstrated conclusively that the unknown 2.3-day half-life increased in strength in concert with a decrease in the 23-minute activity through the following reaction: 238U + n → 239U; 239U → 239Np (β−, half-life 23 minutes); 239Np → 239Pu (β−, half-life 2.355 days). This proved that the unknown radioactive source originated from the decay of uranium and, coupled with the previous observation that the source was different chemically from all known elements, proved beyond all doubt that a new element had been discovered. McMillan and Abelson published their results in a paper entitled Radioactive Element 93 in the Physical Review on May 27, 1940. They did not propose a name for the element in the paper, but they soon decided on the name neptunium, since Neptune is the next planet beyond Uranus in the Solar System, just as uranium is named after Uranus. McMillan and Abelson's success, compared to Nishina and Kimura's near miss, can be attributed to the favorable half-life of 239Np for radiochemical analysis and the quick decay of 239U, in contrast to the slower decay of 237U and the extremely long half-life of 237Np. Subsequent developments It was also realized that the beta decay of 239Np must produce an isotope of element 94 (now called plutonium), but the quantities involved in McMillan and Abelson's original experiment were too small to isolate and identify plutonium along with neptunium. The discovery of plutonium had to wait until the end of 1940, when Glenn T. Seaborg and his team identified the isotope plutonium-238. In 1942, Hahn and Fritz Strassmann, and independently Kurt Starke, reported the confirmation of element 93 in Berlin. Hahn's group did not pursue element 94, likely because they were discouraged by McMillan and Abelson's lack of success in isolating it. Since they had access to the stronger cyclotron in Paris at this point, Hahn's group would likely have been able to detect element 94 had they tried, albeit in tiny quantities (a few becquerels).
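The growth-and-decay correlation that McMillan and Abelson measured is described by the Bateman equations for a two-step chain. A minimal Python sketch, using the half-lives quoted in the reaction above and idealizing the bombarded target as pure 239U at t = 0:

import math

# Two-step chain: 239U (23 min) -> 239Np (2.355 d) -> 239Pu.
L_U = math.log(2) / 23.0               # decay constant of 239U, per minute
L_NP = math.log(2) / (2.355 * 1440)    # decay constant of 239Np, per minute

def activities(t_minutes, n0=1.0):
    """Activities of 239U and 239Np, starting from n0 atoms of pure 239U."""
    a_u = L_U * n0 * math.exp(-L_U * t_minutes)
    n_np = n0 * L_U / (L_NP - L_U) * (math.exp(-L_U * t_minutes)
                                      - math.exp(-L_NP * t_minutes))
    return a_u, L_NP * n_np

for t in (0, 30, 120, 600):
    a_u, a_np = activities(t)
    print(f"t = {t:3d} min: A(239U) = {a_u:.2e}, A(239Np) = {a_np:.2e}")

The 23-minute activity dies away within a few hours while the 2.3-day activity grows in step, exactly the correlation reported in Radioactive Element 93.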
Neptunium's unique radioactive characteristics allowed it to be traced as it moved through various compounds in chemical reactions; at first this was the only method available to prove that its chemistry was different from that of other elements. As the first isotope of neptunium to be discovered had such a short half-life, McMillan and Abelson were unable to prepare a sample that was large enough to perform chemical analysis of the new element using the technology that was then available. However, after the discovery of the long-lived 237Np isotope in 1942 by Glenn Seaborg and Arthur Wahl, forming weighable amounts of neptunium became a realistic endeavor. Its half-life was initially determined to be about 3 million years (later revised to 2.144 million years), confirming the prediction of Nishina and Kimura of a very long half-life. Early research into the element was somewhat limited because most of the nuclear physicists and chemists in the United States at the time were focused on the massive effort to research the properties of plutonium as part of the Manhattan Project. Research into the element did continue as a minor part of the project, and the first bulk sample of neptunium was isolated in 1944. Much of the research into the properties of neptunium since then has been focused on understanding how to confine it as a portion of nuclear waste. Because it has isotopes with very long half-lives, it is of particular concern in the context of designing confinement facilities that can last for thousands of years. It has found some limited uses as a radioactive tracer and a precursor for various nuclear reactions to produce useful plutonium isotopes. However, most of the neptunium that is produced as a reaction byproduct in nuclear power stations is considered to be a waste product. Production Synthesis The vast majority of the neptunium that currently exists on Earth was produced artificially in nuclear reactions. Neptunium-237 is the most commonly synthesized isotope, as it is the only one that both can be produced via neutron capture and has a half-life long enough for weighable quantities to be easily isolated. It is by far the most common isotope to be utilized in chemical studies of the element. When a 235U atom captures a neutron, it is converted to an excited state of 236U. About 85.5% of the excited 236U nuclei undergo fission, but the remainder decay to the ground state of 236U by emitting gamma radiation. Further neutron capture forms 237U, which has a half-life of about 7 days and quickly decays to 237Np through beta decay. During beta decay, the excited 237U emits an electron, while the weak interaction converts a neutron to a proton, thus creating 237Np. 237U is also produced via an (n,2n) reaction with 238U. This only happens with very energetic neutrons. 237Np is also the product of alpha decay of 241Am, which is produced through neutron irradiation of uranium-238. Heavier isotopes of neptunium decay quickly, and lighter isotopes of neptunium cannot be produced by neutron capture, so chemical separation of neptunium from cooled spent nuclear fuel gives nearly pure 237Np. The short-lived heavier isotopes 238Np and 239Np, useful as radioactive tracers, are produced through neutron irradiation of 237Np and 238U respectively, while the longer-lived lighter isotopes 235Np and 236Np are produced through irradiation of 235U with protons and deuterons in a cyclotron.
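The practical difference made by the 2.144-million-year half-life can be seen from the specific activity, the decay rate per gram of a pure isotope: A = (ln 2 / t1/2) × (NA / M). A minimal Python sketch of that arithmetic, with molar masses rounded to the mass numbers and the 239Np half-life as quoted earlier:

import math

AVOGADRO = 6.022e23
SECONDS_PER_YEAR = 3.156e7

def specific_activity(half_life_years, molar_mass_g):
    """Decays per second (Bq) per gram of a pure isotope."""
    decay_const = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    return decay_const * AVOGADRO / molar_mass_g

print(f"237Np: {specific_activity(2.144e6, 237):.1e} Bq/g")
print(f"239Np: {specific_activity(2.355 / 365.25, 239):.1e} Bq/g")

A gram of 237Np undergoes a few tens of millions of decays per second, which is manageable in a radiochemistry laboratory, whereas the activity of 239Np is some eight orders of magnitude higher, ruling out weighable samples of that isotope.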
Artificial 237Np metal is usually isolated through a reaction of 237NpF3 with liquid barium or lithium at around 1200 °C and is most often extracted from spent nuclear fuel rods in kilogram amounts as a by-product in plutonium production. 2 NpF3 + 3 Ba → 2 Np + 3 BaF2 By weight, neptunium-237 discharges are about 5% as great as plutonium discharges and about 0.05% of spent nuclear fuel discharges. However, even this fraction still amounts to more than fifty tons per year globally. Purification methods Recovering uranium and plutonium from spent nuclear fuel for reuse is one of the major processes of the nuclear fuel cycle. As it has a long half-life of just over 2 million years, the alpha emitter 237Np is one of the major isotopes of the minor actinides separated from spent nuclear fuel. Many separation methods have been used to separate out the neptunium, operating on small and large scales. The small-scale purification operations have the goals of preparing pure neptunium as a precursor of metallic neptunium and its compounds, and also of isolating and preconcentrating neptunium in samples for analysis. Most methods that separate neptunium ions exploit the differing chemical behaviour of the differing oxidation states of neptunium (from +3 to +6 or sometimes even +7) in solution. Among the methods that are or have been used are: solvent extraction (using various extractants, usually multidentate β-diketone derivatives, organophosphorus compounds, and amine compounds), chromatography using various ion-exchange or chelating resins, coprecipitation (possible matrices include LaF3, BiPO4, BaSO4, Fe(OH)3, and MnO2), electrodeposition, and biotechnological methods. Currently, commercial reprocessing plants use the Purex process, involving the solvent extraction of uranium and plutonium with tributyl phosphate. Chemistry and compounds Solution chemistry When it is in an aqueous solution, neptunium can exist in any of its five possible oxidation states (+3 to +7), and each of these shows a characteristic color. The stability of each oxidation state is strongly dependent on various factors, such as the presence of oxidizing or reducing agents, pH of the solution, presence of coordination complex-forming ligands, and even the concentration of neptunium in the solution. In acidic solutions, the neptunium(III) to neptunium(VII) ions exist as Np3+, Np4+, NpO2⁺, NpO2²⁺, and NpO2³⁺. In basic solutions, they exist as the oxides and hydroxides Np(OH)3, NpO2, NpO2OH, NpO2(OH)2, and NpO5³⁻. Not as much work has been done to characterize neptunium in basic solutions. Np3+ and Np4+ can easily be reduced and oxidized to each other, as can NpO2⁺ and NpO2²⁺. Neptunium(III) Np(III) or Np3+ exists as hydrated complexes in acidic solutions. It is a dark blue-purple and is analogous to its lighter congener, the pink rare-earth ion Pm3+. In the presence of oxygen, it is quickly oxidized to Np(IV) unless strong reducing agents are also present. Nevertheless, it is the second-least easily hydrolyzed neptunium ion in water, forming the NpOH2+ ion. Np3+ is the predominant neptunium ion in solutions of pH 4–5. Neptunium(IV) Np(IV) or Np4+ is pale yellow-green in acidic solutions, where it exists as hydrated complexes. It is quite unstable to hydrolysis in acidic aqueous solutions at pH 1 and above, forming NpOH3+. In basic solutions, Np4+ tends to hydrolyze to form the neutral neptunium(IV) hydroxide (Np(OH)4) and neptunium(IV) oxide (NpO2). Neptunium(V) Np(V) or NpO2⁺ is green-blue in aqueous solution, in which it behaves as a strong Lewis acid.
It is a stable ion and is the most common form of neptunium in aqueous solutions. Unlike its neighboring homologues UO2+ and PuO2+, NpO2+ does not spontaneously disproportionate except at very low pH and high concentration:
2 NpO2+ + 4 H+ ⇌ Np4+ + NpO22+ + 2 H2O
It hydrolyzes in basic solutions to form NpO2OH and NpO2(OH)2−. Neptunium(VI) Np(VI) or NpO22+, the neptunyl ion, shows a light pink or reddish color in an acidic solution and yellow-green otherwise. It is a strong Lewis acid and is the main neptunium ion encountered in solutions of pH 3–4. Though stable in acidic solutions, it is quite easily reduced to the Np(V) ion, and it is not as stable as the homologous hexavalent ions of its neighbours uranium and plutonium (the uranyl and plutonyl ions). It hydrolyzes in basic solutions to form the oxo and hydroxo ions NpO2OH+, (NpO2)2(OH)22+, and (NpO2)3(OH)5+. Neptunium(VII) Np(VII) is dark green in a strongly basic solution. Though its chemical formula in basic solution is frequently cited as [NpO5]3−, this is a simplification, and the real structure is probably closer to a hydroxo species like [NpO4(OH)2]3−. Np(VII) was first prepared in basic solution in 1967. In strongly acidic solution, Np(VII) is found as NpO3+; water quickly reduces this to Np(VI). Its hydrolysis products are uncharacterized. Hydroxides The oxides and hydroxides of neptunium are closely related to its ions. In general, Np hydroxides at various oxidation levels are less stable than those of the actinides before it on the periodic table, such as thorium and uranium, and more stable than those of the actinides after it, such as plutonium and americium. This is because the stability of an ion increases as the ratio of atomic number to ionic radius increases, so actinides with higher atomic numbers more readily undergo hydrolysis. Neptunium(III) hydroxide is quite stable in acidic solutions and in environments that lack oxygen, but it will rapidly oxidize to the IV state in the presence of air. It is not soluble in water. Np(IV) hydroxides exist mainly as the electrically neutral Np(OH)4, and its mild solubility in water is not affected at all by the pH of the solution. This suggests that the other Np(IV) hydroxide, Np(OH)5−, does not have a significant presence. Because the Np(V) ion NpO2+ is very stable, it can only form a hydroxide in high acidity levels. When placed in a 0.1 M sodium perchlorate solution, it does not react significantly for a period of months, although a higher molar concentration of 3.0 M will result in it reacting to the solid hydroxide NpO2OH almost immediately. Np(VI) hydroxide is more reactive but still fairly stable in acidic solutions. It will form the compound NpO3·H2O in the presence of ozone under various carbon dioxide pressures. Np(VII) has not been well studied and no neutral hydroxides have been reported. It probably exists mostly as [NpO4(OH)2]3−. Oxides Three anhydrous neptunium oxides have been reported, NpO2, Np2O5, and Np3O8, though some studies have stated that only the first two of these exist, suggesting that claims of Np3O8 are actually the result of mistaken analysis of Np2O5. However, as the full extent of the reactions that occur between neptunium and oxygen has yet to be researched, it is not certain which of these claims is accurate. Although neptunium oxides have not been produced with neptunium in oxidation states as high as those possible with the adjacent actinide uranium, neptunium oxides are more stable at lower oxidation states. This behavior is illustrated by the fact that NpO2 can be produced by simply burning neptunium salts of oxyacids in air.
The greenish-brown NpO2 is very stable over a large range of pressures and temperatures and does not undergo phase transitions at low temperatures. It does show a phase transition from face-centered cubic to orthorhombic at around 33–37 GPa, although it returns to its original phase when the pressure is released. It remains stable under oxygen pressures up to 2.84 MPa and temperatures up to 400 °C. Np2O5 is black-brown in color and monoclinic with a lattice size of 418×658×409 picometres. It is relatively unstable and decomposes to NpO2 and O2 at 420–695 °C. Although Np2O5 was initially the subject of several studies that claimed to produce it with mutually contradictory methods, it was eventually prepared successfully by heating neptunium peroxide to 300–350 °C for 2–3 hours or by heating it under a layer of water in an ampoule at 180 °C. Neptunium also forms a large number of oxide compounds with a wide variety of elements, although the neptunate oxides formed with alkali metals and alkaline earth metals have been by far the most studied. Ternary neptunium oxides are generally formed by reacting NpO2 with the oxide of another element or by precipitating from an alkaline solution. Li5NpO6 has been prepared by reacting Li2O and NpO2 at 400 °C for 16 hours, or by reacting Li2O2 with NpO3·H2O at 400 °C for 16 hours in a quartz tube under flowing oxygen. The alkali neptunate compounds K3NpO5, Cs3NpO5, and Rb3NpO5 are all produced by a similar reaction:
NpO2 + 3 MO2 → M3NpO5 (M = K, Cs, Rb)
The oxide compounds KNpO4, CsNpO4, and RbNpO4 are formed by reacting Np(VII) ([NpO4(OH)2]3−) with the corresponding alkali metal nitrate and ozone. Additional compounds have been produced by reacting NpO3 and water with solid alkali and alkaline earth peroxides at temperatures of 400–600 °C for 15–30 hours. Some of these include Ba3(NpO5)2, Ba2NaNpO6, and Ba2LiNpO6. Also, a considerable number of hexavalent neptunium oxides are formed by reacting solid-state NpO2 with various alkali or alkaline earth oxides in an environment of flowing oxygen. Many of the resulting compounds also have an equivalent compound that substitutes uranium for neptunium. Some compounds that have been characterized include Na2Np2O7, Na4NpO5, Na6NpO6, and Na2NpO4. These can be obtained by heating different combinations of NpO2 and Na2O to various temperature thresholds, and further heating will also cause these compounds to exhibit different neptunium allotropes. The lithium neptunate oxides Li6NpO6 and Li4NpO5 can be obtained with similar reactions of NpO2 and Li2O. A large number of additional alkali and alkaline earth neptunium oxide compounds, such as Cs4Np5O17 and Cs2Np3O10, have been characterized with various production methods. Neptunium has also been observed to form ternary oxides with many additional elements in groups 3 through 7, although these compounds are much less well studied. Halides Although neptunium halide compounds have not been nearly as well studied as its oxides, a fairly large number have been successfully characterized. Of these, neptunium fluorides have been the most extensively researched, largely because of their potential use in separating the element from nuclear waste products. Four binary neptunium fluoride compounds, NpF3, NpF4, NpF5, and NpF6, have been reported.
The first two are fairly stable and were first prepared in 1947 through the following reactions:
2 NpO2 + H2 + 6 HF → 2 NpF3 + 4 H2O (400 °C)
4 NpF3 + O2 + 4 HF → 4 NpF4 + 2 H2O (400 °C)
Later, NpF4 was obtained directly by heating NpO2 to various temperatures in mixtures of either hydrogen fluoride or pure fluorine gas. NpF5 is much more difficult to form, and most known preparation methods involve reacting NpF4 or NpF6 with various other fluoride compounds. NpF5 will decompose into NpF4 and NpF6 when heated to around 320 °C. NpF6 or neptunium hexafluoride is extremely volatile, as are its adjacent actinide compounds uranium hexafluoride (UF6) and plutonium hexafluoride (PuF6). This volatility has attracted a large amount of interest to the compound in an attempt to devise a simple method for extracting neptunium from spent nuclear power station fuel rods. NpF6 was first prepared in 1943 by reacting NpF3 and gaseous fluorine at very high temperatures, and the first bulk quantities were obtained in 1958 by heating NpF4 and dripping pure fluorine on it in a specially prepared apparatus. Additional methods that have successfully produced neptunium hexafluoride include reacting BrF3 and BrF5 with NpF4 and reacting several different neptunium oxide and fluoride compounds with anhydrous hydrogen fluoride. Four neptunium oxyfluoride compounds, NpO2F, NpOF3, NpO2F2, and NpOF4, have been reported, although none of them have been extensively studied. NpO2F2 is a pinkish solid and can be prepared by reacting NpO3·H2O and Np2F5 with pure fluorine at around 330 °C. NpOF3 and NpOF4 can be produced by reacting neptunium oxides with anhydrous hydrogen fluoride at various temperatures. Neptunium also forms a wide variety of fluoride compounds with various elements. Some of these that have been characterized include CsNpF6, Rb2NpF7, Na3NpF8, and K3NpO2F5. Two neptunium chlorides, NpCl3 and NpCl4, have been characterized. Although several attempts to obtain NpCl5 have been made, they have not been successful. NpCl3 is produced by reducing neptunium dioxide with hydrogen and carbon tetrachloride (CCl4), and NpCl4 by reacting a neptunium oxide with CCl4 at around 500 °C. Other neptunium chloride compounds have also been reported, including NpOCl2, Cs2NpCl6, Cs3NpO2Cl4, and Cs2NaNpCl6. The neptunium bromides NpBr3 and NpBr4 have also been produced, the latter by reacting aluminium bromide with NpO2 at 350 °C and the former by an almost identical procedure but with zinc present. The neptunium iodide NpI3 has also been prepared by the same method as NpBr3. Chalcogenides, pnictides, and carbides Neptunium chalcogen and pnictogen compounds have been well studied, primarily as part of research into their electronic and magnetic properties and their interactions in the natural environment. Pnictide and carbide compounds have also attracted interest because of their presence in the fuel of several advanced nuclear reactor designs, although the latter group has had much less research than the former. Chalcogenides A wide variety of neptunium sulfide compounds have been characterized, including the pure sulfide compounds NpS, NpS3, Np2S5, Np3S5, Np2S3, and Np3S4. Of these, Np2S3, prepared by reacting NpO2 with hydrogen sulfide and carbon disulfide at around 1000 °C, is the most well-studied, and three allotropic forms are known. The α form exists up to around 1230 °C, the β up to 1530 °C, and the γ form, which can also exist as Np3S4, at higher temperatures.
NpS can be produced by reacting Np2S3 and neptunium metal at 1600 °C, and Np3S5 can be prepared by the decomposition of Np2S3 at 500 °C or by reacting sulfur and neptunium hydride at 650 °C. Np2S5 is made by heating a mixture of Np3S5 and pure sulfur to 500 °C. All of the neptunium sulfides except for the β and γ forms of Np2S3 are isostructural with the equivalent uranium sulfide, and several, including NpS, α−Np2S3, and β−Np2S3, are also isostructural with the equivalent plutonium sulfide. The oxysulfides NpOS, Np4O4S, and Np2O2S have also been produced, although the latter two have not been well studied. NpOS was first prepared in 1985 by vacuum sealing NpO2, Np3S5, and pure sulfur in a quartz tube and heating it to 900 °C for one week. Neptunium selenide compounds that have been reported include NpSe, NpSe3, Np2Se3, Np2Se5, Np3Se4, and Np3Se5. All of these have only been obtained by heating neptunium hydride and selenium metal to various temperatures in a vacuum for an extended period of time, and Np2Se3 is only known to exist in the γ allotrope at relatively high temperatures. Two neptunium oxyselenide compounds are known, NpOSe and Np2O2Se; they are formed by similar methods, replacing the neptunium hydride with neptunium dioxide. The known neptunium telluride compounds NpTe, NpTe3, Np3Te4, Np2Te3, and Np2O2Te are formed by procedures similar to those for the selenides, and Np2O2Te is isostructural to the equivalent uranium and plutonium compounds. No neptunium–polonium compounds have been reported. Pnictides and carbides Neptunium nitride (NpN) was first prepared in 1953 by reacting neptunium hydride and ammonia gas at around 750 °C in a quartz capillary tube. Later, it was produced by reacting different mixtures of nitrogen and hydrogen with neptunium metal at various temperatures. It has also been produced by the reduction of neptunium dioxide with diatomic nitrogen gas at 1550 °C. NpN is isomorphous with uranium mononitride (UN) and plutonium mononitride (PuN) and has a melting point of 2830 °C under a nitrogen pressure of around 1 MPa. Two neptunium phosphide compounds have been reported, NpP and Np3P4. The first has a face-centered cubic structure and is prepared by converting neptunium metal to a powder and then reacting it with phosphine gas at 350 °C. Np3P4 can be produced by reacting neptunium metal with red phosphorus at 740 °C in a vacuum and then allowing any extra phosphorus to sublime away. The compound is non-reactive with water but will react with nitric acid to produce a Np(IV) solution. Three neptunium arsenide compounds have been prepared, NpAs, NpAs2, and Np3As4. The first two were first produced by heating arsenic and neptunium hydride in a vacuum-sealed tube for about a week. Later, NpAs was also made by confining neptunium metal and arsenic in a vacuum tube, separating them with a quartz membrane, and heating them to just below neptunium's melting point of 639 °C, which is slightly higher than arsenic's sublimation point of 615 °C. Np3As4 is prepared by a similar procedure using iodine as a transporting agent. NpAs2 crystals are brownish gold and Np3As4 is black. The neptunium antimonide compound NpSb was produced in 1971 by placing equal quantities of both elements in a vacuum tube, heating them to the melting point of antimony, and then heating them further to 1000 °C for sixteen days. This procedure also produced trace amounts of an additional antimonide compound, Np3Sb4. One neptunium–bismuth compound, NpBi, has also been reported.
The neptunium carbides NpC, Np2C3, and NpC2 (tentative) have been reported, but have not been characterized in detail despite the high importance and utility of actinide carbides as advanced nuclear reactor fuel. NpC is a non-stoichiometric compound, and could be better labelled as NpCx (0.82 ≤ x ≤ 0.96). It may be obtained from the reaction of neptunium hydride with graphite at 1400 °C or by heating the constituent elements together in an electric arc furnace using a tungsten electrode. It reacts with excess carbon to form pure Np2C3. NpC2 is formed by heating NpO2 in a graphite crucible at 2660–2800 °C. Other inorganic Hydrides Neptunium reacts with hydrogen in a similar manner to its neighbor plutonium, forming the hydrides NpH2+x (face-centered cubic) and NpH3 (hexagonal). These are isostructural with the corresponding plutonium hydrides, although unlike PuH2+x, the lattice parameters of NpH2+x become greater as the hydrogen content (x) increases. The hydrides require extreme care in handling, as they decompose in a vacuum at 300 °C to form finely divided neptunium metal, which is pyrophoric. Phosphates, sulfates, and carbonates Being chemically stable, neptunium phosphates have been investigated for potential use in immobilizing nuclear waste. Neptunium pyrophosphate (α-NpP2O7), a green solid, has been produced in the reaction between neptunium dioxide and boron phosphate at 1100 °C, though neptunium(IV) phosphate has so far remained elusive. The series of compounds NpM2(PO4)3, where M is an alkali metal (Li, Na, K, Rb, or Cs), are all known. Some neptunium sulfates have been characterized, both aqueous and solid and at various oxidation states of neptunium (IV through VI have been observed). Additionally, neptunium carbonates have been investigated to achieve a better understanding of the behavior of neptunium in geological repositories and the environment, where it may come into contact with carbonate and bicarbonate aqueous solutions and form soluble complexes. Organometallic A few organoneptunium compounds are known and chemically characterized, although not as many as for uranium due to neptunium's scarcity and radioactivity. The most well known organoneptunium compounds are the cyclopentadienyl and cyclooctatetraenyl compounds and their derivatives. The trivalent cyclopentadienyl compound Np(C5H5)3·THF was obtained in 1972 from reacting Np(C5H5)3Cl with sodium, although the simpler Np(C5H5)3 could not be obtained. Tetravalent neptunium cyclopentadienyl, a reddish-brown complex, was synthesized in 1968 by reacting neptunium(IV) chloride with potassium cyclopentadienide:
NpCl4 + 4 KC5H5 → Np(C5H5)4 + 4 KCl
It is soluble in benzene and THF, and is less sensitive to oxygen and water than Pu(C5H5)3 and Am(C5H5)3. Other Np(IV) cyclopentadienyl compounds are known for many ligands: they have the general formula (C5H5)3NpL, where L represents a ligand. Neptunocene, Np(C8H8)2, was synthesized in 1970 by reacting neptunium(IV) chloride with K2(C8H8). It is isomorphous to uranocene and plutonocene, and the three behave chemically identically: all are insensitive to water and dilute bases but are sensitive to air, reacting quickly to form oxides, and are only slightly soluble in benzene and toluene. Other known neptunium cyclooctatetraenyl derivatives include Np(RC8H7)2 (R = ethanol, butanol) and KNp(C8H8)·2THF, which is isostructural to the corresponding plutonium compound.
In addition, neptunium hydrocarbyls have been prepared, and solvated triiodide complexes of neptunium are a precursor to many organoneptunium and inorganic neptunium compounds. Coordination complexes There is much interest in the coordination chemistry of neptunium, because its five oxidation states all exhibit their own distinctive chemical behavior, and the coordination chemistry of the actinides is heavily influenced by the actinide contraction (the greater-than-expected decrease in ionic radii across the actinide series, analogous to the lanthanide contraction). Solid state Few neptunium(III) coordination compounds are known, because Np(III) is readily oxidized by atmospheric oxygen while in aqueous solution. However, sodium formaldehyde sulfoxylate can reduce Np(IV) to Np(III), stabilizing the lower oxidation state and forming various sparingly soluble Np(III) coordination complexes, including several hydrates. Many neptunium(IV) coordination compounds have been reported, the first of which is isostructural with the analogous uranium(IV) coordination compound. Other Np(IV) coordination compounds are known, some involving other metals such as cobalt (an octahydrate formed at 400 K) and copper (a hexahydrate formed at 600 K). Complex nitrate compounds are also known: the experimenters who produced them in 1986 and 1987 obtained single crystals by slow evaporation of the Np(IV) solution at ambient temperature in concentrated nitric acid and excess 2,2′-pyrimidine. The coordination chemistry of neptunium(V) has been extensively researched due to the presence of cation–cation interactions in the solid state, which had already been known for actinyl ions. Known examples include an octahydrate of a neptunyl dimer salt and neptunium glycolate, both of which form green crystals. Neptunium(VI) compounds range from the simple oxalate (which is unstable, usually becoming Np(IV)) to considerably more complicated green complexes. Extensive study has been performed on a series of compounds in which M represents a monovalent cation and An is either uranium, neptunium, or plutonium. Since 1967, when neptunium(VII) was discovered, some coordination compounds with neptunium in the +7 oxidation state have been prepared and studied. The first such compound to be reported was initially characterized as a hydrate of variable water content in 1968, but in 1973 it was suggested to actually be a dihydrate, based on the fact that Np(VII) occurs as [NpO4(OH)2]3− in aqueous solution. This compound forms dark green prismatic crystals with a maximum edge length of 0.15–0.4 mm. In aqueous solution Most neptunium coordination complexes known in solution involve the element in the +4, +5, and +6 oxidation states; only a few studies have been done on neptunium(III) and (VII) coordination complexes. For the former, the halide complexes [NpX]2+ and [NpX2]+ (X = Cl, Br) were obtained in 1966 in concentrated LiCl and LiBr solutions, respectively; for the latter, 1970 experiments discovered that Np(VII) could form sulfate complexes in acidic solutions, and these were found to have higher stability constants than those of the neptunyl ion (NpO22+). A great many complexes for the other neptunium oxidation states are known: the inorganic ligands involved are the halides, iodate, azide, nitride, nitrate, thiocyanate, sulfate, carbonate, chromate, and phosphate. Many organic ligands are known to be able to be used in neptunium coordination complexes: they include acetate, propionate, glycolate, lactate, oxalate, malonate, phthalate, mellitate, and citrate.
Analogously to its neighbours uranium and plutonium, the order of the neptunium ions in terms of complex formation ability is Np4+ > NpO22+ ≥ Np3+ > NpO2+. (The relative order of the middle two neptunium ions depends on the ligands and solvents used.) The stability sequence for Np(IV), Np(V), and Np(VI) complexes with monovalent inorganic ligands is F− > H2PO4− > SCN− > NO3− > Cl− > ClO4−; the order for divalent inorganic ligands is CO32− > HPO42− > SO42−. These follow the strengths of the corresponding acids. The divalent ligands are more strongly complexing than the monovalent ones. NpO2+ can also form complex ions with the trivalent metal cations M3+ (M = Al, Ga, Sc, In, Fe, Cr, Rh) in perchloric acid solution; the strength of interaction between the two cations follows the order Fe > In > Sc > Ga > Al. The neptunyl and uranyl ions can also form a complex together. Applications Precursor in plutonium-238 production An important use of 237Np is as a precursor in plutonium-238 production, where it is irradiated with neutrons to form 238Pu, an alpha emitter for radioisotope thermal generators for spacecraft and military applications. 237Np captures a neutron to form 238Np, which beta decays with a half-life of just over two days to 238Pu:
237Np + n → 238Np → 238Pu (β−, half-life 2.117 days)
238Pu also exists in sizable quantities in spent nuclear fuel but would have to be separated from other isotopes of plutonium. Irradiating neptunium-237 with electron beams, provoking bremsstrahlung, also produces quite pure samples of the isotope plutonium-236, useful as a tracer to determine plutonium concentration in the environment. Weapons Neptunium is fissionable, and could theoretically be used as fuel in a fast-neutron reactor or a nuclear weapon, with a critical mass of around 60 kilograms. In 1992, the U.S. Department of Energy declassified the statement that neptunium-237 "can be used for a nuclear explosive device". It is not believed that an actual weapon has ever been constructed using neptunium. As of 2009, the world production of neptunium-237 by commercial power reactors was over 1000 critical masses a year, but to extract the isotope from irradiated fuel elements would be a major industrial undertaking. In September 2002, researchers at the Los Alamos National Laboratory briefly produced the first known nuclear critical mass using neptunium in combination with shells of enriched uranium (uranium-235), discovering that the critical mass of a bare sphere of neptunium-237 "ranges from kilogram weights in the high fifties to low sixties," showing that it "is about as good a bomb material as [uranium-235]." The United States federal government made plans in March 2004 to move America's supply of separated neptunium to a nuclear-waste disposal site in Nevada. Physics 237Np is used in devices for detecting high-energy (MeV) neutrons. Role in nuclear waste Neptunium accumulates in commercial household ionization-chamber smoke detectors from decay of the (typically) 0.2 microgram of americium-241 initially present as a source of ionizing radiation. With a half-life of 432 years, the americium-241 in an ionization smoke detector includes about 3% neptunium after 20 years, and about 15% after 100 years. Under oxidizing conditions, neptunium-237 is the most mobile actinide in the deep geological repository environment of the Yucca Mountain project in Nevada. This makes it and its predecessors such as americium-241 candidates of interest for destruction by nuclear transmutation.
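The smoke-detector ingrowth figures quoted above follow directly from first-order decay. A minimal sketch (Python; it assumes pure 241Am at time zero and neglects the onward decay of 237Np, whose half-life is roughly five thousand times longer):

import math

AM241_HALF_LIFE_Y = 432.0  # years, as given above

def np237_fraction(years):
    # Fraction of the original americium-241 that has decayed
    # to neptunium-237 after the given time.
    decay_const = math.log(2) / AM241_HALF_LIFE_Y
    return 1.0 - math.exp(-decay_const * years)

for t in (20, 100):
    print(f"after {t} years: {np237_fraction(t):.1%} neptunium-237")
# after 20 years: 3.2% neptunium-237
# after 100 years: 14.8% neptunium-237

The computed 3.2% and 14.8% match the rounded figures in the text.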
Due to its long half-life, neptunium will become the major contributor to the total radiotoxicity at Yucca Mountain after 10,000 years. As it is unclear what happens to the containment of non-reprocessed spent fuel over such a long time span, extracting and transmuting neptunium during spent fuel reprocessing could help to minimize contamination of the environment should the nuclear waste become mobile after several thousand years. Biological role and precautions Neptunium does not have a biological role, as its isotopes have half-lives that are short compared with the age of the Earth, and it occurs naturally only in small traces. Animal tests show it to be absorbed poorly (~1%) via the digestive tract. When injected, it concentrates in the bones, from which it is slowly released. Finely divided neptunium metal presents a fire hazard because neptunium is pyrophoric; small grains will ignite spontaneously in air at room temperature.
Nobelium
Nobelium is a synthetic chemical element; it has symbol No and atomic number 102. It is named after Alfred Nobel, the inventor of dynamite and benefactor of science. A radioactive metal, it is the tenth transuranium element, the second transfermium, and is the penultimate member of the actinide series. Like all elements with atomic number over 100, nobelium can only be produced in particle accelerators by bombarding lighter elements with charged particles. A total of fourteen nobelium isotopes are known to exist; the most stable is 259No with a half-life of 58 minutes, but the shorter-lived 255No (half-life 3.1 minutes) is most commonly used in chemistry because it can be produced on a larger scale. Chemistry experiments have confirmed that nobelium behaves as a heavier homolog to ytterbium in the periodic table. The chemical properties of nobelium are not completely known: they are mostly only known in aqueous solution. Before nobelium's discovery, it was predicted that it would show a stable +2 oxidation state as well as the +3 state characteristic of the other actinides; these predictions were later confirmed, as the +2 state is much more stable than the +3 state in aqueous solution and it is difficult to keep nobelium in the +3 state. In the 1950s and 1960s, many claims of the discovery of nobelium were made from laboratories in Sweden, the Soviet Union, and the United States. Although the Swedish scientists soon retracted their claims, the priority of the discovery and therefore the naming of the element was disputed between Soviet and American scientists. It was not until 1992 that the International Union of Pure and Applied Chemistry (IUPAC) credited the Soviet team with the discovery. Even so, nobelium, the Swedish proposal, was retained as the name of the element due to its long-standing use in the literature. Introduction Discovery The discovery of element 102 was a complicated process and was claimed by groups from Sweden, the United States, and the Soviet Union. The first complete and incontrovertible report of its detection only came in 1966, from the Joint Institute for Nuclear Research at Dubna (then in the Soviet Union). The discovery of element 102 was first announced by physicists at the Nobel Institute for Physics in Sweden in 1957. The team reported that they had bombarded a curium target with carbon-13 ions for twenty-five hours in half-hour intervals. Between bombardments, ion-exchange chemistry was performed on the target. Twelve out of the fifty bombardments contained samples emitting (8.5 ± 0.1) MeV alpha particles, which were in drops that eluted earlier than fermium (atomic number Z = 100) and californium (Z = 98). The half-life reported was 10 minutes and was assigned to either 251102 or 253102, although the possibility that the alpha particles observed were from a presumably short-lived mendelevium (Z = 101) isotope created from the electron capture of element 102 was not excluded. The team proposed the name nobelium (No) for the new element, which was immediately approved by IUPAC, a decision which the Dubna group characterized in 1968 as hasty. In 1958, scientists at the Lawrence Berkeley National Laboratory repeated the experiment. The Berkeley team, consisting of Albert Ghiorso, Glenn T. Seaborg, John R. Walton and Torbjørn Sikkeland, used the new heavy-ion linear accelerator (HILAC) to bombard a curium target (95% 244Cm and 5% 246Cm) with 13C and 12C ions.
They were unable to confirm the 8.5 MeV activity claimed by the Swedes but were instead able to detect decays from fermium-250, supposedly the daughter of 254102 (produced from the curium-246), which had an apparent half-life of ~3 s. This assignment was probably also wrong, as later Dubna work in 1963 showed that the half-life of 254No is significantly longer (about 50 s). It is more likely that the observed alpha decays did not come from element 102, but rather from 250mFm. In 1959, the Swedish team attempted to explain the Berkeley team's inability to detect element 102 in 1958, maintaining that they did discover it. However, later work has shown that no nobelium isotopes lighter than 259No (no heavier isotopes could have been produced in the Swedish experiments) with a half-life over 3 minutes exist, and that the Swedish team's results are most likely from thorium-225, which has a half-life of 8 minutes and quickly undergoes triple alpha decay to polonium-213, which has a decay energy of 8.53612 MeV. This hypothesis is lent weight by the fact that thorium-225 can easily be produced in the reaction used and would not be separated out by the chemical methods used. Later work on nobelium also showed that the divalent state is more stable than the trivalent one and hence that the samples emitting the alpha particles could not have contained nobelium, as the divalent nobelium would not have eluted with the other trivalent actinides. Thus, the Swedish team later retracted their claim and associated the activity with background effects. In 1959, the Berkeley team continued their studies and claimed that they were able to produce an isotope that decayed predominantly by emission of an 8.3 MeV alpha particle, with a half-life of 3 s and an associated 30% spontaneous fission branch. The activity was initially assigned to 254102 but later changed to 252102. However, they also noted that it was not certain that element 102 had been produced due to difficult conditions. The Berkeley team decided to adopt the proposed name of the Swedish team, "nobelium", for the element.
244Cm + 12C → 256No* → 252No + 4 n
Meanwhile, in Dubna, experiments were carried out in 1958 and 1960 aiming to synthesize element 102 as well. The first 1958 experiment bombarded plutonium-239 and -241 with oxygen-16 ions. Some alpha decays with energies just over 8.5 MeV were observed, and they were assigned to 251,252,253102, although the team wrote that formation of isotopes from lead or bismuth impurities (which would not produce nobelium) could not be ruled out. While later 1958 experiments noted that new isotopes could be produced from mercury, thallium, lead, or bismuth impurities, the scientists still stood by their conclusion that element 102 could be produced from this reaction, mentioning a half-life of under 30 seconds and a decay energy of (8.8 ± 0.5) MeV. Later 1960 experiments proved that these were background effects. Experiments in 1967 also lowered the decay energy to (8.6 ± 0.4) MeV, but both values are too high to possibly match those of 253No or 254No. The Dubna team later stated in 1970 and again in 1987 that these results were not conclusive. In 1961, Berkeley scientists claimed the discovery of element 103 in the reaction of californium with boron and carbon ions. They claimed the production of the isotope 257103, and also claimed to have synthesized an alpha-decaying isotope of element 102 that had a half-life of 15 s and an alpha decay energy of 8.2 MeV. They assigned this to 255102 without giving a reason for the assignment.
The values do not agree with those now known for 255No, although they do agree with those now known for 257No, and while this isotope probably played a part in this experiment, its discovery was inconclusive. Work on element 102 also continued in Dubna, and in 1964, experiments were carried out there to detect alpha-decay daughters of element 102 isotopes by synthesizing element 102 from the reaction of a uranium-238 target with neon ions. The products were carried along a silver catcher foil and purified chemically, and the isotopes 250Fm and 252Fm were detected. The yield of 252Fm was interpreted as evidence that its parent 256102 was also synthesized: as it was noted that 252Fm could also be produced directly in this reaction by the simultaneous emission of an alpha particle with the excess neutrons, steps were taken to ensure that 252Fm could not go directly to the catcher foil. The half-life detected for 256102 was 8 s, which is much higher than the more modern 1967 value of (3.2 ± 0.2) s. Further experiments were conducted in 1966 for 254102, using the reactions 243Am(15N,4n)254102 and 238U(22Ne,6n)254102, finding a half-life of (50 ± 10) s; at that time the discrepancy between this value and the earlier Berkeley value was not understood, although later work proved that the formation of the isomer 250mFm was less likely in the Dubna experiments than in the Berkeley ones. In hindsight, the Dubna results on 254102 were probably correct and can now be considered a conclusive detection of element 102. One more very convincing experiment from Dubna was published in 1966 (though it was submitted in 1965), again using the same two reactions, which concluded that 254102 indeed had a half-life much longer than the 3 seconds claimed by Berkeley. Later work in 1967 at Berkeley and 1971 at the Oak Ridge National Laboratory fully confirmed the discovery of element 102 and clarified earlier observations. In December 1966, the Berkeley group repeated the Dubna experiments and fully confirmed them, and used these data to correctly assign the isotopes they had previously synthesized but could not yet identify at the time, and thus claimed to have discovered nobelium in 1958 to 1961.
238U + 22Ne → 260No* → 254No + 6 n
In 1969, the Dubna team carried out chemical experiments on element 102 and concluded that it behaved as the heavier homologue of ytterbium. The Russian scientists proposed the name joliotium (Jo) for the new element after Irène Joliot-Curie, who had recently died, creating an element naming controversy that would not be resolved for several decades, with each group using its own proposed names. In 1992, the IUPAC-IUPAP Transfermium Working Group (TWG) reassessed the claims of discovery and concluded that only the Dubna work from 1966 correctly detected and assigned decays to nuclei with atomic number 102 at the time. The Dubna team are therefore officially recognized as the discoverers of nobelium, although it is possible that it was detected at Berkeley in 1959. This decision was criticized by Berkeley the following year, calling the reopening of the cases of elements 101 to 103 a "futile waste of time", while Dubna agreed with IUPAC's decision. In 1994, as part of an attempted resolution to the element naming controversy, IUPAC ratified names for elements 101–109. For element 102, it ratified the name nobelium (No) on the basis that it had become entrenched in the literature over the course of 30 years and that Alfred Nobel should be commemorated in this fashion.
Because of outcry over the 1994 names, which mostly did not respect the choices of the discoverers, a comment period ensued, and in 1995 IUPAC named element 102 flerovium (Fl) as part of a new proposal, after either Georgy Flyorov or his eponymous Flerov Laboratory of Nuclear Reactions. This proposal was also not accepted, and in 1997 the name nobelium was restored. Today the name flerovium, with the same symbol, refers to element 114. Characteristics Physical In the periodic table, nobelium is located to the right of the actinide mendelevium, to the left of the actinide lawrencium, and below the lanthanide ytterbium. Nobelium metal has not yet been prepared in bulk quantities, and bulk preparation is currently impossible. Nevertheless, a number of predictions have been made and some preliminary experimental results obtained regarding its properties. The lanthanides and actinides, in the metallic state, can exist as either divalent (such as europium and ytterbium) or trivalent (most other lanthanides) metals. The former have fns2 configurations, whereas the latter have fn−1d1s2 configurations. In 1975, Johansson and Rosengren examined the measured and predicted values for the cohesive energies (enthalpies of crystallization) of the metallic lanthanides and actinides, both as divalent and trivalent metals. The conclusion was that the increased binding energy of the [Rn]5f136d17s2 configuration over the [Rn]5f147s2 configuration for nobelium was not enough to compensate for the energy needed to promote one 5f electron to 6d, as is true also for the very late actinides: thus einsteinium, fermium, mendelevium, and nobelium were expected to be divalent metals, although for nobelium this prediction has not yet been confirmed. The increasing predominance of the divalent state well before the actinide series concludes is attributed to the relativistic stabilization of the 5f electrons, which increases with increasing atomic number: an effect of this is that nobelium is predominantly divalent instead of trivalent, unlike all the other lanthanides and actinides. In 1986, nobelium metal was estimated to have an enthalpy of sublimation of about 126 kJ/mol, a value close to the values for einsteinium, fermium, and mendelevium and supporting the theory that nobelium would form a divalent metal. Like the other divalent late actinides (except the once again trivalent lawrencium), metallic nobelium should assume a face-centered cubic crystal structure. Divalent nobelium metal should have a metallic radius of around 197 pm. Nobelium's melting point has been predicted to be 800 °C, the same value as that estimated for the neighboring element mendelevium. Its density is predicted to be around 9.9 ± 0.4 g/cm3. Chemical The chemistry of nobelium is incompletely characterized and is known only in aqueous solution, in which it can take on the +3 or +2 oxidation states, the latter being more stable. It was largely expected before the discovery of nobelium that in solution, it would behave like the other actinides, with the trivalent state being predominant; however, Seaborg predicted in 1949 that the +2 state would also be relatively stable for nobelium, as the No2+ ion would have the ground-state electron configuration [Rn]5f14, including the stable filled 5f14 shell. It took nineteen years before this prediction was confirmed. In 1967, experiments were conducted to compare nobelium's chemical behavior to that of terbium, californium, and fermium.
All four elements were reacted with chlorine and the resulting chlorides were deposited along a tube, along which they were carried by a gas. It was found that the nobelium chloride produced was strongly adsorbed on solid surfaces, proving that it was not very volatile, like the chlorides of the other three investigated elements. However, both NoCl2 and NoCl3 were expected to exhibit nonvolatile behavior, and hence this experiment was inconclusive as to what the preferred oxidation state of nobelium was. Determination of nobelium's favoring of the +2 state had to wait until the next year, when cation-exchange chromatography and coprecipitation experiments were carried out on around fifty thousand 255No atoms, finding that it behaved differently from the other actinides and more like the divalent alkaline earth metals. This proved that in aqueous solution, nobelium is most stable in the divalent state when strong oxidizers are absent. Later experimentation in 1974 showed that nobelium eluted with the alkaline earth metals, between Ca2+ and Sr2+. Nobelium is the only known f-block element for which the +2 state is the most common and stable one in aqueous solution. This occurs because of the large energy gap between the 5f and 6d orbitals at the end of the actinide series. It is expected that the relativistic stabilization of the 7s subshell greatly destabilizes nobelium dihydride, NoH2, and that relativistic stabilization of the 7p1/2 spinor over the 6d3/2 spinor means that excited states in nobelium atoms have 7s and 7p contributions instead of the expected 6d contribution. The long No–H distances in the NoH2 molecule and the significant charge transfer lead to extreme ionicity, with a dipole moment of 5.94 D for this molecule. In this molecule, nobelium is expected to exhibit main-group-like behavior, specifically acting like an alkaline earth metal with its ns2 valence shell configuration and core-like 5f orbitals. Nobelium's complexing ability with chloride ions is most similar to that of barium, which complexes rather weakly. Its complexing ability with citrate, oxalate, and acetate in an aqueous solution of 0.5 M ammonium nitrate is between that of calcium and strontium, although it is somewhat closer to that of strontium. The standard reduction potential of the E°(No3+→No2+) couple was estimated in 1967 to be between +1.4 and +1.5 V; it was later found in 2009 to be only about +0.75 V. The positive value shows that No2+ is more stable than No3+ and that No3+ is a good oxidizing agent. While the quoted values for the E°(No2+→No0) and E°(No3+→No0) couples vary among sources, the accepted standard estimates are −2.61 and −1.26 V. It has been predicted that the value for the E°(No4+→No3+) couple would be +6.5 V. The Gibbs energies of formation for No3+ and No2+ are estimated to be −342 and −480 kJ/mol, respectively. Atomic A nobelium atom has 102 electrons. They are expected to be arranged in the configuration [Rn]5f147s2 (ground state term symbol 1S0), although experimental verification of this electron configuration had not yet been made as of 2006. The sixteen electrons in the 5f and 7s subshells are valence electrons. In forming compounds, three valence electrons may be lost, leaving behind a [Rn]5f13 core: this conforms to the trend set by the other actinides with their [Rn]5fn electron configurations in the tripositive state. Nevertheless, it is more likely that only two valence electrons are lost, leaving behind a stable [Rn]5f14 core with a filled 5f14 shell.
The first ionization potential of nobelium was measured to be at most (6.65 ± 0.07) eV in 1974, based on the assumption that the 7s electrons would ionize before the 5f ones; this value has not yet been refined further due to nobelium's scarcity and high radioactivity. The ionic radius of hexacoordinate and octacoordinate No3+ had been preliminarily estimated in 1978 to be around 90 and 102 pm respectively; the ionic radius of No2+ has been experimentally found to be 100 pm to two significant figures. The enthalpy of hydration of No2+ has been calculated as 1486 kJ/mol. Isotopes Fourteen isotopes of nobelium are known, with mass numbers 248–260 and 262; all are radioactive. Additionally, nuclear isomers are known for mass numbers 250, 251, 253, and 254. Of these, the longest-lived isotope is 259No with a half-life of 58 minutes, and the longest-lived isomer is 251mNo with a half-life of 1.7 seconds. However, the still undiscovered isotope 261No is predicted to have an even longer half-life of 3 hours. Additionally, the shorter-lived 255No (half-life 3.1 minutes) is more often used in chemical experimentation because it can be produced in larger quantities from irradiation of californium-249 with carbon-12 ions. After 259No and 255No, the next most stable nobelium isotopes are 253No (half-life 1.62 minutes), 254No (51 seconds), 257No (25 seconds), 256No (2.91 seconds), and 252No (2.57 seconds). All of the remaining nobelium isotopes have half-lives that are less than a second, and the shortest-lived known nobelium isotope (248No) has a half-life of less than 2 microseconds. The isotope 254No is especially interesting theoretically as it is in the middle of a series of prolate nuclei from 231Pa to 279Rg, and the formation of its nuclear isomers (of which two are known) is controlled by proton orbitals such as 2f5/2 which come just above the spherical proton shell; it can be synthesized in the reaction of 208Pb with 48Ca. The half-lives of nobelium isotopes increase smoothly from 250No to 253No. However, a dip appears at 254No, and beyond this the half-lives of even-even nobelium isotopes drop sharply as spontaneous fission becomes the dominant decay mode. For example, the half-life of 256No is almost three seconds, but that of 258No is only 1.2 milliseconds. This shows that at nobelium, the mutual repulsion of protons poses a limit to the region of long-lived nuclei in the actinide series. The even-odd nobelium isotopes mostly continue to have longer half-lives as their mass numbers increase, with a dip in the trend at 257No. Preparation and purification The isotopes of nobelium are mostly produced by bombarding actinide targets (uranium, plutonium, curium, californium, or einsteinium), with the exception of nobelium-262, which is produced as the daughter of lawrencium-262. The most commonly used isotope, 255No, can be produced by bombarding curium-248 or californium-249 with carbon-12; the latter method is more common. Irradiating a 350 μg cm−2 target of californium-249 with three trillion (3 × 1012) 73 MeV carbon-12 ions per second for ten minutes can produce around 1200 nobelium-255 atoms. Once the nobelium-255 is produced, it can be separated out by methods similar to those used to purify the neighboring actinide mendelevium.
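Those production figures imply a reaction cross-section that can be estimated with a back-of-the-envelope calculation. The following sketch (Python; every input is a number quoted above, and it deliberately neglects decay of 255No during the ten-minute irradiation, so it somewhat underestimates the true cross-section) is an order-of-magnitude consistency check, not a measured value:

AREAL_DENSITY_G = 350e-6   # g/cm^2 of californium-249 in the target
MOLAR_MASS_G = 249.0       # g/mol
AVOGADRO = 6.022e23        # atoms per mole
BEAM_RATE = 3e12           # carbon-12 ions per second
DURATION_S = 600.0         # ten minutes
YIELD_ATOMS = 1200         # nobelium-255 atoms produced

target_atoms_per_cm2 = AREAL_DENSITY_G / MOLAR_MASS_G * AVOGADRO
beam_ions = BEAM_RATE * DURATION_S
cross_section_cm2 = YIELD_ATOMS / (beam_ions * target_atoms_per_cm2)

# 1 microbarn = 1e-30 cm^2
print(f"implied cross-section ~ {cross_section_cm2:.1e} cm^2 "
      f"(~{cross_section_cm2 / 1e-30:.1f} microbarn)")  # ~0.8 microbarn

A sub-microbarn cross-section is typical of heavy-ion reactions in this region of the chart of nuclides, which is why yields are counted in atoms rather than grams.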
The recoil momentum of the produced nobelium-255 atoms is used to carry them physically away from the target and onto a thin foil of metal (usually beryllium, aluminium, platinum, or gold) just behind the target in a vacuum: this is usually combined with trapping the nobelium atoms in a gas atmosphere (frequently helium) and carrying them along with a gas jet from a small opening in the reaction chamber. Using a long capillary tube, and including potassium chloride aerosols in the helium gas, the nobelium atoms can be transported over tens of meters. The thin layer of nobelium collected on the foil can then be removed with dilute acid without completely dissolving the foil. The nobelium can then be isolated by exploiting its tendency to form the divalent state, unlike the other trivalent actinides: under typically used elution conditions (bis-(2-ethylhexyl) phosphoric acid (HDEHP) as stationary organic phase and 0.05 M hydrochloric acid as mobile aqueous phase, or using 3 M hydrochloric acid as an eluant from cation-exchange resin columns), nobelium will pass through the column and elute while the other trivalent actinides remain on the column. However, if a direct "catcher" gold foil is used, the process is complicated by the need to separate out the gold using anion-exchange chromatography before isolating the nobelium by elution from chromatographic extraction columns using HDEHP.
Nuclear physics
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions, in addition to the study of other forms of nuclear matter. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons. Discoveries in nuclear physics have led to applications in many fields. These include nuclear power, nuclear weapons, nuclear medicine and magnetic resonance imaging, industrial and agricultural isotopes, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology. Such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars and the origin of the chemical elements. History The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896, made while investigating phosphorescence in uranium salts. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the beginning of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model, in which the atom was a positively charged ball with smaller negatively charged electrons embedded inside it. In the years that followed, radioactivity was extensively investigated, notably by Marie Curie, a Polish physicist whose maiden name was Skłodowska, Pierre Curie, Ernest Rutherford and others. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his "investigations into the disintegration of the elements and the chemistry of radioactive substances". In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons. Rutherford discovers the nucleus In 1906, Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter." Hans Geiger expanded on this work in a communication to the Royal Society with experiments he and Rutherford had done, passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and further greatly expanded work was published in 1910 by Geiger. In 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.
The key experiment was performed during 1909 at the University of Manchester, with the initial results published that year and Rutherford's eventual classical analysis published in May 1911. Ernest Rutherford's assistant, Johannes "Hans" Geiger, and an undergraduate, Ernest Marsden, working under Rutherford's supervision, fired alpha particles (helium-4 nuclei) at a thin film of gold foil. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that, when observed, shocked him: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, with Rutherford's analysis of the data in 1911, led to the Rutherford model of the atom, in which the atom had a very small, very dense nucleus containing most of its mass, and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles) and the nucleus was surrounded by 7 more orbiting electrons. Eddington and stellar nuclear fusion Around 1920, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc2. This was a particularly remarkable development since at that time fusion and thermonuclear energy had not yet been discovered, nor even that stars are largely composed of hydrogen (see metallicity). Studies of nuclear spin The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons each had a spin of 1/2. In the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1/2. Rasetti discovered, however, that nitrogen-14 had a spin of 1. James Chadwick discovers the neutron In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert Becker, and Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, which he called the neutron (following a suggestion from Rutherford about the need for such a particle). In the same year Dmitri Ivanenko suggested that there were no electrons in the nucleus — only protons and neutrons — and that neutrons were spin-1/2 particles, which explained the nuclear mass not due to protons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model each contributed a spin of 1/2 in the same direction, giving a final total spin of 1. With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way.
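That mass-defect calculation is simple enough to reproduce. A minimal sketch (Python; the atomic masses are standard values, and using atomic rather than bare-nucleus masses works because the electron masses cancel between the two sides) for the nitrogen-14 nucleus discussed above:

MASS_H1 = 1.007825       # atomic mass of hydrogen-1, in u
MASS_NEUTRON = 1.008665  # neutron mass, in u
MASS_N14 = 14.003074     # atomic mass of nitrogen-14, in u
U_TO_MEV = 931.494       # energy equivalent of one atomic mass unit

Z, N = 7, 7  # protons and neutrons in nitrogen-14
mass_defect = Z * MASS_H1 + N * MASS_NEUTRON - MASS_N14   # in u
binding_energy = mass_defect * U_TO_MEV                   # in MeV

print(f"total binding energy ~ {binding_energy:.1f} MeV")    # ~104.7 MeV
print(f"per nucleon ~ {binding_energy / (Z + N):.2f} MeV")   # ~7.48 MeV

The roughly 0.11 u "missing" from nitrogen-14 relative to its free constituents is exactly the kind of mass difference referred to in the text.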
When nuclear reactions were measured, these were found to agree with Einstein's calculation of the equivalence of mass and energy to within 1% as of 1934. Proca's equations of the massive vector boson field Alexandru Proca was the first to develop and report the massive vector boson field equations and a theory of the mesonic field of nuclear forces. Proca's equations were known to Wolfgang Pauli, who mentioned them in his Nobel address, and they were also known to Yukawa, Wentzel, Taketani, Sakata, Kemmer, Heitler, and Fröhlich, who appreciated the content of Proca's equations for developing a theory of atomic nuclei in nuclear physics. Yukawa's meson postulated to bind nuclei In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle. With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high-energy photons (gamma decay). The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics, which describes the strong, weak, and electromagnetic forces. Modern nuclear physics A heavy nucleus can contain hundreds of nucleons. This means that with some approximation it can be treated as a classical system, rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy that arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission; a quantitative sketch is given below. Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert Mayer and J. Hans D. Jensen. Nuclei with certain "magic" numbers of neutrons and protons are particularly stable, because their shells are filled. Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons. Ab initio methods try to solve the nuclear many-body problem from the ground up, starting from the nucleons and their interactions. Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy.
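The liquid-drop picture is usually made quantitative through the semi-empirical mass formula, which combines volume, surface-tension, Coulomb-repulsion, asymmetry, and pairing terms. A sketch follows (Python; the coefficients are one commonly quoted textbook fit, and other parameter sets exist):

def binding_energy_mev(A, Z):
    # Semi-empirical (liquid-drop) binding energy in MeV for a nucleus
    # with mass number A and atomic number Z.
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    pairing = 0.0
    if A % 2 == 0:
        # even-even nuclei gain pairing energy; odd-odd nuclei lose it
        pairing = a_p / A**0.5 if Z % 2 == 0 else -a_p / A**0.5
    return (a_v * A                          # volume term
            - a_s * A**(2/3)                 # surface tension
            - a_c * Z * (Z - 1) / A**(1/3)   # Coulomb repulsion of protons
            - a_a * (A - 2*Z)**2 / A         # neutron-proton asymmetry
            + pairing)

# Reproduces the broad trend: binding per nucleon peaks near iron/nickel.
for A, Z in ((14, 7), (56, 26), (238, 92)):
    print(A, Z, round(binding_energy_mev(A, Z) / A, 2), "MeV per nucleon")

With these coefficients the formula gives roughly 7.3, 8.8, and 7.6 MeV per nucleon for nitrogen-14, iron-56, and uranium-238, within a few percent of measured values; the shell model mentioned above accounts for most of the residual differences.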
Nuclei may also have extreme shapes (similar to those of rugby balls or even pears) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark–gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons. Nuclear decay Eighty elements have at least one stable isotope which is never observed to decay, amounting to a total of about 251 stable nuclides. However, thousands of isotopes have been characterized as unstable. These "radioisotopes" decay over time scales ranging from fractions of a second to trillions of years. Plotted on a chart as a function of atomic and neutron numbers, the binding energy of the nuclides forms what is known as the valley of stability. Stable nuclides lie along the bottom of this energy valley, while increasingly unstable nuclides lie up the valley walls, that is, have weaker binding energy. The most stable nuclei fall within certain ranges or balances of composition of neutrons and protons: too few or too many neutrons (in relation to the number of protons) will cause the nucleus to decay. For example, in beta decay, a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is converted by the weak interaction into a proton, an electron and an antineutrino. The element is transmuted to another element, with a different number of protons. In alpha decay, which typically occurs in the heaviest nuclei, the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays (usually beta decay) until a stable element is formed. In gamma decay, a nucleus decays from an excited state into a lower energy state, by emitting a gamma ray. The element is not changed to another element in the process (no nuclear transmutation is involved). Other more exotic decays are possible (see the first main article). For example, in internal conversion decay, the energy from an excited nucleus may eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons but is not beta decay and (unlike beta decay) does not transmute one element to another. Nuclear fusion In nuclear fusion, two low-mass nuclei come into very close contact with each other so that the strong force fuses them. It requires a large amount of energy for the strong or nuclear forces to overcome the electrical repulsion between the nuclei in order to fuse them; therefore nuclear fusion can only take place at very high temperatures or high pressures. When nuclei fuse, a very large amount of energy is released and the combined nucleus assumes a lower energy level. The binding energy per nucleon increases with mass number up to nickel-62. Stars like the Sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway.
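As a worked example of the fusion energetics just described, the net energy of the overall solar reaction 4p → ⁴He + 2e⁺ + 2ν can be estimated from the mass difference. Below is a minimal Python sketch; the masses are rounded reference values, and the assumption that the two positrons subsequently annihilate with ambient electrons is noted in the comments:

```python
# Energy released by the net solar fusion chain 4 p -> He-4 + 2 e+ + 2 nu,
# computed from the mass difference (rounded reference masses, in u).
M_PROTON, M_HE4, M_ELECTRON = 1.007276, 4.001506, 0.000549  # u
U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

q_fusion = (4 * M_PROTON - M_HE4 - 2 * M_ELECTRON) * U_TO_MEV
q_annihilation = 4 * 0.511  # assume the two positrons annihilate (2 x 1.022 MeV)
print(f"Q (fusion step):       {q_fusion:.2f} MeV")
print(f"Q (with annihilation): {q_fusion + q_annihilation:.2f} MeV")  # ~26.7 MeV
```

The total, about 26.7 MeV per helium nucleus formed, is the standard figure for the proton–proton chain (part of it is carried off by the neutrinos).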
A frontier in current research at various institutions, for example the Joint European Torus (JET) and ITER, is the development of an economically viable method of using energy from a controlled fusion reaction. Nuclear fusion is the origin of the energy (including in the form of light and other electromagnetic radiation) produced by the core of all stars including our own Sun. Nuclear fission Nuclear fission is the reverse process to fusion. For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones. The process of alpha decay is in essence a special type of spontaneous nuclear fission. It is a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely. From several of the heaviest nuclei whose fission produces free neutrons, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a chain reaction. Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions. The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission-type nuclear bombs, such as those detonated in Hiroshima and Nagasaki, Japan, at the end of World War II. Heavy nuclei such as uranium and thorium may also undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay. For a neutron-initiated chain reaction to occur, there must be a critical mass of the relevant isotope present in a certain space under certain conditions. The conditions for the smallest critical mass require the conservation of the emitted neutrons and also their slowing or moderation so that there is a greater cross-section or probability of them initiating another fission. In two regions of Oklo, Gabon, Africa, natural nuclear fission reactors were active over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain reactions. Production of "heavy" elements According to the theory, as the Universe cooled after the Big Bang it eventually became possible for common subatomic particles as we know them (neutrons, protons and electrons) to exist. The most common particles created in the Big Bang which are still easily observable to us today were protons and electrons (in equal numbers). The protons would eventually form hydrogen atoms. Almost all the neutrons created in the Big Bang were absorbed into helium-4 in the first three minutes after the Big Bang, and this helium accounts for most of the helium in the universe today (see Big Bang nucleosynthesis). Some relatively small quantities of elements beyond helium (lithium, beryllium, and perhaps some boron) were created in the Big Bang, as the protons and neutrons collided with each other, but all of the "heavier elements" (carbon, element number 6, and elements of greater atomic number) that we see today, were created inside stars during a series of fusion stages, such as the proton–proton chain, the CNO cycle and the triple-alpha process. 
Progressively heavier elements are created during the evolution of a star. Energy is only released in fusion processes involving smaller atoms than iron because the binding energy per nucleon peaks around iron (56 nucleons). Since the creation of heavier nuclei by fusion requires energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either the slow neutron-capture process (the s-process) or the rapid neutron-capture process (the r-process). The s-process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r-process is thought to occur in supernova explosions, which provide the necessary conditions of high temperature, high neutron flux and ejected matter. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers).
Physical sciences
Nuclear physics
null
21289
https://en.wikipedia.org/wiki/Nautical%20mile
Nautical mile
A nautical mile is a unit of length used in air, marine, and space navigation, and for the definition of territorial waters. Historically, it was defined as the meridian arc length corresponding to one minute (1/60 of a degree) of latitude at the equator, so that Earth's polar circumference is very near to 21,600 nautical miles (that is 60 minutes × 360 degrees). Today the international nautical mile is defined as exactly 1,852 metres. The derived unit of speed is the knot, one nautical mile per hour. Unit symbol There is no single internationally agreed symbol, with several symbols in use. NM is used by the International Civil Aviation Organization. nmi is used by the Institute of Electrical and Electronics Engineers and the United States Government Publishing Office. M is used as the abbreviation for the nautical mile by the International Hydrographic Organization. nm is a non-standard abbreviation used in many maritime applications and texts, including U.S. Government Coast Pilots and Sailing Directions. It conflicts with the SI symbol for nanometre. History The word mile is from the Latin phrase for a thousand paces: mille passus. Navigation at sea was done by eye until around 1500 when navigational instruments were developed and cartographers began using a coordinate system with parallels of latitude and meridians of longitude. The earliest reference of 60 miles to a degree is a map by Nicolaus Germanus in a 1482 edition of Ptolemy's Geography indicating that one degree of longitude at the Equator contains 60 miles. An earlier manuscript map by Nicolaus Germanus in a previous edition of Geography states that "one degree longitude and latitude under the equator forms 500 stadia, which make 62 miles". Whether a correction or convenience, the reason for the change from 62 to 60 miles to a degree is not explained. Eventually, the ratio of 60 miles to a degree appeared in English in a 1555 translation of Pietro Martire d'Anghiera's Decades: "[Ptolemy] assigned likewise to every degree three score miles." By the late 16th century English geographers and navigators knew that the ratio of distances at sea to degrees was constant along any great circle (such as the equator, or any meridian), assuming that Earth was a sphere. In 1574, William Bourne stated in A Regiment for the Sea the "rule to raise a degree" practised by navigators: "But as I take it, we in England should allowe 60 myles to one degrée: that is, after 3 miles to one of our Englishe leagues, wherefore 20 of oure English leagues shoulde answere to one degrée." Likewise, Robert Hues wrote in 1594 that the distance along a great circle was 60 miles per degree. However, these referred to the old English mile of 5000 feet and league of 15,000 feet, relying upon Ptolemy's underestimate of the Earth's circumference. In the early seventeenth century, English geographers started to acknowledge the discrepancy between the angular measurement of a degree of latitude and the linear measurement of miles. In 1624 Edmund Gunter suggested 352,000 feet to a degree (5866 feet per arcminute). In 1633, William Oughtred suggested 349,800 feet to a degree (5830 feet per arcminute). Both Gunter and Oughtred put forward the notion of dividing a degree into 100 parts, but their proposal was generally ignored by navigators. The ratio of 60 miles, or 20 leagues, to a degree of latitude remained fixed while the length of the mile was revised with better estimates of the earth's circumference.
In 1637, Robert Norwood proposed a new measurement of 6120 feet for an arcminute of latitude, which was within 44 feet of the currently accepted value for a nautical mile. Since the Earth is not a perfect sphere but is an oblate spheroid with slightly flattened poles, a minute of latitude is not constant, but about 1,862 metres at the poles and 1,843 metres at the Equator. France and other metric countries state that in principle a nautical mile is an arcminute of a meridian at a latitude of 45°, but that is a modern justification for a more mundane calculation that was developed a century earlier. By the mid-19th century, France had defined a nautical mile via the original 1791 definition of the metre, one ten-millionth of a quarter meridian. So 10,000,000 m ÷ (90 × 60) = 1,851.85 m became the metric length for a nautical mile. France made it legal for the French Navy in 1906, and many metric countries voted to sanction it for international use at the 1929 International Hydrographic Conference. Both the United States and the United Kingdom used an average arcminute—specifically, a minute of arc of a great circle of a sphere having the same surface area as the Clarke 1866 ellipsoid. The authalic (equal area) radius of the Clarke 1866 ellipsoid is about 6,371 kilometres. The resulting arcminute is about 6,080.2 feet (1,853.2 metres). The United States chose five significant digits for its nautical mile, 6,080.2 feet, whereas the United Kingdom chose four significant digits for its Admiralty mile, 6,080 feet. In 1929 the international nautical mile was defined by the First International Extraordinary Hydrographic Conference in Monaco as exactly 1,852 metres (which is about 6,076.1 feet). The United States did not adopt the international nautical mile until 1954. Britain adopted it in 1970, but legal references to the obsolete unit are now converted to 1,853 metres (about 6,079.4 feet). Similar definitions The metre was originally defined as 1/10,000,000 of the length of the meridian arc from the North Pole to the equator; thus one kilometre of distance corresponds to one centigrad (also known as a centesimal arc minute, 1% of a centesimal degree) of latitude. The Earth's circumference is therefore approximately 40,000 km. The equatorial circumference is slightly longer than the polar circumference; the measurement based on this, one minute of arc along the equator (about 1,855.3 metres), is known as the geographical mile. Using the definition of a degree of latitude on Mars, a Martian nautical mile equals approximately 983 metres. This is potentially useful for celestial navigation on a human mission to the planet, both as a shorthand and a quick way to roughly determine the location.
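The variation of the arcminute of latitude noted above (about 1,843 m at the Equator to about 1,862 m at the poles) can be reproduced from an ellipsoid's meridian radius of curvature. The Python sketch below uses the modern WGS84 ellipsoid as an assumption; the article's historical figures come from other ellipsoids such as Clarke 1866:

```python
import math

# Length of one arcminute of latitude on the WGS84 ellipsoid
# (a modern reference ellipsoid, used here as an assumption).
A = 6378137.0          # semi-major axis, metres
F = 1 / 298.257223563  # flattening
E2 = F * (2 - F)       # first eccentricity squared

def arcminute_of_latitude(lat_deg: float) -> float:
    """Meridian radius of curvature times one minute of arc, in metres."""
    s = math.sin(math.radians(lat_deg))
    m = A * (1 - E2) / (1 - E2 * s * s) ** 1.5
    return m * math.pi / (180 * 60)

for lat in (0, 45, 90):
    print(f"latitude {lat:2d} deg: 1' of latitude = "
          f"{arcminute_of_latitude(lat):7.1f} m")
# ~1842.9 m at the equator, ~1852.2 m at 45 degrees, ~1861.6 m at the poles
```

The value near 45° comes out close to 1,852 m, which is the "in principle" justification mentioned above for the international nautical mile.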
Physical sciences
Length and distance
null
21291
https://en.wikipedia.org/wiki/Nail%20%28fastener%29
Nail (fastener)
In woodworking and construction, a nail is a small object made of metal (or wood, called a tree nail or "trunnel") which is used as a fastener, as a peg to hang something, or sometimes as a decoration. Generally, nails have a sharp point on one end and a flattened head on the other, but headless nails are available. Nails are made in a great variety of forms for specialized purposes. The most common is a wire nail. Other types of nails include pins, tacks, brads, spikes, and cleats. Nails are typically driven into the workpiece by a hammer or nail gun. A nail holds materials together by friction in the axial direction and shear strength laterally. The point of the nail is also sometimes bent over or clinched after driving to prevent pulling out. History The history of the nail is divided roughly into three distinct periods: Hand-wrought (forged) nail (pre-history until 19th century) Cut nail (roughly 1800 to 1914) Wire nail (roughly 1860 to the present) From the late 1700s to the mid-1900s, nail prices fell by a factor of 10; since then nail prices have increased slightly, reflecting in part an upturn in materials prices and a shift toward specialty nails. Hand wrought In hand-working of nails, a smith works an approximately conical iron pin tapering to a point. This is then inserted into a nail-header (also known as a nail-plate), essentially a plate of iron with a small hole in it. The broad end of the pin is slightly wider than the hole of the nail-header: the smith fits the pin into the hole of the nail-header and then hammers the broad end of the pin. Unable to advance through the hole, the broad end is flattened against the nail-header to create a nail-head. In at least some metalworking traditions, nail-headers might have been identical to draw-plates (a plate bored with tapering holes of different sizes through which wire can be drawn to extrude it to increasingly fine proportions). The Bible provides a number of references to nails, including the story in Judges of Jael the wife of Heber, who drives a nail (or tent-peg) into the temple of a sleeping Canaanite commander; the provision of iron for nails by King David for what would become Solomon's Temple; and in connection with the crucifixion of Jesus Christ. The Romans made extensive use of nails. The Roman army, for example, left behind seven tons of nails when it evacuated the fortress of Inchtuthil in Perthshire in Scotland in 86 to 87 CE. The term "penny", as it refers to nails, probably originated in medieval England to describe the price of a hundred nails. Nails themselves were sufficiently valuable and standardized to be used as an informal medium of exchange. Until around 1800 artisans known as nailers or nailors made nails by hand – note the surname Naylor. (Workmen called slitters cut up iron bars to a suitable size for nailers to work on. From the late 16th century, manual slitters disappeared with the rise of the slitting mill, which cut bars of iron into rods with an even cross-section, saving much manual effort.) At the time of the American Revolution, England was the largest manufacturer of nails in the world. Nails were expensive and difficult to obtain in the American colonies, so that abandoned houses were sometimes deliberately burned down to allow recovery of used nails from the ashes. This became such a problem in Virginia that a law was created to stop people from burning their houses when they moved. 
Families often had small nail-manufacturing setups in their homes; during bad weather and at night, the entire family might work at making nails for their own use and for barter. Thomas Jefferson wrote in a letter: "In our private pursuits it is a great advantage that every honest employment is deemed honorable. I am myself a nail maker." The growth of the trade in the American colonies was theoretically held back by the prohibition of new slitting mills in America by the Iron Act 1750, though there is no evidence that the Act was actually enforced. The production of wrought-iron nails continued well into the 19th century, but ultimately was reduced to nails for purposes for which the softer cut nails were unsuitable, including horseshoe nails. Cut The slitting mill, introduced to England in 1590, simplified the production of nail rods, but the first real efforts to mechanise the nail-making process itself occurred between 1790 and 1820, initially in England and the United States, when various machines were invented to automate and speed up the process of making nails from bars of wrought iron. Also, in Sweden in the early 1700s, Christopher Polhem produced a nail-cutting machine as part of his automated factory. These nails were known as cut nails because they were produced by cutting iron bars into rods; they were also known as square nails because of their roughly rectangular cross section. The cut-nail process was patented in the U.S. by Jacob Perkins in 1795 and in England by Joseph Dyer, who set up machinery in Birmingham. The process was designed to cut nails from sheets of iron, while making sure that the fibres of the iron ran down the nails. The Birmingham industry expanded in the following decades, and reached its greatest extent in the 1860s, after which it declined due to competition from wire nails, but continued until the outbreak of World War I. Cut nails were one of the important factors in the increase in balloon framing beginning in the 1830s and thus the decline of timber framing with wooden joints. Though still used for historical renovations, and for heavy-duty applications, such as attaching boards to masonry walls, cut nails are much less common today than wire nails. Wire Wire nails are formed from wire. Usually coils of wire are drawn through a series of dies to reach a specific diameter, then cut into short rods that are then formed into nails. The nail tip is usually cut by a blade; the head is formed by reshaping the other end of the rod under high pressure. Other dies are used to cut grooves and ridges. Wire nails were also known as "French nails" for their country of origin. Belgian wire nails began to compete in England in 1863. Joseph Henry Nettlefold was making wire nails at Smethwick by 1875. Over the following decades, the nail-making process was almost completely automated. Eventually the industry had machines capable of quickly producing huge numbers of inexpensive nails with little or no human intervention. With the introduction of cheap wire nails, the use of wrought iron for nail making quickly declined, as more slowly did the production of cut nails. In the United States, in 1892 more steel-wire nails were produced than cut nails. In 1913, 90% of manufactured nails were wire nails. Nails went from being rare and precious to being a cheap mass-produced commodity.
Today almost all nails are manufactured from wire, but the term "wire nail" has come to refer to smaller nails, often available in a wider, more precise range of gauges than is typical for larger common and finish nails. Today, many nails are made using the modern rotary principle nail machine, which allows wire feeding, wire cutting and nail head forming to take place in one continuous process of rotating movements. Materials Nails were formerly made of bronze or wrought iron and were crafted by blacksmiths and nailors. These craftspeople used a heated square iron rod that they forged before hammering the sides to form a point. After reheating and cutting off, the blacksmith or nailor inserted the hot nail into an opening and hammered it. Later, new ways of making nails used machines to shear nails from an iron bar, wiggling the bar sideways between strokes to produce a tapered shank. For example, Type A cut nails were sheared from an iron bar by a guillotine-type cutter using early machinery. This method was slightly altered until the 1820s, when new heads on the nails' ends were pounded via a separate mechanical nail-heading machine. In the 1810s, iron bars were flipped over after each stroke while the cutter set was at an angle. Each nail was then sheared off at a taper, and the machine automatically gripped each nail while forming its head. Type B nails were created this way. In 1886, 10 percent of the nails that were made in the United States were of the soft steel wire variety and by 1892, steel wire nails overtook iron cut nails as the main type of nails that were being produced. In 1913, wire nails were 90 percent of all nails that were produced. Today's nails are typically made of steel, often dipped or coated to prevent corrosion in harsh conditions or to improve adhesion. Ordinary nails for wood are usually of a soft, low-carbon or "mild" steel (about 0.1% carbon, the rest iron and perhaps a trace of silicon or manganese). Nails for masonry applications are tempered and have a higher carbon content. Types Types of nail include: Aluminum nails – Made of aluminum in many shapes and sizes for use with aluminum architectural metals Box nail – like a common nail but with a thinner shank and head Brads are small, thin, tapered nails with a lip or projection to one side rather than a full head, or a small finish nail Floor brad ('stigs') – flat, tapered and angular, for use in fixing floor boards Oval brad – Ovals utilize the principles of fracture mechanics to allow nailing without splitting. Highly anisotropic materials like regular wood (as opposed to wood composites) can easily be wedged apart. Use of an oval perpendicular to the wood's grain cuts the wood fibers rather than wedges them apart, and thus allows fastening without splitting, even close to edges Panel pins Tacks or Tintacks are short, sharp pointed nails often used with carpet, fabric and paper. Normally cut from sheet steel (as opposed to wire), the tack is used in upholstery, shoe making and saddle manufacture. The triangular shape of the nail's cross section gives greater grip and less tearing of materials such as cloth and leather compared to a wire nail. Brass tack – brass tacks are commonly used where corrosion may be an issue, such as furniture where contact with human skin salts will cause corrosion on steel nails Canoe tack – A clinching (or clenching) nail. The nail point is tapered so that it can be turned back on itself using a clinching iron.
It then bites back into the wood from the side opposite the nail's head, forming a rivet-like fastening. Clench-nails are used in building clinker boats. Shoe tack – A clinching nail (see above) for clinching leather and sometimes wood, formerly used for handmade shoes. Carpet tack Upholstery tacks – used to attach coverings to furniture Thumbtacks (or "push-pins" or "drawing-pins") are lightweight pins used to secure paper or cardboard. Casing nails – have a head that is smoothly tapered, in comparison to the "stepped" head of a finish nail. When used to install casing around windows or doors, they allow the wood to be pried off later with minimal damage when repairs are needed, and without the need to dent the face of the casing in order to grab and extract the nail. Once the casing has been removed, the nails can be extracted from the inner frame with any of the usual nail pullers Clout nail – a roofing nail Coil nail – nails designed for use in a pneumatic nail gun assembled in coils Common nail – smooth shank, wire nail with a heavy, flat head. The typical nail for framing Convex head (nipple head, springhead) roofing nail – an umbrella shaped head with a rubber gasket for fastening metal roofing, usually with a ring shank Copper nail – nails made of copper for use with copper flashing or slate shingles etc. D-head (clipped head) nail – a common or box nail with part of the head removed for some pneumatic nail guns Double-ended nail – a rare type of nail with points on both ends and the "head" in the middle for joining boards together. See this patent. Similar to a dowel nail but with a head on the shank. Double-headed (duplex, formwork, shutter, scaffold) nail – used for temporary nailing; nails can be easily pulled for later disassembly Dowel nail – a double pointed nail without a "head" on the shank, a piece of round steel sharpened on both ends Drywall (plasterboard) nail – short, hardened, ring-shank nail with a very thin head Fiber cement nail – a nail for installing fiber cement siding Finish nail (bullet head nail, lost-head nail) – A wire nail with a small head intended to be minimally visible or driven below the wood surface and the hole filled to be invisible Gang nail – a nail plate Hardboard pin – a small nail for fixing hardboard or thin plywood, often with a square shank Horseshoe nail – nails used to hold horseshoes on hoofs Joist hanger nail – special nails rated for use with joist hangers and similar brackets. Sometimes called "Teco nails" ( × .148 shank nails used in metal connectors such as hurricane ties) Lost-head nail – see finish nail Masonry (concrete) – lengthwise fluted, hardened nail for use in concrete Oval wire nail – nails with an oval shank Panel pin Gutter spike – Large long nail intended to hold wooden gutters and some metal gutters in place at the bottom edge of a roof Ring (annular, improved, jagged) shank nail – nails that have ridges circling the shank to provide extra resistance to pulling out Roofing (clout) nail – generally a short nail with a broad head used with asphalt shingles, felt paper or the like Screw (helical) nail – a nail with a spiral shank - uses including flooring and assembling pallets Shake (shingle) nail – small headed nails to use for nailing shakes and shingles Sprig – a small nail with either a headless, tapered shank or a square shank with a head on one side. Commonly used by glaziers to fix a glass pane into a wooden frame.
Square nail – a cut nail T-head nail – shaped like the letter T Veneer pin Wire (French) nail – a general term for a nail with a round shank. These are sometimes called French nails from their country of invention Wire-weld collated nail – nails held together with slender wires for use in nail guns Sizes Most countries, except the United States, use a metric system for describing nail sizes. A 50 × 3.0 indicates a nail 50 mm long (not including the head) and 3 mm in diameter. Lengths are rounded to the nearest millimetre. For example, finishing nail sizes typically available from German suppliers are those of DIN 1151 (Drahtstift mit Senkkopf, Stahl: steel wire nail with countersunk head). United States penny sizes In the United States, the length of a nail is designated by its penny size. Terminology Box: a wire nail with a head; box nails have a smaller shank than common nails of the same size Bright: no surface coating; not recommended for weather exposure or acidic or treated lumber Casing: a wire nail with a slightly larger head than finish nails; often used for flooring CC or Coated: "cement coated"; nail coated with adhesive, also known as cement or glue, for greater holding power; also resin- or vinyl-coated; coating melts from friction when driven to help lubricate then adheres when cool; color varies by manufacturer (tan and pink are common) Common: a common construction wire nail with a disk-shaped head that is typically 3 to 4 times the diameter of the shank: common nails have larger shanks than box nails of the same size Cut: machine-made square nails. Now used for masonry and historical reproduction or restoration Duplex: a common nail with a second head, allowing for easy extraction; often used for temporary work, such as concrete forms or wood scaffolding; sometimes called a "scaffold nail" Drywall: a specialty blued-steel nail with a thin broad head used to fasten gypsum wallboard to wooden framing members Finish: a wire nail that has a head only slightly larger than the shank; can be easily concealed by countersinking the nail slightly below the finished surface with a nail-set and filling the resulting void with a filler (putty, spackle, caulk, etc.)
Forged: handmade nails (usually square), hot-forged by a blacksmith or nailor, often used in historical reproduction or restoration, commonly sold as collectors' items Galvanized: treated for resistance to corrosion and/or weather exposure Electrogalvanized: provides a smooth finish with some corrosion resistance Hot-dip galvanized: provides a rough finish that deposits more zinc than other methods, resulting in very high corrosion resistance that is suitable for some acidic and treated lumber; Mechanically galvanized: deposits more zinc than electrogalvanizing for increased corrosion resistance Head: round flat metal piece formed at the top of the nail; for increased holding power Helix: the nail has a square shank that has been twisted, making it very difficult to pull out; often used in decking so they are usually galvanized; sometimes called decking nails Length: distance from the bottom of the head to the point of a nail Phosphate-coated: a dark grey to black finish providing a surface that binds well with paint and joint compound and minimal corrosion resistance Point: sharpened end opposite the "head" for greater ease in driving Pole barn: long shank (2½ in to 8 in, 6 cm to 20 cm), ring shank (see below), hardened nails; usually oil quenched or galvanized (see above); commonly used in the construction of wood framed, metal buildings (pole barns) Ring shank: small directional rings on the shank to prevent the nail from working back out once driven in; common in drywall, flooring, and pole barn nails Shank: the body of the nail, between the head and the point; may be smooth, or may have rings or spirals for greater holding power Sinker: these are the most common nails used in framing today; same thin diameter as a box nail; cement coated (see above); the bottom of the head is tapered like a wedge or funnel and the top of the head is grid embossed to keep the hammer strike from sliding off Spike: a large nail; usually over 4 in (100 mm) long Spiral: a twisted wire nail; spiral nails have smaller shanks than common nails of the same size In art and religion Nails have been used in art, such as the Nail Men—a form of fundraising common in Germany and Austria during World War I. Before the 1850s bocce and pétanque boules were wooden balls, sometimes partially reinforced with hand-forged nails. When cheap, plentiful machine-made nails became available, manufacturers began to produce the boule cloutée—a wooden core studded with nails to create an all-metal surface. Nails of different metals and colors (steel, brass, and copper) were used to create a wide variety of designs and patterns. Some of the old boules cloutées are genuine works of art and valued collectors' items. Once nails became cheap and widely available, they were often used in folk art and outsider art as a method of decorating a surface with metallic studs. Another common artistic use is the construction of sculpture from welded or brazed nails. Nails were sometimes inscribed with incantations or signs intended for religious or mystical benefit, used at shrines or on the doors of houses for protection.
Technology
Components_2
null
21435
https://en.wikipedia.org/wiki/Nerve
Nerve
A nerve is an enclosed, cable-like bundle of nerve fibers (called axons) in the peripheral nervous system. Nerves have historically been considered the basic units of the peripheral nervous system. A nerve provides a common pathway for the electrochemical nerve impulses called action potentials that are transmitted along each of the axons to peripheral organs or, in the case of sensory nerves, from the periphery back to the central nervous system. Each axon is an extension of an individual neuron, along with other supportive cells such as some Schwann cells that coat the axons in myelin. Each axon is surrounded by a layer of connective tissue called the endoneurium. The axons are bundled together into groups called fascicles, and each fascicle is wrapped in a layer of connective tissue called the perineurium. The entire nerve is wrapped in a layer of connective tissue called the epineurium. Nerves are further classified as sensory, motor, or mixed nerves. In the central nervous system, the analogous structures are known as nerve tracts. Structure Each nerve is covered on the outside by a dense sheath of connective tissue, the epineurium. Beneath this is a layer of fat cells, the perineurium, which forms a complete sleeve around a bundle of axons. Perineurial septae extend into the nerve and subdivide it into several bundles of fibres. Surrounding each such fibre is the endoneurium. This forms an unbroken tube from the surface of the spinal cord to the level where the axon synapses with its muscle fibres, or ends in sensory receptors. The endoneurium consists of an inner sleeve of material called the glycocalyx and an outer delicate meshwork of collagen fibres. Nerves are bundled and often travel along with blood vessels, since the neurons of a nerve have fairly high energy requirements. Within the endoneurium, the individual nerve fibres are surrounded by a low-protein liquid called endoneurial fluid. This acts in a similar way to the cerebrospinal fluid in the central nervous system and constitutes a blood-nerve barrier similar to the blood–brain barrier. Molecules are thereby prevented from crossing out of the blood into the endoneurial fluid. During the development of nerve edema from nerve irritation (or injury), the amount of endoneurial fluid may increase at the site of irritation. This increase in fluid can be visualized using magnetic resonance (MR) neurography, and thus MR neurography can identify nerve irritation and/or injury. Categories Nerves are categorized into three groups based on the direction that signals are conducted: Afferent nerves conduct sensory information from sensory neurons to the central nervous system, for example from the mechanoreceptors in skin. Bundles of afferent fibers are known as sensory nerves. Efferent nerves conduct signals from the central nervous system along motor neurons to their target muscles and glands. Bundles of these fibres are known as efferent nerves. Mixed nerves contain both afferent and efferent axons, and thus conduct both incoming sensory information and outgoing muscle commands in the same bundle. All spinal nerves are mixed nerves, and some of the cranial nerves are also mixed nerves. Nerves can be categorized into two groups based on where they connect to the central nervous system: Spinal nerves innervate (distribute to/stimulate) much of the body, and connect through the vertebral column to the spinal cord and thus to the central nervous system.
They are given letter-number designations according to the vertebra through which they connect to the spinal column. Cranial nerves innervate parts of the head, and connect directly to the brain (especially to the brainstem). They are typically assigned Roman numerals from 1 to 12, although cranial nerve zero is sometimes included. In addition, cranial nerves have descriptive names. Terminology Specific terms are used to describe nerves and their actions. A nerve that supplies information to the brain from an area of the body, or controls an action of the body, is said to innervate that section of the body or organ. Other terms relate to whether the nerve affects the same side ("ipsilateral") or opposite side ("contralateral") of the body, relative to the part of the brain that supplies it. Development Nerve growth normally ends in adolescence but can be re-stimulated with a molecular mechanism known as "notch signaling". If the axons of a neuron are damaged, as long as the cell body of the neuron is not damaged, the axons can regenerate and remake the synaptic connections with neurons with the help of guidepost cells. This is also referred to as neuroregeneration. The nerve begins the process by destroying the nerve distal to the site of injury, allowing Schwann cells, basal lamina, and the neurilemma near the injury to begin producing a regeneration tube. Nerve growth factors are produced, causing many nerve sprouts to bud. When one of the growth processes finds the regeneration tube, it begins to grow rapidly towards its original destination, guided the entire time by the regeneration tube. Nerve regeneration is very slow and can take up to several months to complete. While this process does repair some nerves, there will still be some functional deficit as the repairs are not perfect. Function A nerve conveys information in the form of electrochemical impulses (as nerve impulses known as action potentials) carried by the individual neurons that make up the nerve. These impulses are extremely fast, with some myelinated neurons conducting at speeds up to 120 m/s. The impulses travel from one neuron to another by crossing a synapse, where the message is converted from electrical to chemical and then back to electrical. Nervous system The nervous system is the part of an animal that coordinates its actions by transmitting signals to and from different parts of its body. In vertebrates it consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain, including the brainstem, and spinal cord. The PNS consists mainly of nerves, which are enclosed bundles of the long fibers or axons, that connect the CNS to all remaining body parts. The PNS is divided into three separate subsystems, the somatic, autonomic, and enteric nervous systems. Somatic nerves mediate voluntary movement. The autonomic nervous system is further subdivided into the sympathetic and the parasympathetic nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Both autonomic and enteric nervous systems function involuntarily. Nerves that exit from the cranium are called cranial nerves while those exiting from the spinal cord are called spinal nerves.
Clinical significance Neurologists usually diagnose disorders of nerves by a physical examination, including the testing of reflexes, walking and other directed movements, muscle weakness, proprioception, and the sense of touch. This initial exam can be followed with tests such as nerve conduction study, electromyography (EMG), and computed tomography (CT). Nerves can be damaged by physical injury as well as conditions like carpal tunnel syndrome (CTS) and repetitive strain injury. Autoimmune diseases such as Guillain–Barré syndrome, neurodegenerative diseases, polyneuropathy, infection, neuritis, diabetes, or failure of the blood vessels surrounding the nerve all cause nerve damage, which can vary in severity. A pinched nerve occurs when pressure is placed on a nerve, usually from swelling due to an injury, or pregnancy and can result in pain, weakness, numbness or paralysis, an example being CTS. Symptoms can be felt in areas far from the actual site of damage, a phenomenon called referred pain. Referred pain can happen when the damage causes altered signalling to other areas. Cancer can spread by invading the spaces around nerves. This is particularly common in head and neck cancer, prostate cancer and colorectal cancer. Multiple sclerosis is a disease associated with extensive nerve damage. It occurs when the macrophages of an individual's own immune system damage the myelin sheaths that insulate the axon of the nerve. Other animals A neuron is called identified if it has properties that distinguish it from every other neuron in the same animal—properties such as location, neurotransmitter, gene expression pattern, and connectivity—and if every individual organism belonging to the same species has exactly one neuron with the same set of properties. In vertebrate nervous systems, very few neurons are "identified" in this sense. Researchers believe humans have none—but in simpler nervous systems, some or all neurons may be thus unique. In vertebrates, the best known identified neurons are the gigantic Mauthner cells of fish. Every fish has two Mauthner cells, located in the bottom part of the brainstem, one on the left side and one on the right. Each Mauthner cell has an axon that crosses over, innervating (stimulating) neurons at the same brain level and then travelling down through the spinal cord, making numerous connections as it goes. The synapses generated by a Mauthner cell are so powerful that a single action potential gives rise to a major behavioral response: within milliseconds the fish curves its body into a C-shape, then straightens, thereby propelling itself rapidly forward. Functionally, this is a fast escape response, triggered most easily by a strong sound wave or pressure wave impinging on the lateral line organ of the fish. Mauthner cells are not the only identified neurons in fish—there are about 20 more types, including pairs of "Mauthner cell analogs" in each spinal segmental nucleus. Although a Mauthner cell is capable of bringing about an escape response all by itself, in the context of ordinary behavior other types of cells usually contribute to shaping the amplitude and direction of the response. Mauthner cells have been described as command neurons. A command neuron is a special type of identified neuron, defined as a neuron that is capable of driving a specific behavior all by itself.
Such neurons appear most commonly in the fast escape systems of various species—the squid giant axon and squid giant synapse, used for pioneering experiments in neurophysiology because of their enormous size, both participate in the fast escape circuit of the squid. The concept of a command neuron has, however, become controversial, because of studies showing that some neurons that initially appeared to fit the description were really only capable of evoking a response in a limited set of circumstances. In organisms of radial symmetry, nerve nets serve as the nervous system. There is no brain or centralised head region, and instead there are interconnected neurons spread out in nerve nets. These are found in Cnidaria, Ctenophora and Echinodermata. History Herophilos (335–280 BC) described the functions of the optic nerve in sight and the oculomotor nerve in eye movement. Analysis of the nerves in the cranium enabled him to differentiate between blood vessels and nerves (Greek νεῦρον, "string, plant fiber, nerve"). Modern research has not confirmed William Cullen's 1785 hypothesis associating mental states with physical nerves, although popular or lay medicine may still invoke "nerves" in diagnosing or blaming any sort of psychological worry or hesitancy, as in the common traditional phrases "my poor nerves" and "nervous breakdown".
Biology and health sciences
Nervous system
Biology
21445
https://en.wikipedia.org/wiki/November
November
November is the eleventh and penultimate month of the year in the Julian and Gregorian calendars. Its length is 30 days. November was the ninth month of the calendar of Romulus. November retained its name (from the Latin novem meaning "nine") when January and February were added to the Roman calendar. November is a month of late spring in the Southern Hemisphere and late autumn in the Northern Hemisphere. Therefore, November in the Southern Hemisphere is the seasonal equivalent of May in the Northern Hemisphere and vice versa. In Ancient Rome, Ludi Plebeii was held from November 4–17, Epulum Jovis was held on November 13 and Brumalia celebrations began on November 24. These dates do not correspond to the modern Gregorian calendar. November was referred to as Blōtmōnaþ by the Anglo-Saxons. Brumaire and Frimaire were the months on which November fell in the French Republican calendar. Astronomy November meteor showers include the Andromedids, which occur from September 25 to December 6 and generally peak around November 9–14, the Leonids, which occur from November 15–20, the Alpha Monocerotids, which occur from November 15–25 with the peak on November 21–22, the Northern Taurids, which occur from October 20 to December 10, the Southern Taurids, which occur from September 10 to November 20, and the Phoenicids, which occur from November 29 to December 9 with the peak occurring on December 5–6. The Orionids, which occur in late October, sometimes last into November. Astrology The Western zodiac signs for November are Scorpio (October 23 – November 21) and Sagittarius (November 22 – December 21). Symbols November's birthstones are the topaz (particularly yellow), which symbolizes friendship, and the citrine. Its birth flower is the chrysanthemum. Observances This list does not necessarily imply either official status or general observance. Non-Gregorian (All Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.) List of observances set by the Bahá'í calendar List of observances set by the Chinese calendar List of observances set by the Hebrew calendar List of observances set by the Islamic calendar List of observances set by the Solar Hijri calendar Month-long In Catholic tradition, November is the Month for prayer for the Holy Souls in Purgatory. A November List or November Dead List is sometimes maintained for this purpose. Academic Writing Month Annual Family Reunion Planning Month Lung Cancer Awareness Month Movember National Novel Writing Month No Nut November Pancreatic Cancer Awareness Month (Global) Pulmonary Hypertension Awareness Month Stomach Cancer Awareness Month Dinovember United States Native American Heritage Month COPD Awareness Month Epilepsy Awareness Month Military Family Month National Adoption Month National Alzheimer's Disease Awareness Month National Blog Posting Month National Critical Infrastructure Protection Month National Entrepreneurship Month National Family Caregivers Month National Bone Marrow Donor Awareness Month National Diabetes Month National Homeless Youth Month National Hospice Month National Impaired Driving Prevention Month National Pomegranate Month Prematurity Awareness Month Movable Mitzvah Day International: November 15
Technology
Months
null
21462
https://en.wikipedia.org/wiki/Normal%20distribution
Normal distribution
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)). The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ² is the variance. The standard deviation of the distribution is σ (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable—whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal. Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies. For instance, any linear combination of a fixed collection of independent normal deviates is a normal deviate. Many results and methods, such as propagation of uncertainty and least squares parameter fitting, can be derived analytically in explicit form when the relevant variables are normally distributed. A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). (For other names, see Naming.) The univariate probability distribution is generalized for vectors in the multivariate normal distribution and for matrices in the matrix normal distribution. Definitions Standard normal distribution The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when μ = 0 and σ = 1, and it is described by this probability density function (or density): φ(z) = e^(−z²/2)/√(2π). The variable z has a mean of 0 and a variance and standard deviation of 1. The density has its peak at z = 0 and inflection points at z = +1 and z = −1. Although the density above is most commonly known as the standard normal, a few authors have used that term to describe other versions of the normal distribution. Carl Friedrich Gauss, for example, once defined the standard normal as e^(−z²)/√π, which has a variance of 1/2, and Stephen Stigler once defined the standard normal as e^(−πz²), which has a simple functional form and a variance of 1/(2π). General normal distribution Every normal distribution is a version of the standard normal distribution, whose domain has been stretched by a factor σ (the standard deviation) and then translated by μ (the mean value): f(x) = (1/σ) φ((x−μ)/σ). The probability density must be scaled by 1/σ so that the integral is still 1. If Z is a standard normal deviate, then X = σZ + μ will have a normal distribution with expected value μ and standard deviation σ. This is equivalent to saying that the standard normal distribution can be scaled/stretched by a factor of σ and shifted by μ to yield a different normal distribution, called N(μ, σ²). Conversely, if X is a normal deviate with parameters μ and σ², then this distribution can be re-scaled and shifted via the formula Z = (X − μ)/σ to convert it to the standard normal distribution. This variate is also called the standardized form of X.
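The standardization just described amounts to two one-line formulas. Below is a minimal Python sketch; the helper name normal_pdf is ours, and the numbers are arbitrary illustrative choices:

```python
import math

# Standardizing a normal variate, following the relations above:
# if Z is standard normal then X = mu + sigma*Z ~ N(mu, sigma^2),
# and conversely Z = (X - mu)/sigma is standard normal.
def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

mu, sigma, x = 100.0, 15.0, 130.0
z = (x - mu) / sigma                # standardized form of x
print(f"z = {z}")                   # 2.0
# The scaling by 1/sigma keeps the density integrating to 1:
print(normal_pdf(x, mu, sigma))     # equals normal_pdf(z) / sigma
print(normal_pdf(z) / sigma)
```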
Notation The probability density of the standard Gaussian distribution (standard normal distribution, with zero mean and unit variance) is often denoted with the Greek letter φ (phi). The alternative form of the Greek letter phi, ϕ, is also used quite often. The normal distribution is often referred to as N(μ, σ²) or 𝒩(μ, σ²). Thus when a random variable X is normally distributed with mean μ and standard deviation σ, one may write X ~ N(μ, σ²). Alternative parameterizations Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the standard deviation σ or the variance σ². The precision is normally defined as the reciprocal of the variance, τ = 1/σ². The formula for the distribution then becomes f(x) = √(τ/(2π)) e^(−τ(x−μ)²/2). This choice is claimed to have advantages in numerical computations when σ is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution. Alternatively, the reciprocal of the standard deviation τ′ = 1/σ might be defined as the precision, in which case the expression of the normal distribution becomes f(x) = (τ′/√(2π)) e^(−τ′²(x−μ)²/2). According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution. Normal distributions form an exponential family with natural parameters θ₁ = μ/σ² and θ₂ = −1/(2σ²), and natural statistics x and x². The dual expectation parameters for the normal distribution are η₁ = μ and η₂ = μ² + σ². Cumulative distribution function The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter Φ, is the integral Φ(x) = (1/√(2π)) ∫ from −∞ to x of e^(−t²/2) dt. Error function The related error function erf(x) gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range [−x, x]. That is: erf(x) = (2/√π) ∫ from 0 to x of e^(−t²) dt. These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more. The two functions are closely related, namely Φ(x) = ½[1 + erf(x/√2)]. For a generic normal distribution with density f, mean μ and variance σ², the cumulative distribution function is F(x) = Φ((x−μ)/σ) = ½[1 + erf((x−μ)/(σ√2))]. The complement of the standard normal cumulative distribution function, Q(x) = 1 − Φ(x), is often called the Q-function, especially in engineering texts. It gives the probability that the value of a standard normal random variable X will exceed x: P(X > x). Other definitions of the Q-function, all of which are simple transformations of Φ, are also used occasionally. The graph of the standard normal cumulative distribution function Φ has 2-fold rotational symmetry around the point (0, 1/2); that is, Φ(−x) = 1 − Φ(x). Its antiderivative (indefinite integral) can be expressed as follows: ∫Φ(x) dx = xΦ(x) + φ(x) + C. The cumulative distribution function of the standard normal distribution can be expanded by integration by parts into a series: Φ(x) = ½ + φ(x)·[x + x³/3 + x⁵/(3·5) + ⋯ + x^(2n+1)/(2n+1)!! + ⋯], where !! denotes the double factorial. An asymptotic expansion of the cumulative distribution function for large x can also be derived using integration by parts. For more, see Error function#Asymptotic expansion.
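The relation Φ(x) = ½[1 + erf(x/√2)] makes the normal CDF easy to evaluate wherever an error function is available, as with erf in Python's math module. A brief sketch (the function names are ours):

```python
import math

# Standard normal CDF via the error function,
# Phi(x) = (1 + erf(x / sqrt(2))) / 2, and the Q-function as its complement.
def phi_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def q_function(x: float) -> float:
    return 1.0 - phi_cdf(x)

for x in (0.0, 1.0, 1.96):
    print(f"Phi({x}) = {phi_cdf(x):.6f}, Q({x}) = {q_function(x):.6f}")
# Phi(1.96) is about 0.975, matching the 5% two-sided tail quoted below.
```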
A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation: Φ(x) ≈ ½ + (1/√(2π)) Σ from k = 0 to n of (−1)ᵏ x^(2k+1) / (2ᵏ k! (2k+1)). Recursive computation with Taylor series expansion The recursive nature of the family of derivatives may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution. Using the Taylor series and Newton's method for the inverse function An application for the above Taylor series expansion is to use Newton's method to reverse the computation. That is, if we have a value for the cumulative distribution function, Φ(x), but do not know the x needed to obtain the Φ(x), we can use Newton's method to find x, and use the Taylor series expansion above to minimize the number of computations. Newton's method is ideal to solve this problem because the first derivative of Φ(x), which is an integral of the normal standard distribution, is the normal standard density, and is readily available to use in the Newton's method solution. To solve, select a known approximate solution, x₀, to the desired Φ(x). x₀ may be a value from a distribution table, or an intelligent estimate followed by a computation of Φ(x₀) using any desired means. Use this value of x₀ and the Taylor series expansion above to minimize computations. Repeat the following process until the difference between the computed Φ(xₙ) and the desired value p is below a chosen acceptably small error, such as 10⁻⁵, 10⁻¹⁵, etc.: xₙ₊₁ = xₙ − (Φ(xₙ) − p)/φ(xₙ), where φ is the standard normal density (a code sketch of this iteration follows at the end of this passage). When the repeated computations converge to an error below the chosen acceptably small value, xₙ will be the value needed to obtain a Φ(xₙ) of the desired value p. Standard deviation and coverage About 68% of values drawn from a normal distribution are within one standard deviation σ from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule. More precisely, the probability that a normal deviate lies in the range between μ − nσ and μ + nσ is given by F(μ + nσ) − F(μ − nσ) = Φ(n) − Φ(−n) = erf(n/√2). For large n, one can use the approximation 1 − p ≈ e^(−n²/2)/(n√(π/2)). Quantile function The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function: Φ⁻¹(p) = √2 erf⁻¹(2p − 1), for p ∈ (0, 1). For a normal random variable with mean μ and variance σ², the quantile function is F⁻¹(p) = μ + σΦ⁻¹(p) = μ + σ√2 erf⁻¹(2p − 1). The quantile Φ⁻¹(p) of the standard normal distribution is commonly denoted as z_p. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable X will exceed μ + z_p σ with probability 1 − p, and will lie outside the interval μ ± z_p σ with probability 2(1 − p). In particular, the quantile z₀.₉₇₅ is 1.96; therefore a normal random variable will lie outside the interval μ ± 1.96σ in only 5% of cases. Published tables give the quantile z_p such that X will lie in the range μ ± z_p σ with a specified probability p; these values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions. (Such tables often show √2 erf⁻¹(p) = Φ⁻¹((p + 1)/2) rather than Φ⁻¹(p) as defined above.) For small p, the quantile function has the useful asymptotic expansion Φ⁻¹(p) = −√(ln(1/p²) − ln ln(1/p²) − ln(2π)) + o(1). Properties The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than the mean and variance) are zero.
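Here is the Newton iteration for the quantile referred to above, as a minimal Python sketch. It uses the closed form for Φ via erf rather than the Taylor series (an implementation convenience, not the method the text prescribes), and the function names are ours:

```python
import math

# Newton's method for the standard normal quantile, as described above:
# solve Phi(x) = p, using the fact that Phi'(x) is the standard normal
# density phi(x).
def phi_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_pdf(x: float) -> float:
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def normal_quantile(p: float, x0: float = 0.0, tol: float = 1e-12) -> float:
    x = x0
    for _ in range(100):            # Newton iterations
        err = phi_cdf(x) - p
        if abs(err) < tol:
            break
        x -= err / phi_pdf(x)       # x_{n+1} = x_n - (Phi(x_n) - p)/phi(x_n)
    return x

print(normal_quantile(0.975))       # ~1.959964 (the familiar 1.96)
print(normal_quantile(0.5))         # 0.0
```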
It is also the continuous distribution with the maximum entropy for a specified mean and variance. Geary has shown, assuming that the mean and variance are finite, that the normal distribution is the only distribution where the mean and variance calculated from a set of independent draws are independent of each other. The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution. The value of the normal density is practically zero when the value $x$ lies more than a few standard deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of the total distribution). Therefore, it may not be an appropriate model when one expects a significant fraction of outliers—values that lie many standard deviations away from the mean—and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied. The Gaussian distribution belongs to the family of stable distributions which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian which is a limiting case, all stable distributions have heavy tails and infinite variance. It is one of the few distributions that are stable and that have probability density functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy distribution. Symmetries and derivatives The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma^2 > 0$) has the following properties: It is symmetric around the point $x = \mu$, which is at the same time the mode, the median and the mean of the distribution. It is unimodal: its first derivative is positive for $x < \mu$, negative for $x > \mu$, and zero only at $x = \mu$. The area bounded by the curve and the $x$-axis is unity (i.e. equal to one). Its first derivative is $f'(x) = -\frac{x - \mu}{\sigma^2} f(x)$. Its second derivative is $f''(x) = \frac{(x - \mu)^2 - \sigma^2}{\sigma^4} f(x)$. Its density has two inflection points (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma$. Its density is log-concave. Its density is infinitely differentiable, indeed supersmooth of order 2. Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties: Its first derivative is $\varphi'(x) = -x\varphi(x)$. Its second derivative is $\varphi''(x) = (x^2 - 1)\varphi(x)$. More generally, its $n$th derivative is $\varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x)\varphi(x)$, where $\operatorname{He}_n(x)$ is the $n$th (probabilist) Hermite polynomial. The probability that a normally distributed variable $X$ with known $\mu$ and $\sigma$ is in a particular set can be calculated by using the fact that the fraction $Z = (X - \mu)/\sigma$ has a standard normal distribution. Moments The plain and absolute moments of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order $p$. If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1.
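The Hermite-polynomial form of the derivatives can be checked symbolically. A minimal sketch, assuming the third-party sympy package is available; the probabilists' polynomials are generated by their standard recurrence rather than taken from a library:

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.exp(-x**2 / 2) / sp.sqrt(2 * sp.pi)  # standard normal density

def hermite_prob(n):
    """Probabilists' Hermite polynomial via He_{n+1} = x*He_n - n*He_{n-1}."""
    if n == 0:
        return sp.Integer(1)
    h_prev, h = sp.Integer(1), x
    for k in range(1, n):
        h_prev, h = h, sp.expand(x * h - k * h_prev)
    return h

# Check phi^(n)(x) == (-1)^n * He_n(x) * phi(x) for the first few n.
for n in range(5):
    lhs = sp.diff(phi, x, n)
    rhs = (-1)**n * hermite_prob(n) * phi
    assert sp.simplify(lhs - rhs) == 0, n
print("derivative identity verified for n = 0..4")
```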
For any non-negative integer $p$, the plain central moments are: $\operatorname{E}\left[(X - \mu)^p\right] = 0$ if $p$ is odd, and $\operatorname{E}\left[(X - \mu)^p\right] = \sigma^p (p - 1)!!$ if $p$ is even. Here $n!!$ denotes the double factorial, that is, the product of all numbers from $n$ to 1 that have the same parity as $n$. The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$, $\operatorname{E}\left[|X - \mu|^p\right] = \sigma^p (p - 1)!! \cdot \sqrt{2/\pi}$ if $p$ is odd, and $\sigma^p (p - 1)!!$ if $p$ is even; equivalently, $\operatorname{E}\left[|X - \mu|^p\right] = \sigma^p \cdot \frac{2^{p/2}\,\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}$. The last formula is valid also for any non-integer $p > -1$. When the mean $\mu = 0$, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions ${}_1F_1$ and $U$. These expressions remain valid even if $p$ is not an integer.
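The double-factorial and gamma-function forms of the moments above should agree for even orders. A minimal sketch checking this with the standard library (function names are illustrative):

```python
import math

def double_factorial(n: int) -> int:
    """n!! = n * (n-2) * (n-4) * ... down to 1 or 2; (-1)!! = 0!! = 1 by convention."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def central_moment(p: int, sigma: float) -> float:
    """Plain central moment E[(X - mu)^p] of N(mu, sigma^2):
    zero for odd p, sigma^p * (p-1)!! for even p."""
    return 0.0 if p % 2 else sigma**p * double_factorial(p - 1)

def central_abs_moment(p: float, sigma: float) -> float:
    """Central absolute moment E|X - mu|^p via the gamma-function form
    sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi), valid for any p > -1."""
    return sigma**p * 2**(p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

if __name__ == "__main__":
    # For even p the two notions coincide: p = 4, sigma = 2 gives 3 * sigma^4 = 48.
    print(central_moment(4, 2.0), central_abs_moment(4, 2.0))
```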
Mathematics
Statistics and probability
null
21474
https://en.wikipedia.org/wiki/Natural%20number
Natural number
In mathematics, the natural numbers are the numbers 0, 1, 2, 3, and so on, possibly excluding 0. Some start counting with 0, defining the natural numbers as the non-negative integers 0, 1, 2, 3, ..., while others start with 1, defining them as the positive integers 1, 2, 3, .... Some authors acknowledge both definitions whenever convenient. Sometimes, the whole numbers are the natural numbers plus zero. In other cases, the whole numbers refer to all of the integers, including negative integers. The counting numbers are another term for the natural numbers, particularly in primary school education, and are ambiguous as well, although they typically start at 1. The natural numbers are used for counting things, like "there are six coins on the table", in which case they are called cardinal numbers. They are also used to put things in order, like "this is the third largest city in the country", which are called ordinal numbers. Natural numbers are also used as labels, like jersey numbers on a sports team, where they serve as nominal numbers and do not have mathematical properties. The natural numbers form a set, commonly symbolized as a bold $\mathbf{N}$ or blackboard bold $\mathbb{N}$. Many other number sets are built from the natural numbers. For example, the integers are made by adding 0 and negative numbers. The rational numbers add fractions, and the real numbers add infinite decimals. Complex numbers add the square root of $-1$. This chain of extensions canonically embeds the natural numbers in the other number systems. Natural numbers are studied in different areas of math. Number theory looks at things like how numbers divide evenly (divisibility), or how prime numbers are spread out. Combinatorics studies counting and arranging numbered objects, such as partitions and enumerations. History Ancient roots The most primitive method of representing a natural number is to use one's fingers, as in finger counting. Putting down a tally mark for each object is another primitive method. Later, a set of objects could be tested for equality, excess or shortage—by striking out a mark and removing an object from the set. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BCE and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one—its value being determined from context. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation (within other numbers) dates back as early as 700 BCE by the Babylonians, who omitted such a digit when it would have been the last symbol in the number. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BCE, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628 CE. However, 0 had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525 CE, without being denoted by a numeral.
Standard Roman numerals do not have a symbol for 0; instead, nulla (or the genitive form nullae) from , the Latin word for "none", was employed to denote a 0 value. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all. Euclid, for example, defined a unit first and then a number as a multitude of units, thus by his definition, a unit is not a number and there are no unique numbers (e.g., any two units from indefinitely many units is a 2). However, in the definition of perfect number which comes shortly afterward, Euclid treats 1 as a number like any other. Independent studies on numbers also occurred at around the same time in India, China, and Mesoamerica. Emergence as a term Nicolas Chuquet used the term progression naturelle (natural progression) in 1484. The earliest known use of "natural number" as a complete English phrase is in 1763. The 1771 Encyclopaedia Britannica defines natural numbers in the logarithm article. Starting at 0 or 1 has long been a matter of definition. In 1727, Bernard Le Bovier de Fontenelle wrote that his notions of distance and element led to defining the natural numbers as including or excluding 0. In 1889, Giuseppe Peano used N for the positive integers and started at 1, but he later changed to using N0 and N1. Historically, most definitions have excluded 0, but many mathematicians such as George A. Wentworth, Bertrand Russell, Nicolas Bourbaki, Paul Halmos, Stephen Cole Kleene, and John Horton Conway have preferred to include 0. Mathematicians have noted tendencies in which definition is used, such as algebra texts including 0, number theory and analysis texts excluding 0, logic and set theory texts including 0, dictionaries excluding 0, school books (through high-school level) excluding 0, and upper-division college-level books including 0. There are exceptions to each of these tendencies and as of 2023 no formal survey has been conducted. Arguments raised include division by zero and the size of the empty set. Computer languages often start from zero when enumerating items like loop counters and string- or array-elements. Including 0 began to rise in popularity in the 1960s. The ISO 31-11 standard included 0 in the natural numbers in its first edition in 1978 and this has continued through its present edition as ISO 80000-2. Formal construction In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act. Leopold Kronecker summarized his belief as "God made the integers, all else is the work of man". The constructivists saw a need to improve upon the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural—but a consequence of definitions. Later, two classes of such formal definitions emerged, using set theory and Peano's axioms respectively. Later still, they were shown to be equivalent in most practical applications. Set-theoretical definitions of natural numbers were initiated by Frege. 
He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, including Russell's paradox. To avoid such paradoxes, the formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements. In 1881, Charles Sanders Peirce provided the first axiomatization of natural-number arithmetic. In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of Dedekind's axioms in his book The principles of arithmetic presented by a new method (Arithmetices principia, nova methodo exposita). This approach is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Theorems that can be proved in ZFC but cannot be proved using the Peano Axioms include Goodstein's theorem. Notation The set of all natural numbers is standardly denoted $\mathbb{N}$ or $\mathbf{N}$. Older texts have occasionally employed J as the symbol for this set. Since natural numbers may contain 0 or not, it may be important to know which version is referred to. This is often specified by the context, but may also be done by using a subscript or a superscript in the notation, such as: naturals without zero, $\mathbb{N}^* = \{1, 2, 3, \ldots\}$; naturals with zero, $\mathbb{N}_0 = \{0, 1, 2, 3, \ldots\}$. Alternatively, since the natural numbers naturally form a subset of the integers (often denoted $\mathbb{Z}$), they may be referred to as the positive, or the non-negative, integers, respectively. To be unambiguous about whether 0 is included or not, sometimes a superscript "$*$" or "$+$" is added in the former case, and a subscript (or superscript) "0" is added in the latter case: $\mathbb{Z}^+ = \{1, 2, 3, \ldots\}$ and $\mathbb{Z}_0^+ = \{0, 1, 2, \ldots\}$. Properties This section uses the convention $\mathbb{N} = \mathbb{N}_0 = \{0, 1, 2, \ldots\}$. Addition Given the set $\mathbb{N}$ of natural numbers and the successor function $S$ sending each natural number to the next one, one can define addition of natural numbers recursively by setting $a + 0 = a$ and $a + S(b) = S(a + b)$ for all $a$, $b$. Thus, $a + 1 = a + S(0) = S(a + 0) = S(a)$, $a + 2 = a + S(1) = S(a + 1)$, and so on. The algebraic structure $(\mathbb{N}, +)$ is a commutative monoid with identity element 0. It is a free monoid on one generator. This commutative monoid satisfies the cancellation property, so it can be embedded in a group. The smallest group containing the natural numbers is the integers. If 1 is defined as $S(0)$, then $b + 1 = b + S(0) = S(b + 0) = S(b)$. That is, $b + 1$ is simply the successor of $b$. Multiplication Analogously, given that addition has been defined, a multiplication operator $\times$ can be defined via $a \times 0 = 0$ and $a \times S(b) = (a \times b) + a$. This turns $(\mathbb{N}^*, \times)$ into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Relationship between addition and multiplication Addition and multiplication are compatible, which is expressed in the distribution law: $a \times (b + c) = (a \times b) + (a \times c)$. These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that $\mathbb{N}$ is not closed under subtraction (that is, subtracting one natural from another does not always result in another natural), means that $\mathbb{N}$ is not a ring; instead it is a semiring (also known as a rig).
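The recursive definitions of addition and multiplication above translate directly into code. A minimal sketch, encoding naturals as Python ints, with b - 1 standing in for the predecessor of S(b) and + 1 for the successor function:

```python
def add(a: int, b: int) -> int:
    """Peano addition: a + 0 = a, and a + S(b) = S(a + b)."""
    return a if b == 0 else add(a, b - 1) + 1

def mul(a: int, b: int) -> int:
    """Peano multiplication: a * 0 = 0, and a * S(b) = (a * b) + a."""
    return 0 if b == 0 else add(mul(a, b - 1), a)

assert add(2, 3) == 5
assert mul(4, 3) == 12
```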
If the natural numbers are taken as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that they begin with $a + 1 = S(a)$ and $a \times 1 = a$. Furthermore, $(\mathbb{N}^*, +)$ has no identity element. Order In this section, juxtaposed variables such as $ab$ indicate the product $a \times b$, and the standard order of operations is assumed. A total order on the natural numbers is defined by letting $a \leq b$ if and only if there exists another natural number $c$ where $a + c = b$. This order is compatible with the arithmetical operations in the following sense: if $a$, $b$ and $c$ are natural numbers and $a \leq b$, then $a + c \leq b + c$ and $ac \leq bc$. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers, this is denoted as $\omega$ (omega). Division In this section, juxtaposed variables such as $ab$ indicate the product $a \times b$, and the standard order of operations is assumed. While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder or Euclidean division is available as a substitute: for any two natural numbers $a$ and $b$, with $b \neq 0$, there are natural numbers $q$ and $r$ such that $a = bq + r$ and $r < b$. The number $q$ is called the quotient and $r$ is called the remainder of the division of $a$ by $b$. The numbers $q$ and $r$ are uniquely determined by $a$ and $b$, as illustrated in the sketch below. This Euclidean division is key to the several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory. Algebraic properties satisfied by the natural numbers The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties: Closure under addition and multiplication: for all natural numbers $a$ and $b$, both $a + b$ and $a \times b$ are natural numbers. Associativity: for all natural numbers $a$, $b$, and $c$, $(a + b) + c = a + (b + c)$ and $(a \times b) \times c = a \times (b \times c)$. Commutativity: for all natural numbers $a$ and $b$, $a + b = b + a$ and $a \times b = b \times a$. Existence of identity elements: for every natural number $a$, $a + 0 = a$ and $a \times 1 = a$. If the natural numbers are taken as "excluding 0", and "starting at 1", then for every natural number $a$, $a \times 1 = a$. However, the "existence of additive identity element" property is not satisfied. Distributivity of multiplication over addition: for all natural numbers $a$, $b$, and $c$, $a \times (b + c) = (a \times b) + (a \times c)$. No nonzero zero divisors: if $a$ and $b$ are natural numbers such that $a \times b = 0$, then $a = 0$ or $b = 0$ (or both).
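A minimal sketch of Euclidean division by repeated subtraction, mirroring the existence-and-uniqueness statement above (euclidean_division is an illustrative name, not a library function):

```python
def euclidean_division(a: int, b: int) -> tuple[int, int]:
    """Return the unique (q, r) with a = b*q + r and 0 <= r < b, by repeated subtraction."""
    if b == 0:
        raise ValueError("divisor must be nonzero")
    q, r = 0, a
    while r >= b:
        r -= b
        q += 1
    return q, r

q, r = euclidean_division(17, 5)
assert (q, r) == (3, 2) and 17 == 5 * q + r
```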
An ordinal number may also be used to describe the notion of "size" for a well-ordered set, in a sense different from cardinality: if there is an order isomorphism (more than a bijection) between two well-ordered sets, they have the same ordinal number. The first ordinal number that is not a natural number is expressed as $\omega$; this is also the ordinal number of the set of natural numbers itself. The least ordinal of cardinality $\aleph_0$ (that is, the initial ordinal of $\aleph_0$) is $\omega$, but many well-ordered sets with cardinal number $\aleph_0$ have an ordinal number greater than $\omega$. For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence. A countable non-standard model of arithmetic satisfying the Peano Arithmetic (that is, the first-order Peano axioms) was developed by Skolem in 1933. The hypernatural numbers are an uncountable model that can be constructed from the ordinary natural numbers via the ultrapower construction. Other generalizations are discussed in the article on numbers. Georges Reeb used to claim provocatively that "The naïve integers don't fill up $\mathbb{N}$". Formal definitions There are two standard methods for formally defining natural numbers. The first one, named for Giuseppe Peano, consists of an autonomous axiomatic theory called Peano arithmetic, based on few axioms called Peano axioms. The second definition is based on set theory. It defines the natural numbers as specific sets. More precisely, each natural number $n$ is defined as an explicitly defined set, whose elements allow counting the elements of other sets, in the sense that the sentence "a set $S$ has $n$ elements" means that there exists a one-to-one correspondence between the two sets $n$ and $S$. The sets used to define natural numbers satisfy Peano axioms. It follows that every theorem that can be stated and proved in Peano arithmetic can also be proved in set theory. However, the two definitions are not equivalent, as there are theorems that can be stated in terms of Peano arithmetic and proved in set theory, which are not provable inside Peano arithmetic. A probable example is Fermat's Last Theorem. The definition of the natural numbers as sets satisfying Peano axioms provides a model of Peano arithmetic inside set theory. An important consequence is that, if set theory is consistent (as it is usually guessed), then Peano arithmetic is consistent. In other words, if a contradiction could be proved in Peano arithmetic, then set theory would be contradictory, and every theorem of set theory would be both true and false. Peano axioms The five Peano axioms are the following: 0 is a natural number. Every natural number has a successor which is also a natural number. 0 is not the successor of any natural number. If the successor of $x$ equals the successor of $y$, then $x$ equals $y$. The axiom of induction: If a statement is true of 0, and if the truth of that statement for a number implies its truth for the successor of that number, then the statement is true for every natural number. These are not the original axioms published by Peano, but are named in his honor. Some forms of the Peano axioms have 1 in place of 0. In ordinary arithmetic, the successor of $x$ is $x + 1$. Set-theoretic definition Intuitively, the natural number $n$ is the common property of all sets that have $n$ elements.
So, it seems natural to define $n$ as an equivalence class under the relation "can be made in one-to-one correspondence". This does not work in all set theories, as such an equivalence class would not be a set (because of Russell's paradox). The standard solution is to define a particular set with $n$ elements that will be called the natural number $n$. The following definition was first published by John von Neumann, although Levy attributes the idea to unpublished work of Zermelo in 1916. As this definition extends to infinite sets as a definition of ordinal number, the sets considered below are sometimes called von Neumann ordinals. The definition proceeds as follows: Call $0 = \{\,\}$, the empty set. Define the successor $S(a)$ of any set $a$ by $S(a) = a \cup \{a\}$. By the axiom of infinity, there exist sets which contain 0 and are closed under the successor function. Such sets are said to be inductive. The intersection of all inductive sets is still an inductive set. This intersection is the set of the natural numbers. It follows that the natural numbers are defined iteratively as follows: $0 = \{\,\}$, $1 = \{0\} = \{\{\,\}\}$, $2 = \{0, 1\} = \{\{\,\}, \{\{\,\}\}\}$, $3 = \{0, 1, 2\}$, etc. It can be checked that the natural numbers satisfy the Peano axioms. With this definition, given a natural number $n$, the sentence "a set $S$ has $n$ elements" can be formally defined as "there exists a bijection from $n$ to $S$." This formalizes the operation of counting the elements of $S$. Also, $m \leq n$ if and only if $m$ is a subset of $n$. In other words, the set inclusion defines the usual total order on the natural numbers. This order is a well-order. It follows from the definition that each natural number is equal to the set of all natural numbers less than it. This definition can be extended to the von Neumann definition of ordinals for defining all ordinal numbers, including the infinite ones: "each ordinal is the well-ordered set of all smaller ordinals." If one does not accept the axiom of infinity, the natural numbers may not form a set. Nevertheless, the natural numbers can still be individually defined as above, and they still satisfy the Peano axioms. There are other set theoretical constructions. In particular, Ernst Zermelo provided a construction that is nowadays only of historical interest, and is sometimes referred to as the Zermelo ordinals. It consists in defining 0 as the empty set, and $S(a) = \{a\}$. With this definition each nonzero natural number is a singleton set. So, the property of the natural numbers to represent cardinalities is not directly accessible; only the ordinal property (being the $n$th element of a sequence) is immediate. Unlike von Neumann's construction, the Zermelo ordinals do not extend to infinite ordinals.
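The von Neumann construction above can be mirrored directly with Python frozensets, since they are hashable and therefore nestable. A minimal sketch (successor and von_neumann are illustrative names):

```python
def successor(a: frozenset) -> frozenset:
    """Von Neumann successor: S(a) = a U {a}."""
    return a | frozenset({a})

def von_neumann(n: int) -> frozenset:
    """The finite von Neumann ordinal n, built iteratively from the empty set."""
    a = frozenset()  # 0 = {}
    for _ in range(n):
        a = successor(a)
    return a

two, three = von_neumann(2), von_neumann(3)
assert len(three) == 3   # "a set has n elements": n itself has n elements
assert two < three       # the subset relation gives the usual order 2 <= 3
assert two in three      # each natural equals the set of all smaller naturals
```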
Mathematics
Counting and numbers
null
21476
https://en.wikipedia.org/wiki/Natural%20logarithm
Natural logarithm
The natural logarithm of a number is its logarithm to the base of the mathematical constant $e$, which is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of $x$ is generally written as $\ln x$, $\log_e x$, or sometimes, if the base $e$ is implicit, simply $\log x$. Parentheses are sometimes added for clarity, giving $\ln(x)$, $\log_e(x)$, or $\log(x)$. This is done particularly when the argument to the logarithm is not a single symbol, so as to prevent ambiguity. The natural logarithm of $x$ is the power to which $e$ would have to be raised to equal $x$. For example, $\ln 7.5$ is 2.0149..., because $e^{2.0149...} = 7.5$. The natural logarithm of $e$ itself, $\ln e$, is 1, because $e^1 = e$, while the natural logarithm of 1 is 0, since $e^0 = 1$. The natural logarithm can be defined for any positive real number $a$ as the area under the curve $y = 1/x$ from 1 to $a$ (with the area being negative when $0 < a < 1$). The simplicity of this definition, which is matched in many other formulas involving the natural logarithm, leads to the term "natural". The definition of the natural logarithm can then be extended to give logarithm values for negative numbers and for all non-zero complex numbers, although this leads to a multi-valued function: see complex logarithm for more. The natural logarithm function, if considered as a real-valued function of a positive real variable, is the inverse function of the exponential function, leading to the identities: $e^{\ln x} = x$ (for $x > 0$) and $\ln e^x = x$. Like all logarithms, the natural logarithm maps multiplication of positive numbers into addition: $\ln(x \cdot y) = \ln x + \ln y$. Logarithms can be defined for any positive base other than 1, not only $e$. However, logarithms in other bases differ only by a constant multiplier from the natural logarithm, and can be defined in terms of the latter, $\log_b x = \ln x / \ln b$. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity. For example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and scientific disciplines, and are used to solve problems involving compound interest. History The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa before 1649. Their work involved quadrature of the hyperbola with equation $xy = 1$, by determination of the area of hyperbolic sectors. Their solution generated the requisite "hyperbolic logarithm" function, which had the properties now associated with the natural logarithm. An early mention of the natural logarithm was by Nicholas Mercator in his work Logarithmotechnia, published in 1668, although the mathematics teacher John Speidell had already compiled a table of what in fact were effectively natural logarithms in 1619. It has been said that Speidell's logarithms were to the base $e$, but this is not entirely true due to complications with the values being expressed as integers. Notational conventions The notations $\ln x$ and $\log_e x$ both refer unambiguously to the natural logarithm of $x$, and $\log x$ without an explicit base may also refer to the natural logarithm. This usage is common in mathematics, along with some scientific contexts as well as in many programming languages. In some other contexts such as chemistry, however, $\log x$ can be used to denote the common (base 10) logarithm. It may also refer to the binary (base 2) logarithm in the context of computer science, particularly in the context of time complexity. Generally, the notation for the logarithm to base $b$ of a number $x$ is $\log_b(x)$; the natural logarithm of $x$ is thus $\log_e(x)$. Definitions The natural logarithm can be defined in several equivalent ways.
Inverse of exponential The most general definition is as the inverse function of $e^x$, so that $e^{\ln x} = x$. Because $e^x$ is positive and invertible for any real input $x$, this definition of $\ln x$ is well defined for any positive $x$. Integral definition The natural logarithm of a positive, real number $a$ may be defined as the area under the graph of the hyperbola with equation $y = 1/x$ between $x = 1$ and $x = a$. This is the integral $\ln a = \int_1^a \frac{1}{x}\, dx$. If $a$ is in $(0, 1)$, then the region has negative area, and the logarithm is negative. This function is a logarithm because it satisfies the fundamental multiplicative property of a logarithm: $\ln(ab) = \ln a + \ln b$. This can be demonstrated by splitting the integral that defines $\ln(ab)$ into two parts, and then making the variable substitution $x = at$ (so $dx = a\, dt$) in the second part, as follows: $\ln(ab) = \int_1^{ab} \frac{1}{x}\, dx = \int_1^{a} \frac{1}{x}\, dx + \int_a^{ab} \frac{1}{x}\, dx = \int_1^{a} \frac{1}{x}\, dx + \int_1^{b} \frac{1}{t}\, dt = \ln a + \ln b$. In elementary terms, this is simply scaling by $1/a$ in the horizontal direction and by $a$ in the vertical direction. Area does not change under this transformation, but the region between $a$ and $ab$ is reconfigured. Because the function $a/(ax)$ is equal to the function $1/x$, the resulting area is precisely $\ln b$. The number $e$ can then be defined to be the unique real number $a$ such that $\ln a = 1$. Properties The natural logarithm has several important mathematical properties, described below. Derivative The derivative of the natural logarithm as a real-valued function on the positive reals is given by $\frac{d}{dx} \ln x = \frac{1}{x}$. How to establish this derivative of the natural logarithm depends on how it is defined firsthand. If the natural logarithm is defined as the integral $\ln x = \int_1^x \frac{1}{t}\, dt$, then the derivative immediately follows from the first part of the fundamental theorem of calculus. On the other hand, if the natural logarithm is defined as the inverse of the (natural) exponential function, then the derivative (for $x > 0$) can be found by using the properties of the logarithm and a definition of the exponential function. From the definition of the number $e$, the exponential function can be defined as $e^x = \lim_{n \to \infty} (1 + x/n)^n$. The derivative can then be found from first principles. Also, we have $\frac{d}{dx} \ln(ax) = \frac{d}{dx}(\ln a + \ln x) = \frac{1}{x}$, so, unlike its inverse function $e^{ax}$, a constant in the function doesn't alter the differential. Series Since the natural logarithm is undefined at 0, $\ln x$ itself does not have a Maclaurin series, unlike many other elementary functions. Instead, one looks for Taylor expansions around other points. For example, if $|x - 1| \leq 1$ and $x \neq 0$, then $\ln x = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}(x - 1)^k = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots$. This is the Taylor series for $\ln x$ around 1. A change of variables yields the Mercator series: $\ln(1 + x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} x^k = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$, valid for $|x| \leq 1$ and $x \neq -1$. Leonhard Euler, disregarding this restriction, nevertheless applied this series to $x = -1$ to show that the harmonic series equals the natural logarithm of $\frac{1}{1-1}$; that is, the logarithm of infinity. Nowadays, more formally, one can prove that the harmonic series truncated at $N$ is close to the logarithm of $N$, when $N$ is large, with the difference converging to the Euler–Mascheroni constant. Taylor polynomials for $\ln(1 + x)$ around 0 converge to the function only in the region $-1 < x \leq 1$; outside this region, the higher-degree Taylor polynomials devolve to worse approximations for the function. A useful special case for positive integers $n$, taking $x = \frac{1}{n}$, is: $\ln \frac{n+1}{n} = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k\, n^k}$. Combining the expansions of $\ln(1 + x)$ and $-\ln(1 - x)$ gives the inverse hyperbolic tangent series $\ln \frac{1+x}{1-x} = 2 \operatorname{artanh} x = 2 \sum_{k=0}^{\infty} \frac{x^{2k+1}}{2k+1}$; taking $x = \frac{1}{2n+1}$ for positive integers $n$, we get: $\ln \frac{n+1}{n} = 2 \sum_{k=0}^{\infty} \frac{1}{(2k+1)(2n+1)^{2k+1}}$. This is, by far, the fastest converging of the series described here. The natural logarithm can also be expressed as an infinite product. The natural logarithm in integration The natural logarithm allows simple integration of functions of the form $g'(x)/g(x)$: an antiderivative of $g'(x)/g(x)$ is given by $\ln|g(x)|$.
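The inverse hyperbolic tangent series above converges for every positive argument, and quickly near 1. A minimal Python sketch (ln_artanh_series and the term count are illustrative choices):

```python
import math

def ln_artanh_series(x: float, terms: int = 50) -> float:
    """ln(x) = 2 * artanh((x-1)/(x+1)) = 2 * sum_{k>=0} y^(2k+1) / (2k+1),
    where y = (x - 1) / (x + 1). Converges for all x > 0."""
    if x <= 0:
        raise ValueError("x must be positive")
    y = (x - 1.0) / (x + 1.0)
    total, power = 0.0, y
    for k in range(terms):
        total += power / (2 * k + 1)
        power *= y * y  # advance from y^(2k+1) to y^(2k+3)
    return 2.0 * total

print(ln_artanh_series(2.0), math.log(2.0))  # the two values agree to many digits
```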
This is the case because of the chain rule and the following fact: $\frac{d}{dx} \ln|x| = \frac{1}{x}$. In other words, when integrating over an interval of the real line that does not include $x = 0$, then $\int \frac{1}{x}\, dx = \ln|x| + C$, where $C$ is an arbitrary constant of integration. Likewise, when the integral is over an interval where $g(x) \neq 0$, $\int \frac{g'(x)}{g(x)}\, dx = \ln|g(x)| + C$. For example, consider the integral of $\tan x$ over an interval that does not include points where $\tan x$ is infinite: $\int \tan x\, dx = \int \frac{\sin x}{\cos x}\, dx = -\ln|\cos x| + C$. The natural logarithm can be integrated using integration by parts: Let: $u = \ln x$ and $dv = dx$; then: $du = \frac{1}{x}\, dx$ and $v = x$, so $\int \ln x\, dx = x \ln x - x + C$. Efficient computation For $\ln x$ where $x > 1$, the closer the value of $x$ is to 1, the faster the rate of convergence of its Taylor series centered at 1. The identities associated with the logarithm can be leveraged to exploit this. Such techniques were used before calculators, by referring to numerical tables and performing manipulations such as those above. Natural logarithm of 10 The natural logarithm of 10, approximately equal to 2.30258509, plays a role for example in the computation of natural logarithms of numbers represented in scientific notation, as a mantissa multiplied by a power of 10: $\ln(a \times 10^n) = \ln a + n \ln 10$. This means that one can effectively calculate the logarithms of numbers with very large or very small magnitude using the logarithms of a relatively small set of decimals in the range $[1, 10)$. High precision To compute the natural logarithm with many digits of precision, the Taylor series approach is not efficient since the convergence is slow. Especially if $x$ is near 1, a good alternative is to use Halley's method or Newton's method to invert the exponential function, because the series of the exponential function converges more quickly. For finding the value of $y$ to give $\exp(y) - x = 0$ using Halley's method, or equivalently to give $\exp(y/2) - x\exp(-y/2) = 0$ using Newton's method, the iteration simplifies to $y_{n+1} = y_n + 2\,\frac{x - \exp(y_n)}{x + \exp(y_n)}$, which has cubic convergence to $\ln x$. Another alternative for extremely high precision calculation is the formula $\ln x \approx \frac{\pi}{2 M(1, 4/s)} - m \ln 2$, where $M$ denotes the arithmetic-geometric mean of 1 and $4/s$, and $s = x\, 2^m > 2^{p/2}$, with $m$ chosen so that $p$ bits of precision is attained. (For most purposes, the value of 8 for $m$ is sufficient.) In fact, if this method is used, Newton inversion of the natural logarithm may conversely be used to calculate the exponential function efficiently. (The constants $\ln 2$ and $\pi$ can be pre-computed to the desired precision using any of several known quickly converging series.) Or, a related formula expressing $\ln x$ through the arithmetic-geometric mean of values of $\theta_2$ and $\theta_3$ can be used, where $\theta_2$ and $\theta_3$ are the Jacobi theta functions. Based on a proposal by William Kahan and first implemented in the Hewlett-Packard HP-41C calculator in 1979 (referred to under "LN1" in the display, only), some calculators, operating systems (for example Berkeley UNIX 4.3BSD), computer algebra systems and programming languages (for example C99) provide a special natural logarithm plus 1 function, alternatively named LNP1, or log1p to give more accurate results for logarithms close to zero by passing arguments $x$, also close to zero, to a function $\operatorname{log1p}(x)$, which returns the value $\ln(1 + x)$, instead of passing a value $y$ close to 1 to a function returning $\ln(y)$. The function $\operatorname{log1p}$ avoids in the floating point arithmetic a near cancelling of the absolute term 1 with the second term from the Taylor expansion of the natural logarithm. This keeps the argument, the result, and intermediate steps all close to zero where they can be most accurately represented as floating-point numbers. In addition to base $e$, the IEEE 754-2008 standard defines similar logarithmic functions near 1 for binary and decimal logarithms: $\log_2(1 + x)$ and $\log_{10}(1 + x)$. Similar inverse functions named "expm1", "expm" or "exp1m" exist as well, all with the meaning of $\operatorname{expm1}(x) = \exp(x) - 1$.
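The simplified Halley/Newton iteration and the log1p function described above can be sketched as follows; ln_newton and its iteration count are illustrative, while math.log1p is Python's standard log1p:

```python
import math

def ln_newton(x: float, y0: float = 0.0, iterations: int = 8) -> float:
    """Invert exp with the iteration y_{n+1} = y_n + 2*(x - exp(y_n))/(x + exp(y_n)).

    This is the simplified Halley/Newton step from the text; it converges
    cubically to ln(x) once y_n is reasonably close.
    """
    y = y0
    for _ in range(iterations):
        e = math.exp(y)
        y += 2.0 * (x - e) / (x + e)
    return y

print(ln_newton(10.0), math.log(10.0))           # both ~2.302585092994046
print(math.log1p(1e-10), math.log(1.0 + 1e-10))  # log1p keeps full precision near zero
```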
An identity in terms of the inverse hyperbolic tangent, $\operatorname{log1p}(x) = \ln(1 + x) = 2 \operatorname{artanh}\left(\frac{x}{2 + x}\right)$, gives a high precision value for small values of $x$ on systems that do not implement $\operatorname{log1p}(x)$. Computational complexity The computational complexity of computing the natural logarithm using the arithmetic-geometric mean (for both of the above methods) is $O(M(n) \ln n)$. Here, $n$ is the number of digits of precision at which the natural logarithm is to be evaluated, and $M(n)$ is the computational complexity of multiplying two $n$-digit numbers. Continued fractions While no simple continued fraction representations are available, several generalized continued fractions exist; these, particularly the last, converge rapidly for values close to 1. However, the natural logarithms of much larger numbers can easily be computed, by repeatedly adding those of smaller numbers, with similarly rapid convergence. For example, since 2 = 1.25³ × 1.024, the natural logarithm of 2 can be computed as: $\ln 2 = 3 \ln 1.25 + \ln 1.024$. Furthermore, since 10 = 1.25¹⁰ × 1.024³, even the natural logarithm of 10 can be computed similarly as: $\ln 10 = 10 \ln 1.25 + 3 \ln 1.024$. The reciprocal of the natural logarithm can be also written in this way. Complex logarithms The exponential function can be extended to a function which gives a complex number as $e^z$ for any arbitrary complex number $z$; simply use the infinite series with $x = z$ complex. This exponential function can be inverted to form a complex logarithm that exhibits most of the properties of the ordinary logarithm. There are two difficulties involved: no $x$ has $e^x = 0$; and it turns out that $e^{2\pi i} = 1 = e^0$. Since the multiplicative property still works for the complex exponential function, $e^z = e^{z + 2k\pi i}$, for all complex $z$ and integers $k$. So the logarithm cannot be defined for the whole complex plane, and even then it is multi-valued—any complex logarithm can be changed into an "equivalent" logarithm by adding any integer multiple of $2\pi i$ at will. The complex logarithm can only be single-valued on the cut plane. For example, $\ln i = \frac{i\pi}{2}$ or $\frac{5i\pi}{2}$ or $-\frac{3i\pi}{2}$, etc.; and although $i^4 = 1$, $4 \ln i$ can be defined as $2\pi i$, or $10\pi i$ or $-6\pi i$, and so on.
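Python's standard cmath module exposes the principal branch of this multi-valued logarithm, which makes the branch behavior above easy to observe. A minimal sketch:

```python
import cmath, math

# Principal value of the complex logarithm: the imaginary part lies in (-pi, pi].
print(cmath.log(1j))  # ~1.5707963j, i.e. i*pi/2, the principal choice for ln(i)

# Other branches differ by integer multiples of 2*pi*i: exponentiating
# i*pi/2 + 2*pi*i recovers the same point i on the unit circle.
print(cmath.exp(1j * math.pi / 2 + 2j * math.pi))  # ~1j
```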
Mathematics
Specific functions
null
21477
https://en.wikipedia.org/wiki/Neogene
Neogene
The Neogene is a geologic period and system that spans 20.45 million years from the end of the Paleogene Period 23.03 million years ago (Mya) to the beginning of the present Quaternary Period 2.58 Mya. It is the second period of the Cenozoic and the eleventh period of the Phanerozoic. The Neogene is sub-divided into two epochs, the earlier Miocene and the later Pliocene. Some geologists assert that the Neogene cannot be clearly delineated from the modern geological period, the Quaternary. The term "Neogene" was coined in 1853 by the Austrian palaeontologist Moritz Hörnes (1815–1868). The earlier term Tertiary Period was used to define the span of time now covered by Paleogene and Neogene and, despite no longer being recognized as a formal stratigraphic term, "Tertiary" still sometimes remains in informal use. During this period, mammals and birds continued to evolve into modern forms, while other groups of life remained relatively unchanged. The first humans (Homo habilis) appeared in Africa near the end of the period. Some continental movements took place, the most significant event being the connection of North and South America at the Isthmus of Panama, late in the Pliocene. This cut off the warm ocean currents from the Pacific to the Atlantic Ocean, leaving only the Gulf Stream to transfer heat to the Arctic Ocean. The global climate cooled considerably throughout the Neogene, culminating in a series of continental glaciations in the Quaternary Period that followed. Divisions In ICS terminology, from upper (later, more recent) to lower (earlier): The Pliocene Epoch is subdivided into two ages: Piacenzian Age, preceded by Zanclean Age The Miocene Epoch is subdivided into six ages: Messinian Age, preceded by Tortonian Age Serravallian Age Langhian Age Burdigalian Age Aquitanian Age In different geophysical regions of the world, other regional names are also used for the same or overlapping ages and other timeline subdivisions. The terms Neogene System (formal) and Upper Tertiary System (informal) describe the rocks deposited during the Neogene Period. Paleogeography The continents in the Neogene were very close to their current positions. The Isthmus of Panama formed, connecting North and South America. The Indian subcontinent continued to collide with Asia, forming the Himalayas. Sea levels fell, creating land bridges between Africa and Eurasia and between Eurasia and North America. Climate The global climate became more seasonal and continued an overall drying and cooling trend which began during the Paleogene. The Early Miocene was relatively cool; Early Miocene mid-latitude seawater and continental thermal gradients were already very similar to those of the present. During the Middle Miocene, Earth entered a warm phase known as the Middle Miocene Climatic Optimum (MMCO), which was driven by the emplacement of the Columbia River Basalt Group. Around 11 Ma, the Middle Miocene Warm Interval gave way to the much cooler Late Miocene. The ice caps on both poles began to grow and thicken, a process enhanced by positive feedbacks from increased formation of sea ice. Between 7 and 5.3 Ma, a decrease in global temperatures termed the Late Miocene Cooling (LMC) ensued, driven by decreases in carbon dioxide concentrations. During the Pliocene, from about 5.3 to 2.7 Ma, another warm interval occurred, being known as the Pliocene Warm Interval (PWI), interrupting the longer-term cooling trend. The Pliocene Thermal Maximum (PTM) occurred between 3.3 and 3.0 Ma.
During the Pliocene, Green Sahara phases of wet conditions in North Africa were frequent and occurred about every 21 kyr, being especially intense when Earth's orbit's eccentricity was high. The PWI had similar levels of atmospheric carbon dioxide to contemporary times and is often seen as an analogous climate to the projected climate of the near future as a result of anthropogenic global warming. Towards the end of the Pliocene, decreased heat transport towards the Antarctic resulting from a weakening of the Indonesian Throughflow (ITF) cooled the Earth, a process that exacerbated itself in a positive feedback as sea levels dropped and the ITF diminished and further limited the heat transported southward by the Leeuwin Current. By the end of the period the first of a series of glaciations of the current Ice Age began. Flora and fauna Marine and continental flora and fauna have a modern appearance. The reptile group Choristodera went extinct in the early part of the period, while the amphibians known as Allocaudata disappeared at the end of it. Neogene also marked the end of the reptilian genera Langstonia and Barinasuchus, terrestrial predators that were the last surviving members of Sebecosuchia, a group related to crocodiles. The oceans were dominated by large carnivores like megalodons and livyatans, and 19 million years ago about 70% of all pelagic shark species disappeared. Mammals and birds continued to be the dominant terrestrial vertebrates, and took many forms as they adapted to various habitats. An explosive radiation of ursids took place at the Miocene-Pliocene boundary. The first hominins, the ancestors of humans, may have appeared in southern Europe and migrated into Africa. The first humans (belonging to the species Homo habilis) appeared in Africa near the end of the period. About 20 million years ago gymnosperms in the form of some conifer and cycad groups started to diversify and produce more species due to the changing conditions. In response to the cooler, seasonal climate, tropical plant species gave way to deciduous ones and grasslands replaced many forests. Grasses therefore greatly diversified, and herbivorous mammals evolved alongside them, creating the many grazing animals of today such as horses, antelope, and bison. Ice age mammals like the mammoths and woolly rhinoceros were common in the Pliocene. With lower levels of CO2 in the atmosphere, C4 plants expanded and reached ecological dominance in grasslands during the last 10 million years. Also Asteraceae (daisies) went through a significant adaptive radiation. Eucalyptus fossil leaves occur in the Miocene of New Zealand, where the genus is not native today, but has since been introduced from Australia.
The somewhat confusing terminology and disagreement amongst geologists on where to draw what hierarchical boundaries is due to the comparatively fine divisibility of time units as time approaches the present, and due to geological preservation that causes the youngest sedimentary geological record to be preserved over a much larger area and to reflect many more environments than the older geological record. By dividing the Cenozoic Era into three (arguably two) periods (Paleogene, Neogene, Quaternary) instead of seven epochs, the periods are more closely comparable to the duration of periods in the Mesozoic and Paleozoic Eras. The International Commission on Stratigraphy (ICS) once proposed that the Quaternary be considered a sub-era (sub-erathem) of the Neogene, with a beginning date of 2.58 Ma, namely the start of the Gelasian Stage. In the 2004 proposal of the ICS, the Neogene would have consisted of the Miocene and Pliocene Epochs. The International Union for Quaternary Research (INQUA) counterproposed that the Neogene and the Pliocene end at 2.58 Ma, that the Gelasian be transferred to the Pleistocene, and the Quaternary be recognized as the third period in the Cenozoic, citing key changes in Earth's climate, oceans, and biota that occurred 2.58 Ma and its correspondence to the Gauss-Matuyama magnetostratigraphic boundary. In 2006 ICS and INQUA reached a compromise that made Quaternary a sub-era, subdividing Cenozoic into the old classical Tertiary and Quaternary, a compromise that was rejected by International Union of Geological Sciences because it split both Neogene and Pliocene in two. Following formal discussions at the 2008 International Geological Congress in Oslo, Norway, the ICS decided in May 2009 to make the Quaternary the youngest period of the Cenozoic Era with its base at 2.58 Mya and including the Gelasian Age, which was formerly considered part of the Neogene Period and Pliocene Epoch. Thus the Neogene Period ends bounding the succeeding Quaternary Period at 2.58 Mya.
Physical sciences
Geological periods
null
21485
https://en.wikipedia.org/wiki/Neutrino
Neutrino
A neutrino (denoted by the Greek letter $\nu$) is an elementary particle that interacts via the weak interaction and gravity. The neutrino is so named because it is electrically neutral and because its rest mass is so small (-ino) that it was long thought to be zero. The rest mass of the neutrino is much smaller than that of the other known elementary particles (excluding massless particles). The weak force has a very short range, the gravitational interaction is extremely weak due to the very small mass of the neutrino, and neutrinos do not participate in the electromagnetic interaction or the strong interaction. Thus, neutrinos typically pass through normal matter unimpeded and undetected. Weak interactions create neutrinos in one of three leptonic flavors: electron neutrino, muon neutrino, and tau neutrino. Each flavor is associated with the correspondingly named charged lepton. Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses with different values (all tiny, the smallest of which could be zero), but the three masses do not uniquely correspond to the three flavors: A neutrino created with a specific flavor is a specific mixture of all three mass states (a quantum superposition). Similar to some other neutral particles, neutrinos oscillate between different flavors in flight as a consequence. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino. The three mass values are not yet known as of 2024, but laboratory experiments and cosmological observations have determined the differences of their squares, an upper limit on their sum, and an upper limit on the mass of the electron neutrino. Neutrinos are fermions, which have spin of ½. For each neutrino, there also exists a corresponding antiparticle, called an antineutrino, which also has spin of ½ and no electric charge. Antineutrinos are distinguished from neutrinos by having opposite-signed lepton number and weak isospin, and right-handed instead of left-handed chirality. To conserve total lepton number (in nuclear beta decay), electron neutrinos only appear together with positrons (anti-electrons) or electron-antineutrinos, whereas electron antineutrinos only appear with electrons or electron neutrinos. Neutrinos are created by various radioactive decays; the following list is not exhaustive, but includes some of those processes: beta decay of atomic nuclei or hadrons, such as: natural nuclear reactions such as those that take place in the core of a star or during a supernova decay of radionuclides naturally present in Earth's crust decays of products of artificial nuclear reactions in nuclear reactors, nuclear bombs, or in breeding blankets of fusion reactors decays of exotic particles such as muons and pions, from cosmic rays or particle accelerators during the spin-down of a neutron star when cosmic rays or accelerated particle beams strike atoms The majority of neutrinos which are detected about the Earth are from nuclear reactions inside the Sun. At the surface of the Earth, the flux is about 65 billion ($6.5 \times 10^{10}$) solar neutrinos per second per square centimeter. Neutrinos can be used for tomography of the interior of the Earth. History Pauli's proposal The neutrino was postulated first by Wolfgang Pauli in 1930 to explain how beta decay could conserve energy, momentum, and angular momentum (spin).
In contrast to Niels Bohr, who proposed a statistical version of the conservation laws to explain the observed continuous energy spectra in beta decay, Pauli hypothesized an undetected particle that he called a "neutron", using the same -on ending employed for naming both the proton and the electron. He considered that the new particle was emitted from the nucleus together with the electron or beta particle in the process of beta decay and had a mass similar to the electron. James Chadwick discovered a much more massive neutral nuclear particle in 1932 and named it a neutron also, leaving two kinds of particles with the same name. The word "neutrino" entered the scientific vocabulary through Enrico Fermi, who used it during a conference in Paris in July 1932 and at the Solvay Conference in October 1933, where Pauli also employed it. The name (the Italian equivalent of "little neutral one") was jokingly coined by Edoardo Amaldi during a conversation with Fermi at the Institute of Physics of Via Panisperna in Rome, in order to distinguish this light neutral particle from Chadwick's heavy neutron. In Fermi's theory of beta decay, Chadwick's large neutral particle could decay to a proton, electron, and the smaller neutral particle (now called an electron antineutrino): $n^0 \to p^+ + e^- + \bar{\nu}_e$. Fermi's paper, written in 1934, unified Pauli's neutrino with Paul Dirac's positron and Werner Heisenberg's neutron–proton model and gave a solid theoretical basis for future experimental work. By 1934, there was experimental evidence against Bohr's idea that energy conservation is invalid for beta decay: At the Solvay conference of that year, measurements of the energy spectra of beta particles (electrons) were reported, showing that there is a strict limit on the energy of electrons from each type of beta decay. Such a limit is not expected if the conservation of energy is invalid, in which case any amount of energy would be statistically available in at least a few decays. The natural explanation of the beta decay spectrum as first measured in 1934 was that only a limited (and conserved) amount of energy was available, and a new particle was sometimes taking a varying fraction of this limited energy, leaving the rest for the beta particle. Pauli made use of the occasion to publicly emphasize that the still-undetected "neutrino" must be an actual particle. The first evidence of the reality of neutrinos came in 1938 via simultaneous cloud-chamber measurements of the electron and the recoil of the nucleus. Direct detection In 1942, Wang Ganchang first proposed the use of beta capture to experimentally detect neutrinos. In the 20 July 1956 issue of Science, Clyde Cowan, Frederick Reines, Francis B. "Kiko" Harrison, Herald W. Kruse, and Austin D. McGuire published confirmation that they had detected the neutrino, a result that was rewarded almost forty years later with the 1995 Nobel Prize. In this experiment, now known as the Cowan–Reines neutrino experiment, antineutrinos created in a nuclear reactor by beta decay reacted with protons to produce neutrons and positrons: $\bar{\nu}_e + p^+ \to n^0 + e^+$. The positron quickly finds an electron, and they annihilate each other. The two resulting gamma rays (γ) are detectable. The neutron can be detected by its capture on an appropriate nucleus, releasing a gamma ray. The coincidence of both events—positron annihilation and neutron capture—gives a unique signature of an antineutrino interaction.
In February 1965, the first neutrino found in nature was identified by a group including Frederick Reines and Friedel Sellschop. The experiment was performed in a specially prepared chamber at a depth of 3 km in the East Rand ("ERPM") gold mine near Boksburg, South Africa. A plaque in the main building commemorates the discovery. The experiments also implemented a primitive neutrino astronomy and looked at issues of neutrino physics and weak interactions. Neutrino flavor The antineutrino discovered by Clyde Cowan and Frederick Reines was the antiparticle of the electron neutrino. In 1962, Leon M. Lederman, Melvin Schwartz, and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino (already hypothesised with the name neutretto), which earned them the 1988 Nobel Prize in Physics. When the third type of lepton, the tau, was discovered in 1975 at the Stanford Linear Accelerator Center, it was also expected to have an associated neutrino (the tau neutrino). The first evidence for this third neutrino type came from the observation of missing energy and momentum in tau decays analogous to the beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab; its existence had already been inferred by both theoretical consistency and experimental data from the Large Electron–Positron Collider. Solar neutrino problem In the 1960s, the now-famous Homestake experiment made the first measurement of the flux of electron neutrinos arriving from the core of the Sun and found a value that was between one third and one half the number predicted by the Standard Solar Model. This discrepancy, which became known as the solar neutrino problem, remained unresolved for some thirty years, while possible problems with both the experiment and the solar model were investigated, but none could be found. Eventually, it was realized that both were actually correct and that the discrepancy between them was due to neutrinos being more complex than was previously assumed. It was postulated that the three neutrinos had nonzero and slightly different masses, and could therefore oscillate into undetectable flavors on their flight to the Earth. This hypothesis was investigated by a new series of experiments, thereby opening a new major field of research that still continues. Eventual confirmation of the phenomenon of neutrino oscillation led to two Nobel prizes, one to R. Davis, who conceived and led the Homestake experiment and Masatoshi Koshiba of Kamiokande, whose work confirmed it, and one to Takaaki Kajita of Super-Kamiokande and A.B. McDonald of Sudbury Neutrino Observatory for their joint experiment, which confirmed the existence of all three neutrino flavors and found no deficit. Oscillation A practical method for investigating neutrino oscillations was first suggested by Bruno Pontecorvo in 1957 using an analogy with kaon oscillations; over the subsequent 10 years, he developed the mathematical formalism and the modern formulation of vacuum oscillations. In 1985 Stanislav Mikheyev and Alexei Smirnov (expanding on 1978 work by Lincoln Wolfenstein) noted that flavor oscillations can be modified when neutrinos propagate through matter. 
This so-called Mikheyev–Smirnov–Wolfenstein effect (MSW effect) is important to understand because many neutrinos emitted by fusion in the Sun pass through the dense matter in the solar core (where essentially all solar fusion takes place) on their way to detectors on Earth. Starting in 1998, experiments began to show that solar and atmospheric neutrinos change flavors (see Super-Kamiokande and Sudbury Neutrino Observatory). This resolved the solar neutrino problem: the electron neutrinos produced in the Sun had partly changed into other flavors which the experiments could not detect. Although individual experiments, such as the set of solar neutrino experiments, are consistent with non-oscillatory mechanisms of neutrino flavor conversion, taken altogether, neutrino experiments imply the existence of neutrino oscillations. Especially relevant in this context are the reactor experiment KamLAND and the accelerator experiments such as MINOS. The KamLAND experiment has indeed identified oscillations as the neutrino flavor conversion mechanism involved in the solar electron neutrinos. Similarly MINOS confirms the oscillation of atmospheric neutrinos and gives a better determination of the mass squared splitting. Takaaki Kajita of Japan, and Arthur B. McDonald of Canada, received the 2015 Nobel Prize for Physics for their landmark finding, theoretical and experimental, that neutrinos can change flavors. Cosmic neutrinos As well as specific sources, a general background level of neutrinos is expected to pervade the universe, theorized to occur due to two main sources. Cosmic neutrino background (Big Bang originated) Around 1 second after the Big Bang, neutrinos decoupled, giving rise to a background level of neutrinos known as the cosmic neutrino background (CNB). Diffuse supernova neutrino background (Supernova originated) R. Davis and M. Koshiba were jointly awarded the 2002 Nobel Prize in Physics. Both conducted pioneering work on solar neutrino detection, and Koshiba's work also resulted in the first real-time observation of neutrinos from the SN 1987A supernova in the nearby Large Magellanic Cloud. These efforts marked the beginning of neutrino astronomy. SN 1987A represents the only verified detection of neutrinos from a supernova. However, many stars have gone supernova in the universe, leaving a theorized diffuse supernova neutrino background. Properties and reactions Neutrinos have half-integer spin ($\hbar/2$); therefore they are fermions. Neutrinos are leptons. They have only been observed to interact through the weak force, although it is assumed that they also interact gravitationally. Since they have non-zero mass, theoretical considerations permit neutrinos to interact magnetically, but do not require them to. As yet there is no experimental evidence for a non-zero magnetic moment in neutrinos. Flavor, mass, and their mixing Weak interactions create neutrinos in one of three leptonic flavors: electron neutrinos ($\nu_e$), muon neutrinos ($\nu_\mu$), or tau neutrinos ($\nu_\tau$), associated with the corresponding charged leptons, the electron ($e^-$), muon ($\mu^-$), and tau ($\tau^-$), respectively. Although neutrinos were long believed to be massless, it is now known that there are three discrete neutrino masses; each neutrino flavor state is a linear combination of the three discrete mass eigenstates. Although only differences of squares of the three mass values are known as of 2016, experiments have shown that these masses are tiny compared to any other particle.
From cosmological measurements, it has been calculated that the sum of the three neutrino masses must be less than one-millionth that of the electron. More formally, neutrino flavor eigenstates (creation and annihilation combinations) are not the same as the neutrino mass eigenstates (simply labeled "1", "2", and "3"). As of 2024, it is not known which of these three is the heaviest. The neutrino mass hierarchy consists of two possible configurations. In analogy with the mass hierarchy of the charged leptons, the configuration with mass 2 being lighter than mass 3 is conventionally called the "normal hierarchy", while in the "inverted hierarchy", the opposite would hold. Several major experimental efforts are underway to help establish which is correct. A neutrino created in a specific flavor eigenstate is in an associated specific quantum superposition of all three mass eigenstates. The three masses differ so little that they cannot possibly be distinguished experimentally within any practical flight path. The proportion of each mass state in the pure flavor states produced has been found to depend profoundly on the flavor. The relationship between flavor and mass eigenstates is encoded in the PMNS matrix. Experiments have established moderate- to low-precision values for the elements of this matrix, with the single complex phase in the matrix being only poorly known, as of 2016. A non-zero mass allows neutrinos to possibly have a tiny magnetic moment; if so, neutrinos would interact electromagnetically, although no such interaction has ever been observed. Flavor oscillations Neutrinos oscillate between different flavors in flight. For example, an electron neutrino produced in a beta decay reaction may interact in a distant detector as a muon or tau neutrino, as defined by the flavor of the charged lepton produced in the detector. This oscillation occurs because the three mass state components of the produced flavor travel at slightly different speeds, so that their quantum mechanical wave packets develop relative phase shifts that change how they combine to produce a varying superposition of three flavors. Each flavor component thereby oscillates as the neutrino travels, with the flavors varying in relative strengths. The relative flavor proportions when the neutrino interacts represent the relative probabilities for that flavor of interaction to produce the corresponding flavor of charged lepton. There are other possibilities in which neutrinos could oscillate even if they were massless: If Lorentz symmetry were not an exact symmetry, neutrinos could experience Lorentz-violating oscillations. Mikheyev–Smirnov–Wolfenstein effect Neutrinos traveling through matter, in general, undergo a process analogous to light traveling through a transparent material. This process is not directly observable because it does not produce ionizing radiation, but gives rise to the Mikheyev–Smirnov–Wolfenstein effect. Only a small fraction of the neutrino's energy is transferred to the material. Antineutrinos For each neutrino, there also exists a corresponding antiparticle, called an antineutrino, which also has no electric charge and half-integer spin. They are distinguished from the neutrinos by having opposite signs of lepton number and opposite chirality (and consequently opposite-sign weak isospin). As of 2016, no evidence has been found for any other difference. 
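The PMNS matrix mentioned above is conventionally parameterized by three rotation angles plus a CP-violating phase. As a rough numerical sketch, the snippet below assembles the matrix with the phase set to zero and with angle magnitudes of the approximate size reported by global fits (the specific values are assumptions for illustration); the squared magnitude of each entry gives the fraction of a mass eigenstate contained in a flavor state.

```python
import numpy as np

def pmns(th12, th13, th23):
    """Standard-parameterization PMNS matrix with the CP phase set to zero,
    built as the product of three rotations: U = R23 @ R13 @ R12."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
    R13 = np.array([[c13, 0, s13], [0, 1, 0], [-s13, 0, c13]])
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
    return R23 @ R13 @ R12

# Angle magnitudes assumed for illustration, close to published global fits:
U = pmns(np.radians(33.4), np.radians(8.6), np.radians(49.0))
# Row = flavor (e, mu, tau); column = mass eigenstate (1, 2, 3):
print(np.round(U**2, 2))
```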
So far, despite extensive and continuing searches for exceptions, in all observed leptonic processes there has never been any change in total lepton number; for example, if the total lepton number is zero in the initial state, then the final state has only matched lepton and anti-lepton pairs: electron neutrinos appear in the final state together with only positrons (anti-electrons) or electron antineutrinos, and electron antineutrinos with electrons or electron neutrinos. Antineutrinos are produced in nuclear beta decay together with a beta particle (in beta decay a neutron decays into a proton, electron, and antineutrino). All antineutrinos observed thus far had right-handed helicity (i.e., only one of the two possible spin states has ever been seen), while neutrinos were all left-handed. Antineutrinos were first detected as a result of their interaction with protons in a large tank of water. This was installed next to a nuclear reactor as a controllable source of the antineutrinos (see Cowan–Reines neutrino experiment). Researchers around the world have begun to investigate the possibility of using antineutrinos for reactor monitoring in the context of preventing the proliferation of nuclear weapons. Majorana mass Because antineutrinos and neutrinos are neutral particles, it is possible that they are the same particle. Rather than conventional Dirac fermions, neutral particles can be another type of spin-½ particle called Majorana particles, named after the Italian physicist Ettore Majorana who first proposed the concept. For the case of neutrinos this theory has gained popularity as it can be used, in combination with the seesaw mechanism, to explain why neutrino masses are so small compared to those of the other elementary particles, such as electrons or quarks. Majorana neutrinos would have the property that the neutrino and antineutrino could be distinguished only by chirality; what experiments observe as a difference between the neutrino and antineutrino could simply be due to one particle with two possible chiralities. It is not yet known whether neutrinos are Majorana or Dirac particles. It is possible to test this property experimentally. For example, if neutrinos are indeed Majorana particles, then lepton-number violating processes such as neutrinoless double-beta decay would be allowed, while they would not if neutrinos are Dirac particles. Several experiments have been and are being conducted to search for this process, e.g. GERDA, EXO, SNO+, and CUORE. The cosmic neutrino background is also a probe of whether neutrinos are Majorana particles, since there should be a different number of cosmic neutrinos detected in either the Dirac or Majorana case. Nuclear reactions Neutrinos can interact with a nucleus, changing it to another nucleus. This process is used in radiochemical neutrino detectors. In this case, the energy levels and spin states within the target nucleus have to be taken into account to estimate the probability for an interaction. In general the interaction probability increases with the number of neutrons and protons within a nucleus. It is very hard to uniquely identify neutrino interactions among the natural background of radioactivity. For this reason, in early experiments a special reaction channel was chosen to facilitate the identification: the interaction of an antineutrino with one of the hydrogen nuclei in the water molecules.
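The seesaw mechanism invoked above can be caricatured with a 2×2 mass matrix: a Dirac mass m_D couples the left-handed neutrino to a heavy right-handed state of Majorana mass M_R, and diagonalization leaves one eigenvalue suppressed to roughly m_D²/M_R. The scales below are hypothetical, chosen only to show how an electroweak-sized m_D and a very heavy M_R yield a sub-eV light neutrino.

```python
import math

# Seesaw toy model (a sketch, not the full theory): a Dirac mass m_D couples
# a light left-handed neutrino to a heavy right-handed state of Majorana
# mass M_R via the 2x2 mass matrix [[0, m_D], [m_D, M_R]].
m_D = 100.0      # GeV, roughly the electroweak scale (assumed)
M_R = 1.0e14     # GeV, a hypothetical heavy Majorana scale
disc = math.sqrt(M_R**2 + 4.0 * m_D**2)
heavy = (M_R + disc) / 2.0
light = 2.0 * m_D**2 / (M_R + disc)   # numerically stable form of (disc - M_R)/2
print(f"heavy eigenvalue ~ {heavy:.2e} GeV")
print(f"light eigenvalue ~ {light:.2e} GeV ~ {light * 1e9:.2f} eV")
print(f"compare the estimate m_D^2 / M_R = {m_D**2 / M_R:.2e} GeV")
```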
A hydrogen nucleus is a single proton, so simultaneous nuclear interactions, which would occur within a heavier nucleus, do not need to be considered for the detection experiment. Within a cubic meter of water placed right outside a nuclear reactor, only relatively few such interactions can be recorded, but the setup is now used for measuring the reactor's plutonium production rate. Induced fission and other disintegration events Very much like neutrons do in nuclear reactors, neutrinos can induce fission reactions within heavy nuclei. So far, this reaction has not been measured in a laboratory, but is predicted to happen within stars and supernovae. The process affects the abundance of isotopes seen in the universe. Neutrino-induced disintegration of deuterium nuclei has been observed in the Sudbury Neutrino Observatory, which uses a heavy water detector. Types There are three known types (flavors) of neutrinos: the electron neutrino (νe), muon neutrino (νμ), and tau neutrino (ντ), named after their partner leptons in the Standard Model. The current best measurement of the number of neutrino types comes from observing the decay of the Z boson. This particle can decay into any light neutrino and its antineutrino, and the more available types of light neutrinos, the shorter the lifetime of the Z boson. Measurements of the Z lifetime have shown that three light neutrino flavors couple to the Z. The correspondence between the six quarks in the Standard Model and the six leptons, among them the three neutrinos, suggests to physicists' intuition that there should be exactly three types of neutrino. Research There are several active research areas involving the neutrino, with aspirations of finding: the three neutrino mass values; the degree of CP violation in the leptonic sector (which may lead to leptogenesis); and evidence of physics which might break the Standard Model of particle physics, such as neutrinoless double beta decay, which would be evidence for violation of lepton number conservation. Detectors near artificial neutrino sources International scientific collaborations install large neutrino detectors near nuclear reactors or in neutrino beams from particle accelerators to better constrain the neutrino masses and the values for the magnitude and rates of oscillations between neutrino flavors. These experiments are thereby searching for the existence of CP violation in the neutrino sector; that is, whether or not the laws of physics treat neutrinos and antineutrinos differently. The KATRIN experiment in Germany began to acquire data in June 2018 to determine the value of the mass of the electron neutrino, with other approaches to this problem in the planning stages. Gravitational effects Despite their tiny masses, neutrinos are so numerous that their gravitational force can influence other matter in the universe. The three known neutrino flavors are the only candidates for dark matter that are experimentally established elementary particles – specifically, they would be hot dark matter. However, the currently known neutrino types seem to be essentially ruled out as a substantial proportion of dark matter, based on observations of the cosmic microwave background. It still seems plausible that heavier, sterile neutrinos might compose warm dark matter, if they exist. Sterile neutrino searches Other efforts search for evidence of a sterile neutrino – a fourth neutrino flavor that would not interact with matter like the three known neutrino flavors.
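The Z-width counting argument described under Types reduces to a single division: the measured invisible decay width of the Z over the Standard Model width expected per neutrino species. The widths below are approximate LEP-era values quoted from memory, so treat the sketch as illustrative rather than authoritative.

```python
# Counting light neutrino flavors from the invisible width of the Z boson.
# Approximate values (quoted from memory, not taken from this article):
Gamma_invisible = 499.0   # MeV, measured invisible width of the Z
Gamma_per_nu = 167.2      # MeV, Standard Model width per neutrino species
print(f"N_nu ~ {Gamma_invisible / Gamma_per_nu:.2f}")   # ~2.98, i.e. three flavors
```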
The possibility of sterile neutrinos is unaffected by the Z boson decay measurements described above: If their mass is greater than half the Z boson's mass, they could not be a decay product. Therefore, to be consistent with not having been detected in Z boson decays, heavy sterile neutrinos would need to have a mass of at least 45.6 GeV. The existence of such particles is in fact hinted at by experimental data from the LSND experiment. On the other hand, the currently running MiniBooNE experiment suggested that sterile neutrinos are not required to explain the experimental data, although the latest research into this area is ongoing and anomalies in the MiniBooNE data may allow for exotic neutrino types, including sterile neutrinos. A re-analysis of reference electron spectra data from the Institut Laue-Langevin in 2011 also hinted at a fourth, light sterile neutrino. Triggered by the 2011 findings, several experiments at very short distances from nuclear reactors have searched for sterile neutrinos since then. While most of them were able to rule out the existence of a light sterile neutrino, the combined results are ambiguous. According to an analysis published in 2010, data from the Wilkinson Microwave Anisotropy Probe of the cosmic background radiation is compatible with either three or four types of neutrinos. Neutrinoless double-beta decay searches Another hypothesis concerns "neutrinoless double-beta decay", which, if it exists, would violate lepton number conservation. Searches for this mechanism are underway but have not yet found evidence for it. If they were to, then what are now called antineutrinos could not be true antiparticles. Cosmic ray neutrinos Cosmic ray neutrino experiments detect neutrinos from space to study both the nature of neutrinos and the cosmic sources producing them. Speed Before neutrinos were found to oscillate, they were generally assumed to be massless, propagating at the speed of light (c). According to the theory of special relativity, the question of neutrino velocity is closely related to their mass: If neutrinos are massless, they must travel at the speed of light, and if they have mass they cannot reach the speed of light. Due to their tiny mass, the predicted speed is extremely close to the speed of light in all experiments, and current detectors are not sensitive to the expected difference. Also, there are some Lorentz-violating variants of quantum gravity which might allow faster-than-light neutrinos. A comprehensive framework for Lorentz violations is the Standard-Model Extension (SME). The first measurements of neutrino speed were made in the early 1980s using pulsed pion beams (produced by pulsed proton beams hitting a target). The pions decayed producing neutrinos, and the neutrino interactions observed within a time window in a detector at a distance were consistent with the speed of light. This measurement was repeated in 2007 using the MINOS detectors, which found the speed of neutrinos to be consistent with the speed of light at the 99% confidence level; the measured central value was slightly higher than the speed of light but, with uncertainty taken into account, was also consistent with a velocity of exactly c or slightly less. This measurement also set an upper bound on the mass of the muon neutrino at the 99% confidence level. After the detectors for the project were upgraded in 2012, MINOS refined their initial result and found agreement with the speed of light, with the difference in the arrival time of neutrinos and light of −0.0006% (±0.0012%).
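The mass-speed connection used in this section follows directly from special relativity: for a particle of mass m and energy E much greater than mc², the fractional shortfall from the speed of light is approximately (mc²)²/(2E²). The sketch below plugs in an assumed sub-eV mass to show why no time-of-flight experiment to date could resolve the difference.

```python
# 1 - v/c ~ (m c^2)^2 / (2 E^2) for E >> m c^2.  Illustrative assumed values:
m_c2_eV = 0.1          # hypothetical neutrino mass-energy, 0.1 eV
E_eV = 10.0e6          # a 10 MeV (supernova-like) neutrino
deviation = (m_c2_eV / E_eV) ** 2 / 2.0
print(f"1 - v/c ~ {deviation:.1e}")   # ~5e-17, far below timing sensitivity
```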
A similar observation was made, on a much larger scale, with supernova 1987A (SN 1987A). Antineutrinos with an energy of 10 MeV from the supernova were detected within a time window that was consistent with the speed of light for the neutrinos. So far, all measurements of neutrino speed have been consistent with the speed of light. Superluminal neutrino glitch In September 2011, the OPERA collaboration released calculations showing velocities of neutrinos with energies of 17 GeV and 28 GeV exceeding the speed of light in their experiments. In November 2011, OPERA repeated its experiment with changes so that the speed could be determined individually for each detected neutrino. The results showed the same faster-than-light speed. In February 2012, reports came out that the results may have been caused by a loose fiber optic cable attached to one of the atomic clocks which measured the departure and arrival times of the neutrinos. An independent recreation of the experiment in the same laboratory by ICARUS found no discernible difference between the speed of a neutrino and the speed of light. Mass The Standard Model of particle physics assumed that neutrinos are massless. The experimentally established phenomenon of neutrino oscillation, which mixes neutrino flavor states with neutrino mass states (analogously to CKM mixing), requires neutrinos to have nonzero masses. Massive neutrinos were originally conceived by Bruno Pontecorvo in the 1950s. Enhancing the basic framework to accommodate their mass is straightforward: a right-handed neutrino field is added to the Lagrangian. Providing for neutrino mass can be done in two ways, and some proposals use both: If, like other fundamental Standard Model fermions, mass is generated by the Dirac mechanism, then the framework would require an additional right-chiral component which is an SU(2) singlet. This component would have the conventional Yukawa interactions with the neutral component of the Higgs doublet; but, otherwise, would have no interactions with Standard Model particles. Or, else, mass can be generated by the Majorana mechanism, which would require the neutrino and antineutrino to be the same particle. A hard upper limit on the masses of neutrinos comes from cosmology: the Big Bang model predicts that there is a fixed ratio between the number of neutrinos and the number of photons in the cosmic microwave background. If the total mass of all three types of neutrinos exceeded a certain average value per neutrino, there would be so much mass in the universe that it would collapse. This limit can be circumvented by assuming that the neutrino is unstable, but there are limits within the Standard Model that make this difficult. A much more stringent constraint comes from a careful analysis of cosmological data, such as the cosmic microwave background radiation, galaxy surveys, and the Lyman-alpha forest. Analysis of data from the WMAP microwave space telescope found a tight upper limit on the sum of the masses of the three neutrino species. In 2018, the Planck collaboration published a stronger bound, derived by combining their CMB total intensity, polarization and gravitational lensing observations with baryon acoustic oscillation measurements from galaxy surveys and supernova measurements from Pantheon. A 2021 reanalysis that adds redshift space distortion measurements from the SDSS-IV eBOSS survey gets an even tighter upper limit.
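In schematic form, the two mass mechanisms described above correspond to two different terms that can be written in the Lagrangian. The expressions below are the textbook single-generation forms, shown as a sketch with flavor structure suppressed.

```latex
% Dirac mass term: requires a right-chiral field \nu_R (an SU(2) singlet)
\mathcal{L}_{\mathrm{Dirac}} = -\, m_D \left( \overline{\nu}_L \nu_R + \overline{\nu}_R \nu_L \right)

% Majorana mass term: built from \nu_L and its charge conjugate alone,
% allowed only because the neutrino carries no electric charge
\mathcal{L}_{\mathrm{Majorana}} = -\, \tfrac{1}{2}\, m_M \left( \overline{\nu_L^{\,c}}\, \nu_L + \mathrm{h.c.} \right)
```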
However, several ground-based telescopes with error bars of similar size to Planck's prefer higher values for the neutrino mass sum, indicating some tension in the data sets. The 2015 Nobel Prize in Physics was awarded to Takaaki Kajita and Arthur B. McDonald for their experimental discovery of neutrino oscillations, which demonstrates that neutrinos have mass. In 1998, research results at the Super-Kamiokande neutrino detector determined that neutrinos can oscillate from one flavor to another, which requires that they must have a nonzero mass. While this shows that neutrinos have mass, the absolute neutrino mass scale is still not known. This is because neutrino oscillations are sensitive only to the difference in the squares of the masses. As of 2020, best-fit values have been established for the difference of the squares of the masses of mass eigenstates 1 and 2 and of eigenstates 2 and 3. Since the latter is the difference of two squared masses, at least one of them must have a value that is at least the square root of this value. Thus, there exists at least one neutrino mass eigenstate with a mass at least equal to that square root. A number of efforts are under way to directly determine the absolute neutrino mass scale in laboratory experiments, especially using nuclear beta decay. Upper limits on the effective electron neutrino mass come from beta decays of tritium. The Mainz Neutrino Mass Experiment set an upper limit at the 95% confidence level. Since June 2018, the KATRIN experiment has searched for a neutrino mass in tritium decays; its February 2022 upper limit, in combination with a previous KATRIN campaign from 2019, was set at the 90% confidence level. On 31 May 2010, OPERA researchers observed the first tau neutrino candidate event in a muon neutrino beam, the first time this transformation in neutrinos had been observed, providing further evidence that they have mass. If the neutrino is a Majorana particle, the mass may be calculated by finding the half-life of neutrinoless double-beta decay of certain nuclei. The current lowest upper limit on the Majorana mass of the neutrino has been set by KamLAND-Zen. Chirality Experimental results show that within the margin of error, all produced and observed neutrinos have left-handed helicities (spins antiparallel to momenta), and all antineutrinos have right-handed helicities. In the massless limit, that means that only one of two possible chiralities is observed for either particle. These are the only chiralities included in the Standard Model of particle interactions. It is possible that their counterparts (right-handed neutrinos and left-handed antineutrinos) simply do not exist. If they do exist, their properties are substantially different from observable neutrinos and antineutrinos. It is theorized that they are either very heavy (on the order of the GUT scale; see Seesaw mechanism), do not participate in weak interaction (so-called sterile neutrinos), or both. The existence of nonzero neutrino masses somewhat complicates the situation. Neutrinos are produced in weak interactions as chirality eigenstates. Chirality of a massive particle is not a constant of motion; helicity is, but the chirality operator does not share eigenstates with the helicity operator. Free neutrinos propagate as mixtures of left- and right-handed helicity states, with mixing amplitudes on the order of the ratio of the neutrino's mass to its energy. This does not significantly affect the experiments, because neutrinos involved are nearly always ultrarelativistic, and thus mixing amplitudes are vanishingly small.
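Because oscillations fix only mass-squared differences, the heaviest eigenstate is bounded from below by the square root of the larger splitting, as the text notes. The splitting magnitudes below are commonly quoted global-fit values, inserted here as assumptions because the precise figures in this article were lost in extraction.

```python
import math

dm2_21 = 7.5e-5   # eV^2, "solar" splitting (assumed global-fit-sized value)
dm2_32 = 2.5e-3   # eV^2, "atmospheric" splitting (assumed)
# At least one mass eigenstate must weigh at least sqrt(dm2_32):
print(f"heaviest eigenstate >= {math.sqrt(dm2_32):.3f} eV")     # ~0.05 eV
print(f"sqrt of solar splitting = {math.sqrt(dm2_21):.4f} eV")  # ~0.0087 eV
```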
Effectively, they travel so quickly and time passes so slowly in their rest-frames that they do not have enough time to change over any observable path. For example, most solar neutrinos have energies on the order of ~; consequently, the fraction of neutrinos with "wrong" helicity among them cannot exceed . GSI anomaly An unexpected series of experimental results for the rate of decay of heavy highly charged radioactive ions circulating in a storage ring has provoked theoretical activity in an effort to find a convincing explanation. The observed phenomenon is known as the GSI anomaly, as the storage ring is a facility at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, Germany. The rates of weak decay of two radioactive species with half-lives of about 40 seconds and 200 seconds were found to have a significant oscillatory modulation, with a period of about 7 seconds. As the decay process produces an electron neutrino, some of the suggested explanations for the observed oscillation rate propose new or altered neutrino properties. Ideas related to flavor oscillation met with skepticism. A later proposal is based on differences between neutrino mass eigenstates. Sources Artificial Reactor neutrinos Nuclear reactors are the major source of human-generated neutrinos. The majority of energy in a nuclear reactor is generated by fission (the four main fissile isotopes in nuclear reactors are , , and ); the resultant neutron-rich daughter nuclides rapidly undergo additional beta decays, each converting one neutron to a proton and an electron and releasing an electron antineutrino. Including these subsequent decays, the average nuclear fission releases about of energy, of which roughly 95.5% remains in the core as heat, and roughly 4.5% (or about ) is radiated away as antineutrinos. For a typical nuclear reactor with a thermal power of , the total power production from fissioning atoms is actually , of which is radiated away as antineutrino radiation and never appears in the engineering. This is to say, that share of the fission energy is lost from the reactor and does not appear as heat available to run turbines, since antineutrinos penetrate all building materials practically without interaction. The antineutrino energy spectrum depends on the degree to which the fuel is burned (plutonium-239 fission antineutrinos on average have slightly more energy than those from uranium-235 fission), but in general, the detectable antineutrinos from fission have a peak energy between about 3.5 and , with a maximum energy of about . There is no established experimental method to measure the flux of low-energy antineutrinos. Only antineutrinos with an energy above the threshold of can trigger inverse beta decay and thus be unambiguously identified (see below). An estimated 3% of all antineutrinos from a nuclear reactor carry an energy above that threshold. Thus, an average nuclear power plant may generate over antineutrinos per second above the threshold, but also a much larger number ( this number) below the energy threshold; these lower-energy antineutrinos are invisible to present detector technology. Accelerator neutrinos Some particle accelerators have been used to make neutrino beams. The technique is to collide protons with a fixed target, producing charged pions or kaons. These unstable particles are then magnetically focused into a long tunnel where they decay while in flight. Because of the relativistic boost of the decaying particle, the neutrinos are produced as a beam rather than isotropically.
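The reactor numbers can be made concrete with a two-line estimate: the fission rate is the thermal power divided by the energy per fission, and the subsequent beta decays contribute roughly six antineutrinos per fission. All figures in the sketch are round assumed values, not the specifications of any actual plant.

```python
# Order-of-magnitude reactor antineutrino output (illustrative round numbers):
P_thermal_W = 4.0e9                    # assumed 4 GW thermal power
E_per_fission_J = 200e6 * 1.602e-19    # ~200 MeV per fission, in joules
fissions_per_s = P_thermal_W / E_per_fission_J
nu_per_fission = 6                     # ~6 antineutrinos from the beta-decay chains
print(f"~{fissions_per_s * nu_per_fission:.1e} antineutrinos per second")
# ~7.5e20 per second, only a few percent of which lie above the
# inverse-beta-decay detection threshold.
```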
Efforts to design an accelerator facility where neutrinos are produced through muon decays are ongoing. Such a setup is generally known as a "neutrino factory". Collider neutrinos Unlike other artificial sources, colliders produce both neutrinos and anti-neutrinos of all flavors at very high energies. The first direct observation of collider neutrinos was reported in 2023 by the FASER experiment at the Large Hadron Collider. Nuclear weapons Nuclear weapons also produce very large quantities of neutrinos. Fred Reines and Clyde Cowan considered the detection of neutrinos from a bomb prior to their search for reactor neutrinos; a fission reactor was recommended as a better alternative by Los Alamos physics division leader J.M.B. Kellogg. Fission weapons produce antineutrinos (from the fission process), and fusion weapons produce both neutrinos (from the fusion process) and antineutrinos (from the initiating fission explosion). Geologic Neutrinos are produced together with the natural background radiation. In particular, the decay chains of and isotopes, as well as , include beta decays which emit antineutrinos. These so-called geoneutrinos can provide valuable information on the Earth's interior. A first indication of geoneutrinos was found by the KamLAND experiment in 2005; updated results have been presented by KamLAND and Borexino. The main background in the geoneutrino measurements consists of antineutrinos coming from reactors. Atmospheric Atmospheric neutrinos result from the interaction of cosmic rays with atomic nuclei in the Earth's atmosphere, creating showers of particles, many of which are unstable and produce neutrinos when they decay. A collaboration of particle physicists from Tata Institute of Fundamental Research (India), Osaka City University (Japan) and Durham University (UK) recorded the first cosmic ray neutrino interaction in an underground laboratory in Kolar Gold Fields in India in 1965. Solar Solar neutrinos originate from the nuclear fusion powering the Sun and other stars. The details of the operation of the Sun are explained by the Standard Solar Model. In short: when four protons fuse to become one helium nucleus, two of them have to convert into neutrons, and each such conversion releases one electron neutrino. The Sun sends enormous numbers of neutrinos in all directions. Each second, about 65 billion (6.5 × 10^10) solar neutrinos pass through every square centimeter on the part of the Earth orthogonal to the direction of the Sun. Since neutrinos are insignificantly absorbed by the mass of the Earth, the surface area on the side of the Earth opposite the Sun receives about the same number of neutrinos as the side facing the Sun. Supernovae Colgate & White (1966) calculated that neutrinos carry away most of the gravitational energy released during the collapse of massive stars, events now categorized as Type Ib and Ic and Type II supernovae. When such stars collapse, matter densities at the core become so high that the degeneracy of electrons is not enough to prevent protons and electrons from combining to form a neutron and an electron neutrino. Mann (1997) found that a second and more profuse neutrino source is the thermal energy (100 billion kelvins) of the newly formed neutron core, which is dissipated via the formation of neutrino–antineutrino pairs of all flavors. Colgate and White's theory of supernova neutrino production was confirmed in 1987, when neutrinos from Supernova 1987A were detected.
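The quoted flux of roughly 65 billion solar neutrinos per square centimeter can be recovered from an energy-budget estimate: each completed fusion chain releases about 26.7 MeV and two electron neutrinos, and the emission is diluted over a sphere of one astronomical unit in radius. The constants below are rounded textbook values.

```python
import math

L_sun = 3.8e26                     # W, solar luminosity (rounded)
E_per_He4 = 26.7e6 * 1.602e-19     # J released per helium-4 nucleus formed
nu_per_He4 = 2                     # two electron neutrinos per fusion chain
AU = 1.496e11                      # m, Earth-Sun distance
emission_rate = L_sun / E_per_He4 * nu_per_He4      # neutrinos per second
flux_cm2 = emission_rate / (4.0 * math.pi * AU**2) / 1.0e4
print(f"~{flux_cm2:.1e} neutrinos / cm^2 / s")      # ~6e10, i.e. ~60 billion
```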
The water-based detectors Kamiokande II and IMB detected 11 and 8 antineutrinos (lepton number = −1) of thermal origin, respectively, while the scintillator-based Baksan detector found 5 neutrinos (lepton number = +1) of either thermal or electron-capture origin, in a burst less than 13 seconds long. The neutrino signal from the supernova arrived at Earth several hours before the arrival of the first electromagnetic radiation, as expected from the evident fact that the latter emerges along with the shock wave. The exceptionally feeble interaction with normal matter allowed the neutrinos to pass through the churning mass of the exploding star, while the electromagnetic photons were slowed. Because neutrinos interact so little with matter, it is thought that a supernova's neutrino emissions carry information about the innermost regions of the explosion. Much of the visible light comes from the decay of radioactive elements produced by the supernova shock wave, and even light from the explosion itself is scattered by dense and turbulent gases, and thus delayed. The neutrino burst is expected to reach Earth before any electromagnetic waves, including visible light, gamma rays, or radio waves. The exact time delay of the electromagnetic waves' arrivals depends on the velocity of the shock wave and on the thickness of the outer layer of the star. For a Type II supernova, astronomers expect the neutrino flood to be released seconds after the stellar core collapse, while the first electromagnetic signal may emerge hours later, after the explosion shock wave has had time to reach the surface of the star. The SuperNova Early Warning System project uses a network of neutrino detectors to monitor the sky for candidate supernova events; the neutrino signal will provide a useful advance warning of a star exploding in the Milky Way. Although neutrinos pass through the outer gases of a supernova without scattering, they provide information about the deeper supernova core with evidence that there, even neutrinos scatter to a significant extent. In a supernova core the densities are those of a neutron star (which is expected to be formed in this type of supernova), becoming large enough to influence the duration of the neutrino signal by delaying some neutrinos. The 13-second-long neutrino signal from SN 1987A lasted far longer than it would take for unimpeded neutrinos to cross through the neutrino-generating core of a supernova, expected to be only 3,200 kilometers in diameter for SN 1987A. The number of neutrinos counted was also consistent with a total neutrino energy estimated to be nearly all of the total energy of the supernova. For an average supernova, approximately 10^57 (an octodecillion) neutrinos are released, but the actual number detected at a terrestrial detector will be far smaller, at a level that scales linearly with the mass of the detector (with e.g. Super-Kamiokande having a mass of 50 kton) and inversely with the square of the distance to the supernova. Hence in practice it will only be possible to detect neutrino bursts from supernovae within or near the Milky Way (our own galaxy). In addition to the detection of neutrinos from individual supernovae, it should also be possible to detect the diffuse supernova neutrino background, which originates from all supernovae in the Universe. Supernova remnants The energy of supernova neutrinos ranges from a few to several tens of MeV.
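The mass-and-distance scaling stated above can be wrapped in a small helper function. The reference normalization used here (of order ten thousand events in a 50 kton water detector for a supernova at 10 kpc) is an assumed order-of-magnitude figure for illustration, not a published prediction.

```python
def expected_events(mass_kton, distance_kpc,
                    events_ref=1.0e4, mass_ref=50.0, distance_ref=10.0):
    """Supernova-burst event count: linear in detector mass, ~1/distance^2.
    The reference normalization is an assumed order-of-magnitude value."""
    return events_ref * (mass_kton / mass_ref) * (distance_ref / distance_kpc) ** 2

print(f"{expected_events(50.0, 10.0):.0f} events")   # galactic-scale distance
print(f"{expected_events(50.0, 50.0):.0f} events")   # Large-Magellanic-like distance
```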
The sites where cosmic rays are accelerated are expected to produce neutrinos that are at least one million times more energetic, produced from turbulent gaseous environments left over by supernova explosions: supernova remnants. The origin of the cosmic rays was attributed to supernovae by Baade and Zwicky; this hypothesis was refined by Ginzburg and Syrovatskii, who attributed the origin to supernova remnants, and supported their claim with the crucial remark that the cosmic ray losses of the Milky Way are compensated if the efficiency of acceleration in supernova remnants is about 10 percent. Ginzburg and Syrovatskii's hypothesis is supported by the specific mechanism of "shock wave acceleration" happening in supernova remnants, which is consistent with the original theoretical picture drawn by Enrico Fermi, and is receiving support from observational data. The very high-energy neutrinos are still to be seen, but this branch of neutrino astronomy is just in its infancy. The main existing or forthcoming experiments that aim at observing very-high-energy neutrinos from our galaxy are Baikal, AMANDA, IceCube, ANTARES, NEMO and Nestor. Related information is provided by very-high-energy gamma ray observatories, such as VERITAS, HESS and MAGIC. Indeed, the collisions of cosmic rays are supposed to produce charged pions, whose decay gives neutrinos, and neutral pions, whose decay gives gamma rays; the environment of a supernova remnant is transparent to both types of radiation. Still-higher-energy neutrinos, resulting from the interactions of extragalactic cosmic rays, could be observed with the Pierre Auger Observatory or with the dedicated experiment named ANITA. Big Bang It is thought that, just like the cosmic microwave background radiation leftover from the Big Bang, there is a background of low-energy neutrinos in our Universe. In the 1980s it was proposed that these may be the explanation for the dark matter thought to exist in the universe. Neutrinos have one important advantage over most other dark matter candidates: They are known to exist. This idea also has serious problems. From particle experiments, it is known that neutrinos are very light. This means that they easily move at speeds close to the speed of light. For this reason, dark matter made from neutrinos is termed "hot dark matter". The problem is that, being fast-moving, the neutrinos would tend to have spread out evenly in the universe before cosmological expansion made them cold enough to congregate in clumps. This would cause the part of dark matter made of neutrinos to be smeared out and unable to cause the large galactic structures that we see. These same galaxies and groups of galaxies appear to be surrounded by dark matter that is not fast enough to escape from those galaxies. Presumably this matter provided the gravitational nucleus for galaxy formation. This implies that neutrinos cannot make up a significant part of the total amount of dark matter. From cosmological arguments, relic background neutrinos are estimated to have a density of 56 of each type per cubic centimeter and a temperature of about 1.95 K if they are massless, much colder if their mass exceeds . Although their density is quite high, they have not yet been observed in the laboratory, as their energy is below thresholds of most detection methods, and due to extremely low neutrino interaction cross-sections at sub-eV energies.
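The relic-neutrino temperature follows from standard cosmology: electron-positron annihilation heated the photons but not the already-decoupled neutrinos, leaving T_nu = (4/11)^(1/3) T_photon. A one-line check with the present CMB temperature:

```python
# Relic neutrino temperature from entropy transfer at e+e- annihilation:
T_cmb = 2.725                                  # K, present CMB temperature
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_cmb
print(f"T_nu ~ {T_nu:.2f} K")                  # ~1.95 K for massless neutrinos
```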
In contrast, boron-8 solar neutrinos, which are emitted with a higher energy, have been detected definitively despite having a space density that is lower than that of relic neutrinos by some six orders of magnitude. Detection Neutrinos cannot be detected directly because they do not carry electric charge, which means they do not ionize the materials they pass through. Other ways neutrinos might affect their environment, such as the MSW effect, do not produce traceable radiation. A unique reaction to identify antineutrinos, sometimes referred to as inverse beta decay, as applied by Reines and Cowan (see below), requires a very large detector to detect a significant number of neutrinos. All detection methods require the neutrinos to carry a minimum threshold energy. So far, there is no detection method for low-energy neutrinos, in the sense that potential neutrino interactions (for example by the MSW effect) cannot be uniquely distinguished from other causes. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation. Antineutrinos were first detected in the 1950s near a nuclear reactor. Reines and Cowan used two targets containing a solution of cadmium chloride in water. Two scintillation detectors were placed next to the cadmium targets. Antineutrinos with an energy above the threshold of caused charged current interactions with the protons in the water, producing positrons and neutrons. This is very much like beta-plus decay, where energy is used to convert a proton into a neutron while a positron (e+) and an electron neutrino (νe) are emitted: energy + p → n + e+ + νe. In the Cowan and Reines experiment, instead of an outgoing neutrino, an incoming antineutrino (ν̄e) from a nuclear reactor interacts with a proton: ν̄e + p → n + e+. The resulting positron annihilation with electrons in the detector material created photons with an energy of about . Pairs of photons in coincidence could be detected by the two scintillation detectors above and below the target. The neutrons were captured by cadmium nuclei resulting in gamma rays of about that were detected a few microseconds after the photons from a positron annihilation event. Since then, various detection methods have been used. Super-Kamiokande is a large volume of water surrounded by photomultiplier tubes that watch for the Cherenkov radiation emitted when an incoming neutrino creates an electron or muon in the water. The Sudbury Neutrino Observatory is similar, but used heavy water as the detecting medium, which uses the same effects, but also allows the additional reaction of any-flavor neutrino-induced dissociation of deuterium, resulting in a free neutron which is then detected from gamma radiation after chlorine capture. Other detectors have consisted of large volumes of chlorine or gallium which are periodically checked for excesses of argon or germanium, respectively, which are created by electron neutrinos interacting with the original substance. MINOS used a solid plastic scintillator coupled to photomultiplier tubes, while Borexino uses a liquid pseudocumene scintillator also watched by photomultiplier tubes and the NOνA detector uses liquid scintillator watched by avalanche photodiodes. The IceCube Neutrino Observatory uses a large volume of the Antarctic ice sheet near the South Pole with photomultiplier tubes distributed throughout the volume. Scientific interest Neutrinos' low mass and neutral charge mean they interact exceedingly weakly with other particles and fields.
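The minimum antineutrino energy for the Reines–Cowan reaction follows from the mass difference between the initial state (antineutrino plus proton) and the final state (neutron plus positron). With standard particle masses, recalled here from memory, the kinematic threshold comes out near 1.8 MeV:

```python
# Kinematic threshold for inverse beta decay: nu_bar + p -> n + e+.
# Masses in MeV/c^2 (standard values, quoted from memory):
m_p, m_n, m_e = 938.272, 939.565, 0.511
E_threshold = ((m_n + m_e) ** 2 - m_p ** 2) / (2.0 * m_p)
print(f"E_threshold ~ {E_threshold:.2f} MeV")   # ~1.81 MeV
```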
This feature of weak interaction interests scientists because it means neutrinos can be used to probe environments that other radiation (such as light or radio waves) cannot penetrate. Using neutrinos as a probe was first proposed in the mid-20th century as a way to detect conditions at the core of the Sun. The solar core cannot be imaged directly because electromagnetic radiation (such as light) is diffused by the great amount and density of matter surrounding the core. On the other hand, neutrinos pass through the Sun with few interactions. Whereas photons emitted from the solar core may require many thousands of years to diffuse to the outer layers of the Sun, neutrinos generated in stellar fusion reactions at the core cross this distance practically unimpeded at nearly the speed of light. Neutrinos are also useful for probing astrophysical sources beyond the Solar System because they are the only known particles that are not significantly attenuated by their travel through the interstellar medium. Optical photons can be obscured or diffused by dust, gas, and background radiation. High-energy cosmic rays, in the form of swift protons and atomic nuclei, are unable to travel more than about 100 megaparsecs due to the Greisen–Zatsepin–Kuzmin limit (GZK cutoff). Neutrinos, in contrast, can travel even greater distances barely attenuated. The galactic core of the Milky Way is fully obscured by dense gas and numerous bright objects. Neutrinos produced in the galactic core might be measurable by Earth-based neutrino telescopes. Another important use of the neutrino is in the observation of supernovae, the explosions that end the lives of highly massive stars. The core collapse phase of a supernova is an extremely dense and energetic event. It is so dense that no known particles are able to escape the advancing core front except for neutrinos. Consequently, supernovae are known to release approximately 99% of their radiant energy in a short (10-second) burst of neutrinos. These neutrinos are a very useful probe for core collapse studies. The rest mass of the neutrino is an important test of cosmological and astrophysical theories. The neutrino's significance in probing cosmological phenomena is as great as that of any other method, and is thus a major focus of study in astrophysical communities. The study of neutrinos is important in particle physics because neutrinos typically have the lowest rest mass among massive particles (i.e., the lowest non-zero rest mass, excluding the zero rest mass of photons and gluons), and hence are examples of the lowest-energy massive particles theorized in extensions of the Standard Model of particle physics. In November 2012, American scientists used a particle accelerator to send a coherent neutrino message through 780 feet of rock. This marks the first use of neutrinos for communication, and future research may permit binary neutrino messages to be sent immense distances through even the densest materials, such as the Earth's core. In July 2018, the IceCube Neutrino Observatory announced that they had traced an extremely-high-energy neutrino that hit their Antarctica-based research station in September 2017 back to its point of origin in the blazar TXS 0506+056 located 3.7 billion light-years away in the direction of the constellation Orion. This was the first time that a neutrino detector had been used to locate an object in space and that a source of cosmic rays had been identified.
In November 2022, the IceCube Neutrino Observatory found evidence of high-energy neutrino emission from NGC 1068, also known as Messier 77, an active galaxy in the constellation Cetus and one of the most familiar and well-studied galaxies to date. In June 2023, astronomers reported using a new technique to detect, for the first time, the release of neutrinos from the galactic plane of the Milky Way galaxy.
Physical sciences
Fermions
null
21488
https://en.wikipedia.org/wiki/Nanotechnology
Nanotechnology
Nanotechnology is the manipulation of matter with at least one dimension sized from 1 to 100 nanometers (nm). At this scale, commonly known as the nanoscale, surface area and quantum mechanical effects become important in describing properties of matter. This definition of nanotechnology includes all types of research and technologies that deal with these special properties. It is common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to research and applications whose common trait is scale. An earlier understanding of nanotechnology referred to the particular technological goal of precisely manipulating atoms and molecules for fabricating macroscale products, now referred to as molecular nanotechnology. Nanotechnology defined by scale includes fields of science such as surface science, organic chemistry, molecular biology, semiconductor physics, energy storage, engineering, microfabrication, and molecular engineering. The associated research and applications range from extensions of conventional device physics to molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale. Nanotechnology may be able to create new materials and devices with diverse applications, such as in nanomedicine, nanoelectronics, biomaterials, energy production, and consumer products. However, nanotechnology raises issues, including concerns about the toxicity and environmental impact of nanomaterials, and their potential effects on global economics, as well as various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted. Origins The concepts that seeded nanotechnology were first discussed in 1959 by physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known. Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" that would be able to build a copy of itself and of other items of arbitrary complexity with atom-level control. Also in 1986, Drexler co-founded The Foresight Institute to increase public awareness and understanding of nanotechnology concepts and implications. The emergence of nanotechnology as a field in the 1980s occurred through the convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework, and high-visibility experimental advances that drew additional attention to the prospects. In the 1980s, two breakthroughs sparked the growth of nanotechnology. First, the invention of the scanning tunneling microscope in 1981 enabled visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in 1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory received a Nobel Prize in Physics in 1986. Binnig, Quate and Gerber also invented the analogous atomic force microscope that year. Second, fullerenes (buckyballs) were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.
C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related carbon nanotubes (sometimes called graphene tubes or Bucky tubes) which suggested potential applications for nanoscale electronics and devices. The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC in 1991, for which Iijima won the inaugural 2008 Kavli Prize in Nanoscience. In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology. Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003. Meanwhile, commercial products based on advancements in nanoscale technologies began emerging. These products were limited to bulk applications of nanomaterials and did not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based sunscreens, carbon fiber strengthening using silica nanoparticles, and carbon nanotubes for stain-resistant textiles. Governments moved to promote and fund research into nanotechnology, such as through the American National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established research funding, and in Europe via the European Framework Programmes for Research and Technological Development. By the mid-2000s, scientific attention began to flourish. Nanotechnology roadmaps centered on atomically precise manipulation of matter and discussed existing and projected capabilities, goals, and applications. Fundamental concepts Nanotechnology is the science and engineering of functional systems at the molecular scale. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up making complete, high-performance products. One nanometer (nm) is one billionth, or 10^−9, of a meter. By comparison, typical carbon–carbon bond lengths, or the spacing between these atoms in a molecule, are in the range , and DNA's diameter is around 2 nm. On the other hand, the smallest cellular life forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the American National Nanotechnology Initiative. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which have an approximately 0.25 nm kinetic diameter). The upper limit is more or less arbitrary, but is around the size below which phenomena not observed in larger structures start to become apparent and can be made use of. These phenomena make nanotechnology distinct from devices that are merely miniaturized versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology. To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the Earth. Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition.
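The marble-to-Earth comparison above is easy to sanity-check numerically; the marble and Earth diameters below are rough assumed figures.

```python
# Checking the scale analogy: a nanometer is to a meter roughly as a marble
# is to the Earth.  Both sizes below are rough assumptions:
marble_m = 0.013        # ~1.3 cm marble diameter
earth_m = 1.27e7        # mean Earth diameter in meters
print(f"marble/Earth ~ {marble_m / earth_m:.1e}")   # ~1e-9, one part per billion
```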
In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control. Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved to provide nanotechnology's scientific foundation. Larger to smaller: a materials perspective Several phenomena become pronounced as system size. These include statistical mechanical effects, as well as quantum mechanical effects, for example, the "quantum size effect" in which the electronic properties of solids alter along with reductions in particle size. Such effects do not apply at macro or micro dimensions. However, quantum effects can become significant when nanometer scales. Additionally, physical (mechanical, electrical, optical, etc.) properties change versus macroscopic systems. One example is the increase in surface area to volume ratio altering mechanical, thermal, and catalytic properties of materials. Diffusion and reactions can be different as well. Systems with fast ion transport are referred to as nanoionics. The mechanical properties of nanosystems are of interest in research. Simple to complex: a molecular perspective Modern synthetic chemistry can prepare small molecules of almost any structure. These methods are used to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner. These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to automatically arrange themselves into a useful conformation through a bottom-up approach. The concept of molecular recognition is important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme targeting a single substrate, or the specific folding of a protein. Thus, components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole. Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, many examples of self-assembly based on molecular recognition in exist in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. Molecular nanotechnology: a long-term view Molecular nanotechnology, sometimes called molecular manufacturing, concerns engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with molecular assemblers, machines that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles. When Drexler independently coined and popularized the term "nanotechnology", he envisioned manufacturing technology based on molecular machine systems. 
The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: biology was full of examples of sophisticated, stochastically optimized biological machines. Drexler and other researchers have proposed that advanced nanotechnology ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification. The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems: Molecular Machinery, Manufacturing, and Computation. In general, assembling devices on the atomic scale requires positioning atoms on other atoms of comparable size and stickiness. Carlo Montemagno's view is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis was impossible due to difficulties in mechanically manipulating individual molecules. This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003. Though biology clearly demonstrates that molecular machines are possible, non-biological molecular machines remain in their infancy. Alex Zettl and colleagues at Lawrence Berkeley Laboratories and UC Berkeley constructed at least three molecular devices whose motion is controlled via changing voltage: a nanotube nanomotor, a molecular actuator, and a nanoelectromechanical relaxation oscillator. Ho and Lee at Cornell University in 1999 used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal and chemically bound the CO to the Fe by applying a voltage. Research Nanomaterials Many areas of science develop or study materials having unique properties arising from their nanoscale dimensions. Interface and colloid science produced many materials that may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related to nanoionics and nanoelectronics. Nanoscale materials can be used for bulk applications; most commercial applications of nanotechnology are of this flavor. Progress has been made in using these materials for medical applications, including tissue engineering, drug delivery, antibacterials and biosensors. Nanoscale materials such as nanopillars are used in solar cells. Semiconductor nanoparticles have been incorporated in products such as display technology, lighting, solar cells and biological imaging; see quantum dots. Bottom-up approaches The bottom-up approach seeks to arrange smaller components into more complex assemblies. DNA nanotechnology utilizes Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids. Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) aim at designing molecules with well-defined shape (e.g. bis-peptides). More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip-pen nanolithography. This technique fits into the larger subfield of nanolithography. Molecular-beam epitaxy allows for bottom-up assemblies of materials, most notably semiconductor materials commonly used in chip and computing applications, stacks, gating, and nanowire lasers. Top-down approaches These seek to create smaller devices by using larger ones to direct their assembly. Many technologies that descended from conventional solid-state silicon methods for fabricating microprocessors are capable of creating features smaller than 100 nm. Giant magnetoresistance-based hard drives already on the market fit this description, as do atomic layer deposition (ALD) techniques. Peter Grünberg and Albert Fert received the Nobel Prize in Physics in 2007 for their discovery of giant magnetoresistance and contributions to the field of spintronics. Solid-state techniques can be used to create nanoelectromechanical systems or NEMS, which are related to microelectromechanical systems or MEMS. Focused ion beams can directly remove material, or even deposit material when suitable precursor gases are applied at the same time. For example, this technique is used routinely to create sub-100 nm sections of material for analysis in transmission electron microscopy. Atomic force microscope tips can be used as a nanoscale "write head" to deposit a resist, which is then followed by an etching process to remove material in a top-down method. Functional approaches Functional approaches seek to develop useful components without regard to how they might be assembled. One example is magnetic assembly for the synthesis of anisotropic superparamagnetic materials such as magnetic nanochains. Molecular scale electronics seeks to develop molecules with useful electronic properties. These could be used as single-molecule components in a nanoelectronic device, such as rotaxane. Synthetic chemical methods can be used to create synthetic molecular motors, such as in a so-called nanocar. Biomimetic approaches Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied. Bionanotechnology is the use of biomolecules for applications in nanotechnology, including the use of viruses and lipid assemblies. Nanocellulose, a nanopolymer often used for bulk-scale applications, has gained interest owing to its useful properties such as abundance, high aspect ratio, good mechanical properties, renewability, and biocompatibility. Speculative These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry could progress. These often take a big-picture view, with more emphasis on societal implications than engineering details. Molecular nanotechnology is a proposed approach that involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities. Nanorobotics considers self-sufficient machines operating at the nanoscale. There are hopes for applying nanorobots in medicine. Nevertheless, progress on innovative materials and patented methodologies has been demonstrated.
Productive nanosystems are "systems of nanosystems" that could produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage could form the basis of another industrial revolution. Mihail Roco proposed four stages of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems. Programmable matter seeks to design materials whose properties can be easily, reversibly, and externally controlled through a fusion of information science and materials science. Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are used only informally. Dimensionality in nanomaterials Nanomaterials can be classified as 0D, 1D, 2D, and 3D nanomaterials. Dimensionality plays a major role in determining the characteristics of nanomaterials, including their physical, chemical, and biological properties. As dimensionality decreases, the surface-to-volume ratio increases (illustrated in the sketch below), meaning that lower-dimensional nanomaterials expose more surface area relative to their volume than 3D nanomaterials do. Two-dimensional (2D) nanomaterials have been extensively investigated for electronic, biomedical, drug delivery, and biosensor applications. Tools and techniques Scanning microscopes The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two versions of scanning probes used for nanoscale observation. Scanning probe microscopes achieve much higher resolution than optical microscopes, since they are not limited by the wavelength of sound or light. The tip of a scanning probe can also be used to manipulate nanostructures (positional assembly). Feature-oriented scanning may be a promising way to implement these nanoscale manipulations via an automatic algorithm. However, this is still a slow process because of the low scanning velocity of the microscope. The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, the feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques. Lithography Various techniques of lithography, such as optical lithography, X-ray lithography, dip-pen lithography, electron beam lithography, and nanoimprint lithography, offer top-down fabrication in which a bulk material is reduced to a nanoscale pattern.
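As referenced under dimensionality above, the growth of the surface-to-volume ratio as particle size shrinks can be made concrete with a short calculation. The following Python sketch uses idealized spherical particles and purely illustrative diameters:

```python
import math

def surface_to_volume(diameter_nm: float) -> float:
    """Surface-to-volume ratio (nm^-1) of an idealized spherical
    particle; for a sphere this simplifies to 3 / radius."""
    r = diameter_nm / 2.0
    area = 4.0 * math.pi * r ** 2            # nm^2
    volume = (4.0 / 3.0) * math.pi * r ** 3  # nm^3
    return area / volume

# Shrinking the diameter 1000-fold raises the ratio 1000-fold.
for d in (1000.0, 100.0, 10.0, 1.0):  # diameters in nm
    print(f"{d:6.0f} nm sphere: SA/V = {surface_to_volume(d):6.3f} nm^-1")
```

This is one reason nanoscale materials are dominated by surface behavior: in a particle approaching 1 nm, essentially every atom sits at or near a surface.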
Another group of nano-technological techniques include those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers. Bottom-up In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly, and positional assembly. Dual-polarization interferometry is one tool suitable for characterization of self-assembled thin films. Another variation of the bottom-up approach is molecular-beam epitaxy (MBE). Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE lays down atomically precise layers of atoms and, in the process, builds up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics. Therapeutic products based on responsive nanomaterials, such as the highly deformable, stress-sensitive Transfersome vesicles, are approved for human use in some countries. Applications As of August 21, 2008, the Project on Emerging Nanotechnologies estimated that over 800 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. Most applications are "first generation" passive nanomaterials, which include titanium dioxide in sunscreen, cosmetics, surface coatings, and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants, and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints, and outdoor furniture varnishes; and cerium oxide as a fuel catalyst. In the electric car industry, single-wall carbon nanotubes (SWCNTs) address key lithium-ion battery challenges, including energy density, charge rate, service life, and cost. SWCNTs connect electrode particles during the charge/discharge process, preventing premature battery degradation. Their exceptional ability to wrap active material particles enhances electrical conductivity and physical properties, setting them apart from multi-walled carbon nanotubes and carbon black. Further applications allow tennis balls to last longer, golf balls to fly straighter, and bowling balls to become more durable. Trousers and socks have been infused with nanotechnology so that they last longer and keep the wearer cooler in summer. Bandages are infused with silver nanoparticles to heal cuts faster. Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology. Nanotechnology is also being used to build structures for on-chip computing with light, for example on-chip optical quantum information processing and picosecond transmission of information. Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like doctors' offices and homes. Cars are being manufactured with nanomaterials so that parts may require less metal during manufacturing and less fuel to operate in the future.
Nanoencapsulation involves the enclosure of active substances within carriers. Typically, these carriers offer advantages such as enhanced bioavailability, controlled release, targeted delivery, and protection of the encapsulated substances. In the medical field, nanoencapsulation plays a significant role in drug delivery. It facilitates more efficient drug administration, reduces side effects, and increases treatment effectiveness. Nanoencapsulation is particularly useful for improving the bioavailability of poorly water-soluble drugs, enabling controlled and sustained drug release, and supporting the development of targeted therapies. These features collectively contribute to advancements in medical treatments and patient care. Nanotechnology may play a role in tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage. For example, when creating scaffolds to support bone growth, researchers may mimic osteoclast resorption pits. Researchers have used DNA origami-based nanobots capable of carrying out logic functions to target drug delivery in cockroaches. A nano Bible (a 0.5 mm² silicon chip) was created by the Technion to increase youth interest in nanotechnology. Implications One concern is the effect that industrial-scale manufacturing and use of nanomaterials will have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated. However, regulation might stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health, research potential health effects stemming from exposure to nanoparticles. Nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are released in the wash. These particles are then flushed into the wastewater stream and may destroy bacteria that are critical components of natural ecosystems, farms, and waste treatment processes. Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability. Experts, including David Rejeski, director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies, have testified that commercialization depends on adequate oversight, risk research strategy, and public engagement. As of 2006, Berkeley, California, was the only US city to regulate nanotechnology. Health and environmental concerns Inhaling airborne nanoparticles and nanofibers may contribute to pulmonary diseases, e.g. fibrosis. Researchers have found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, leading to significant increases in biomarkers for inflammation and stress response, and that nanoparticles induce skin aging through oxidative stress in hairless mice. A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".
A Nature Nanotechnology study suggested that some forms of carbon nanotubes could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes, said: "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully." In the absence of specific government regulation, Paull and Lyons (2008) called for the exclusion of engineered nanoparticles from food. A newspaper article reported that workers in a paint factory developed serious lung disease and that nanoparticles were found in their lungs. Regulation Calls for tighter regulation of nanotechnology have accompanied a debate related to human health and safety risks. Some regulatory agencies cover some nanotechnology products and processes – by "bolting on" nanotechnology to existing regulations – leaving clear gaps. Davies proposed a road map describing steps to deal with these shortcomings. Andrew Maynard, chief science advisor to the Woodrow Wilson Center's Project on Emerging Nanotechnologies, reported insufficient funding for human health and safety research and, as a result, an inadequate understanding of human health and safety risks. Some academics have called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling, and additional safety data requirements. A Royal Society report identified a risk of nanoparticles or nanotubes being released during disposal, destruction, and recycling, and recommended that "manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure".
21490
https://en.wikipedia.org/wiki/Nylon
Nylon
Nylon is a family of synthetic polymers with amide backbones, usually linking aliphatic or semi-aromatic groups. Nylons are white or colorless and soft; some are silk-like. They are thermoplastic, which means that they can be melt-processed into fibers, films, and diverse shapes. The properties of nylons are often modified by blending with a wide variety of additives. Many kinds of nylon are known. One family, designated nylon-XY, is derived from diamines and dicarboxylic acids of carbon chain lengths X and Y, respectively. An important example is nylon-6,6 (repeating unit C₁₂H₂₂N₂O₂). Another family, designated nylon-Z, is derived from aminocarboxylic acids with carbon chain length Z. An example is nylon-6. Nylon polymers have significant commercial applications in fabric and fibers (apparel, flooring and rubber reinforcement), in shapes (molded parts for cars, electrical equipment, etc.), and in films (mostly for food packaging). History DuPont and the invention of nylon Researchers at DuPont began developing cellulose-based fibers, culminating in the synthetic fiber rayon. DuPont's experience with rayon was an important precursor to its development and marketing of nylon. DuPont's invention of nylon spanned an eleven-year period, from the initial research program in polymers in 1927 to its announcement in 1938, shortly before the opening of the 1939 New York World's Fair. The project grew from a new organizational structure at DuPont, suggested by Charles Stine in 1927, in which the chemical department would be composed of several small research teams that would focus on "pioneering research" in chemistry and would "lead to practical applications". Harvard instructor Wallace Hume Carothers was hired to direct the polymer research group. Initially he was allowed to focus on pure research, building on and testing the theories of German chemist Hermann Staudinger. He was very successful, as the research he undertook greatly improved the knowledge of polymers and contributed to the science. Nylon was the first commercially successful synthetic thermoplastic polymer. DuPont began its research project in 1927. The first nylon, nylon 66, was synthesized on February 28, 1935, by Wallace Hume Carothers at DuPont's research facility at the DuPont Experimental Station. In response to Carothers' work, Paul Schlack at IG Farben developed nylon 6, a different molecule based on caprolactam, on January 29, 1938. In the spring of 1930, Carothers and his team had already synthesized two new polymers. One was neoprene, a synthetic rubber widely used during World War II. The other was a white, elastic but strong paste that would later become nylon. After these discoveries, Carothers' team was directed to shift its research from a pure-research approach investigating general polymerization to the more practically focused goal of finding "one chemical combination that would lend itself to industrial applications". It was not until the beginning of 1935 that a polymer called "polymer 6-6" was finally produced. Carothers' coworker, Washington University alumnus Julian W. Hill, had used a cold drawing method to produce a polyester in 1930. This cold drawing method was later used by Carothers in 1935 to fully develop nylon. The resulting first example of nylon (nylon 66) had all the desired properties of elasticity and strength.
However, it also required a complex manufacturing process that would become the basis of industrial production in the future. DuPont obtained a patent for the polymer in September 1938 and quickly achieved a monopoly on the fiber. Carothers died 16 months before the announcement of nylon, and so never saw his success. Nylon was first used commercially in a nylon-bristled toothbrush in 1938, followed more famously by women's stockings or "nylons", which were shown at the 1939 New York World's Fair and first sold commercially in 1940, whereupon they became an instant commercial success, with 64 million pairs sold during their first year on the market. During World War II, almost all nylon production was diverted to the military for use in parachutes and parachute cord. Wartime uses of nylon and other plastics greatly increased the market for the new materials. The production of nylon required collaboration among three departments at DuPont: the Department of Chemical Research, the Ammonia Department, and the Department of Rayon. Some of the key ingredients of nylon had to be produced using high-pressure chemistry, the main area of expertise of the Ammonia Department. Nylon was considered a "godsend to the Ammonia Department", which had been in financial difficulties. The reactants of nylon soon constituted half of the Ammonia Department's sales and helped it emerge from the Great Depression by creating jobs and revenue at DuPont. DuPont's nylon project demonstrated the importance of chemical engineering in industry, helped create jobs, and furthered the advancement of chemical engineering techniques. In fact, it developed a chemical plant that provided 1,800 jobs and used the latest technologies of the time, which are still used as a model for chemical plants today. The ability to acquire a large number of chemists and engineers quickly was a huge contribution to the success of DuPont's nylon project. The first nylon plant was located at Seaford, Delaware, beginning commercial production on December 15, 1939. On October 26, 1995, the Seaford plant was designated a National Historic Chemical Landmark by the American Chemical Society. Early marketing strategies An important part of nylon's popularity stems from DuPont's marketing strategy. DuPont promoted the fiber to increase demand before the product was available to the general market. Nylon's commercial announcement occurred on October 27, 1938, at the final session of the Herald Tribune's annual "Forum on Current Problems", at the site of the upcoming New York World's Fair. The "first man-made organic textile fiber", which was derived from "coal, water and air" and promised to be "as strong as steel, as fine as the spider's web", was received enthusiastically by the audience, many of them middle-class women, and made the headlines of most newspapers. Nylon was introduced as part of "The world of tomorrow" at the 1939 New York World's Fair and was featured at DuPont's "Wonder World of Chemistry" at the Golden Gate International Exposition in San Francisco in 1939. Actual nylon stockings were not shipped to selected stores in the national market until May 15, 1940. However, a limited number were released for sale in Delaware before that. The first public sale of nylon stockings occurred on October 24, 1939, in Wilmington, Delaware. All 4,000 available pairs of stockings sold within three hours.
A further benefit of the campaign was that it promised to reduce silk imports from Japan, an argument that won over many wary customers. Nylon was even mentioned by President Roosevelt's cabinet, which addressed its "vast and interesting economic possibilities" five days after the material was formally announced. However, the early excitement over nylon also caused problems. It fueled unreasonable expectations that nylon would be better than silk, a miracle fabric as strong as steel that would last forever and never run. Realizing the danger of claims such as "New Hosiery Held Strong as Steel" and "No More Runs", DuPont scaled back the terms of the original announcement, especially those stating that nylon would possess the strength of steel. Also, DuPont executives marketing nylon as a revolutionary man-made material did not at first realize that some consumers experienced a sense of unease and distrust, even fear, towards synthetic fabrics. A particularly damaging news story, drawing on DuPont's 1938 patent for the new polymer, suggested that one method of producing nylon might be to use cadaverine (pentamethylenediamine), a chemical extracted from corpses. Although scientists asserted that cadaverine was also extracted by heating coal, the public often refused to listen. A woman confronted one of the lead scientists at DuPont and refused to accept that the rumor was not true. DuPont changed its campaign strategy, emphasizing that nylon was made from "coal, air and water", and started focusing on the personal and aesthetic aspects of nylon, rather than its intrinsic qualities. Nylon was thus domesticated, and attention shifted to the material and consumer aspects of the fiber, with slogans like "If it's nylon, it's prettier, and oh! How fast it dries!". Production of nylon fabric After nylon's nationwide release in 1940, its production ramped up significantly. In that year alone, 1,300 tons of the fabric were produced, marking a remarkable start for this innovative material.[8]: 100 The demand for nylon surged, particularly for nylon stockings, which became an instant sensation. During their first year on the market, an astounding 64 million pairs of nylon stockings were sold, reflecting the fabric's rapid integration into daily life and fashion.[8]: 101 Such was the success of nylon that in 1941, just a year after its launch, a second plant was opened in Martinsville, Virginia, to meet the growing demand and ensure a steady supply of this popular fabric. This expansion underscored the profound impact nylon had on the textile industry and its rapid rise to prominence as a versatile and sought-after material. While nylon was marketed as the durable and indestructible material of the people, it was sold at about one-and-a-half times the price of silk stockings ($4.27 per pound of nylon versus $2.79 per pound of silk). Sales of nylon stockings were strong in part due to changes in women's fashion. As Lauren Olds explains: "by 1939 [hemlines] had inched back up to the knee, closing the decade just as it started off". The shorter skirts were accompanied by a demand for stockings that offered fuller coverage without the use of garters to hold them up. However, on February 11, 1942, nylon production was redirected from being a consumer material to one used by the military. DuPont's production of nylon stockings and other lingerie stopped, and most manufactured nylon was used to make parachutes and tents for World War II.
Although nylon stockings made before the war could still be purchased, they were generally sold on the black market for as much as $20. Once the war ended, the return of nylon was awaited with great anticipation. Although DuPont projected yearly production of 360 million pairs of stockings, there were delays in converting back to consumer rather than wartime production. In 1946, the demand for nylon stockings could not be satisfied, which led to the nylon riots. In one instance, an estimated 40,000 people lined up in Pittsburgh to buy 13,000 pairs of nylons. In the meantime, women cut up nylon tents and parachutes left from the war in order to make blouses and wedding dresses. Between the end of the war and 1952, production of stockings and lingerie used 80% of the world's nylon. DuPont focused on catering to civilian demand and continually expanded its production. Introduction of nylon blends As pure nylon hosiery was sold in a wider market, problems became apparent. Nylon stockings were found to be fragile, in the sense that the thread often tended to unravel lengthwise, creating "runs". People also reported that pure nylon textiles could be uncomfortable due to nylon's lack of absorbency. Moisture stayed inside the fabric near the skin under hot or moist conditions instead of being "wicked" away. Nylon fabric could also be itchy and tended to cling and sometimes spark as a result of static electrical charge built up by friction. Also, under some conditions, nylon could degrade, perforating or shredding stockings. Scientists explained this as acid hydrolysis resulting from air pollution, attributing it to London smog in 1952, as well as poor air quality in New York and Los Angeles. The solution found to the problems with pure nylon fabric was to blend nylon with other existing fibers or polymers such as cotton, polyester, and spandex. This led to the development of a wide array of blended fabrics. The new nylon blends retained the desirable properties of nylon (elasticity, durability, ability to be dyed) and kept clothing prices low and affordable. As of 1950, the New York Quartermaster Procurement Agency (NYQMPA), which developed and tested textiles for the Army and Navy, had committed to developing a wool-nylon blend. They were not the only ones to introduce blends of both natural and synthetic fibers. America's Textile Reporter referred to 1951 as the "Year of the blending of the fibers". Fabric blends included mixes like "Bunara" (wool-rabbit-nylon) and "Casmet" (wool-nylon-fur). In Britain, in November 1951, the inaugural address of the 198th session of the Royal Society for the Encouragement of Arts, Manufactures and Commerce focused on the blending of textiles. DuPont's Fabric Development Department cleverly targeted French fashion designers, supplying them with fabric samples. In 1955, designers such as Coco Chanel, Jean Patou, and Christian Dior showed gowns created with DuPont fibers, and fashion photographer Horst P. Horst was hired to document their use of DuPont fabrics. American Fabrics credited blends with providing "creative possibilities and new ideas for fashions which had been hitherto undreamed of." Etymology DuPont went through an extensive process to generate names for its new product. In 1940, John W. Eckelberry of DuPont stated that the letters "nyl" were arbitrary, and the "on" was copied from the suffixes of other fibers such as cotton and rayon. A later publication by DuPont (Context, vol. 7, no.
2, 1978) explained that the name was originally intended to be "No-Run" ("run" meaning "unravel") but was modified to avoid making such an unjustified claim. Since the products were not really run-proof, the vowels were swapped to produce "nuron", which was changed to "nilon" "to make it sound less like a nerve tonic". For clarity in pronunciation, the "i" was changed to "y". A persistent urban legend exists that the name is derived from "New York" and "London"; however, no organization in London was ever involved in the research and production of nylon. Longer-term popularity Nylon's popularity soared in the 1940s and 1950s due to its durability and sheerness. In the 1970s, it became more popular due to its flexibility and price. In spite of oil shortages in the 1970s, consumption of nylon textiles continued to grow by 7.5% per year between the 1960s and 1980s. Overall production of synthetic fibers, however, dropped from 63% of the world's textile production in 1965 to 45% in the early 1970s. The appeal of "new" technologies wore off, and nylon fabric "was going out of style in the 1970s". Also, consumers became concerned about environmental costs throughout the production cycle: obtaining the raw materials (oil), energy use during production, waste produced during creation of the fiber, and eventual waste disposal of materials that were not biodegradable. Synthetic fibers have not dominated the market since the 1950s and 1960s. Worldwide production of nylon is estimated at 8.9 million tons. Although pure nylon has many flaws and is now rarely used, its derivatives have greatly influenced and contributed to society. From scientific discoveries relating to the production of plastics and polymerization, to economic impact during the Depression and the changing of women's fashion, nylon was a revolutionary product. The Lunar Flag Assembly, the first flag planted on the Moon in a symbolic gesture of celebration, was made of nylon. The flag itself cost $5.50 but had to have a specially designed flagpole with a horizontal bar so that it would appear to "fly". One historian describes nylon as "an object of desire", comparing the invention to Coca-Cola in the eyes of 20th-century consumers. Chemistry In common usage, the prefix "PA" (polyamide) and the name "Nylon" are used interchangeably and are equivalent in meaning. The nomenclature used for nylon polymers was devised during the synthesis of the first simple aliphatic nylons and uses numbers to describe the number of carbons in each monomer unit, including the carbon(s) of the carboxylic acid(s). Subsequent use of cyclic and aromatic monomers required the use of letters or sets of letters. One number after "PA" or "Nylon" indicates a homopolymer which is monadic, or based on one amino acid (minus H2O) as monomer: PA 6 or Nylon 6: [NH−(CH2)5−CO]n, made from ε-caprolactam. Two numbers or sets of letters indicate a dyadic homopolymer formed from two monomers: one diamine and one dicarboxylic acid. The first number indicates the number of carbons in the diamine, and the second the number of carbons in the dicarboxylic acid. The two numbers should be separated by a comma for clarity, but the comma is often omitted.
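To make the carbon counting concrete, the repeating unit of a dyadic nylon can be derived by combining one diamine with one diacid and removing the two molecules of water lost on condensation. The following Python sketch is a minimal element-counting illustration (not a general-purpose chemistry tool); it reproduces the C₁₂H₂₂N₂O₂ repeating unit of nylon-6,6 from hexamethylenediamine and adipic acid:

```python
from collections import Counter

# Element counts for the two monomers of nylon-6,6.
hexamethylenediamine = Counter({"C": 6, "H": 16, "N": 2})  # H2N-(CH2)6-NH2
adipic_acid = Counter({"C": 6, "H": 10, "O": 4})           # HOOC-(CH2)4-COOH

def repeat_unit(diamine: Counter, diacid: Counter) -> Counter:
    """One diamine plus one diacid, minus the two waters released
    as the two amide bonds form."""
    unit = diamine + diacid
    unit.subtract({"H": 4, "O": 2})  # 2 x H2O
    return unit

unit = repeat_unit(hexamethylenediamine, adipic_acid)
print(dict(unit))  # {'C': 12, 'H': 22, 'N': 2, 'O': 2}

# Approximate molar mass of the repeating unit (g/mol).
atomic_mass = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
print(sum(atomic_mass[e] * n for e, n in unit.items()))  # ~226.3
```

The roughly 226 g/mol result matches the conventional repeat-unit mass quoted for nylon-6,6.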
PA or Nylon 6,10 (or 610): [NH−(CH2)6−NH−CO−(CH2)8−CO]n, made from hexamethylenediamine and sebacic acid. For copolymers, the comonomers or pairs of comonomers are separated by slashes: PA 6/66: [NH−(CH2)6−NH−CO−(CH2)4−CO]n−[NH−(CH2)5−CO]m, made from caprolactam, hexamethylenediamine and adipic acid; PA 66/610: [NH−(CH2)6−NH−CO−(CH2)4−CO]n−[NH−(CH2)6−NH−CO−(CH2)8−CO]m, made from hexamethylenediamine, adipic acid and sebacic acid. The term polyphthalamide (abbreviated PPA) is used when 60% or more moles of the carboxylic acid portion of the repeating unit in the polymer chain is composed of a combination of terephthalic acid (TPA) and isophthalic acid (IPA). Types Nylon 66 and related heteropolymers Nylon 66 and related polyamides are condensation polymers formed from equal parts of diamine and dicarboxylic acid. In the first case, the "repeating unit" has the ABAB structure, as also seen in many polyesters and polyurethanes. Since each monomer in this copolymer has the same reactive group on both ends, the direction of the amide bond reverses between each monomer, unlike natural polyamide proteins, which have overall directionality: C terminal → N terminal. In the second case (so-called AA), the repeating unit corresponds to the single monomer. Wallace Carothers at DuPont patented nylon 66. In the case of nylons that involve reaction of a diamine and a dicarboxylic acid, it is difficult to get the proportions exactly correct, and deviations can lead to chain termination at molecular weights less than a desirable 10,000 daltons (the effect of such stoichiometric imbalance is quantified in the sketch below). To overcome this problem, a crystalline, solid "nylon salt" can be formed at room temperature, using an exact 1:1 ratio of the acid and the base to neutralize each other. The salt is crystallized to purify it and obtain the desired precise stoichiometry. Heated to 285 °C (545 °F), the salt reacts to form nylon polymer with the production of water. Nylon 510, made from pentamethylenediamine and sebacic acid, was included in the Carothers patent for nylon 66. Nylon 610 is produced similarly, using hexamethylenediamine. These materials are more expensive because of the relatively high cost of sebacic acid. Owing to the high hydrocarbon content, nylon 610 is more hydrophobic and finds applications suited to this property, such as bristles. Examples of these polymers that are or were commercially available: PA46 (DSM Stanyl); PA410 (DSM EcoPaXX); PA4T (DSM ForTii); PA66 (DuPont Zytel). Nylon 6 and related homopolymers These polymers are made from a lactam or an amino acid. The synthetic route using lactams (cyclic amides) was developed by Paul Schlack at IG Farben, leading to nylon 6, or polycaprolactam, formed by a ring-opening polymerization. The amide bond within the caprolactam is broken, with the exposed active groups on each side being incorporated into two new bonds as the monomer becomes part of the polymer backbone. The 220 °C (428 °F) melting point of nylon 6 is lower than the 265 °C (509 °F) melting point of nylon 66. Homopolymer nylons are derived from one monomer. Examples of these polymers that are or were commercially available: PA6 (Lanxess Durethan B); PA11 (Arkema Rilsan); PA12 (Evonik Vestamid L). Nylon 1,6 Nylons can also be synthesized from dinitriles using acid catalysis. For example, this method is applicable for the preparation of nylon 1,6 from adiponitrile, formaldehyde, and water. Nylons can likewise be synthesized from diols and dinitriles using this method.
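The stoichiometric sensitivity noted above for nylon 66 follows from the Carothers equation for step-growth polymerization: at extent of reaction p with a mole-ratio imbalance r between the two monomers, the number-average degree of polymerization is Xn = (1 + r)/(1 + r − 2rp). A minimal Python sketch follows; the 113 g/mol average residue mass (roughly half the nylon-6,6 repeat unit) is an illustrative assumption:

```python
def carothers_xn(p: float, r: float = 1.0) -> float:
    """Number-average degree of polymerization for step-growth
    polymerization at conversion p with stoichiometric imbalance r
    (r = 1 means perfectly matched diamine and diacid)."""
    return (1 + r) / (1 + r - 2 * r * p)

RESIDUE_MASS = 113.0  # g/mol, ~half the nylon-6,6 repeat unit (illustrative)

# Even at essentially complete conversion, a small excess of one monomer
# caps the attainable chain length at X_n -> (1 + r) / (1 - r).
for r in (0.999, 0.99, 0.98):
    xn = carothers_xn(p=1.0, r=r)
    print(f"r = {r:.3f}: X_n ~ {xn:6.0f}, M_n ~ {xn * RESIDUE_MASS:8.0f} g/mol")
```

A mere 2% imbalance (r = 0.98) limits the molecular weight to roughly 11,000 g/mol even at full conversion, which is why the crystallized nylon salt's exact 1:1 stoichiometry matters.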
Copolymers Mixtures of the monomers or sets of monomers used to make nylons are easy to prepare, yielding copolymers. This lowers crystallinity and can therefore lower the melting point. Some copolymers that have been or are commercially available are listed below: PA6/66 (DuPont Zytel); PA6/6T (BASF Ultramid T, a 6/6T copolymer); PA6I/6T (DuPont Selar PA); PA66/6T (DuPont Zytel HTN); PA12/MACMI (EMS Grilamid TR). Blends Most nylon polymers are miscible with each other, allowing a range of blends to be made. The two polymers can react with one another by transamidation to form random copolymers. Crystallinity According to their crystallinity, polyamides can be semi-crystalline, with either high crystallinity (PA46 and PA66) or low crystallinity (PAMXD6, made from m-xylylenediamine and adipic acid), or amorphous (PA6I, made from hexamethylenediamine and isophthalic acid). According to this classification, PA66, for example, is an aliphatic semi-crystalline homopolyamide. Environmental impact All nylons are susceptible to hydrolysis, especially by strong acids, a reaction essentially the reverse of their synthesis: water cleaves the amide linkage (−CO−NH− + H2O → −COOH + H2N−). The molecular weight of nylon products so attacked drops, and cracks form quickly at the affected zones. Lower members of the nylons (such as nylon 6) are affected more than higher members such as nylon 12. This means that nylon parts cannot be used in contact with sulfuric acid, for example the electrolyte used in lead–acid batteries. When being molded, nylon must be dried to prevent hydrolysis in the molding machine barrel, since water at high temperatures can also degrade the polymer. The average greenhouse gas footprint of nylon in manufacturing carpets is estimated at 5.43 kg CO2 equivalent per kg when produced in Europe. This gives it almost the same carbon footprint as wool, but with greater durability and therefore a lower overall carbon footprint. Data published by PlasticsEurope indicates for nylon 66 a greenhouse gas footprint of 6.4 kg CO2 equivalent per kg and an energy consumption of 138 MJ/kg. When considering the environmental impact of nylon, it is important to consider the use phase. Various nylons break down in fire and form hazardous smoke and toxic fumes or ash, typically containing hydrogen cyanide. Incinerating nylons to recover the high energy used to create them is usually expensive, so most nylons end up in landfills, decaying slowly. Discarded nylon fabric takes 30–40 years to decompose. Nylon used in discarded fishing gear such as fishing nets is a contributor to debris in the ocean. Nylon is a robust polymer and lends itself well to recycling. Much nylon resin is recycled directly in a closed loop at the injection molding machine, by grinding sprues and runners and mixing them with the virgin granules being consumed by the molding machine. Because of the expense and difficulties of the nylon recycling process, however, few companies utilize it; most favor cheaper, newly made plastics for their products instead. US clothing company Patagonia has products containing recycled nylon and in the mid-2010s invested in Bureo, a company that recycles nylon from used fishing nets to use in sunglasses and skateboards. The Italian company Aquafil has also demonstrated recycling fishing nets lost in the ocean into apparel. Vanden Recycling recycles nylon and other polyamides (PA) and has operations in the UK, Australia, Hong Kong, the UAE, Turkey and Finland. Nylon is the most popular fiber type in the residential carpet industry today.
The US EPA estimates that 9.2% of carpet fiber, backing, and padding was recycled in 2018, 17.8% was incinerated in waste-to-energy facilities, and 73% was discarded in landfills. Some of the world's largest carpet and rug companies are promoting "cradle to cradle", the re-use of non-virgin materials including ones not historically recycled, as the industry's pathway forward. Properties Above their melting temperatures, Tm, thermoplastics like nylon are amorphous, viscous fluids in which the chains approximate random coils. Below Tm, amorphous regions alternate with regions which are lamellar crystals. The amorphous regions contribute elasticity, and the crystalline regions contribute strength and rigidity. The planar amide (-CO-NH-) groups are very polar, so nylon forms multiple hydrogen bonds among adjacent strands. Because the nylon backbone is so regular and symmetrical, especially if all the amide bonds are in the trans configuration, nylons often have high crystallinity and make excellent fibers. The amount of crystallinity depends on the details of formation, as well as on the kind of nylon. Nylon 66 can have multiple parallel strands aligned with their neighboring peptide bonds at coordinated separations of exactly six and four carbons for considerable lengths, so the carbonyl oxygens and amide hydrogens can line up to form interchain hydrogen bonds repeatedly, without interruption. Nylon 510 can have coordinated runs of five and eight carbons. Thus parallel (but not antiparallel) strands can participate in extended, unbroken, multi-chain β-pleated sheets, a strong and tough supermolecular structure similar to that found in natural silk fibroin and the β-keratins in feathers. (Proteins have only an amino acid α-carbon separating sequential -CO-NH- groups.) Nylon 6 will form uninterrupted H-bonded sheets with mixed directionalities, but the β-sheet wrinkling is somewhat different. The three-dimensional disposition of each alkane hydrocarbon chain depends on rotations about the 109.47° tetrahedral bonds of singly bonded carbon atoms. When extruded into fibers through pores in an industrial spinneret, the individual polymer chains tend to align because of viscous flow. If subjected to cold drawing afterwards, the fibers align further, increasing their crystallinity, and the material acquires additional tensile strength. In practice, nylon fibers are most often drawn using heated rolls at high speeds. Block nylon tends to be less crystalline, except near the surfaces, due to shearing stresses during formation. Nylon is clear and colorless, or milky, but is easily dyed. Multistranded nylon cord and rope is slippery and tends to unravel. The ends can be melted and fused with a heat source such as a flame or electrode to prevent this. Nylons are hygroscopic and will absorb or desorb moisture as a function of the ambient humidity. Variations in moisture content have several effects on the polymer. First, the dimensions will change; more importantly, moisture acts as a plasticizer, lowering the glass transition temperature (Tg) and consequently the elastic modulus at temperatures below the Tg. When dry, polyamide is a good electrical insulator. However, polyamide is hygroscopic, and the absorption of water will change some of the material's properties, such as its electrical resistance. Nylon is less absorbent than wool or cotton.
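The plasticizing effect of absorbed water can be roughly estimated with the Fox equation, 1/Tg = w1/Tg,1 + w2/Tg,2, a common first approximation for polymer-diluent mixtures. The sketch below is illustrative only: it assumes the Fox equation applies (it neglects the specific hydrogen bonding in nylons) and uses assumed values of about 320 K for the dry Tg of nylon 6 and about 136 K for the extrapolated Tg of water.

```python
def fox_tg(w_water: float, tg_polymer: float = 320.0, tg_water: float = 136.0) -> float:
    """Glass transition temperature (K) of a polymer-water mixture,
    estimated with the Fox equation: 1/Tg = w1/Tg1 + w2/Tg2."""
    w_polymer = 1.0 - w_water
    return 1.0 / (w_polymer / tg_polymer + w_water / tg_water)

# Tg falls steadily as moisture is absorbed.
for w in (0.00, 0.03, 0.08):  # mass fraction of absorbed water
    print(f"{w:4.0%} water: Tg ~ {fox_tg(w) - 273.15:5.1f} C")
```

Even a few percent of absorbed water pushes the estimated Tg tens of degrees toward room temperature, consistent with the softening and modulus loss described above.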
The characteristic features of nylon 66 include: pleats and creases that can be heat-set at higher temperatures; a more compact molecular structure; better weathering properties and better sunlight resistance; a softer "hand"; a high melting point (256 °C, 492.8 °F); superior colorfastness; and excellent abrasion resistance. On the other hand, nylon 6 is easier to dye but more readily fades; it has a higher impact resistance, more rapid moisture absorption, greater elasticity, and greater elastic recovery. General characteristics of nylon include: variation of luster (nylon can be made very lustrous, semi-lustrous, or dull); durability (its high-tenacity fibers are used for seatbelts, tire cords, ballistic cloth, and other uses); high elongation; excellent abrasion resistance; high resilience (nylon fabrics are heat-set); suitability for easy-care garments; high resistance to insects, fungi, and animals, as well as molds, mildew, rot, and many chemicals; use in carpets and nylon stockings; melting instead of burning; use in many military applications; good specific strength; and transparency to infrared light (−12 dB). Nylon clothing tends to be less flammable than cotton and rayon, but nylon fibers may melt and stick to skin. Uses Nylon was first used commercially in a nylon-bristled toothbrush in 1938, followed more famously by women's stockings or "nylons", which were shown at the 1939 New York World's Fair and first sold commercially in 1940. Its use increased dramatically during World War II, when the need for fabrics surged. Fibers Bill Pittendreigh, DuPont, and other individuals and corporations worked diligently during the first few months of World War II to find a way to replace Asian silk and hemp with nylon in parachutes. It was also used to make tires, tents, ropes, ponchos, and other military supplies. It was even used in the production of a high-grade paper for U.S. currency. At the outset of the war, cotton accounted for more than 80% of all fibers used and manufactured, and wool fibers accounted for nearly all of the rest. By August 1945, manufactured fibers had taken a market share of 25%, at the expense of cotton. After the war, because of shortages of both silk and nylon, nylon parachute material was sometimes repurposed to make dresses. Nylon 6 and 66 fibers are used in carpet manufacture. Nylon is one kind of fiber used in tire cord. Herman E. Schroeder pioneered the application of nylon in tires. Molds and resins Nylon resins are widely used in the automobile industry, especially in the engine compartment. Molded nylon is used in hair combs and mechanical parts such as machine screws, gears, gaskets, and other low- to medium-stress components previously cast in metal. Engineering-grade nylon is processed by extrusion, casting, and injection molding. Type 6,6 Nylon 101 is the most common commercial grade of nylon, and Nylon 6 is the most common commercial grade of molded nylon. For use in tools such as spudgers, nylon is available in glass-filled variants, which increase structural and impact strength and rigidity, and molybdenum disulfide-filled variants, which increase lubricity. Nylon can be used as the matrix material in composite materials, with reinforcing fibers like glass or carbon fiber; such a composite has a higher density than pure nylon. Such thermoplastic composites (25% to 30% glass fiber) are frequently used in car components next to the engine, such as intake manifolds, where the good heat resistance of such materials makes them feasible competitors to metals. Nylon was used to make the stock of the Remington Nylon 66 rifle.
The frame of the modern Glock pistol is made of a nylon composite. Food packaging Nylon resins are used as a component of food packaging films where an oxygen barrier is needed. Some of the terpolymers based upon nylon are used every day in packaging. Nylon has been used for meat wrappings and sausage sheaths. The high temperature resistance of nylon makes it useful for oven bags. Filaments Nylon filaments are primarily used in brushes, especially toothbrushes, and in string trimmers. They are also used as monofilaments in fishing line. Nylon 610 and 612 are the polymers most used for filaments. Its various properties also make nylon very useful as a material in additive manufacturing, specifically as a filament in consumer- and professional-grade fused deposition modeling 3D printers. Other forms Nylon resins can be extruded into rods, tubes, and sheets. Nylon powders are used to powder-coat metals. Nylon 11 and nylon 12 are the most widely used. In the mid-1940s, classical guitarist Andrés Segovia mentioned the shortage of good guitar strings in the United States, particularly his favorite Pirastro catgut strings, to a number of foreign diplomats at a party, including General Lindeman of the British Embassy. A month later, the General presented Segovia with some nylon strings which he had obtained via some members of the DuPont family. Segovia found that although the strings produced a clear sound, they had a faint metallic timbre which he hoped could be eliminated. Nylon strings were first tried on stage by Olga Coelho in New York in January 1944. In 1946, Segovia and string maker Albert Augustine were introduced by their mutual friend Vladimir Bobri, editor of Guitar Review. On the basis of Segovia's interest and Augustine's past experiments, they decided to pursue the development of nylon strings. DuPont, skeptical of the idea, agreed to supply the nylon if Augustine would endeavor to develop and produce the actual strings. After three years of development, Augustine demonstrated a nylon first string whose quality impressed guitarists, including Segovia, in addition to DuPont. Wound strings, however, were more problematic. Eventually, after experimenting with various types of metal and with smoothing and polishing techniques, Augustine was also able to produce high-quality nylon wound strings.