id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
22059223 | https://en.wikipedia.org/wiki/Limousine | Limousine | A limousine, or limo for short, is a large, chauffeur-driven luxury vehicle with a partition between the driver's compartment and the passenger compartment that can be operated by hand or electronically at the push of a button. A luxury sedan with a very long wheelbase and driven by a professional driver is called a stretch limousine.
In some countries, such as the United States, Germany, Canada, and Australia, a limousine service may be any pre-booked hire car with a driver, usually, though not always, a luxury car. In particular, airport shuttle services are often called "limousine services", though they often use minivans or light commercial vehicles.
Etymology
The word limousine is derived from the name of the French region Limousin; however, how the area's name was transferred to the car is uncertain.
One possibility involves a particular type of carriage hood or roof that physically resembled the raised hood of the cloak worn by the shepherds there.
An alternate etymology speculates that some early chauffeurs wore a Limousin-style cloak in the open driver's compartment for protection from the weather. The name was then extended to this particular type of car with a permanent top projecting over the chauffeur. This former type of automobile had an enclosed passenger compartment seating three to five persons, with only a roof projecting forward over the open driver's area in the front.
History
Wealthy owners of expensive carriages and their passengers were accustomed to private compartments that left their coachman or driver outside in all weathers. When automobiles arrived, the same people required a similar arrangement for their chauffeurs. As such, the 1916 definition of limousine by the US Society of Automobile Engineers is "a closed car seating three to five inside, with driver's seat outside".
In Great Britain, the limousine de-ville was a version of the limousine town car where the driver's compartment was outside and had no weather protection. The limousine-landaulet variant (also sold in the United States) had a removable or folding roof section over the rear passenger seat.
In the United States, sub-categories of limousines in 1916 were the berline, defined as "a limousine having the driver's seat entirely enclosed", and the brougham, described as "a limousine with no roof over the driver's seat."
The president of the United States has ridden in a variety of brands of state cars, starting in 1899 when President William McKinley became the first to ride in a car, a steam Locomobile.
U.S. limousine business declined in the 21st century due to the effects of the Great Recession, the subsequent rise of ride-sharing apps, and an industry crisis precipitated by deadly stretch limousine crashes in 2015 and in Schoharie, New York, in 2018. Moreover, during this time, people who would have once utilized limousines began opting to travel more discreetly in vehicles such as black SUVs.
Characteristics
The limousine body style usually has a partition separating the driver from the rear passenger compartment. This partition usually includes an openable glass section so passengers may see the road. Communication with the driver is possible either by opening the partition window or using an intercom system.
Limousines are often long-wheelbase vehicles to provide extra legroom in the passenger compartment. There will usually be occasional seats (in the U.S. called jump seats) at the front of the compartment (either forward-facing, rear-facing, or able to face either direction).
Many nations have official state cars designed to transport government officials. The top leaders have dedicated and specially equipped limousines. The United States Presidential State Car is the official car of the President of the United States.
Stretch limousines
Stretch limousines are longer than regular limousines, usually to accommodate more passengers. Stretch limousines may have seating along the sides of the cabin.
A "stretch limousine" was created in Fort Smith, Arkansas, around 1928 by the Armbruster coach company. Their vehicles were primarily used to transport famous "big band" leaders, such as Glenn Miller and Benny Goodman, and their members and equipment. These early stretch limousines were often called "big band buses". Armbruster called their lengthened cars "extended-wheelbase multi-door auto-coaches". Their 12-passenger coaches were used by hotels, taxis, airlines, corporations, and tour companies. Knock-down programs by automakers allowed coachbuilders to stretch vehicles, but Armbruster also custom-built limousines using unibody construction, such as the 1969 AMC Ambassadors.
Stretch limousines comprise one percent of U.S. limousine company offerings, down from about ten percent in 2013.
Novelty limousines
A variety of vehicles have been converted into novelty limousines. They are used for weddings, parties, and other social occasions. Another style of novelty limousine is one painted in bright colors, such as purple or pink.
Vehicles converted into novelty stretch limousines include the East German Trabant, Volkswagen Beetle, Fiat Panda, and Citroën 2CV. There are instances of Corvettes, Ferraris, and Mini Coopers being stretched to accommodate up to 10 passengers.
Gallery
| Technology | Motorized road transport | null |
26416344 | https://en.wikipedia.org/wiki/Algoman%20orogeny | Algoman orogeny | The Algoman orogeny, known as the Kenoran orogeny in Canada, was an episode of mountain-building (orogeny) during the Late Archean Eon that involved repeated episodes of continental collisions, compressions and subductions. The Superior province and the Minnesota River Valley terrane collided about 2,700 to 2,500 million years ago. The collision folded the Earth's crust and produced enough heat and pressure to metamorphose the rock. Blocks were added to the Superior province along a boundary that stretches from present-day eastern South Dakota into the Lake Huron area. The Algoman orogeny brought the Archean Eon to a close about 2,500 million years ago; it lasted less than 100 million years and marks a major change in the development of the Earth's crust.
The Canadian shield contains belts of metavolcanic and metasedimentary rocks formed by the action of metamorphism on volcanic and sedimentary rock. The areas between individual belts consist of granites or granitic gneisses that form fault zones. These two types of belts can be seen in the Wabigoon, Quetico and Wawa subprovinces; the Wabigoon and Wawa are of volcanic origin and the Quetico is of sedimentary origin. These three subprovinces lie linearly in southwestern- to northeastern-oriented belts about wide on the southern portion of the Superior Province.
The Slave province and portions of the Nain province were also affected. Between about 2,000 and these combined with the Sask and Wyoming cratons to form the first supercontinent, the Kenorland supercontinent.
Overview
Through most of the Archean Eon, the Earth had a heat production at least twice that of the present. The timing of initiation of plate tectonics is still debated, but if modern-day tectonics were operative in the Archean, higher heat fluxes might have caused tectonic processes to be more active. As a result, plates and continents may have been smaller. No broad blocks as old as 3 Ga are found in Precambrian shields. Toward the end of the Archean, however, some of these blocks or terranes came together to form larger blocks welded together by greenstone belts.
Two such terranes that now form part of the Canadian shield collided about . These were the Superior province and the large Minnesota River Valley terrane, the former composed mainly of granite and the latter of gneiss. This led to the mountain-building episode known as the Algoman orogeny in the U. S. (named for Algoma, Kewaunee County, Wisconsin), and the Kenoran orogeny in Canada. Its duration is estimated at 50 to 100 million years. The current boundary between these terranes is known as the Great Lakes tectonic zone (GLTZ). This zone is wide and extends in a line roughly 1,200 kilometers long from the middle of South Dakota, east through the middle of the Upper Peninsula of Michigan, into the Sudbury, Ontario region. The region remains slightly active today. Rifting in the GLTZ began about at the end of the Algoman orogeny.
The orogeny affected adjacent regions of northern Minnesota and Ontario in the Superior province as well as the Slave and the eastern part of the Nain province, a far wider region of influence than in subsequent orogenies. It is the earliest datable orogeny in North America and brought the Archean Eon to a close. The end of the Archean Eon marks a major change in the development of the Earth's crust: the crust was essentially formed and achieved thicknesses of about under the continents.
Tectonics
The collision between terranes folded the Earth's crust, and produced enough heat and pressure to metamorphose then-existing rock. Repeated continental collisions, compression along a north–south axis, and subduction resulted in the uprising of the Algoman Mountains. This was followed by intrusions of granite plutons and batholithic domes within the gneisses about ; two examples are the Sacred Heart granite of southwestern Minnesota and the Watersmeet Domes metagabbros (metamorphosed gabbros) that straddle the border of Wisconsin and Michigan's Upper Peninsula. After the intrusions solidified, new stresses on the greenstone belt caused movement horizontally along several faults and moved huge blocks of the crust vertically relative to adjacent blocks. This combination of folding, intrusion and faulting built mountain ranges throughout northern Minnesota, northern Wisconsin, Michigan's Upper Peninsula and southernmost Ontario. Igneous and high-grade metamorphic rocks are associated with the orogeny.
By extrapolating the now-eroded and tilted beds upward, geologists have determined that these mountains were several kilometers high. Similar projections of the tilted beds downward, coupled with geophysical measurements on the greenstone belts in Canada, suggest the metavolcanic and metasedimentary rocks of the belts project downward at least a few kilometers.
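As a rough illustration of this kind of reconstruction (the numbers below are assumed purely for the example and do not come from the source), the stratigraphic thickness of a tilted, erosionally truncated sequence can be estimated from its outcrop width and dip, and projecting the missing upper beds back above the present surface then gives a minimum figure for the rock that once stood overhead:

\[
T \approx L \sin\theta, \qquad L = 10\ \mathrm{km},\ \theta = 30^{\circ} \;\Rightarrow\; T \approx 5\ \mathrm{km},
\]

implying several kilometres of now-eroded section, and hence mountains of comparable scale, even though no trace of the original topography survives.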
Greenstone
The action of metamorphism on the border between granite and gneiss bodies produces a succession of metamorphosed volcanic and sedimentary rocks called greenstone belts. Most Archean volcanic rocks are concentrated within greenstone belts; the green color comes from minerals, such as chlorite, epidote and actinolite that formed during metamorphism. After metamorphism occurred, these rocks were folded and faulted into a system of mountains by the Algoman orogeny.
The volcanic beds are thick. About the greenstone belt was subjected to new stresses that caused movement along several faults. Faulting on both small and large scales is typical of greenstone belt deformation. These faults show both vertical and horizontal movement relative to adjacent blocks. Large-scale faults typically occur along the margins of the greenstone belts where they are in contact with enclosed granitic rocks. Vertical movement may be thousands of meters and horizontal movements of many kilometers occur along some fault zones.
Some time before , masses of magma intruded under and within the igneous and sedimentary rocks, heating and pressing the rocks to metamorphose them into hard greenish greenstones. They began with fissure eruptions of basalt, continued with intermediate and felsic rocks erupted from volcanic centers and ended with deposition of sediments from the erosion of the volcanic pile. The rising magma was extruded under a shallow ancient sea where it cooled to form pillowed greenstones. Some of Minnesota's pillows probably cooled at depths as great as and contain no gas cavities or vesicles.
Most greenstone belts, with all of their components, have been folded into troughlike synclines; the original basaltic rock, which was on the bottom, occurs on the outer margins of the trough. The overlying, younger rock units – rhyolites and greywackes – occur closer to the center of the syncline. The rocks are so intensely folded that most have been tilted nearly 90°, with the tops of layers on one side of the synclinal belt facing those on the other side; the rock sequences are in effect lying on their sides. The folding can be so complex that a single layer may be exposed at the surface many times by subsequent erosion.
Volcanic activity
As the greenstone belts were forming, volcanoes ejected tephra into the air which settled as sediments to become compacted into the greywackes and mudstones of the Knife Lake and Lake Vermilion formations. Greywackes are poorly sorted mixtures of clay, mica and quartz that may be derived from the decomposition of pyroclastic debris; the presence of this debris suggests that some explosive volcanic activity had occurred in the area earlier. The volcanism took place on the surface and the other deformations took place at various depths. Numerous earthquakes accompanied the volcanism and faulting.
Superior province
The Superior province forms the core of both the North American continent and the Canadian shield, and has a thickness of at least . Its granites date from 2,700 to 2,500 million years ago. It was formed by the welding together of many small terranes, the ages of which decrease away from the nucleus. This progression is illustrated by the ages of the Wabigoon, Quetico and Wawa subprovinces, discussed in their individual sections. Later terranes docked on the periphery of continental masses, with geosynclines developing between the fused nuclei and oceanic crust. In general the Superior province consists of east–west-trending belts of predominantly volcanic rocks alternating with belts of sedimentary and gneissic rocks.
Due to down warping along elongate zones, each belt is essentially a large downfold or downfaulted block. The areas between individual belts are fault zones consisting of granite or granitic gneiss. Its western part contains a regional pattern of east–west-trending wide granitic greenstone and metasedimentary belts (subprovinces). Western Superior province's mantle has remained intact since the 2,700-million-year-ago accretion of the subprovinces.
Both folding and faulting can be seen in the Wabigoon, Quetico and Wawa subprovinces. These three subprovinces lie linearly in southwestern- to northeastern-oriented belts. The northernmost and widest province is the Wabigoon. It begins in north-central Minnesota and continues northeasterly into central Ontario; it is partially interrupted by the Southern province. Immediately to the south, the Quetico subprovince extends as far west as north-central Minnesota, and extends further to the northeast. It is completely interrupted by a narrow band of the 1,100- to 1,550-million-year-old Southern province to the northeast of Thunder Bay. The Wawa subprovince is the most southerly of the three; it begins in central Minnesota, continues northeast to Thunder Bay, Ontario, Canada (where its southern border just skims north of Thunder Bay), and then extends east beyond Lake Superior. The northern boundary continues in a roughly northeasterly heading, while the southern border dips south to follow the northeast shore of Lake Superior.
Fault zones
The three subprovinces are separated by steeply dipping shear zones caused by continued compression that occurred during the Algoman orogeny. These boundaries are major fault zones.
The boundary between the Wabigoon and Quetico subprovinces also seems to have been controlled by colliding plates and subsequent transpressions. This Rainy Lake – Seine River fault zone is a major northeast–southwest trending strike-slip fault zone; it trends N80°E to cut through the northwest part of Voyageurs National Park in Minnesota and extends westward to near International Falls, Minnesota and Fort Frances, Ontario. The fault has transported rocks in the greenstone belt a considerable distance from their origin. The greenstone belt is wide at the Seven Sisters Islands; to the west the greenstone interfingers with pods of anorthositic gabbro. Radiometric dating from the Rainy Lake area in Ontario shows an age of about 2,700 million years, which favors a moving tectonic plate model for the formation of the boundary.
The largest fault is the Vermilion fault separating the Quetico and Wawa subprovinces. It has a N40°E trend and was caused by the introduction of masses of magma. The Vermilion fault can be traced westward to North Dakota. It has had a horizontal movement with the northern block moving eastward and upward relative to the southern block. The junction between the Quetico and Wawa subprovinces has a zone of biotite-rich migmatite, a rock that has characteristics of both igneous and metamorphic processes; this indicates a zone of partial melting which is possible only under high temperature and pressure conditions. It is visible as a wide belt. Most of the flattened large crystals in the fault indicate a simple compression rather than a wrenching, shearing or rotational event as the two subprovinces docked. This provides evidence that the Quetico and Wawa subprovinces were joined by the collision of two continental plates, about . Structures in the migmatite include folds and foliations; the foliations cut across both limbs of earlier-phase folds. These cross-cutting foliations indicate that the migmatite has undergone at least two periods of ductile deformation.
Wabigoon subprovince
The Wabigoon subprovince is a formerly active volcanic island chain, made up of metavolcanic-metasedimentary intrusions. These metamorphosed rocks are volcanically derived greenstone belts, and are surrounded and cut by granitic plutons and batholiths. The subprovince's greenstone belts consist of felsic volcanics, felsic batholiths and felsic plutons aged from 3,000 to 2,670 million years old.
Quetico subprovince
The Quetico gneiss belt extends some across Ontario and parts of Minnesota. The dominant rocks within the belt are schists and gneisses produced by intense metamorphism of greywackes and minor amounts of other sedimentary rocks. The sediments, alkalic plutons and felsic plutons are aged from 2,690 to 2,680 million years. The metamorphism is relatively low-grade on the margins and high-grade in the center. The low-grade components of the greywackes were derived primarily from volcanic rocks; the high-grade rocks are coarser-grained and contain minerals that reflect higher temperatures. The granitic intrusions within the high-grade metasediments were produced by subduction of the ocean crust and partial melting of metasedimentary rocks. Immediately south of Voyageurs National Park and extending to the Vermilion fault is a broad transition zone that contains migmatite.
The Quetico gneiss belt represents an accretionary wedge that formed in a trench during the collision of several island arcs (greenstone belts). Boundaries between the gneiss belt and the flanking greenstone belts to the north and south are major fault zones, the Vermilion and Rainy Lake – Seine River fault zones.
Wawa subprovince
The Wawa subprovince is a formerly active volcanic island chain, consisting of metamorphosed greenstone belts which are surrounded by and cut by granitic plutons and batholiths. These greenstone belts consist of felsic volcanics, felsic batholiths, felsic plutons and sediments aged from 2,700 to 2,670 million years old.
The predominant rock type is a white, coarse-grained, foliated hornblende tonalite. Minerals in the tonalite are quartz, plagioclase, alkali feldspar and hornblende.
Slave province
In extensive regions of the Slave province of northern Canada, the magma that later became batholiths heated the surrounding rock to create metamorphic regions called aureoles about 2,575 million years ago. These regions are typically wide. The creation of aureoles was a continuous process, but three recognizable metamorphic phases can be correlated with established deformational phases. The cycle began with a deformation phase unaccompanied by metamorphism. This evolved into the second phase accompanied by broad regional metamorphism as thermal doming began. With continued updoming of the isotherms, the third phase produced minor folding but caused major metamorphic recrystallization, resulting in the emplacement of granite at the core of the thermal dome. This phase occurred at lower pressure because of erosional unloading, but the temperatures were more extreme, ranging up to about . With deformation complete, the thermal dome decayed; minor mineralogical changes occurred during this decay phase. The region has since been effectively stable.
Geochronology of several Archean rock units establishes a sequence of events, approximately 75 million years in duration, leading to the formation of a new crustal segment. The oldest rocks, at 2,650 million years old, are basic metavolcanics with largely calc-alkaline characteristics. Radiometric dating indicates ages of 2,640 to 2,620 million years for the syn-kinematic quartz diorite batholiths and 2,590 to 2,100 million years for the major late-kinematic bodies. Pegmatitic adamellites, at 2,575±25 million years, are the youngest plutonic units.
Metagreywackes and metapelites from two areas traversing one of these aureoles near Yellowknife have been studied. Most of the Slave province rocks are granitic, with metamorphosed Yellowknife metasedimentary and volcanic rocks. Isotopic ages of these rocks date to around the time of the Kenoran orogeny. Rocks comprising the Slave province represent a high grade of metamorphism, intrusion and basement remobilization typical of Archean terranes. Migmatites, batholithic intrusives and granulitic metamorphic rocks show foliation and compositional banding; the rocks are uniformly hard and so thoroughly deformed that little foliation exists. Most Yellowknife Supergroup metasediments are tightly folded (isoclinal) or occur in plunging anticlines.
Nain province
The Archean rocks forming the Nain province of northeastern Canada and Greenland are separated from the Superior terrane by a narrow band of remobilized rocks. Greenland separated from North America relatively recently, and its Precambrian terranes align with Canada's on the opposite side of Baffin Bay. The southern tip of Greenland is part of the Nain province, which means it was connected to North America at the end of the Kenoran orogeny.
| Physical sciences | Geologic features | Earth science |
33632441 | https://en.wikipedia.org/wiki/Psychoactive%20drug | Psychoactive drug | A psychoactive drug, mind-altering drug, or consciousness-altering drug is a chemical substance that changes brain function and results in alterations in perception, mood, consciousness, cognition, or behavior. The term psychotropic drug is often used interchangeably, while some sources present narrower definitions. These substances may be used medically; recreationally; to purposefully improve performance or alter consciousness; as entheogens for ritual, spiritual, or shamanic purposes; or for research, including psychedelic therapy. Physicians and other healthcare practitioners prescribe psychoactive drugs from several categories for therapeutic purposes. These include anesthetics, analgesics, anticonvulsant and antiparkinsonian drugs as well as medications used to treat neuropsychiatric disorders, such as antidepressants, anxiolytics, antipsychotics, and stimulants. Some psychoactive substances may be used in detoxification and rehabilitation programs for persons dependent on or addicted to other psychoactive drugs.
Psychoactive substances often bring about subjective changes in consciousness and mood (although these may be objectively observed) that the user may find rewarding and pleasant (e.g., euphoria or a sense of relaxation) or advantageous in an objectively observable or measurable way (e.g. increased alertness), thus the effects are reinforcing to varying degrees. Substances which are rewarding and thus positively reinforcing have the potential to induce a state of addiction – compulsive drug use despite negative consequences. In addition, sustained use of some substances may produce physical or psychological dependence or both, associated with physical or psychological withdrawal symptoms respectively. Drug rehabilitation attempts to reduce addiction through a combination of psychotherapy, support groups, and other psychoactive substances. Conversely, certain psychoactive drugs may be so unpleasant that the person will never use the substance again. This is especially true of certain deliriants (e.g. Jimson weed), powerful dissociatives (e.g. Salvia divinorum), and classic psychedelics (e.g. LSD, psilocybin), in the form of a "bad trip".
Psychoactive drug misuse, dependence, and addiction have resulted in legal measures and moral debate. Governmental controls on manufacture, supply, and prescription attempt to reduce problematic medical drug use; worldwide efforts to combat trafficking in psychoactive drugs are commonly termed the "war on drugs". Ethical concerns have also been raised about the overuse of these drugs clinically and about their marketing by manufacturers. Popular campaigns to decriminalize or legalize the recreational use of certain drugs (e.g., cannabis) are also ongoing.
History
Psychoactive drug use can be traced to prehistory. Archaeological evidence of the use of psychoactive substances, mostly plants, dates back at least 10,000 years; historical evidence indicates cultural use 5,000 years ago. There is evidence of the chewing of coca leaves, for example, in Peruvian society 8,000 years ago.
Psychoactive substances have been used medicinally and to alter consciousness. Consciousness altering may be a primary drive, akin to the need to satiate thirst, hunger, or sexual desire. This may be manifest in the long history of drug use, and even in children's desire for spinning, swinging, or sliding, suggesting that the drive to alter one's state of mind is universal.
In The Hasheesh Eater (1857), American author Fitz Hugh Ludlow was one of the first to describe in modern terms the desire to change one's consciousness through drug use.
During the 20th century, the majority of countries initially responded to the use of recreational drugs by prohibiting production, distribution, or use through criminalization. A notable example occurred with Prohibition in the United States, where early in the century alcohol was made illegal for 13 years. In recent decades, an emerging perspective among governments and law enforcement holds that illicit drug use cannot be stopped through prohibition. One organization holding that view, Law Enforcement Against Prohibition (LEAP), concluded that "[in] fighting a war on drugs the government has increased the problems of society and made them far worse. A system of regulation rather than prohibition is a less harmful, more ethical and a more effective public policy."
In some countries, there has been a move toward harm reduction, where the use of illicit drugs is neither condoned nor promoted, but services and support are provided to ensure users have adequate factual information readily available, and that the negative effects of their use be minimized. Such is the case with Portugal's drug policy of decriminalization, with a primary goal of reducing the adverse health effects of drug use.
Terminology
Psychoactive and psychotropic are often used interchangeably in general and academic sources, to describe substances that act on the brain to alter cognition and perception; some sources make a distinction between the terms. One narrower definition of psychotropic refers to drugs used to treat mental disorders, such as anxiolytic sedatives, antidepressants, antimanic agents, and neuroleptics. Another usage of psychotropic refers to substances determined to pose "high abuse liability", including stimulants, hallucinogens, opioids, and sedatives/hypnotics including alcohol. In international drug control, psychotropic substances refers to the substances specified in the Convention on Psychotropic Substances, which does not include narcotics.
The term "drug" has become a skunked term. "Drugs" can have a negative connotation, often associated with illegal substances like cocaine or heroin, despite the fact that the terms "drug" and "medicine" are sometimes used interchangeably.
Novel psychoactive substances (NPS), also known as "designer drugs" are a category of psychoactive drugs (substances) that are designed to mimic the effects of often illegal drugs, usually in efforts to circumvent existing drug laws.
Types
Psychoactive drugs are divided according to their pharmacological effects. Common subtypes include:
Anxiolytics are medicinally used to reduce the symptoms of anxiety, and sometimes insomnia.
Example: benzodiazepines such as Xanax and Valium; barbiturates
Empathogen–entactogens alter emotional state, often resulting in an increased sense of empathy, closeness, and emotional communication.
Example: MDMA (ecstasy), MDA, 6-APB, AMT
Stimulants increase activity, or arousal, of the central nervous system. They can enhance alertness, attention, cognition, mood and physical performance. Some stimulants are used medicinally to treat individuals with ADHD and narcolepsy.
Examples: amphetamines, caffeine, cocaine, nicotine
Depressants reduce, or depress, activity and stimulation in the central nervous system. This category encompasses a spectrum of substances with sedative, soporific, and anesthetic properties, and include sedatives, hypnotics, and opioids.
Examples: ethanol (alcohol), opioids such as morphine, fentanyl, and codeine, cannabis, barbiturates, and benzodiazepines
Hallucinogens, including psychedelics, dissociatives and deliriants, encompass substances that produce distinct alterations in perception, sensation of space and time, and emotional state.
Examples, psychedelics: Psilocybin, LSD, DMT (N,N-Dimethyltryptamine), mescaline
Examples, dissociatives: Dextromethorphan, Salvia divinorum
Examples, deliriants: Datura, scopolamine
Uses
The ways in which psychoactive substances are used vary widely between cultures. Some substances may have controlled or illegal uses, others may have shamanic purposes, and others are used medicinally. Examples would be social drinking, nootropic supplements, and sleep aids. Caffeine is the world's most widely consumed psychoactive substance, and is legal and unregulated in nearly all jurisdictions; in North America, 90% of adults consume caffeine daily.
Mental disorders
Psychiatric medications are psychoactive drugs prescribed for the management of mental and emotional disorders, or to aid in overcoming challenging behavior. There are six major classes of psychiatric medications:
Antidepressants treat disorders such as clinical depression, dysthymia, anxiety, eating disorders, and borderline personality disorder.
Stimulants, used to treat disorders such as attention deficit hyperactivity disorder and narcolepsy, and for weight reduction.
Antipsychotics, used to treat psychotic symptoms, such as those associated with schizophrenia or severe mania, or as adjuncts to relieve clinical depression.
Mood stabilizers, used to treat bipolar disorder and schizoaffective disorder.
Anxiolytics, used to treat anxiety disorders.
Depressants, used as hypnotics, sedatives, and anesthetics, depending upon dosage.
In addition, several psychoactive substances are currently employed to treat various addictions. These include acamprosate or naltrexone in the treatment of alcoholism, or methadone or buprenorphine maintenance therapy in the case of opioid addiction.
Exposure to psychoactive drugs can cause changes to the brain that counteract or augment some of their effects; these changes may be beneficial or harmful. However, there is a significant amount of evidence that the relapse rate of mental disorders negatively corresponds with the length of properly followed treatment regimens (that is, relapse rate substantially declines over time), and to a much greater degree than placebo.
Military
Drugs used by militaries
Militaries worldwide have used or are using various psychoactive drugs to treat pain and to improve performance of soldiers by suppressing hunger, increasing the ability to sustain effort without food, increasing and lengthening wakefulness and concentration, suppressing fear, reducing empathy, and improving reflexes and memory-recall among other things.
Both military and civilian American intelligence officials are known to have used psychoactive drugs while interrogating captives apprehended in the "war on terror". In July 2012, Jason Leopold and Jeffrey Kaye, psychologists and human rights workers, had a Freedom of Information Act request fulfilled that confirmed that the use of psychoactive drugs during interrogation was a long-standing practice. Captives and former captives had reported medical staff collaborating with interrogators to drug captives with powerful psychoactive drugs prior to interrogation since the release of the very first captives.
In May 2003 recently released Pakistani captive Sha Mohammed Alikhel described the routine use of psychoactive drugs. He said that Jihan Wali, a captive kept in a nearby cell, was rendered catatonic through the use of these drugs.
Alcohol has a long association with military use, and has been called "liquid courage" for its role in preparing troops for battle, anaesthetizing injured soldiers, and celebrating military victories. It has also served as a coping mechanism for combat stress reactions and a means of decompression from combat to everyday life. However, this reliance on alcohol can have negative consequences for physical and mental health.
The first documented case of a soldier overdosing on methamphetamine during combat was the Finnish corporal Aimo Koivunen, a soldier who fought in the Winter War and the Continuation War.
Psychochemical warfare
Psychoactive drugs have been used in military applications as non-lethal weapons.
Pain management
Psychoactive drugs are often prescribed to manage pain. The subjective experience of pain is primarily regulated by endogenous opioid peptides. Thus, pain can often be managed using psychoactives that operate on this neurotransmitter system, also known as opioid receptor agonists. This class of drugs can be highly addictive, and includes opiate narcotics, like morphine and codeine. NSAIDs, such as aspirin and ibuprofen, are also analgesics. These agents also reduce eicosanoid-mediated inflammation by inhibiting the enzyme cyclooxygenase.
Anesthesia
General anesthetics are a class of psychoactive drug used on people to block physical pain and other sensations. Most anesthetics induce unconsciousness, allowing the person to undergo medical procedures like surgery, without the feelings of physical pain or emotional trauma. To induce unconsciousness, anesthetics affect the GABA and NMDA systems. For example, Propofol is a GABA agonist, and ketamine is an NMDA receptor antagonist.
Performance-enhancement
Performance-enhancing substances, also known as performance-enhancing drugs (PEDs), are substances that are used to improve any form of activity performance in humans. A well-known example of cheating in sports involves doping in sport, where banned physical performance-enhancing drugs are used by athletes and bodybuilders. Athletic performance-enhancing substances are sometimes referred to as ergogenic aids. Cognitive performance-enhancing drugs, commonly called nootropics, are sometimes used by students to improve academic performance. Performance-enhancing substances are also used by military personnel to enhance combat performance.
Recreation
Many psychoactive substances are used for their mood and perception altering effects, including those with accepted uses in medicine and psychiatry. Examples of psychoactive substances include caffeine, alcohol, cocaine, LSD, nicotine, cannabis, and dextromethorphan. Classes of drugs frequently used recreationally include:
Stimulants, which activate the central nervous system. These are used recreationally for their euphoric effects.
Hallucinogens (psychedelics, dissociatives and deliriants), which induce perceptual and cognitive alterations.
Hypnotics, which depress the central nervous system.
Opioid analgesics, which also depress the central nervous system. These are used recreationally because of their euphoric effects.
Inhalants, in the form of gases, aerosols, or solvents, which are inhaled as a vapor because of their stupefying effects. Many inhalants also fall into the above categories (such as nitrous oxide, which is also an analgesic).
In some modern and ancient cultures, drug usage is seen as a status symbol. Recreational drugs are seen as status symbols in settings such as at nightclubs and parties. For example, in ancient Egypt, gods were commonly pictured holding hallucinogenic plants.
Because there is controversy about regulation of recreational drugs, there is an ongoing debate about drug prohibition. Critics of prohibition believe that regulation of recreational drug use is a violation of personal autonomy and freedom. In the United States, critics have noted that prohibition or regulation of recreational and spiritual drug use might be unconstitutional and cause more harm than it prevents.
Some people who take psychoactive drugs experience drug or substance induced psychosis. A 2019 systematic review and meta-analysis by Murrie et al. found that the pooled proportion of transition from substance-induced psychosis to schizophrenia was 25% (95% CI 18%–35%), compared with 36% (95% CI 30%–43%) for brief, atypical and not otherwise specified psychoses. Type of substance was the primary predictor of transition from drug-induced psychosis to schizophrenia, with highest rates associated with cannabis (6 studies, 34%, CI 25%–46%), hallucinogens (3 studies, 26%, CI 14%–43%) and amphetamines (5 studies, 22%, CI 14%–34%). Lower rates were reported for opioid (12%), alcohol (10%) and sedative (9%) induced psychoses. Transition rates were slightly lower in older cohorts but were not affected by sex, country of the study, hospital or community location, urban or rural setting, diagnostic methods, or duration of follow-up.
Ritual and spiritual
Offerings
Alcohol and tobacco (nicotine) have been and are used as offerings in various religions and spiritual practices. Coca leaves have been used as offerings in rituals.
Alcohol
According to the Catholic Church, the sacramental wine used in the Eucharist must contain alcohol. Canon 924 of the present Code of Canon Law (1983) states:
§3 The wine must be natural, made from grapes of the vine, and not corrupt.
Psychoactive use
Entheogen
Certain psychoactives, particularly hallucinogens, have been used for religious purposes since prehistoric times. Native Americans have used peyote cacti containing mescaline for religious ceremonies for as long as 5700 years. The muscimol-containing Amanita muscaria mushroom was used for ritual purposes throughout prehistoric Europe.
The use of entheogens for religious purposes resurfaced in the West during the counterculture movements of the 1960s and 70s. Under the leadership of Timothy Leary, new spiritual and intention-based movements began to use LSD and other hallucinogens as tools to access deeper inner exploration. In the United States, the use of peyote for ritual purposes is protected only for members of the Native American Church, which is allowed to cultivate and distribute peyote. However, the genuine religious use of peyote, regardless of one's personal ancestry, is protected in Colorado, Arizona, New Mexico, Nevada, and Oregon.
Psychedelic therapy
Psychedelic therapy (or psychedelic-assisted therapy) refers to the proposed use of psychedelic drugs, such as psilocybin, MDMA, LSD, and ayahuasca, to treat mental disorders. As of 2021, psychedelic drugs are controlled substances in most countries and psychedelic therapy is not legally available outside clinical trials, with some exceptions.
Psychonautics
The aims and methods of psychonautics, when state-altering substances are involved, are commonly distinguished from recreational drug use by research sources. Psychonautics as a means of exploration need not involve drugs, and may take place in a religious context with an established history. Cohen considers psychonautics closer in association to wisdom traditions and other transpersonal and integral movements.
Self-medication
Self-medication, sometimes called do-it-yourself (DIY) medicine, is a human behavior in which an individual uses a substance or other exogenous influence to self-administer treatment for physical or psychological conditions, for example headaches or fatigue.
The substances most widely used in self-medication are over-the-counter drugs and dietary supplements, which are used to treat common health issues at home. These do not require a doctor's prescription to obtain and, in some countries, are available in supermarkets and convenience stores.
Sex
Sex and drugs date back to ancient humans and have been interlocked throughout human history. Both legal and illegal, the consumption of drugs and their effects on the human body encompasses all aspects of sex, including desire, performance, pleasure, conception, gestation, and disease.
There are many different types of drugs that are commonly associated with their effects on sex, including alcohol, cannabis, cocaine, MDMA, GHB, amphetamines, opioids, antidepressants, and many others.
Social movements
Cannabis
In the US, NORML (National Organization for the Reform of Marijuana Laws) has led since the 1970s a movement to legalize cannabis nationally. The so-called "420 movement" is the global association of the number 420 with cannabis consumption: April 20th – fourth month, twentieth day – has become an international counterculture holiday based on the celebration and consumption of cannabis; 4:20 pm on any day is a time to consume cannabis.
Operation Overgrow
Operation Overgrow is the name, given by cannabis activists, of an "operation" to spread marijuana seeds wildly "so it grows like weed". The thought behind the operation is to draw attention to the debate about legalization/decriminalization of marijuana.
Suicide
A drug overdose involves taking a dose of a drug that exceeds safe levels. In the UK (England and Wales), until 2013 a drug overdose was the most common suicide method in females. In 2019, the figure for males was 16%. Self-poisoning accounts for the highest number of non-fatal suicide attempts. In the United States, about 60% of suicide attempts and 14% of suicide deaths involve drug overdoses. The risk of death in suicide attempts involving overdose is about 2%.
Most people are under the influence of sedative-hypnotic drugs (such as alcohol or benzodiazepines) when they die by suicide, with alcoholism present in between 15% and 61% of cases. Countries that have higher rates of alcohol use and a greater density of bars generally also have higher rates of suicide. About 2.2–3.4% of those who have been treated for alcoholism at some point in their life die by suicide. Alcoholics who attempt suicide are usually male, older, and have tried to take their own lives in the past. In adolescents who misuse alcohol, neurological and psychological dysfunctions may contribute to the increased risk of suicide.
Overdose attempts using painkillers are among the most common, due to their easy availability over-the-counter.
Route of administration
Psychoactive drugs are administered via oral ingestion as a tablet, capsule, powder, liquid, and beverage; via injection by subcutaneous, intramuscular, and intravenous route; via rectum by suppository and enema; and via inhalation by smoking, vaporizing, and snorting. The efficiency of each method of administration varies from drug to drug.
The psychiatric drugs fluoxetine, quetiapine, and lorazepam are ingested orally in tablet or capsule form. Alcohol and caffeine are ingested in beverage form; nicotine and cannabis are smoked or vaporized; peyote and psilocybin mushrooms are ingested in botanical form or dried; and crystalline drugs such as cocaine and methamphetamine are usually inhaled or snorted.
Determinants of effects
The theory of dosage, set, and setting is a useful model in dealing with the effects of psychoactive substances, especially in a controlled therapeutic setting as well as in recreational use. Dr. Timothy Leary, based on his own experiences and systematic observations on psychedelics, developed this theory along with his colleagues Ralph Metzner and Richard Alpert (Ram Dass) in the 1960s.
Dosage
The first factor, dosage, has been a truism since ancient times, or at least since Paracelsus, who said, "The dose makes the poison." Some compounds are beneficial or pleasurable when consumed in small amounts, but in higher doses can be harmful or deadly, or can evoke discomfort.
Set
The set is the internal attitudes and constitution of the person, including their expectations, wishes, fears, and sensitivity to the drug. This factor is especially important for the hallucinogens, which have the ability to make conscious experiences out of the unconscious. In traditional cultures, set is shaped primarily by the worldview, health and genetic characteristics that all the members of the culture share.
Setting
The third aspect is setting, which pertains to the surroundings, the place, and the time in which the experiences transpire.
This theory clearly states that the effects are equally the result of chemical, pharmacological, psychological, and physical influences. The model that Timothy Leary proposed was applied to the psychedelics, although it also applies to other psychoactives.
Effects
Psychoactive drugs operate by temporarily affecting a person's neurochemistry, which in turn causes changes in a person's mood, cognition, perception and behavior. There are many ways in which psychoactive drugs can affect the brain. Each drug has a specific action on one or more neurotransmitter or neuroreceptor in the brain.
Drugs that increase activity in particular neurotransmitter systems are called agonists. They act by increasing the synthesis of one or more neurotransmitters, by reducing its reuptake from the synapses, or by mimicking the action by binding directly to the postsynaptic receptor. Drugs that reduce neurotransmitter activity are called antagonists, and operate by interfering with synthesis or blocking postsynaptic receptors so that neurotransmitters cannot bind to them.
Exposure to a psychoactive substance can cause changes in the structure and functioning of neurons, as the nervous system tries to re-establish the homeostasis disrupted by the presence of the drug (see also, neuroplasticity). Exposure to antagonists for a particular neurotransmitter can increase the number of receptors for that neurotransmitter or the receptors themselves may become more responsive to neurotransmitters; this is called sensitization. Conversely, overstimulation of receptors for a particular neurotransmitter may cause a decrease in both number and sensitivity of these receptors, a process called desensitization or tolerance. Sensitization and desensitization are more likely to occur with long-term exposure, although they may occur after only a single exposure. These processes are thought to play a role in drug dependence and addiction. Physical dependence on antidepressants or anxiolytics may result in worse depression or anxiety, respectively, as withdrawal symptoms. Unfortunately, because clinical depression (also called major depressive disorder) is often referred to simply as depression, antidepressants are often requested by and prescribed for patients who are depressed, but not clinically depressed.
Affected neurotransmitter systems
The following is a brief table of notable drugs and their primary neurotransmitter, receptor or method of action. Many drugs act on more than one transmitter or receptor in the brain.
Addiction and dependence
Psychoactive drugs are often associated with addiction or drug dependence. Dependence can be divided into two types: psychological dependence, by which a user experiences negative psychological or emotional withdrawal symptoms (e.g., depression) and physical dependence, by which a user must use a drug to avoid physically uncomfortable or even medically harmful physical withdrawal symptoms. Drugs that are both rewarding and reinforcing are addictive; these properties of a drug are mediated through activation of the mesolimbic dopamine pathway, particularly the nucleus accumbens. Not all addictive drugs are associated with physical dependence, e.g., amphetamine, and not all drugs that produce physical dependence are addictive drugs, e.g., oxymetazoline.
Globally, as of 2016, alcohol use disorders were the most prevalent of all substance use disorders (SUD) worldwide; cannabis dependence and opioid dependence were the next most prevalent SUDs.
Many professionals, self-help groups, and businesses specialize in drug rehabilitation, with varying degrees of success, and many parents attempt to influence the actions and choices of their children regarding psychoactives.
Common forms of rehabilitation include psychotherapy, support groups and pharmacotherapy, which uses psychoactive substances to reduce cravings and physiological withdrawal symptoms while a user is going through detox. Methadone, itself an opioid and a psychoactive substance, is a common treatment for heroin addiction, as is another opioid, buprenorphine. Recent research on addiction has shown some promise in using psychedelics such as ibogaine to treat and even cure drug addictions, although this has yet to become a widely accepted practice.
Legality
The legality of psychoactive drugs has been controversial through most of recent history; the Second Opium War and Prohibition are two historical examples of legal controversy surrounding psychoactive drugs. However, in recent years, the most influential document regarding the legality of psychoactive drugs is the Single Convention on Narcotic Drugs, an international treaty signed in 1961 as an Act of the United Nations. Signed by 73 nations including the United States, the USSR, Pakistan, India, and the United Kingdom, the Single Convention on Narcotic Drugs established Schedules for the legality of each drug and laid out an international agreement to fight addiction to recreational drugs by combatting the sale, trafficking, and use of scheduled drugs. All countries that signed the treaty passed laws to implement these rules within their borders. However, some countries that signed the Single Convention on Narcotic Drugs, such as the Netherlands, are more lenient with their enforcement of these laws.
In the United States, the Food and Drug Administration (FDA) has authority over all drugs, including psychoactive drugs. The FDA regulates which psychoactive drugs are over the counter and which are only available with a prescription. However, certain psychoactive drugs, like alcohol, tobacco, and drugs listed in the Single Convention on Narcotic Drugs are subject to criminal laws. The Controlled Substances Act of 1970 regulates the recreational drugs outlined in the Single Convention on Narcotic Drugs. Alcohol is regulated by state governments, but the federal National Minimum Drinking Age Act penalizes states for not following a national drinking age. Tobacco is also regulated by all fifty state governments. Most people accept such restrictions and prohibitions of certain drugs, especially the "hard" drugs, which are illegal in most countries.
In the medical context, the use of psychoactive drugs as a treatment for illness is widespread and generally accepted. Little controversy exists concerning over-the-counter psychoactive medications such as antiemetics and antitussives. Psychoactive drugs are commonly prescribed to patients with psychiatric disorders. However, certain critics believe that some prescription psychoactives, such as antidepressants and stimulants, are overprescribed and threaten patients' judgement and autonomy.
Effect on animals
A number of animals consume different psychoactive plants, animals, berries and even fermented fruit, becoming intoxicated. An example of this is cats after consuming catnip. Traditional legends of sacred plants often contain references to animals that introduced humankind to their use. Animals and psychoactive plants appear to have co-evolved, possibly explaining why these chemicals and their receptors exist within the nervous system.
Widely used psychoactive drugs
This is a list of commonly used drugs that contain psychoactive ingredients. Note that the following lists contain both legal and illegal drugs, depending on each country's laws.
Common legal drugs
The most widely consumed psychotropic drugs worldwide are:
Caffeine
Alcohol
Nicotine
Common prescribed drugs
Benzodiazepines
Cannabis
Opioids
Amphetamines
SSRIs
Common street drugs
Cocaine
Heroin
LSD
Methamphetamine
Ecstasy
Psilocybin mushrooms
Benzodiazepines
Pharmaceutical drugs
Stimulants
| Biology and health sciences | General concepts_2 | Health |
28266290 | https://en.wikipedia.org/wiki/Homo%20luzonensis | Homo luzonensis | Homo luzonensis, also known as Callao Man and locally called "Ubag" after a mythical caveman, is an extinct, possibly pygmy, species of archaic human from the Late Pleistocene of Luzon, the Philippines. Their remains, teeth and phalanges, are known only from Callao Cave in the northern part of the island dating to before 50,000 years ago. They were initially identified as belonging to modern humans in 2010, but in 2019, after the discovery of more specimens, they were placed into a new species based on the presence of a wide range of traits similar to modern humans as well as to Australopithecus and early Homo. In 2023, a study found that the fossilized remains were 134,000 ± 14,000 years old, much older than previously thought.
Their ancestors, who may have been Asian H. erectus or some other even earlier Homo, would have needed to have made a sea crossing of several miles at minimum to reach the island. Hominin presence on Luzon dates to as early as 771,000 to 631,000 years ago. The inhabitants of the cave dragged in mainly Philippine deer carcasses, and used tools for butchering.
Taxonomy
The first bone was discovered in 2007 by zooarchaeologist Philip Piper while sorting through animal bones recovered from the archaeological excavation led by Filipino archaeologist Armand Mijares in Callao Cave, Northern Luzon, Philippines. In 2010, Mijares and a French bioanthropologist, together with a team of international and local Philippine archaeologists, identified them as belonging to modern humans. After the discovery of 12 new specimens, and based on the apparent presence of both modern-humanlike and primitive Australopithecus-like features, they reassigned the remains (and other hominin findings from the cave) to a new species, Homo luzonensis, the specific name deriving from the name of the island.
The holotype, CCH6, comprises the upper right premolars and molars. The paratypes are: CCH1, a right third metatarsal bone of the foot; CCH2 and CCH5, two phalanges of the fingers; CCH3 and CCH4, two phalanges of the foot; CCH8, a left premolar; and CCH9, a right third molar. CCH7 represents a juvenile femoral shaft. These represent at least three individuals. The specimens are kept at the National Museum of the Philippines, Manila.
The exact taxonomic placement of H. luzonensis is unknown, and, like for other tropical hominins, DNA extraction failed. It is possible that—like what is hypothesized for H. floresiensis from Flores, Indonesia—H. luzonensis descended from an early H. erectus dispersal across Southeast Asia. It is also possible that these two insular archaic humans descend from an entirely different Homo species possibly earlier than H. erectus. The bones were dated to before 50,000 years ago, and there is evidence of hominin activity on the island as early as 771,000 – 631,000 years ago.
Anatomy
Like other endemic fauna on Luzon, as well as H. floresiensis, H. luzonensis may have shrunk in size due to insular dwarfism. However, more complete remains are needed to verify size. Much like H. floresiensis, H. luzonensis presents a number of characteristics more similar to Australopithecus and early Homo than to modern humans and more recent Homo.
The teeth of H. luzonensis are small and mesiodistally (in the dimension of tooth width) shortened. The molars are smaller than those of H. floresiensis. Like other recent Homo and modern humans, the molars decrease in size towards the back of the mouth, and the enamel–dentine juncture lacks well-defined wavy crenulations. The enamel–dentine juncture is most similar to that of Asian H. erectus. The premolars are oddly large compared to the molars, with proportions more similar to Paranthropus than to any other Homo, though H. luzonensis postcanine teeth differ greatly from those of Paranthropus in size and shape. H. luzonensis premolars share many characteristics with those of Australopithecus, Paranthropus, and early Homo.
The finger bones are long, narrow, and curved, which is seen in Australopithecus, H. floresiensis, and sometimes modern humans. They are dorso-palmarly (from the palm to the back of the hand) compressed, and have well-developed flexor sheath attachment, which are seen in Australopithecus and the early H. habilis. Unique to H. luzonensis, the dorsal beak near the knuckle was strongly developed and angled towards the wrist rather than the finger. The foot bones are morphologically unique among Homo, and are distinguishable from those of A. africanus and A. afarensis. Australopithecus limbs are generally interpreted as being adaptations for bipedalism and potentially suspensory behavior in the trees, but the fragmentary record of H. luzonensis limits extrapolation of locomotory behavior.
Since the remains are so fragmentary, it is difficult to make accurate estimates of body size for this species, but they may have been within the height range of modern-day Philippine Negritos.
Culture
Because Luzon has remained an island throughout the Quaternary, the ancestors of H. luzonensis would have had to have made a substantial sea crossing over the Huxley Line.
About 90% of the bone fragments from Callao Cave belong to the Philippine deer, which suggests that deer carcasses were periodically brought into the cave. With the exception of Palawan (where there were tigers), there is no evidence of large carnivores ever inhabiting the Philippines during the Pleistocene, which points to hominin activity as the most likely explanation for the accumulation. The Philippine warty pig and an extinct bovid were also present. There are cut marks on a deer tibia, and the lack of tools in the cave could have resulted either from the use of organic material rather than stone for tools, or from the processing of meat away from the cave.
The Rizal Archaeological Site, situated in Rizal, Kalinga, Philippines, within an area that has been subject to archaeological exploration since the 1950s, yielded an almost complete skeleton of a rhino (the extinct Nesorhinus philippinensis) which had been butchered by early hominins c. 709,000 years ago. Together with the rhinoceros skeleton, six lithic cores, forty-nine lithic flakes, and two hammerstones were found at the Rizal site. Some cores and the used lithic raw material show a similarity to the chert assemblage from the Lower Paleolithic Arubo 1 site in central Luzon. Also present were the remains of the elephant-relative Stegodon, the Philippine deer, freshwater turtles, and monitor lizards.
| Biology and health sciences | Homo | Biology |
49396186 | https://en.wikipedia.org/wiki/First%20observation%20of%20gravitational%20waves | First observation of gravitational waves | The first direct observation of gravitational waves was made on 14 September 2015 and was announced by the LIGO and Virgo collaborations on 11 February 2016. Previously, gravitational waves had been inferred only indirectly, via their effect on the timing of pulsars in binary star systems. The waveform, detected by both LIGO observatories, matched the predictions of general relativity for a gravitational wave emanating from the inward spiral and merger of two black holes (of about 36 and 29 solar masses) and the subsequent ringdown of a single black hole remnant of about 62 solar masses. The signal was named GW150914 (from gravitational wave and the date of observation 2015-09-14). It was also the first observation of a binary black hole merger, demonstrating both the existence of binary stellar-mass black hole systems and the fact that such mergers could occur within the current age of the universe.
This first direct observation was reported around the world as a remarkable accomplishment for many reasons. Efforts to directly prove the existence of such waves had been ongoing for over fifty years, and the waves are so minuscule that Albert Einstein himself doubted that they could ever be detected. The waves given off by the cataclysmic merger of GW150914 reached Earth as a ripple in spacetime that changed the length of a 1,120 km LIGO effective span by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. The energy released by the binary as it spiralled together and merged was immense, with approximately 3.0 solar masses of mass–energy (around 5.4×10^47 joules, or roughly 5,400 foes) in total radiated as gravitational waves, reaching a peak emission rate in its final few milliseconds of about 3.6×10^49 watts – a level greater than the combined power of all light radiated by all the stars in the observable universe.
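As a rough sanity check on the figures above, the mass–energy arithmetic can be reproduced in a few lines. The sketch below takes the ~3.0 solar-mass radiated-energy estimate and the peak-power figure as inputs, and uses a rough literature value of ~2×10^49 W for the total starlight of the observable universe purely for comparison; it only illustrates orders of magnitude.

```python
# Back-of-envelope check of the radiated-energy figures quoted above.
# Assumes ~3.0 solar masses of mass-energy radiated and a peak power of ~3.6e49 W;
# the ~2e49 W starlight figure is a rough assumed estimate used only for comparison.
M_SUN = 1.989e30   # kg, solar mass
C = 2.998e8        # m/s, speed of light
FOE = 1e44         # J, 1 foe = 10^51 erg

energy = 3.0 * M_SUN * C**2          # E = m c^2
print(f"Radiated energy ~ {energy:.1e} J ~ {energy / FOE:.0f} foes")  # ~5.4e47 J, ~5400 foes

peak_power = 3.6e49                  # W, peak emission rate
starlight = 2e49                     # W, assumed combined starlight (rough)
print(f"Peak power / starlight ~ {peak_power / starlight:.1f}")
```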
The observation confirmed the last remaining directly undetected prediction of general relativity and corroborated its predictions of space-time distortion in the context of large scale cosmic events (known as strong field tests). It was heralded as inaugurating a new era of gravitational-wave astronomy, which enables observations of violent astrophysical events that were not previously possible and allows for the direct observation of the earliest history of the universe. On 15 June 2016, two more detections of gravitational waves, made in late 2015, were announced. Eight more observations were made in 2017, including GW170817, the first observed merger of binary neutron stars, which was also observed in electromagnetic radiation.
Gravitational waves
Albert Einstein predicted the existence of gravitational waves in 1916, on the basis of his theory of general relativity. General relativity interprets gravity as a consequence of distortions in spacetime caused by the presence of mass, and further entails that certain movements or acceleration of these masses will cause distortions – or "ripples" – in spacetime which spread outward from the source at the speed of light. Einstein considered this mostly a curiosity, since he understood that these ripples would be far too minuscule to detect using any technology foreseen at that time. As a further consequence following from the conservation of energy, the energy radiated away by gravitational waves from a system of two objects in mutual orbit would cause them to slowly spiral inwards, although again, this effect would be extremely minute and thus challenging to observe.
One case where gravitational waves would be strongest is during the final moments of the merger of two compact objects such as neutron stars or black holes. Over a span of millions of years, binary neutron stars and binary black holes lose energy, largely through gravitational waves, and as a result they spiral in towards each other. At the very end of this process, the two objects will reach extreme velocities, and in the final fraction of a second of their merger a substantial amount of their mass would theoretically be converted into gravitational energy, travelling outward as gravitational waves and allowing a greater than usual chance for detection. However, since little was known about the number of compact binaries in the universe and reaching that final stage can be very slow, there was little certainty as to how often such events might happen.
Observation
Gravitational waves can be detected indirectly – by observing celestial phenomena caused by gravitational waves – or more directly by means of instruments such as the Earth-based LIGO or the planned space-based LISA instrument.
Indirect observation
Evidence of gravitational waves was first deduced in 1974 through the motion of the double neutron star system PSR B1913+16, in which one of the stars is a pulsar that emits electromagnetic pulses at radio frequencies at precise, regular intervals as it rotates. Russell Hulse and Joseph Taylor, who discovered the stars, also showed that, over time, the orbital period of the system was shrinking and that the stars were gradually spiralling towards each other, with an energy loss that agreed closely with the predicted energy that would be radiated by gravitational waves. For this work, Hulse and Taylor were awarded the Nobel Prize in Physics in 1993. Further observations of this pulsar and of others in multiple systems (such as the double pulsar system PSR J0737-3039) also agree with general relativity to high precision.
Direct observation
Direct observation of gravitational waves was not possible for many decades following their prediction, due to the minuscule effect that would need to be detected and separated from the background of vibrations present everywhere on Earth. A technique called interferometry was suggested in the 1960s and eventually technology developed sufficiently for this technique to become feasible.
In the present approach used by LIGO, a laser beam is split and the two halves are recombined after traveling different paths. Changes to the lengths of the paths, or to the time taken for the two split beams to reach the point where they recombine, caused by the effect of a passing gravitational wave, are revealed as "beats". Such a technique is extremely sensitive to tiny changes in the distance or time taken to traverse the two paths. In theory, an interferometer with arms about 4 km long would be capable of revealing a change in space-time of a tiny fraction of the size of a single proton as a gravitational wave of sufficient strength passed through Earth from elsewhere. This effect would be perceptible only to other interferometers of a similar size, such as the Virgo, GEO 600 and planned KAGRA and INDIGO detectors. In practice at least two interferometers would be needed, because any gravitational wave would be detected at both of them while other kinds of disturbances would generally not be present at both; this allows the sought-after signal to be distinguished from noise. The project was eventually founded in 1992 as the Laser Interferometer Gravitational-Wave Observatory (LIGO). The original instruments were upgraded between 2010 and 2015 (to Advanced LIGO), giving an increase of around 10 times their original sensitivity.
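To make the "tiny fraction of a proton" comparison concrete, a minimal sketch of the strain arithmetic follows. The peak strain h ≈ 10^-21 is an assumed, representative figure for GW150914 rather than a number taken from this article.

```python
# Illustrative strain-to-displacement arithmetic for a LIGO-style interferometer.
# Assumes a strain amplitude of h ~ 1e-21 (representative, assumed value).
h = 1e-21                  # dimensionless strain
arm_length = 4e3           # m, physical arm length
proton_diameter = 1.7e-15  # m, approximate proton charge diameter

delta_L = h * arm_length   # change in arm length = strain x baseline
print(f"dL ~ {delta_L:.1e} m ~ {delta_L / proton_diameter:.1e} proton diameters")
# ~4e-18 m, i.e. a few thousandths of a proton diameter
```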
LIGO operates two gravitational-wave observatories in unison, located some 3,000 km (about 1,900 mi) apart: the LIGO Livingston Observatory in Livingston, Louisiana, and the LIGO Hanford Observatory, on the DOE Hanford Site near Richland, Washington. The tiny shifts in the length of their arms are continually compared, and significant patterns which appear to arise synchronously are followed up to determine whether a gravitational wave may have been detected or whether some other cause was responsible.
Initial LIGO operations between 2002 and 2010 did not detect any statistically significant events that could be confirmed as gravitational waves. This was followed by a multi-year shut-down while the detectors were replaced by much improved "Advanced LIGO" versions. In February 2015, the two advanced detectors were brought into engineering mode, in which the instruments are operating fully for the purpose of testing and confirming they are functioning correctly before being used for research, with formal science observations due to begin on 18 September 2015.
Throughout the development and initial observations by LIGO, several "blind injections" of fake gravitational wave signals were introduced to test the ability of the researchers to identify such signals. To protect the efficacy of blind injections, only four LIGO scientists knew when such injections occurred, and that information was revealed only after a signal had been thoroughly analyzed by researchers. On 14 September 2015, while LIGO was running in engineering mode but without any blind data injections, the instrument reported a possible gravitational wave detection. The detected event was given the name GW150914.
GW150914 event
Event detection
GW150914 was detected by the LIGO detectors in Hanford, Washington state, and Livingston, Louisiana, USA, at 9:50:45 UTC on 14 September 2015. The LIGO detectors were operating in "engineering mode", meaning that they were operating fully but had not yet begun a formal "research" phase (which was due to commence three days later, on 18 September), so initially there was a question as to whether the signals were real detections or simulated data injected for testing purposes; it was soon ascertained that they were not tests.
The chirp signal lasted over 0.2 seconds, and increased in frequency and amplitude in about 8 cycles from 35 Hz to 250 Hz. The signal is in the audible range and has been described as resembling the "chirp" of a bird; astrophysicists and other interested parties the world over excitedly responded by imitating the signal on social media upon the announcement of the discovery. (The frequency increases because each orbit is noticeably faster than the one before during the final moments before merging.)
The trigger that indicated a possible detection was reported within three minutes of acquisition of the signal, using rapid ('online') search methods that provide a quick, initial analysis of the data from the detectors. After the initial automatic alert at 9:54 UTC, a sequence of internal emails confirmed that no scheduled or unscheduled injections had been made, and that the data looked clean. After this, the rest of the collaborating team was quickly made aware of the tentative detection and its parameters.
More detailed statistical analysis of the signal, and of 16 days of surrounding data from 12 September to 20 October 2015, identified GW150914 as a real event, with an estimated significance of at least 5.1 sigma or a confidence level of 99.99994%. Corresponding wave peaks were seen at Livingston seven milliseconds before they arrived at Hanford. Gravitational waves propagate at the speed of light, and the disparity is consistent with the light travel time between the two sites. The waves had traveled at the speed of light for more than a billion years.
At the time of the event, the Virgo gravitational wave detector (near Pisa, Italy) was offline and undergoing an upgrade; had it been online it would likely have been sensitive enough to also detect the signal, which would have greatly improved the positioning of the event. GEO600 (near Hannover, Germany) was not sensitive enough to detect the signal. Consequently, neither of those detectors was able to confirm the signal measured by the LIGO detectors.
Astrophysical origin
The event happened at a luminosity distance of roughly 410 megaparsecs (determined by the amplitude of the signal), or about 1.3 billion light years, corresponding to a cosmological redshift of about 0.09 (90% credible intervals). Analysis of the signal along with the inferred redshift suggested that it was produced by the merger of two black holes with masses of about 36 times and 29 times the mass of the Sun (in the source frame), resulting in a post-merger black hole of about 62 solar masses. The mass–energy of the missing 3 solar masses was radiated away in the form of gravitational waves.
During the final 20 milliseconds of the merger, the power of the radiated gravitational waves peaked at about 3.6×10^49 watts (526 dBm) – 50 times greater than the combined power of all light radiated by all the stars in the observable universe. The amount of this energy that was received by the entire planet Earth was about 36 billion joules, of which only a small amount was absorbed.
Across the 0.2-second duration of the detectable signal, the relative tangential (orbiting) velocity of the black holes increased from 30% to 60% of the speed of light. The orbital frequency of 75 Hz (half the gravitational wave frequency) means that the objects were orbiting each other at a distance of only 350 km by the time they merged. The phase changes to the signal's polarization allowed calculation of the objects' orbital frequency, and, taken together with the amplitude and pattern of the signal, allowed calculation of their masses and therefore their extreme final velocities and orbital separation (distance apart) when they merged. That information showed that the objects had to be black holes, as any other kind of known objects with these masses would have been physically larger and therefore merged before that point, or would not have reached such velocities in such a small orbit. The highest observed neutron star mass is about 2 solar masses, with a conservative upper limit for the mass of a stable neutron star of 3 solar masses, so that a pair of neutron stars would not have had sufficient mass to account for the merger (unless exotic alternatives exist, for example, boson stars), while a black hole-neutron star pair would have merged sooner, resulting in a final orbital frequency that was not so high.
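The ~350 km separation quoted above can be checked with Newtonian (Keplerian) orbital mechanics. The sketch below assumes a total mass of roughly 65 solar masses (36 + 29) and the 75 Hz orbital frequency, and ignores relativistic corrections, which are substantial this close to merger, so it is only an order-of-magnitude check.

```python
import math

# Kepler's third law: omega^2 * a^3 = G * M_total  =>  a = (G * M_total / omega^2)^(1/3)
# Assumes M_total ~ 65 solar masses and f_orbital = 75 Hz (half the 150 Hz wave frequency).
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30     # kg

M_total = 65 * M_SUN
omega = 2 * math.pi * 75.0                        # rad/s, orbital angular frequency
separation = (G * M_total / omega**2) ** (1 / 3)  # m
print(f"Orbital separation ~ {separation / 1e3:.0f} km")  # ~340-350 km
```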
The decay of the waveform after it peaked was consistent with the damped oscillations of a black hole as it relaxed to a final merged configuration. Although the inspiral motion of compact binaries can be described well from post-Newtonian calculations, the strong gravitational field merger stage can only be solved in full generality by large-scale numerical relativity simulations.
In the improved model and analysis, the post-merger object is found to be a rotating Kerr black hole with a spin parameter of about 0.67, i.e. one with roughly 2/3 of the maximum possible angular momentum for its mass.
The two stars which formed the two black holes were likely formed about 2 billion years after the Big Bang with masses of between 40 and 100 times the mass of the Sun.
Location in the sky
Gravitational wave instruments are whole-sky monitors with little ability to resolve signals spatially. A network of such instruments is needed to locate the source in the sky through triangulation. With only the two LIGO instruments in observational mode, GW150914's source location could only be confined to an arc on the sky. This was done via analysis of the 7 ms time delay, along with amplitude and phase consistency across both detectors. This analysis produced a credible region of 150 deg2 with a probability of 50%, or 610 deg2 with a probability of 90%, located mainly in the Southern Celestial Hemisphere, in the rough direction of (but much farther than) the Magellanic Clouds.
For comparison, the area of the constellation Orion is 594 deg2.
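A minimal sketch of the two-detector localisation geometry follows, assuming the ~7 ms inter-site delay quoted earlier and an approximate 3,000 km Hanford–Livingston baseline: the arrival-time difference fixes only the angle between the source direction and the baseline, which is why the result is an arc on the sky rather than a point.

```python
import math

# Plane-wave geometry: c * delay = baseline * cos(theta), where theta is the angle
# between the source direction and the line joining the two detectors.
# Baseline and delay values are approximate assumptions based on the text above.
c = 2.998e8          # m/s
baseline = 3.0e6     # m, approximate Hanford-Livingston separation
delay = 7.0e-3       # s, observed arrival-time difference

print(f"Maximum possible delay ~ {baseline / c * 1e3:.0f} ms")  # ~10 ms
theta = math.degrees(math.acos(c * delay / baseline))
print(f"Angle between source direction and baseline ~ {theta:.0f} deg")
```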
Coincident gamma-ray observation
The Fermi Gamma-ray Space Telescope reported that its Gamma-Ray Burst Monitor (GBM) instrument detected a weak gamma-ray burst above 50 keV, starting 0.4 seconds after the LIGO event and with a positional uncertainty region overlapping that of the LIGO observation. The Fermi team calculated the odds of such an event being the result of a coincidence or noise at 0.22%. However a gamma ray burst would not have been expected, and observations from the INTEGRAL telescope's all-sky SPI-ACS instrument indicated that any energy emission in gamma-rays and hard X-rays from the event was less than one millionth of the energy emitted as gravitational waves, which "excludes the possibility that the event is associated with substantial gamma-ray radiation, directed towards the observer". If the signal observed by the Fermi GBM was genuinely astrophysical, INTEGRAL would have indicated a clear detection at a significance of 15 sigma above background radiation. The AGILE space telescope also did not detect a gamma-ray counterpart of the event.
A follow-up analysis by an independent group, released in June 2016, developed a different statistical approach to estimate the spectrum of the gamma-ray transient. It concluded that Fermi GBM's data did not show evidence of a gamma ray burst, and was either background radiation or an Earth albedo transient on a 1-second timescale. A rebuttal of this follow-up analysis, however, pointed out that the independent group misrepresented the analysis of the original Fermi GBM Team paper and therefore misconstrued the results of the original analysis. The rebuttal reaffirmed that the false coincidence probability is calculated empirically and is not refuted by the independent analysis.
Black hole mergers of the type thought to have produced the gravitational wave event are not expected to produce gamma-ray bursts, as stellar-mass black hole binaries are not expected to have large amounts of orbiting matter. Avi Loeb has theorised that if a massive star is rapidly rotating, the centrifugal force produced during its collapse will lead to the formation of a rotating bar that breaks into two dense clumps of matter with a dumbbell configuration that becomes a black hole binary, and at the end of the star's collapse it triggers a gamma-ray burst. Loeb suggests that the 0.4 second delay is the time it took the gamma-ray burst to cross the star, relative to the gravitational waves.
Other follow-up observations
The reconstructed source area was targeted by follow-up observations covering radio, optical, near infra-red, X-ray, and gamma-ray wavelengths along with searches for coincident neutrinos. However, because LIGO had not yet started its science run, notice to other telescopes was delayed.
The ANTARES telescope detected no neutrino candidates within ±500 seconds of GW150914. The IceCube Neutrino Observatory detected three neutrino candidates within ±500 seconds of GW150914. One event was found in the southern sky and two in the northern sky. This was consistent with the expectation of background detection levels. None of the candidates were compatible with the 90% confidence area of the merger event. Although no neutrinos were detected, the lack of such observations provided a limit on neutrino emission from this type of gravitational wave event.
Observations by the Swift Gamma-Ray Burst Mission of nearby galaxies in the region of the detection, two days after the event, did not detect any new X-ray, optical or ultraviolet sources.
Announcement
The announcement of the detection was made on 11 February 2016 at a news conference in Washington, D.C. by David Reitze, the executive director of LIGO, with a panel comprising Gabriela González, Rainer Weiss and Kip Thorne, of LIGO, and France A. Córdova, the director of NSF. Barry Barish delivered the first presentation on this discovery to a scientific audience simultaneously with the public announcement.
The initial announcement paper was published during the news conference in Physical Review Letters, with further papers either published shortly afterwards or immediately available in preprint form.
Awards and recognition
In May 2016, the full collaboration, and in particular Ronald Drever, Kip Thorne, and Rainer Weiss, received the Special Breakthrough Prize in Fundamental Physics for the observation of gravitational waves. Drever, Thorne, Weiss, and the LIGO discovery team also received the Gruber Prize in Cosmology. Drever, Thorne, and Weiss were also awarded the 2016 Shaw Prize in Astronomy and the 2016 Kavli Prize in Astrophysics. Barish was awarded the 2016 Enrico Fermi Prize from the Italian Physical Society (Società Italiana di Fisica). In January 2017, LIGO spokesperson Gabriela González and the LIGO team were awarded the 2017 Bruno Rossi Prize.
The 2017 Nobel Prize in Physics was awarded to Rainer Weiss, Barry Barish and Kip Thorne "for decisive contributions to the LIGO detector and the observation of gravitational waves".
Implications
The observation was heralded as inaugurating a revolutionary era of gravitational-wave astronomy. Prior to this detection, astrophysicists and cosmologists were able to make observations based upon electromagnetic radiation (including visible light, X-rays, microwave, radio waves, gamma rays) and particle-like entities (cosmic rays, stellar winds, neutrinos, and so on). These have significant limitations – light and other radiation may not be emitted by many kinds of objects, and can also be obscured or hidden behind other objects. Objects such as galaxies and nebulae can also absorb, re-emit, or modify light generated within or behind them, and compact stars or exotic stars may contain material which is dark and radio silent, and as a result there is little evidence of their presence other than through their gravitational interactions.
Expectations for detection of future binary merger events
On 15 June 2016, the LIGO group announced an observation of another gravitational wave signal, named GW151226. The Advanced LIGO was predicted to detect five more black hole mergers like GW150914 in its next observing campaign from November 2016 until August 2017 (it turned out to be seven), and then 40 binary star mergers each year, in addition to an unknown number of more exotic gravitational wave sources, some of which may not be anticipated by current theory.
Planned upgrades are expected to double the signal-to-noise ratio, expanding the volume of space in which events like GW150914 can be detected by a factor of ten. Additionally, Advanced Virgo, KAGRA, and a possible third LIGO detector in India will extend the network and significantly improve the position reconstruction and parameter estimation of sources.
Laser Interferometer Space Antenna (LISA) is a proposed space based observation mission to detect gravitational waves. With the proposed sensitivity range of LISA, merging binaries like GW150914 would be detectable about 1000 years before they merge, providing for a class of previously unknown sources for this observatory if they exist within about 10 megaparsecs. LISA Pathfinder, LISA's technology development mission, was launched in December 2015 and it demonstrated that the LISA mission is feasible.
A 2016 model predicted LIGO would detect approximately 1000 black hole mergers per year when it reached full sensitivity following upgrades.
Lessons for stellar evolution and astrophysics
The masses of the two pre-merger black holes provide information about stellar evolution. Both black holes were more massive than previously discovered stellar-mass black holes, which were inferred from X-ray binary observations. This implies that the stellar winds from their progenitor stars must have been relatively weak, and therefore that the metallicity (mass fraction of chemical elements heavier than hydrogen and helium) must have been less than about half the solar value.
The fact that the pre-merger black holes were present in a binary star system, as well as the fact that the system was compact enough to merge within the age of the universe, constrains either binary star evolution or dynamical formation scenarios, depending on how the black hole binary was formed. A significant number of black holes must receive low natal kicks (the velocity a black hole gains at its formation in a core-collapse supernova event), otherwise the black hole forming in a binary star system would be ejected and an event like GW150914 would be prevented. The survival of such binaries through common envelope phases of high rotation in massive progenitor stars may also be necessary. The majority of the latest black hole model predictions comply with these added constraints.
The discovery of the GW150914 merger event increases the lower limit on the rate of such events, and rules out certain theoretical models that predicted very low rates of less than 1 Gpc−3yr−1 (one event per cubic gigaparsec per year). Analysis also resulted in a rate estimate for events like GW150914 substantially below the previous upper limit of ~140 Gpc−3yr−1.
Impact on future cosmological observation
Measurement of the waveform and amplitude of the gravitational waves from a black hole merger event makes accurate determination of its distance possible. The accumulation of black hole merger data from cosmologically distant events may help to create more precise models of the history of the expansion of the universe and the nature of the dark energy that influences it.
The earliest universe is opaque since the cosmos was so energetic then that most matter was ionized and photons were scattered by free electrons. However, this opacity would not affect gravitational waves from that time, so if they occurred at levels strong enough to be detected at this distance, it would allow a window to observe the cosmos beyond the current visible universe. Gravitational-wave astronomy therefore may some day allow direct observation of the earliest history of the universe.
Tests of general relativity
The inferred fundamental properties, mass and spin, of the post-merger black hole were consistent with those of the two pre-merger black holes, following the predictions of general relativity. This is the first test of general relativity in the very strong-field regime. No evidence could be established against the predictions of general relativity.
The opportunity was limited in this signal to investigate the more complex general relativity interactions, such as tails produced by interactions between the gravitational wave and curved space-time background. Although a moderately strong signal, it is much smaller than that produced by binary-pulsar systems. In the future stronger signals, in conjunction with more sensitive detectors, could be used to explore the intricate interactions of gravitational waves as well as to improve the constraints on deviations from general relativity.
Speed of gravitational waves and limit on possible mass of graviton
The speed of gravitational waves (vg) is predicted by general relativity to be the speed of light (c). The extent of any deviation from this relationship can be parameterized in terms of the mass of the hypothetical graviton. The graviton is the name given to an elementary particle that would act as the force carrier for gravity, in quantum theories about gravity. It is expected to be massless if, as it appears, gravitation has an infinite range. (This is because the more massive a gauge boson is, the shorter is the range of the associated force; as with the infinite range of electromagnetism, which is due to the massless photon, the infinite range of gravity implies that any associated force-carrying particle would also be massless.) If the graviton were not massless, gravitational waves would propagate below lightspeed, with lower frequencies (ƒ) being slower than higher frequencies, leading to dispersion of the waves from the merger event. No such dispersion was observed. The observations of the inspiral slightly improve (lower) the upper limit on the mass of the graviton from Solar System observations, to about 1.2×10^-22 eV/c2, corresponding to a Compton wavelength (λg) of greater than 10^13 km, roughly 1 light-year. Using the lowest observed frequency of 35 Hz, this translates to a lower limit on vg such that the upper limit on 1 − vg/c is ~4×10^-19.
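The dispersion bound can be turned into the quoted speed constraint with a couple of lines of arithmetic. The sketch below assumes the Compton-wavelength bound of about 10^13 km and the lowest observed frequency of 35 Hz, and uses the standard leading-order massive-graviton dispersion relation.

```python
import math

# For a massive graviton, 1 - v_g/c ~ (c / (f * lambda_g))^2 / 2 when c/(f*lambda_g) << 1.
# Assumes lambda_g ~ 1e13 km (the Compton-wavelength bound above) and f = 35 Hz.
c = 2.998e8             # m/s
lambda_g = 1e13 * 1e3   # m, Compton wavelength bound
f = 35.0                # Hz, lowest observed frequency

x = c / (f * lambda_g)
print(f"1 - v_g/c ~ {0.5 * x**2:.1e}")  # ~4e-19
```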
| Physical sciences | Theory of relativity | Physics |
35258497 | https://en.wikipedia.org/wiki/Telephone%20number%20%28mathematics%29 | Telephone number (mathematics) | In mathematics, the telephone numbers or the involution numbers form a sequence of integers that count the ways n people can be connected by person-to-person telephone calls. These numbers also describe the number of matchings (the Hosoya index) of a complete graph on n vertices, the number of permutations on n elements that are involutions, the sum of absolute values of coefficients of the Hermite polynomials, the number of standard Young tableaux with n cells, and the sum of the degrees of the irreducible representations of the symmetric group. Involution numbers were first studied in 1800 by Heinrich August Rothe, who gave a recurrence equation by which they may be calculated, giving the values (starting from n = 0) 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ...
Applications
John Riordan provides the following explanation for these numbers: suppose that n people subscribe to a telephone service that can connect any two of them by a call, but cannot make a single call connecting more than two people. How many different patterns of connection are possible? For instance, with three subscribers, there are three ways of forming a single telephone call, and one additional pattern in which no calls are being made, for a total of four patterns. For this reason, the numbers counting how many patterns are possible are sometimes called the telephone numbers.
Every pattern of pairwise connections between people defines an involution, a permutation of the people that is its own inverse. In this permutation, each two people who call each other are swapped, and the people not involved in calls remain fixed in place. Conversely, every possible involution has the form of a set of pairwise swaps of this type. Therefore, the telephone numbers also count involutions. The problem of counting involutions was the original combinatorial enumeration problem studied by Rothe in 1800 and these numbers have also been called involution numbers.
In graph theory, a subset of the edges of a graph that touches each vertex at most once is called a matching. Counting the matchings of a given graph is important in chemical graph theory, where the graphs model molecules and the number of matchings is the Hosoya index. The largest possible Hosoya index of an n-vertex graph is given by the complete graphs, for which any pattern of pairwise connections is possible; thus, the Hosoya index of a complete graph on n vertices is the same as the n-th telephone number.
A Ferrers diagram is a geometric shape formed by a collection of n squares in the plane, grouped into a polyomino with a horizontal top edge, a vertical left edge, and a single monotonic chain of edges from top right to bottom left. A standard Young tableau is formed by placing the numbers from 1 to n into these squares in such a way that the numbers increase from left to right and from top to bottom throughout the tableau.
According to the Robinson–Schensted correspondence, permutations correspond one-for-one with ordered pairs of standard Young tableaux. Inverting a permutation corresponds to swapping the two tableaux, and so the self-inverse permutations correspond to single tableaux, paired with themselves. Thus, the telephone numbers also count the number of Young tableaux with n squares. In representation theory, the Ferrers diagrams correspond to the irreducible representations of the symmetric group of permutations, and the Young tableaux with a given shape form a basis of the irreducible representation with that shape. Therefore, the telephone numbers give the sum of the degrees of the irreducible representations.
In the mathematics of chess, the telephone numbers count the number of ways to place n rooks on an n × n chessboard in such a way that no two rooks attack each other (the so-called eight rooks puzzle), and in such a way that the configuration of the rooks is symmetric under a diagonal reflection of the board. Via the Pólya enumeration theorem, these numbers form one of the key components of a formula for the overall number of "essentially different" configurations of n mutually non-attacking rooks, where two configurations are counted as essentially different if there is no symmetry of the board that takes one into the other.
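For small boards this correspondence is easy to verify by brute force; the sketch below (a hypothetical helper, not from the article) counts placements of n non-attacking rooks that are fixed by reflection in the main diagonal, which are exactly the self-inverse permutations.

```python
from itertools import permutations

# A placement of n non-attacking rooks is a permutation (rook in row i, column perm[i]);
# symmetry under reflection in the main diagonal means the permutation is an involution.
def symmetric_rook_placements(n: int) -> int:
    return sum(1 for perm in permutations(range(n))
               if all(perm[perm[i]] == i for i in range(n)))

print([symmetric_rook_placements(n) for n in range(1, 7)])
# [1, 2, 4, 10, 26, 76] -- the telephone numbers
```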
Mathematical properties
Recurrence
The telephone numbers satisfy the recurrence relation
T(0) = 1, T(1) = 1, and T(n) = T(n − 1) + (n − 1) T(n − 2) for n ≥ 2,
first published in 1800 by Heinrich August Rothe, by which they may easily be calculated.
One way to explain this recurrence is to partition the connection patterns of the n subscribers to a telephone system into the patterns in which the first person is not calling anyone else, and the patterns in which the first person is making a call. There are T(n − 1) connection patterns in which the first person is disconnected, explaining the first term of the recurrence. If the first person is connected to someone, there are n − 1 choices for that person, and T(n − 2) patterns of connection for the remaining n − 2 people, explaining the second term of the recurrence.
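The recurrence translates directly into a short memoised function; a minimal sketch:

```python
from functools import lru_cache

# T(0) = T(1) = 1 and T(n) = T(n - 1) + (n - 1) * T(n - 2), as described above.
@lru_cache(maxsize=None)
def telephone(n: int) -> int:
    if n < 2:
        return 1
    return telephone(n - 1) + (n - 1) * telephone(n - 2)

print([telephone(n) for n in range(10)])
# [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
```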
Summation formula and approximation
The telephone numbers may be expressed exactly as a summation
T(n) = Σ_{k=0}^{⌊n/2⌋} C(n, 2k) · (2k − 1)!! = Σ_{k=0}^{⌊n/2⌋} n! / (2^k · k! · (n − 2k)!).
In each term of the first sum, k gives the number of matched pairs, the binomial coefficient C(n, 2k) counts the number of ways of choosing the 2k elements to be matched, and the double factorial
(2k − 1)!! = 1 · 3 · 5 ··· (2k − 1)
is the product of the odd integers up to its argument and counts the number of ways of completely matching the 2k selected elements. It follows from the summation formula and Stirling's approximation that, asymptotically,
T(n) ~ (n / e)^(n/2) · e^(√n) / (4e)^(1/4).
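The summation formula and the asymptotic estimate can both be checked numerically against the recurrence; the sketch below uses the formulas as reconstructed above and only standard-library functions.

```python
import math

def double_factorial_odd(m: int) -> int:
    """Product of the odd integers 1 * 3 * ... * m (1 when m <= 0)."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

def telephone_sum(n: int) -> int:
    # T(n) = sum_k C(n, 2k) * (2k - 1)!!
    return sum(math.comb(n, 2 * k) * double_factorial_odd(2 * k - 1)
               for k in range(n // 2 + 1))

def telephone_asymptotic(n: int) -> float:
    # (n/e)^(n/2) * e^sqrt(n) / (4e)^(1/4)
    return (n / math.e) ** (n / 2) * math.exp(math.sqrt(n)) / (4 * math.e) ** 0.25

print([telephone_sum(n) for n in range(10)])         # [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
print(telephone_sum(40) / telephone_asymptotic(40))  # close to 1; the ratio tends to 1 as n grows
```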
Generating function
The exponential generating function of the telephone numbers is
Σ_{n≥0} T(n) x^n / n! = exp(x + x²/2).
In other words, the telephone numbers may be read off as the coefficients of the Taylor series of exp(x + x²/2) and, in particular, the n-th telephone number is the value at zero of the n-th derivative of this function. The exponential generating function can be derived in a number of ways; for example, taking the recurrence relation above in the form T(n + 1) = T(n) + n T(n − 1), multiplying it by x^n / n!, and summing over n gives the differential equation
G′(x) = (1 + x) G(x), where G(x) = Σ_{n≥0} T(n) x^n / n!.
The general solution to this differential equation is G(x) = C exp(x + x²/2), and the initial value G(0) = T(0) = 1 shows that the constant of proportionality is C = 1.
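The coefficient extraction can be checked symbolically; a small sketch using sympy (an assumed dependency, not mentioned in the article):

```python
from math import factorial
import sympy as sp

# Telephone numbers as n! times the Taylor coefficients of exp(x + x^2/2).
x = sp.symbols('x')
egf = sp.exp(x + x**2 / 2)
poly = sp.series(egf, x, 0, 10).removeO()
print([int(poly.coeff(x, n) * factorial(n)) for n in range(10)])
# [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
```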
The generating function exp(x + x²/2) is closely related to the exponential generating function of the Hermite polynomials, which are the matching polynomials of the complete graphs.
The sum of absolute values of the coefficients of the n-th (probabilist's) Hermite polynomial He_n(x) is the n-th telephone number, and the telephone numbers can also be realized as certain special values of the Hermite polynomials: T(n) = He_n(i) / i^n, where i is the imaginary unit.
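Both Hermite-polynomial statements are easy to verify for small n; the sketch below builds the probabilist's polynomials He_n from their three-term recurrence rather than relying on any particular library routine.

```python
import sympy as sp

x = sp.symbols('x')

def hermite_prob(n: int):
    # He_0 = 1, He_1 = x, He_{k+1}(x) = x * He_k(x) - k * He_{k-1}(x)
    if n == 0:
        return sp.Integer(1)
    h_prev, h = sp.Integer(1), x
    for k in range(1, n):
        h_prev, h = h, sp.expand(x * h - k * h_prev)
    return h

for n in range(8):
    coeff_sum = sum(abs(c) for c in sp.Poly(hermite_prob(n), x).all_coeffs())
    special = sp.simplify(hermite_prob(n).subs(x, sp.I) / sp.I**n)
    print(n, coeff_sum, special)
# The last two columns both give 1, 1, 2, 4, 10, 26, 76, 232
```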
Prime factors
For large values of n, the n-th telephone number is divisible by a large power of two, roughly 2^(n/4). More precisely, the 2-adic order (the number of factors of two in the prime factorization) of T(4k) and of T(4k + 1) is k; for T(4k + 2) it is k + 1, and for T(4k + 3) it is k + 2.
For any prime number p, one can test whether there exists a telephone number divisible by p by computing the recurrence for the sequence of telephone numbers, modulo p, until either reaching zero or detecting a cycle. The primes that divide at least one telephone number form a sequence beginning 2, 5, 13, ...
The odd primes in this sequence have been called inefficient. Each of them divides infinitely many telephone numbers.
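The divisibility test described above has a direct implementation: iterate the recurrence modulo p, stopping either at a zero value or when the state (previous value, current value, index mod p) repeats. A minimal sketch:

```python
# p divides some telephone number iff 0 appears before the state repeats; the state
# (T(n-1) mod p, T(n) mod p, n mod p) determines the rest of the sequence mod p.
def divides_some_telephone_number(p: int) -> bool:
    seen = set()
    a, b, n = 1 % p, 1 % p, 1   # a = T(n-1), b = T(n)
    while (a, b, n % p) not in seen:
        seen.add((a, b, n % p))
        if b == 0:
            return True
        a, b = b, (b + n * a) % p   # T(n+1) = T(n) + n * T(n-1)
        n += 1
    return False

print([p for p in [2, 3, 5, 7, 11, 13, 17, 19, 23] if divides_some_telephone_number(p)])
# includes 2, 5 and 13, which divide the telephone numbers 2, 10 and 26
```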
| Mathematics | Sequences | null |
46521228 | https://en.wikipedia.org/wiki/Vertebra | Vertebra | Each vertebra (: vertebrae) is an irregular bone with a complex structure composed of bone and some hyaline cartilage, that make up the vertebral column or spine, of vertebrates. The proportions of the vertebrae differ according to their spinal segment and the particular species.
The basic configuration of a vertebra varies; the vertebral body (also centrum) is of bone and bears the load of the vertebral column. The upper and lower surfaces of the vertebra body give attachment to the intervertebral discs. The posterior part of a vertebra forms a vertebral arch, in eleven parts, consisting of two pedicles (pedicle of vertebral arch), two laminae, and seven processes. The laminae give attachment to the ligamenta flava (ligaments of the spine). There are vertebral notches formed from the shape of the pedicles, which form the intervertebral foramina when the vertebrae articulate. These foramina are the entry and exit conduits for the spinal nerves. The body of the vertebra and the vertebral arch form the vertebral foramen; the larger, central opening that accommodates the spinal canal, which encloses and protects the spinal cord.
Vertebrae articulate with each other to give strength and flexibility to the spinal column, and the shape at their back and front aspects determines the range of movement. Structurally, vertebrae are essentially alike across the vertebrate species, with the greatest difference seen between an aquatic animal and other vertebrate animals. As such, vertebrates take their name from the vertebrae that compose the vertebral column.
Structure
In the human vertebral column, the size of the vertebrae varies according to placement in the vertebral column, spinal loading, posture and pathology. Along the length of the spine, the vertebrae change to accommodate different needs related to stress and mobility. Each vertebra is an irregular bone.
A typical vertebra has a body (vertebral body), also known as the centrum, which consists of a large anterior middle portion, and a posterior vertebral arch, also called a neural arch. The body is composed of cancellous bone, which is the spongy type of osseous tissue, whose microanatomy has been specifically studied within the pedicle bones. This cancellous bone is, in turn, covered by a thin coating of cortical bone (or compact bone), the hard and dense type of osseous tissue. The vertebral arch and processes have thicker coverings of cortical bone. The upper and lower surfaces of the body of the vertebra are flattened and rough in order to give attachment to the intervertebral discs. These surfaces are the vertebral endplates which are in direct contact with the intervertebral discs and form the joint. The endplates are formed from a thickened layer of the cancellous bone of the vertebral body, the top layer being more dense. The endplates function to contain the adjacent discs, to evenly spread the applied loads, and to provide anchorage for the collagen fibers of the disc. They also act as a semi-permeable interface for the exchange of water and solutes.
The vertebral arch is formed by pedicles and laminae. Two pedicles extend from the sides of the vertebral body to join the body to the arch. The pedicles are short thick processes that extend, one from each side, posteriorly, from the junctions of the posterolateral surfaces of the centrum, on its upper surface.
From each pedicle a broad plate, a lamina, projects backward and medially to join and complete the vertebral arch and form the posterior border of the vertebral foramen, which completes the triangle of the vertebral foramen. The upper surfaces of the laminae are rough to give attachment to the ligamenta flava. These ligaments connect the laminae of adjacent vertebra along the length of the spine from the level of the second cervical vertebra. Above and below the pedicles are shallow depressions called vertebral notches (superior and inferior). When the vertebrae articulate the notches align with those on adjacent vertebrae and these form the openings of the intervertebral foramina. The foramina allow the entry and exit of the spinal nerves from each vertebra, together with associated blood vessels. The articulating vertebrae provide a strong pillar of support for the body.
Processes
There are seven processes projecting from the vertebra:
one spinous process
two transverse processes
four articular processes
A major part of a vertebra is a backward extending spinous process (sometimes called the neural spine) which projects centrally. This process points dorsally and caudally from the junction of the laminae. The spinous process serves to attach muscles and ligaments.
The two transverse processes, one on each side of the vertebral body, project laterally from either side at the point where the lamina joins the pedicle, between the superior and inferior articular processes. They also serve for the attachment of muscles and ligaments, in particular the intertransverse ligaments. There is a facet on each of the transverse processes of thoracic vertebrae which articulates with the tubercle of the rib. A facet on each side of the thoracic vertebral body articulates with the head of the rib. The transverse process of a lumbar vertebra is also sometimes called the costal or costiform process because it corresponds to a rudimentary rib (costa) which, as opposed to the thorax, is not developed in the lumbar region.
There are superior and inferior articular facet joints on each side of the vertebra, which serve to restrict the range of movement possible. These facets are joined by a thin portion of the vertebral arch called the pars interarticularis.
Regional variation
Vertebrae take their names from the regions of the vertebral column that they occupy. There are usually thirty-three vertebrae in the human vertebral column — seven cervical vertebrae, twelve thoracic vertebrae, five lumbar vertebrae, five fused sacral vertebrae forming the sacrum and four coccygeal vertebrae, forming the coccyx. Excluding rare deviations, the total number of vertebrae ranges from 32 to 35. In about 10% of people, both the total number of pre-sacral vertebrae and the number of vertebrae in individual parts of the spine can vary. The most frequent deviations are eleven (rarely thirteen) thoracic vertebrae, four or six lumbar vertebrae and three or five coccygeal vertebrae (rarely up to seven).
The regional vertebrae increase in size as they progress downward but become smaller in the coccyx.
Cervical
There are seven cervical vertebrae (but eight cervical spinal nerves), designated C1 through C7. These bones are, in general, small and delicate. Their spinous processes are short (with the exception of C2 and C7, which have palpable spinous processes). C1 is also called the atlas, and C2 is also called the axis. The structure of these vertebrae is the reason why the neck and head have a large range of motion. The atlanto-occipital joint allows the skull to move up and down, while the atlanto-axial joint allows the upper neck to twist left and right. The axis also sits upon the first intervertebral disc of the spinal column.
Cervical vertebrae possess transverse foramina to allow for the vertebral arteries to pass through on their way to the foramen magnum to end in the circle of Willis. These are the smallest, lightest vertebrae and the vertebral foramina are triangular in shape. The spinous processes are short and often bifurcated (the spinous process of C7 is not bifurcated, and is substantially longer than that of the other cervical spinous processes).
The atlas differs from the other vertebrae in that it has no body and no spinous process. It has instead a ring-like form, having an anterior and a posterior arch and two lateral masses. At the outside centre points of both arches there is a tubercle, an anterior tubercle and a posterior tubercle, for the attachment of muscles. The front surface of the anterior arch is convex and its anterior tubercle gives attachment to the longus colli muscle. The posterior tubercle is a rudimentary spinous process and gives attachment to the rectus capitis posterior minor muscle. The spinous process is small so as not to interfere with the movement between the atlas and the skull. On the under surface is a facet for articulation with the dens of the axis.
Specific to the cervical vertebra is the transverse foramen (also known as foramen transversarium). This is an opening on each of the transverse processes which gives passage to the vertebral artery and vein and a sympathetic nerve plexus. On the cervical vertebrae other than the atlas, the anterior and posterior tubercles are on either side of the transverse foramen on each transverse process. The anterior tubercle on the sixth cervical vertebra is called the carotid tubercle because it separates the carotid artery from the vertebral artery.
There is a hook-shaped uncinate process on the side edges of the top surface of the bodies of the third to the seventh cervical vertebrae and of the first thoracic vertebra. Together with the vertebral disc, this uncinate process prevents a vertebra from sliding backward off the vertebra below it and limits lateral flexion (side-bending). Luschka's joints involve the vertebral uncinate processes.
The spinous process on C7 is distinctively long and gives the name vertebra prominens to this vertebra. Also a cervical rib can develop from C7 as an anatomical variation.
The term cervicothoracic is often used to refer to the cervical and thoracic vertebrae together, and sometimes also their surrounding areas.
Thoracic
The twelve thoracic vertebrae and their transverse processes have surfaces that articulate with the ribs. Some rotation can occur between the thoracic vertebrae, but their connection with the rib cage prevents much flexion or other movement. They may also be known as "dorsal vertebrae" in the human context.
The vertebral bodies are roughly heart-shaped and are about as wide anterio-posteriorly as they are in the transverse dimension. Vertebral foramina are roughly circular in shape.
The top surface of the first thoracic vertebra has a hook-shaped uncinate process, just like the cervical vertebrae.
The thoracolumbar spine or thoracolumbar division refers to the thoracic and lumbar vertebrae together, and sometimes also their surrounding areas.
The thoracic vertebrae attach to ribs and so have articular facets specific to them; these are the superior, transverse and inferior costal facets. As the vertebrae progress down the spine they increase in size to match up with the adjoining lumbar section.
Lumbar
The five lumbar vertebrae are the largest of the vertebrae, their robust construction being necessary for supporting greater weight than the other vertebrae. They allow significant flexion, extension and moderate lateral flexion (side-bending). The discs between these vertebrae create a natural lumbar lordosis (a spinal curvature that is concave posteriorly). This is due to the difference in thickness between the front and back parts of the intervertebral discs.
The lumbar vertebrae are located between the ribcage and the pelvis and are the largest of the vertebrae. The pedicles are strong, as are the laminae, and the spinous process is thick and broad. The vertebral foramen is large and triangular. The transverse processes are long and narrow and three tubercles can be seen on them. These are a lateral costiform process, a mammillary process and an accessory process. The superior, or upper tubercle is the mammillary process which connects with the superior articular process. The multifidus muscle attaches to the mammillary process and this muscle extends through the length of the vertebral column, giving support. The inferior, or lower tubercle is the accessory process and this is found at the back part of the base of the transverse process. The term lumbosacral is often used to refer to the lumbar and sacral vertebrae together, and sometimes includes their surrounding areas.
Sacral
There are five sacral vertebrae (S1–S5) which are fused in maturity into one large bone, the sacrum, with no intervertebral discs. The sacrum articulates with the ilium on each side to form the sacroiliac joints, joining the spine to the pelvis.
Coccygeal
The last three to five coccygeal vertebrae (but usually four) (Co1–Co5) make up the tailbone or coccyx. There are no intervertebral discs.
Development
Somites form in the early embryo and some of these develop into sclerotomes. The sclerotomes form the vertebrae as well as the rib cartilage and part of the occipital bone. From their initial location within the somite, the sclerotome cells migrate medially toward the notochord. These cells meet the sclerotome cells from the other side of the paraxial mesoderm. The lower half of one sclerotome fuses with the upper half of the adjacent one to form each vertebral body. From this vertebral body, sclerotome cells move dorsally and surround the developing spinal cord, forming the vertebral arch. Other cells move distally to the costal processes of thoracic vertebrae to form the ribs.
Function
Functions of vertebrae include:
Support. The vertebrae function in the skeletomuscular system by forming the vertebral column to support the body.
Protection. Vertebrae contain a vertebral foramen for the passage of the spinal canal and its enclosed spinal cord and covering meninges. They also afford sturdy protection for the spinal cord. The upper and lower surfaces of the centrum are flattened and rough in order to give attachment to the intervertebral discs.
Movement. The vertebrae also provide the openings, the intervertebral foramina which allow the entry and exit of the spinal nerves. Similarly to the surfaces of the centrum, the upper and lower surfaces of the fronts of the laminae are flattened and rough to give attachment to the ligamenta flava. Working together in the vertebral column their sections provide controlled movement and flexibility.
Feeding of the intervertebral discs through the hyaline cartilage endplate that separates the cancellous bone of the vertebral body from each disc.
Clinical significance
There are a number of congenital vertebral anomalies, mostly involving variations in the shape or number of vertebrae, and many of which are unproblematic. Others though can cause compression of the spinal cord. Wedge-shaped vertebrae, called hemivertebrae can cause an angle to form in the spine which can result in the spinal curvature diseases of kyphosis, scoliosis and lordosis. Severe cases can cause spinal cord compression. Block vertebrae where some vertebrae have become fused can cause problems. Spina bifida can result from the incomplete formation of the vertebral arch.
Spondylolysis is a defect in the pars interarticularis of the vertebral arch. In most cases this occurs in the lowest of the lumbar vertebrae (L5), but may also occur in the other lumbar vertebrae, as well as in the thoracic vertebrae.
Spinal disc herniation, more commonly called a slipped disc, is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia. This may be treated by a minimally-invasive endoscopic procedure called Tessys method.
A laminectomy is a surgical operation to remove the laminae in order to access the spinal canal. The removal of just part of a lamina is called a laminotomy.
A pinched nerve caused by pressure from a disc, vertebra or scar tissue might be remedied by a foraminotomy to broaden the intervertebral foramina and relieve pressure. It can also be caused by a foramina stenosis, a narrowing of the nerve opening, as a result of arthritis.
Another condition is spondylolisthesis when one vertebra slips forward onto another. The reverse of this condition is retrolisthesis where one vertebra slips backward onto another.
The vertebral pedicle is often used as a radiographic marker and entry point in vertebroplasty, kyphoplasty, and spinal fusion procedures.
The arcuate foramen is a common anatomical variation more frequently seen in females. It is a bony bridge found on the first cervical vertebra, the atlas where it covers the groove for the vertebral artery.
Degenerative disc disease is a condition usually associated with ageing in which one or more discs degenerate. This can often be a pain-free condition but can also be very painful.
Other animals
In other animals, the vertebrae take the same regional names except for the coccygeal – in animals with tails, the separate vertebrae are usually called the caudal vertebrae. Because of the different types of locomotion and support needed between the aquatic and other vertebrates, the vertebrae between them show the most variation, though basic features are shared. The spinous processes which are backward extending are directed upward in animals without an erect stance. These processes can be very large in the larger animals since they attach to the muscles and ligaments of the body. In the elephant, the vertebrae are connected by tight joints, which limit the backbone's flexibility. Spinous processes are exaggerated in some animals, such as the extinct Dimetrodon and Spinosaurus, where they form a sailback or finback.
Vertebrae with saddle-shaped articular surfaces on their bodies, called "heterocoelous", allow vertebrae to flex both vertically and horizontally while preventing twisting motions. Such vertebrae are found in the necks of birds and some turtles.
"Procoelous" vertebrae feature a spherical protrusion extending from the caudal end of the centrum of one vertebra that fits into a concave socket on the cranial end of the centrum of an adjacent vertebra. These vertebrae are most often found in reptiles, but are found in some amphibians such as frogs. The vertebrae fit together in a ball-and-socket articulation, in which the convex articular feature of an anterior vertebra acts as the ball to the socket of a caudal vertebra. This type of connection permits a wide range of motion in most directions, while still protecting the underlying nerve cord. The central point of rotation is located at the midline of each centrum, and therefore flexion of the muscle surrounding the vertebral column does not lead to an opening between vertebrae.
In many species, though not in mammals, the cervical vertebrae bear ribs. In many groups, such as lizards and saurischian dinosaurs, the cervical ribs are large; in birds, they are small and completely fused to the vertebrae. The transverse processes of mammals are homologous to the cervical ribs of other amniotes. In the whale, the cervical vertebrae are typically fused, an adaptation trading flexibility for stability during swimming. All mammals except manatees and sloths have seven cervical vertebrae, whatever the length of the neck. This includes seemingly unlikely animals such as the giraffe, the camel, and the blue whale, for example. Birds usually have more cervical vertebrae with most having a highly flexible neck consisting of 13–25 vertebrae.
In all mammals, the thoracic vertebrae are connected to ribs, and their bodies differ from those of the other regional vertebrae due to the presence of facets. Each vertebra has a facet on each side of the vertebral body, which articulates with the head of a rib. There is also a facet on each of the transverse processes which articulates with the tubercle of a rib. The number of thoracic vertebrae varies considerably across species. Most marsupials have thirteen, but koalas have only eleven. The usual number in mammals is twelve to fifteen (twelve in humans), though there are eighteen to twenty in the horse, tapir, rhinoceros and elephant. At the extremes, certain sloths have twenty-five, while cetaceans have only nine.
There are fewer lumbar vertebrae in chimpanzees and gorillas, which have three in contrast to the five in the genus Homo. This reduction means the lumbar spine cannot lordose, but it gives an anatomy that favours vertical climbing and hanging, better suited to reaching feeding locations in high-canopied regions. The bonobo differs by having four lumbar vertebrae.
Caudal vertebrae are the bones that make up the tails of vertebrates. They range in number from a few to fifty, depending on the length of the animal's tail.
In humans and other tailless primates, they are called the coccygeal vertebrae, number from three to five and are fused into the coccyx.
| Biology and health sciences | Skeletal system | Biology |
29686197 | https://en.wikipedia.org/wiki/Myalgic%20encephalomyelitis/chronic%20fatigue%20syndrome | Myalgic encephalomyelitis/chronic fatigue syndrome | Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) is a disabling chronic illness. People with ME/CFS experience profound fatigue that does not go away with rest, as well as sleep issues and problems with memory or concentration. The hallmark symptom is post-exertional malaise, a worsening of the illness which can start immediately or hours to days after even minor physical or mental activity. This "crash" can last from hours or days to several months. Further common symptoms include dizziness or faintness when upright and pain.
The cause of the disease is unknown. ME/CFS often starts after an infection, such as mononucleosis. It can run in families, but no genes that contribute to ME/CFS have been confirmed. ME/CFS is associated with changes in the nervous and immune systems, as well as in energy production. Diagnosis is based on symptoms and a differential diagnosis because no diagnostic test is available.
The illness can improve or worsen over time, but full recovery is uncommon. No therapies or medications are approved to treat the condition, and management is aimed at relieving symptoms. Pacing of activities can help avoid worsening symptoms, and counselling may help in coping with the illness. Before the COVID-19 pandemic, ME/CFS affected two to nine out of every 1,000 people, depending on the definition. However, many people who develop long COVID also fit ME/CFS diagnostic criteria. ME/CFS occurs more often in women than in men. It is more common in middle age, but can occur at all ages, including childhood.
ME/CFS has a large social and economic impact, and the disease can be socially isolating. About a quarter of those affected are unable to leave their bed or home. People with ME/CFS often face stigma in healthcare settings, and care is complicated by controversies around the cause and treatments of the illness. Doctors may be unfamiliar with ME/CFS, as it is often not fully covered in medical school. Historically, research funding for ME/CFS has been far below that of diseases with comparable impact.
Classification and terminology
ME/CFS has been classified as a neurological disease by the World Health Organization (WHO) since 1969, initially under the name benign myalgic encephalomyelitis. The classification of ME/CFS as a neurological disease is based on symptoms which indicate a central role of the nervous system. Alternatively, on the basis of abnormalities in immune cells, ME/CFS is sometimes labelled a neuroimmune condition. The disease can further be regarded as a post-acute infection syndrome (PAIS) or an infection-associated chronic illness. PAISes such as long COVID and post-treatment Lyme disease syndrome share many symptoms with ME/CFS and are suspected to have a similar cause.
Many names have been proposed for the illness. The most commonly used are chronic fatigue syndrome, myalgic encephalomyelitis, and the umbrella term myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). Reaching consensus on a name has been challenging because the cause and pathology remain unknown. In the WHO's most recent classification, the ICD-11, chronic fatigue syndrome and myalgic encephalomyelitis are listed under post-viral fatigue syndrome. The term post-infectious fatigue syndrome was initially proposed for a subset of "chronic fatigue syndrome" with a documented triggering infection, but might also be used as a synonym of ME/CFS or for a broader set of fatigue conditions following infection.
Many individuals with ME/CFS object to the term chronic fatigue syndrome. They consider the term simplistic and trivialising, which in turn prevents the illness from being taken seriously. At the same time, there are also issues with the use of myalgic encephalomyelitis (myalgia means muscle pain and encephalomyelitis means brain and spinal cord inflammation), as there is only limited evidence of brain inflammation implied by the name. The umbrella term ME/CFS would retain the better-known phrase CFS without trivialising the disease, but some people object to this name too, as they see CFS and ME as distinct illnesses.
A 2015 report from the US Institute of Medicine recommended the illness be renamed systemic exertion intolerance disease (SEID) and suggested new diagnostic criteria. While the new name was not widely adopted, the diagnostic criteria were taken up by the CDC. Like CFS, the name SEID focuses on only a single symptom, and opinion from those affected was generally negative.
Signs and symptoms
ME/CFS causes debilitating fatigue, sleep problems, and post-exertional malaise (PEM, a worsening of overall symptoms after mild activity). In addition, cognitive issues, orthostatic intolerance (dizziness or nausea when upright) or other physical symptoms may be present. Symptoms significantly reduce the ability to function and typically last for three to six months before a diagnosis can be confirmed. ME/CFS usually starts after an infection. Onset can be sudden or more gradual over weeks to months.
Core symptoms
People with ME/CFS experience persistent debilitating fatigue. It is made worse by normal physical, mental, emotional, and social activity, and is not a result of ongoing overexertion. Rest provides limited relief from fatigue. Particularly in the initial period of illness, this fatigue is described as "flu-like". Individuals may feel "physically drained" and unable to start or finish activities. They may also feel restless while fatigued, describing their experience as "wired but tired". When starting an activity, muscle strength may drop rapidly, which can lead to difficulty with coordination, clumsiness or sudden weakness. Mental fatigue may also make cognitive efforts difficult. The fatigue experienced in ME/CFS is of a longer duration and greater severity than in other conditions characterized by fatigue.
The hallmark feature of ME/CFS is a worsening of symptoms after exertion, known as post-exertional malaise or post-exertional symptom exacerbation. PEM involves increased fatigue and is disabling. It can also include flu-like symptoms, pain, cognitive difficulties, gastrointestinal issues, nausea, and sleep problems. All types of activities that require energy, whether physical, cognitive, social, or emotional, can trigger PEM. Examples include attending a school event, food shopping, or even taking a shower. For some, being in a stimulating environment can be sufficient to trigger PEM. PEM usually starts 12 to 48 hours after the activity, but can also follow immediately after. PEM can last hours, days, weeks, or months. Extended periods of PEM, commonly referred to as "crashes" or "flare-ups" by people with the illness, can lead to a prolonged relapse.
Unrefreshing sleep is a further core symptom. People wake up exhausted and stiff rather than restored after a night's sleep. This can be caused by a pattern of sleeping during the day and being awake at night, shallow sleep, or broken sleep. However, even a full night's sleep is typically non-restorative. Some individuals experience insomnia, hypersomnia (excessive sleepiness), or vivid nightmares.
Cognitive dysfunction in ME/CFS can be as disabling as physical symptoms, leading to difficulties at work or school, as well as in social interactions. People with ME/CFS sometimes describe it as "brain fog", and report a slowdown in information processing. Individuals may have difficulty speaking, struggling to find words and names. They may have trouble concentrating or multitasking, or may have difficulties with short-term memory. Tests often show problems with short-term visual memory, reaction time and reading speed. There may also be problems with attention and verbal memory.
People with ME/CFS often experience orthostatic intolerance, symptoms that start or worsen with standing or sitting. Symptoms, which include nausea, lightheadedness, and cognitive impairment, often improve again after lying down. Weakness and vision changes may also be triggered by the upright posture. Some have postural orthostatic tachycardia syndrome (POTS), an excessive increase in heart rate after standing up, which can result in fainting. Additionally, individuals may experience orthostatic hypotension, a drop in blood pressure after standing.
Other common symptoms
Pain and hyperalgesia (an abnormally increased sensitivity to pain) are common in ME/CFS. The pain is not accompanied by swelling or redness and can be present in muscles (myalgia) and joints. Individuals with ME/CFS may have chronic pain behind the eyes and in the neck, as well as neuropathic pain (related to disorders of the nervous system). Headaches and migraines that were not present before the illness can occur as well. However, chronic daily headaches may indicate an alternative diagnosis.
Additional common symptoms include irritable bowel syndrome or other problems with digestion, chills and night sweats, shortness of breath or an irregular heartbeat. Some experience sore lymph nodes and a sore throat. People may also develop allergies or become sensitive to foods, lights, noise, smells or chemicals.
Illness severity
ME/CFS often leads to serious disability, but the degree varies considerably. ME/CFS is generally classified into four categories of illness severity:
People with mild ME/CFS can usually still work and care for themselves, but they will need their free time to recover from these activities rather than engage in social and leisure activities.
Moderate severity impedes activities of daily living (self-care activities, such as making a meal). People are usually unable to work and require frequent rest.
Those with severe ME/CFS are homebound and can do only limited activities of daily living, for instance brushing their teeth. They may be wheelchair-dependent and spend the majority of their time in bed.
With very severe ME/CFS, people are mostly bed-bound and cannot care for themselves.
Roughly a quarter of those living with ME/CFS fall into the mild category, and half fall into the moderate or moderate-to-severe categories. The final quarter falls into the severe or very severe category. Severity may change over time. Symptoms might get worse, improve, or the illness may go into remission for a period of time. People who feel better for a period of time may overextend their activities, triggering PEM and a worsening of symptoms.
Those with severe and very severe ME/CFS experience more extreme and diverse symptoms. They may face severe weakness and greatly limited ability to move. They can lose the ability to speak, swallow, or communicate completely due to cognitive issues. They can further experience severe pain and hypersensitivities to touch, light, sound, and smells. Minor day-to-day activities can be sufficient to trigger PEM.
Individuals with ME/CFS have decreased quality of life when evaluated by the SF-36 questionnaire, especially in the domains of physical and social functioning, general health, and vitality. However, their emotional functioning and mental health are not much lower than those of healthy individuals. Functional impairment in ME/CFS can be greater than in multiple sclerosis, heart disease, or lung cancer. Fewer than half of people with ME/CFS are employed, and roughly one in five have a full-time job.
Causes
The cause of ME/CFS is not yet known. Between 60% and 80% of cases start after an infection, usually a viral infection. A genetic factor is believed to contribute, but there is no single gene known to be responsible for increased risk. Instead, many gene variants probably have a small individual effect, but their combined effect can be strong. Other factors may include problems with the nervous and immune systems, as well as energy metabolism. ME/CFS is a biological disease, not a psychological condition, and is not due to deconditioning.
Besides viruses, other reported triggers include stress, traumatic events, and environmental exposures such as to mould. Bacterial infections such as Q fever are other potential triggers. ME/CFS may further occur after physical trauma, such as an accident or surgery. Pregnancy has been reported as a trigger in around 3% to 10% of cases. ME/CFS can also begin with multiple minor triggering events, followed by a final trigger that leads to a clear onset of symptoms.
Risk factors
ME/CFS can affect people of all ages, ethnicities, and income levels, but it is more common in women than men. People with a history of frequent infections are more likely to develop it. Those with family members who have ME/CFS are also at higher risk, suggesting a genetic factor. In the United States, white Americans are diagnosed more frequently than other groups, but the illness is probably at least as prevalent among African Americans and Hispanics. It used to be thought that ME/CFS was more common among those with higher incomes. Instead, people in minority groups or lower income groups may have increased risks due to poorer nutrition, lower healthcare access, and increased work stress.
Viral infections
Viral infections have long been suspected to cause ME/CFS, based on the observation that ME/CFS sometimes occurs in outbreaks and is possibly connected to autoimmune diseases. How viral infections cause ME/CFS is unclear; it could be via viral persistence or via a "hit and run" mechanism, in which infections dysregulate the immune system or cause autoimmunity.
Different types of viral infection have been implicated in ME/CFS, including airway infections, bronchitis, gastroenteritis, or an acute "flu-like illness". Between 15% and 50% of people with long COVID also meet the diagnostic criteria for ME/CFS. Of people who get infectious mononucleosis, which is caused by the Epstein–Barr virus (EBV), around 8% to 15% develop ME/CFS, depending on criteria. Other viral infections that can trigger ME/CFS are the H1N1 influenza virus, varicella zoster (the virus that causes chickenpox and shingles), and SARS-CoV-1.
Reactivation of latent viruses, in particular EBV and human herpesvirus 6, has also been hypothesised to drive symptoms. EBV is present in about 90% of people, usually in a latent state. The levels of antibodies to EBV are commonly higher in people with ME/CFS, indicating possible viral reactivation.
Pathophysiology
ME/CFS is associated with changes in several areas, including the nervous and immune systems, as well as disturbances in energy metabolism. Neurological differences include autonomic nervous system dysfunction and a change in brain structure and metabolism. Observed changes in the immune system include decreased natural killer cell function and, in some cases, autoimmunity.
Neurological
A range of structural, biochemical, and functional abnormalities are found in brain imaging studies of people with ME/CFS. Common findings are changes in the brainstem and the use of additional brain areas for cognitive tasks. Other consistent findings, based on a smaller number of studies, are low metabolism in some areas, reduced serotonin transporters, and problems with neurovascular coupling.
Neuroinflammation has been proposed as an underlying mechanism of ME/CFS that could explain a large set of symptoms. Several studies suggest neuroinflammation in the cortical and limbic regions of the brain. Individuals with ME/CFS, for instance, have higher brain lactate and choline levels, which are signs of neuroinflammation. However, more direct evidence from two small positron emission tomography studies of microglia, a type of immune cell in the brain, was contradictory.
ME/CFS affects sleep. Individuals experience decreased sleep efficiency, take longer to fall asleep, and take longer to achieve REM sleep, a phase of sleep characterised by rapid eye movement. Changes to non-REM sleep have also been found, together suggesting a role of the autonomic nervous system. Individuals often have a blunted heart rate response to exercise, but a higher heart rate during a tilt table test when the body is rotated from lying flat to an upright position. This again suggests dysfunction in the autonomic nervous system.
Immunological
People with ME/CFS often have immune system abnormalities. A consistent finding in studies is a decreased function of natural killer cells, a type of immune cell that targets virus-infected and tumour cells. They are also more likely to have active viral infections, correlating with cognitive issues and fatigue. T cells show less metabolic activity, which may reflect that they have reached an exhausted state and cannot respond effectively against pathogens.
Autoimmunity has been proposed to be a factor in ME/CFS. There is a subset of people with ME/CFS with increased levels of autoantibodies, possibly as a result of viral mimicry. Some may have higher levels of autoantibodies to muscarinic acetylcholine receptors as well as to β2 adrenergic receptors. Problems with these receptors can lead to impaired blood flow.
Energy
Objective signs of PEM have been found with the 2-day cardiopulmonary exercise test. People with ME/CFS have lower performance compared to healthy controls on the first test. On the second test, healthy people's scores stay roughly the same or increase slightly, while those with ME/CFS have a clinically significant decrease in work rate at the anaerobic threshold. Potential causes include mitochondrial dysfunction, and issues with the transport and use of oxygen. Some of the usual recovery processes following exercise may be lacking, providing an alternative explanation for PEM.
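As a purely illustrative sketch (not from the source), the two-day comparison described above can be summarised as a percentage change in work rate at the anaerobic threshold between the two test days; the short Python example below uses hypothetical wattage figures.

def percent_change(day1_watts, day2_watts):
    # Relative change in work rate between the two test days (negative = decline)
    return 100.0 * (day2_watts - day1_watts) / day1_watts

# Hypothetical work rates (watts) at the anaerobic threshold
control_change = percent_change(110, 112)  # healthy control: roughly stable or slightly up
me_cfs_change = percent_change(95, 70)     # ME/CFS: marked second-day decline
print(f"control: {control_change:+.1f}%, ME/CFS: {me_cfs_change:+.1f}%")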
Studies have observed mitochondrial abnormalities in cellular energy production, but differences between studies make it hard to draw clear conclusions. ATP, the primary energy carrier in cells, is likely more frequently produced from lipids and amino acids than from carbohydrates.
Other
Some people with ME/CFS have abnormalities in their hypothalamic–pituitary–adrenal axis hormones. This can include lower cortisol levels, less change in cortisol levels throughout the day, and a weaker reaction to stress and stimuli. Other proposed abnormalities are reduced blood flow to the brain under orthostatic stress (as found in a tilt table test), small-fibre neuropathy, and an increase in the amount of gut microbes entering the blood. The diversity of gut microbes is reduced compared to healthy controls. Women with ME/CFS are more likely to experience endometriosis, early menopause, and other menstrual irregularities compared to women without the condition.
Diagnosis
Diagnosis of ME/CFS is based on symptoms and involves taking a medical history and a mental and physical examination. No specific lab tests are approved for diagnosis; while physical abnormalities can be found, no single finding is considered sufficient for diagnosis. Blood and urine tests are used to rule out other conditions that could be responsible for the symptoms. People with ME/CFS often face significant delays in obtaining a diagnosis, and diagnoses may be missed altogether. Specialists in ME/CFS may be asked to confirm the diagnosis, as primary care physicians often lack a good understanding of the illness.
Diagnostic criteria
Multiple research and clinical criteria exist to diagnose ME/CFS. These include the NICE guidelines, Institute of Medicine (IOM) criteria, the International Consensus Criteria (ICC), the Canadian Consensus Criteria (CCC), and CDC criteria. The criteria sets were all developed based on expert consensus and differ in the required symptoms and which conditions preclude a diagnosis of ME/CFS. The definitions differ in their conceptualisation of the cause and mechanisms of ME/CFS.
As there is no single known biomarker for ME/CFS, it is not possible to determine which set of criteria is the most accurate. A trade-off must be made between overdiagnosis and missing more diagnoses. The broad Fukuda criteria have a higher risk of overdiagnosis, whereas the strict ICC criteria have a higher risk of missing people. The IOM and NICE criteria fall in the middle.
The 1994 CDC criteria, sometimes called the Fukuda criteria, require six months of persistent or relapsing fatigue for diagnosis, as well as the persistent presence of four out of eight other symptoms. While used frequently, the Fukuda criteria have limitations: PEM and cognitive issues are not mandatory. The large variety of optional symptoms can lead to diagnosis of individuals who differ significantly from each other.
The Canadian Consensus Criteria, another commonly used criteria set, was developed in 2003. In addition to PEM, fatigue and sleep problems, pain and neurological or cognitive issues are required for diagnosis. Furthermore, three categories of symptoms are defined (orthostatic, thermal instability, and immunological). At least one symptom in two of these categories needs to be present. People diagnosed under the CCC have more severe symptoms compared to those diagnosed under the Fukuda criteria. The 2011 International Consensus Criteria defines ME using symptom clusters and has no minimum duration of symptoms. Similarly to the CCC criteria, ICC is stricter than the Fukuda criteria and selects more severely ill people.
The 2015 IOM criteria share significant similarities with the CCC but were developed to be easy to use for clinicians. Diagnosis requires fatigue, PEM, non-restorative sleep, and either cognitive issues (such as memory impairment) or orthostatic intolerance. Additionally, fatigue must persist for at least six months, substantially impair activities in all areas of life, and have a clearly defined onset. Symptoms must be present at least half of the time, and be of moderate severity or worse; previous criteria just required symptoms to be present. In 2021, NICE revised its criteria based on the IOM criteria. The updated criteria require fatigue, PEM, non-restorative sleep, and cognitive difficulties persisting for at least three months.
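To make the logical structure of the 2015 IOM criteria described above explicit, here is a minimal Python sketch. It is illustrative only, not a clinical tool; the field names and the way symptom frequency and severity are collapsed into a single flag are assumptions made for brevity.

from dataclasses import dataclass

@dataclass
class Presentation:
    fatigue_months: float        # duration of the substantial, new-onset fatigue
    impairs_all_areas: bool      # substantially impairs activities in all areas of life
    pem: bool                    # post-exertional malaise
    unrefreshing_sleep: bool
    cognitive_impairment: bool
    orthostatic_intolerance: bool
    half_time_moderate: bool     # symptoms present at least half the time, moderate or worse

def meets_iom_sketch(p: Presentation) -> bool:
    # Core requirements: at least six months of impairing fatigue, PEM, unrefreshing sleep
    core = (p.fatigue_months >= 6 and p.impairs_all_areas
            and p.pem and p.unrefreshing_sleep)
    # Plus at least one of cognitive impairment or orthostatic intolerance
    either = p.cognitive_impairment or p.orthostatic_intolerance
    return core and either and p.half_time_moderate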
Separate diagnostic criteria have been developed for children and young people. A diagnosis for children often requires a shorter symptom duration. For example, the CCC definition only requires three months of persistent symptoms in children compared to six months for adults. NICE requires only four weeks of symptoms to suspect ME/CFS in children, compared to six weeks in adults. Exclusionary diagnoses also differ; for instance, children and teenagers may have anxiety related to school attendance, which could explain symptoms.
Clinical assessment
Screening can be done using the DePaul Symptom Questionnaire, which assesses the frequency and severity of ME/CFS symptoms. Individuals may struggle to answer questions related to PEM, if they are unfamiliar with the symptom. To find patterns in symptoms, they may be asked to keep a diary.
A physical exam may appear completely normal, particularly if the individual has rested substantially before a doctor's visit. There may be tenderness in the lymph nodes and abdomen or signs of hypermobility. Answers to questions may show a temporary difficulty with finding words or other cognitive problems. Cognitive tests and a two-day cardiopulmonary exercise test (CPET) can be helpful to document aspects of the illness, but they may be risky as they can cause severe PEM. They may be warranted to support a disability claim. Orthostatic intolerance can be measured with a tilt table test. If that is unavailable, it can also be assessed with the simpler NASA 10-minute lean test, which tests the response to prolonged standing.
Standard laboratory findings are usually normal. Standard tests when suspecting ME/CFS include an HIV test, and blood tests to determine full blood count, red blood cell sedimentation rate (ESR), C-reactive protein, blood glucose and thyroid-stimulating hormone. Tests for antinuclear antibodies may come back positive, but below the levels that suggest the individual may have lupus. C-reactive protein levels are often at the high end of normal. Serum ferritin levels may be useful to test, as borderline anaemia can make some ME/CFS symptoms worse.
Differential diagnosis
Some medical conditions have symptoms similar to ME/CFS. Diagnosis often involves clinical evaluation, testing, and specialist referrals to identify the correct condition. During the time other possible diagnoses are explored, advice can be given on symptom management to help prevent the condition from getting worse. Before a diagnosis of ME/CFS is confirmed, a waiting period is used to exclude acute medical conditions or symptoms which may resolve within that time frame.
Possible differential diagnoses span a large set of specialties and depend on the medical history. Examples are infectious diseases, such as Epstein–Barr virus and Lyme disease, and neuroendocrine disorders, including diabetes and hypothyroidism. Blood disorders, such as anaemia, and some cancers may also present similar symptoms. Various rheumatological and autoimmune diseases, such as Sjögren's syndrome, lupus, and arthritis, may have overlapping symptoms with ME/CFS. Furthermore, it may be necessary to evaluate psychiatric diseases, such as depression or substance use disorder, as well as neurological disorders, such as narcolepsy, multiple sclerosis, and craniocervical instability. Finally, sleep disorders, coeliac disease, and side effects of medications may also explain symptoms.
Joint and muscle pain without swelling or inflammation is a common feature of ME/CFS, but is more closely associated with fibromyalgia. Modern definitions of fibromyalgia not only include widespread pain but also fatigue, sleep disturbances, and cognitive issues. This makes it difficult to distinguish ME/CFS from fibromyalgia and the two are often co-diagnosed.
Another common condition that often co-occurs with ME/CFS is hypermobile Ehlers–Danlos syndrome (EDS). Unlike ME/CFS, EDS is present from birth. People with ME/CFS are more often hypermobile compared to the general population. Sleep apnoea may also co-occur with ME/CFS. However, many diagnostic criteria require ruling out sleep disorders before confirming a diagnosis of ME/CFS.
As with other chronic illnesses, depression and anxiety co-occur frequently with ME/CFS. Depression may be differentially diagnosed by the presence of feelings of worthlessness, the inability to feel pleasure, loss of interest, and/or guilt, and by the absence of ME/CFS bodily symptoms such as autonomic dysfunction, pain, migraines, and PEM. People with chronic fatigue that is not due to ME/CFS or other chronic illnesses may be diagnosed with idiopathic (unexplained) chronic fatigue.
Management
There is no approved drug treatment or cure for ME/CFS, although some symptoms can be treated or managed. Care for ME/CFS involves multidisciplinary healthcare professionals. Usually, the primary care clinician plays an important role in coordinating health care, social care and educational support for those still in school. This coordinator can help provide access to community resources such as occupational therapy and district nursing. Management may start with treating the most disabling symptom first, and tackle symptoms one by one in further health care visits.
Pacing, or managing one's activities to stay within energy limits, can reduce episodes of PEM. Addressing sleep problems with good sleep hygiene, or medication if required, may be beneficial. Chronic pain is common in ME/CFS, and the CDC recommends consulting with a pain management specialist if over-the-counter painkillers are insufficient. For cognitive impairment, adaptations like organisers and calendars may be helpful.
Co-occurring conditions that may interact with and worsen ME/CFS symptoms are common, and treating these may help manage ME/CFS. Commonly diagnosed ones include fibromyalgia, irritable bowel syndrome, migraines and mast cell activation syndrome. The debilitating nature of ME/CFS can cause depression, anxiety, or other psychological problems, which can be treated. People with ME/CFS may be unusually sensitive to medications, especially ones that affect the central nervous system.
Pacing and energy management
Pacing, or activity management, involves balancing periods of rest with periods of activity. The goal of pacing is to stabilize the illness and avoid triggering PEM. This involves staying within an individual's available energy envelope to reduce the PEM "payback" caused by overexertion. The technique was developed for ME/CFS in the 1980s.
Pacing can involve breaking up large tasks into smaller ones and taking extra breaks, or creating easier ways to do activities. For example, this might include sitting down while doing the laundry. The decision to stop an activity (and rest or change an activity) is determined by self-awareness of a worsening of symptoms. Use of a heart rate monitor may help some individuals with pacing.
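As one hedged illustration of how a heart rate monitor might serve as an objective cue for pacing, the Python sketch below flags readings above a personal threshold; the threshold value and the idea of a fixed cut-off are assumptions for illustration, not a recommendation from the source.

def over_pacing_threshold(heart_rate_bpm, threshold_bpm=100):
    # True when a reading exceeds an individually chosen threshold;
    # the default of 100 bpm is a placeholder, not clinical guidance.
    return heart_rate_bpm > threshold_bpm

readings = [82, 91, 104, 97]  # hypothetical readings during a household task
alerts = [over_pacing_threshold(hr) for hr in readings]  # [False, False, True, False]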
Research on pacing and energy envelope theory typically shows positive effects. However, these studies have often had a low number of participants and have rarely included methods to check if study participants implemented pacing well. Pacing is difficult to apply for people with very severe ME/CFS, as the activities that trigger PEM in this group, such as eating, cannot be avoided completely.
Those with a stable illness who understand how to "listen to their body" may be able to carefully and flexibly increase their activity levels. The goal of an exercise programme would be to increase stamina, while not interfering with everyday tasks or making the illness more severe. In many chronic illnesses intense exercise is beneficial, but in ME/CFS it is not recommended, a position also taken by the CDC.
Graded exercise therapy (GET), a proposed treatment for ME/CFS that assumes deconditioning and a fear of activity play important roles in maintaining the illness, is no longer recommended for people with ME/CFS. Reviews of GET either see weak evidence of a small to moderate effect or no evidence of effectiveness. GET can have serious adverse effects. Similarly, a form of cognitive behavioural therapy (CBT) that assumed the illness is maintained by unhelpful beliefs about the illness and avoidance of activity is no longer recommended.
Symptom relief
The first management step for sleep problems in ME/CFS is improving sleep habits. If sleep problems remain after implementing sleep hygiene routines, cognitive behavioural therapy for insomnia can be offered. Avoiding naps during the day can further improve sleep, but there may be a trade-off with needed rest during the day. Drugs that help with insomnia in fibromyalgia, such as trazodone or suvorexant, may help in ME/CFS too.
Pain is initially managed with over-the-counter pain medication, such as ibuprofen or paracetamol (acetaminophen). If this is insufficient, referral to a pain specialist or counselling on pain management can be the next step. Heat treatment, hydrotherapy and gentle massage can sometimes help. In addition, stretching and exercise may help with pain, but a balance must be struck, as they can trigger PEM. While there is lack of evidence on pharmaceutical options for pain management in ME/CFS, medication that works for fibromyalgia may be tried, such as pregabalin.
As in other chronic illnesses, those with ME/CFS often experience mental health issues like anxiety and depression. Psychotherapy, such as CBT, may help manage the stress of being ill and teach self-management strategies. Family sessions may be useful to educate people close to those with ME/CFS about the severity of the illness. Antidepressants can be useful, but side effects may be more common than in the general population. For instance, it may be difficult to stop weight gain due to exercise intolerance.
Bowel issues are a common symptom of ME/CFS. For some, eliminating specific foods, such as caffeine, alcohol, gluten, or dairy, can alleviate symptoms. Those with orthostatic intolerance can benefit from increased salt and fluid intake. Compression stockings can help with orthostatic intolerance.
Severe ME/CFS
People with moderate to severe ME/CFS may benefit from home adaptations and mobility aids, such as wheelchairs, disability parking, shower chairs, or stair lifts. To manage sensitivities to environmental stimuli, these stimuli can be limited. For instance, the surroundings can be made perfume-free, or an eye mask or earplugs can be used. Those with severe ME/CFS may have significant trouble getting nutrition. Intravenous feeding (via blood) or tube feeding may be necessary to address this or to address electrolyte imbalances.
Patients who cannot move easily in bed may need help to prevent pressure sores. Regular repositioning is important to keep their joints flexible and prevent contractures and stiffness. Osteoporosis may pose a risk over the long term. Symptoms of severe ME/CFS may be misunderstood as neglect or abuse during well-being evaluations, and NICE recommends that professionals with experience in ME/CFS should be involved in any type of assessment for safeguarding.
Prognosis
Information on the prognosis of ME/CFS is limited. Complete recovery, partial improvement, and worsening are all possible, but full recovery is uncommon. Symptoms generally fluctuate over days, weeks, or longer periods, and some people may experience periods of remission. Overall, many will have to adjust to life with ME/CFS.
An early diagnosis may improve care and prognosis. Factors that may make the disease worse over days, but also over longer periods, are physical and mental exertion, a new infection, sleep deprivation, and emotional stress. Some people who improve need to manage their activities to prevent a relapse. Children and teenagers are more likely to recover or improve than adults. For instance, a study in Australia among 6- to 18-year-olds found that two-thirds reported recovery after 10 years and that the typical duration of illness was five years.
The effect of ME/CFS on life expectancy is poorly studied, and the evidence is mixed. One large retrospective study on the topic found no increase in all-cause mortality due to ME/CFS. Death from suicide was, however, significantly higher among those with ME/CFS. In extreme cases, people can die from the illness.
Epidemiology
Reported prevalence rates vary widely depending on how ME/CFS is defined and diagnosed. Overall, around one in 150 people have ME/CFS. Based on the 1994 CDC diagnostic criteria, the global prevalence rate for CFS is 0.89%. In comparison, estimates using the stricter 1988 CDC criteria or the 2003 Canadian Consensus Criteria for ME/CFS produced a prevalence rate of only 0.17%.
In England and Wales, over 250,000 people are estimated to be affected. These estimates are based on data before the COVID-19 pandemic. It is likely that numbers have increased as a large share of people with long COVID meet the diagnostic criteria of ME/CFS. A 2021–2022 CDC survey found that 1.3% of adults in the United States, or 3.3 million, had ME/CFS.
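As a simple arithmetic check on how such prevalence figures translate into case counts, the Python lines below multiply a prevalence rate by a population; the population number is a rounded assumption, not a figure from the source.

def estimated_cases(prevalence_fraction, population):
    # Prevalence expressed as a fraction multiplied by the population at risk
    return prevalence_fraction * population

us_adults = 258_000_000  # rounded assumption for the number of US adults
print(round(estimated_cases(0.013, us_adults)))  # about 3.4 million, close to the CDC survey figure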
Women are diagnosed with ME/CFS about 1.5 to four times more often than men. The prevalence in children and adolescents is slightly lower than in adults, and it is lower in children than in adolescents. The incidence rate (the rate of new onsets of ME/CFS) has two peaks, one at 10–19 years and another at 30–39 years, and the prevalence is highest in middle age.
History
From 1934 onwards, there were multiple outbreaks globally of an unfamiliar illness, initially mistaken for polio. A 1950s outbreak at London's Royal Free Hospital led to the term "benign myalgic encephalomyelitis" (ME). Those affected displayed symptoms such as malaise, sore throat, pain, and signs of nervous system inflammation. While its infectious nature was suspected, the exact cause remained elusive. The syndrome appeared in sporadic as well as epidemic cases.
In 1970, two UK psychiatrists proposed that these ME outbreaks were psychosocial phenomena, suggesting mass hysteria or altered medical perception as potential causes. This theory, though challenged, sparked controversy and cast doubt on ME's legitimacy in the medical community.
Melvin Ramsay's later research highlighted ME's disabling nature, prompting the removal of "benign" from the name and the creation of diagnostic criteria in 1986. These criteria included the tendency of muscles to tire after minor effort and take multiple days to recover, high symptom variability, and chronicity. Despite Ramsay's work and a UK report affirming that ME was not a psychological condition, scepticism persisted within the medical field, leading to limited research.
In the United States, Nevada and New York State saw outbreaks of what appeared similar to mononucleosis in the mid-1980s. People suffered from "chronic or recurrent fatigue", among a large number of other symptoms. An initial link between the illness and elevated antibodies to the Epstein–Barr virus led to the name "chronic Epstein–Barr virus syndrome". The CDC renamed it chronic fatigue syndrome (CFS), as a viral cause could not be confirmed in studies. An initial case definition of CFS was outlined in 1988; the CDC published new diagnostic criteria in 1994, which became widely referenced.
In the 2010s, ME/CFS began to gain more recognition from health professionals and the public. Two reports proved key in this shift. In 2015, the US Institute of Medicine produced a report with new diagnostic criteria that described ME/CFS as a "serious, chronic, complex systemic disease". Following this, the US National Institutes of Health published their Pathways to Prevention report, which gave recommendations on research priorities.
Society and culture
Controversy
ME/CFS is a contested illness, with debates mainly revolving around the cause of the illness and treatments. Historically, there was a heated discussion about whether the condition was psychological or neurological. Professionals who subscribed to the psychological model had frequent conflicts with patients, who believed their illness to be organic. While ME/CFS is now generally believed to be a multisystem neuroimmune condition, a subset of professionals still see the condition as psychosomatic, or an "illness-without-disease".
The possible role of chronic viral infection in ME/CFS has been a subject of disagreement. One study caused considerable controversy by reporting a causal relationship between ME/CFS and a retrovirus called XMRV. Some with the illness began taking antiretroviral drugs targeted specifically at HIV/AIDS, another retrovirus, and national blood supplies were suspected to be tainted with the retrovirus. After several years of study, the XMRV findings were determined to be the result of contamination of the testing materials.
Treatments based on behavioural and psychological models of the illness have also been the subject of much contention. The largest clinical trial of behavioural interventions, the 2011 PACE trial, concluded that graded exercise therapy and CBT are moderately effective. The trial drew heavy criticism. The study authors weakened their definition of recovery during the trial, to the point that some participants already met a key recovery criterion before the trial started. A reanalysis under the original clinical trial protocol showed no significant difference in recovery rate between the treatment groups and the controls receiving standard care.
Doctor–patient relations
People with ME/CFS often face stigma in healthcare settings, and the majority of individuals report negative healthcare experiences. They may feel that their doctor inappropriately calls their illness psychological or doubts the severity of their symptoms. They may also feel forced to prove that they are legitimately ill. Some may be given outdated treatments that provoke symptoms or assume their illness is due to unhelpful thoughts and deconditioning.
Clinicians may be unfamiliar with ME/CFS, as it is often not fully covered in medical school. Due to this unfamiliarity, people may go undiagnosed for years or be misdiagnosed with mental health conditions. As individuals gain knowledge about their illness over time, their relationship with treating physicians changes. They may feel on a more equal footing with their doctors and able to work in partnership. At times, relationships may deteriorate instead as the previous asymmetry of knowledge breaks down.
Economic and social impact
ME/CFS negatively impacts people's social lives and relationships. Stress can be compounded by disbelief in the illness from the support network, who can be sceptical due to the subjective nature of diagnosis. Many people with the illness feel socially isolated, and thoughts of suicide are high, especially in those without a supportive care network. ME/CFS interrupts normal development in children, making them more dependent on their family for assistance instead of gaining independence as they age. Caring for somebody with ME/CFS can be a full-time role, and the stress of caregiving is made worse by the lack of effective treatments.
Economic costs due to ME/CFS are significant. In the United States, estimates range from $36 to $51 billion per year, considering both lost wages and healthcare costs. A 2017 estimate for the annual economic burden in the United Kingdom was £3.3 billion.
Advocacy
Patient organisations have aimed to involve researchers via activism but also by publishing research themselves—similarly to AIDS activism in the 1980s, which also sought to combat underfunding and stigma. Citizen scientists, for example, helped start discussions about weaknesses in trials of psychological treatments.
ME/CFS International Awareness Day takes place on 12 May. The goal of the day is to raise awareness among the public and health care workers about the diagnosis and treatment of ME/CFS. The date was chosen because it is the birthday of Florence Nightingale, who had an unidentified illness similar to ME/CFS.
Research
Research into ME/CFS seeks to find a better understanding of the disease's causes, biomarkers to aid in diagnosis, and treatments to relieve symptoms. The emergence of long COVID has sparked increased interest in ME/CFS, as the two conditions may share pathology and treatment for one may treat the other.
Funding
Historical research funding for ME/CFS has been far below that of comparable diseases. In a 2015 report, the US National Academy of Sciences said that "remarkably little research funding" had been dedicated to causes, mechanisms, and treatment. Lower funding levels have led to a smaller number and size of studies. In addition, drug companies have invested very little in the disease.
The US National Institutes of Health (NIH) is the largest biomedical funder worldwide. Using rough estimates of disease burden, a study found NIH funding for ME/CFS was only 3% to 7% of the average disease per healthy life year lost between 2015 and 2019. Worldwide, multiple sclerosis, which affects fewer people and results in disability no worse than ME/CFS, received 20 times as much funding between 2007 and 2015.
Multiple reasons have been proposed for the low funding levels. Diseases for which society "blames the victim" are frequently underfunded. This may explain why COPD, a severe lung disease often caused by smoking, receives low funding per healthy life year lost. Similarly, for ME/CFS, the historical belief that it is caused by psychological factors may have contributed to lower funding. Gender bias may also play a role; the NIH spends less on diseases that predominantly affect women in relation to disease burden. Less well-funded research areas may also struggle to compete with more mature areas of medicine for the same grants.
Directions
Many biomarkers for ME/CFS have been proposed. Studies on biomarkers have often been too small to draw robust conclusions. Natural killer cells have been identified as an area of interest for biomarker research as they show consistent abnormalities. Other proposed markers include electrical measurements of blood cells and Raman microscopy of immune cells. Several small studies have investigated the genetics of ME/CFS, but none of their findings have been replicated. A larger study, DecodeME, is currently underway in the United Kingdom.
Various drug treatments for ME/CFS are being explored. Drugs under investigation often target the nervous system, the immune system, autoimmunity, or pain directly. More recently, there has been a growing interest in drugs targeting energy metabolism. In several clinical trials of ME/CFS, rintatolimod showed a small reduction in symptoms, but improvements were not sustained after discontinuation. Rintatolimod has been approved in Argentina. Rituximab, a drug that depletes B cells, was studied and found to be ineffective. Another option targeting autoimmunity is immune adsorption, which removes a large set of (auto)antibodies from the blood.
Challenges
Symptoms and their severity can widely differ among people with ME/CFS. This poses a challenge for research into the cause and progression of the disease. Dividing people into subtypes may help manage this heterogeneity. The existence of multiple diagnostic criteria and variations in how scientists apply them complicate comparisons between studies. Definitions also vary in which co-occurring conditions preclude a diagnosis of ME/CFS.
| Biology and health sciences | Specific diseases | Health |
29692404 | https://en.wikipedia.org/wiki/Type%20II%20Cepheid | Type II Cepheid | Type II Cepheids are variable stars which pulsate with periods typically between 1 and 50 days. They are population II stars: old, typically metal-poor, low mass objects.
Like all Cepheid variables, Type IIs exhibit a relationship between the star's luminosity and pulsation period, making them useful as standard candles for establishing distances where little other data is available.
Longer period Type II Cepheids, which are more luminous, have been detected beyond the Local Group in the galaxies NGC 5128 and NGC 4258.
Classification
Historically Type II Cepheids were called W Virginis variables, but they are now divided into three subclasses based on the length of their period. Stars with periods between 1 and 4 days belong to the BL Herculis subclass, and those with periods of 10–20 days belong to the W Virginis subclass. Stars with periods greater than 20 days, and usually alternating deep and shallow minima, belong to the RV Tauri subclass. RV Tauri variables are usually classified by a formal period from deep minimum to deep minimum, hence 40 days or more.
The divisions between the types are not always clear-cut or agreed. For example, the dividing line between BL Her and W Vir types is quoted at anything between 4 and 10 days, with no obvious division between the two. RV Tau variables may not have obvious alternating minima, while some W Vir stars do. Nevertheless, each type is thought to represent a distinct evolutionary stage, with BL Her stars being helium core burning objects moving from the horizontal branch towards the asymptotic giant branch (AGB), W Vir stars undergoing hydrogen or helium shell burning on a blue loop, and RV Tau stars being post-AGB objects at or near the end of nuclear fusion.
RV Tau stars in particular show irregularities in their light curves, with slow variations in the brightness of both maxima and minima, variations in the period, intervals with little variation, and sometimes a temporary breakdown into chaotic behaviour. R Scuti has one of the most irregular light curves.
Properties
The physical properties of the type II Cepheid variables are very poorly known. For example, they are expected to have masses near or below that of the Sun, but there are few examples with reliably known masses.
Period-luminosity relationship
Type II Cepheids are fainter than their classical Cepheid counterparts for a given period by about 1.6 magnitudes. Cepheid variables are used to establish the distance to the Galactic Center, globular clusters, and galaxies.
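As a hedged sketch of how such a period-luminosity relation is used as a standard candle: once the relation supplies an absolute magnitude M for a star of known period, the observed apparent magnitude m gives the distance through the distance modulus m - M = 5 log10(d) - 5, with d in parsecs. The magnitudes in the Python example below are hypothetical placeholders, not calibrated values.

def distance_pc(apparent_mag, absolute_mag):
    # Invert the distance modulus m - M = 5*log10(d) - 5, with d in parsecs
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical example: a Type II Cepheid whose period-luminosity relation
# gives M = -2.0, observed at apparent magnitude m = 13.0
print(f"{distance_pc(13.0, -2.0):,.0f} pc")  # 10,000 pc, i.e. about 10 kpc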
Examples
Type II Cepheids are not as well known as their type I counterparts, with only a couple of naked-eye examples. For RV Tauri variables, the period quoted is the interval between successive deep minima, hence twice the comparable period for the other sub-types.
| Physical sciences | Stellar astronomy | Astronomy |
25025301 | https://en.wikipedia.org/wiki/Slab%20pull | Slab pull | Slab pull is a geophysical mechanism whereby the cooling and subsequent densifying of a subducting tectonic plate produces a downward force along the rest of the plate. In 1975 Forsyth and Uyeda used the inverse theory method to show that, of the many forces likely to be driving plate motion, slab pull was the strongest. Plate motion is partly driven by the weight of cold, dense plates sinking into the mantle at oceanic trenches. This force and slab suction account for almost all of the force driving plate tectonics. The ridge push at rifts contributes only 5 to 10%.
Carlson et al. (1983) in Lallemand et al. (2005) defined the slab pull force as:
F_sp = K · Δρ · L · √A
Where:
K is (gravitational acceleration = 9.81 m/s²) according to McNutt (1984);
Δρ = 80 kg/m³ is the mean density difference between the slab and the surrounding asthenosphere;
L is the slab length calculated only for the part above 670 km (the upper/lower mantle boundary);
A is the slab age in Ma at the trench.
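Reading the formula above literally (and treating K simply as the quoted 9.81 m/s² value), the short Python sketch below evaluates it; the slab length and age are made-up example values, and the resulting number is only meaningful in whatever unit convention K carries in the cited papers (a force per unit trench length).

import math

K = 9.81          # quoted as gravitational acceleration, m/s^2
delta_rho = 80.0  # kg/m^3, slab-asthenosphere density difference
L = 500e3         # m, hypothetical slab length above the 670 km discontinuity
A = 80.0          # Ma, hypothetical slab age at the trench

# F_sp = K * Δρ * L * sqrt(A), as defined above
F_sp = K * delta_rho * L * math.sqrt(A)
print(f"F_sp ≈ {F_sp:.3e}")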
The slab pull force manifests itself between two extreme forms:
Aseismic back-arc extension, as in the Izu–Bonin–Mariana Arc.
Strong earthquakes and back-arc thrusting, as in the Aleutian and Chilean subduction zones.
Between these two extremes lies the evolution of the Farallon Plate: from a huge slab width driving the Nevadan, Sevier and Laramide orogenies and the Mid-Tertiary ignimbrite flare-up, to its remnants, the Juan de Fuca and Cocos plates, with the Basin and Range Province under extension, slab break-off, smaller slab width, more edges and mantle return flow.
Some early models of plate tectonics envisioned the plates riding on top of convection cells like conveyor belts. However, most scientists working today believe that the asthenosphere does not directly cause motion by the friction of such basal forces. The North American Plate is nowhere being subducted, yet it is in motion, as are the African, Eurasian and Antarctic Plates. Ridge push is thought to be responsible for the motion of these plates.
The subducting slabs around the Pacific Ring of Fire cool down the Earth and its core–mantle boundary. Around the African Plate, upwelling mantle plumes from the core–mantle boundary produce rifting, including the African and Ethiopian rift valleys.
| Physical sciences | Tectonics | Earth science |
32058867 | https://en.wikipedia.org/wiki/WhatsApp | WhatsApp | WhatsApp (officially WhatsApp Messenger) is an instant messaging (IM) and voice-over-IP (VoIP) service owned by technology conglomerate Meta. It allows users to send text, voice messages and video messages, make voice and video calls, and share images, documents, user locations, and other content. WhatsApp's client application runs on mobile devices, and can be accessed from computers. The service requires a cellular mobile telephone number to sign up. In January 2018, WhatsApp released a standalone business app called WhatsApp Business which can communicate with the standard WhatsApp client.
The service was created by WhatsApp Inc. of Mountain View, California, which was acquired by Facebook in February 2014 for approximately US$19.3 billion. It became the world's most popular messaging application by 2015, and had more than 2 billion users worldwide by February 2020, a figure confirmed four years later amid 200 million new registrations per month. By 2016, it had become the primary means of Internet communication in regions including the Americas, the Indian subcontinent, and large parts of Europe and Africa.
History
2009–2014
WhatsApp was founded in February 2009 by Brian Acton and Jan Koum, former employees of Yahoo!. A month earlier, after Koum purchased an iPhone, he and Acton decided to create an app for the App Store. The idea started off as an app that would display statuses in a phone's Contacts menu, showing if a person was at work or on a call.
Their discussions often took place at the home of Koum's Russian friend Alex Fishman in West San Jose. They realized that to take the idea further, they would need an iPhone developer. Fishman visited RentACoder.com, found Russian developer Igor Solomennikov, and introduced him to Koum.
Koum named the app WhatsApp to sound like "what's up". On February 24, 2009, he incorporated WhatsApp Inc. in California. However, when early versions of WhatsApp kept crashing, Koum considered giving up and looking for a new job. Acton encouraged him to wait for a "few more months".
In June 2009, when the app had been downloaded by only a handful of Fishman's Russian-speaking friends, Apple launched push notifications, allowing users to be pinged even when not using the app.
Koum updated WhatsApp so that everyone in the user's network would be notified when a user's status changed. This new facility, to Koum's surprise, was used by users to ping "each other with jokey custom statuses like, 'I woke up late' or 'I'm on my way.'"
Fishman said "At some point it sort of became instant messaging".
WhatsApp 2.0, released for iPhone in August 2009, featured a purpose-designed messaging component; the number of active users suddenly increased to 250,000.
Although Acton was working on another startup idea, he decided to join the company. In October 2009, Acton persuaded five friends from his time at Yahoo! to invest $250,000 in seed funding, and he became a co-founder and was given a stake. He officially joined WhatsApp on November 1. Koum then hired a friend in Los Angeles, Chris Peiffer, to develop a BlackBerry version, which arrived two months later. Subsequently, WhatsApp for Symbian OS was added in May 2010, and for Android OS in August 2010. In 2010, Google made multiple acquisition offers for WhatsApp, which were all declined.
To cover the cost of sending verification texts to users, WhatsApp was changed from a free service to a paid one. In December 2009, the ability to send photos was added to the iOS version. By early 2011, WhatsApp was one of the top 20 apps in the U.S. Apple App Store.
In April 2011, Sequoia Capital invested about $8 million for more than 15% of the company, after months of negotiation by Sequoia partner Jim Goetz.
By February 2013, WhatsApp had about 200 million active users and 50 staff members. Sequoia invested another $50 million, and WhatsApp was valued at $1.5 billion. Sometime in 2013, WhatsApp acquired the Santa Clara–based startup SkyMobius, the developers of Vtok, a video and voice calling app.
In a December 2013 blog post, WhatsApp claimed that 400 million active users used the service each month. WhatsApp ended 2013 with $148 million in expenses and a net loss of $138 million.
2014–2015
On February 19, 2014, one year after a venture capital financing round at a $1.5 billion valuation, Facebook, Inc. (now Meta Platforms) announced it was acquiring WhatsApp for US$19 billion, its largest acquisition to date. At the time, it was the largest acquisition of a venture-capital-backed company in history. Sequoia Capital received an approximate 5,000% return on its initial investment. Facebook, advised by Allen & Co, paid $4 billion in cash and $12 billion in Facebook shares, plus an additional $3 billion in restricted stock units granted to WhatsApp's founders Koum and Acton; WhatsApp was advised by Morgan Stanley. Employee stock was scheduled to vest over four years subsequent to closing. Days after the announcement, WhatsApp users experienced a loss of service, leading to anger across social media.
The acquisition was influenced by the data provided by Onavo, Facebook's research app for monitoring competitors and trending usage of social activities on mobile phones, as well as startups that were performing "unusually well".
The acquisition caused many users to try, or move to, other messaging services. Telegram claimed that it gained 8 million new users, and Line 2 million.
At a keynote presentation at the Mobile World Congress in Barcelona in February 2014, Facebook CEO Mark Zuckerberg said that Facebook's acquisition of WhatsApp was closely related to the Internet.org vision. A TechCrunch article said about Zuckerberg's vision:

The idea, he said, is to develop a group of basic internet services that would be free of charge to use – "a 911 for the internet". These could be a social networking service like Facebook, a messaging service, maybe search and other things like weather. Providing a bundle of these free of charge to users will work like a gateway drug of sorts – users who may be able to afford data services and phones these days just don't see the point of why they would pay for those data services. This would give them some context for why they are important, and that will lead them to pay for more services like this – or so the hope goes.
Three days after announcing the Facebook purchase, Koum said they were working to introduce voice calls. He also said that new mobile phones would be sold in Germany with the WhatsApp brand, and that their ultimate goal was to be on all smartphones.
In August 2014, WhatsApp was the most popular messaging app in the world, with more than 600 million users. By early January 2015, WhatsApp had 700 million monthly users and over 30 billion messages every day. In April 2015, Forbes predicted that between 2012 and 2018, the telecommunications industry would lose $386 billion because of "over-the-top" services like WhatsApp and Skype. That month, WhatsApp had over 800 million users. By September 2015, it had grown to 900 million; and by February 2016, one billion.
On November 30, 2015, the Android WhatsApp client made links to the messaging service Telegram unclickable and uncopyable. Multiple sources confirmed that this was intentional rather than a bug, and that it had been implemented through Android code in WhatsApp that recognized Telegram URLs (the word "telegram" appeared in WhatsApp's code). Some considered it an anti-competitive measure; WhatsApp offered no explanation.
2016–2019
On January 18, 2016, WhatsApp's co-founder Jan Koum announced that it would no longer charge users a $1 annual subscription fee, in an effort to remove a barrier faced by users without payment cards. He also said that the app would not display any third-party ads, and that it would have new features such as the ability to communicate with businesses.
On May 18, 2017, the European Commission announced that it was fining Facebook €110 million for "providing misleading information about WhatsApp takeover" in 2014. The Commission said that in 2014 when Facebook acquired the messaging app, it "falsely claimed it was technically impossible to automatically combine user information from Facebook and WhatsApp." However, in the summer of 2016, WhatsApp had begun sharing user information with its parent company, allowing information such as phone numbers to be used for targeted Facebook advertisements. Facebook acknowledged the breach, but said the errors in their 2014 filings were "not intentional".
In September 2017, WhatsApp's co-founder Brian Acton left the company to start a nonprofit group, later revealed as the Signal Foundation, which developed the WhatsApp competitor Signal. He explained his reasons for leaving in an interview with Forbes a year later. WhatsApp also announced a forthcoming business platform to enable companies to provide customer service at scale, and airlines KLM and Aeroméxico announced their participation in the testing. Both airlines had previously launched customer services on the Facebook Messenger platform.
In January 2018, WhatsApp launched WhatsApp Business for small business use.
In April 2018, WhatsApp co-founder and CEO Jan Koum announced he would be leaving the company over concerns about Facebook's approach to privacy, advertising, and monetization. By leaving before November 2018, Acton and Koum gave up $1.3 billion in unvested stock options. Facebook later announced that Koum's replacement would be Chris Daniels.
On November 25, 2019, WhatsApp announced an investment of $250,000 through a partnership with Startup India to provide 500 startups with Facebook ad credits of $500 each.
In December 2019, WhatsApp announced that a new update would lock out any Apple users who hadn't updated to iOS 9 or higher, and Samsung, Huawei, Sony and Google users who hadn't updated to Android version 4.0, by February 1, 2020. The company also reported that Windows Phone operating systems would no longer be supported after December 31, 2019. WhatsApp was reported to be the third most downloaded mobile app of the decade 2010–2019.
Since 2020
In March 2020, WhatsApp partnered with the World Health Organization and UNICEF to provide messaging hotlines for people to get information on the COVID-19 pandemic. In the same month, WhatsApp began testing a feature to help users find out more information and context about the messages they receive, in order to help combat misinformation.
In January 2021, WhatsApp announced a controversial new privacy policy allowing WhatsApp to share data with its parent company, Facebook; users who did not accept by February 8, 2021, would lose access to the app. This led many users to abandon WhatsApp and move to other services such as Signal and Telegram. Facebook said the WhatsApp policy would not apply in the EU, since it violates the principles of the GDPR. Facing criticism, WhatsApp postponed the update to May 15, 2021, and said it had no plans to limit the functionality of the app for users who did not approve the new terms, nor to repeatedly prompt them to do so.
On October 4, 2021, Facebook had its worst outage since 2008, which also affected other platforms owned by Facebook, such as Instagram and WhatsApp.
In August 2022, WhatsApp launched an integration with JioMart, available only to users in India. Local users can text special numbers in the app to launch an in-app shopping process, where they can order groceries.
In 2022, WhatsApp added the ability for users to turn off their online status.
In March 2024, Meta announced that WhatsApp would let third-party messaging services enable interoperability with WhatsApp, a requirement of the EU's Digital Markets Act (DMA). This allows users to send messages between other messaging apps and WhatsApp while maintaining end-to-end encryption.
Features
In November 2010, a slate of improvements to the iOS version of WhatsApp was released, including the ability to search messages in chat history, trim long videos to a sendable size, cancel media messages while they upload or download, and preview photos before sending them.
In March 2012, WhatsApp improved its location-sharing function, allowing users to share not only their location, but also the location of places, such as restaurants or hotels.
In August 2013, WhatsApp added voice messages to their apps, giving users a way to send short audio recordings directly in their chats.
In January 2015, WhatsApp launched a web client that allowed users to scan a QR code with their mobile app, mirroring their chats to their browser. The web client was not standalone, and required the user's phone to stay on and connected to the internet. It was also not available for iOS users on launch, due to limitations from Apple.
Voice calls between two accounts were added to the app in March and April 2015. By June 2016, the company's blog reported more than 100 million voice calls per day were being placed on WhatsApp.
On November 10, 2016, WhatsApp launched a beta version of two-factor authentication for Android users, which allowed them to use their email addresses for further protection. Also in November 2016, Facebook ceased collecting WhatsApp data for advertising in Europe. Later that month, video calls between two accounts were introduced.
On February 24, 2017, (WhatsApp's 8th birthday), WhatsApp launched a new Status feature similar to Snapchat and Facebook stories.
In July 2017, WhatsApp added support for file uploads of all file types, with a limit of 100 MB. Previously, between March 2016 and May 2017, only limited file types categorised as images (JPG, PNG, GIF), videos (MP4, AVI), and documents (CSV, DOC/DOCX, PDF, PPT/PPTX, RTF, TXT, XLS/XLSX) were allowed to be shared as file attachments.
Later in September 2018, WhatsApp introduced group audio and video call features. In October, the "Swipe to Reply" option was added to the Android beta version, 16 months after it was introduced for iOS.
On October 25, 2018, WhatsApp announced support for stickers. Unlike other platforms, however, WhatsApp requires users to install third-party apps in order to add sticker packs.
In October 2019, WhatsApp officially launched a new fingerprint app-locking feature for Android users.
In early 2020, WhatsApp launched its "dark mode" for iPhone and Android devices – a new design consisting of a darker palette.
In October 2020, WhatsApp rolled out a feature allowing users to mute both individuals and group chats forever. The mute options are "8 hours", "1 week", and "Always". The "Always" option replaced the "1 year" option that was originally part of the settings.
In March 2021, WhatsApp started rolling out support for third-party animated stickers, initially in Iran, Brazil and Indonesia, then worldwide.
In July 2021, WhatsApp announced forthcoming support for sending uncompressed images and videos in three options (Auto, Best Quality and Data Saver), as well as end-to-end encryption for backups stored in Facebook's cloud. The company was also testing multi-device support, allowing desktop users to run WhatsApp without an active phone session.
In August 2021, WhatsApp launched a feature that allows chat history to be transferred between mobile operating systems. This was implemented only on Samsung phones, with plans to expand to Android and iOS "soon".
WhatsApp has the facility to hide users' online status ("Last Seen"). In December 2021, WhatsApp changed the default setting from "everyone" to only people in the user's contacts or who have been conversed with ("nobody" is also an option).
In April 2022, WhatsApp announced undated plans to roll out a Communities feature allowing several group chats to exist in a shared space, with unified notifications and smaller discussion groups. The company also announced plans to implement reactions, the ability for administrators to delete messages in groups, and voice calls with up to 32 participants.
In May 2022, the file upload limit was raised from 100 MB to 2 GB, and maximum group size increased to 512 members.
In April 2023, the app rolled out a feature that would allow account access across multiple phones, in a shift that would make it more like competitors. Messages would still be end-to-end encrypted.
WhatsApp officially rolled out Companion mode for Android users, allowing them to link up to five Android phones to a single account. The feature was later made available to iOS users, allowing them to link up to four iPhones.
In May 2023, WhatsApp allowed users to edit messages, aligning itself with competitors such as Telegram and Signal, which already offered this feature. According to the company, messages could be edited within a 15-minute window after being sent. Edited messages were tagged as "edited" to inform recipients that the content had been modified. WhatsApp also rolled out a feature called "Voice Status Updates", which allows users to record voice notes and share them as their status on the app.
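The 15-minute edit window amounts to a simple timestamp comparison. The sketch below illustrates that rule in Python; the function and field names are hypothetical, not WhatsApp's actual implementation.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    EDIT_WINDOW = timedelta(minutes=15)  # editing allowed for 15 minutes after sending

    def can_edit(sent_at: datetime, now: Optional[datetime] = None) -> bool:
        # A message may only be edited while it is still inside the window.
        now = now or datetime.now(timezone.utc)
        return now - sent_at <= EDIT_WINDOW

    sent = datetime.now(timezone.utc) - timedelta(minutes=10)
    print(can_edit(sent))  # True: the message is still editable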
In June 2023, a feature called WhatsApp Channels was launched which allows content creators, public figures and organizations to send newsletter-like broadcasts to large numbers of users. Unlike messages in groups or private chats, channels are not end-to-end encrypted. Channels were initially only available to users in Colombia and Singapore, then later Egypt, Chile, Malaysia, Morocco, Ukraine, Kenya and Peru before becoming widely available in September 2023.
In July 2023, video messages were added to WhatsApp. Similar to voice messages, this feature allows users to record and send short videos directly in a chat. This lets users share videos of themselves more quickly, and without adding anything to their device's gallery. Currently, video messages are limited to 60 seconds.
In October 2023, support for logging in to multiple accounts was added, allowing users to switch between different WhatsApp accounts in the same app. WhatsApp also introduced passkey support, letting users verify their login with on-device biometrics rather than SMS. Text formatting options such as code blocks, quote blocks, and bulleted lists became available for the first time.
In November 2023, WhatsApp added a "voice chat" feature for groups with more than 32 members. Unlike their 32-person group calls, starting a voice chat doesn't call all group members directly; they instead receive a notification to join the voice chat. WhatsApp also began rolling out support for sending login codes to a linked email address, rather than via SMS. In a later update on November 30, WhatsApp added a Secret Code feature, which allows those who use locked chats to enter a unique password that hides those chats from view when unlocking the app.
In December 2023, WhatsApp's "View Once" feature expanded to include voice messages. Voice messages sent this way are deleted after the recipient listens to them the first time.
In April 2024, an AI-powered "Smart Assistant" became widely available in WhatsApp, allowing users to ask it questions or have it complete tasks such as generating images. The assistant is based on the LLaMa 3 model, and is also available on other Meta platforms like Facebook and Instagram. WhatsApp also introduced chat filters, allowing users to sort their chats by All, Unread or Groups.
In June 2024, improvements were made to voice and video calls, allowing up to 32 participants in video calls, adding audio to screen sharing, and introducing a new codec to increase call reliability.
In September 2024, WhatsApp expanded support for Meta AI, allowing users to send text and photos to Meta AI in order to ask questions, identify objects, translate text or edit pictures.
In October 2024, WhatsApp expanded their chat filter feature, adding the ability for users to create custom lists that contain specific chats of their choice.
In November 2024, the ability to transcribe voice messages was added, allowing users to read out what was said in a voice message, rather than listening to the audio.
In December 2024, WhatsApp introduced several new video calling features, including the ability to select specific participants from a group to make a call, rather than calling all group members. Visual effects also became available, adding visual filters to a user's video feed.
In December 2024, WhatsApp introduced a reverse image search feature, allowing users to verify image authenticity directly within the app using Google Search.
Platform support
Platform history
After months at beta stage, the official first release of WhatsApp for iOS launched in November 2009. In January 2010, support for BlackBerry smartphones was added; and subsequently for Symbian OS in May 2010, and for Android OS in August 2010. In August 2011, a beta for Nokia's non-smartphone OS Series 40 was added. A month later, support for Windows Phone was added, followed by BlackBerry 10 in March 2013. In April 2015, support for Samsung's Tizen OS was added. The oldest device capable of running WhatsApp was the Symbian-based Nokia N95 released in March 2007, but support was later discontinued.
In August 2014, WhatsApp released an update, adding support for Android Wear smartwatches.
On January 21, 2015, WhatsApp launched WhatsApp Web, a browser-based web client that could be used by syncing with a mobile device's connection.
On February 26, 2016, WhatsApp announced they would cease support for BlackBerry (including BlackBerry 10), Nokia Series 40, and Symbian S60, as well as older versions of Android (2.2), Windows Phone (7.0), and iOS (6), by the end of 2016. BlackBerry, Nokia Series 40, and Symbian support was then extended to June 30, 2017. In June 2017, support for BlackBerry and Series 40 was once again extended until the end of 2017, while Symbian was dropped.
Support for BlackBerry and older (version 8.0) Windows Phone and older (version 6) iOS devices was dropped on January 1, 2018, but was extended to December 2018 for Nokia Series 40. In July 2018, it was announced that WhatsApp would soon be available for KaiOS feature phones.
Android and iPhone
WhatsApp's principal, fully supported platforms are Android phones and iPhones.
WhatsApp Web
WhatsApp was officially made available for PCs through a web client, under the name WhatsApp Web, in late January 2015 through an announcement made by Koum on his Facebook page: "Our web client is simply an extension of your phone: the web browser mirrors conversations and messages from your mobile device—this means all of your messages still live on your phone". As of January 21, 2015, the desktop version was only available to Android, BlackBerry, and Windows Phone users. Later on, it also added support for iOS, Nokia Series 40, and Nokia S60 (Symbian).
Previously, the WhatsApp user's handset had to be connected to the Internet for the browser application to function, but as of an update in October 2021 that is no longer the case. All major desktop browsers are supported except for Internet Explorer. WhatsApp Web's user interface is based on the default Android one and can be accessed through web.whatsapp.com. Access is granted after users scan their personal QR code through their mobile WhatsApp application.
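Conceptually, the QR step pairs a browser session with an already-authenticated phone: the web client obtains a random session token, displays it as a QR code, and the phone scans and approves it. The sketch below is a toy model of that idea with hypothetical class and method names; it is not WhatsApp's actual pairing protocol.

    import secrets

    class PairingServer:
        def __init__(self):
            self.pending = set()   # tokens waiting to be approved by a phone
            self.sessions = {}     # token -> account that approved it

        def new_browser_session(self) -> str:
            token = secrets.token_urlsafe(32)  # rendered as a QR code in the browser
            self.pending.add(token)
            return token

        def phone_scans(self, token: str, account: str) -> bool:
            # The logged-in phone scans the QR code and approves the browser session.
            if token in self.pending:
                self.pending.remove(token)
                self.sessions[token] = account
                return True
            return False

    server = PairingServer()
    qr_token = server.new_browser_session()
    print(server.phone_scans(qr_token, "14155550123@s.whatsapp.net"))  # True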
There are similar solutions for macOS, such as the open-source ChitChat, previously known as WhatsMac.
In January 2021, the limited Android beta version allowed users to use WhatsApp Web without having to keep the mobile app connected to the Internet. In March 2021, this beta feature was extended to iOS users. However, linked devices (using WhatsApp Web, WhatsApp Desktop or Facebook Portal) will become disconnected if people don't use their phone for over 14 days. The multi-device beta can only show messages for the last 3 months on the web version, which was not the case without the beta because the web version was syncing with the phone.
Since April 2022, the multi-device beta has been integrated into WhatsApp by default, and users can no longer check old messages on the web version.
Windows and Mac
On May 10, 2016, the messaging service was introduced for both Microsoft Windows and macOS operating systems. Support for video and voice calls from desktop clients was later added. Similar to the WhatsApp Web format, the app, which synchronises with a user's mobile device, is available for download on the website. It supported operating systems Windows 8 and OS X 10.10 and higher.
In 2023, WhatsApp replaced the Electron-based apps with native versions for their respective platforms. The Windows version is based on UWP while the Mac version is a port of the iOS version using Catalyst technology.
Smartwatches
WhatsApp added support for Android Wear (now called Wear OS) in 2014.
Lack of iPad support
WhatsApp does not have an official iPad client. While the majority of iPhone apps can run on the iPad in an iPhone-sized window, WhatsApp was one of the very few apps to be completely unavailable on the iPad due to the "telephony" restriction. In a 2022 interview with The Verge, WhatsApp chief Will Cathcart acknowledged that "[p]eople have wanted an iPad app for a long time" and said that the team would "love to do it." In September 2023, a beta version of WhatsApp was released for iPad. No official release date has been announced.
iPad users searching for WhatsApp are shown numerous third-party clients. Several top results have names and logos resembling WhatsApp itself, and some users do not realize they are using a third-party client. Per WhatsApp's policy, using third-party clients can result in the account getting permanently banned.
Technical
WhatsApp uses a customized version of the open standard Extensible Messaging and Presence Protocol (XMPP). Upon installation, it creates a user account using the user's phone number as the username (Jabber ID: [phone number]@s.whatsapp.net).
WhatsApp software automatically compares all the phone numbers from the device's address book with its central database of WhatsApp users to automatically add contacts to the user's WhatsApp contact list. Previously the Android and Nokia Series 40 versions used an MD5-hashed, reversed-version of the phone's IMEI as password, while the iOS version used the phone's Wi-Fi MAC address instead of IMEI. A 2012 update implemented generation of a random password on the server side. Alternatively a user can send to any contact in the WhatsApp database through the url https://api.whatsapp.com/send/?phone=[phone number] where [phone number] is the number of the contact including the country code.
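To make the identifiers above concrete, the sketch below builds a Jabber ID from a phone number, reproduces the legacy MD5-over-reversed-IMEI password derivation for historical interest, and assembles a click-to-chat URL. The helper names and example values are illustrative assumptions; the legacy password scheme was replaced by server-generated passwords in 2012, as noted above.

    import hashlib
    from urllib.parse import urlencode

    def jabber_id(phone_number: str) -> str:
        # Account name format: the phone number at s.whatsapp.net.
        return f"{phone_number}@s.whatsapp.net"

    def legacy_android_password(imei: str) -> str:
        # Historical Android/Series 40 scheme only: MD5 of the reversed IMEI.
        return hashlib.md5(imei[::-1].encode("ascii")).hexdigest()

    def click_to_chat_url(phone_number: str) -> str:
        # Public send URL; the number must include the country code.
        return "https://api.whatsapp.com/send/?" + urlencode({"phone": phone_number})

    print(jabber_id("14155550123"))                    # 14155550123@s.whatsapp.net
    print(legacy_android_password("490154203237518"))  # example IMEI, not a real device
    print(click_to_chat_url("14155550123"))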
Some devices using dual SIMs may not be compatible with WhatsApp, though there are unofficial workarounds to install the app.
In February 2015, WhatsApp implemented voice calling, which helped WhatsApp attract a different segment of the user population. WhatsApp's voice codec is Opus, which uses the modified discrete cosine transform (MDCT) and linear predictive coding (LPC) audio compression algorithms. WhatsApp uses Opus at 8–16 kHz sampling rates. On November 14, 2016, WhatsApp introduced video calling for users on Android, iPhone, and Windows Phone devices.
In November 2017, WhatsApp implemented a feature giving users seven minutes to delete messages sent by mistake.
Multimedia messages are sent by uploading the image, audio or video to be sent to an HTTP server and then sending a link to the content along with its Base64 encoded thumbnail, if applicable.
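A minimal sketch of that pattern: the media itself lives behind an HTTP link, while the message carries only the link, an optional caption, and a Base64-encoded thumbnail for the preview. The field names and JSON framing are assumptions for illustration, not WhatsApp's wire format.

    import base64
    import json

    def build_media_message(media_url: str, thumbnail_bytes: bytes, caption: str = "") -> str:
        # The recipient downloads the full media from the URL; the inline
        # thumbnail lets the client show a preview immediately.
        payload = {
            "type": "media",
            "url": media_url,
            "thumbnail_b64": base64.b64encode(thumbnail_bytes).decode("ascii"),
            "caption": caption,
        }
        return json.dumps(payload)

    # Tiny placeholder bytes stand in for a real JPEG thumbnail.
    print(build_media_message("https://example.com/uploads/photo.jpg", b"\xff\xd8fake", "Lunch"))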
WhatsApp uses a "store and forward" mechanism for exchanging messages between two users. When a user sends a message, it is stored on a WhatsApp server, which tries to forward it to the addressee, and repeatedly requests acknowledgement of receipt. When the message is acknowledged, the server deletes it; if undelivered after 30 days, it is also deleted.
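The store-and-forward behaviour can be modelled as a per-recipient queue in which each message is kept until it is acknowledged, and expired after 30 days either way. This is a toy sketch of the idea described above, not WhatsApp's server code.

    import time
    from collections import defaultdict

    RETENTION_SECONDS = 30 * 24 * 60 * 60  # undelivered messages expire after 30 days

    class StoreAndForwardServer:
        def __init__(self):
            self.queues = defaultdict(list)  # recipient -> [(stored_at, message)]

        def send(self, recipient: str, message: str) -> None:
            # Store the message until the recipient acknowledges receipt.
            self.queues[recipient].append((time.time(), message))

        def deliver(self, recipient: str) -> list:
            # Hand over everything still pending, dropping anything past retention.
            now = time.time()
            pending = [(ts, m) for ts, m in self.queues[recipient]
                       if now - ts < RETENTION_SECONDS]
            self.queues[recipient] = pending
            return [m for _, m in pending]

        def acknowledge(self, recipient: str, message: str) -> None:
            # Once receipt is acknowledged, the server deletes its copy.
            self.queues[recipient] = [(ts, m) for ts, m in self.queues[recipient]
                                      if m != message]

    server = StoreAndForwardServer()
    server.send("alice", "hello")
    for m in server.deliver("alice"):
        server.acknowledge("alice", m)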
End-to-end encryption
On November 18, 2014, Open Whisper Systems announced a partnership with WhatsApp to provide end-to-end encryption by incorporating the encryption protocol used in Signal into each WhatsApp client platform. Open Whisper Systems said that they had already incorporated the protocol into the latest WhatsApp client for Android, and that support for other clients, group/media messages, and key verification would be coming soon after. WhatsApp confirmed the partnership to reporters, but there was no announcement or documentation about the encryption feature on the official website, and further requests for comment were declined. In April 2015, German magazine Heise security used ARP spoofing to confirm that the protocol had been implemented for Android-to-Android messages, and that WhatsApp messages from or to iPhones running iOS were still not end-to-end encrypted. They expressed the concern that regular WhatsApp users still could not tell the difference between end-to-end encrypted messages and regular messages.
On April 5, 2016, WhatsApp and Open Whisper Systems announced that they had finished adding end-to-end encryption to "every form of communication" on WhatsApp, and that users could now verify each other's keys. Users were also given the option to enable a trust on first use mechanism in order to be notified if a correspondent's key changes. According to a white paper that was released along with the announcement, WhatsApp messages are encrypted with the Signal Protocol. WhatsApp calls are encrypted with SRTP, and all client-server communications are "layered within a separate encrypted channel".
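The optional key-change notification behaves like trust on first use: the first identity key seen for a contact is pinned, and a later change triggers a warning until the user re-verifies. The sketch below only models that bookkeeping; the SHA-256 fingerprint is an assumption, and the real clients compare the Signal Protocol's safety numbers and QR codes instead.

    import hashlib

    class KeyPinStore:
        """Trust on first use: pin the first key seen per contact, warn on change."""

        def __init__(self):
            self.pins = {}  # contact -> hex fingerprint

        @staticmethod
        def fingerprint(public_key: bytes) -> str:
            return hashlib.sha256(public_key).hexdigest()

        def check(self, contact: str, public_key: bytes) -> bool:
            fp = self.fingerprint(public_key)
            if contact not in self.pins:
                self.pins[contact] = fp  # first contact: trust and pin the key
                return True
            if self.pins[contact] != fp:
                print(f"Security notice: {contact}'s key has changed; verify before trusting.")
                return False
            return True

    store = KeyPinStore()
    store.check("alice", b"first-key-material")
    store.check("alice", b"different-key-material")  # prints the key-change warning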
On October 14, 2021, WhatsApp rolled out end-to-end encryption for backups on Android and iOS. The feature has to be turned on by the user and provides the option to encrypt the backup either with a password or a 64-digit encryption key.
The application can store encrypted copies of the chat messages onto the SD card, but chat messages are also stored unencrypted in the SQLite database file "msgstore.db".
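For illustration, a local copy of such a database can be inspected with Python's built-in sqlite3 module. The query below only lists table names, since the actual schema is undocumented here and varies between app versions; the file path is an assumption, and on a device the file normally sits in the app's private storage.

    import sqlite3

    # Path is illustrative; point it at an exported copy of the database.
    conn = sqlite3.connect("msgstore.db")
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (name,) in tables:
        print(name)
    conn.close()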
WhatsApp Payments
WhatsApp Payments (marketed as WhatsApp Pay) is a peer-to-peer money transfer feature. The service became generally available in India and Brazil, and in Singapore for WhatsApp Business transactions only.
India
In July 2017, WhatsApp received permission from the National Payments Corporation of India (NPCI) to enter into partnership with multiple Indian banks for transactions over the Unified Payments Interface (UPI), which relies on mobile phone numbers to make account-to-account transfers. When UPI payments via WhatsApp launched in November 2020, they were initially restricted to 20 million users; the cap was raised to 100 million users in April 2022, and the service became generally available to everyone in August 2022.
Facebook/WhatsApp cryptocurrency project, 2019–2022
On February 28, 2019, The New York Times reported that Facebook was "hoping to succeed where Bitcoin failed" by developing an in-house cryptocurrency that would be incorporated into WhatsApp. The project reportedly involved more than 50 engineers under the direction of former PayPal president David A. Marcus. This 'Facebook coin' would reportedly be a stablecoin pegged to the value of a basket of different foreign currencies.
In June 2019, Facebook said that the project would be named Libra, and that a digital wallet named "Calibra" was to be integrated into Facebook and WhatsApp. After financial regulators in many regions raised concerns, Facebook stated that the currency, renamed Diem since December 2020, would require a government-issued ID for verification, and the wallet app would have fraud protection. Calibra was rebranded to Novi in May 2020.
Meta (formerly Facebook) ended its Novi project on September 1, 2022.
Controversies and criticism
Misinformation
WhatsApp has repeatedly imposed limits on message forwarding in response to the spread of misinformation in countries including India and Australia. The measure, first introduced in 2018 to combat spam, was expanded and remained active in 2021. WhatsApp stated that the forwarding limits had helped to curb the spread of misinformation regarding COVID-19.
Murders in India
In India, WhatsApp encouraged people to report messages that were fraudulent or incited violence after lynch mobs in India murdered innocent people because of malicious WhatsApp messages falsely accusing the victims of intending to abduct children. There was a series of such incidents between 2017 and 2020, after which WhatsApp announced changes for Indian users of the platform that label forwarded messages as such.
2018 elections in Brazil
In an investigation on the use of social media in politics, it was found that WhatsApp was being abused for the spread of fake news in the 2018 presidential elections in Brazil. It was reported that US$3 million was spent in illegal concealed contributions related to this practice.
Researchers and journalists called on WhatsApp's parent company, Facebook, to adopt measures similar to those adopted in India and restrict the spread of hoaxes and fake news.
Security and privacy
WhatsApp was initially criticized for its lack of encryption, sending information as plaintext. Encryption was first added in May 2012. End-to-end encryption was only fully implemented in April 2016, after a two-year process. WhatsApp is known to make extensive use of outside contractors and artificial intelligence systems to examine certain user messages, images and videos (those that have been flagged by users as possibly abusive), and to turn over to law enforcement metadata including critical account and location information.
In 2016, WhatsApp was widely praised for the addition of end-to-end encryption and earned a 6 out of 7 points on the Electronic Frontier Foundation's "Secure Messaging Scorecard". WhatsApp was criticized by security researchers and the Electronic Frontier Foundation for using backups that are not covered by end-to-end encryption and allow messages to be accessed by third-parties.
In May 2019, a security vulnerability in WhatsApp was found and fixed that allowed a remote person to install spyware by making a call which did not need to be answered.
In September 2019, WhatsApp was criticized for its implementation of a 'delete for everyone' feature. iOS users can elect to save media to their camera roll automatically. When a user deletes media for everyone, WhatsApp does not delete images saved in the iOS camera roll and so those users are able to keep the images. WhatsApp released a statement saying that "the feature is working properly," and that images stored in the camera roll cannot be deleted due to Apple's security layers.
In November 2019, WhatsApp released a new privacy feature that let users decide who can add them to groups.
In December 2019, WhatsApp confirmed a security flaw that would allow hackers to use a malicious GIF image file to gain access to the recipient's data. When the recipient opened the gallery within WhatsApp, even without sending the malicious image, the hack was triggered and the device and its contents became vulnerable. The flaw was patched, and users were encouraged to update WhatsApp.
On December 17, 2019, WhatsApp fixed a security flaw that allowed cyber attackers to repeatedly crash the messaging application for all members of group chat, which could only be fixed by forcing the complete uninstall and reinstall of the app. The bug was discovered by Check Point in August 2019 and reported to WhatsApp. It was fixed in version 2.19.246 onwards.
For security purposes, since February 1, 2020, WhatsApp has been made unavailable on smartphones using legacy operating systems like Android 2.3.7 or older and iPhone iOS 8 or older that are no longer updated by their providers.
On December 16, 2020, a claim that WhatsApp gave Google access to private messages was included in the anti-trust case against the latter. As the complaint was heavily redacted due to being an ongoing case, it did not disclose whether this was alleged tampering with the app's end-to-end encryption, or Google accessing user backups.
In January 2021, WhatsApp announced an update to its privacy policy stating that WhatsApp would share user data with Facebook and its "family of companies" beginning in February 2021. Previously, users could opt out of such data sharing, but the new policy removed this option. The new policy would not apply within the EU, as it is illegal under the GDPR. Facebook and WhatsApp were widely criticized for this move. Enforcement of the privacy policy was postponed from February 8 to May 15, 2021, and WhatsApp announced it had no plans to limit the functionality of the app for those who did not approve the new terms.
On October 15, 2021, WhatsApp announced that it would begin offering an end-to-end encryption service for chat backups, meaning no third party (including both WhatsApp and the cloud storage vendor) would have access to a user's information. This new encryption feature added an additional layer of protection to chat backups stored either on Apple iCloud or Google Drive.
On November 29, 2021, an FBI document was uncovered by Rolling Stone, revealing that WhatsApp responds to warrants and subpoenas from law enforcement within minutes, providing user metadata to the authorities. The metadata includes the user's contact information and address book.
In January 2022, an unsealed surveillance application revealed that WhatsApp started tracking seven users from China and Macau in November 2021, based on a request from US DEA investigators. The app collected data on who the users contacted and how often, and when and how they were using the app. This is reportedly not an isolated occurrence, as federal agencies can use the Electronic Communications Privacy Act to covertly track users without submitting any probable cause or linking a user's number to their identity.
At the beginning of 2022, it was revealed that San Diego–based startup Boldend had developed tools to hack WhatsApp's encryption, gaining access to user data, at some point since the startup's inception in 2017. The vulnerability was reportedly patched in January 2021. Boldend is financed, in part, by Peter Thiel, a notable investor in Facebook.
In September 2022, a critical security issue in WhatsApp's Android video call feature was reported. An integer overflow bug allowed a malicious user to take full control of the victim's application once a video call between two WhatsApp users was established. The issue was patched on the day it was officially reported.
UK institutions
WhatsApp is widely used by government institutions in the UK, although such use is viewed as problematic since it hinders the public, including journalists, from obtaining accurate government records when making freedom of information requests.
The Information Commissioner has said that the use of WhatsApp poses risks to transparency, since members of Parliament, government ministers, and officials who wish to avoid scrutiny might use WhatsApp despite the existence of official channels. Transparency campaigners have challenged the practice in court.
Notably, during the COVID-19 pandemic, the UK government routinely used WhatsApp to make decisions on managing the crisis, including on personal rather than government-issued devices. When the official inquiry into the pandemic began seeking evidence in May 2023, this presented issues for its ability to gather the material it sought. A personal device of the former Prime Minister, Boris Johnson, had been compromised by a security breach, and it was claimed that it could not be switched on in order to recover messages. Further, the Cabinet Office had claimed that since many messages were not relevant to the inquiry, it only needed to hand over material it had selected as being relevant. The High Court, in a judicial review sought by the Cabinet Office, declared that all documents sought by the inquiry were to be handed over unredacted.
In 2018, it was reported that around 500,000 National Health Service (NHS) staff used WhatsApp and other instant messaging systems at work and around 29,000 had faced disciplinary action for doing so. Higher usage was reported by frontline clinical staff to keep up with care needs, even though NHS trust policies do not permit their use.
Mods and fake versions
In March 2019, WhatsApp released a guide for users who had installed unofficial modified versions of WhatsApp and warned that it may ban those using unofficial clients.
WhatsApp snooping scandal
In May 2019, WhatsApp was attacked by hackers who installed spyware on a number of victims' smartphones. The hack, allegedly developed by Israeli surveillance technology firm NSO Group, injected malware onto WhatsApp users' phones via a remote-exploit bug in the app's Voice over IP calling functions. A Wired report noted the attack was able to inject malware via calls to the targeted phone, even if the user did not answer the call.
In October 2019, WhatsApp filed a lawsuit against NSO Group in a San Francisco court, claiming that the alleged cyberattack violated US laws including the Computer Fraud and Abuse Act (CFAA). According to WhatsApp, the exploit "targeted at least 100 human-rights defenders, journalists and other members of civil society" among a total of 1,400 users in 20 countries.
In April 2020, NSO Group argued that responsibility for the alleged human rights abuses lay with its governmental clients rather than with the company itself. In court documents, the group claimed that the lawsuit brought against it by WhatsApp threatened to infringe on its clients' "national security and foreign policy concerns". However, the company did not reveal the names of the end users, which according to research by Citizen Lab include Saudi Arabia, Bahrain, Kazakhstan, Morocco, Mexico and the United Arab Emirates.
In July 2020, a US federal judge ruled that the lawsuit against NSO group could proceed. NSO Group filed a motion to have the lawsuit dismissed, but the judge denied all of its arguments.
Jeff Bezos phone hack
In January 2020, a digital forensic analysis revealed that Amazon founder Jeff Bezos had received an encrypted message on WhatsApp from the official account of Saudi Arabia's Crown Prince Mohammed bin Salman. The message reportedly contained a malicious file, the receipt of which resulted in Bezos' phone being hacked. United Nations special rapporteurs David Kaye and Agnes Callamard later confirmed that Bezos' phone had been hacked through WhatsApp, as he was one of the targets on a Saudi hit list of individuals close to The Washington Post journalist Jamal Khashoggi.
FBI
In 2021, an FBI document obtained through a Freedom of Information request by Property of the People, Inc., a 501(c)(3) nonprofit organization, revealed that WhatsApp and iMessage are vulnerable to law-enforcement real-time searches.
Tek Fog
In January 2022, an investigation by The Wire claimed that the BJP, an Indian political party, used an app called Tek Fog that was allegedly capable of hijacking inactive WhatsApp accounts en masse in order to mass-message their contacts with propaganda. According to the report, a whistleblower with access to the app was able to hack a test WhatsApp account controlled by reporters "within minutes." It was later determined that a member of The Wire's team investigating Meta had been duped by false information; The Wire fired the staff member involved and issued a formal apology to its readers.
Terrorism
In December 2015, it was reported that terrorist organization ISIS had been using WhatsApp to plot the November 2015 Paris attacks. According to The Independent, ISIS also uses WhatsApp to traffic sex slaves.
In March 2017, British Home Secretary Amber Rudd said encryption capabilities of messaging tools like WhatsApp are unacceptable, as news reported that Khalid Masood used the application several minutes before perpetrating the 2017 Westminster attack. Rudd publicly called for police and intelligence agencies to be given access to WhatsApp and other encrypted messaging services to prevent future terror attacks.
In April 2017, the perpetrator of the Stockholm truck attack reportedly used WhatsApp to exchange messages with an ISIS supporter shortly before and after the incident. The messages involved discussing how to make an explosive device and a confession to the attack.
In April 2017, nearly 300 WhatsApp groups with about 250 members each were reportedly being used to mobilize stone-pelters in Jammu and Kashmir to disrupt security forces' operations at encounter sites. According to police, 90% of these groups were closed down after police contacted their admins. Further, after a six-month probe which involved the infiltration of 79 WhatsApp groups, the National Investigation Agency reported that of about 6,386 members and admins of these groups, about 1,000 were residents of Pakistan and Gulf nations. The agency also reported that, for their help in countering anti-terror operations, the Indian stone-pelters were being funded through barter trade from Pakistan and other indirect means.
In May 2022, the FBI stated that an ISIS sympathizer, who was plotting to assassinate George W. Bush, was arrested based on his WhatsApp data. According to the arrest warrant for the suspect, his WhatsApp account was placed under surveillance.
Scams and malware
There are numerous ongoing scams on WhatsApp that let hackers spread viruses or malware. In May 2016, some WhatsApp users were reported to have been tricked into downloading a third-party application called WhatsApp Gold, which was part of a scam that infected the users' phones with malware. A message promising access to their WhatsApp friends' conversations, or their contact lists, has become one of the most widespread scams targeting users of the application in Brazil. Clicking on the message actually sends paid text messages. Since December 2016, more than 1.5 million people have clicked and lost money.
Another application called GB WhatsApp is considered malicious by cybersecurity firm Symantec because it usually performs some unauthorized operations on end-user devices.
Bans
China
WhatsApp is owned by Meta, whose main social media service Facebook has been blocked in China since 2009. In September 2017, security researchers reported to The New York Times that the WhatsApp service had been completely blocked in China. On April 19, 2024, Apple removed WhatsApp from the App Store in China, citing government orders that stemmed from national security concerns.
Iran
On May 9, 2014, the government of Iran announced that it had proposed to block access to the WhatsApp service for Iranian residents. "The reason for this is the assumption of WhatsApp by the Facebook founder Mark Zuckerberg, who is an American Zionist," said Abdolsamad Khorramabadi, head of the country's Committee on Internet Crimes. Subsequently, Iranian president Hassan Rouhani issued an order to the Ministry of ICT to stop filtering WhatsApp. WhatsApp was blocked again in September 2022, with officials indicating the ban would remain in place until Meta responded to the government's demands.
Turkey
Turkey temporarily banned WhatsApp in 2016, following the assassination of the Russian ambassador to Turkey.
Brazil
On March 1, 2016, Diego Dzodan, Facebook's vice-president for Latin America, was arrested in Brazil for not cooperating with an investigation in which WhatsApp conversations were requested. At dawn the next day, March 2, 2016, Dzodan was released after the Court of Appeal held that the arrest was disproportionate and unreasonable.
On May 2, 2016, mobile providers in Brazil were ordered to block WhatsApp for 72 hours for the service's second failure to cooperate with criminal court orders. Once again, the block was lifted following an appeal, after less than 24 hours.
Brazil's Central Bank issued an order to payment card companies Visa and Mastercard on June 23, 2020, to stop working with WhatsApp on its new electronic payment system. A statement from the Bank asserted the decision to block the Facebook-owned company's latest offering was taken in order to "preserve an adequate competitive environment" in the mobile payments space and to ensure "functioning of a payment system that's interchangeable, fast, secure, transparent, open and cheap."
Uganda
The government of Uganda banned WhatsApp and Facebook, along with other social media platforms, to enforce a tax on the use of social media. Under the law passed by parliament, users are charged USh 200 per day to access these services.
United Arab Emirates (UAE)
The United Arab Emirates banned WhatsApp video chat and VoIP call applications as early as 2013, in what is often reported as an effort to protect the commercial interests of its home-grown, nationally owned telecom providers (du and Etisalat). The UAE-linked app ToTok has received press coverage suggesting it is able to spy on users.
Cuba
In July 2021, the Cuban government blocked access to several social media platforms, including WhatsApp, to curb the spread of information during the anti-government protests.
Switzerland
In December 2021, the Swiss army banned the use of WhatsApp and several other non-Swiss encrypted messaging services by army personnel. The ban was prompted by concerns of US authorities potentially accessing user data for such apps because of the CLOUD Act. The army recommended that all army personnel use Threema instead, as the service is based in Switzerland.
Zambia
In August 2021, the digital rights organization Access Now reported that WhatsApp along with several other social media apps was being blocked in Zambia for the duration of the general election. The organization reported a massive drop-off in traffic for the blocked services, though the country's government made no official statements about the block.
Third-party clients
In mid-2013, WhatsApp Inc. filed for the DMCA takedown of the discussion thread on the XDA Developers forums about the then popular third-party client "WhatsApp Plus".
In 2015, some third-party WhatsApp clients that were reverse-engineering the WhatsApp mobile app received cease-and-desist letters demanding they stop activities that violated WhatsApp's legal terms. As a result, users of third-party WhatsApp clients were also banned.
WhatsApp Business
WhatsApp launched two business-oriented apps in January 2018, separated by the intended userbase:
A WhatsApp Business app for small companies
An Enterprise Solution for bigger companies with global customer bases, such as airlines, e-commerce retailers and banks, who would be able to offer customer service and conversational commerce (e-commerce) via WhatsApp chat, using live agents or chatbots (as far back as 2015, companies like Meteordesk had provided unofficial solutions for enterprises to attend to large numbers of users, but these were shut down by WhatsApp)
In October 2020, Facebook announced the introduction of pricing tiers for services offered via the WhatsApp Business API, charged on a per-message basis.
User statistics
WhatsApp handled ten billion messages per day in August 2012, growing from two billion in April 2012, and one billion the previous October. On June 13, 2013, WhatsApp announced that they had reached their new daily record by processing 27 billion messages. According to the Financial Times, WhatsApp "has done to SMS on mobile phones what Skype did to international calling on landlines".
By April 22, 2014, WhatsApp had over 500 million monthly active users, 700 million photos and 100 million videos were being shared daily, and the messaging system was handling more than 10 billion messages each day.
On August 24, 2014, Koum announced on his Twitter account that WhatsApp had over 600 million active users worldwide. At that point WhatsApp was adding about 25 million new users every month, or 833,000 active users per day.
In May 2017, it was reported that WhatsApp users spend over 340 million minutes on video calls each day on the app. This is the equivalent of roughly 646 years of video calls per day.
By February 2017, WhatsApp had over 1.2 billion users globally, reaching 1.5 billion monthly active users by the end of 2017.
In January 2020, WhatsApp reached over 5 billion installs on Google Play Store making it only the second non-Google app to achieve this milestone.
As of February 2020, WhatsApp had over 2 billion users globally.
Specific markets
India is by far WhatsApp's largest market in terms of total number of users. In May 2014, WhatsApp crossed 50 million monthly active users in India, then 70 million in October 2014, at which point Indian users made up 10% of WhatsApp's total user base. In February 2017, WhatsApp reached 200 million monthly active users in India.
Israel is one of WhatsApp's strongest markets in terms of ubiquitous usage. According to Globes, already by 2013 the application was installed on 92% of all smartphones, with 86% of users reporting daily use.
In July 2024, WhatsApp reached 100 million users in the United States.
Competition
WhatsApp competes with a number of messaging services. They include services like iMessage (estimated 1.3 billion active users), WeChat (1.26 billion active users), Telegram (900 million users), Viber (260 million active users), LINE (217 million active users), KakaoTalk (57 million active users), and Signal (40 million active users). Both Telegram and Signal in particular were reported to get registration spikes during WhatsApp outages and controversies.
WhatsApp has increasingly drawn its innovation from competing services, such as a Telegram-inspired web version and features for groups. In 2016, WhatsApp was accused of copying features from a then-unreleased version of iMessage.
| Technology | Social network and blogging | null |
32065115 | https://en.wikipedia.org/wiki/Arab%20Mashreq%20International%20Road%20Network | Arab Mashreq International Road Network | The Arab Mashreq International Road Network is an international road network between the primarily Arab countries of the Mashriq (Syria, Iraq, Jordan, Palestine, Lebanon, Kuwait, Egypt, Saudi Arabia, Bahrain, Qatar, UAE, Oman and Yemen). In addition, part of the network passes through Israel, which is not a party to the agreement that created it, as well as through non-Arab parts of the region. The network is a result of the 2001 Agreement on International Roads in the Arab Mashreq, a United Nations multilateral treaty that entered into force in 2003 and has been ratified by 13 of the 14 (all except Israel) countries that the network serves.
Route list
| Technology | Ground transportation networks | null |
40472458 | https://en.wikipedia.org/wiki/Job%20%28computing%29 | Job (computing) | In computing, a job is a unit of work or unit of execution (that performs said work). A component of a job (as a unit of work) is called a task or a step (if sequential, as in a job stream). As a unit of execution, a job may be concretely identified with a single process, which may in turn have subprocesses (child processes; the process corresponding to the job being the parent process) which perform the tasks or steps that comprise the work of the job; or with a process group; or with an abstract reference to a process or process group, as in Unix job control.
Jobs can be started interactively, such as from a command line, or scheduled for non-interactive execution by a job scheduler, and then controlled via automatic or manual job control. Jobs that have finite input can complete, successfully or unsuccessfully, or fail to complete and eventually be terminated. By contrast, online processing such as by servers has open-ended input (they service requests as long as they run), and thus never complete, only stopping when terminated (sometimes called "canceled"): a server's job is never done.
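As a small illustration of these concepts, the sketch below treats a job as an ordered list of steps, each run as a child process of the parent; the job completes successfully only if every step exits with code 0. The step commands are placeholders.

    import subprocess
    import sys

    # A job as a unit of work: sequential steps, each executed as a child process.
    job_steps = [
        [sys.executable, "-c", "print('step 1: extract')"],
        [sys.executable, "-c", "print('step 2: transform')"],
        [sys.executable, "-c", "print('step 3: load')"],
    ]

    for i, step in enumerate(job_steps, start=1):
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"Job failed at step {i} (exit code {result.returncode})")
            sys.exit(result.returncode)

    print("Job completed successfully")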
History
The term "job" has a traditional meaning as "piece of work", from Middle English "jobbe of work", and is used as such in manufacturing, in the phrase "job production", meaning "custom production", where it is contrasted with batch production (many items at once, one step at a time) and flow production (many items at once, all steps at the same time, by item). Note that these distinctions have become blurred in computing, where the oxymoronic term "batch job" is found, and used either for a one-off job or for a round of "batch processing" (same processing step applied to many items at once, originally punch cards).
In this sense of "job", a programmable computer performs "jobs", as each one can be different from the last. The term "job" is also common in operations research, predating its use in computing, in such uses as job shop scheduling (see, for example, references from throughout the 1950s, including several "System Research Department Reports" from the IBM Research Center). This analogy is applied to computer systems, where the system resources are analogous to machines in a job shop, and the goal of scheduling is to minimize the total time from beginning to end (makespan). The term "job" for computing work dates to the mid-1950s.
The term continued in occasional use, such as for the IBM 709 (1958), and in wider use by early 1960s, such as for the IBM 7090, with widespread use from the Job Control Language of OS/360 (announced 1964). A standard early use of "job" is for compiling a program from source code, as this is a one-off task. The compiled program can then be run on batches of data.
| Technology | Operating systems | null |
23534209 | https://en.wikipedia.org/wiki/Environmental%20impact%20of%20agriculture | Environmental impact of agriculture | The environmental impact of agriculture is the effect that different farming practices have on the ecosystems around them, and how those effects can be traced back to those practices. The environmental impact of agriculture varies widely based on practices employed by farmers and by the scale of practice. Farming communities that try to reduce environmental impacts through modifying their practices will adopt sustainable agriculture practices. The negative impact of agriculture is an old issue that remains a concern even as experts design innovative means to reduce destruction and enhance eco-efficiency. Animal agriculture practices tend to be more environmentally destructive than agricultural practices focused on fruits, vegetables and other biomass. The emissions of ammonia from cattle waste continue to raise concerns over environmental pollution.
When evaluating environmental impact, experts use two types of indicators: "means-based", which is based on the farmer's production methods, and "effect-based", which is the impact that farming methods have on the farming system or on emissions to the environment. An example of a means-based indicator would be the quality of groundwater, which is affected by the amount of nitrogen applied to the soil. An indicator reflecting the loss of nitrate to groundwater would be effect-based. The means-based evaluation looks at farmers' practices of agriculture, and the effect-based evaluation considers the actual effects of the agricultural system. For example, the means-based analysis might look at pesticides and fertilization methods that farmers are using, and effect-based analysis would consider how much is being emitted or what the nitrogen content of the soil is.
The environmental impact of agriculture involves impacts on a variety of different factors: the soil, water, the air, animal and soil variety, people, plants, and the food itself. Agriculture contributes to a number larger of environmental issues that cause environmental degradation including: climate change, deforestation, biodiversity loss, dead zones, genetic engineering, irrigation problems, pollutants, soil degradation, and waste. Because of agriculture's importance to global social and environmental systems, the international community has committed to increasing sustainability of food production as part of Sustainable Development Goal 2: “End hunger, achieve food security and improved nutrition and promote sustainable agriculture". The United Nations Environment Programme's 2021 "Making Peace with Nature" report highlighted agriculture as both a driver and an industry under threat from environmental degradation.
By agricultural practice
Animal agriculture
Irrigation
Pesticides
Plastics
By environmental issue
Deforestation
Deforestation is clearing the Earth's forests on a large scale worldwide and resulting in many land damages. One of the causes of deforestation is clearing land for pasture or crops. According to British environmentalist Norman Myers, 5% of deforestation is due to cattle ranching, 19% due to over-heavy logging, 22% due to the growing sector of palm oil plantations, and 54% due to slash-and-burn farming.
Deforestation causes the loss of habitat for millions of species, and is also a driver of climate change. Trees act as a carbon sink: that is, they absorb carbon dioxide, an unwanted greenhouse gas, out of the atmosphere. Removing trees releases carbon dioxide into the atmosphere and leaves behind fewer trees to absorb the increasing amount of carbon dioxide in the air. In this way, deforestation exacerbates climate change. When trees are removed from forests, the soils tend to dry out because there is no longer shade, and there are not enough trees to assist in the water cycle by returning water vapor back to the environment. With no trees, landscapes that were once forests can potentially become barren deserts. The tree's roots also help to hold the soil together, so when they are removed, mudslides can also occur. The removal of trees also causes extreme fluctuations in temperature.
In 2000 the United Nations Food and Agriculture Organisation (FAO) found that "the role of population dynamics in a local setting may vary from decisive to negligible," and that deforestation can result from "a combination of population pressure and stagnating economic, social and technological conditions."
Genetic engineering
Pollutants
Soil degradation
Soil degradation is the decline in soil quality that can be a result of many factors, especially from agriculture. Soils hold the majority of the world's biodiversity, and healthy soils are essential for food production and adequate water supply. Common attributes of soil degradation can be salting, waterlogging, compaction, pesticide contamination, a decline in soil structure quality, loss of fertility, changes in soil acidity, alkalinity, salinity, and erosion. Soil erosion is the wearing away of topsoil by water, wind, or farming activities. Topsoil is very fertile, which makes it valuable to farmers growing crops. Soil degradation also has a huge impact on biological degradation, which affects the microbial community of the soil and can alter nutrient cycling, pest and disease control, and chemical transformation properties of the soil.
Soil erosion
Large-scale farming can cause large amounts of soil erosion, and 25 to 40 percent of eroded soil ends up in water sources. Soil that carries pesticides and fertilizers pollutes the bodies of water it enters. In the United States and Europe especially, large-scale agriculture has grown and small-scale agriculture has shrunk due to financial arrangements such as contract farming. Bigger farms tend to favour monocultures, overuse water resources, and accelerate deforestation and soil quality decline. A 2020 study by the International Land Coalition, together with Oxfam and the World Inequality Lab, found that 1% of landowners manage 70% of the world's farmland. The highest discrepancy is found in Latin America, where the poorest 50% own just 1% of the land. Small landowners, as individuals or families, tend to be more cautious in their land use than large landowners; as of 2020, however, the proportion of small landowners has been decreasing since the 1980s. Currently, the largest share of smallholdings is found in Asia and Africa.
Tillage erosion
Waste
Plasticulture is the use of plastic mulch in agriculture. Farmers use plastic sheets as mulch to cover 50–70% of the soil, which allows them to use drip irrigation systems to gain better control over soil nutrients and moisture. Rain is not required in this system, and farms that use plasticulture are built to encourage the fastest possible runoff of rain. The use of pesticides with plasticulture allows pesticides to be transported more easily in the surface runoff towards wetlands or tidal creeks. The runoff of pesticides and of chemicals in the plastic can cause serious deformations and death in shellfish as it carries the chemicals toward the oceans.
In addition to the increased runoff that results from plasticulture, there is also the problem of the increased amount of waste from the plastic mulch itself. The use of plastic mulch for vegetables, strawberries, and other row and orchard crops exceeds 110 million pounds annually in the United States. Most plastic ends up in landfill, although there are other disposal options such as disking mulches into the soil, on-site burying, on-site storage, reuse, recycling, and incineration. The incineration and recycling options are complicated by the variety of plastics used and by their geographic dispersal. Plastics also contain stabilizers and dyes, as well as heavy metals, which limits the number of products that can be recycled. Research is continually being conducted on creating biodegradable or photodegradable mulches. While there has been minor success with this, there is also the problem of how long the plastic takes to degrade, as many biodegradable products take a long time to break down.
Issues by region
The environmental impact of agriculture can vary depending on the region as well as the type of agriculture production method that is being used. Listed below are some specific environmental issues in various different regions around the world.
Hedgerow removal in the United Kingdom.
Soil salinisation, especially in Australia.
Phosphate mining in Nauru
Methane emissions from livestock in New Zealand. See Climate change in New Zealand.
Environmentalists attribute the hypoxic zone in the Gulf of Mexico to nitrogen fertilization that encourages algal blooms.
Coupled systems arising from agricultural trade, which lead to regional impacts through cascading effects and spillover systems (see Environmental factor, Socioeconomic Drivers section).
Sustainable agriculture
Sustainable agriculture is the idea that agriculture should occur in a way that allows us to continue producing what is necessary without infringing on the ability of future generations to do the same.
The exponential population increase in recent decades has increased the practice of agricultural land conversion to meet the demand for food, which in turn has increased the effects on the environment. The global population is still increasing and will eventually stabilize, though some critics doubt that food production, because of lower yields from global warming, can support the global population.
Agriculture can have negative effects on biodiversity as well. Organic farming is a multifaceted set of sustainable agriculture practices that can have a lower impact on the environment at a small scale. However, in most cases organic farming results in lower yields in terms of production per unit area. Therefore, widespread adoption of organic agriculture would require additional land to be cleared and additional water resources extracted to meet the same level of production. A European meta-analysis found that organic farms tended to have higher soil organic matter content and lower nutrient losses (nitrogen leaching, nitrous oxide emissions, and ammonia emissions) per unit of field area, but higher ammonia emissions, nitrogen leaching, and nitrous oxide emissions per unit of product. Many believe that conventional farming systems support less biodiversity than organic systems: organic farming has been shown to have, on average, 30% higher species richness and 50% more organisms than conventional farming. These findings come with caveats, because several of the underlying studies reported negative effects under organic farming. Critics of organic agriculture argue that these negatives are a problem with the organic farming system itself: what began as a small-scale, environmentally conscious practice has become just as industrialized as conventional agriculture, and this industrialization can lead to the issues described above, such as climate change and deforestation.
Regenerative agriculture
Techniques
Conservation tillage
Conservation tillage is an alternative tillage method that is more sustainable for the soil and the surrounding ecosystem. This is done by allowing the residue of the previous harvest's crops to remain in the soil before tilling for the next crop. Conservation tillage has been shown to improve soil moisture retention and to reduce erosion, among other benefits. Some disadvantages are that more expensive equipment is needed, more pesticides must be used, and the positive effects take a long time to become visible. The barriers to instituting a conservation tillage policy are that farmers are reluctant to change their methods and would resist a more expensive and time-consuming method of tillage than the conventional one they are used to.
Biological pest control
| Technology | Agriculture and ecology | null |
23534602 | https://en.wikipedia.org/wiki/Human%E2%80%93computer%20interaction | Human–computer interaction | Human–computer interaction (HCI) is research in the design and the use of computer technology, which focuses on the interfaces between people (users) and computers. HCI researchers observe the ways humans interact with computers and design technologies that allow humans to interact with computers in novel ways. A device that allows interaction between a human being and a computer is known as a "human–computer interface".
As a field of research, human–computer interaction is situated at the intersection of computer science, behavioral sciences, design, media studies, and several other fields of study. The term was popularized by Stuart K. Card, Allen Newell, and Thomas P. Moran in their 1983 book, The Psychology of Human–Computer Interaction. The first known use was in 1975 by Carlisle. The term is intended to convey that, unlike other tools with specific and limited uses, computers have many uses which often involve an open-ended dialogue between the user and the computer. The notion of dialogue likens human–computer interaction to human-to-human interaction: an analogy that is crucial to theoretical considerations in the field.
Introduction
Humans interact with computers in many ways, and the interface between the two is crucial to facilitating this interaction. HCI is also sometimes termed human–machine interaction (HMI), man–machine interaction (MMI), or computer–human interaction (CHI). Desktop applications, web browsers, handheld computers, and computer kiosks make use of the prevalent graphical user interfaces (GUI) of today. Voice user interfaces (VUIs) are used for speech recognition and synthesis systems, and emerging multi-modal and graphical user interfaces allow humans to engage with embodied character agents in a way that cannot be achieved with other interface paradigms.
The Association for Computing Machinery (ACM) defines human–computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them". A key aspect of HCI is user satisfaction, also referred to as End-User Computing Satisfaction. It goes on to say:
"Because human–computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant."
Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success.
Poorly designed human-machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, a nuclear meltdown, where investigations concluded that the design of the human-machine interface was at least partly responsible for the disaster. Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instruments or throttle quadrant layouts: even though the new designs were proposed to be superior in basic human-machine interaction, pilots had already internalized the "standard" layout. Thus, the conceptually good idea had unintended results.
Human–computer interface
A human–computer interface can be described as the interface of communication between a human user and a computer. The flow of information between the human and computer is defined as the loop of interaction. The loop of interaction has several aspects to it, including:
Visual-Based: Visual-based human–computer interaction is probably the most widespread area of HCI research.
Audio-Based: The audio-based interaction between a computer and a human is another important area of HCI systems. This area deals with information acquired by different audio signals.
Feedback: Loops through the interface that evaluate, moderate, and confirm processes as they pass from the human through the interface to the computer and back.
Fit: This matches the computer design, the user, and the task to optimize the human resources needed to accomplish the task.
Visual-based HCI
Facial Expression Analysis: This area focuses on visually recognizing and analyzing emotions through facial expressions.
Body Movement Tracking (Large-scale): Researchers in this area concentrate on tracking and analyzing large-scale body movements.
Gesture Recognition: Gesture recognition involves identifying and interpreting gestures made by users, often used for direct interaction with computers in command and action scenarios.
Gaze Detection (Eyes Movement Tracking): Gaze detection involves tracking the movement of a user's eyes and is primarily used to better understand the user's attention, intent, or focus in context-sensitive situations. While the specific goals of each area vary based on applications, they collectively contribute to enhancing human-computer interaction. Notably, visual approaches have been explored as alternatives or aids to other types of interactions, such as audio- and sensor-based methods. For example, lip reading or lip movement tracking has proven influential in correcting speech recognition errors.
Audio-based HCI
Audio-based interaction in human–computer interaction (HCI) is a crucial field focused on processing information acquired through various audio signals. While the nature of audio signals may be less diverse compared to visual signals, the information they provide can be highly reliable, valuable, and sometimes uniquely informative. The research areas within this domain include:
Speech Recognition: This area centers on the recognition and interpretation of spoken language.
Speaker Recognition: Researchers in this area concentrate on identifying and distinguishing different speakers.
Auditory Emotion Analysis: Efforts have been made to incorporate human emotions into intelligent human-computer interaction by analyzing emotional cues in audio signals.
Human-Made Noise/Sign Detections: This involves recognizing typical human auditory signs like sighs, gasps, laughs, cries, etc., which contribute to emotion analysis and the design of more intelligent HCI systems.
Musical Interaction: A relatively new area in HCI, it involves generating and interacting with music, with applications in the art industry. This field is studied in both audio- and visual-based HCI systems.
Sensor-based HCI
This section encompasses a diverse range of areas with broad applications, all of which involve the use of physical sensors to facilitate interaction between users and machines. These sensors can range from basic to highly sophisticated. The specific areas include:
Pen-Based Interaction: Particularly relevant in mobile devices, focusing on pen gestures and handwriting recognition.
Mouse & Keyboard: Well-established input devices commonly used in computing.
Joysticks: Another established input device for interactive control, commonly used in gaming and simulations.
Motion Tracking Sensors and Digitizers: Cutting-edge technology that has revolutionized industries like film, animation, art, and gaming. These sensors, in forms like wearable cloth or joint sensors, enable more immersive interactions between computers and reality.
Haptic Sensors: Particularly significant in applications related to robotics and virtual reality, providing feedback based on touch. They play a crucial role in enhancing sensitivity and awareness in humanoid robots, as well as in medical surgery applications.
Pressure Sensors: Also important in robotics, virtual reality, and medical applications, providing information based on pressure exerted on a surface.
Taste/Smell Sensors: Although less popular compared to other areas, research has been conducted in the field of sensors for taste and smell. These sensors vary in their level of maturity, with some being well-established and others representing cutting-edge technologies.
Goals for computers
Human–computer interaction involves the ways in which humans make—or do not make—use of computational artifacts, systems, and infrastructures. Much of the research in this field seeks to improve human–computer interaction by improving the usability of computer interfaces. How usability is to be precisely understood, how it relates to other social and cultural values, and when it is and is not a desirable property of computer interfaces are increasingly debated questions.
Much of the research in the field of human–computer interaction takes an interest in:
Methods for designing new computer interfaces, thereby optimizing a design for a desired property such as learnability, findability, or efficiency of use.
Methods for implementing interfaces, e.g., by means of software libraries.
Methods for evaluating and comparing interfaces with respect to their usability and other desirable properties.
Methods for studying human–computer use and its sociocultural implications more broadly.
Methods for determining whether the user is a human or a computer.
Models and theories of human–computer use as well as conceptual frameworks for the design of computer interfaces, such as cognitivist user models, Activity Theory, or ethnomethodological accounts of human–computer use.
Perspectives that critically reflect upon the values that underlie computational design, computer use, and HCI research practice.
Visions of what researchers in the field seek to achieve might vary. When pursuing a cognitivist perspective, researchers of HCI may seek to align computer interfaces with the mental model that humans have of their activities. When pursuing a post-cognitivist perspective, researchers of HCI may seek to align computer interfaces with existing social practices or existing sociocultural values.
Researchers in HCI are interested in developing design methodologies, experimenting with devices, prototyping software, and hardware systems, exploring interaction paradigms, and developing models and theories of interaction.
Design
Principles
The following experimental design principles are considered when evaluating a current user interface or designing a new one:
Early focus is placed on the user(s) and task(s): How many users are needed to perform the task(s) is established and who the appropriate users should be is determined (someone who has never used the interface, and will not use the interface in the future, is most likely not a valid user). In addition, the task(s) the users will be performing and how often the task(s) need to be performed is defined.
Empirical measurement: the interface is tested with real users who come in contact with the interface daily. The results can vary with the performance level of the user, so the typical human–computer interaction may not always be represented. Quantitative usability measures, such as the number of users performing the task(s), the time to complete the task(s), and the number of errors made during the task(s), are determined (a minimal sketch of such measurements appears after this list).
Iterative design: After determining what users, tasks, and empirical measurements to include, the following iterative design steps are performed:
Design the user interface
Test
Analyze results
Repeat
The iterative design process is repeated until a sensible, user-friendly interface is created.
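As a minimal illustration of the empirical measurement step above, the sketch below summarizes hypothetical trial data into the kinds of quantitative usability measures mentioned there: task success rate, completion time, and error counts. The participants and numbers are made up, not results from any real study.

```python
# Minimal sketch with hypothetical trial data.
from statistics import mean

# Each trial: (participant, completed_task, seconds_to_complete, error_count)
trials = [
    ("P1", True, 48.2, 1),
    ("P2", True, 61.5, 0),
    ("P3", False, 90.0, 4),
    ("P4", True, 55.1, 2),
]

successes = [t for t in trials if t[1]]
success_rate = len(successes) / len(trials)
mean_time = mean(t[2] for t in successes)   # completion time on successful attempts only
mean_errors = mean(t[3] for t in trials)

print(f"Task success rate: {success_rate:.0%}")
print(f"Mean completion time (successes): {mean_time:.1f} s")
print(f"Mean errors per trial: {mean_errors:.1f}")
```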
Methodologies
Various strategies delineating methods for human–computer interaction design have developed since the conception of the field during the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science for areas (for example, memory and attention) when structuring user interfaces. Modern models, in general, center on constant feedback and dialogue between users, designers, and engineers, and push for technical systems to be built around the kinds of experiences users want to have, rather than wrapping the user experience around a finished system.
Activity theory: used in HCI to define and study the context in which human interactions with computers take place. Activity theory provides a framework for reasoning about actions in these contexts and informs the design of interactions from an activity-driven perspective.
User-centered design (UCD): a modern, widely practiced design philosophy rooted in the idea that users must take center stage in the design of any computer system. Users, designers, and technical practitioners work together to articulate the needs and limitations of the user and create a system that supports these elements. Often, user-centered designs are informed by ethnographic studies of the environments in which users will interact with the system. This practice is similar to participatory design, which emphasizes the possibility for end users to contribute actively through shared design sessions and workshops.
Principles of user interface design: these principles may be considered during the design of a user interface: tolerance, simplicity, visibility, affordance, consistency, structure, and feedback.
Value sensitive design (VSD): a method for building technology that accounts for the people who use the design directly, as well as those who are affected by the design, either directly or indirectly. VSD uses an iterative design process that involves three kinds of investigations: conceptual, empirical, and technical. Conceptual investigations aim to understand and articulate the stakeholders of the design and its values, along with any value conflicts that may arise for the users of the design. Empirical investigations are qualitative or quantitative design research studies used to inform the designers' understanding of the users' values, needs, and practices. Technical investigations can involve either analysis of how people use related technologies or the design of systems that support the values identified in the other investigations.
Display designs
Displays are human-made artifacts designed to support the perception of relevant system variables and facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g., navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information a system generates and displays; therefore, the information must be displayed according to principles to support perception, situation awareness, and understanding.
Thirteen principles of display design
Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering.
These human perception and information processing principles can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved by utilizing these principles.
Certain principles may not apply to different displays or situations. Some principles may also appear to be conflicting, and there is no simple solution to say that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.
Perceptual principles
1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed cannot be discerned, the operator cannot use them effectively.
2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable based on a single sensory variable (e.g., color, size, loudness). These sensory variables can contain many possible levels.
3. Top-down processing. Signals are likely perceived and interpreted by what is expected based on a user's experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to assure that it is understood correctly.
4. Redundancy gain. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g., color and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as color and position are redundant.
5. Similarity causes confusion: Use distinguishable elements. Signals that appear to be similar will likely be confused. The ratio of similar features to different features causes signals to be similar. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed, and dissimilar features should be highlighted.
Mental model principles
6. Principle of pictorial realism. A display should look like the variable that it represents (e.g., the high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like they would in the represented environment.
7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how it actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.
Principles based on attention
8. Minimizing information access cost or interaction cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by allowing frequently accessed sources to be located at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.
9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g., proximity, linkage by common colors, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.
10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.
Memory principles
11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or another display can aid the user by easing the use of their memory. However, memory use may sometimes benefit the user by eliminating the need to reference some knowledge globally (e.g., an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.
12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the user's mental resources. This will allow the user to focus on current conditions and to consider possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.
13. Principle of consistency. Old habits from other displays will easily transfer to support the processing of new displays if they are designed consistently. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.
Current research
Topics in human–computer interaction include the following:
Social computing
Social computing is an interactive and collaborative behavior considered between technology and people. In recent years, there has been an explosion of social science research focusing on interactions as the unit of analysis, since there are many social computing technologies, including blogs, email, social networking, instant messaging, and various others. Much of this research draws from psychology, social psychology, and sociology. For example, one study found that people expected a computer with a man's name to cost more than a machine with a woman's name. Other research finds that individuals perceive their interactions with computers more negatively than their interactions with humans, despite behaving the same way towards these machines.
Knowledge-driven human–computer interaction
In human–computer interaction, a semantic gap usually exists between the human's and the computer's understanding of each other's behaviors. Ontology, as a formal representation of domain-specific knowledge, can be used to address this problem by resolving the semantic ambiguities between the two parties.
Emotions and human–computer interaction
In the interaction of humans and computers, research has studied how computers can detect, process, and react to human emotions to develop emotionally intelligent information systems. Researchers have suggested several 'affect-detection channels'. The potential of detecting human emotions in an automated and digital fashion lies in improving the effectiveness of human–computer interaction. The influence of emotions in human–computer interaction has been studied in fields such as financial decision-making using ECG, and organizational knowledge sharing using eye-tracking and face readers as affect-detection channels. In these fields, it has been shown that affect-detection channels have the potential to detect human emotions and that information systems can incorporate the data obtained from affect-detection channels to improve decision models.
Brain–computer interfaces
A brain–computer interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Security interactions
Security interaction (HCISec) is the study of interaction between humans and computers specifically as it pertains to information security. Its aim, in plain terms, is to improve the usability of security features in end-user applications.
Unlike HCI, which has roots in the early days of Xerox PARC during the 1970s, HCISec is a nascent field of study by comparison. Interest in this topic tracks with that of Internet security, which has become an area of broad public concern only in very recent years.
When security features exhibit poor usability, the following are common reasons:
they were added as a casual afterthought
they were hastily patched in to address newly discovered security bugs
they address very complex use cases without the benefit of a software wizard
their interface designers lacked understanding of related security concepts
their interface designers were not usability experts (often meaning they were the application developers themselves)
Factors of change
Traditionally, computer use was modeled as a human–computer dyad in which the two were connected by a narrow explicit communication channel, such as text-based terminals. Much work has been done to make the interaction between a computing system and a human more reflective of the multidimensional nature of everyday communication. Because of potential issues, human–computer interaction shifted focus beyond the interface to respond to observations as articulated by Douglas Engelbart: "If ease of use were the only valid criterion, people would stick to tricycles and never try bicycles."
How humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by developments in computing. These forces include:
Decreasing hardware costs leading to larger memory and faster systems
Miniaturization of hardware leading to portability
Reduction in power requirements leading to portability
New display technologies leading to the packaging of computational devices in new forms
Specialized hardware leading to new functions
Increased development of network communication and distributed computing
Increasingly widespread use of computers, especially by people who are outside of the computing profession
Increasing innovation in input techniques (e.g., voice, gesture, pen), combined with lowering cost, leading to rapid computerization by people formerly left out of the computer revolution.
Wider social concerns leading to improved access to computers by currently disadvantaged groups
The future of HCI is expected to include the following characteristics:
Ubiquitous computing and communication. Computers are expected to communicate through high-speed local networks, nationally over wide-area networks, and portably via infrared, ultrasonic, cellular, and other technologies. Data and computational services will be portably accessible from many if not most locations to which a user travels.
High-functionality systems. Systems can have large numbers of functions associated with them. There are so many functions that most users, technical or non-technical, do not have time to learn them in the traditional way (e.g., through thick user manuals).
The mass availability of computer graphics. Computer graphics capabilities such as image processing, graphics transformations, rendering, and interactive animation become widespread as inexpensive chips become available for inclusion in general workstations and mobile devices.
Mixed media. Commercial systems can handle images, voice, sounds, video, text, formatted data. These are exchangeable over communication links among users. The separate consumer electronics fields (e.g., stereo sets, DVD players, televisions) and computers are beginning to merge. Computer and print fields are expected to cross-assimilate.
High-bandwidth interaction. The rate at which humans and machines interact is expected to increase substantially due to the changes in speed, computer graphics, new media, and new input/output devices. This can lead to qualitatively different interfaces, such as virtual reality or computational video.
Large and thin displays. New display technologies are maturing, enabling huge displays and displays that are thin, lightweight, and low in power use. This has large effects on portability and will likely enable developing paper-like, pen-based computer interaction systems very different in feel from present desktop workstations.
Information utilities. Public information utilities (such as home banking and shopping) and specialized industry services (e.g., weather for pilots) are expected to proliferate. The proliferation rate can accelerate with the introduction of high-bandwidth interaction and the improvement in the quality of interfaces.
Scientific conferences
One of the main conferences for new research in human–computer interaction is the annually held Association for Computing Machinery's (ACM) Conference on Human Factors in Computing Systems, usually referred to by its short name CHI (pronounced kai, or khai). CHI is organized by the ACM Special Interest Group on Computer-Human Interaction (SIGCHI). CHI is a large conference, with thousands of attendees, and is quite broad in scope. It is attended by academics, practitioners, and industry people, with company sponsors such as Google, Microsoft, and PayPal.
There are also dozens of other smaller, regional, or specialized HCI-related conferences held around the world each year, including:
ACEICFAASRS: ACE – International Conference on Future Applications of AI, Sensors, and Robotics in Society
ASSETS: ACM International Conference on Computers and Accessibility
CSCW: ACM conference on Computer Supported Cooperative Work
CUI: ACM conference on Conversational User Interfaces
DIS: ACM conference on Designing Interactive Systems
ECSCW: European Conference on Computer-Supported Cooperative Work
GROUP: ACM conference on supporting group work
HRI: ACM/IEEE International Conference on Human–robot interaction
HCII: Human–Computer Interaction International
ICMI: International Conference on Multimodal Interfaces
ITS: ACM conference on Interactive Tabletops and Surfaces
MobileHCI: International Conference on Human–Computer Interaction with Mobile Devices and Services
NIME: International Conference on New Interfaces for Musical Expression
OzCHI: Australian Conference on Human–Computer Interaction
TEI: International Conference on Tangible, Embedded and Embodied Interaction
Ubicomp: International Conference on Ubiquitous computing
UIST: ACM Symposium on User Interface Software and Technology
i-USEr: International Conference on User Science and Engineering
INTERACT: IFIP TC13 Conference on Human–Computer Interaction
IHCI: International Conference on Intelligent Human–Computer Interaction
| Technology | Basics_4 | null |
23535218 | https://en.wikipedia.org/wiki/Industrial%20engineering | Industrial engineering | Industrial engineering is an engineering profession that is concerned with the optimization of complex processes, systems, or organizations by developing, improving and implementing integrated systems of people, money, knowledge, information and equipment. Industrial engineering is central to manufacturing operations.
Industrial engineers use specialized knowledge and skills in the mathematical, physical, and social sciences, together with engineering analysis and design principles and methods, to specify, predict, and evaluate the results obtained from systems and processes. Several industrial engineering principles are followed in the manufacturing industry to ensure the effective flow of systems, processes, and operations. These include:
Lean Manufacturing
Six Sigma
Information Systems
Process Capability
Define, Measure, Analyze, Improve and Control (DMAIC).
These principles allow the creation of new systems, processes or situations for the useful coordination of labor, materials and machines and also improve the quality and productivity of systems, physical or social. Depending on the subspecialties involved, industrial engineering may also overlap with operations research, systems engineering, manufacturing engineering, production engineering, supply chain engineering, management science, engineering management, financial engineering, ergonomics or human factors engineering, safety engineering, logistics engineering, quality engineering, or other related capabilities or fields.
History
Origins
Industrial engineering
There is a general consensus among historians that the roots of the industrial engineering profession date back to the Industrial Revolution. The technologies that helped mechanize traditional manual operations in the textile industry, including the flying shuttle, the spinning jenny, and perhaps most importantly the steam engine, generated economies of scale that made mass production in centralized locations attractive for the first time. The concept of the production system had its genesis in the factories created by these innovations. It has also been suggested that perhaps Leonardo da Vinci was the first industrial engineer, because there is evidence that he applied science to the analysis of human work by examining the rate at which a man could shovel dirt around the year 1500. Others also state that the industrial engineering profession grew from Charles Babbage's study of factory operations and specifically his work on the manufacture of straight pins in 1832. However, it has been generally argued that these early efforts, while valuable, were merely observational and did not attempt to engineer the jobs studied or increase overall output.
Specialization of labour
Adam Smith's concepts of Division of Labour and the "Invisible Hand" of capitalism introduced in his treatise The Wealth of Nations motivated many of the technological innovators of the Industrial Revolution to establish and implement factory systems. The efforts of James Watt and Matthew Boulton led to the first integrated machine manufacturing facility in the world, including the application of concepts such as cost control systems to reduce waste and increase productivity and the institution of skills training for craftsmen.
Charles Babbage became associated with industrial engineering because of the concepts he introduced in his book On the Economy of Machinery and Manufactures, which he wrote as a result of his visits to factories in England and the United States in the early 1800s. The book includes subjects such as the time required to perform a specific task, the effects of subdividing tasks into smaller and less detailed elements, and the advantages to be gained from repetitive tasks.
Interchangeable parts
Eli Whitney and Simeon North proved the feasibility of the notion of interchangeable parts in the manufacture of muskets and pistols for the US Government. Under this system, individual parts were mass-produced to tolerances to enable their use in any finished product. The result was a significant reduction in the need for skill from specialized workers, which eventually led to the industrial environment that would be studied later.
Pioneers
Frederick Taylor (1856–1915) is generally credited as being the father of the industrial engineering discipline. He earned a degree in mechanical engineering from Stevens Institute of Technology and earned several patents for his inventions. His books, Shop Management and The Principles of Scientific Management, which were published in the early 1900s, were the beginning of industrial engineering. Improvements in work efficiency under his methods were based on improving work methods, developing work standards, and reducing the time required to carry out the work. With an abiding faith in the scientific method, Taylor did many experiments in machine shop work, on machines as well as men. Taylor developed "time study" to measure the time taken for the various elements of a task and then used the study observations to reduce that time further. Time study was then applied to the improved method once again to provide time standards accurate enough for planning manual tasks and for providing incentives.
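The arithmetic behind a time study is simple: observed element times are summed, adjusted by a performance rating, and inflated by an allowance to give a standard time. The sketch below illustrates this calculation; the element times, rating, and allowance are assumed values, not figures from Taylor's own experiments.

```python
# Minimal time-study sketch; element times, rating, and allowance are assumed.

observed_element_times_min = [0.42, 0.35, 0.58, 0.21]  # one work cycle, by element
performance_rating = 1.10   # observed worker judged 10% faster than normal pace
allowance = 0.15            # 15% for fatigue, personal needs, and unavoidable delays

observed_cycle = sum(observed_element_times_min)
normal_time = observed_cycle * performance_rating
standard_time = normal_time * (1 + allowance)

print(f"Observed cycle time: {observed_cycle:.2f} min")
print(f"Normal time:         {normal_time:.2f} min")
print(f"Standard time:       {standard_time:.2f} min")
```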
The husband-and-wife team of Frank Gilbreth (1868–1924) and Lillian Gilbreth (1878–1972) was the other cornerstone of the industrial engineering movement whose work is housed at Purdue University School of Industrial Engineering. They categorized the elements of human motion into 18 basic elements called therbligs. This development permitted analysts to design jobs without knowledge of the time required to do a job. These developments were the beginning of a much broader field known as human factors or ergonomics.
In 1908, the first course on industrial engineering was offered as an elective at Pennsylvania State University, which became a separate program in 1909 through the efforts of Hugo Diemer. The first doctoral degree in industrial engineering was awarded in 1933 by Cornell University.
In 1912, Henry Laurence Gantt developed the Gantt chart, which outlines the actions of an organization along with their relationships. The chart was later developed into the form familiar today by Wallace Clark.
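A Gantt chart is easy to reproduce programmatically. The sketch below draws one with matplotlib's broken_barh; the tasks, start days, and durations are hypothetical.

```python
# Minimal Gantt-chart sketch with made-up tasks; requires matplotlib.
import matplotlib.pyplot as plt

# (task name, start day, duration in days)
tasks = [
    ("Design",   0, 5),
    ("Procure",  3, 4),
    ("Assemble", 7, 6),
    ("Test",    13, 3),
]

fig, ax = plt.subplots()
for row, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (row - 0.4, 0.8))
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()            # first task at the top, as Gantt charts are usually drawn
ax.set_xlabel("Working day")
plt.show()
```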
With the development of assembly lines, the factory of Henry Ford (1913) accounted for a significant leap forward in the field. Ford reduced the assembly time of a car from more than 700 hours to 1.5 hours. In addition, he was a pioneer of welfare capitalism and a flag-bearer of providing financial incentives for employees to increase productivity.
In 1927, the then Technische Hochschule Berlin was the first German university to introduce the degree. The course of studies developed by Willi Prion was then still called Business and Technology and was intended to provide descendants of industrialists with an adequate education.
The comprehensive quality management system known as total quality management (TQM), developed in the 1940s, gained momentum after World War II and was part of Japan's postwar recovery.
The American Institute of Industrial Engineers was formed in 1948. The early work by F. W. Taylor and the Gilbreths was documented in papers presented to the American Society of Mechanical Engineers as interest grew from merely improving machine performance to the performance of the overall manufacturing process, most notably starting with the presentation by Henry R. Towne (1844–1924) of his paper The Engineer as An Economist (1886).
Modern practice
From 1960 to 1975, the development of decision support systems for supply, such as material requirements planning (MRP), made it possible to address the timing of inventory, production, compounding, transportation, and other activities within an industrial organization. In 1976, the Israeli scientist Jacob Rubinovitz installed the CMMS program, developed at IAI and Control-Data (Israel), in South Africa and worldwide.
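At its core, MRP nets gross requirements against inventory and scheduled receipts, then offsets planned orders by the lead time. The sketch below shows a single-item, lot-for-lot version of that calculation; all quantities and the lead time are assumed for illustration.

```python
# Minimal single-item MRP sketch (lot-for-lot ordering); all figures are assumed.

def mrp_plan(gross_requirements, scheduled_receipts, on_hand, lead_time):
    """Return projected on-hand balances and planned order releases per period."""
    periods = len(gross_requirements)
    planned_order_releases = [0] * periods
    projected_on_hand = []
    inventory = on_hand
    for t in range(periods):
        available = inventory + scheduled_receipts[t]
        shortfall = gross_requirements[t] - available
        if shortfall > 0:
            release_period = t - lead_time
            if release_period >= 0:
                # Lot-for-lot: order exactly the shortfall, released lead_time periods earlier.
                planned_order_releases[release_period] += shortfall
                available += shortfall          # planned receipt arrives in period t
        inventory = available - gross_requirements[t]   # may go negative if uncoverable
        projected_on_hand.append(inventory)
    return projected_on_hand, planned_order_releases

on_hand_by_period, releases = mrp_plan(
    gross_requirements=[40, 0, 60, 0, 80, 20],
    scheduled_receipts=[0, 30, 0, 0, 0, 0],
    on_hand=50,
    lead_time=1,
)
print("Projected on hand:     ", on_hand_by_period)
print("Planned order releases:", releases)
```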
In the 1970s, with the penetration of Japanese management theories such as Kaizen and Kanban, Japan realized very high levels of quality and productivity. These theories improved issues of quality, delivery time, and flexibility. Companies in the west realized the great impact of Kaizen and started implementing their own continuous improvement programs. W. Edwards Deming made significant contributions in the minimization of variance starting in the 1950s and continuing to the end of his life.
In the 1990s, following the globalization of industry, the emphasis was on supply chain management and customer-oriented business process design. The theory of constraints, developed by Israeli scientist Eliyahu M. Goldratt (1985), is also a significant milestone in the field.
Comparison to other engineering disciplines
Engineering is traditionally decompositional. To understand the whole of something, it is first broken down into its parts. One masters the parts, then puts them back together to create a better understanding of how to master the whole. The approach of industrial and systems engineering (ISE) is opposite; any one part cannot be understood without the context of the whole system. Changes in one part of the system affect the entire system, and the role of a single part is to better serve the whole system.
Also, industrial engineering considers the human factor and its relation to the technical aspect of the situation and all of the other factors that influence the entire situation, while other engineering disciplines focus on the design of inanimate objects.
"Industrial Engineers integrate combinations of people, information, materials, and equipment that produce innovative and efficient organizations. In addition to manufacturing, Industrial Engineers work and consult in every industry, including hospitals, communications, e-commerce, entertainment, government, finance, food, pharmaceuticals, semiconductors, sports, insurance, sales, accounting, banking, travel, and transportation."
"Industrial Engineering is the branch of Engineering most closely related to human resources in that we apply social skills to work with all types of employees, from engineers to salespeople to top management. One of the main focuses of an Industrial Engineer is to improve the working environments of people – not to change the worker, but to change the workplace."
"All engineers, including Industrial Engineers, take mathematics through calculus and differential equations. Industrial Engineering is different in that it is based on discrete variable math, whereas all other engineering is based on continuous variable math. We emphasize the use of linear algebra and difference equations, as opposed to the use of differential equations which are so prevalent in other engineering disciplines. This emphasis becomes evident in optimization of production systems in which we are sequencing orders, scheduling batches, determining the number of materials handling units, arranging factory layouts, finding sequences of motions, etc. As, Industrial Engineers, we deal almost exclusively with systems of discrete components."
Etymology
While originally applied to manufacturing, the use of industrial in industrial engineering can be somewhat misleading, since it has grown to encompass any methodical or quantitative approach to optimizing how a process, system, or organization operates. In fact, the industrial in industrial engineering means the industry in its broadest sense. People have changed the term industrial to broader terms such as industrial and manufacturing engineering, industrial and systems engineering, industrial engineering and operations research, industrial engineering and management.
Sub-disciplines
Industrial engineering has many sub-disciplines, the most common of which are listed below. Although there are industrial engineers who focus exclusively on one of these sub-disciplines, many deal with a combination of them, such as supply chain and logistics, and facilities and energy management.
Methods engineering
Facilities engineering & energy management
Financial engineering
Energy engineering
Human factors & safety engineering
Information systems engineering & management
Manufacturing engineering
Operations engineering & management
Operations research & optimization
Policy planning
Production engineering
Quality & reliability engineering
Supply chain management & logistics
Systems engineering & analysis
Systems simulation
Related disciplines
Organization development & change management
Behavioral economics
Education
Industrial engineers study the interaction of human beings with machines, materials, information, procedures, and environments in such developments and in the design of technological systems.
Industrial engineering degrees accredited within any member country of the Washington Accord enjoy equal accreditation within all other signatory countries, thus allowing engineers from one country to practice engineering professionally in any other.
Universities offer degrees at the bachelor's, master's, and doctoral levels.
Undergraduate curriculum
In the United States, the undergraduate degree earned is either a bachelor of science (B.S.) or a bachelor of science and engineering (B.S.E.) in industrial engineering (IE). In South Africa, the undergraduate degree is a bachelor of engineering (BEng). Variations of the title include Industrial & Operations Engineering (IOE), and Industrial & Systems Engineering (ISE or ISyE).
The typical curriculum includes a broad math and science foundation spanning chemistry, physics, mechanics (i.e., statics, kinematics, and dynamics), materials science, computer science, electronics/circuits, engineering design, and the standard range of engineering mathematics (i.e., calculus, linear algebra, differential equations, statistics). For any engineering undergraduate program to be accredited, regardless of concentration, it must cover a largely similar span of such foundational work – which also overlaps heavily with the content tested on one or more engineering licensure exams in most jurisdictions.
The coursework specific to IE entails specialized courses in areas such as optimization, applied probability, stochastic modeling, design of experiments, statistical process control, simulation, manufacturing engineering, ergonomics/safety engineering, and engineering economics. Industrial engineering elective courses typically cover more specialized topics in areas such as manufacturing, supply chains and logistics, analytics and machine learning, production systems, human factors and industrial design, and service systems.
Certain business schools may offer programs with some overlapping relevance to IE, but the engineering programs are distinguished by a much more intensely quantitative focus, required engineering science electives, and the core math and science courses required of all engineering programs.
Graduate curriculum
The usual graduate degree earned is the master of science (MS), master of science and engineering (MSE) or master of engineering (MEng) in industrial engineering or various alternative related concentration titles.
Typical MS curricula may cover:
Manufacturing Engineering
Analytics and machine learning
Computer-aided manufacturing
Engineering economics
Financial engineering
Human factors engineering and ergonomics (safety engineering)
Lean Six Sigma
Management sciences
Materials management
Operations management
Operations research and optimization techniques
Predetermined motion time system and computer use for IE
Product development
Production planning and control
Productivity improvement
Project management
Reliability engineering and life testing
Robotics
Statistical process control or quality control
Supply chain management and logistics
System dynamics and policy planning
Systems simulation and stochastic processes
Time and motion study
Facilities design and work-space design
Quality engineering
System analysis and techniques
Differences in teaching
While industrial engineering as a formal degree has been around for years, consensus on what topics should be taught and studied differs across countries. For example, Turkey focuses on a very technical degree, while Denmark, Finland, and the United Kingdom offer management-focused, and thus less technical, degrees. The United States, meanwhile, focuses on case studies and group problem solving, and maintains a balance between the technical and non-technical sides.
Practicing engineers
Traditionally, a major aspect of industrial engineering was planning the layouts of factories and designing assembly lines and other manufacturing paradigms. And now, in lean manufacturing systems, industrial engineers work to eliminate wastes of time, money, materials, energy, and other resources.
Examples of where industrial engineering might be used include flow process charting, process mapping, designing an assembly workstation, strategizing for various operational logistics, consulting as an efficiency expert, developing a new financial algorithm or loan system for a bank, streamlining operation and emergency room location or usage in a hospital, planning complex distribution schemes for materials or products (referred to as supply-chain management), and shortening lines (or queues) at a bank, hospital, or a theme park.
Modern industrial engineers typically use predetermined motion time systems, computer simulation (especially discrete event simulation), along with extensive mathematical tools for modeling, such as mathematical optimization and queueing theory, and computational methods for system analysis, evaluation, and optimization. Industrial engineers also use the tools of data science and machine learning in their work owing to the strong relatedness of these disciplines with the field and the similar technical background required of industrial engineers (including a strong foundation in probability theory, linear algebra, and statistics, as well as having coding skills).
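For the queueing theory mentioned above, the simplest single-server model (M/M/1) has closed-form steady-state results. The sketch below computes them for an assumed arrival and service rate; the numbers are illustrative only.

```python
# M/M/1 queue sketch using the standard steady-state formulas; rates are assumed.

def mm1_metrics(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        raise ValueError("Unstable system: arrival rate must be below service rate")
    rho = arrival_rate / service_rate            # server utilization
    return {
        "utilization": rho,
        "mean number in system (L)": rho / (1 - rho),
        "mean number in queue (Lq)": rho ** 2 / (1 - rho),
        "mean time in system (W)": 1 / (service_rate - arrival_rate),
        "mean wait in queue (Wq)": rho / (service_rate - arrival_rate),
    }

# e.g. 9 customers per hour arriving at a server who handles 12 per hour
for name, value in mm1_metrics(arrival_rate=9, service_rate=12).items():
    print(f"{name}: {value:.3f}")
```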
| Technology | Disciplines | null |
41910874 | https://en.wikipedia.org/wiki/Leg%20mechanism | Leg mechanism | A leg mechanism (walking mechanism) is a mechanical system designed to provide a propulsive force by intermittent frictional contact with the ground. This is in contrast with wheels or continuous tracks which are intended to maintain continuous frictional contact with the ground. Mechanical legs are linkages that can have one or more actuators, and can perform simple planar or complex motion. Compared to a wheel, a leg mechanism is potentially better fitted to uneven terrain, as it can step over obstacles.
An early design for a leg mechanism called the Plantigrade Machine by Pafnuty Chebyshev was shown at the Exposition Universelle (1878). The original engravings for this leg mechanism are available. The design of the leg mechanism for the Ohio State Adaptive Suspension Vehicle (ASV) is presented in the 1988 book Machines that Walk. In 1996, W-B. Shieh presented a design methodology for leg mechanisms.
The artwork of Theo Jansen, see Jansen's linkage, has been particularly inspiring for the design of leg mechanisms, as well as the Klann patent, which is the basis for the leg mechanism of the Mondo Spider.
Design goals
horizontal speed as constant as possible while touching the ground (support phase)
while the foot is not touching the ground, it should move as fast as possible
constant torque/force input (or at least no extreme spikes/changes)
stride height (enough for clearance, not too much to conserve energy)
the foot has to touch the ground for at least half of the cycle for a two- or four-leg mechanism, or at least a third of the cycle for a three- or six-leg mechanism
minimized moving mass
vertical center of mass always inside the base of support
the speed of each leg or group of legs should be separately controllable for steering
the leg mechanism should allow forward and backward walking
Another possible design goal is that stride height, stride length, and similar parameters can be controlled by the operator. This is relatively easy to achieve with a hydraulic leg mechanism, but is not practicable with a crank-based leg mechanism.
The optimization has to be done for the whole vehicle – ideally the force/torque variation during a rotation should cancel each other out.
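As a rough illustration of how such goals can be checked numerically, the Python sketch below evaluates a made-up foot trajectory (flat support phase, half-elliptical swing) against two of the goals listed above: a duty factor of at least one half and near-constant horizontal speed during ground contact. The trajectory, stride length, and clearance are assumptions for illustration, not a description of any particular linkage.

# Minimal sketch, not from the article: checking a candidate foot trajectory
# against two design goals (ground contact for at least half of the cycle,
# near-constant horizontal speed while the foot is on the ground).
import math

STEPS = 360          # samples per full cycle
STRIDE = 0.30        # stride length in metres (assumed)
CLEARANCE = 0.05     # stride height in metres (assumed)

def foot_position(phase: float) -> tuple[float, float]:
    """Foot (x, y) for a cycle phase in [0, 1): flat support, elliptical swing."""
    if phase < 0.5:                          # support phase: foot on the ground
        return STRIDE / 2 - STRIDE * (phase / 0.5), 0.0
    swing = (phase - 0.5) / 0.5              # swing phase: half-ellipse back to front
    x = -STRIDE / 2 + STRIDE * swing
    y = CLEARANCE * math.sin(math.pi * swing)
    return x, y

points = [foot_position(i / STEPS) for i in range(STEPS)]
on_ground = [p[1] < 1e-9 for p in points]
duty_factor = sum(on_ground) / STEPS

# Horizontal speed (distance per cycle) sampled during the support phase only.
support_speeds = [abs(points[i + 1][0] - points[i][0]) * STEPS
                  for i in range(STEPS - 1) if on_ground[i] and on_ground[i + 1]]
print(f"duty factor: {duty_factor:.2f} (goal: >= 0.5 for a two/four-leg machine)")
print(f"support-phase speed spread: {max(support_speeds) - min(support_speeds):.4f} m/cycle")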
History
Richard Lovell Edgeworth tried in 1770 to construct a machine he called a "Wooden Horse", but was not successful.
Patents
Patents for leg mechanism designs range from rotating cranks to four-bar and six-bar linkages. See for example the following patents:
U.S. Patent No. 469,169 Figure Toy, F. O. Norton (1892).
U.S. Patent No. 1,146,700, Animated Toy, A. Gund (1915). A leg mechanism formed by an inverted slider-crank.
U.S. Patent No. 1,363,460, Walking Toy, J. A. Ekelund (1920). A leg mechanism formed by a rotating crank with extensions that contact the ground.
U.S. Patent No. 1,576,956, Quadruped Walking Mechanism, E. Dunshee (1926). A four-bar leg mechanism that shows the coupler curve forms the foot trajectory.
U.S. Patent No. 1,803,197, Walking Toy, P. C. Marie (1931). Another rotating crank leg mechanism.
U.S. Patent No. 1,819,029, Mechanical Toy Horse, J. St. C. King (1931). A crank-rocker leg mechanism with a one-way friction mechanism in the foot.
U.S. Patent No. 2,591,469, Animated Mechanical Toy, H. Saito (1952). An inverted slider crank mechanism for the front foot and crank-rocker for the back foot.
U.S. Patent No. 4,095,661, Walking Work Vehicle, J. R. Sturges (1978). A lambda mechanism combined with a parallelogram linkage to form a translating leg that follows the coupler curve.
U.S. Patent No. 6,260,862, Walking Device, J. C. Klann (2001). The coupler curve of a four-bar linkage guides the lower link of an RR serial chain to form a leg mechanism, known as the Klann linkage.
U.S. Patent No. 6,481,513, Single Actuator per Leg Robotic Hexapod, M. Buehler et al. (2002). A leg mechanism that consists of a single rotating crank.
U.S. Patent No. 6,488,560, Walking Apparatus, Y. Nishikawa (2002). Another rotating crank leg mechanism.
Gallery
Stationary
Walking
Complex mechanism
Shown above are only planar mechanisms, but there are also more complex mechanisms:
| Technology | Machinery and tools: General | null |
25034192 | https://en.wikipedia.org/wiki/Chemical%20reaction%20engineering | Chemical reaction engineering | Chemical reaction engineering (reaction engineering or reactor engineering) is a specialty in chemical engineering or industrial chemistry dealing with chemical reactors. Frequently the term relates specifically to catalytic reaction systems where either a homogeneous or heterogeneous catalyst is present in the reactor. Sometimes a reactor per se is not present by itself, but rather is integrated into a process, for example in reactive separations vessels, retorts, certain fuel cells, and photocatalytic surfaces. Solvent effects on reaction kinetics are also considered an integral part of the field.
Origin of chemical reaction engineering
Chemical reaction engineering as a discipline started in the early 1950s, driven by researchers at the Shell Amsterdam research center and the University of Delft. The term chemical reaction engineering was apparently coined by J.C. Vlugter while preparing the first European Symposium on Chemical Reaction Engineering, which was held in Amsterdam in 1957.
Discipline
Chemical reaction engineering aims at studying and optimizing chemical reactions in order to define the best reactor design. Hence, the interactions of flow phenomena, mass transfer, heat transfer, and reaction kinetics are of prime importance in order to relate reactor performance to feed composition and operating conditions. Although originally applied to the petroleum and petrochemical industries, its general methodology combining reaction chemistry and chemical engineering concepts allows optimization of a variety of systems where modeling or engineering of reactions is needed. Chemical reaction engineering approaches are indeed tailored for the development of new processes and the improvement of existing technologies.
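As a small, self-contained illustration of relating reactor performance to operating conditions, the Python sketch below compares the conversion predicted by the classical design equations for the two ideal reactors, a plug-flow reactor (PFR) and a continuous stirred-tank reactor (CSTR), for an isothermal first-order reaction; the rate constant and residence times are assumed values, not data from this article.

# Minimal sketch, assuming an isothermal, liquid-phase, first-order reaction
# A -> products with rate constant k: conversion achieved by the two classical
# ideal reactors at the same residence time tau. Numbers are illustrative only.
import math

def conversion_pfr(k: float, tau: float) -> float:
    """Plug-flow reactor: X = 1 - exp(-k*tau) for first-order kinetics."""
    return 1.0 - math.exp(-k * tau)

def conversion_cstr(k: float, tau: float) -> float:
    """Continuous stirred-tank reactor: X = k*tau / (1 + k*tau)."""
    return k * tau / (1.0 + k * tau)

k = 0.25                      # 1/min, assumed rate constant
for tau in (2.0, 5.0, 10.0):  # residence times in minutes
    print(f"tau = {tau:4.1f} min   PFR X = {conversion_pfr(k, tau):.3f}   "
          f"CSTR X = {conversion_cstr(k, tau):.3f}")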
Books
The Engineering of Chemical Reactions (2nd Edition), Lanny Schmidt, 2004, Oxford University Press
Chemical Reaction Engineering (3rd Edition), Octave Levenspiel, 1999, John Wiley & Sons
Elements of Chemical Reaction Engineering (4th Edition), H. Scott Fogler, 2005, Prentice Hall
Chemical Reactor Analysis and Design (2nd Edition), Gilbert F. Froment and Kenneth B. Bischoff, 1990, John Wiley & Sons
Fundamentals of Chemical Reaction Engineering (1st Edition), Mark E. Davis and Robert J. Davis, 2003, The McGraw-Hill Companies, Inc.
ISCRE Symposia
The most important series of symposia are the International Symposia on Chemical Reaction Engineering, or ISCRE conferences. These three-day conferences are held every two years, rotating among sites in North America, Europe, and the Asia-Pacific region on a six-year cycle. They bring together distinguished international researchers in reaction engineering, prominent industrial practitioners, and new researchers and students of this multifaceted field. ISCRE symposia are a unique gathering place for reaction engineers where research gains are consolidated and new frontiers explored. The state of the art of various sub-disciplines of reaction engineering is reviewed in a timely manner, and new research initiatives are discussed.
Awards in Chemical Reaction Engineering
The ISCRE Board administers two premier awards in chemical reaction engineering, one for senior and one for junior researchers, every three years.
Neal R. Amundson Award for Excellence in Chemical Reaction Engineering
In 1996, the ISCRE Board of Directors established the Neal R. Amundson Award for Excellence in Chemical Reaction Engineering. This award recognizes a pioneer in the field of chemical reaction engineering who has exerted a major influence on the theory or practice of the field, through originality, creativity, and novelty of concept or application. The award is made every three years at an ISCRE meeting and consists of a plaque and a check in the amount of $5000. The Amundson Award is generously supported by a grant from the ExxonMobil Corporation. Winners of the award include:
1996: Neal Amundson, Professor - University of Minnesota, University of Houston
1998: Rutherford Aris, Professor - University of Minnesota
2001: Octave Levenspiel, Professor - Oregon State University
2004: Vern Weekman, Mobil
2007: Gilbert Froment, Professor - Ghent University, Texas A&M University
2010: Dan Luss, Professor - University of Houston
2013: Lanny Schmidt, Professor - University of Minnesota
2016: Milorad P. Dudukovic, Professor - Washington University
2019: W. Harmon Ray, Professor - University of Wisconsin
2022: Announced at NASCRE-5
Rutherford Aris Young Investigator Award in Chemical Reaction Engineering
In 2016, the ISCRE, Inc. Board of Directors bestowed the first Rutherford Aris Young Investigator Award for Excellence in Chemical Reaction Engineering. This award recognizes outstanding contributions in experimental and/or theoretical reaction engineering research by investigators in the early stages of their careers. The recipient must be less than 40 years of age at the end of the calendar year in which the award is presented. The Aris Award is generously supported by a grant from UOP, L.L.C., a Honeywell Company. The award consists of a plaque, an honorarium of $3000, and up to $2000 in travel funds to present at an ISCRE/NASCRE conference and to present a lecture at UOP. This award complements ISCRE's other major honor, the Neal R. Amundson Award. Winners of the award include:
2016: Paul J. Dauenhauer, Professor - University of Minnesota, USA
2019: Yuriy Roman-Leschkov, Professor, MIT, USA.
2022: Announced at NASCRE-5
| Technology | Disciplines | null |
32077676 | https://en.wikipedia.org/wiki/Food%20chain | Food chain | A food chain is a linear network of links in a food web, often starting with an autotroph (such as grass or algae), also called a producer, and typically ending at an apex predator (such as grizzly bears or killer whales), detritivore (such as earthworms and woodlice), or decomposer (such as fungi or bacteria). It is not the same as a food web. A food chain depicts relations between species based on what they consume for energy in trophic levels, and they are most commonly quantified in length: the number of links between a trophic consumer and the base of the chain.
Food chains play an important role in many biological studies.
Food chain stability is very important for the survival of most species. When only one element is removed from the food chain it can result in extinction or immense decreases of survival of a species. Many food chains and food webs contain a keystone species, a species that has a large impact on the surrounding environment and that can directly affect the food chain. If a keystone species is removed it can set the entire food chain off balance.
The efficiency of a food chain depends on the energy first consumed by the primary producers. This energy then moves through the trophic levels.
History
Food chains were first discussed by al-Jahiz, a 9th-century Arab philosopher. The modern concepts of food chains and food webs were introduced by Charles Elton.
Food chain vs. food web
A food chain differs from a food web in that a food chain follows a direct, linear pathway of consumption and energy transfer. Natural interconnections between food chains make up a food web, which is non-linear and depicts interconnecting pathways of consumption and energy transfer.
Trophic levels
Food chain models typically predict that communities are controlled by predators at the top and plants (autotrophs or producers) at the bottom.
Thus, the foundation of the food chain typically consists of primary producers. Primary producers, or autotrophs, utilize energy derived from either sunlight or inorganic chemical compounds to create complex organic compounds, such as starch, for energy. Because the sun's light is necessary for photosynthesis, most life could not exist if the sun disappeared. Even so, it has recently been discovered that there are some forms of life, chemotrophs, that appear to gain all their metabolic energy from chemosynthesis driven by hydrothermal vents, thus showing that some life may not require solar energy to thrive. Chemosynthetic bacteria and archaea use hydrogen sulfide and methane from hydrothermal vents and cold seeps as an energy source (just as plants use sunlight) to produce carbohydrates; they form the base of the food chain in regions with little to no sunlight. Regardless of where the energy is obtained, a species that produces its own energy lies at the base of the food chain model, and is a critically important part of an ecosystem.
Higher trophic levels cannot produce their own energy and so must consume producers or other life that itself consumes producers. The higher trophic levels consist of consumers (secondary consumers, tertiary consumers, etc.). Consumers are organisms that eat other organisms. All organisms in a food chain, except the first organism, are consumers. Secondary consumers eat and obtain energy from primary consumers, tertiary consumers eat and obtain energy from secondary consumers, and so on.
At the highest trophic level is typically an apex predator, a consumer with no natural predators in the food chain model.
When any trophic level dies, detritivores and decomposers consume their organic material for energy and expel nutrients into the environment in their waste. Decomposers and detritivores break down the organic compounds into simple nutrients that are returned to the soil. These are the simple nutrients that plants require to create organic compounds. It is estimated that there are more than 100,000 different decomposers in existence.
Models of trophic levels also often model energy transfer between trophic levels. Primary consumers get energy from the producer and pass it to the secondary and tertiary consumers.
Studies
Food chains are vital in ecotoxicology studies, which trace the pathways and biomagnification of environmental contaminants. It is also necessary to consider interactions amongst different trophic levels to predict community dynamics; food chains are often the base level for theory development of trophic levels and community/ecosystem investigations.
Length
The length of a food chain is a continuous variable providing a measure of the passage of energy and an index of ecological structure that increases through the linkages from the lowest to the highest trophic (feeding) levels.
Food chains are often used in ecological modeling (such as a three-species food chain). They are simplified abstractions of real food webs, but complex in their dynamics and mathematical implications.
In its simplest form, the length of a chain is the number of links between a trophic consumer and the base of the web. The mean chain length of an entire web is the arithmetic average of the lengths of all chains in the food web. The food chain is an energy source diagram. The food chain begins with a producer, which is eaten by a primary consumer. The primary consumer may be eaten by a secondary consumer, which in turn may be consumed by a tertiary consumer. The tertiary consumers may sometimes become prey to the top predators known as the quaternary consumers. For example, a food chain might start with a green plant as the producer, which is eaten by a snail, the primary consumer. The snail might then be the prey of a secondary consumer such as a frog, which itself may be eaten by a tertiary consumer such as a snake which in turn may be consumed by an eagle. This simple view of a food chain with fixed trophic levels within a species: species A is eaten by species B, B is eaten by C, … is often contrasted by the real situation in which the juveniles of a species belong to a lower trophic level than the adults, a situation more often seen in aquatic and amphibious environments, e.g., in insects and fishes. This complexity was denominated metaphoetesis by G. E. Hutchinson, 1959.
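Chain length can be computed directly from a food web represented as a graph. The Python sketch below, using a small made-up web based on the plant–snail–frog–snake–eagle example above, enumerates every chain from the producer to a top consumer, counts the links in each, and averages them to give the mean chain length.

# Minimal sketch, not from the article: chain lengths in a tiny hypothetical
# food web. A chain's length is its number of feeding links, and the mean
# chain length is the arithmetic average over all chains in the web.

# prey -> list of predators that eat it (hypothetical web)
eats = {
    "grass":       ["snail", "grasshopper"],
    "snail":       ["frog"],
    "grasshopper": ["frog"],
    "frog":        ["snake"],
    "snake":       ["eagle"],
    "eagle":       [],
}

def chains_from(node, path=None):
    """Enumerate all chains from a basal species up to species with no predators."""
    path = (path or []) + [node]
    if not eats[node]:                     # top of this chain: no predators
        return [path]
    chains = []
    for predator in eats[node]:
        chains.extend(chains_from(predator, path))
    return chains

all_chains = chains_from("grass")
lengths = [len(chain) - 1 for chain in all_chains]   # links = species - 1
print(all_chains)
print("chain lengths:", lengths, "mean chain length:", sum(lengths) / len(lengths))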
Ecologists have formulated and tested hypotheses regarding the nature of ecological patterns associated with food chain length, such as length increasing with ecosystem volume, limited by the reduction of energy at each successive level, or reflecting habitat type.
Food chain length is important because the amount of energy transferred decreases as trophic level increases; generally only ten percent of the total energy at one trophic level is passed to the next, as the remainder is used in the metabolic process. There are usually no more than five trophic levels in a food chain. Humans are able to receive more energy by going back a level in the chain and consuming the food before; for example, they get more energy per pound from eating a salad than from eating an animal that ate lettuce.
Keystone species
A keystone species is a singular species within an ecosystem that others within the same ecosystem, or the entire ecosystem itself, rely upon. Keystone species are so vital for an ecosystem that without their presence, an ecosystem could transform or stop existing entirely. One way keystone species impact an ecosystem is through their presence in an ecosystem's food web and, by extension, a food chain within said ecosystem. Sea otters, a keystone species in Pacific coastal regions, prey on sea urchins. Without the presence of sea otters, sea urchins practice destructive grazing on kelp populations, which contributes to declines in coastal ecosystems within the northern Pacific regions. The presence of sea otters controls sea urchin populations and helps maintain kelp forests, which are vital for other species within the ecosystem.
| Biology and health sciences | Ecology | Biology |
29719481 | https://en.wikipedia.org/wiki/Extraction%20%28chemistry%29 | Extraction (chemistry) | Extraction in chemistry is a separation process consisting of the separation of a substance from a matrix. The distribution of a solute between two phases is an equilibrium condition described by partition theory. This is based on exactly how the analyte moves from the initial solvent into the extracting solvent. The term washing may also be used to refer to an extraction in which impurities are extracted from the solvent containing the desired compound.
Types of extraction
Liquid–liquid extraction
Acid-base extraction
Supercritical fluid extraction
Solid-liquid extraction
Solid-phase extraction
Maceration
Ultrasound-assisted extraction
Microwave-assisted extraction
Heat reflux extraction
Instant controlled pressure drop extraction (Détente instantanée contrôlée)
Perstraction
Laboratory applications and examples
Liquid-liquid extractions in the laboratory usually make use of a separatory funnel, where two immiscible phases are combined to separate a solute from one phase into the other, according to the relative solubility in each of the phases. Typically, this will be to extract organic compounds out of an aqueous phase and into an organic phase, but may also include extracting water-soluble impurities from an organic phase into an aqueous phase.
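As an aside on why extractions are often done in several small portions rather than one large one, the following Python sketch applies the standard partition relationship: with a partition coefficient K = C(organic)/C(aqueous), the fraction of solute remaining in the aqueous phase after n extractions with fresh solvent of volume V(org) each is (V(aq) / (V(aq) + K·V(org)))^n. The coefficient and volumes below are illustrative assumptions.

# Minimal sketch, assuming a simple partition coefficient K = C_org / C_aq:
# fraction of solute left in the aqueous phase after repeated extractions with
# fresh organic solvent. Numbers are illustrative only.

def fraction_remaining(k_partition: float, v_aq: float, v_org_each: float, n: int) -> float:
    """Fraction of solute still in the aqueous phase after n extractions."""
    per_step = v_aq / (v_aq + k_partition * v_org_each)
    return per_step ** n

K, V_AQ = 4.0, 100.0            # assumed partition coefficient and aqueous volume (mL)
one_big  = fraction_remaining(K, V_AQ, 90.0, 1)   # one 90 mL portion
three_sm = fraction_remaining(K, V_AQ, 30.0, 3)   # three 30 mL portions
print(f"one 90 mL extraction:    {1 - one_big:.1%} recovered")
print(f"three 30 mL extractions: {1 - three_sm:.1%} recovered")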
Common extractants may be arranged in increasing order of polarity according to the Hildebrand solubility parameter:
ethyl acetate < acetone < ethanol < methanol < acetone:water (7:3) < ethanol:water (8:2) < methanol:water (8:2) < water
Solid-liquid extractions at laboratory scales can use Soxhlet extractors. A solid sample containing the desired compound along with impurities is placed in the thimble. An extracting solvent is chosen in which the impurities are insoluble and the desired compound has at least limited solubility. The solvent is refluxed and condensed solvent falls into the thimble and dissolves the desired compound which then passes back through the filter into the flask. After extraction is complete the solvent can be removed and the desired product collected.
Everyday applications and examples
Boiling tea leaves in water extracts the tannins, theobromine, and caffeine out of the leaves and into the water, as an example of a solid-liquid extraction.
Decaffeination of tea and coffee is also an example of an extraction, where the caffeine molecules are removed from the tea leaves or coffee beans, often utilising supercritical fluid extraction with CO2 or standard solid-liquid extraction techniques.
| Physical sciences | Other separations | Chemistry |
29722340 | https://en.wikipedia.org/wiki/Astronomical%20filter | Astronomical filter | An astronomical filter is a telescope accessory consisting of an optical filter used by amateur astronomers to improve the detail and contrast of celestial objects, either for viewing or for photography. Research astronomers, on the other hand, use various band-pass filters for photometry on telescopes, in order to obtain measurements which reveal objects' astrophysical properties, such as stellar classification and placement of a celestial body on its Wien curve.
Most astronomical filters work by blocking a specific part of the color spectrum above and below a bandpass, significantly increasing the signal-to-noise ratio of the wavelengths of interest, and so making the object gain detail and contrast. Color filters transmit certain colors from the spectrum and are usually used for observation of the planets and the Moon, while polarizing filters work by adjusting the brightness and are usually used for the Moon. Broad-band and narrow-band filters transmit the wavelengths that are emitted by nebulae (by hydrogen and oxygen atoms) and are frequently used for reducing the effects of light pollution.
Filters have been used in astronomy at least since the solar eclipse of May 12, 1706.
Solar filters
White light filters
Solar filters block most of the sunlight to avoid any damage to the eyes. Proper filters are usually made from a durable glass or polymer film that transmits only 0.00001% of the light. For safety, solar filters must be securely fitted over the objective of a refracting telescope or aperture of a reflecting telescope so that the body does not heat up significantly.
Small solar filters threaded behind eyepieces do not block the radiation entering the scope body, causing the telescope to heat up greatly, and it is not unknown for them to shatter from thermal shock. Therefore, most experts do not recommend such solar filters for eyepieces, and some stockists refuse to sell them or remove them from telescope packages. According to NASA: "Solar filters designed to thread into eyepieces that are often provided with inexpensive telescopes are also unsafe. These glass filters can crack unexpectedly from overheating when the telescope is pointed at the Sun, and retinal damage can occur faster than the observer can move the eye from the eyepiece."
Solar filters are used to safely observe and photograph the Sun, which despite being white, may appear as a yellow-orange disk. A telescope with these filters attached can directly and properly view details of solar features, especially sunspots and granulation on the surface, as well as solar eclipses and transits of the inferior planets Mercury and Venus across the solar disk.
Narrowband filters
The Herschel Wedge is a prism-based device combined with a neutral-density filter that directs most of the heat and ultraviolet rays out of the telescope, generally giving better results than most filter types. The H-alpha filter transmits the H-alpha spectral line for viewing solar flares and prominences invisible through common filters. These H-alpha filters are much narrower than those used for night H-alpha observing (see Nebular filters below), passing only 0.05 nm (0.5 angstrom) for one common model, compared with 3 nm–12 nm or more for night filters. Because of the narrow bandpass and temperature-induced shifts, such filters are often tunable over a range of about ±0.05 nm.
NASA included the following filters on the Solar Dynamics Observatory, of which only one is visible to human eyes (450.0 nm): 450.0 nm, 170.0 nm, 160.0 nm, 33.5 nm, 30.4 nm, 19.3 nm, 21.1 nm, 17.1 nm, 13.1 nm, and 9.4 nm. These were chosen for temperature, instead of particular emission lines, as are many narrowband filters such as the H-alpha line mentioned above.
Color filters
Color filters work by absorption and transmission; a filter's color indicates which part of the spectrum it reflects and which it transmits. Filters can be used to increase contrast and enhance the details of the Moon and planets. Each color of the visible spectrum has a corresponding filter, and each color filter is used to bring out a certain lunar or planetary feature; for example, the #8 yellow filter is used to show Mars's maria and Jupiter's belts.
The Wratten system is the standard number system used to refer to the color filter types. It was first manufactured by Kodak in 1909.
Professional filters are also colored, but their bandpass centers are placed around other midpoints (such as in the UBVRI and Cousins systems).
Some of common color filters and their uses are:
Chromatic aberration filters: Used for reduction of the purplish halo caused by chromatic aberration of refracting telescopes. Such a halo can obscure features of bright objects, especially the Moon and planets. These filters have no effect on observing faint objects.
Red: Reduces sky brightness, particularly during daylight and twilight observations. Improves definition of maria, ice, and polar areas of Mars. Improves contrast of blue clouds against background of Jupiter and Saturn.
Deep yellow: Improves resolution of atmospheric features of Venus, Jupiter (especially in polar regions), and Saturn. Increases contrast of polar caps, clouds, ice and dust storms on Mars. Enhances comet tails.
Dark green: Improves cloud patterns on Venus. Reduces sky brightness during daylight observation of Venus. Increases contrast of ice and polar caps on Mars. Improves visibility of the Great Red Spot on Jupiter and other features in Jupiter atmosphere. Enhances white clouds and polar regions on Saturn.
Medium blue: Enhances contrast of Moon. Increases contrast of faint shading of Venus clouds. Enhances surface features, clouds, ice and dust storms on Mars. Enhances definition of boundaries between features in atmospheres of Jupiter and Saturn. Improves definition of comet gas tails.
Moon filters
Neutral density filters, also known in astronomy as Moon filters, are another approach for contrast enhancement and glare reduction. They work simply by blocking some of the object's light to enhance the contrast. Neutral density filters are mainly used in traditional photography, but are used in astronomy to enhance lunar and planetary observations.
Polarizing filters
Polarizing filters adjust the brightness of images to a better level for observing, but much less so than solar filters. With these types of filter, the range of transmission varies from 3% to 40%. They are usually used for the observation of the Moon, but may also be used for planetary observation. They consist of two polarizing layers in a rotating aluminum cell, which changes the amount of transmission of the filter by rotating them. This reduction in brightness and improvement in contrast can reveal the lunar surface features and details, especially when it is near full. Polarizing filters should not be used in place of solar filters designed specially for observing the sun.
Nebular filters
Narrowband
Narrow-band filters are astronomical filters which transmit only a narrow band of spectral lines from the spectrum (usually 22 nm bandwidth or less). They are mainly used for nebula observation. In the visible spectrum, emission nebulae mainly radiate light from doubly ionized oxygen, which emits near the 500 nm wavelength. These nebulae also radiate weakly at 486 nm, the hydrogen-beta line.
There are two main types of narrowband filters: ultra-high contrast (UHC) filters and specific emission line filters.
Specific Emission line filters
Specific emission line (or lines) filters are used to isolate lines of specific elements or molecules in order to see their distribution within nebulae. By combining the images from different filters, they may also be used to produce false-color images. Common filters are often used with the Hubble Space Telescope, forming the so-called HST palette, with colors assigned as follows: red = S-II; green = H-alpha; blue = O-III. These filters are commonly specified with a second figure in nm, which refers to how wide a band is passed and may cause them to exclude or include other lines. For example, an H-alpha filter at 656 nm may also pick up N-II (at 654–658 nm); filters about 3 nm wide will block most of the N-II emission.
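A rough way to see the effect of passband width is to model the filter as an idealized rectangular passband and check which lines fall inside it, as in the Python sketch below. This is a simplification for illustration (real filter profiles are not rectangular), and the line wavelengths are the approximate values quoted in this article.

# Minimal sketch, not a description of any particular commercial filter: a
# top-hat model of a bandpass filter (center wavelength, full width) used to
# check which emission lines fall inside the passband.

def passes(center_nm: float, width_nm: float, line_nm: float) -> bool:
    """True if an idealized rectangular passband includes the given line."""
    return abs(line_nm - center_nm) < width_nm / 2

lines = {"H-alpha": 656.3, "N-II (a)": 654.8, "N-II (b)": 658.3}

for width in (3.0, 10.0):                      # narrow vs. wider H-alpha filter
    included = [name for name, wl in lines.items() if passes(656.3, width, wl)]
    print(f"{width:4.1f} nm wide filter centered on H-alpha passes: {included}")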
Commonly used lines / filters are:
H-Alpha Hα / Ha (656 nm) from the Balmer series is emitted by HII Regions and is one of the stronger sources.
H-Beta Hβ / Hb (486 nm) from the Balmer series is visible from stronger sources.
O-III (496 nm and 501 nm) filters allow for both of the Oxygen-III lines to pass through. This is strong in many Emission nebulae.
S-II (672 nm) filters show the Sulfur-II line.
Less common lines/filters:
He-II (468 nm)
He-I: (587 nm)
O-I: (630 nm)
Ar-III: (713 nm)
CA-II Ca-K/Ca-H: (393 and 396 nm) For solar observing, shows the sun with the K and H Fraunhofer lines
N-II (658 nm and 654 nm) Often included in wider H-alpha filters
Methane (889 nm) allowing clouds to be seen on the gas giants, Venus and (with filter) the Sun.
Ultra-High Contrast filters
Known commonly as UHC filters, these filters allow multiple strong common emission lines to pass through while, like the similar light pollution reduction filters (see below), blocking most other light sources.
UHC filters typically pass wavelengths from 484 to 506 nm. This range transmits both the O-III and H-beta spectral lines, blocks a large fraction of light pollution, and brings out the details of planetary nebulae and most emission nebulae under a dark sky.
Broadband
The broadband, or light pollution reduction (LPR), filters are designed to block sodium and mercury vapor light, and also block natural skyglow such as auroral light. This allows observing nebulae from the city and light-polluted skies. Broadband filters differ from narrowband filters in the range of wavelengths they transmit. LED lighting is more broadband and so is not blocked, although white LEDs have a considerably lower output around 480 nm, which is close to the O-III and H-beta wavelengths. Broadband filters have a wider range because a narrow transmission range causes a fainter image of sky objects; since these filters are meant to reveal the details of nebulae from light-polluted skies, they have a wider transmission for more brightness. These filters are particularly designed for galaxy observation and photography, and are not useful with other deep-sky objects such as emission nebulae. However, they can still improve the contrast between the DSOs and the background sky, which may clarify the image.
| Technology | Telescope | null |
23536889 | https://en.wikipedia.org/wiki/Sequoia%20sempervirens | Sequoia sempervirens | Sequoia sempervirens () is the sole living species of the genus Sequoia in the cypress family Cupressaceae (formerly treated in Taxodiaceae). Common names include coast redwood, coastal redwood and California redwood. It is an evergreen, long-lived, monoecious tree living 1,200–2,200 years or more. This species includes the tallest living trees on Earth, reaching up to in height (without the roots) and up to in diameter at breast height. These trees are also among the longest-living trees on Earth. Before commercial logging and clearing began by the 1850s, this massive tree occurred naturally in an estimated along much of coastal California (excluding southern California where rainfall is not sufficient) and the southwestern corner of coastal Oregon within the United States. Being the tallest tree species, with a small range and an extremely long lifespan, many redwoods are preserved in various state and national parks; many of the largest specimens have their own official names.
The name sequoia sometimes refers to the subfamily Sequoioideae, which includes S. sempervirens along with Sequoiadendron (giant sequoia) and Metasequoia (dawn redwood). Here, the term redwood on its own refers to the species covered in this article but not to the other two species.
Description
The coast redwood normally reaches a height of , but will be more than in extraordinary circumstances, with a trunk diameter of . Historically, the American naturalist, physician and founder member of the California Academy of Sciences, William P. Gibbons (1812–1897) described in 1893 the hollow shell of a coast redwood in the Oakland Hills of Alameda County with diameter of at breast height. This tree's girth is rivalled by the "Fieldbrook Stump" of Humboldt County with a diameter of at from the ground.
Coast redwoods have a conical crown, with horizontal to slightly drooping branches. The trunk is remarkably straight. The bark can be very thick, up to , and quite soft and fibrous, with a bright red-brown color when freshly exposed (hence the name redwood), weathering darker. The root system is composed of shallow, wide-spreading lateral roots.
The leaves are variable, being long and flat on young trees and shaded lower branches in older trees. The leaves are scalelike, long on shoots in full sun in the upper crown of older trees, with a full range of transition between the two extremes. They are dark green above and have two blue-white stomatal bands below. Leaf arrangement is spiral, but the larger shade leaves are twisted at the base to lie in a flat plane for maximum light capture.
The species is monoecious, with pollen and seed cones on the same plant. The seed cones are ovoid, long, with 15–25 spirally arranged scales; pollination is in late winter with maturation about 8–9 months after. Each cone scale bears three to seven seeds, each seed long and broad, with two wings wide. The seeds are released when the cone scales dry and open at maturity. The pollen cones are ovular and long.
Its genetic makeup is unusual among conifers, being a hexaploid (6n) and possibly allopolyploid (AAAABB). Both the mitochondrial and chloroplast genomes of the redwood are paternally inherited.
Taxonomy
Scottish botanist David Don described the redwood as Taxodium sempervirens, the "evergreen Taxodium", in his colleague Aylmer Bourke Lambert's 1824 work A description of the genus Pinus. Austrian botanist Stephan Endlicher erected the genus Sequoia in his 1847 work Synopsis coniferarum, giving the redwood its current binomial name of Sequoia sempervirens. It is unknown how Endlicher derived the name Sequoia. See Sequoia Etymology.
The redwood is one of three living species, each in its own genus, in the subfamily Sequoioideae. Molecular studies have shown that the three are each other's closest relatives, generally with the redwood and giant sequoia (Sequoiadendron giganteum) as each other's closest relatives.
However, Yang and colleagues in 2010 queried the polyploid state of the redwood and speculate that it may have arisen as an ancient hybrid between ancestors of the giant sequoia and dawn redwood (Metasequoia). Using two different single copy nuclear genes, LFY and NLY, to generate phylogenetic trees, they found that Sequoia was clustered with Metasequoia in the tree generated using the LFY gene, but with Sequoiadendron in the tree generated with the NLY gene. Further analysis strongly supported the hypothesis that Sequoia was the result of a hybridization event involving Metasequoia and Sequoiadendron. Thus, Yang and colleagues hypothesize that the inconsistent relationships among Metasequoia, Sequoia, and Sequoiadendron could be a sign of reticulate evolution (in which two species hybridize and give rise to a third) among the three genera. However, the long evolutionary history of the three genera (the earliest fossil remains being from the Jurassic) make resolving the specifics of when and how Sequoia originated once and for all a difficult matter—especially since it in part depends on an incomplete fossil record.
Names
The species name "sempervirens" means "evergreen", thought to be because of its previous placement in the same genus as Taxodium distichum (baldcypress) of the southeastern USA. Unlike coast redwood, baldcypress loses its leaves in winter. The common name "redwood", applied to both the coast redwood and the giant redwood, is a reference to the red heartwood of the trees. Common names that refer to Sequoia sempervirens alone include "California redwood", "coastal redwood", "coastal sequoia", and "coast redwood".
Distribution and habitat
Coast redwoods occupy a narrow strip of land approximately in length and in width along the Pacific coast of North America; the most southerly native grove is in Monterey County, California, and the most northerly groves are in extreme southwestern Oregon. The aforementioned qualification of "native" is because the species was introduced to various locations in Victoria, Australia, in the 1930s for experimental reasons and has since flourished. The prevailing native elevation range is above sea level, occasionally down to 0 and up to about . They usually grow in the mountains where precipitation from the incoming moisture off the ocean is greater. The tallest and oldest trees are found in deep valleys and gullies, where year-round streams can flow, and fog drip is regular. The terrain also made it harder for loggers to get to the trees and to get them out after felling. The trees above the fog layer, above about , are shorter and smaller due to the drier, windier, and colder conditions. In addition, Douglas-fir, pine, and tanoak often crowd out redwoods at these elevations. Few redwoods grow close to the ocean, due to intense salt spray, sand, and wind. Coalescence of coastal fog accounts for a considerable part of the trees' water needs. Fog in the 21st century is, however, reduced from what it was in the prior century, which is a problem that may be compounded by climate change.
The northern boundary of its range is marked by two groves on mountain slopes along the north side of the Chetco River, which is on the western fringe of the Klamath Mountains, near the California–Oregon border. The northernmost grove is located within Alfred A. Loeb State Park and Siskiyou National Forest at the approximate coordinates 42°07'36"N 124°12'17"W. The southern boundary of its range is the Los Padres National Forest's Silver Peak Wilderness in the Santa Lucia Mountains of the Big Sur area of Monterey County, California. The southernmost grove is in the Southern Redwood Botanical Area, just north of the national forest's Salmon Creek trailhead and near the San Luis Obispo County line.
The largest and tallest populations are in California's Redwood National and State Parks (Del Norte and Humboldt counties) and Humboldt Redwoods State Park, with the overall majority located in the large Humboldt County.
The ancient range of the genus is considerably greater, with relatives of the coast redwood living in Europe and Asia prior to the Quaternary geologic period. In recent geologic time there have been considerable shifts in the coast redwood's range in North America. Coast redwood bark has been found in the La Brea Tar Pits, showing that 25,000–40,000 years before the present, during the last ice age, redwood trees grew as far south as the Los Angeles area. The authors of a 2022 paper suggest, "Were it not for the remarkable ability to sprout after fire, many southern forests may have lost their Sequoia component long ago." As to the previous redwood range to the north, an upright fossil stump of a coast redwood on a beach in central Oregon was documented north of the current range.
Assisted migration
The ability of Coast Redwood to live for more than a thousand years, along with its unusual capacity to resprout from its root crown when felled by natural or human causes, have earned this species the label of "carbon-sequestration champion." Its potential to contribute toward climate change mitigation, as well as its demonstrated ability to thrive in coastal regions of the Pacific Northwest, led to the formation of a citizen group in Seattle, Washington undertaking assisted migration of this species hundreds of miles north of its native range.
In contrast to cautionary statements made by forestry professionals assessing other tree species for assisted migration, the citizens involved with the group known as PropagationNation had met with little controversy until in 2023 a national news outlet published a lengthy article that cast a favorable light on their efforts. The New York Times Magazine wrote:Not wanting to cause ecological problems by planting the trees across the Pacific Northwest, [Philip] Stielstra would eventually contact one of the foremost experts on the coast redwood, a botanist and forest ecologist named Stephen Sillett, at Cal Poly Humboldt, and ask if moving redwoods north was safe. Sillett thought planting redwoods around Seattle was a fantastic idea. ("It's not like it's going to escape and become a nuisance species," Sillett told me, before adding, "it just has so many benefits.") Another factor encouraged Stielstra too: Millions of years ago, redwoods — or their close relatives — grew across the Pacific Northwest. By moving them, Stielstra reasoned, he was helping the magnificent trees regain lost territory.
In December 2023, the Associated Press exclusively reported criticism from professionals in the region and nationally: While beginning to favor experiments in assisted population migration of more southerly genetics of the main native timber tree, Douglas-fir, professionals were united against large-scale plantings of California redwoods into the Pacific Northwest. The next month, January 2024, carried a regional news article that, once again, showed strong support as well as bold statements by the group's founder.
Even before the controversy developed in Washington state, professionals in Canada were documenting horticultural plantings of the California species already in place in southwestern British Columbia. In 2022 a Canadian Forestry Service publication used northward horticultural plantings, along with a review of research detailing redwood's paleobiogeography and current range conditions, as grounds for proposing that Canada's Vancouver Island already offered "narrow strips of optimal habitat" for extending the range of coast redwood. The authors point to a topographical "bottleneck" north of the California border that could have impeded northward migration during the Holocene. The bottleneck entails a lack of lowland passages through the Oregon Coast Range north of the Chetco River, coupled with the absence of coastal landscapes beyond storm salt-spray and tsunami inundation — for which this conifer species is highly intolerant.
Ecology
Fog and flood adaptations
The native area provides a unique environment with heavy seasonal rains up to annually. Cool coastal air and fog drip keep the forest consistently damp year round. Several factors, including the heavy rainfall, create a soil with fewer nutrients than the trees need, causing them to depend heavily on the entire biotic community of the forest, and making efficient recycling of dead trees especially important. This forest community includes coast Douglas-fir, Pacific madrone, tanoak, western hemlock, and other trees, along with a wide variety of ferns, mosses, mushrooms, and redwood sorrel. Redwood forests provide habitat for a variety of amphibians, birds, mammals, and reptiles. Old-growth redwood stands provide habitat for the federally threatened spotted owl and the California-endangered marbled murrelet.
The height of S. sempervirens is closely tied to fog availability; taller trees become less abundant as fog becomes less frequent. As S. sempervirens' height increases, transporting water via water potential to the leaves becomes increasingly difficult due to gravity. Despite the high rainfall that the region receives (up to 100 cm), the leaves in the upper canopy are perpetually stressed for water. This water stress is exacerbated by long droughts in the summer. Water stress is believed to cause the morphological changes in the leaves, stimulating reduced leaf length and increased leaf succulence. To supplement their water needs, redwoods use frequent summer fog events. Fog water is absorbed through multiple pathways. Leaves directly take in fog from the surrounding air through the epidermal tissue, bypassing the xylem. Coast redwoods also absorb water directly through their bark. The uptake of water through leaves and bark repairs and reduces the severity of xylem embolisms, which occur when cavitations form in the xylem preventing the transport of water and nutrients. Fog may also collect on redwood leaves, drip to the forest floor, and be absorbed by the tree's roots. This fog drip may constitute 30% of the total water used by a tree in a year.
Redwoods often grow in flood-prone areas. Sediment deposits can form impermeable barriers that suffocate tree roots, and unstable soil in flooded areas often causes trees to lean to one side, increasing the risk of the wind toppling them. Immediately after a flood, redwoods grow their existing roots upwards into recently deposited sediment layers. A second root system then develops from adventitious buds on the newly buried trunk and the old root system dies. To counter lean, redwoods increase wood production on the vulnerable side, creating a supporting buttress. These adaptations create forests of almost exclusively redwood trees in flood-prone regions.
Pest and pathogen resistance
Coast redwoods are resistant to insect attack, fungal infection, and rot. These properties are conferred by concentrations of terpenoids and tannic acid in redwood leaves, roots, bark, and wood. Despite these chemical defenses, redwoods are still subject to insect infestations; none, however, are capable of killing a healthy tree. Bark is so thick on the bole that bark beetles cannot enter there. However, the canopy branches have thin bark (see photo at right) that native bark beetles are able to bore into for egg-laying and larval growth via tunnels.
Redwoods also face herbivory from mammals: black bears are reported to consume the inner bark of small redwoods, and black-tailed deer are known to eat redwood sprouts.
The oldest known coast redwood is about 2,200 years old; many others in the wild exceed 600 years. The numerous claims of older redwoods are incorrect. Because of their seemingly timeless lifespans, coast redwoods were deemed the "everlasting redwood" at the turn of the century; in Latin, sempervirens means "ever green" or "everlasting". Redwoods must endure various environmental disturbances to attain such great ages.
Fire adaptations
In response to forest fires, the trees have developed various adaptations. The thick, fibrous bark of coast redwoods is extremely fire-resistant; it grows to at least a foot thick and protects mature trees from fire damage. In addition, the redwoods contain little flammable pitch or resin. Fires, moreover, appear to actually benefit redwoods by causing substantial mortality in competing species, while having only minor effects on redwood. Burned areas are favorable to the successful germination of redwood seeds. A study published in 2010, the first to compare post-wildfire survival and regeneration of redwood and associated species, concluded that fires of all severity increase the relative abundance of redwood, and higher-severity fires provide the greatest benefit.
Self-pruning of its lower limbs as height is gained is a crucial adaptation to prevent ground fires from rising into the canopy, where branch bark is thin and leaves are vulnerable. In the millions of years that predated human evolution, this level of protection worked well against natural fires originating from lightning strikes.
When the first humans arrived in North America, redwoods thrived in regions where ground fires were intentionally set by indigenous populations on a seasonal basis. However, when peoples arrived from other continents in the past few centuries, indigenous fire practices were disallowed — even in the few places where the first peoples were permitted to continue living. Flammable brush and young trees thus accumulated.
Clearcut logging further hampered a return to a fire-resistant tall canopy. Governmental policies aimed at suppressing all natural and human-caused fires, even in parks and wilderness areas, amplified the accumulation of dense undergrowth and woody debris. Thus, even a naturally occurring ground fire could threaten to spread upwards and become a canopy fire spreading uncontrollably over a widening area.
Reproduction
Coast redwood reproduces both sexually by seed and asexually by sprouting of buds, layering, or lignotubers. Seed production begins at 10–15 years of age. Cones develop in the winter and mature by fall. In the early stages, the cones look like flowers, and are commonly called "flowers" by professional foresters, although this is not strictly correct. Coast redwoods produce many cones, with redwoods in new forests producing thousands per year. The cones themselves hold 90–150 seeds, but viability of seed is low, typically well below 15% with one estimate of average rates being 3 to 10 percent. Viability does increase with age: trees under 20 years old have a viability of about 1%, and trees do not generally reach the highest levels of viability until age 250. The rates decrease as the tree starts to get very old, with trees over 1,200 years old not reaching viability rates over 3%. The low viability may discourage seed predators, which do not want to waste time sorting chaff (empty seeds) from edible seeds. Successful germination often requires a fire or flood, reducing competition for seedlings. The winged seeds are small and light, weighing 3.3–5.0 mg (200–300 seeds/g; 5,600–8,500/ounce). The wings are not effective for wide dispersal, and seeds are dispersed by wind an average of only from the parent tree. Seedlings are susceptible to fungal infection and predation by banana slugs, brush rabbits, and nematodes. Most seedlings do not survive their first three years. However, those that become established grow rapidly, with young trees known to reach tall in 20 years. When canopy space is not available, small trees can remain suppressed for up to 400 years before accelerating their growth rate.
Coast redwoods can also reproduce asexually by layering or sprouting from the root crown, stump, or even fallen branches; if a tree falls over, it generates a row of new trees along the trunk, so many trees naturally grow in a straight line. Sprouts originate from dormant or adventitious buds at or under the surface of the bark. The dormant sprouts are stimulated when the main adult stem gets damaged or starts to die. Many sprouts spontaneously erupt and develop around the circumference of the tree trunk. Within a short period after sprouting, each sprout develops its own root system, with the dominant sprouts forming a ring of trees around the parent root crown or stump. This ring of trees is called a "fairy ring". Sprouts can achieve heights of in a single growing season.
Redwoods may also reproduce using burls. A burl is a woody lignotuber that commonly appears on a redwood tree below the soil line, though usually within in depth from the soil surface. Coast redwoods develop burls as seedlings from the axils of their cotyledon, a trait that is extremely rare in conifers. When provoked by damage, dormant buds in the burls sprout new shoots and roots. Burls are also capable of sprouting into new trees when detached from the parent tree, though exactly how this happens is yet to be studied. Shoot clones commonly sprout from burls and are often turned into decorative hedges when found in suburbia.
Cultivation and uses
Coast redwood is one of the most valuable timber species in the lumbering industry. In California, of redwood forest are logged, virtually all of it second growth. Though many entities have existed in the cutting and management of redwoods, perhaps none has had a more storied role than the Pacific Lumber Company (1863–2008) of Humboldt County, California, where it owned and managed over of forests, primarily redwood. Coast redwood lumber is highly valued for its beauty, light weight, and resistance to decay. Its lack of resin makes it absorb water and resist fire.
P. H. Shaughnessy, Chief Engineer of the San Francisco Fire Department wrote:
In the recent great fire of San Francisco, that began April 18th, 1906, we succeeded in finally stopping it in nearly all directions where the unburned buildings were almost entirely of frame construction, and if the exterior finish of these buildings had not been of redwood lumber, I am satisfied that the area of the burned district would have been greatly extended.
Because of its impressive resistance to decay, redwood was extensively used for railroad ties and trestles throughout California. Many of the old ties have been recycled for use in gardens as borders, steps, house beams, etc. Redwood burls are used in the production of table tops, veneers, and turned goods.
The Yurok people, who occupied the region before European settlement, regularly burned ground cover in redwood forests to bolster tanoak populations from which they harvested acorns, to maintain forest openings, and to boost populations of useful plant species such as those for medicine or basketmaking.
Extensive logging of redwoods began in the early nineteenth century. The trees were felled by ax and saw onto beds of tree limbs and shrubs to cushion their fall. Stripped of their bark, the logs were transported to mills or waterways by oxen or horse. Loggers then burned the accumulated tree limbs, shrubs, and bark. The repeated fires favored secondary forests of primarily redwoods as redwood seedlings sprout readily in burned areas. The introduction of steam engines let crews drag logs through long skid trails to nearby railroads, furthering the reach of loggers beyond the land near rivers previously used to transport trees. This method of harvesting, however, disturbed large amounts of soil, producing secondary-growth forests of species other than redwood such as Douglas-fir, grand fir, and western hemlock. After World War II, trucks and tractors gradually replaced steam engines, giving rise to two harvesting approaches: clearcutting and selection harvesting. Clearcutting involved felling all the trees in a particular area. It was encouraged by tax laws that exempted all standing timber from taxation if 70% of trees in the area were harvested. Selection logging, by contrast, called for the removal of 25% to 50% of mature trees in the hopes that the remaining trees would allow for future growth and reseeding. This method, however, encouraged growth of other tree species, converting redwood forests into mixed forests of redwood, grand fir, Sitka spruce, and western hemlock. Moreover, the trees left standing were often felled by windthrow; that is, they were often blown over by the wind.
The coast redwood is naturalized in New Zealand, notably at Whakarewarewa Forest, Rotorua. Redwood has been grown in New Zealand plantations for more than 100 years, and those planted in New Zealand have higher growth rates than those in California, mainly because of even rainfall distribution through the year.
Other areas of successful cultivation outside of the native range include Great Britain, Italy, France, Haida Gwaii, middle elevations of Hawaii, Hogsback in South Africa, the Knysna Afromontane forests in the Western Cape, Grootvadersbosch Forest Reserve near Swellendam, South Africa and the Tokai Arboretum on the slopes of Table Mountain above Cape Town, a small area in central Mexico (Jilotepec), and the southeastern United States from eastern Texas to Maryland. It also does well in the Pacific Northwest (Oregon, Washington, and British Columbia), far north of its northernmost native range in southwestern Oregon. Coast redwood trees were used in a display at Rockefeller Center and then given to Longhouse Reserve in East Hampton, Long Island, New York, and these have now been living there for over twenty years and have survived at .
This fast-growing tree can be grown as an ornamental specimen in those large parks and gardens that can accommodate its massive size. It has gained the Royal Horticultural Society's Award of Garden Merit.
Statistics
Fairly solid evidence indicates that coast redwoods were the world's tallest trees before logging, with numerous historical specimens reportedly over . The theoretical maximum potential height of coast redwoods is thought to be limited to between , as evapotranspiration is insufficient to transport water to leaves beyond this range. Further studies have indicated that this maximum requires fog, which is prevalent in these trees' natural environment.
A tree reportedly in length was felled in Sonoma County by the Murphy Brothers saw mill in the 1870s, another claimed to be and in diameter was cut down near Eureka in 1914, and the Lindsay Creek Tree was documented to have a height of when it was uprooted and felled by a storm in 1905. A tree reportedly tall was felled in November 1886 by the Elk River Mill and Lumber Company in Humboldt County, yielding 79,736 marketable board feet from 21 cuts. In 1893, a Redwood cut at the Eel River, near Scotia, reportedly measured in length, and in girth. However, limited evidence corroborates these historical measurements.
Today, trees over are common, and many are over . The current tallest tree is the Hyperion tree, measuring . The tree was discovered in Redwood National Park during mid-2006 by Chris Atkins and Michael Taylor, and is thought to be the world's tallest living organism. The previous record holder was the Stratosphere Giant in Humboldt Redwoods State Park at (as measured in 2004). Until it fell in March 1991, the "Dyerville Giant" was the record holder. It, too, stood in Humboldt Redwoods State Park and was high and estimated to be 1,600 years old. This fallen giant has been preserved in the park.
The largest known living coast redwood is Grogan's Fault, discovered in 2014 by Chris Atkins and Mario Vaden in Redwood National Park, with a main trunk volume of at least . Other high-volume coast redwoods include Iluvatar, with a main trunk volume of , and the Lost Monarch, with a main trunk volume of .
Albino redwoods are mutants that cannot manufacture chlorophyll. About 230 examples (including growths and sprouts) are known to exist, reaching heights of up to . These trees survive like parasites, obtaining food from green parent trees. While similar mutations occur sporadically in other conifers, no cases are known of such individuals surviving to maturity in any other conifer species. Recent research reports that albino redwoods can store higher concentrations of toxic metals, going so far as to compare them to organs or "waste dumps".
List of tallest trees
Heights of the tallest coast redwoods are measured yearly by experts. Even with recent discoveries of tall coast redwoods above , it is likely that no taller trees will be discovered.
Diameter is measured at above average ground level (at breast height). Details of the precise locations for most of the tallest trees were not announced to the general public for fear of causing damage to the trees and the surrounding habitat. The tallest coast redwood easily accessible to the public is the National Geographic Tree, immediately trailside in the Tall Trees Grove of Redwood National Park.
List of largest trees
The following list shows the largest S. sempervirens by volume known as of 2001.
Calculating the volume of a standing tree is the practical equivalent of calculating the volume of an irregular cone, and is subject to error for various reasons. This is partly due to technical difficulties in measurement, and variations in the shape of trees and their trunks. Measurements of trunk circumference are taken at only a few predetermined heights up the trunk, and assume that the trunk is circular in cross-section, and that taper between measurement points is even. Also, only the volume of the trunk (including the restored volume of basal fire scars) is taken into account, and not the volume of wood in the branches or roots. The volume measurements also do not take cavities into account. Most coast redwoods with volumes greater than represent ancient fusions of two or more separate trees, which makes determining whether a coast redwood has a single stem or multiple stems difficult. Starting in 2014, more record-breaking coast redwood trees were discovered. The largest disclosed was a massive redwood called Grogan's Fault/Spartan, which has been measured to have a volume of 38,300 cubic feet. In 2021, during a presentation titled Redwoods 101 run by Henry Cowell Redwoods State Park, an even larger redwood was revealed, allegedly surpassed in size only by three giant sequoias. This tree is popularly known as 'Hail Storm' and has a volume of 44,750 cubic feet.
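As a rough illustration of the method just described, the short sketch below (Python; the heights and girths are hypothetical values, not measurements of any actual tree) estimates trunk volume by stacking conical frustums between measurement points, under the same simplifying assumptions noted above: circular cross-sections and even taper between points.

    from math import pi

    def trunk_volume(heights_m, circumferences_m):
        # Treat the trunk as a stack of conical frustums between successive
        # measurement heights; assumes circular cross-sections and even taper.
        radii = [c / (2 * pi) for c in circumferences_m]
        volume = 0.0
        for i in range(len(heights_m) - 1):
            h = heights_m[i + 1] - heights_m[i]
            r1, r2 = radii[i], radii[i + 1]
            volume += pi * h * (r1 * r1 + r1 * r2 + r2 * r2) / 3
        return volume

    # Hypothetical measurement points: height above ground (m) and girth (m).
    heights = [1.4, 10.0, 30.0, 60.0, 90.0]
    girths = [23.0, 18.5, 14.0, 9.0, 4.5]
    print(round(trunk_volume(heights, girths)), "cubic metres")

A real measurement would also restore the volume lost to basal fire scars and cover the full height of the trunk, as described above, so this is only a sketch of the geometry involved.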
Details of the precise locations for most of the largest trees were not announced to the general public for fear of causing damage to the trees and the surrounding habitat. The largest coast redwood easily accessible to the public is Iluvatar, which stands prominently about 5 meters (16 ft) to the southeast of the Foothill Trail of Prairie Creek Redwoods State Park.
Canopy layers
Redwood canopy soil forms from leaf litter and other organic material shed from upper portions of the tree, which accumulates and decomposes on larger branches. These pockets of soil require substantial moisture to become established, but once saturated they retain water very well. Redwoods can send roots into these wet soils, providing a water source far removed from the forest floor. This creates a unique ecosystem within old-growth trees, full of fungi, vascular plants, and small creatures. One such creature is the clouded salamander, which has been found as high as 40 meters up in the canopy; evidence shows that these salamanders breed and are born in the canopy soil of redwood trees. Because of the sheer height of these trees, the canopy layer went almost unexplored for the last century. Mosses and other plants that grow on the canopy branches are known as epiphytes. These epiphyte species differ in their characteristics, but all are well adapted to the harsh conditions of the treetops. Over hundreds of years, the trees have been shaped in ways that make it possible for these epiphytes to survive through the winter rain and the fall fog.
Other notable examples
The Blossom Rock Navigation Trees were two especially tall sequoias located in the Berkeley Hills used as a navigational aid by sailors to avoid the treacherous Blossom Rock near Yerba Buena Island.
The Crannell Creek Giant was documented to have a trunk volume of at least , about 32% larger than Grogan's Fault and 17% larger than General Sherman, the current largest tree. It was felled around 1945.
The Lindsay Creek Tree was documented to have a height of and a trunk volume of at least when it was uprooted and felled by a storm in 1905. If these measurements are to be believed, the Lindsay Creek Tree was about taller than Hyperion, the current tallest tree, 213% larger than Grogan's Fault, and 171% larger than General Sherman.
Old Survivor, also known as the Grandfather, is the last remaining old-growth coastal redwood of the redwood forest that populated the Oakland Hills. The tree was seeded sometime between 1549 and 1554.
One of the largest redwood stumps ever found, measuring in diameter, is in the Berkeley Hills of Oakland, California, in the Roberts Regional Recreation Area section of Redwood Regional Park.
| Biology and health sciences | Cupressaceae | Plants |
23538713 | https://en.wikipedia.org/wiki/Bat | Bat | Bats are flying mammals of the order Chiroptera (). With their forelimbs adapted as wings, they are the only mammals capable of true and sustained flight. Bats are more agile in flight than most birds, flying with their very long spread-out digits covered with a thin membrane or patagium. The smallest bat, and arguably the smallest extant mammal, is Kitti's hog-nosed bat, which is in length, across the wings and in mass. The largest bats are the flying foxes, with the giant golden-crowned flying fox (Acerodon jubatus) reaching a weight of and having a wingspan of .
The second largest order of mammals after rodents, bats comprise about 20% of all classified mammal species worldwide, with over 1,400 species. These were traditionally divided into two suborders: the largely fruit-eating megabats, and the echolocating microbats. But more recent evidence has supported dividing the order into Yinpterochiroptera and Yangochiroptera, with megabats as members of the former along with several species of microbats. Many bats are insectivores, and most of the rest are frugivores (fruit-eaters) or nectarivores (nectar-eaters). A few species feed on animals other than insects; for example, the vampire bats feed on blood. Most bats are nocturnal, and many roost in caves or other refuges; it is uncertain whether bats have these behaviours to escape predators. Bats are present throughout the world, with the exception of extremely cold regions. They are important in their ecosystems for pollinating flowers and dispersing seeds; many tropical plants depend entirely on bats for these services.
Bats provide humans with some direct benefits, at the cost of some disadvantages. Bat dung has been mined as guano from caves and used as fertiliser. Bats consume insect pests, reducing the need for pesticides and other insect management measures. Some bats are also predators of mosquitoes, suppressing the transmission of mosquito-borne diseases. Bats are sometimes numerous enough and close enough to human settlements to serve as tourist attractions, and they are used as food across Asia and the Pacific Rim. However, fruit bats are frequently considered pests by fruit growers. Due to their physiology, bats are one type of animal that acts as a natural reservoir of many pathogens, such as rabies; and since they are highly mobile, social, and long-lived, they can readily spread disease among themselves. If humans interact with bats, these traits become potentially dangerous to humans.
Depending on the culture, bats may be symbolically associated with positive traits, such as protection from certain diseases or risks, rebirth, or long life, but in the West, bats are popularly associated with darkness, malevolence, witchcraft, vampires, and death.
Etymology
An older English name for bats is flittermouse, which matches their name in other Germanic languages (for example German and Swedish ), related to the fluttering of wings. Middle English had , most likely cognate with Old Swedish (), which may have undergone a shift from -k- to -t- (to Modern English bat) influenced by Latin , . The word bat was probably first used in the early 1570s. The name Chiroptera derives from – , and – , .
Phylogeny and taxonomy
Evolution
The delicate skeletons of bats do not fossilise well; it is estimated that only 12% of bat genera that lived have been found in the fossil record. Most of the oldest known bat fossils were already very similar to modern microbats, such as Archaeopteropus (32 million years ago). The oldest known bat fossils include Archaeonycteris praecursor and Altaynycteris aurora (55–56 million years ago), both known only from isolated teeth. The oldest complete bat skeleton is Icaronycteris gunnelli (52 million years ago), known from two skeletons discovered in Wyoming. The extinct bats Palaeochiropteryx tupaiodon and Hassianycteris kumari, both of which lived 48 million years ago, are the first fossil mammals whose colouration has been discovered: both were reddish-brown.
Bats were formerly grouped in the superorder Archonta, along with the treeshrews (Scandentia), colugos (Dermoptera), and primates. Modern genetic evidence now places bats in the superorder Laurasiatheria, with its sister taxon as Ferungulata, which includes carnivorans, pangolins, odd-toed ungulates, and even-toed ungulates. One study places Chiroptera as a sister taxon to odd-toed ungulates (Perissodactyla).
The flying primate hypothesis proposed that when adaptations to flight are removed, megabats are allied to primates by anatomical features not shared with microbats and thus flight evolved twice in mammals. Genetic studies have strongly supported the monophyly of bats and the single origin of mammal flight.
Coevolutionary evidence
An independent molecular analysis trying to establish the dates when bat ectoparasites (bedbugs) evolved came to the conclusion that bedbugs similar to those known today (all major extant lineages, all of which feed primarily on bats) had already diversified and become established over 100 mya (i.e., long before the oldest records for bats, 52 mya), suggesting that they initially all evolved on non-bat hosts and "bats were colonized several times independently, unless the evolutionary origin of bats has been grossly underestimated." Fleas, as a group, are quite old (most flea families formed around the end of the Cretaceous), but no analyses have provided estimates for the age of the flea lineages associated with bats. The oldest known members of a different lineage of bat ectoparasites (bat flies), however, are from roughly 20 mya, well after the origin of bats. The bat-ectoparasitic earwig family Arixeniidae has no fossil record, but is not believed to originate more than 23 mya.
Inner systematics
Genetic evidence indicates that megabats originated during the early Eocene, and belong within the four major lines of microbats. Two new suborders have been proposed; Yinpterochiroptera includes the Pteropodidae, or megabat family, as well as the families Rhinolophidae, Hipposideridae, Craseonycteridae, Megadermatidae, and Rhinopomatidae. Yangochiroptera includes the other families of bats (all of which use laryngeal echolocation), a conclusion supported by a 2005 DNA study. A 2013 phylogenomic study supported the two new proposed suborders.
The 2003 discovery of an early fossil bat from the 52-million-year-old Green River Formation, Onychonycteris finneyi, indicates that flight evolved before echolocative abilities. Onychonycteris had claws on all five of its fingers, whereas modern bats have at most two claws on two digits of each hand. It also had longer hind legs and shorter forearms, similar to climbing mammals that hang under branches, such as sloths and gibbons. This palm-sized bat had short, broad wings, suggesting that it could not fly as fast or as far as later bat species. Instead of flapping its wings continuously while flying, Onychonycteris probably alternated between flaps and glides in the air. This suggests that this bat did not fly as much as modern bats, but flew from tree to tree and spent most of its time climbing or hanging on branches. The distinctive features of the Onychonycteris fossil also support the hypothesis that mammalian flight most likely evolved in arboreal locomotors, rather than terrestrial runners. This model of flight development, commonly known as the "trees-down" theory, holds that bats first flew by taking advantage of height and gravity to drop down on to prey, rather than running fast enough for a ground-level take off.
The molecular phylogeny was controversial, as it pointed to microbats not having a unique common ancestry, which implied that some seemingly unlikely transformations occurred. The first is that laryngeal echolocation evolved twice in bats, once in Yangochiroptera and once in the rhinolophoids. The second is that laryngeal echolocation had a single origin in Chiroptera, was subsequently lost in the family Pteropodidae (all megabats), and later evolved as a system of tongue-clicking in the genus Rousettus. Analyses of the sequence of the vocalization gene FoxP2 were inconclusive on whether laryngeal echolocation was lost in the pteropodids or gained in the echolocating lineages. Echolocation probably first derived in bats from communicative calls. The Eocene bats Icaronycteris (52 million years ago) and Palaeochiropteryx had cranial adaptations suggesting an ability to detect ultrasound. This may have been used at first mainly to forage on the ground for insects and map out their surroundings in their gliding phase, or for communicative purposes. After the adaptation of flight was established, it may have been refined to target flying prey by echolocation. Analyses of the hearing gene Prestin seem to favour the idea that echolocation developed independently at least twice, rather than being lost secondarily in the pteropodids, but ontogenic analysis of the cochlea supports that laryngeal echolocation evolved only once.
Classification
Bats are placental mammals. After rodents, they are the largest order, making up about 20% of mammal species. In 1758, Carl Linnaeus classified the seven bat species he knew of in the genus Vespertilio in the order Primates. Around twenty years later, the German naturalist Johann Friedrich Blumenbach gave them their own order, Chiroptera. Since then, the number of described species has risen to over 1,400, traditionally classified as two suborders: Megachiroptera (megabats), and Microchiroptera (microbats/echolocating bats). Not all megabats are larger than microbats. Several characteristics distinguish the two groups. Microbats use echolocation for navigation and finding prey, but megabats apart from those in the genus Rousettus do not. Accordingly, megabats have well-developed eyesight. Megabats have a claw on the second finger of the forelimb. The external ears of microbats do not close to form a ring; the edges are separated from each other at the base of the ear. Megabats eat fruit, nectar, or pollen, while most microbats eat insects; others feed on fruit, nectar, pollen, fish, frogs, small mammals, or blood.
Below is a table following the bat classification of families recognized by various authors of the ninth volume of Handbook of the Mammals of the World, published in 2019:
Anatomy and physiology
Skull and dentition
The head and teeth shape of bats can vary by species. In general, megabats have longer snouts, larger eye sockets and smaller ears, giving them a more dog-like appearance, which is the source of their nickname of "flying foxes". Among microbats, longer snouts are associated with nectar-feeding, while vampire bats have reduced snouts to accommodate large incisors and canines.
Small insect-eating bats can have as many as 38 teeth, while vampire bats have only 20. Bats that feed on hard-shelled insects have fewer but larger teeth with longer canines and more robust lower jaws than species that prey on softer bodied insects. In nectar-feeding bats, the canines are long while the cheek-teeth are reduced. In fruit-eating bats, the cusps of the cheek teeth are adapted for crushing. The upper incisors of vampire bats lack enamel, which keeps them razor-sharp. The bite force of small bats is generated through mechanical advantage, allowing them to bite through the hardened armour of insects or the skin of fruit.
Wings and flight
Bats are the only mammals capable of sustained flight, as opposed to gliding, as in the flying squirrel. The fastest bat, the Mexican free-tailed bat (Tadarida brasiliensis), can achieve a ground speed of .
The finger bones of bats are much more flexible than those of other mammals, owing to their flattened cross-section and to low levels of calcium near their tips. The elongation of bat digits, a key feature required for wing development, is due to the upregulation of bone morphogenetic proteins (Bmps). During embryonic development, the gene controlling Bmp signalling, Bmp2, is subjected to increased expression in bat forelimbs, resulting in the extension of the manual digits. This crucial genetic alteration helps create the specialized limbs required for powered flight. The relative proportions of extant bat forelimb digits show no significant differences from those of Eocene fossil bats, suggesting that bat wing morphology has been conserved for over fifty million years. During flight, the bones undergo bending and shearing stress; the bending stresses felt are smaller than in terrestrial mammals, but the shearing stress is larger. The wing bones of bats have a slightly lower breaking stress point than those of birds.
As in other mammals, and unlike in birds, the radius is the main component of the forearm. Bats have five elongated digits, which all radiate around the wrist. The thumb points forward and supports the leading edge of the wing, and the other digits support the tension held in the wing membrane. The second and third digits go along the wing tip, allowing the wing to be pulled forward against aerodynamic drag, without having to be thick as in pterosaur wings. The fourth and fifth digits go from the wrist to the trailing edge, and repel the bending force caused by air pushing up against the stiff membrane. Due to their flexible joints, bats are more maneuverable and more dexterous than gliding mammals.
The wings of bats are much thinner and consist of more bones than the wings of birds, allowing bats to maneuver more accurately than the latter, and fly with more lift and less drag. By folding the wings in toward their bodies on the upstroke, they save 35 percent energy during flight. The membranes are delicate, tearing easily, but can regrow, and small tears heal quickly. The surface of the wings is equipped with touch-sensitive receptors on small bumps called Merkel cells, also found on human fingertips. These sensitive areas are different in bats, as each bump has a tiny hair in the center, making it even more sensitive and allowing the bat to detect and adapt to changing airflow; the primary use is to judge the most efficient speed at which to fly, and possibly also to avoid stalls. Insectivorous bats may also use tactile hairs to help perform complex maneuvers to capture prey in flight.
The patagium is the wing membrane; it is stretched between the arm and finger bones, and down the side of the body to the hind limbs and tail. This skin membrane consists of connective tissue, elastic fibres, nerves, muscles, and blood vessels. The muscles keep the membrane taut during flight. The extent to which the tail of a bat is attached to a patagium can vary by species, with some having completely free tails or even no tails. The skin on the body of the bat, which has one layer of epidermis and dermis, as well as hair follicles, sweat glands and a fatty subcutaneous layer, is very different from the skin of the wing membrane. The presence of hair follicles and sweat glands in the patagium varies with the bat species. The patagium is an extremely thin double layer of epidermis; these layers are separated by a connective tissue center, rich with collagen and elastic fibers. In some bat species, sweat glands are present within this connective tissue. Furthermore, where hair follicles are present, they help the bat adjust to sudden flight maneuvers. In bat embryos, apoptosis (programmed cell death) affects only the hindlimbs, while the forelimbs retain webbing between the digits that forms into the wing membranes. Unlike birds, whose stiff wings deliver bending and torsional stress to the shoulders, bats have a flexible wing membrane that can resist only tension. To achieve flight, a bat exerts force inwards at the points where the membrane meets the skeleton, so that an opposing force balances it on the wing edges perpendicular to the wing surface. This adaptation does not permit bats to reduce their wingspans, unlike birds, which can partly fold their wings in flight, radically reducing the wing span and area for the upstroke and for gliding. Hence bats cannot travel over long distances as birds can.
Nectar- and pollen-eating bats can hover, in a similar way to hummingbirds. The sharp leading edges of the wings can create vortices, which provide lift. The vortex may be stabilized by the animal changing its wing curvatures.
Roosting and gaits
When not flying, bats hang upside down from their feet, a posture known as roosting. The femurs are attached at the hips in a way that allows them to bend outward and upward in flight. The ankle joint can flex to allow the trailing edge of the wings to bend downwards. This does not permit many movements other than hanging or clambering up trees. Most megabats roost with the head tucked towards the belly, whereas most microbats roost with the neck curled towards the back. This difference is reflected in the structure of the cervical or neck vertebrae in the two groups, which are clearly distinct. Tendons allow bats to lock their feet closed when hanging from a roost. Muscular power is needed to let go, but not to grasp a perch or when holding on.
When on the ground, most bats can only crawl awkwardly. A few species such as the New Zealand lesser short-tailed bat and the common vampire bat are agile on the ground. Both species make lateral gaits (the limbs move one after the other) when moving slowly but vampire bats move with a bounding gait (all limbs move in unison) at greater speeds, the folded up wings being used to propel them forward. Vampire bats likely evolved these gaits to follow their hosts while short-tailed bats developed in the absence of terrestrial mammal competitors. Enhanced terrestrial locomotion does not appear to have reduced their ability to fly.
Internal systems
Bats have an efficient circulatory system. They seem to make use of particularly strong venomotion, a rhythmic contraction of venous wall muscles. In most mammals, the walls of the veins provide mainly passive resistance, maintaining their shape as deoxygenated blood flows through them, but in bats they appear to actively support blood flow back to the heart with this pumping action. Since their bodies are relatively small and lightweight, bats are not at risk of blood flow rushing to their heads when roosting.
Bats possess a highly adapted respiratory system to cope with the demands of powered flight, an energetically taxing activity that requires a large continuous throughput of oxygen. In bats, the relative alveolar surface area and pulmonary capillary blood volume are larger than in most other small quadrupedal mammals. During flight the respiratory cycle has a one-to-one relationship with the wing-beat cycle. Because of the limits of mammalian lungs, bats cannot maintain high-altitude flight.
Working the flight muscles of bats takes a lot of energy and an efficient circulatory system. The muscles engaged in flight require about double the energy supply of the locomotory muscles of mammals that do not fly. In parallel with energy consumption, blood oxygen levels of flying animals are twice those of terrestrially locomoting mammals. As the blood supply controls the amount of oxygen supplied throughout the body, the circulatory system must respond accordingly. Therefore, compared to a terrestrial mammal of the same relative size, the bat's heart can be up to three times larger, and pump more blood. Cardiac output is directly derived from heart rate and stroke volume of the blood; an active microbat can reach a heart rate of 1000 beats per minute.
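As a simple illustration of that relationship (the stroke volume here is a hypothetical figure, not one given in the source): cardiac output equals heart rate multiplied by stroke volume, so a microbat sustaining 1000 beats per minute with a stroke volume of 0.05 mL would pump roughly 1000 × 0.05 mL = 50 mL of blood per minute.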
With its extremely thin membranous tissue, a bat's wing can significantly contribute to the organism's total gas exchange efficiency. Because of the high energy demand of flight, the bat's body meets those demands by exchanging gas through the patagium of the wing. When the bat has its wings spread it allows for an increase in surface area to volume ratio. The surface area of the wings is about 85% of the total body surface area, suggesting the possibility of a useful degree of gas exchange. The subcutaneous vessels in the membrane lie very close to the surface and allow for the diffusion of oxygen and carbon dioxide.
The digestive system of bats has varying adaptations depending on the species of bat and its diet. As in other flying animals, food is processed quickly and effectively to keep up with the energy demand. Insectivorous bats may have certain digestive enzymes to better process insects, such as chitinase to break down chitin, which is a large component of insects. Vampire bats, probably due to their diet of blood, are the only vertebrates that do not have the enzyme maltase, which breaks down malt sugar, in their intestinal tract. Nectivorous and frugivorous bats have more maltase and sucrase enzymes than insectivorous bats, to cope with the higher sugar contents of their diet.
The adaptations of the kidneys of bats vary with their diets. Carnivorous and vampire bats consume large amounts of protein and can output concentrated urine; their kidneys have a thin cortex and long renal papillae. Frugivorous bats lack that ability and have kidneys adapted for electrolyte-retention due to their low-electrolyte diet; their kidneys accordingly have a thick cortex and very short conical papillae. Bats have higher metabolic rates associated with flying, which lead to an increased respiratory water loss. Their large wings are composed of the highly vascularized membranes, increasing the surface area, and leading to cutaneous evaporative water loss. Water helps maintain their ionic balance in their blood, thermoregulation system, and removal of wastes and toxins from the body via urine. They are also susceptible to blood urea poisoning if they do not receive enough fluid.
The structure of the uterine system in female bats can vary by species, with some having two uterine horns while others have a single mainline chamber.
Senses
Echolocation
Microbats and a few megabats emit ultrasonic sounds to produce echoes. The sound intensity of these echoes depends on subglottic pressure. The bats' cricothyroid muscle controls the orientation pulse frequency, which is an important function. This muscle is located inside the larynx and is the only tensor muscle capable of aiding phonation. By comparing the outgoing pulse with the returning echoes, bats can gather information on their surroundings. This allows them to detect prey in darkness. Some bat calls can reach 140 decibels. Microbats use their larynx to emit echolocation signals through the mouth or the nose. Microbat calls range in frequency from 14,000 to well over 100,000 Hz, extending well beyond the range of human hearing (between 20 and 20,000 Hz). Various groups of bats have evolved fleshy extensions around and above the nostrils, known as nose-leaves, which play a role in sound transmission.
In low-duty cycle echolocation, bats can separate their calls and returning echoes by time. They have to time their short calls to finish before echoes return. The delay of the returning echoes allows the bat to estimate the range to their prey. In high-duty cycle echolocation, bats emit a continuous call and separate pulse and echo in frequency using the Doppler effect of their motion in flight. The shift of the returning echoes yields information relating to the motion and location of the bat's prey. These bats must deal with changes in the Doppler shift due to changes in their flight speed. They have adapted to change their pulse emission frequency in relation to their flight speed so echoes still return in the optimal hearing range.
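As a rough numerical illustration of the two quantities discussed above, the sketch below (Python; the echo delay, call frequency, and flight speed are hypothetical values, and the Doppler expression is the standard low-speed approximation for a moving emitter and receiver with a stationary reflector, not a formula taken from the text) computes the range implied by an echo delay and the approximate upward shift of a returning echo.

    # Illustrative sketch only; all values are hypothetical.
    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

    def range_from_delay(delay_s):
        # Sound travels out and back, so one-way distance is half the round trip.
        return SPEED_OF_SOUND * delay_s / 2

    def echo_doppler_shift(call_hz, bat_speed_ms):
        # Low-speed approximation: the echo returns higher by a factor of ~2*v/c.
        return call_hz * 2 * bat_speed_ms / SPEED_OF_SOUND

    print(range_from_delay(0.02))             # 20 ms echo delay -> about 3.4 m
    print(echo_doppler_shift(80_000.0, 5.0))  # 80 kHz call at 5 m/s -> about 2300 Hz

A bat compensating for a shift of that size would lower its emission frequency by roughly the same amount so that the returning echoes stay within its optimal hearing range.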
In addition to echolocating prey, bat ears are sensitive to sounds made by their prey, such as the fluttering of moth wings. The complex geometry of ridges on the inner surface of bat ears helps to sharply focus echolocation signals, and to passively listen for any other sound produced by the prey. These ridges can be regarded as the acoustic equivalent of a Fresnel lens, and exist in a large variety of unrelated animals, such as the aye-aye, lesser galago, bat-eared fox, mouse lemur, and others. Bats can estimate the elevation of their target using the interference patterns from the echoes reflecting from the tragus, a flap of skin in the external ear.
By repeated scanning, bats can mentally construct an accurate image of the environment in which they are moving and of their prey. Some species of moth have exploited this, such as the tiger moths, which produce aposematic ultrasound signals to warn bats that they are chemically protected and therefore distasteful. Moth species including the tiger moth can produce signals to jam bat echolocation. Many moth species have a hearing organ called a tympanum, which responds to an incoming bat signal by causing the moth's flight muscles to twitch erratically, sending the moth into random evasive manoeuvres.
Vision
The eyes of most microbat species are small and poorly developed, leading to poor visual acuity, but no species is blind. Most microbats have mesopic vision, meaning that they can detect light only in low levels, whereas other mammals have photopic vision, which allows colour vision. Microbats may use their vision for orientation and while travelling between their roosting grounds and feeding grounds, as echolocation is effective only over short distances. Some species can detect ultraviolet (UV). As the bodies of some microbats have distinct coloration, they may be able to discriminate colours.
Megabat species often have eyesight as good as, if not better than, human vision. Their eyesight is adapted to both night and daylight vision, including some colour vision.
Magnetoreception
Microbats make use of magnetoreception, in that they have a high sensitivity to the Earth's magnetic field, as birds do. Microbats use a polarity-based compass, meaning that they differentiate north from south, unlike birds, which use the strength of the magnetic field to differentiate latitudes, which may be used in long-distance travel. The mechanism is unknown but may involve magnetite particles.
Thermoregulation
Most bats are homeothermic (having a stable body temperature), the exception being the vesper bats (Vespertilionidae), the horseshoe bats (Rhinolophidae), the free-tailed bats (Molossidae), and the bent-winged bats (Miniopteridae), which extensively use heterothermy (where body temperature can vary). Compared to other mammals, bats have a high thermal conductivity. The wings are filled with blood vessels, and lose body heat when extended. At rest, they may wrap their wings around themselves to trap a layer of warm air. Smaller bats generally have a higher metabolic rate than larger bats, and so need to consume more food in order to maintain homeothermy.
Bats may avoid flying during the day to prevent overheating in the sun, since their dark wing-membranes absorb solar radiation. Bats may not be able to dissipate heat if the ambient temperature is too high; they use saliva to cool themselves in extreme conditions. Among megabats, the flying fox Pteropus hypomelanus uses saliva and wing-fanning to cool itself while roosting during the hottest part of the day. Among microbats, the Yuma myotis (Myotis yumanensis), the Mexican free-tailed bat, and the pallid bat (Antrozous pallidus) cope with temperatures up to by panting, salivating, and licking their fur to promote evaporative cooling; this is sufficient to dissipate twice their metabolic heat production.
Bats also possess a system of sphincter valves on the arterial side of the vascular network that runs along the edge of their wings. When fully open, these allow oxygenated blood to flow through the capillary network across the wing membrane; when contracted, they shunt flow directly to the veins, bypassing the wing capillaries. This allows bats to control how much heat is exchanged through the flight membrane, allowing them to release heat during flight. Many other mammals use the capillary network in oversized ears for the same purpose.
Torpor
Torpor, a state of decreased activity where the body temperature and metabolism decreases, is especially useful for bats, as they use a large amount of energy while active, depend upon an unreliable food source, and have a limited ability to store fat. They generally drop their body temperature in this state to , and may reduce their energy expenditure by 50 to 99%. Tropical bats may use it to avoid predation, by reducing the amount of time spent on foraging and thus reducing the chance of being caught by a predator. Megabats were generally believed to be homeothermic, but three species of small megabats, with a mass of about , have been known to use torpor: the common blossom bat (Syconycteris australis), the long-tongued nectar bat (Macroglossus minimus), and the eastern tube-nosed bat (Nyctimene robinsoni). Torpid states last longer in the summer for megabats than in the winter.
During hibernation, bats enter a torpid state and decrease their body temperature for 99.6% of their hibernation period; even during periods of arousal, when they return their body temperature to normal, they sometimes enter a shallow torpid state, known as "heterothermic arousal". Some bats become dormant during higher temperatures to keep cool in the summer months.
Heterothermic bats during long migrations may fly at night and go into a torpid state roosting in the daytime. Unlike migratory birds, which fly during the day and feed during the night, nocturnal bats have a conflict between travelling and eating. The energy saved reduces their need to feed, and also decreases the duration of migration, which may prevent them from spending too much time in unfamiliar places, and decrease predation. In some species, pregnant individuals may not use torpor.
Size
The smallest bat is Kitti's hog-nosed bat (Craseonycteris thonglongyai), which is long with a wingspan and weighs . It is also arguably the smallest extant species of mammal, next to the Etruscan shrew. The largest bats are a few species of Pteropus megabats and the giant golden-crowned flying fox (Acerodon jubatus), which can weigh with a wingspan of . Larger bats tend to use lower frequencies and smaller bats higher for echolocation; high-frequency echolocation is better at detecting smaller prey. Small prey may be absent in the diets of large bats as they are unable to detect them. The adaptations of a particular bat species can directly influence what kinds of prey are available to it.
Ecology
Flight has enabled bats to become one of the most widely distributed groups of mammals. Apart from the Arctic, the Antarctic and a few isolated oceanic islands, bats exist in almost every habitat on Earth. Tropical areas tend to have more species than temperate ones. Different species select different habitats during different seasons, ranging from seasides to mountains and deserts, but they require suitable roosts. Bat roosts can be found in hollows, crevices, foliage, and even human-made structures, and include "tents" the bats construct with leaves. Megabats generally roost in trees. Most microbats are nocturnal and megabats are typically diurnal or crepuscular. Microbats are known to exhibit diurnal behaviour in temperate regions during summer when there is insufficient night time to forage, and in areas where there are few avian predators during the day.
In temperate areas, some microbats migrate hundreds of kilometres to winter hibernation dens; others pass into torpor in cold weather, rousing and feeding when warm weather allows insects to be active. Others retreat to caves for winter and hibernate for as much as six months. Microbats rarely fly in rain; it interferes with their echolocation, and they are unable to hunt.
Food and feeding
Different bat species have different diets, including insects, nectar, pollen, fruit and even vertebrates. Megabats are mostly fruit, nectar and pollen eaters. Due to their small size, high-metabolism and rapid burning of energy through flight, bats must consume large amounts of food for their size. Insectivorous bats may eat over 120 percent of their body weight per day, while frugivorous bats may eat over twice their weight. They can travel significant distances each night, exceptionally as much as in the spotted bat (Euderma maculatum), in search of food. Bats use a variety of hunting strategies. Bats get most of their water from the food they eat; many species also drink from water sources like lakes and streams, flying over the surface and dipping their tongues into the water.
The Chiroptera as a whole are in the process of losing the ability to synthesise vitamin C. In a test of 34 bat species from six major families, including major insect- and fruit-eating bat families, all were found to have lost the ability to synthesise it, and this loss may derive from a common bat ancestor, as a single mutation. At least two species of bat, the frugivorous bat (Rousettus leschenaultii) and the insectivorous bat (Hipposideros armiger), have retained their ability to produce vitamin C.
Insects
Most microbats, especially in temperate areas, prey on insects. The diet of an insectivorous bat may span many species, including flies, mosquitos, beetles, moths, grasshoppers, crickets, termites, bees, wasps, mayflies and caddisflies. Large numbers of Mexican free-tailed bats (Tadarida brasiliensis) fly hundreds of metres above the ground in central Texas to feed on migrating moths. Species that hunt insects in flight, like the little brown bat (Myotis lucifugus), may catch an insect in mid-air with the mouth, and eat it in the air or use their tail membranes or wings to scoop up the insect and carry it to the mouth. The bat may also take the insect back to its roost and eat it there. Slower moving bat species, such as the brown long-eared bat (Plecotus auritus) and many horseshoe bat species, may take or glean insects from vegetation or hunt them from perches. Insectivorous bats living at high latitudes have to consume prey with higher energetic value than tropical bats.
Fruit and nectar
Fruit eating, or frugivory, is found in both major suborders. Bats prefer ripe fruit, pulling it off the trees with their teeth. They fly back to their roosts to eat the fruit, sucking out the juice and spitting the seeds and pulp out onto the ground. This helps disperse the seeds of these fruit trees, which may take root and grow where the bats have left them, and many species of plants depend on bats for seed dispersal. The Jamaican fruit bat (Artibeus jamaicensis) has been recorded carrying fruits weighing or even as much as .
Nectar-eating bats have acquired specialised adaptations. These bats possess long muzzles and long, extensible tongues covered in fine bristles that aid them in feeding on particular flowers and plants. The tube-lipped nectar bat (Anoura fistulata) has the longest tongue of any mammal relative to its body size. This is beneficial to them in terms of pollination and feeding. Their long, narrow tongues can reach deep into the long cup shape of some flowers. When the tongue retracts, it coils up inside the rib cage. Because of these features, nectar-feeding bats cannot easily turn to other food sources in times of scarcity, making them more prone to extinction than other types of bat. Nectar feeding also aids a variety of plants, since these bats serve as pollinators, as pollen gets attached to their fur while they are feeding. Around 500 species of flowering plant rely on bat pollination and thus tend to open their flowers at night. Many rainforest plants depend on bat pollination.
Vertebrates
Some bats prey on other vertebrates, such as fish, frogs, lizards, birds and mammals. The fringe-lipped bat (Trachops cirrhosus), for example, is skilled at catching frogs. These bats locate large groups of frogs by tracking their mating calls, then plucking them from the surface of the water with their sharp canine teeth. The greater noctule bat can catch birds in flight. Some species, like the greater bulldog bat (Noctilio leporinus), hunt fish. They use echolocation to detect small ripples on the water's surface, swoop down and use specially enlarged claws on their hind feet to grab the fish, then take their prey to a feeding roost and consume it. At least two species of bat are known to feed on other bats: the spectral bat (Vampyrum spectrum), and the ghost bat (Macroderma gigas).
Blood
A few species, specifically the common, white-winged, and hairy-legged vampire bats, feed only on animal blood (hematophagy). The common vampire bat typically feeds on large mammals such as cattle; the hairy-legged and white-winged vampires feed on birds. Vampire bats target sleeping prey and can detect deep breathing. Heat sensors in the nose help them to detect blood vessels near the surface of the skin. They pierce the animal's skin with their teeth, biting away a small flap, and lap up the blood with their tongues, which have lateral grooves adapted to this purpose. The blood is kept from clotting by an anticoagulant in the saliva.
Predators, parasites, and diseases
Bats are subject to predation from birds of prey, such as owls, hawks, and falcons, and at roosts from terrestrial predators able to climb, such as cats. Low-flying bats are vulnerable to crocodiles. Twenty species of tropical New World snakes are known to capture bats, often waiting at the entrances of refuges, such as caves, for bats to fly past. J. Rydell and J. R. Speakman argue that bats evolved nocturnality during the early and middle Eocene period to avoid predators. The evidence is thought by some zoologists to be equivocal so far.
Like most mammals, bats are hosts to a number of internal and external parasites. Among ectoparasites, bats carry fleas and mites, as well as specific parasites such as bat bugs and bat flies (Nycteribiidae and Streblidae). Bats are among the few non-aquatic mammalian orders that do not host lice, possibly due to competition from more specialised parasites that occupy the same niche.
White nose syndrome is a condition associated with the deaths of millions of bats in the Eastern United States and Canada. The disease is named after a white fungus, Pseudogymnoascus destructans, found growing on the muzzles, ears, and wings of affected bats. The fungus is mostly spread from bat to bat, and causes the disease. The fungus was first discovered in central New York State in 2006 and spread quickly to the entire Eastern US north of Florida; mortality rates of 90–100% have been observed in most affected caves. New England and the mid-Atlantic states have, since 2006, witnessed entire species completely extirpated and others with numbers that have gone from the hundreds of thousands, even millions, to a few hundred or less. Nova Scotia, Quebec, Ontario, and New Brunswick have witnessed identical die offs, with the Canadian government making preparations to protect all remaining bat populations in its territory. Scientific evidence suggests that longer winters where the fungus has a longer period to infect bats result in greater mortality. In 2014, the infection crossed the Mississippi River, and in 2017, it was found on bats in Texas.
Bats are natural reservoirs for a large number of zoonotic pathogens, including rabies, endemic in many bat populations, histoplasmosis both directly and in guano, Nipah and Hendra viruses, and possibly the ebola virus, whose natural reservoir is yet unknown. Their high mobility, broad distribution, long life spans, substantial sympatry (range overlap) of species, and social behaviour make bats favourable hosts and vectors of disease. Reviews have found different answers as to whether bats have more zoonotic viruses than other mammal groups. One 2015 review found that bats, rodents, and primates all harbored significantly more zoonotic viruses (which can be transmitted to humans) than other mammal groups, though the differences among the aforementioned three groups were not significant (bats have no more zoonotic viruses than rodents and primates). Another 2020 review of mammals and birds found that the identity of the taxonomic groups did not have any impact on the probability of harboring zoonotic viruses. Instead, more diverse groups had greater viral diversity.
They seem to be highly resistant to many of the pathogens they carry, suggesting a degree of adaptation in their immune systems. Their interactions with livestock and pets, including predation by vampire bats, accidental encounters, and the scavenging of bat carcasses, compound the risk of zoonotic transmission. Bats are implicated in the emergence of severe acute respiratory syndrome (SARS) in China, since they serve as natural hosts for coronaviruses, several from a single cave in Yunnan, one of which developed into the SARS virus. However, they neither cause nor spread COVID-19.
Behaviour and life history
Social structure
Some bats lead solitary lives, while others live in colonies of more than a million. For instance, Mexican free-tailed bats fly more than one thousand miles to Bracken Cave, which is home to an astonishing twenty million of the species every March to October, whereas the mouse-eared bat lives an almost completely solitary life. Living in large colonies lessens the risk to an individual of predation. Temperate bat species may swarm at hibernation sites as autumn approaches. This may serve to introduce young to hibernation sites, signal reproduction in adults and allow adults to breed with those from other groups.
Several species have a fission-fusion social structure, where large numbers of bats congregate in one roosting area, along with breaking up and mixing of subgroups. Within these societies, bats are able to maintain long-term relationships. Some of these relationships consist of matrilineally related females and their dependent offspring. Food sharing and mutual grooming may occur in certain species, such as the common vampire bat (Desmodus rotundus), and these strengthen social bonds. Homosexual fellatio has been observed in the Bonin flying fox Pteropus pselaphon and the Indian flying fox Pteropus medius, though the function and purpose of this behaviour is not clear.
Communication
Bats are among the most vocal of mammals and produce calls to attract mates, find roost partners and defend resources. These calls are typically low-frequency and can travel long distances. Mexican free-tailed bats are one of the few species to "sing" like birds. Males sing to attract females. Songs have three phrases: chirps, trills and buzzes, the former having "A" and "B" syllables. Bat songs are highly stereotypical but with variation in syllable number, phrase order, and phrase repetitions between individuals. Among greater spear-nosed bats (Phyllostomus hastatus), females produce loud, broadband calls among their roost mates to form group cohesion. Calls differ between roosting groups and may arise from vocal learning.
In a study on captive Egyptian fruit bats, 70% of the directed calls could be identified by the researchers as to which individual bat made it, and 60% could be categorised into four contexts: squabbling over food, jostling over position in their sleeping cluster, protesting over mating attempts and arguing when perched in close proximity to each other. The animals made slightly different sounds when communicating with different individual bats, especially those of the opposite sex. In the highly sexually dimorphic hammer-headed bat (Hypsignathus monstrosus), males produce deep, resonating, monotonous calls to attract females. Bats in flight make vocal signals for traffic control. Greater bulldog bats honk when on a collision course with each other.
Bats also communicate by other means. Male little yellow-shouldered bats (Sturnira lilium) have shoulder glands that produce a spicy odour during the breeding season. Like many other species, they have hair specialised for retaining and dispersing secretions. Such hair forms a conspicuous collar around the necks of some Old World megabat males. Male greater sac-winged bats (Saccopteryx bilineata) have sacs in their wings in which they mix body secretions like saliva and urine to create a perfume that they sprinkle on roost sites, a behaviour known as "salting". Salting may be accompanied by singing.
Reproduction and life cycle
Most bat species are polygynous, where males mate with multiple females. Male pipistrelle, noctule and vampire bats may claim and defend resources that attract females, such as roost sites, and mate with those females. Males unable to claim a site are forced to live on the periphery where they have less reproductive success. Promiscuity, where both sexes mate with multiple partners, exists in species like the Mexican free-tailed bat and the little brown bat. There appears to be bias towards certain males among females in these bats. In a few species, such as the yellow-winged bat and spectral bat, adult males and females form monogamous pairs. Lek mating, where males aggregate and compete for female choice through display, is rare in bats but occurs in the hammerheaded bat.
For temperate living bats, mating takes place in late summer and early autumn. Tropical bats may mate during the dry season. After copulation, the male may leave behind a mating plug to block the sperm of other males and thus ensure his paternity. In hibernating species, males are known to mate with females in torpor. Female bats use a variety of strategies to control the timing of pregnancy and the birth of young, to make delivery coincide with maximum food availability and other ecological factors. Females of some species have delayed fertilisation, in which sperm is stored in the reproductive tract for several months after mating. Mating occurs in late summer to early autumn but fertilisation does not occur until the following late winter to early spring. Other species exhibit delayed implantation, in which the egg is fertilised after mating, but remains free in the reproductive tract until external conditions become favourable for giving birth and caring for the offspring. In another strategy, fertilisation and implantation both occur, but development of the foetus is delayed until good conditions prevail. During the delayed development the mother keeps the fertilised egg alive with nutrients. This process can go on for a long period, because of the advanced gas exchange system.
For temperate living bats, births typically take place in May or June in the Northern Hemisphere; births in the Southern Hemisphere occur in November and December. Tropical species give birth at the beginning of the rainy season. In most bat species, females carry and give birth to a single pup per litter. At birth, a bat pup can be up to 40 percent of the mother's weight, and the pelvic girdle of the female can expand during birth as the two halves are connected by a flexible ligament. Females typically give birth in a head-up or horizontal position, using gravity to make birthing easier. The young emerges rear-first, possibly to prevent the wings from getting tangled, and the female cradles it in her wing and tail membranes. In many species, females give birth and raise their young in maternity colonies and may assist each other in birthing.
Most of the care for a young bat comes from the mother. In monogamous species, the father plays a role. Allo-suckling, where a female suckles another mother's young, occurs in several species. This may serve to increase colony size in species where females return to their natal colony to breed. A young bat's ability to fly coincides with the development of an adult body and forelimb length. For the little brown bat, this occurs about eighteen days after birth. Weaning of young for most species takes place in under eighty days. The common vampire bat nurses its offspring beyond that and young vampire bats achieve independence later in life than other species. This is probably due to the species' blood-based diet, which is difficult to obtain on a nightly basis.
Life expectancy
The maximum lifespan of bats is three-and-a-half times longer than other mammals of similar size. Six species have been recorded to live over thirty years in the wild: the brown long-eared bat (Plecotus auritus), the little brown bat (Myotis lucifugus), the Siberian bat (Myotis sibiricus), the lesser mouse-eared bat (Myotis blythii), the greater horseshoe bat (Rhinolophus ferrumequinum), and the Indian flying fox (Pteropus giganteus). One hypothesis consistent with the rate-of-living theory links this to the fact that they slow down their metabolic rate while hibernating; bats that hibernate, on average, have a longer lifespan than bats that do not.
Another hypothesis is that flying has reduced their mortality rate, which would also be true for birds and gliding mammals. Bat species that give birth to multiple pups generally have a shorter lifespan than species that give birth to only a single pup. Cave-roosting species may have a longer lifespan than non-roosting species because of the decreased predation in caves. A male Siberian bat was recaptured in the wild after 41 years, making it the oldest known bat.
Interactions with humans
Conservation
Groups such as Bat Conservation International aim to increase awareness of bats' ecological roles and the environmental threats they face. This group called for Bat Appreciation Week from October 24–31 every year to promote awareness of the ecological importance of bats. In the United Kingdom, all bats are protected under the Wildlife and Countryside Acts, and disturbing a bat or its roost can be punished with a heavy fine.
In Sarawak, Malaysia, "all bats" are protected under the Wildlife Protection Ordinance 1998, but species such as the hairless bat (Cheiromeles torquatus) are still eaten by the local communities. Humans have caused the extinction of several species of bat in modern history, the most recent being the Christmas Island pipistrelle (Pipistrellus murrayi), which was declared extinct in 2009.
Many people put up bat houses to attract bats. The 1991 University of Florida bat house is the largest occupied artificial roost in the world, with around 400,000 residents. In Britain, thick-walled and partly underground World War II pillboxes have been converted to make roosts for bats, and purpose-built bat houses are occasionally built to mitigate damage to habitat from road or other developments. Cave gates are sometimes installed to limit human entry into caves with sensitive or endangered bat species. The gates are designed not to limit the airflow, and thus to maintain the cave's micro-ecosystem. Of the 47 species of bats found in the United States, 35 are known to use human structures, including buildings and bridges. Fourteen species use bat houses.
Bats are eaten in countries across Africa, Asia and the Pacific Rim. In some cases, such as in Guam, flying foxes have become endangered through being hunted for food. There is evidence that suggests that wind turbines might create sufficient barotrauma (pressure damage) to kill bats. Bats have typical mammalian lungs, which are thought to be more sensitive to sudden air pressure changes than the lungs of birds, making them more liable to fatal rupture. Bats may be attracted to turbines, perhaps seeking roosts, increasing the death rate. Acoustic deterrents may help to reduce bat mortality at wind farms.
The diagnosis and contribution of barotrauma to bat deaths near wind turbine blades have been disputed by other research comparing dead bats found near wind turbines with bats killed by impact with buildings in areas with no turbines.
Cultural significance
Since bats are mammals, yet can fly, they are considered to be liminal beings in various traditions. In many cultures, including in Europe, bats are associated with darkness, death, witchcraft, and malevolence. Among Native Americans such as the Creek, Cherokee and Apache, the bat is identified as a trickster. In Tanzania, a winged batlike creature known as Popobawa is believed to be a shapeshifting evil spirit that assaults and sodomises its victims. In Aztec mythology, bats symbolised the land of the dead, destruction, and decay. An East Nigerian tale tells that the bat developed its nocturnal habits after causing the death of his partner, the bush-rat, and now hides by day to avoid arrest.
More positive depictions of bats exist in some cultures. In China, bats have been associated with happiness, joy and good fortune. Five bats are used to symbolise the "Five Blessings": longevity, wealth, health, love of virtue and peaceful death. The bat is sacred in Tonga and is often considered the physical manifestation of a separable soul. In the Zapotec civilisation of Mesoamerica, the bat god presided over corn and fertility.
The Weird Sisters in Shakespeare's Macbeth used the fur of a bat in their brew. In Western culture, the bat is often a symbol of the night and its foreboding nature. The bat is a primary animal associated with fictional characters of the night, both villainous vampires, such as Count Dracula and before him Varney the Vampire, and heroes, such as the DC Comics character Batman. Kenneth Oppel's Silverwing novels narrate the adventures of a young bat, based on the silver-haired bat of North America.
The bat is sometimes used as a heraldic symbol in Spain and France, appearing in the coats of arms of the towns of Valencia, Palma de Mallorca, Fraga, Albacete, and Montchauvet. Three US states have an official state bat. Texas and Oklahoma are represented by the Mexican free-tailed bat, while Virginia is represented by the Virginia big-eared bat (Corynorhinus townsendii virginianus).
Economics
Insectivorous bats are particularly helpful to farmers, as they control populations of agricultural pests and reduce the need for pesticides. It has been estimated that bats save the agricultural industry of the United States anywhere from $3.7 billion to $53 billion per year in pesticides and crop damage. This also prevents the overuse of pesticides, which can pollute the surrounding environment and may lead to resistance in future generations of insects.
Bat dung, a type of guano, is rich in nitrates and is mined from caves for use as fertiliser. During the US Civil War, saltpetre was collected from caves to make gunpowder. At the time, it was believed that the nitrate all came from the bat guano, but it is now known that most of it is produced by nitrifying bacteria.
The Congress Avenue Bridge in Austin, Texas, is the summer home to North America's largest urban bat colony, an estimated 1,500,000 Mexican free-tailed bats. About 100,000 tourists a year visit the bridge at twilight to watch the bats leave the roost.
| Biology and health sciences | Bats | null |
23546532 | https://en.wikipedia.org/wiki/Cucumis%20melo | Cucumis melo | Cucumis melo, also known as melon, is a species of Cucumis that has been developed into many cultivated varieties. The fruit is a pepo. The flesh is either sweet or bland, with or without an aroma, and the rind can be smooth (such as honeydew), ribbed (such as European cantaloupe), wrinkled (such as Cassaba melon), or netted (such as American cantaloupe). The species is sometimes referred to as muskmelon, but there is no consensus about the usage of this term, as it can also be used as a specific name for the musky netted-rind American cantaloupe, or as a generic name for any sweet-flesh variety such as the inodorous smooth-rind honeydew melon.
The origin of melons is not known. Research has revealed that seeds and rootstocks were among the goods traded along the caravan routes of the Ancient World. Some botanists consider melons native to the Levant and Egypt, while others place their origin in Iran, India or Central Asia. Still others support an African origin, and in modern times wild melons can still be found in some African countries.
Background
The melon is an annual, trailing herb. It grows well in subtropical or warm, temperate climates. It can be found as a weed around sites of recently built airports in American Samoa.
Melons prefer warm, well-fertilized soil with good drainage that is rich in nutrients, but are vulnerable to downy mildew and anthracnose. Disease risk is reduced by crop rotation with non-cucurbit crops, avoiding crops susceptible to similar diseases as melons. Cross pollination has resulted in some varieties developing resistance to powdery mildew. Insects attracted to melons include the cucumber beetle, melon aphid, melonworm moth and the pickleworm.
Genetics
Melons are monoecious or andromonoecious plants. They do not cross with watermelon, cucumber, pumpkin, or squash, but varieties within the species intercross frequently.
The genome of Cucumis melo was first sequenced in 2012. Some authors treat C. melo as having two subspecies, C. melo agrestis and C. melo melo. Variants within these subspecies fall into groups whose genetics largely agree with their phenotypic traits, such as disease resistance, rind texture, flesh color, and fruit shape. Variants or landraces (some of which were originally classified as species; see the synonyms list to the right) include C. melo var. acidulus (Mangalore melon), adana, agrestis (wild melon), ameri (summer melon), cantalupensis (cantaloupe), reticulatus (muskmelon), chandalak, chate, chito, conomon (Oriental pickling melon), dudaim (pocket melon), flexuosus (snake melon), inodorus (winter melon), momordica (snap melon), tibish, chinensis and makuwa (Oriental melon).
Not all varieties are sweet melons. The snake melon, also called the Armenian cucumber and Serpent cucumber, is a non-sweet melon found throughout Asia from Turkey to Japan. It is similar to a cucumber in taste and appearance. Outside Asia, snake melons are grown in the United States, Italy, Sudan and parts of North Africa, including Egypt. The snake melon is more popular in Arab countries.
Other varieties grown in Africa are bitter, cultivated for their edible seeds.
For commercially grown varieties certain features like protective hard netting and firm flesh are preferred for purposes of shipping and other requirements of commercial markets.
Nutrition
For a reference amount of 100 grams, a raw cantaloupe melon provides 34 calories and is a rich source (defined as at least 20% of Daily Value, DV) of both vitamin A and vitamin C; other micronutrients are at a negligible level. A raw melon is 90% water and 9% carbohydrates, with less than 1% each of protein and fat.
Uses
In addition to their consumption when fresh, melons are sometimes dried. Other varieties are cooked, or grown for their seeds, which are processed to produce melon oil. Still other varieties are grown only for their pleasant fragrance. The Japanese liqueur Midori is flavored with melon.
It was once a frequently cultivated plant in Tonga (katiu) as a snack and its flowers used for leis, but has since been extirpated.
History
There is debate among scholars whether the abattiach in The Book of Numbers 11:5 refers to a melon or a watermelon. Both types of melon were known in Ancient Egypt and other settled areas. Some botanists consider melons native to the Levant and Egypt, while others place their origin in Persia, India or Central Asia; the origin thus remains uncertain. Researchers have shown that seeds and rootstocks were among the goods traded along the caravan routes of the Ancient World. Several scientists support an African origin, and in modern times wild melons can still be found in several East African countries such as Ethiopia, Somalia and Tanzania.
Melon was domesticated in West Asia and over time many cultivars developed, varying in shape and sweetness. Iran, India, Uzbekistan, Afghanistan and China became centers for melon production. Melons were consumed in Ancient Greece and Rome.
Gallery
| Biology and health sciences | Cucurbitales | null |
31096097 | https://en.wikipedia.org/wiki/Monsoon%20of%20South%20Asia | Monsoon of South Asia | The Monsoon of South Asia is among several geographically distributed global monsoons. It affects the Indian subcontinent, where it is one of the oldest and most anticipated weather phenomena and an economically important pattern every year from June through September, but it is only partly understood and notoriously difficult to predict. Several theories have been proposed to explain the origin, process, strength, variability, distribution, and general vagaries of the monsoon, but understanding and predictability are still evolving.
The unique geographical features of the Indian subcontinent, along with associated atmospheric, oceanic, and geographical factors, influence the behavior of the monsoon. Because of its effect on agriculture, on flora and fauna, and on the climates of nations such as Bangladesh, Bhutan, India, Nepal, Pakistan, and Sri Lanka – among other economic, social, and environmental effects – the monsoon is one of the most anticipated, tracked, and studied weather phenomena in the region. It has a significant effect on the overall well-being of residents and has even been dubbed the "real finance minister of India".
Definition
The word monsoon (derived from the Arabic "mausim", meaning "season"), although generally defined as a system of winds characterized by a seasonal reversal of direction, lacks a consistent, detailed definition. Some examples are:
The American Meteorological Society calls it a name for seasonal winds, first applied to the winds blowing over the Arabian Sea from the northeast for six months and from the southwest for six months. The term has since been extended to similar winds in other parts of the world.
The Intergovernmental Panel on Climate Change (IPCC) describes a monsoon as a tropical and subtropical seasonal reversal in both surface winds and associated precipitation, caused by differential heating between a continental-scale land mass and the adjacent ocean.
The India Meteorological Department defines it as the seasonal reversal of the direction of winds along the shores of the Indian Ocean, especially in the Arabian Sea, which blow from the southwest for half of the year and from the northeast for the other half.
Colin Stokes Ramage, in Monsoon Meteorology, defines the monsoon as a seasonal reversing wind accompanied by corresponding changes in precipitation.
Background
The first people to observe the combined pattern of the monsoons' branches over different regions of South Asia were sailors in the Arabian Sea who traveled between Africa, India, and Southeast Asia.
The monsoon can be categorized into two branches based on their spread over the subcontinent:
Arabian Sea branch
Bay of Bengal branch
Alternatively, it can be categorized into two segments based on the direction of rain-bearing winds:
Southwest (SW) monsoon
Northeast (NE) monsoon
Based on the time of year that these winds bring rain to India, the monsoon can also be categorized into two periods:
Summer monsoon (May to September)
Winter monsoon (October to November)
The complexity of the monsoon of South Asia is not completely understood, making it difficult to accurately predict the quantity, timing, and geographic distribution of the accompanying precipitation. These are the most monitored components of the monsoon, and they determine the water availability in India for any given year.
Changes of the Monsoon
Monsoons typically occur in tropical areas. One region that monsoons affect greatly is India, where the monsoon creates an entire season in which the winds reverse completely.
The rainfall is a result of the convergence of wind flow from the Bay of Bengal and reverse winds from the South China Sea.
The onset of the monsoon occurs over the Bay of Bengal in May, arriving at the Indian Peninsula by June, and then the winds move towards the South China Sea.
Effect of geographical relief features
Although the southwest and northeast monsoon winds are seasonally reversible, they do not cause precipitation on their own.
Two factors are essential for rain formation:
Moisture-laden winds
Droplet formation
Additionally, a mechanism that lifts the moist air and triggers rainfall must be present. In the case of the monsoon, the cause is primarily orographic, due to the presence of highlands in the path of the winds. Orographic barriers force the wind to rise. Precipitation then occurs on the windward side of the highlands because of adiabatic cooling and condensation of the moist rising air.
The unique geographic relief features of the Indian subcontinent come into play in allowing all of the above factors to occur simultaneously. The relevant features in explaining the monsoon mechanism are as follows:
The presence of abundant water bodies around the subcontinent: the Arabian Sea, Bay of Bengal, and Indian Ocean. These help moisture accumulate in the winds during the hot season.
The presence of abundant highlands like the Western Ghats and the Himalayas right across the path of the southwest monsoon winds. These are the main cause of the substantial orographic precipitation throughout the subcontinent.
The Western Ghats are the first highlands of India that the southwest monsoon winds encounter. The Western Ghats rise abruptly from the Western Coastal Plains of the subcontinent, making effective orographic barriers for the monsoon winds.
The Himalayas play more than the role of orographic barriers for the monsoon. They also help confine it to the subcontinent. Without them, the southwest monsoon winds would blow right over the Indian subcontinent into Tibet, Afghanistan, and Russia without causing any rain.
For the northeast monsoon, the highlands of the Eastern Ghats play the role of orographic barrier.
Features of monsoon rains
There are some unique features of the rains that the monsoon brings to the Indian subcontinent.
"Bursting"
Bursting of monsoon refers to the sudden change in weather conditions in India (typically from hot and dry weather to wet and humid weather during the southwest monsoon), characterized by an abrupt rise in the mean daily rainfall. Similarly, the burst of the northeast monsoon refers to an abrupt increase in the mean daily rainfall over the affected regions.
Rain variability ("vagaries")
One of the most commonly used words to describe the erratic nature of the monsoon is "vagaries", a term that appears everywhere from newspapers, magazines and books to web portals, insurance plans, and India's budget discussions.
In some years, it rains too much, causing floods in parts of India; in others, it rains too little or not at all, causing droughts. In some years, the rain quantity is sufficient but its timing is arbitrary. Sometimes, despite average annual rainfall, the daily distribution or geographic distribution of the rain is substantially skewed. In the recent past, rainfall variability over short periods (about a week) has been attributed to desert dust over the Arabian Sea and Western Asia.
Ideal and normal monsoon rains
Normally, the southwest monsoon can be expected to "burst" onto the western coast of India (near Thiruvananthapuram) at the beginning of June and to cover the entire country by mid-July. Its withdrawal from India typically starts at the beginning of September and finishes by the beginning of October.
The northeast monsoon usually "bursts" around 20 October and lasts for about 50 days before withdrawing.
However, a rainy monsoon is not necessarily a normal monsoon – that is, one that performs close to statistical averages calculated over a long period. A normal monsoon is generally accepted to be one involving close to the average quantity of precipitation over all the geographical locations under its influence (mean spatial distribution) and over the entire expected time period (mean temporal distribution). Additionally, the arrival date and the departure date of both the southwest and northeast monsoon should be close to the mean dates. The exact criteria for a normal monsoon are defined by the India Meteorological Department with calculations for the mean and standard deviation of each of these variables.
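To make the statistical definition concrete, the sketch below (Python) labels a season relative to a long-period record using a simple z-score band. The one-standard-deviation threshold, the helper name, and the rainfall figures are illustrative assumptions for this example, not the IMD's published criteria.

```python
import statistics

def classify_season(total_rainfall_mm, historical_totals_mm, z_threshold=1.0):
    """Label a season's rainfall relative to a long-period record.

    A season is called "normal" here if its total lies within z_threshold
    standard deviations of the long-period mean; this is an illustrative
    stand-in for the IMD's actual criteria, not a reproduction of them.
    """
    mean = statistics.mean(historical_totals_mm)
    sd = statistics.stdev(historical_totals_mm)
    z = (total_rainfall_mm - mean) / sd
    if z > z_threshold:
        return "above normal", z
    if z < -z_threshold:
        return "below normal", z
    return "normal", z

# Hypothetical long-period seasonal totals (mm) and one season to classify.
history = [890, 912, 870, 845, 905, 930, 860, 915, 880, 900]
print(classify_season(872, history))
```

The same kind of check could be repeated for the other variables mentioned above (onset date, withdrawal date, spatial distribution) to decide whether a monsoon as a whole is "normal".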
Theories for mechanism of monsoon
Theories of the mechanism of the monsoon primarily try to explain the reasons for the seasonal reversal of winds and the timing of their reversal.
Traditional theory
Because of differences in the specific heat capacity of land and water, continents heat up faster than seas. Consequently, the air above coastal lands heats up faster than the air above seas. This creates areas of low air pressure above coastal lands compared with the pressure over the seas, causing winds to flow from the seas onto the neighboring lands. This is known as a sea breeze.
Process of monsoon creation
Also known as the thermal theory or the differential heating of sea and land theory, the traditional theory portrays the monsoon as a large-scale sea breeze. It states that during the hot subtropical summers, the massive landmass of the Indian Peninsula heats up at a different rate than the surrounding seas, resulting in a pressure gradient from south to north. This causes the flow of moisture-laden winds from sea to land. On reaching land, these winds rise because of the geographical relief, cooling adiabatically and leading to orographic rains. This is the southwest monsoon.
The reverse happens during the winter, when the land is colder than the sea, establishing a pressure gradient from land to sea. This causes the winds to blow over the Indian subcontinent toward the Indian Ocean in a northeasterly direction, causing the northeast monsoon. Because the southwest monsoon flows from sea to land, it carries more moisture, and therefore causes more rain, than the northeast monsoon. Only part of the northeast monsoon passing over the Bay of Bengal picks up moisture, causing rain in Andhra Pradesh and Tamil Nadu during the winter months.
However, many meteorologists argue that the monsoon is not a local phenomenon as explained by the traditional theory, but a general weather phenomenon along the entire tropical zone of Earth. This criticism does not deny the role of differential heating of sea and land in generating monsoon winds, but casts it as one of several factors rather than the only one.
Dynamic theory
The prevailing winds of the atmospheric circulation arise because of the difference in pressure at various latitudes and act as means for distribution of thermal energy on the planet. This pressure difference is because of the differences in solar insolation received at different latitudes and the resulting uneven heating of the planet. Alternating belts of high pressure and low pressure develop along the equator, the two tropics, the Arctic Circle and Antarctic Circle, and the two polar regions, giving rise to the trade winds, the westerlies, and the polar easterlies. However, geophysical factors like Earth's orbit, its rotation, and its axial tilt cause these belts to shift gradually north and south, following the Sun's seasonal shifts.
Process of monsoon creation
The dynamic theory explains the monsoon on the basis of the annual shifts in the position of global belts of pressure and winds. According to this theory, the monsoon is a result of the shift of the Intertropical Convergence Zone (ITCZ) under the influence of the vertical sun. Though the mean position of the ITCZ is taken as the equator, it shifts north and south with the migration of the vertical sun toward the Tropics of Cancer and Capricorn during the summer of the respective hemispheres (Northern and Southern Hemisphere). As such, during the northern summer (May and June), the ITCZ moves north, along with the vertical sun, toward the Tropic of Cancer. The ITCZ, as the zone of lowest pressure in the tropical region, is the target destination for the trade winds of both hemispheres. Consequently, with the ITCZ at the Tropic of Cancer, the southeast trade winds of the Southern Hemisphere have to cross the equator to reach it. However, because of the Coriolis effect (which causes winds in the Northern Hemisphere to turn right, whereas winds in the Southern Hemisphere turn left), these southeast trade winds are deflected east in the Northern Hemisphere, transforming into southwest trades. These pick up moisture while traveling from sea to land and cause orographic rain once they hit the highlands of the Indian Peninsula. This results in the southwest monsoon.
The dynamic theory explains the monsoon as a global weather phenomenon rather than just a local one. And when coupled with the traditional theory (based on the heating of sea and land), it enhances the explanation of the varying intensity of monsoon precipitation along the coastal regions with orographic barriers.
Jet stream theory
This theory tries to explain the establishment of the northeast and southwest monsoons, as well as unique features like "bursting" and variability.
The jet streams are systems of upper-air westerlies. They give rise to slowly moving upper-air waves, with 250-knot winds in some air streams. First observed by World War II pilots, they develop just below the tropopause over areas of steep pressure gradient on the surface. The main types are the polar jets, the subtropical westerly jets, and the less common tropical easterly jets. They follow the principle of geostrophic winds.
Process of monsoon creation
Over India, a subtropical westerly jet develops in the winter season and is replaced by the tropical easterly jet in the summer season. The high temperature during the summer over the Tibetan Plateau, as well as over Central Asia in general, is believed to be the critical factor leading to the formation of the tropical easterly jet over India.
The mechanism affecting the monsoon is that the westerly jet causes high pressure over northern parts of the subcontinent during the winter. This results in the north-to-south flow of the winds in the form of the northeast monsoon. With the northward shift of the vertical sun, this jet shifts north, too. The intense heat over the Tibetan Plateau, coupled with associated terrain features like the high altitude of the plateau, generate the tropical easterly jet over central India. This jet creates a low-pressure zone over the northern Indian plains, influencing the wind flow toward these plains and assisting the development of the southwest monsoon.
Theories for "bursting"
The "bursting" of the monsoon is primarily explained by the jet stream theory and the dynamic theory.
Dynamic theory
According to this theory, during the summer months in the Northern Hemisphere, the ITCZ shifts north, pulling the southwest monsoon winds onto the land from the sea. However, the huge landmass of the Himalayas restricts the low-pressure zone onto the Himalayas themselves. It is only when the Tibetan Plateau heats up significantly more than the Himalayas that the ITCZ rises abruptly and swiftly shifts north, leading to the bursting of monsoon rains over the Indian subcontinent.
The reverse shift takes place for the northeast monsoon winds, leading to a second, minor burst of rainfall over the eastern Indian Peninsula during the Northern Hemisphere winter months.
Jet stream theory
According to this theory, the onset of the southwest monsoon is driven by the shift of the subtropical westerly jet north from over the plains of India toward the Tibetan Plateau. This shift is due to the intense heating of the plateau during the summer months. The northward shift is not a slow and gradual process, as expected for most changes in weather pattern. The primary cause is believed to be the height of the Himalayas. As the Tibetan Plateau heats up, the low pressure created over it pulls the westerly jet north. Because of the lofty Himalayas, the westerly jet's movement is inhibited. But with continuous dropping pressure, sufficient force is created for the movement of the westerly jet across the Himalayas after a significant period. As such, the shift of the jet is sudden and abrupt, causing the bursting of southwest monsoon rains onto the Indian plains. The reverse shift happens for the northeast monsoon.
Theories for monsoon variability
The jet stream effect
The jet stream theory also explains the variability in timing and strength of the monsoon.
Timing: A timely northward shift of the subtropical westerly jet at the beginning of summer is critical to the onset of the southwest monsoon over India. If the shift is delayed, so is the southwest monsoon. An early shift results in an early monsoon.
Strength: The strength of the southwest monsoon is determined by the strength of the easterly tropical jet over central India. A strong easterly tropical jet results in a strong southwest monsoon over central India, and a weak jet results in a weak monsoon.
El Niño–Southern Oscillation effect
El Niño is a warm ocean current originating along the coast of Peru that replaces the usual cold Humboldt Current. The warm surface water moving toward the coast of Peru with El Niño is pushed west by the trade winds, thereby raising the temperature of the southern Pacific Ocean. The reverse condition is known as La Niña.
Southern Oscillation, a phenomenon first observed by Sir Gilbert Walker, director general of observatories in India, refers to the seesaw relationship of atmospheric pressures between Tahiti and Darwin, Australia. Walker noticed that when pressure was high in Tahiti, it was low in Darwin, and vice versa. A Southern Oscillation Index (SOI), based on the pressure difference between Tahiti and Darwin, has been formulated by the Bureau of Meteorology (Australia) to measure the strength of the oscillation. Walker noticed that the quantity of rainfall in the Indian subcontinent was often negligible in years of high pressure over Darwin (and low pressure over Tahiti). Conversely, low pressure over Darwin bodes well for precipitation quantity in India. Thus, Walker established the relationship between southern oscillation and quantities of monsoon rains in India.
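As an illustration, the Troup convention used by the Bureau of Meteorology computes the SOI as ten times the standardized anomaly of the Tahiti-minus-Darwin mean sea-level pressure difference. The Python sketch below assumes that convention; the pressures and the climatology of differences are invented for the example.

```python
import statistics

def troup_soi(tahiti_mslp_hpa, darwin_mslp_hpa, historical_diffs_hpa):
    """Southern Oscillation Index in the Troup convention.

    SOI = 10 * (D - mean(D_hist)) / stdev(D_hist), where D is the
    Tahiti-minus-Darwin mean sea-level pressure difference and D_hist
    are historical differences for the same calendar month.
    """
    d = tahiti_mslp_hpa - darwin_mslp_hpa
    mean_d = statistics.mean(historical_diffs_hpa)
    sd_d = statistics.stdev(historical_diffs_hpa)
    return 10.0 * (d - mean_d) / sd_d

# Hypothetical monthly pressures (hPa) and a made-up climatology of differences.
climatology = [0.4, 1.1, -0.6, 2.0, -1.3, 0.8, 1.5, -0.2, 0.9, -1.0]
soi = troup_soi(1012.3, 1009.8, climatology)
print(round(soi, 1))  # strongly positive values point to La Niña-like conditions
```

Negative values of this index correspond to anomalously high pressure over Tahiti relative to Darwin, the El Niño-like pattern that Walker associated with weak monsoon rainfall over India.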
Ultimately, the southern oscillation was found to be simply an atmospheric component of the El Niño/La Niña effect, which happens in the ocean. Therefore, in the context of the monsoon, the two together came to be known as the El Niño–Southern Oscillation (ENSO) effect. The effect is known to have a pronounced influence on the strength of the southwest monsoon over India, with the monsoon being weak (causing droughts) during El Niño years, while La Niña years bring particularly strong monsoons.
Indian Ocean dipole effect
Although the ENSO effect was statistically effective in explaining several past droughts in India, in recent decades, its relationship with the Indian monsoon seemed to weaken. For example, the strong El Niño of 1997 did not cause drought in India. However, it was later discovered that, just like ENSO in the Pacific Ocean, a similar seesaw ocean-atmosphere system in the Indian Ocean was also in play. This system was discovered in 1999 and named the Indian Ocean Dipole (IOD). An index to calculate it was also formulated. IOD develops in the equatorial region of the Indian Ocean from April to May and peaks in October. With a positive IOD, winds over the Indian Ocean blow from east to west. This makes the Arabian Sea (the western Indian Ocean near the African coast) much warmer and the eastern Indian Ocean around Indonesia colder and drier. In negative dipole years, the reverse happens, making Indonesia much warmer and rainier.
A positive IOD index often negates the effect of ENSO, resulting in increased monsoon rains in years such as 1983, 1994, and 1997. Further, the two poles of the IOD – the eastern pole (around Indonesia) and the western pole (off the African coast) — independently and cumulatively affect the quantity of monsoon rains.
Equatorial Indian Ocean oscillation
As with ENSO, the atmospheric component of the IOD was later discovered and the cumulative phenomenon named Equatorial Indian Ocean oscillation (EQUINOO). When EQUINOO effects are factored in, certain failed forecasts, like the acute drought of 2002, can be further accounted for. The relationship between extremes of the Indian summer monsoon rainfall, along with ENSO and EQUINOO, have been studied, and models to better predict the quantity of monsoon rains have been statistically derived.
Impact of climate change
Since the 1950s, the South Asian summer monsoon has been exhibiting large changes, especially in terms of droughts and floods. The observed monsoon rainfall indicates a gradual decline over central India, with a reduction of up to 10%. This is primarily due to a weakening monsoon circulation as a result of the rapid warming in the Indian Ocean, and changes in land use and land cover, while the role of aerosols remains elusive. Since the strength of the monsoon is partially dependent on the temperature difference between the ocean and the land, higher ocean temperatures in the Indian Ocean have weakened the moisture-bearing winds from the ocean to the land. The reduction in the summer monsoon rainfall has grave consequences over central India because at least 60% of the agriculture in this region is still largely rain-fed.
A recent assessment of the monsoonal changes indicates that land warming increased during 2002–2014, possibly reviving the strength of the monsoon circulation and rainfall. Future changes in the monsoon will depend on a competition between land and ocean, specifically on which of the two warms faster.
Meanwhile, there has been a three-fold rise in widespread extreme rainfall events during the years 1950 to 2015, over the entire central belt of India, leading to a steady rise in the number of flash floods with significant socioeconomic losses. Widespread extreme rainfall events are those rainfall events which are larger than 150 mm/day and spread over a region large enough to cause floods.
Monsoon rain prediction models
Since the Great Famine of 1876–1878 in India, various attempts have been made to predict monsoon rainfall. At least five prediction models exist.
Seasonal Prediction of Indian Monsoon (SPIM)
The Centre for Development of Advanced Computing (CDAC) at Bengaluru facilitated the Seasonal Prediction of Indian Monsoon (SPIM) experiment on the PARAM Padma supercomputing system.
This project involved simulated runs of historical data from 1985 to 2004 to try to establish the relationship of five atmospheric general circulation models with monsoon rainfall distribution.
India Meteorological Department model
The department has tried to forecast the monsoon for India since 1884, and is the only official agency entrusted with making public forecasts about the quantity, distribution, and timing of the monsoon rains. Its position as the sole authority on the monsoon was cemented in 2005 by the Department of Science and Technology (DST), New Delhi. In 2003, IMD substantially changed its forecast methodology, model, and administration. A sixteen-parameter monsoon forecasting model used since 1988 was replaced in 2003. However, following the 2009 drought in India (the worst since 1972), the department decided in 2010 that it needed to develop an "indigenous model" to further improve its prediction capabilities.
Significance
The monsoon is the primary delivery mechanism for fresh water in the Indian subcontinent. As such, it affects the environment (and associated flora, fauna, and ecosystems), agriculture, society, hydro-power production, and geography of the subcontinent (like the availability of fresh water in water bodies and the underground water table), with all of these factors cumulatively contributing to the health of the economy of affected countries.
The monsoon turns large parts of India from semi-deserts into green grasslands.
Geographical (wettest spots on Earth)
Mawsynram and Cherrapunji, both in the Indian state of Meghalaya, alternate as the wettest places on Earth given the quantity of their rainfall, though there are other cities with similar claims. They receive more than 11,000 millimeters of rain each from the monsoon.
Agricultural
In India, which has historically had a primarily agrarian economy, the services sector recently overtook the farm sector in terms of GDP contribution. However, the agriculture sector still contributes 17–20% of GDP and is the largest employer in the country, with about 60% of Indians dependent on it for employment and livelihood. About 49% of India's land is agricultural; that number rises to 55% if associated wetlands, dryland farming areas, etc., are included. Because more than half of these farmlands are rain-fed, the monsoon is critical to food sufficiency and quality of life.
Despite progress in alternative forms of irrigation, agricultural dependence on the monsoon remains far from insignificant. Therefore, the agricultural calendar of India is governed by the monsoon. Any fluctuations in the time distribution, spatial distribution, or quantity of the monsoon rains may lead to floods or droughts, causing the agricultural sector to suffer. This has a cascading effect on the secondary economic sectors, the overall economy, food inflation, and therefore the general population's quality and cost of living.
Economic
The economic significance of the monsoon is aptly described by Pranab Mukherjee's remark that the monsoon is the "real finance minister of India".
A good monsoon results in better agricultural yields, which brings down prices of essential food commodities and reduces imports, thus reducing food inflation overall. Better rains also result in increased hydroelectric production.
All of these factors have positive ripple effects throughout the economy of India.
The downside, however, is that when monsoon rains are weak, crop production is low, and the limited supply leads to higher food prices. As a result, the Indian government is actively working with farmers and the nation's meteorological department to produce more drought-resistant crops.
Health
The onset of the monsoon increases fungal and bacterial activity. A host of mosquito-borne, water-borne and air-borne infections become more common as a result of the change in the ecosystem. These include diseases such as dengue, malaria, cholera, and colds.
Social
D. Subbarao, former governor of the Reserve Bank of India, emphasized during a quarterly review of India's monetary policy that the lives of Indians depend on the performance of the monsoon. His own career prospects, his emotional well-being, and the performance of his monetary policy are all "a hostage" to the monsoon, he said, as is the case for most Indians. Additionally, farmers rendered jobless by failed monsoon rains tend to migrate to cities. This crowds city slums and aggravates the infrastructure and sustainability of city life.
Travel
In the past, Indians usually refrained from traveling during monsoons for practical as well as religious reasons. But with the advent of globalization, such travel is gaining popularity. Places like Kerala and the Western Ghats get a large number of tourists, both local and foreign, during the monsoon season. Kerala is one of the top destinations for tourists interested in Ayurvedic treatments and massage therapy. One major drawback of traveling during the monsoon is that most wildlife sanctuaries are closed. Also, some mountainous areas, especially in Himalayan regions, get cut off when roads are damaged by landslides and floods during heavy rains.
Environmental
The monsoon is the primary bearer of fresh water to the area. The peninsular/Deccan rivers of India are mostly rain-fed and non-perennial in nature, depending primarily on the monsoon for water supply.
Most of the coastal rivers of Western India are also rain-fed and monsoon-dependent. As such, the flora, fauna, and entire ecosystems of these areas rely heavily on the monsoon.
| Physical sciences | Seasons | Earth science |
36284621 | https://en.wikipedia.org/wiki/Wu%20experiment | Wu experiment | The Wu experiment was a particle and nuclear physics experiment conducted in 1956 by the Chinese American physicist Chien-Shiung Wu in collaboration with the Low Temperature Group of the US National Bureau of Standards. The experiment's purpose was to establish whether or not conservation of parity (P-conservation), which was previously established in the electromagnetic and strong interactions, also applied to weak interactions. If P-conservation were true, a mirrored version of the world (where left is right and right is left) would behave as the mirror image of the current world. If P-conservation were violated, then it would be possible to distinguish between a mirrored version of the world and the mirror image of the current world.
The experiment established that conservation of parity was violated (P-violation) by the weak interaction, providing a way to operationally define left and right without reference to the human body. This result was not expected by the physics community, which had previously regarded parity as a symmetry applying to all forces of nature. Tsung-Dao Lee and Chen-Ning Yang, the theoretical physicists who originated the idea of parity nonconservation and proposed the experiment, received the 1957 Nobel Prize in Physics for this result. Although Chien-Shiung Wu was not awarded the Nobel Prize, her role in the discovery was mentioned in Yang and Lee's acceptance speech; she was not honored until 1978, when she was awarded the first Wolf Prize.
History
In 1927, Eugene Wigner formalized the principle of the conservation of parity (P-conservation), the idea that the current world and one built like its mirror image would behave in the same way, with the only difference that left and right would be reversed (for example, a clock which spins clockwise would spin counterclockwise if a mirrored version of it were built).
This principle was widely accepted by physicists, and P-conservation was experimentally verified in the electromagnetic and strong interactions. However, during the mid-1950s, certain decays involving kaons could not be explained by existing theories in which P-conservation was assumed to be true. There seemed to be two types of kaons, one which decayed into two pions, and the other which decayed into three pions. This was known as the τ–θ puzzle.
Theoretical physicists Tsung-Dao Lee and Chen-Ning Yang did a literature review on the question of parity conservation in all fundamental interactions. They concluded that in the case of the weak interaction, experimental data neither confirmed nor refuted P-conservation. Shortly after, they approached Chien-Shiung Wu, who was an expert on beta decay spectroscopy, with various ideas for experiments. They settled on the idea of testing the directional properties of beta decay in cobalt-60. Wu realized the potential for a breakthrough experiment and began work in earnest at the end of May 1956, cancelling a planned trip to Geneva and the Far East with her husband, wanting to beat the rest of the physics community to the punch. Most physicists, such as her close friend Wolfgang Pauli, thought that parity violation was impossible and expressed skepticism regarding the Yang-Lee proposal.
Wu had to contact Henry Boorse and Mark W. Zemansky, who had extensive experience in low-temperature physics, to perform her experiment. At the behest of Boorse and Zemansky, Wu contacted Ernest Ambler, of the National Bureau of Standards, who arranged for the experiment to be carried out in 1956 at the NBS' low-temperature laboratories. After several months of work overcoming technical difficulties, Wu's team observed an asymmetry indicating parity violation in December 1956.
Lee and Yang, who prompted the Wu experiment, were awarded the Nobel Prize in Physics in 1957, shortly after the experiment was performed. Wu's role in the discovery was mentioned in the prize acceptance speech, but she was not honored until 1978, when she was awarded the inaugural Wolf Prize. Many were outraged, from her close friend Wolfgang Pauli to Lee and Yang, with 1988 Nobel laureate Jack Steinberger labeling it the biggest mistake in the Nobel committee's history. Wu did not publicly discuss her feelings about the prize, but in a letter to Steinberger she wrote, "Although I did not do research just for the prize, it still hurts me a lot that my work was overlooked for certain reasons."
Theory
If a particular interaction respects parity symmetry, it means that if left and right were interchanged, the interaction would behave exactly as it did before the interchange. Another way this is expressed is to imagine that two worlds are constructed that differ only by parity—the "real" world and the "mirror" world, where left and right are swapped. If an interaction is parity symmetric, it produces the same outcomes in both "worlds".
The aim of Wu's experiment was to determine if this was the case for the weak interaction by looking at whether the decay products of cobalt-60 were being emitted preferentially in one direction or not. This would signify the violation of parity symmetry because if the weak interaction were parity conserving, the decay emissions should be emitted with equal probability in all directions. As stated by Wu et al.:
The reason for this is that the cobalt-60 nucleus carries spin, and spin does not change direction under parity (because angular momentum is an axial vector). Conversely, the direction in which the decay products are emitted is changed under parity because momentum is a polar vector. In other words, in the "real" world, if the cobalt-60 nuclear spin and the decay product emissions were both in roughly the same direction, then in the "mirror" world, they would be in roughly opposite directions, because the emission direction would have been flipped, but the spin direction would not.
This would be a clear difference in the behaviour of the weak interaction between both "worlds", and hence the weak interaction could not be said to be parity symmetric. The only way that the weak interaction could be parity symmetric is if there were no preference in the direction of emission, because then a flip in the direction of emissions in the "mirror" world would look no different from the "real" world because there were equal numbers of emissions in both directions anyway.
Experiment
The experiment monitored the decay of cobalt-60 (60Co) atoms that were aligned by a uniform magnetic field (the polarizing field) and cooled to near absolute zero so that thermal motions did not ruin the alignment. Cobalt-60 is an unstable isotope of cobalt that decays by beta decay to the stable isotope nickel-60 (60Ni). During this decay, one of the neutrons in the cobalt-60 nucleus decays to a proton by emitting an electron (e−) and an electron antineutrino (ν̄e). The resulting nickel nucleus, however, is in an excited state and promptly decays to its ground state by emitting two gamma rays (γ). Hence the overall nuclear equation of the reaction is:
60Co → 60Ni + e− + ν̄e + 2γ
Gamma rays are photons, and their release from the nickel-60 nucleus is an electromagnetic (EM) process. This is important because EM was known to respect parity conservation, and therefore they would be emitted roughly equally in all directions (they would be distributed roughly "isotropically"). Hence, the distribution of the emitted electrons could be compared to the distribution of the emitted gamma rays in order to compare whether they too were being emitted isotropically. In other words, the distribution of the gamma rays acted as a control for the distribution of the emitted electrons. Another benefit of the emitted gamma rays was that it was known that the degree to which they were not distributed perfectly equally in all directions (the "anisotropy" of their distribution) could be used to determine how well the cobalt-60 nuclei had been aligned (how well their spins were aligned). If the cobalt-60 nuclei were not aligned at all, then no matter how the electron emission was truly distributed, it would not be detected by the experiment. This is because an unaligned sample of nuclei could be expected to be oriented randomly, and thus the electron emissions would be random and the experiment would detect equal numbers of electron emissions in all directions, even if they were being emitted from each individual nucleus in only one direction.
The experiment then essentially counted the rate of emission for gamma rays and electrons in two distinct directions and compared their values. This rate was measured over time and with the polarizing field oriented in opposite directions. If the counting rates for the electrons did not differ significantly from those of the gamma rays, then there would have been evidence to suggest that parity was indeed conserved by the weak interaction. If, however, the counting rates were significantly different, then there would be strong evidence that the weak interaction does indeed violate parity conservation.
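A minimal sketch of that comparison is shown below; the counts, the asymmetry measure, and the helper name are invented for illustration and do not reproduce the actual analysis of Wu's group.

```python
def asymmetry(count_along, count_against):
    """Directional asymmetry: (N_along - N_against) / (N_along + N_against)."""
    return (count_along - count_against) / (count_along + count_against)

# Hypothetical counts along and against the nuclear polarization axis.
gamma = asymmetry(count_along=9_800, count_against=10_200)  # near zero: EM conserves parity
beta = asymmetry(count_along=4_000, count_against=6_000)    # large: electrons favour one direction

print(f"gamma-ray asymmetry: {gamma:+.3f}")
print(f"beta asymmetry:      {beta:+.3f}")
# A beta asymmetry well outside the gamma-ray (control) value signals parity violation.
```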
Materials and methods
The experimental challenge in this experiment was to obtain the highest possible polarization of the 60Co nuclei. Due to the very small magnetic moments of the nuclei as compared to electrons, strong magnetic fields were required at extremely low temperatures, far lower than could be achieved by liquid helium cooling alone. The low temperatures were achieved using the method of adiabatic demagnetization. Radioactive cobalt was deposited as a thin surface layer on a crystal of cerium-magnesium nitrate, a paramagnetic salt with a highly anisotropic Landé g-factor.
The salt was magnetized along the axis of high g-factor, and the temperature was decreased to 1.2 K by pumping the helium to low pressure. Shutting off the horizontal magnetic field resulted in the temperature decreasing to about 0.003 K. The horizontal magnet was opened up, allowing room for a vertical solenoid to be introduced and switched on to align the cobalt nuclei either upwards or downwards. Only a negligible increase in temperature was caused by the solenoid magnetic field, since the magnetic field orientation of the solenoid was in the direction of low g-factor. This method of achieving high polarization of 60Co nuclei had been originated by Gorter and Rose.
The production of gamma rays was monitored using equatorial and polar counters as a measure of the polarization. Gamma ray polarization was continuously monitored over the next quarter-hour as the crystal warmed up and anisotropy was lost. Likewise, beta-ray emissions were continuously monitored during this warming period.
Results
In the experiment carried out by Wu, the gamma ray anisotropy was approximately 0.6. That is, approximately 60% of the gamma rays were emitted in one direction, whereas 40% were emitted in the other. If parity were conserved in beta decay, the emitted electrons would have had no preferred direction of decay relative to the nuclear spin, and the asymmetry in emission direction would have been close to the value for the gamma rays. However, Wu observed that the electrons were emitted in a direction preferentially opposite to that of the gamma rays with an asymmetry significantly greater than the gamma ray anisotropy value. That is, most of the electrons favored a very specific direction of decay, specifically opposite to that of the nuclear spin. The observed electron asymmetry also did not change sign when the polarizing field was reversed, meaning that the asymmetry was not being caused by remanent magnetization in the samples. It was later established that parity violation was in fact maximal.
The results greatly surprised the physics community. Several researchers then scrambled to reproduce the results of Wu's group, while others reacted with disbelief at the results. Wolfgang Pauli, upon being informed by Georges M. Temmer, who also worked at the NBS, that parity conservation could no longer be assumed to be true in all cases, exclaimed "That's total nonsense!" Temmer assured him that the experiment's result confirmed this was the case, to which Pauli curtly replied "Then it must be repeated!" By the end of 1957, further research confirmed the original results of Wu's group, and P-violation was firmly established.
Mechanism and consequences
The results of the Wu experiment provide a way to operationally define the notion of left and right. This is inherent in the nature of the weak interaction. Previously, if scientists on Earth were to communicate with scientists on a newly discovered planet, and the two groups had never met in person, it would not have been possible for each group to determine unambiguously the other group's left and right. With the Wu experiment, it is possible to communicate to the other group what the words left and right mean exactly and unambiguously. The Wu experiment thereby solved the Ozma problem, which asks for an unambiguous scientific definition of left and right.
At the fundamental level, beta decay is caused by the conversion of a negatively charged (−1/3 e) down quark into a positively charged (+2/3 e) up quark by emission of a W− boson; the W− boson subsequently decays into an electron and an electron antineutrino:
d → u + e− + ν̄e.
The quark has a left part and a right part. As it propagates through spacetime, it oscillates back and forth from the right part to the left part and from the left part to the right part. From analyzing the Wu experiment's demonstration of parity violation, it can be deduced that only the left part of down quarks decays and that the weak interaction involves only the left part of quarks and leptons (or the right part of antiquarks and antileptons). The right part of the particle simply does not feel the weak interaction. If the down quark did not have mass it would not oscillate, and its right part would be quite stable by itself. Yet, because the down quark is massive, it oscillates and decays.
Overall, because the polarizing field aligns the spin-5 cobalt-60 nuclei along the +z axis and the excited nickel-60 daughter nucleus has spin 4, conservation of angular momentum requires the emitted electron and antineutrino to carry away the remaining unit of spin, so both of their spins point in the +z direction. Thus, the concentration of beta rays in the negative-z direction, opposite to their spin, indicated a preference for left-handed quarks and electrons.
From experiments such as the Wu experiment and the Goldhaber experiment, it was determined that massless neutrinos must be left-handed, while massless antineutrinos must be right-handed. Since it is currently known that neutrinos have a small mass, it has been proposed that right-handed neutrinos and left-handed antineutrinos could exist. These neutrinos would not couple with the weak Lagrangian and would interact only gravitationally, possibly forming a portion of the dark matter in the universe.
Impact and influence
The discovery set the stage for the development of the Standard Model, as the model relied on the idea of symmetry of particles and forces and how particles can sometimes break that symmetry. The wide coverage of her discovery prompted the discoverer of fission, Otto Robert Frisch, to mention that people at Princeton would often say that her discovery was the most significant since the Michelson–Morley experiment that inspired Einstein's theory of relativity. The AAUW called it the "solution to the number-one riddle of atomic and nuclear physics." Beyond showing that the weak interaction is distinct in character from the other three conventional interactions, the result eventually led to the discovery of general CP violation, the violation of charge conjugation parity symmetry. This violation meant researchers could distinguish matter from antimatter, and it offered an explanation for the existence of a universe filled with matter, since the lack of symmetry allows the matter–antimatter imbalance that let matter survive after the Big Bang. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957. Describing the impact of the result, Nobel laureate Abdus Salam quipped, "If any classical writer had ever considered giants (cyclops) with only the left eye. [One] would confess that one-eyed giants have been described and [would have] supplied me with a full list of them; but they always sport their solitary eye in the middle of the forehead. In my view what we have found is that space is a weak left-eyed giant." Wu's discovery helped pave the way for the unified electroweak theory that Salam helped establish, which is theorized to merge with the strong force in a Grand Unified Theory.
| Physical sciences | Quantum mechanics | Physics |
26456640 | https://en.wikipedia.org/wiki/Membrane | Membrane | A membrane is a selective barrier; it allows some things to pass through but stops others. Such things may be molecules, ions, or other small particles. Membranes can be generally classified into synthetic membranes and biological membranes. Biological membranes include cell membranes (outer coverings of cells or organelles that allow passage of certain constituents); nuclear membranes, which cover a cell nucleus; and tissue membranes, such as mucosae and serosae. Synthetic membranes are made by humans for use in laboratories and industry (such as chemical plants).
This concept of a membrane has been known since the eighteenth century but was used little outside of the laboratory until the end of World War II. Drinking water supplies in Europe had been compromised by the war, and membrane filters were used to test for water safety. However, due to their lack of reliability, slow operation, reduced selectivity and elevated costs, membranes were not widely exploited. The first use of membranes on a large scale was with microfiltration and ultrafiltration technologies. Since the 1980s, these separation processes, along with electrodialysis, have been employed in large plants and, today, several experienced companies serve the market.
The degree of selectivity of a membrane depends on the membrane pore size. Depending on the pore size, they can be classified as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes. Membranes can also be of various thicknesses, with homogeneous or heterogeneous structure. Membranes can be neutral or charged, and particle transport can be active or passive. The latter can be facilitated by pressure, concentration, chemical or electrical gradients across the membrane.
Membrane processes classifications
Microfiltration (MF)
Microfiltration removes particles larger than 0.08–2 μm and operates within a range of 7–100 kPa. Microfiltration is used to remove residual suspended solids (SS), to remove bacteria in order to condition the water for effective disinfection, and as a pre-treatment step for reverse osmosis.
Relatively recent developments are membrane bioreactors (MBR) which combine microfiltration and a bioreactor for biological treatment.
Ultrafiltration (UF)
Ultrafiltration removes particles larger than 0.005–2 μm and operates within a range of 70–700 kPa. Ultrafiltration is used for many of the same applications as microfiltration. Some ultrafiltration membranes have also been used to remove dissolved compounds with high molecular weight, such as proteins and carbohydrates. They can also remove viruses and some endotoxins.
Nanofiltration (NF)
Nanofiltration is also known as "loose" RO and can reject particles smaller than 0.002 μm. Nanofiltration is used for the removal of selected dissolved constituents from wastewater. NF is primarily developed as a membrane softening process which offers an alternative to chemical softening.
Likewise, nanofiltration can be used as a pre-treatment before reverse osmosis. The main objectives of NF pre-treatment are to: (1) minimize particulate and microbial fouling of the RO membranes by removing turbidity and bacteria; (2) prevent scaling by removing hardness ions; and (3) lower the operating pressure of the RO process by reducing the feed-water total dissolved solids (TDS) concentration.
Reverse osmosis (RO)
Reverse osmosis is commonly used for desalination. As well, RO is commonly used for the removal of dissolved constituents from wastewater remaining after advanced treatment with microfiltration. RO excludes ions but requires high pressures to produce deionized water (850–7000 kPa). RO is the most widely used desalination technology because of its simplicity of use and relatively low energy costs compared with distillation, which uses technology based on thermal processes. Note that RO membranes remove water constituents at the ionic level. To do so, most current RO systems use a thin-film composite (TFC), mainly consisting of three layers: a polyamide layer, a polysulphone layer and a polyester layer.
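As a rough illustration of this pore-size classification, the Python sketch below picks the coarsest process whose nominal cutoff still retains a particle of a given size. The cutoffs restate the figures quoted above; the reverse-osmosis value is an assumed stand-in for "the ionic level", and the helper itself is hypothetical.

```python
# Nominal size cutoffs (um) restated from the classification above; the RO
# figure is a rough assumption standing in for "the ionic level".
CUTOFFS_UM = [
    ("microfiltration", 0.08),
    ("ultrafiltration", 0.005),
    ("nanofiltration", 0.002),
    ("reverse osmosis", 0.0001),
]

def suggest_process(particle_size_um):
    """Pick the coarsest process whose cutoff is at or below the particle size."""
    for name, cutoff in CUTOFFS_UM:
        if particle_size_um >= cutoff:
            return name
    return "reverse osmosis"  # ionic species and anything smaller

print(suggest_process(0.5))    # bacteria-sized particle -> microfiltration
print(suggest_process(0.003))  # small dissolved solute -> nanofiltration
```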
Nanostructured membranes
An emerging class of membranes relies on nanostructured channels to separate materials at the molecular scale. These include carbon nanotube membranes, graphene membranes, membranes made from polymers of intrinsic microporosity (PIMs), and membranes incorporating metal–organic frameworks (MOFs). These membranes can be used for size-selective separations such as nanofiltration and reverse osmosis, but also for adsorption-selective separations such as olefins from paraffins and alcohols from water, which have traditionally required expensive and energy-intensive distillation.
Membrane configurations
In the membrane field, the term module is used to describe a complete unit composed of the membranes, the pressure support structure, the feed inlet, the outlet permeate and retentate streams, and an overall support structure. The principal types of membrane modules are:
Tubular, where membranes are placed inside porous support tubes, and these tubes are placed together in a cylindrical shell to form the unit module. Tubular devices are primarily used in micro- and ultrafiltration applications because of their ability to handle process streams with high solids content and high viscosity, as well as for their relative ease of cleaning.
Hollow fiber, which consists of a bundle of hundreds to thousands of hollow fibers. The entire assembly is inserted into a pressure vessel. The feed can be applied to the inside of the fiber (inside-out flow) or the outside of the fiber (outside-in flow).
Spiral wound, where a flexible permeate spacer is placed between two flat membrane sheets. A flexible feed spacer is added and the flat sheets are rolled into a circular configuration. In recent developments, surface patterning techniques have allowed the integration of permeable feed spacers directly into the membrane, giving rise to the concept of an integrated membrane.
Plate and frame, which consists of a series of flat membrane sheets and support plates. The water to be treated passes between the membranes of two adjacent membrane assemblies. The plate supports the membranes and provides a channel for the permeate to flow out of the unit module.
Ceramic and polymeric flat sheet membranes and modules. Flat sheet membranes are typically built into submerged vacuum-driven filtration systems which consist of stacks of modules, each with several sheets. Filtration mode is outside-in: the water passes through the membrane and is collected in permeate channels. Cleaning can be performed by aeration, backwash and clean-in-place (CIP).
Membrane process operation
The key elements of any membrane process relate to the influence of the following parameters on the overall permeate flux:
The membrane permeability (k)
The operational driving force per unit membrane area (Trans Membrane Pressure, TMP)
The fouling and subsequent cleaning of the membrane surface.
Flux, pressure, permeability
The total permeate flow from a membrane system is given by the following equation:
Qp = Fw · A
where Qp is the permeate stream flowrate [kg·s−1], Fw is the water flux rate [kg·m−2·s−1] and A is the membrane area [m2].
The permeability (k) [m·s−2·bar−1] of a membrane is given by the next equation:
k = Fw / PTMP
The trans-membrane pressure (TMP) is given by the following expression:
PTMP = (Pf + Pc)/2 − Pp
where PTMP is the trans-membrane pressure [kPa], Pf the inlet pressure of the feed stream [kPa], Pc the pressure of the concentrate stream [kPa], and Pp the pressure of the permeate stream [kPa].
The rejection (r) can be defined as the fraction of particles removed from the feedwater:
r = (Cf − Cp)/Cf
where Cf and Cp are the solute concentrations in the feed and permeate streams, respectively.
The corresponding mass balance equations are:
Qf = Qp + Qc
Qf·Cf = Qp·Cp + Qc·Cc
where Qf and Qc are the feed and concentrate flowrates and Cc is the solute concentration in the concentrate stream.
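Putting the definitions above together, a short numerical sketch shows how flux, trans-membrane pressure, permeability, rejection and the mass balance interlock. All feed, concentrate and permeate values below are hypothetical placeholders chosen only for illustration.

```python
# Hypothetical operating point for a cross-flow membrane unit.
A   = 50.0      # membrane area [m^2]
Qf  = 2.0       # feed flowrate [kg/s]
Qp  = 0.5       # permeate flowrate [kg/s]
Qc  = Qf - Qp   # concentrate flowrate from the overall mass balance [kg/s]

Pf, Pc, Pp = 300.0, 260.0, 20.0   # feed, concentrate, permeate pressures [kPa]
Cf, Cp = 500.0, 25.0              # feed and permeate solute concentrations [mg/L]

Fw  = Qp / A                      # water flux [kg·m^-2·s^-1]
TMP = (Pf + Pc) / 2.0 - Pp        # trans-membrane pressure [kPa]
k   = Fw / TMP                    # permeability [kg·m^-2·s^-1·kPa^-1]
r   = (Cf - Cp) / Cf              # rejection (fraction of solute removed)
Cc  = (Qf * Cf - Qp * Cp) / Qc    # concentrate concentration from the component balance

print(f"flux = {Fw:.4f} kg/m2/s, TMP = {TMP:.1f} kPa, k = {k:.5f}")
print(f"rejection = {r:.1%}, concentrate concentration = {Cc:.0f} mg/L")
```

With these placeholder numbers the rejection is 95% and the concentrate leaves at roughly 658 mg/L, illustrating how the component balance closes once the permeate quality is known.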
To control the operation of a membrane process, two modes, concerning the flux and the TMP, can be used. These modes are (1) constant TMP, and (2) constant flux.
The operation modes will be affected when the rejected materials and particles in the retentate tend to accumulate in the membrane. At a given TMP, the flux of water through the membrane will decrease and at a given flux, the TMP will increase, reducing the permeability (k). This phenomenon is known as fouling, and it is the main limitation to membrane process operation.
Dead-end and cross-flow operation modes
Two operation modes for membranes can be used. These modes are:
Dead-end filtration where all the feed applied to the membrane passes through it, obtaining a permeate. Since there is no concentrate stream, all the particles are retained in the membrane. Raw feed-water is sometimes used to flush the accumulated material from the membrane surface.
Cross-flow filtration, where the feed water is pumped with a cross-flow tangential to the membrane, and concentrate and permeate streams are obtained. This mode implies that, of the feed-water flow across the membrane, only a fraction is converted to permeate product. This parameter is termed "conversion" or "recovery" (S). The recovery will be reduced if part of the permeate is used for maintaining process operation, usually for membrane cleaning.
Filtration leads to an increase in the resistance against the flow. In the case of the dead-end filtration process, the resistance increases according to the thickness of the cake formed on the membrane. As a consequence, the permeability (k) and the flux rapidly decrease, in proportion to the solids concentration, and periodic cleaning is therefore required.
For cross-flow processes, the deposition of material will continue until the forces binding the cake to the membrane are balanced by the forces of the fluid. At this point, cross-flow filtration reaches a steady-state condition, and thus the flux remains constant with time. Therefore, this configuration demands less periodic cleaning.
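The dead-end flux decline described above can be made concrete with a resistance-in-series (Darcy-type) model, which is a common textbook description rather than one stated in this text: at constant TMP the flux is inversely proportional to the sum of the membrane resistance and a cake resistance that grows with the cumulative permeate volume. All parameter values in the sketch are hypothetical.

```python
# Minimal dead-end flux-decline sketch using a resistance-in-series model.
# J = TMP / (mu * (Rm + Rc)), with the cake resistance Rc growing as solids deposit.

TMP   = 100e3      # trans-membrane pressure [Pa]
mu    = 1e-3       # permeate viscosity [Pa·s]
Rm    = 1e12       # clean-membrane resistance [1/m]
alpha = 1e13       # specific cake resistance [m/kg] (assumed)
Cs    = 0.5        # feed solids concentration [kg/m^3] (assumed)

dt, t_end = 10.0, 3600.0   # time step and horizon [s]
V = 0.0                    # cumulative permeate volume per unit area [m^3/m^2]

t = 0.0
while t <= t_end:
    Rc = alpha * Cs * V            # cake resistance from deposited solids
    J  = TMP / (mu * (Rm + Rc))    # instantaneous flux [m^3·m^-2·s^-1]
    if t % 600 == 0:
        print(f"t = {t:5.0f} s   flux = {J*3600*1000:6.1f} L/m2/h")
    V += J * dt                    # in dead-end mode all permeate adds to the cake
    t += dt
```

With these assumed values the flux falls from roughly 360 to under 170 L/m2/h within an hour, whereas in cross-flow operation the cake growth would level off and the flux would approach a steady value.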
Fouling
Fouling can be defined as the potential deposition and accumulation of constituents in the feed stream on the membrane. The loss of RO performance can result from irreversible organic and/or inorganic fouling and chemical degradation of the active membrane layer. Microbiological fouling, generally defined as the consequence of irreversible attachment and growth of bacterial cells on the membrane, is also a common reason for discarding old membranes. A variety of oxidative solutions, cleaning and anti-fouling agents is widely used in desalination plants, and their repetitive and incidental exposure can adversely affect the membranes, generally through the decrease of their rejection efficiencies.
Fouling can take place through several physicochemical and biological mechanisms which are related to the increased deposition of solid material onto the membrane surface. The main mechanisms by which fouling can occur are:
Build-up of constituents of the feedwater on the membrane which causes a resistance to flow. This build-up can be divided into different types:
Pore narrowing, which consists of solid material attached to the interior surface of the pores.
Pore blocking occurs when the particles of the feed-water become stuck in the pores of the membrane.
Gel/cake layer formation takes place when the solid matter in the feed is larger than the pore sizes of the membrane.
Formation of chemical precipitates known as scaling
Colonization of the membrane or biofouling takes place when microorganisms grow on the membrane surface.
Fouling control and mitigation
Since fouling is an important consideration in the design and operation of membrane systems, as it affects pre-treatment needs, cleaning requirements, operating conditions, cost and performance, it should be prevented and, if necessary, removed. Optimizing the operating conditions is important to prevent fouling. However, if fouling has already taken place, it should be removed by physical or chemical cleaning.
Physical cleaning techniques for membrane include membrane relaxation and membrane backwashing.
Back-washing or back-flushing consists of pumping the permeate in the reverse direction through the membrane. Back-washing successfully removes most of the reversible fouling caused by pore blocking. Backwashing can also be enhanced by flushing air through the membrane. Backwashing increases the operating costs, since energy is required to achieve a pressure suitable for permeate flow reversion.
Membrane relaxation consists of pausing the filtration for a period, so there is no need for permeate flow reversion. Relaxation allows filtration to be maintained for a longer period before chemical cleaning of the membrane is needed.
Back pulsing, in which high-frequency back pulses efficiently remove the dirt layer. This method is most commonly used for ceramic membranes.
Recent studies have assessed combining relaxation and backwashing for optimum results.
Chemical cleaning. Relaxation and backwashing effectiveness will decrease with operation time as more irreversible fouling accumulates on the membrane surface. Therefore, besides the physical cleaning, chemical cleaning may also be recommended. It includes:
Chemically enhanced backwash, in which a low concentration of chemical cleaning agent is added during the backwashing period.
Chemical cleaning, where the main cleaning agents are sodium hypochlorite (for organic fouling) and citric acid (for inorganic fouling). Every membrane supplier proposes their chemical cleaning recipes, which differ mainly in terms of concentration and methods.
Optimizing the operating conditions. Several measures can be taken to optimize the operating conditions of the membrane to prevent fouling, for instance:
Reducing the flux. Lowering the flux always reduces fouling but impacts capital cost, since it demands more membrane area. It consists of working at a sustainable flux, which can be defined as the flux for which the TMP increases gradually at an acceptable rate, such that chemical cleaning is not necessary.
Using cross-flow filtration instead of dead-end. In cross-flow filtration, only a thin layer is deposited on the membrane since not all the particles are retained on the membrane, but the concentrate removes them.
Pre-treatment of the feed water is used to reduce the suspended solids and bacterial content of the feed-water. Flocculants and coagulants are also used, like ferric chloride and aluminium sulphate, which, once dissolved in the water, adsorb materials such as suspended solids, colloids and soluble organics. Numerical models have also been introduced in order to optimize the transport phenomena.
Membrane alteration. Recent efforts have focused on eliminating membrane fouling by altering the surface chemistry of the membrane material to reduce the likelihood that foulants will adhere to the membrane surface. The exact chemical strategy used is dependent on the chemistry of the solution that is being filtered. For example, membranes used in desalination might be made hydrophobic to resist fouling via accumulation of minerals, while membranes used for biologics might be made hydrophilic to reduce protein/organic accumulation. Modification of surface chemistry via thin film deposition can thereby largely reduce fouling. One drawback to using modification techniques is that, in some cases, the flux rate and selectivity of the membrane process can be negatively impacted.
Recycling of RO membranes
Waste prevention
Once the membrane reaches a significant performance decline it is discarded. Discarded RO membrane modules are currently classified worldwide as inert solid waste and are often disposed of in landfills; although they can also be energetically recovered. However, various efforts have been made over the past decades to avoid this, such as waste prevention, direct reapplication, and ways of recycling.
In this regard, membranes also follow the waste management hierarchy. This means that the most preferable action is to upgrade the design of the membrane, which leads to a reduction in use for the same application, and the least preferred action is disposal and landfilling.
RO membranes have some environmental challenges that must be resolved in order to comply with the circular economy principles. Mainly they have a short service life of 5–10 years. Over the past two decades, the number of RO desalination plants has increased by 70%. The size of these RO plants has also increased significantly, with some reaching a production capacity exceeding 600,000 m3 of water per day. This means a generation of 14,000 tonnes of membrane waste that is landfilled every year.
To extend the lifespan of a membrane, different prevention methods have been developed: combining the RO process with pre-treatment to improve efficiency; developing anti-fouling techniques; and developing suitable procedures for cleaning the membranes.
Pre-treatment processes lower the operating costs because of lesser amounts of chemical additives in the saltwater feed and the lower operational maintenance required for the RO system.
Four types of fouling are found on RO membranes: (i) inorganic (salt precipitation), (ii) organic, (iii) colloidal (particle deposition in suspension), and (iv) microbiological (bacteria and fungi). Thereby, an appropriate combination of pre-treatment procedures and chemical dosing, as well as an efficient cleaning plan that tackles these types of fouling, should enable the development of an effective anti-fouling technique.
Most plants clean their membranes every week (CEB – Chemically Enhanced Backwash). In addition to this maintenance cleaning, an intensive cleaning (CIP) is recommended, from two to four times annually.
Reuse
Reuse of RO membranes includes the direct reapplication of modules in other separation processes with less stringent specifications. The conversion from the RO TFC membrane to a porous membrane is possible by degrading the dense polyamide layer. Converting RO membranes by chemical treatment with oxidizing solutions is aimed at removing the active layer of the polyamide membrane, so that the module can be reused in applications such as MF or UF. This extends the membrane's life by approximately two years.
A very limited number of reports have mentioned the potential of direct RO reuse. In one autopsy investigation, hydraulic permeability, salt rejection, and morphological and topographical characteristics were assessed using field emission scanning electron microscopy and atomic force microscopy. The old RO element's performance resembled that of nanofiltration (NF) membranes, so it was not surprising to see the permeability increase from 1.0 to 2.1 L·m−2·h−1·bar−1 and the NaCl rejection drop from >90% to 35–50%.
On the other hand, in order to maximize the overall efficiency of the process, it has lately become common practice to combine RO elements of varying performance within the same pressure vessel, known as multi-membrane vessel design. In principle, this hybrid system uses high-rejection, low-productivity membranes in the upstream segment of the filtration train, followed by high-productivity, low-energy membranes in the downstream section. There are two ways in which this design can help: either by decreasing energy use due to decreased pressure needs or by increasing output. Since this concept would reduce the number of modules and pressure vessels needed for a given application, it has the potential to significantly reduce initial investment costs. It has been proposed to adapt this concept by internally reusing older RO membranes within the same pressure vessel.
Recycle
Recycling of materials is a general term that involves physically transforming the material or its components so that they can be regenerated into other useful products. The membrane modules are complex structures, consisting of a number of different polymeric components and, potentially, the individual components can be recovered for other purposes. Plastic solid waste treatment and recycling can be separated into mechanical recycling, chemical recycling and energy recovery.
Recycling techniques
Mechanical recycling characteristics:
A first separation of the components of interest is needed.
Previous washing to avoid property deterioration during the process.
Grinding of the polymeric materials into suitable size (loss of 5% of the material).
Possible posterior washing.
Melting and extrusion process (loss of 10% of the material; a combined yield estimate follows this list).
Membrane components that can be recycled (thermoplastics): PP, polyester, etc.
Membrane sheets: constructed from a number of different polymers and additives and therefore inherently difficult to accurately and efficiently separate.
Main advantage: it displaces virgin plastic production. Main disadvantages: all components must first be separated, and a large enough quantity of material is needed for the process to be viable.
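Taken together, the grinding and extrusion losses quoted above imply a rough upper bound on mechanical-recycling yield; the short check below simply multiplies the stated loss factors.

```python
# Rough yield check from the loss figures quoted above (5% grinding, 10% extrusion).
yield_fraction = (1 - 0.05) * (1 - 0.10)
print(f"Maximum recoverable material: {yield_fraction:.1%}")   # ~85.5%
```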
Chemical recycling characteristics:
Break down the polymers into smaller molecules, using depolymerisation and degradation techniques.
Cannot be used with contaminated materials.
Chemical recycling processes are tailored for specific materials.
Advantage: heterogeneous polymers can be processed with limited pre-treatment.
Disadvantage: more expensive and complex than mechanical recycling.
Polyester materials (such as in the permeate spacer and components of the membrane sheet) are suitable for chemical recycling processes, and hydrolysis is used to reverse the poly-condensation reaction used to make the polymer, with the addition of water to cause decomposition.
Energetic recovery characteristics:
Volume reduction by 90–99%, reducing the strain on landfill.
Waste incinerators can generally operate from 760 °C to 1100 °C and would therefore be capable of removing all combustible material, with the exception of the residual inorganic filler in the fiberglass casing.
Heat energy can be recovered and used for electricity generation or other heat related processes, and can also offset the greenhouse gas emissions from traditional energy.
If not properly controlled, incineration can emit greenhouse gases as well as other harmful products.
Post-treatment
After applying the chosen technique, it is necessary to carry out a post-treatment process to ensure that the membrane can function normally again.
The first step in post-treatment involves removing all residual waste from the equipment. This ensures that no contaminants remain that could affect the membrane's performance.
Separation techniques are employed to recover valuable materials from reverse osmosis membranes, such as polyamide or polysulfone, which can be recycled and reused in the production of new membranes or other products. During the material recovery stage, physical or chemical separation processes are conducted to isolate and purify these materials, ensuring their quality and facilitating their reintroduction into the production chain.
Following waste removal, the membrane is tested in a pilot system. During this phase, its performance is carefully analyzed to determine if the output meets the defined parameters and limits. This step is crucial to verify that the membrane operates efficiently and effectively after treatment.
Advantages of RO membranes recycling
Implementing a recycling process for RO membranes can incur additional costs, which many companies or organizations may be hesitant to accept. Moreover, recycled membranes often exhibit lower performance and efficiency. However, one significant advantage of recycling is the reduction of the environmental impact associated with producing new membranes from raw materials. RO membranes contain polymers derived from petroleum, a major source of greenhouse gases (GHGs) that contribute to climate change. Additionally, these polymers are not biodegradable, making them challenging to recycle.
By recycling RO membranes, we reduce the need for new materials, thereby lessening the environmental footprint. Producing new membranes from petroleum-derived polymers increases GHG emissions. Recycling existing membranes helps mitigate this impact by reusing materials that would otherwise contribute to environmental degradation.
The demand for RO membranes has surged due to stricter regulations on wastewater discharge. This demand could potentially surpass supply, making the recycling of current RO membranes a viable solution to address this challenge.
The increasing demand for RO membranes has led to higher prices. In contrast, the recycling process is generally more cost-effective than purchasing new membranes. This cost advantage can help offset the initial investment required for setting up recycling operations.
Applications
Distinct features of membranes are responsible for the interest in using them as an additional unit operation for separation processes involving fluids.
Some advantages noted include:
Less energy-intensive, since they do not require major phase changes
Do not demand adsorbents or solvents, which may be expensive or difficult to handle
Equipment simplicity and modularity, which facilitates the incorporation of more efficient membranes
Membranes are used with pressure as the driving force in membrane filtration of solutes and in reverse osmosis. In dialysis and pervaporation, the chemical potential along a concentration gradient is the driving force. Perstraction, a membrane-assisted extraction process, also relies on the gradient in chemical potential. A submerged flexible mound breakwater, as a type of membrane application, can be employed for wave control in shallow water as an advanced alternative to conventional rigid submerged designs.
However, their overwhelming success in biological systems is not matched by their application. The main reasons for this are:
Fouling – the decrease of function with use
Prohibitive cost per membrane area
Lack of solvent resistant materials
Scale-up risks
| Physical sciences | Other separations | Chemistry |
23552434 | https://en.wikipedia.org/wiki/Ammonium%20dihydrogen%20phosphate | Ammonium dihydrogen phosphate | Ammonium dihydrogen phosphate (ADP), also known as monoammonium phosphate (MAP) is a chemical compound with the chemical formula (NH4)(H2PO4). ADP is a major ingredient of agricultural fertilizers and dry chemical fire extinguishers. It also has significant uses in optics and electronics.
Chemical properties
Monoammonium phosphate is soluble in water and crystallizes from it as the anhydrous salt in the tetragonal system, as elongated prisms or needles. It is practically insoluble in ethanol.
Solid monoammonium phosphate can be considered stable in practice for temperatures up to 200 °C, when it decomposes into gaseous ammonia (NH3) and molten phosphoric acid (H3PO4). At 125 °C the partial pressure of ammonia is 0.05 mm Hg.
A solution of stoichiometric monoammonium phosphate is acidic (pH 4.7 at 0.1% concentration, 4.2 at 5%).
Preparation
Monoammonium phosphate is industrially prepared by the exothermic reaction of phosphoric acid and ammonia in the correct proportions:
NH3 + H3PO4 → (NH4)(H2PO4)
Crystalline MAP then precipitates.
Uses
Agriculture
The largest use of monoammonium phosphate by weight is in agriculture, as an ingredient of fertilizers. It supplies soil with the elements nitrogen and phosphorus in a form usable by plants. Its NPK label is 12-61-0 (12-27-0), meaning that it contains 12% by weight of elemental nitrogen and (nominally) 61% of phosphorus pentoxide (P2O5), or 27% of elemental phosphorus.
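The 12-61-0 label can be checked directly from standard atomic masses: the short sketch below computes the nitrogen and phosphorus mass fractions of NH4H2PO4 and converts the phosphorus figure to its conventional P2O5 equivalent. The atomic masses are standard values, not taken from this article.

```python
# Check the NPK figures for monoammonium phosphate, NH4H2PO4.
H, N, O, P = 1.008, 14.007, 15.999, 30.974          # standard atomic masses [g/mol]

M_map  = N + 6 * H + P + 4 * O                      # NH4H2PO4: 1 N, 6 H, 1 P, 4 O
M_p2o5 = 2 * P + 5 * O                               # phosphorus pentoxide

n_pct    = 100 * N / M_map                           # elemental nitrogen, % by weight
p_pct    = 100 * P / M_map                           # elemental phosphorus, % by weight
p2o5_pct = p_pct * M_p2o5 / (2 * P)                  # expressed as P2O5 equivalent

print(f"N = {n_pct:.1f}%  P = {p_pct:.1f}%  P2O5 equivalent = {p2o5_pct:.1f}%")
# ~12.2% N, ~26.9% P, ~61.7% P2O5 — matching the nominal 12-61-0 (12-27-0) label.
```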
Fire extinguishers
The compound is also a component of the ABC powder in some dry chemical fire extinguishers.
Optics
Monoammonium phosphate is a widely used crystal in the field of optics due to its birefringence properties. As a result of its tetragonal crystal structure, this material has negative uniaxial optical symmetry with typical refractive indices and at optical wavelengths.
Electronics
Monoammonium phosphate crystals are piezoelectric, a property required in some active sonar transducers (the alternative being transducers that use magnetostriction). In the 1950s ADP crystals largely replaced the quartz and Rochelle salt crystals in transducers because they are easier to work than quartz and, unlike Rochelle salt, are not deliquescent.
Toys
Being relatively non-toxic, MAP is also a popular substance for recreational crystal growing, being sold as toy kits mixed with dyes of various colors.
Natural occurrence
The compound appears in nature as the rare mineral biphosphammite. It is formed in guano deposits. A related compound, that is the monohydrogen counterpart, is the even more scarce phosphammite.
| Physical sciences | Phosphoric oxyanions | Chemistry |
23555003 | https://en.wikipedia.org/wiki/Grifola%20frondosa | Grifola frondosa | Grifola frondosa (also known as hen-of-the-woods, maitake in Japanese, ram's head or sheep's head) is a polypore mushroom that grows at the base of trees, particularly old growth oaks or maples. It is native to China, Europe, and North America.
Description
Like the sulphur shelf mushroom, G. frondosa is a perennial fungus that often grows in the same place for several years in succession.
G. frondosa grows from an underground tuber-like structure known as a sclerotium, about the size of a potato. The fruiting body, individually up to across but whole clumps up to , rarely , is a cluster consisting of multiple grayish-brown caps which are often curled or spoon-shaped, with wavy margins and broad. The undersurface of each cap bears about one to three pores per millimeter, with the tubes rarely deeper than . The milky-white stipe (stalk) has a branchy structure and becomes tough as the mushroom matures.
In Japan, the maitake can grow to more than .
Identification
This is a very distinct mushroom except for its cousin, the black staining mushroom, which is similar in taste but rubbery. Edible species which look similar to G. frondosa include Meripilus sumstinei (which stains black), Sparassis spathulata and Laetiporus sulphureus, another edible bracket fungus that is commonly called chicken of the woods or "sulphur shelf".
Distribution and habitat
It is native to China, Europe (August to October), and North America.
It occurs most prolifically in the northeastern regions of the United States, but has been found as far west as Idaho.
Uses
The species is a choice edible mushroom. Maitake has been consumed for centuries in China and Japan where it is one of the major culinary mushrooms. The mushroom is used in many Japanese dishes, such as nabemono. The softer caps must be thoroughly cooked.
Research
Although under laboratory and preliminary clinical research for many years, particularly for the possible biological effects of its polysaccharides, there are no completed, high-quality clinical studies for the species.
| Biology and health sciences | Edible fungi | Plants |
31109470 | https://en.wikipedia.org/wiki/Siemens%20%28unit%29 | Siemens (unit) | The siemens (symbol: S) is the unit of electric conductance, electric susceptance, and electric admittance in the International System of Units (SI). Conductance, susceptance, and admittance are the reciprocals of resistance, reactance, and impedance respectively; hence one siemens is equal to the reciprocal of one ohm () and is also referred to as the mho. The siemens was adopted by the IEC in 1935, and the 14th General Conference on Weights and Measures approved the addition of the siemens as a derived unit in 1971.
The unit is named after Ernst Werner von Siemens. In English, the same word siemens is used both for the singular and plural. Like other SI units named after people, the symbol (S) is capitalized but the name of the unit is not. For the siemens this distinguishes it from the second, symbol (lower case) s.
The related property, electrical conductivity, is measured in units of siemens per metre (S/m).
Definition
For an element conducting direct current, electrical resistance R and electrical conductance G are defined as
R = V / I and G = I / V = 1 / R
where I is the electric current through the object and V is the voltage (electrical potential difference) across the object.
The unit siemens for the conductance G is defined by
1 S = 1 Ω−1 = 1 A / V
where Ω is the ohm, A is the ampere, and V is the volt.
For a device with a conductance of one siemens, the electric current through the device will increase by one ampere for every increase of one volt of electric potential difference across the device.
The conductance of a resistor with a resistance of five ohms, for example, is (5 Ω)−1, which is equal to a conductance of 200 mS.
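The reciprocal relationship is easy to check numerically; the snippet below converts a few resistances to conductances and applies I = G·V, using arbitrary example values.

```python
# Conductance is the reciprocal of resistance: G = 1/R.
def conductance(resistance_ohm: float) -> float:
    return 1.0 / resistance_ohm

for R in (5.0, 50.0, 0.5):                 # example resistances [ohm]
    G = conductance(R)
    print(f"R = {R:5.1f} ohm  ->  G = {G*1000:7.1f} mS")

# With G = 0.2 S (a 5-ohm resistor), a 12 V potential difference drives
# I = G * V = 2.4 A through the element.
print(conductance(5.0) * 12.0, "A")
```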
Mho
A historical equivalent for the siemens is the mho (℧). The name is derived from spelling the word ohm backwards, reflecting that it is the reciprocal of the ohm, at the suggestion of Sir William Thomson (Lord Kelvin) in 1883. Its symbol is an inverted capital Greek letter omega: ℧.
NIST's Guide for the Use of the International System of Units (SI) refers to the mho as an "unaccepted special name for an SI unit", and indicates that it should be strictly avoided.
The SI term siemens is used universally in science and often in electrical applications, while mho is still used in some electronic contexts.
The inverted capital omega symbol (℧), while not an official SI abbreviation, is less likely to be confused with a variable than the letter "S" when writing the symbol by hand. The usual typographical distinctions (such as italic for variables and roman for units) are difficult to maintain. Likewise, it is difficult to distinguish the symbol "S" (siemens) from the lower-case "s" (seconds), potentially causing confusion. So, for example, a pentode’s transconductance of might alternatively be written as or (most common in the 1930s) or .
The ohm had officially replaced the old "siemens unit", a unit of resistance, at an international conference in 1881.
| Physical sciences | Electromagnetism | null |
40502503 | https://en.wikipedia.org/wiki/UY%20Scuti | UY Scuti | UY Scuti (BD-12°5055) is a red supergiant star, located 5,900 light-years away in the constellation Scutum. It is also a pulsating variable star, with a maximum brightness of magnitude 8.29 and a minimum of magnitude 10.56, which is too dim for naked-eye visibility. It is considered to be one of the largest known stars, with a radius estimated at , thus a volume of 750 million times that of the Sun. This estimate implies that if it were placed at the center of the Solar System, its photosphere would extend past the orbit of Mars or even the asteroid belt.
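The quoted volume figure can be cross-checked against the radius: volume scales with the cube of the radius, so a 750-million-fold volume corresponds to a radius roughly 900 times the Sun's. The sketch below performs that check; the solar radius, the astronomical unit and Mars's orbital distance are standard values not taken from this article.

```python
# Back out the radius implied by a volume 750 million times the Sun's.
R_SUN_KM = 6.957e5       # solar radius [km] (standard value)
AU_KM    = 1.496e8       # astronomical unit [km]
MARS_AU  = 1.52          # Mars's mean orbital distance [AU]

volume_ratio = 7.5e8
radius_ratio = volume_ratio ** (1.0 / 3.0)          # V ~ R^3, so R ~ V^(1/3)
radius_au    = radius_ratio * R_SUN_KM / AU_KM

print(f"radius ~ {radius_ratio:.0f} solar radii ~ {radius_au:.1f} AU")
print("extends past Mars's orbit:", radius_au > MARS_AU)
```

The result, roughly 909 solar radii or about 4.2 AU, is consistent with the statement that the photosphere would reach past the orbit of Mars and into the asteroid belt.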
Nomenclature and history
UY Scuti was first catalogued in 1860 by German astronomers at the Bonn Observatory, who were completing a survey of stars for the Bonner Durchmusterung Stellar Catalogue. It was designated BD-12°5055, the 5,055th star between 12°S and 13°S counting from 0h right ascension.
On detection in the second survey, the star was found to have changed slightly in brightness, suggesting that it was a new variable star. In accordance with the international standard for designation of variable stars, it was called UY Scuti, denoting it as the 38th variable star of the constellation Scutum.
UY Scuti is located a few degrees north of the A-type star Gamma Scuti and northeast of the Eagle Nebula. Although the star is very luminous, it is, at its brightest, only 9th magnitude as viewed from Earth, due to its distance and location in the Zone of Avoidance within the Cygnus rift.
Characteristics
UY Scuti is a dust-enshrouded bright red supergiant and is classified as a semiregular variable with an approximate pulsation period of 740 days. Based on an old radius of , this pulsation would be an overtone of the fundamental pulsation period, or it may be a fundamental mode corresponding to a smaller radius.
In mid 2012, AMBER interferometry with the Very Large Telescope (VLT) in the Atacama Desert in Chile was used to measure the parameters of three red supergiants near the Galactic Center region: UY Scuti, AH Scorpii, and KW Sagittarii. They determined that all three stars are over 1,000 times bigger than the Sun and over 100,000 times more luminous than the Sun. The stars' sizes were calculated using the Rosseland radius, the location at which the optical depth is , with distances adopted from earlier publications. UY Scuti was analyzed to be the largest and the most luminous of the three stars measured, at based on an angular diameter of and an assumed distance of (kpc) (about ) which was originally derived in 1970 based on the modelling of the spectrum of UY Scuti. The luminosity is then calculated to be at an effective temperature of , giving an initial mass of (possibly up to for a non-rotating star).
A 2023 measurement based on the multimessenger monitoring of supernovae puts the radius at a smaller value of , together with a smaller luminosity of and an effective temperature of 3,550 K. Direct measurements of the parallax of UY Scuti published in Gaia Data Release 2 give a parallax of , implying a closer distance of approximately , and consequently much lower luminosity and radius values of around and respectively. However, the Gaia parallax might be unreliable, at least until further observations, due to a very high level of astrometric noise.
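The radius estimates quoted here and above are tied to luminosity and effective temperature through the Stefan–Boltzmann law, L = 4πR²σT⁴. The sketch below inverts that relation; the 3,550 K temperature comes from the 2023 measurement mentioned above, while the luminosity value is a placeholder for illustration only (the actual figure is not reproduced in this text).

```python
import math

# Invert L = 4*pi*R^2*sigma*T^4 to get R, in solar units.
SIGMA = 5.670374419e-8    # Stefan–Boltzmann constant [W·m^-2·K^-4]
L_SUN = 3.828e26          # solar luminosity [W]
R_SUN = 6.957e8           # solar radius [m]

T_eff  = 3550.0           # effective temperature [K], from the 2023 measurement
L_star = 1.0e5 * L_SUN    # hypothetical luminosity of 100,000 L_sun (placeholder)

R_star = math.sqrt(L_star / (4 * math.pi * SIGMA * T_eff**4))
print(f"R ~ {R_star / R_SUN:.0f} solar radii for L = 1e5 L_sun at T = {T_eff:.0f} K")
```

With the placeholder luminosity this gives a radius of order 800 solar radii, showing how a cooler temperature or lower luminosity pulls the inferred radius down.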
The distance of UY Scuti has been re-measured by Bailer-Jones et al. in 2021, based on a method that uses the stellar parallax from Gaia EDR3, its color and apparent brightness, giving it a much closer distance of .
UY Scuti has no known companion star and so its mass is uncertain. However, it is expected on theoretical grounds to be between . Mass is being lost at per year, leading to an extensive and complex circumstellar environment of gas and dust.
Supernova
Based on current models of stellar evolution, UY Scuti has begun to fuse helium and continues to fuse hydrogen in a shell around the core. The location of UY Scuti deep within the Milky Way disc suggests that it is a metal-rich star.
After fusing heavy elements, its core will begin to produce iron, disrupting the balance of gravity and radiation in its core and resulting in a core collapse supernova. It is expected that a star like UY Scuti should evolve back to hotter temperatures to become a yellow hypergiant, luminous blue variable, or a Wolf–Rayet star, creating a strong stellar wind that will eject its outer layers and expose the core, before exploding as a type IIb, IIn, or type Ib/Ic supernova.
| Physical sciences | Notable stars | Astronomy |
32104696 | https://en.wikipedia.org/wiki/Matoke | Matoke | Matoke, locally also known as matooke, amatooke in Buganda (Central Uganda), ekitookye in southwestern Uganda, ekitooke in western Uganda, kamatore in Lugisu (Eastern Uganda), ebitooke in northwestern Tanzania, igitoki in Rwanda, Burundi and by the cultivar name East African Highland banana, are a group of starchy triploid banana cultivars, originating from the African Great Lakes. The fruit is harvested green, carefully peeled, and then cooked and often mashed or pounded into a meal. In Uganda and Rwanda, the fruit is steam-cooked, and the mashed meal is considered a national dish in both countries.
Matoke bananas are a staple food crop in Uganda, Kenya, Tanzania and other Great Lakes countries. They are also known as the Mutika/Lujugira subgroup.
The medium-sized green fruits, which are of a specific group of banana, the East African Highland bananas (Musa AAA-EA), are known in the Bantu languages of Uganda and Western Kenya as matoke.
Cooking bananas have long been and still are a common staple crop around the Lake Victoria area of Kenya and Uganda, and in the West and Kilimanjaro regions of Tanzania.
Description
In Uganda, East African Highland bananas are easily distinguishable from other banana cultivars by the numerous black (or more rarely brown or bronze) blotches on their pseudostems, giving them the appearance of polished metal. The outermost sheath of their pseudostems is a medium green, superimposed over the pink to purple underlying sheaths.
Their leaves are also darker green and dull, a difference more apparent when comparing them side by side with other banana cultivars from a distance.
The inflorescence has peduncles covered with coarse hair. The bracts are ovate to lanceolate in shape with outer surfaces that are purple to brown and inner surfaces which are red fading to yellow towards the base. The male flowers have cream colored tepals with yellow lobes. The anthers are pink, while the stigmata are orange.
The fruits are recurved and can vary in length. They are inflated with blunt tips. The pulp is white in unripe fruits and cream-colored in ripe fruits.
Taxonomy
East African Highland bananas are triploid (AAA) cultivars. Their official designation is Musa acuminata Colla (AAA-EA). Synonyms include Musa brieyi De Wild. Their paternal parent is the blood banana subspecies (M. acuminata ssp. zebrina) of the wild banana species Musa acuminata.
East African Highland bananas are a subgroup that refers to about 200 individual banana cultivars (or clones). They can be subdivided into five distinct groups of clones known as clone sets:
Mbidde or beer clone set
The Mbidde clone set contains 14 cultivars. Mbidde means 'beer', and clones belonging to this clone set are usually used for making banana beer. Their pulp is bitter and astringent with sticky brown excretions.
Nakitembe clone set
Nakabululu clone set
Nakabululu clones are soft-textured and savory. They mature quickly, but their fruits are smaller and have lower overall yields per bunch.
Musakala clone set
Musakala clones are characterized by slender fruits with bottle-necked tips. Other characteristics are the same as the preceding three clone sets.
Nfuuka clone set
Nfuuka clones are characterized by inflated, rounded, or almost rectangular fruits with intermediate-shaped tips. The bunch shape is mainly rectangular. Other characteristics are the same as the other clone sets. It is the most diverse of the five clone sets, a probable result of its tendency to mutate more frequently. They bear heavy compacted bunches and are thus more often exploited commercially than other clone sets.
Over 500 local names are known for cultivars from the EAHB subgroup.
Origin and distribution
East African Highland bananas were introduced early into Africa from Southeast Asia during the first to sixth centuries AD, probably via trade. They are genetically distinct from the other AAA cultivars, having evolved locally in the African Great Lakes region for over a millennium. They are found nowhere else in the world, and the African Great Lakes region has been called the secondary center of banana diversity because of this (with Southeast Asia being the first). East African Highland bananas are considered to be especially diverse in Uganda, Burundi, and Rwanda. However, genetic analysis has revealed that all East African Highland bananas are genetically uniform, having most likely originated from a single ancestral clone (introduced to Africa within the past 2,000 years) that underwent population expansion by vegetative propagation. The triploid East African Highland banana gene pool arose from a single hybridization event, which created a genetic bottleneck during the foundation of the crop genepool. Triploid East African Highland bananas are sterile and have been vegetatively propagated by successive generations of farmers since their introduction to Africa. This has likely led to the emergence of the genetically near-isogenic somatic mutants (i.e. today's East African Highland banana varieties) that have been selected by farmers and environments across East Africa.
Economic importance
East African Highland bananas are one of the most important staple food crops in the African Great Lakes region, particularly for Uganda, Tanzania, Kenya, Burundi, and Rwanda. Per capita annual consumption of bananas in Uganda is the highest in the world at daily per person. Including Rwanda and Burundi, consumption is about per person annually (about three to 11 bananas each day). Uganda is the second-largest producer of bananas in the world. It is, however, one of the smallest exporters, with the crops being used mostly for domestic consumption.
East African Highland bananas are so important as food crops, the local name matoke (or more commonly matooke) is synonymous for the word "food" in Uganda. Also, a portion of the East African Highland bananas locally known as mbidde is used to produce juice and beer.
Food preparation
Matoke are peeled using a knife, wrapped in the plant's leaves (or plastic bags), and set in a cooking pot (Swahili: sufuria) atop the banana stalks. The pot is then placed on a charcoal or wood fire and the matoke is steamed for a couple of hours; water is poured into the bottom of the cooking pot multiple times. The stalks in the bottom of the pot keep the leaf-wrapped fruits above the level of the hot water. While uncooked, the matoke is white and fairly hard; cooking turns it soft and yellow. The matoke is then mashed while still wrapped in the leaves or bags and often served on a fresh banana leaf. It is typically eaten with a sauce made of vegetables, ground peanut, or some type of meat (goat or beef).
Matoke are also used to make a popular breakfast dish called katogo in Uganda. Katogo is commonly cooked as a combination of peeled bananas and peanuts or beef, though offal or goat meat are also common.
In Bukoba, Tanzania, matoke (or ebitooke) are cooked with meat or smoked catfish, and beans or groundnuts. This method eliminates the need for preparing a separate sauce. In this recipe, the matoke are not mashed. Until the early 1980s, this was the most common meal in Bukoba and would be eaten all year.
| Biology and health sciences | Tropical and tropical-like fruit | Plants |
26465940 | https://en.wikipedia.org/wiki/Herpes%20esophagitis | Herpes esophagitis | Herpes esophagitis is a viral infection of the esophagus caused by Herpes simplex virus (HSV).
While the disease most often occurs in immunocompromised patients, including post-chemotherapy, immunosuppression with organ transplants and in AIDS, herpes esophagitis can also occur in immunocompetent individuals.
Signs and symptoms
People with herpes esophagitis experience pain with eating and trouble swallowing. Other symptoms can include food impaction, hiccups, weight loss, fever, and on rare occasions upper gastrointestinal bleeding and tracheoesophageal fistula. Frequently one can see herpetiform lesions in the mouth and on the lips.
Diagnosis
Upper endoscopy often reveals ulcers throughout the esophagus with intervening normal-appearing mucosa. In severe cases the ulcers can coalesce and, on rare occasions, have a black appearance known as black esophagus. While the diagnosis of herpes esophagitis can be inferred clinically, it can only be accurately diagnosed through endoscopically obtained biopsies, with microscopic evaluation by a pathologist finding the appropriate inclusion bodies and diagnostic immunochemical staining. False negative findings may occur if biopsies are taken from the ulcer rather than from its margin, as the inclusion particles are found in viable epithelial cells. Viral tissue culture represents the most accurate means of diagnosing the precise cause.
Differential diagnosis
CMV, VZV as well as HIV infections of the esophagus can have a similar presentation. Tissue culture is the most accurate means of distinguishing between the different viral causes. Caustic esophagitis, pill-induced esophagitis as well as yeast esophagitis can have a similar clinical presentation.
Prevention
Herpes simplex virus is commonly found in humans, yet uncommonly results in systemic manifestations. Suppression of HIV with antiretroviral medications and careful monitoring of immunosuppressive medications are important means of prevention. Antiviral prophylaxis, such as daily acyclovir in immunocompromised individuals, may be considered.
Treatment
Antivirals such as acyclovir, famciclovir, or valacyclovir may be used. Intravenous acyclovir is reserved for individuals who cannot swallow due to the pain, individuals with other systemic manifestations of herpes or severely immunocompromised individuals.
| Biology and health sciences | Viral diseases | Health |
25067088 | https://en.wikipedia.org/wiki/Soldering | Soldering | Soldering (; ) is a process of joining two metal surfaces together using a filler metal called solder. The soldering process involves heating the surfaces to be joined and melting the solder, which is then allowed to cool and solidify, creating a strong and durable joint.
Soldering is commonly used in the electronics industry for the manufacture and repair of printed circuit boards (PCBs) and other electronic components. It is also used in plumbing and metalwork, as well as in the manufacture of jewelry and other decorative items.
The solder used in the process can vary in composition, with different alloys used for different applications. Common solder alloys include tin-lead, tin-silver, and tin-copper, among others. Lead-free solder has also become more widely used in recent years due to health and environmental concerns associated with the use of lead.
In addition to the type of solder used, the temperature and method of heating also play a crucial role in the soldering process. Different types of solder require different temperatures to melt, and heating must be carefully controlled to avoid damaging the materials being joined or creating weak joints.
There are several methods of heating used in soldering, including soldering irons, torches, and hot air guns. Each method has its own advantages and disadvantages, and the choice of method depends on the application and the materials being joined.
Soldering is an important skill for many industries and hobbies, and it requires a combination of technical knowledge and practical experience to achieve good results.
Origins
There is evidence that soldering was employed as early as 5,000 years ago in Mesopotamia. Soldering and brazing are thought to have originated very early in the history of metal-working, probably before 4000 BC. Sumerian swords from were assembled using hard soldering.
Soldering was historically used to make jewelry, cookware and cooking tools, assembling stained glass, as well as other uses.
Applications
Soldering is used in plumbing, electronics, and metalwork from flashing to jewelry and musical instruments.
Soldering provides reasonably permanent but reversible connections between copper pipes in plumbing systems as well as joints in sheet metal objects such as food cans, roof flashing, rain gutters and automobile radiators.
Jewelry components, machine tools and some refrigeration and plumbing components are often assembled and repaired by the higher temperature silver soldering process. Small mechanical parts are often soldered or brazed as well. Soldering is also used to join lead came and copper foil in stained glass work.
Electronic soldering connects electrical wiring to devices, and electronic components to printed circuit boards. Electronic connections may be hand-soldered with a soldering iron. Automated methods such as wave soldering or use of ovens can make many joints on a complex circuit board in one operation, vastly reducing production cost of electronic devices.
Musical instruments, especially brass and woodwind instruments, use a combination of soldering and brazing in their assembly. Brass bodies are often soldered together, while keywork and braces are most often brazed.
The USSR and Russian military used zinc coffins sealed with solder to transport the dead.
Solderability
The solderability of a substrate is a measure of the ease with which a soldered joint can be made to that material.
Some metals are easier to solder than others. Copper, zinc, brass, silver and gold are easy. Iron, mild steel and nickel are next in difficulty. Because of their thin, strong oxide films, stainless steel and some aluminium alloys are even more difficult to solder. Titanium, magnesium, cast irons, some high-carbon steels, ceramics, and graphite can be soldered but it involves a process similar to joining carbides: they are first plated with a suitable metallic element that induces interfacial bonding.
Solders
Soldering filler materials are available in many different alloys for differing applications. In electronics assembly, the eutectic alloy with 63% tin and 37% lead (or 60/40, which is almost identical in melting point) has been the alloy of choice. Other alloys are used for plumbing, mechanical assembly, and other applications. Some examples of soft-solder are tin-lead for general purposes, tin-zinc for joining aluminium, lead-silver for strength at higher than room temperature, cadmium-silver for strength at high temperatures, zinc-aluminium for aluminium and corrosion resistance, and tin-silver and tin-bismuth for electronics.
A eutectic formulation has advantages when applied to soldering: the liquidus and solidus temperatures are the same, so there is no plastic phase, and it has the lowest possible melting point. Having the lowest possible melting point minimizes heat stress on electronic components during soldering. And, having no plastic phase allows for quicker wetting as the solder heats up, and quicker setup as the solder cools. A non-eutectic formulation must remain still as the temperature drops through the liquidus and solidus temperatures. Any movement during the plastic phase may result in cracks, resulting in an unreliable joint.
Common solder formulations based on tin and lead are listed below. The fractions represent the percentage of tin first, then lead, totaling 100%:
63/37: melts at (eutectic: the only mixture that melts at a point, instead of over a range)
60/40: melts between
50/50: melts between
For environmental reasons and because of the introduction of regulations such as the European RoHS (Restriction of Hazardous Substances Directive), lead-free solders are becoming more widely used. They are also suggested anywhere young children may come into contact with them (since young children are likely to place things into their mouths), or for outdoor use where rain and other precipitation may wash the lead into the groundwater. Unfortunately, common lead-free solders are not eutectic formulations, melting at around , making it more difficult to create reliable joints with them.
Other common solders include low-temperature formulations (often containing bismuth), which are often used to join previously soldered assemblies without unsoldering earlier connections, and high-temperature formulations (usually containing silver) which are used for high-temperature operation or for first assembly of items which must not become unsoldered during subsequent operations. Alloying silver with other metals changes the melting point, adhesion and wetting characteristics, and tensile strength. Of all the brazing alloys, silver solders have the greatest strength and the broadest applications. Specialty alloys are available with properties such as higher strength, the ability to solder aluminum, better electrical conductivity, and higher corrosion resistance.
Soldering vs. brazing
There are three forms of soldering, each requiring progressively higher temperatures and producing an increasingly stronger joint strength:
soft soldering, which originally used a tin-lead alloy as the filler metal
silver soldering, which uses an alloy containing silver
brazing which uses a brass alloy for the filler
The alloy of the filler metal for each type of soldering can be adjusted to modify the melting temperature of the filler. Soldering differs from gluing significantly in that the filler metals directly bond with the surfaces of the workpieces at the junction to form a bond that is both electrically conductive and gas- and liquid-tight.
Soft soldering is characterized by having a melting point of the filler metal below approximately , whereas silver soldering and brazing use higher temperatures, typically requiring a flame or carbon arc torch to achieve the melting of the filler. Soft solder filler metals are typically alloys (often containing lead) that have liquidus temperatures below .
In this soldering process, heat is applied to the parts to be joined, causing the solder to melt and to bond to the workpieces in a surface alloying process called wetting. In stranded wire, the solder is drawn up into the wire between the strands by capillary action in a process called 'wicking'. Capillary action also takes place when the workpieces are very close together or touching. The joint's tensile strength is dependent on the filler metal used; in electrical soldering little tensile strength comes from the added solder which is why it is advised that wires be twisted or folded together before soldering to provide some mechanical strength for a joint. A good solder joint produces an electrically conductive, water- and gas-tight join.
Each type of solder offers advantages and disadvantages. Soft solder is so called because of the soft lead that is its primary ingredient. Soft soldering uses the lowest temperatures (and so thermally stresses components the least) but does not make a strong joint and is unsuitable for mechanical load-bearing applications. It is also unsuitable for high-temperature applications as it loses strength, and eventually melts. Silver soldering, as used by jewelers, machinists and in some plumbing applications, requires the use of a torch or other high-temperature source, and is much stronger than soft soldering. Brazing provides the strongest of the non-welded joints but also requires the hottest temperatures to melt the filler metal, requiring a torch or other high temperature source and darkened goggles to protect the eyes from the bright light produced by the white-hot work. It is often used to repair cast-iron objects, wrought-iron furniture, etc.
Soldering operations can be performed with hand tools, one joint at a time, or en masse on a production line. Hand soldering is typically performed with a soldering iron, soldering gun, or a torch, or occasionally a hot-air pencil. Sheetmetal work was traditionally done with "soldering coppers" directly heated by a flame, with sufficient stored heat in the mass of the soldering copper to complete a joint; gas torches (e.g. butane or propane) or electrically heated soldering irons are more convenient. All soldered joints require the same elements of cleaning of the metal parts to be joined, fitting up the joint, heating the parts, applying flux, applying the filler, removing heat and holding the assembly still until the filler metal has completely solidified. Depending on the nature of flux material used and the application, cleaning of the joint may be required after it has cooled.
Each solder alloy has characteristics that work best for certain applications, notably strength and conductivity, and each type of solder and alloy has different melting temperatures. The term silver solder denotes the type of solder that is used. Some soft solders are "silver-bearing" alloys used to solder silver-plated items. Lead-based solders should not be used on precious metals because the lead dissolves the metal and disfigures it.
The distinction between soldering and brazing is based on the melting temperature of the filler alloy. A temperature of 450 °C is usually used as a practical demarcation between soldering and brazing. Soft soldering can be done with a heated iron whereas the other methods typically require a higher temperature torch or a furnace to melt the filler metal.
Different equipment is usually required since a soldering iron cannot achieve high enough temperatures for hard soldering or brazing. Brazing filler metal is stronger than silver solder, which is stronger than lead-based soft solder. Brazing solders are formulated primarily for strength, silver solder is used by jewelers to protect the precious metal and by machinists and refrigeration technicians for its tensile strength but lower melting temperature than brazing, and the primary benefit of soft solder is the low temperature used (to prevent heat damage to electronic components and insulation).
Since the joint is produced using a metal with a lower melting temperature than the workpiece, the joint will weaken as the ambient temperature approaches the melting point of the filler metal. For that reason, the higher temperature processes produce joints which are effective at higher temperatures. Brazed connections can be as strong or nearly as strong as the parts they connect, even at elevated temperatures.
Silver soldering
"Hard soldering" or "silver soldering" is used to join precious and semi-precious metals such as gold, silver, brass, and copper. The solder is usually described as easy, medium, or hard in reference to its melting temperature, not the strength of the joint. Extra-easy solder contains 56% silver and has a melting point of . Extra-hard solder has 80% silver and melts at . If multiple joints are needed, then the jeweler will start with hard or extra-hard solder and switch to lower-temperature solders for later joints.
Silver solder is somewhat absorbed by the surrounding metal, resulting in a joint that is actually stronger than the metal being joined. The metal being joined must be perfectly flush, as silver solder cannot normally be used as a filler and will not fill gaps.
Another difference between brazing and soldering is how the solder is applied. In brazing, one generally uses rods that are touched to the joint while being heated. With silver soldering, small pieces of solder wire are placed onto the metal prior to heating. A flux, often made of boric acid and denatured alcohol, is used to keep the metal and solder clean and to prevent the solder from moving before it melts.
When silver solder melts, it tends to flow towards the area of greatest heat. Jewelers can somewhat control the direction the solder moves by leading it with a torch; it will even sometimes run straight up along a seam.
Mechanical and aluminium soldering
A number of solder materials, primarily zinc alloys, are used for soldering aluminium and its alloys and, to a lesser extent, steel and zinc. This mechanical soldering is similar to a low-temperature brazing operation, in that the mechanical characteristics of the joint are reasonably good and it can be used for structural repairs of those materials.
The American Welding Society defines brazing as using filler metals with melting points over — or, by the traditional definition in the United States, above . Aluminium soldering alloys generally have melting temperatures around . This soldering / brazing operation can use a propane torch heat source.
These materials are often advertised as "aluminium welding", but the process does not involve melting the base metal, and therefore is not properly a weld.
United States Military Standard or MIL-SPEC specification MIL-R-4208 defines one standard for these zinc-based brazing/soldering alloys. A number of products meet this specification or very similar performance standards.
Flux
The purpose of flux is to facilitate the soldering process. One of the obstacles to a successful solder joint is an impurity at the site of the joint; for example, dirt, oil or oxidation. The impurities can be removed by mechanical cleaning or by chemical means, but the elevated temperatures required to melt the filler metal (the solder) encourages the work piece (and the solder) to re-oxidize. This effect is accelerated as the soldering temperatures increase and can completely prevent the solder from joining to the workpiece. One of the earliest forms of flux was charcoal, which acts as a reducing agent and helps prevent oxidation during the soldering process. Some fluxes go beyond the simple prevention of oxidation and also provide some form of chemical cleaning (corrosion). Many fluxes also act as a wetting agent in the soldering process, reducing the surface tension of the molten solder and causing it to flow and wet the workpieces more easily.
For many years, the most common type of flux used in electronics (soft soldering) was rosin-based, using the rosin from selected pine trees. It was nearly ideal in that it was non-corrosive and non-conductive at normal temperatures but became mildly reactive (corrosive) at elevated soldering temperatures. Plumbing and automotive applications, among others, typically use an acid-based (hydrochloric acid) flux which provides rather aggressive cleaning of the joint. These fluxes cannot be used in electronics because their residues are conductive leading to unintended electrical connections, and because they will eventually dissolve small diameter wires. Citric acid is an excellent water-soluble acid-type flux for copper and electronics but must be washed off afterwards.
Fluxes for soft solder are currently available in three basic formulations:
Water-soluble fluxes – higher activity fluxes which can be removed with water after soldering (no VOCs required for removal).
No-clean fluxes – mild enough to not "require" removal due to their non-conductive and non-corrosive residues. These fluxes are called "no-clean" because the residue left after the solder operation is non-conductive and will not cause electrical shorts; nevertheless they leave a plainly visible white residue. No-clean flux residue is acceptable on all 3 classes of PCBs as defined by IPC-610 provided it does not inhibit visual inspection, access to test points, or have a wet, tacky or excessive residue that may spread onto other areas. Connector mating surfaces must also be free of flux residue. Fingerprints in no-clean residue are a class 3 defect.
Traditional rosin fluxes – available in non-activated (R), mildly activated (RMA) and activated (RA) formulations. RA and RMA fluxes contain rosin combined with an activating agent, typically an acid, which increases the wettability of metals to which it is applied by removing existing oxides. The residue resulting from the use of RA flux is corrosive and must be cleaned. RMA flux is formulated to result in a residue which is less corrosive, so that cleaning becomes optional, though usually preferred. R flux is still less active and even less corrosive.
Flux performance must be carefully evaluated for best results; a very mild 'no-clean' flux might be perfectly acceptable for production equipment, but not give adequate performance for more variable hand-soldering operations.
Heating methods
Different types of soldering tools are made for specific applications. The required heat can be generated from burning fuel or from an electrically operated heating element or by passing an electric current through the item to be soldered. Another method for soldering is to place solder and flux at the locations of joints in the object to be soldered, then heat the entire object in an oven to melt the solder; toaster ovens and hand-held infrared lights have been used by hobbyists to replicate production soldering processes on a much smaller scale. A third method of soldering is to use a solder pot where the part (with flux) is dipped in a small heated iron cup of liquid solder, or a pump in a bath of liquid solder produces an elevated "wave" of solder which the part is quickly passed through. Wave soldering uses surface tension to keep solder from bridging the insulating gaps between the copper lines of flux-coated printed wiring boards/printed circuit boards.
The electric soldering iron is widely used for hand-soldering, consisting of a heating element in contact with the "iron" (a larger mass of metal, usually copper) which is in contact with the working tip made of copper. Usually, soldering irons can be fitted with a variety of tips, ranging from blunt, to very fine, to chisel heads for hot-cutting plastics rather than soldering. Plain copper tips are subject to erosion/dissolution in hot solder, and may be plated with pure iron to prevent that. The simplest irons do not have temperature regulation. Small irons rapidly cool when used to solder to, say, a metal chassis, while large irons have tips too cumbersome for working on printed circuit boards (PCBs) and similar fine work. A 25-watt iron will not provide enough heat for large electrical connectors, joining copper roof flashing, or large stained-glass lead came. On the other hand, a 100-watt iron may provide too much heat for PCBs. Temperature-controlled irons have a reserve of power and can maintain temperature over a wide range of work.
A soldering gun heats a small cross-section copper tip very quickly by conducting a large AC current through it using a large cross-section one-turn transformer; the copper tip then conducts the heat to the part like other soldering irons. A soldering gun will be larger and heavier than a heating-element soldering iron of the same power rating because of the built-in transformer.
Gas-powered irons using a catalytic tip to heat a bit, without flame, are used for portable applications. Hot-air guns and pencils allow rework of component packages (such as surface mount devices) which cannot easily be performed with electric irons and guns.
For non-electronic applications, soldering torches use a flame rather than a soldering tip to heat solder. Soldering torches are often powered by butane and are available in sizes ranging from very small butane/oxygen units suitable for very fine but high-temperature jewelry work, to full-size oxy-fuel torches suitable for much larger work such as copper piping. Common multipurpose propane torches, the same kind used for heat-stripping paint and thawing pipes, can be used for soldering pipes and other fairly large objects either with or without a soldering tip attachment; pipes are generally soldered with a torch by directly applying the open flame.
A soldering copper is a tool with a large copper head and a long handle which is heated with a small direct flame and used to apply heat to sheet metal such as tin plated steel for soldering. Typical soldering coppers have heads weighing between one and four pounds. The head provides a large thermal mass to store enough heat for soldering large areas before needing re-heating in the fire; the larger the head, the longer the working time. The copper surface of the tool must be constantly cleaned and re-tinned during use. Historically, soldering coppers were standard tools used in auto bodywork, although body solder has been mostly superseded by spot welding for mechanical connection, and non-metallic fillers for contouring.
During WW2 and for some time afterwards SOE forces used small pyrotechnic self-soldering joints to make connections for the remote detonation of demolition and sabotage explosives. These consisted of a small copper tube partially filled with solder and a slow-burning pyrotechnic composition wrapped around the tube. The wires to be joined would be inserted into the tube and a small blob of ignition compound allowed the device to be struck like a match to ignite the pyrotechnic and heat the tube for long enough to melt the solder and make the joint.
Laser soldering
Laser soldering is a technique where a 30–50 W laser is used to melt and solder an electrical connection joint. Diode laser systems based on semiconductor junctions are used for this purpose. Suzanne Jenniches patented laser soldering in 1980.
Wavelengths are typically 808 nm through 980 nm. The beam is delivered via an optical fiber to the workpiece, with fiber diameters 800 μm and smaller. Since the beam out of the end of the fiber diverges rapidly, lenses are used to create a suitable spot size on the workpiece at a suitable working distance. A wire feeder is used to supply solder.
Both lead-tin and silver-tin material can be soldered. Process recipes will differ depending on the alloy composition. For soldering 44-pin chip carriers to a board using soldering preforms, power levels were on the order of 10 watts and solder times approximately 1 second. Low power levels can lead to incomplete wetting and the formation of voids, both of which can weaken the joint.
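As a rough illustration of the figures quoted above, the sketch below estimates the energy and fluence a laser delivers per joint. The power, dwell time and spot diameter are assumptions drawn from the ranges mentioned here, not parameters of any particular system.

```python
import math

# Back-of-the-envelope estimate of laser soldering energy delivery.
# All values below are illustrative assumptions, not measured data.
power_w = 10.0             # laser power at the joint (W)
dwell_s = 1.0              # solder time per joint (s)
spot_diameter_m = 800e-6   # spot roughly the size of a large fiber core (m)

energy_j = power_w * dwell_s                        # energy delivered per joint
spot_area_mm2 = math.pi * (spot_diameter_m * 1e3 / 2) ** 2
fluence_j_per_mm2 = energy_j / spot_area_mm2

print(f"Energy per joint: {energy_j:.1f} J")
print(f"Fluence: {fluence_j_per_mm2:.1f} J/mm^2")
```

As the paragraph above notes, dropping too far below this kind of energy delivery risks incomplete wetting and voids.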
Photonic soldering
Photonic soldering is a relatively new process that uses broadband light from rapidly pulsing flashlamps to solder components to a circuit board. Energy consumption is approximately 85% less than that of a reflow oven, while the throughput is higher, and the footprint is smaller. It is similar to photonic curing, in that the components to be soldered are heated while the substrate remains relatively cool. This enables the use of high-temperature solders, such as SAC305, even on thermally fragile substrates such as PET, cellulose, and fabrics. An entire circuit board can be processed in a few seconds. In some cases, masks are used, but it can also be performed without registration, enabling very high processing rates.
Induction soldering
Induction soldering uses induction heating by high-frequency alternating current in a surrounding copper coil. This induces currents in the part being soldered, which generates heat because of the higher resistance of a joint versus its surrounding metal (resistive heating). These copper coils can be shaped to fit the joint more precisely. A filler metal (solder) is placed between the facing surfaces, and this solder melts at a fairly low temperature. Fluxes are commonly used in induction soldering. This technique is particularly suited to continuously soldering, in which case these coils wrap around a cylinder or a pipe that needs to be soldered.
Fiber focus infrared soldering
Fiber focus infrared soldering is a technique in which many infrared sources are led through fibers, then focused onto a single spot at which the connection is soldered.
Resistance soldering
Resistance soldering is soldering in which the heat required to melt the solder is created by passing an electric current through the parts to be soldered. When electric current is conducted through any metal, heat is generated; when that current is confined to a smaller cross-sectional area, the heat produced in the entire circuit is concentrated in the portion with the reduced cross-sectional area. The current doing the heating is applied by electrodes or tips energized from a low (open-circuit) voltage source, typically 2-7 volts. They can be tweezer-like for general connections or specially-shaped to make contact with parts located closely together.
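A minimal sketch of the principle described above: for a fixed current, Joule heating P = I²R is concentrated where the cross-sectional area, and hence the local resistance R = ρL/A, makes dissipation largest. The dimensions, resistivity and current below are illustrative assumptions only.

```python
# Illustrative comparison of Joule heating in a heavy conductor versus the
# small joint area; all dimensions and the current are assumed values.
def resistance(resistivity_ohm_m, length_m, area_m2):
    return resistivity_ohm_m * length_m / area_m2

rho_copper = 1.7e-8   # approximate resistivity of copper (ohm*m)
current_a = 100.0     # assumed soldering current from the low-voltage source

bulk = resistance(rho_copper, 0.010, 4.0e-6)    # 10 mm of heavy conductor
joint = resistance(rho_copper, 0.002, 0.2e-6)   # 2 mm of thin joint region

for name, r in (("bulk conductor", bulk), ("joint region", joint)):
    print(f"{name}: R = {r:.2e} ohm, P = {current_a**2 * r:.2f} W")
# The far smaller cross-section of the joint makes it the hottest part of the
# circuit, which is exactly where the solder needs to melt.
```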
Resistance soldering is unlike using a conduction iron, where heat is produced within an element and then passed through a thermally conductive tip into the joint area. A cold soldering iron requires time to reach working temperature and must be kept hot between solder joints. Thermal transfer may be inhibited if the tip is not kept properly wetted during use. With resistance soldering an intense heat can be rapidly developed directly within the joint area and in a tightly controlled manner. This allows a faster ramp up to the required solder melt temperature and minimizes thermal travel away from the solder joint, which helps to minimize the potential for thermal damage to materials or components in the surrounding area. Heat is only produced while each joint is being made, making resistance soldering more energy efficient. Because of these advantages, resistance soldering is common in industries which solder in small spaces such as connectors and wire terminals, and where high power is required, such as desoldering automotive parts.
Resistance soldering equipment, unlike conduction irons, can be used for difficult soldering and brazing applications where significantly higher temperatures may be required. This makes resistance comparable to flame soldering in some situations, but the resistance heat is more localized because of direct contact, whereas the flame might heat a larger area.
Active soldering
Active soldering is flux-less soldering with the aid of a conventional soldering iron, an ultrasonic soldering iron, or a specialized solder pot, using an active solder that contains an active element, most often titanium, zirconium or chromium. The active elements, owing to mechanical activation, react with the surface of the materials generally considered difficult to solder without premetallization. The active solders can be protected against excessive oxidation of their active element by addition of rare-earth elements with higher affinity to oxygen (typically cerium or lanthanum). Another common additive is gallium – usually introduced as a wetting promoter. Mechanical activation, needed for active soldering, can be performed by brushing (for example with use of a stainless wire brush or steel spatula) or ultrasonic vibration (20–60 kHz). Active soldering has been shown to effectively bond ceramics, aluminium, titanium, silicon, graphite and carbon nanotube based structures at temperatures lower than 450 °C, without the use of a protective atmosphere.
Pipe soldering
Copper pipe, or 'tube', is commonly joined by soldering. When applied in a plumbing trade context in the United States, soldering is often referred to as sweating, and a tubing connection so made is referred to as a sweated joint.
Outside the United States, "sweating" refers to the joining of flat metallic surfaces by a two step process by which solder is first applied to one surface, then this first piece is placed in position against the second surface and both are re-heated to achieve the desired joint.
Copper tubing conducts heat away much faster than a conventional hand-held soldering iron or gun can provide, so a propane torch is most commonly used to deliver the necessary power; for large tubing sizes and fittings a MAPP-fueled, acetylene-fueled, or propylene-fueled torch is used with atmospheric air as the oxidizer; MAPP/oxygen or acetylene/oxygen are rarely used because the flame temperature is much higher than the melting point of copper. Too much heat destroys the temper of hard-tempered copper tubing, and can burn the flux out of a joint before the solder is added, resulting in a faulty joint. For larger tubing sizes, a torch fitted with various sizes of interchangeable swirl tips is employed to deliver the needed heating power. In the hands of a skilled tradesman, the hotter flame of acetylene, MAPP, or propylene allows more joints to be completed per hour without damage to copper tempering.
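A simple estimate helps show why a torch rather than an iron is normally used: even ignoring conduction losses down the tube, just warming a fitting to soldering temperature takes kilojoules. The fitting mass and temperatures in the sketch below are illustrative assumptions.

```python
# Rough estimate of the heat needed to bring a copper fitting up to soldering
# temperature; the mass and target temperature are illustrative assumptions.
specific_heat_cu = 0.385   # J/(g*K), specific heat capacity of copper
fitting_mass_g = 50.0      # assumed mass of a small fitting
start_temp_c = 20.0
target_temp_c = 250.0      # above the melting range of common plumbing solders

heat_j = fitting_mass_g * specific_heat_cu * (target_temp_c - start_temp_c)
print(f"Heat to warm the fitting alone: {heat_j / 1000:.1f} kJ")
# Roughly 4.4 kJ: a 25 W iron would need about three minutes to supply this
# even with no losses, whereas a torch delivering hundreds of watts does it
# in seconds, before the tube can conduct the heat away.
```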
However, it is possible to use an electrical tool to solder joints in copper pipe sized from . For example, the Antex Pipemaster is recommended for use in tight spaces, when open flames are hazardous, or by do-it-yourself users. The pliers-like tool uses heated fitted jaws that completely encircle the pipe, allowing a joint to be melted in as little as 10 seconds.
Solder fittings, also known as 'capillary fittings', are usually used for copper joints. These fittings are short sections of smooth pipe designed to slide over the outside of the mating tube. Commonly used fittings include straight connectors, reducers, bends, and tees. There are two types of solder fittings: 'end feed fittings' which contain no solder, and 'solder ring fittings' (also known as Yorkshire fittings), in which there is a ring of solder in a small circular recess inside the fitting.
As with all solder joints, all parts to be joined must be clean and oxide free. Internal and external wire brushes are available for the common pipe and fitting sizes; emery cloth and wire-wool are frequently used as well, although metal wool products are discouraged, as they can contain oil, which would contaminate the joint.
Because of the size of the parts involved, and the high activity and contaminating tendency of the flame, plumbing fluxes are typically much more chemically active, and often more acidic, than electronic fluxes. Because plumbing joints may be done at any angle, even upside down, plumbing fluxes are generally formulated as pastes which stay in place better than liquids. Flux is applied to all surfaces of the joint, inside and out. Flux residues are removed after the joint is complete to prevent erosion and failure of the joint.
Many plumbing solder formulations are available, with different characteristics, such as higher or lower melting temperature, depending on the specific requirements of the job. Building codes currently almost universally require the use of lead-free solder for drinking water piping (and also flux must be approved for drinking water applications), though traditional tin-lead solder is still available. Studies have shown that lead-soldered plumbing pipes can result in elevated levels of lead in drinking water.
Since copper pipe quickly conducts heat away from a joint, great care must be taken to ensure that the joint is properly heated through to obtain a good bond. After the joint is properly cleaned, fluxed and fitted, the torch flame is applied to the thickest part of the joint, typically the fitting with the pipe inside it, with the solder applied at the gap between the tube and the fitting. When all the parts are heated through, the solder will melt and flow into the joint by capillary action. The torch may need to be moved around the joint to ensure all areas are wetted out. However, the installer must take care to not overheat the areas being soldered. If the tube begins to discolor it means that the tube has been over-heated and is beginning to oxidize, stopping the flow of the solder and causing the soldered joint not to seal properly. Before oxidation the molten solder will follow the heat of the torch around the joint. When the joint is properly wetted out, the solder and then the heat are removed, and while the joint is still very hot, it is usually wiped with a dry rag. This removes excess solder as well as flux residue before it cools down and hardens. With a solder ring joint, the joint is heated until a ring of molten solder is visible around the edge of the fitting and allowed to cool.
Of the three methods of connecting copper tubing, solder connections require the most skill, but soldering copper is a very reliable process, provided some basic conditions are met:
The tubing and fittings must be cleaned to bare metal with no tarnish
Any pressure which is formed by heating of the tubing must have an outlet
The joint must be dry (which can be challenging when repairing water pipes)
Copper is only one material that is joined in this manner. Brass fittings are often used for valves or as a connection fitting between copper and other metals. Brass piping is soldered in this manner in the making of brass instruments and some woodwind (saxophone and flute) musical instruments.
Wire brush, wire wool and emery cloth are commonly used to prepare plumbing joints for connection. Bristle brushes are usually used to apply plumbing paste flux. A heavy rag is usually used to remove flux from a plumbing joint before it cools and hardens. A fiberglass brush can also be used.
When soldering pipes closely connected to valves, such as in refrigeration systems, it may be necessary to protect the valve from heat that could damage rubber or plastic components within; in this case a wet cloth wrapped around the valve can often sink sufficient heat, through the boiling of the water, to protect the valve.
Copper tube soldering defects
In the joining of copper tube, failure to properly heat and fill a joint may lead to a 'void' being formed. This is usually a result of improper placement of the flame. If the heat of the flame is not directed at the back of the fitting cup, and the solder wire applied degrees opposite the flame, then solder will quickly fill the opening of the fitting, trapping some flux inside the joint. This bubble of trapped flux is the void: an area inside a soldered joint where solder is unable to completely fill the fitting's cup, because flux has become sealed inside the joint, preventing solder from occupying that space.
Stained glass soldering
Historically, stained glass soldering tips were copper, heated by being placed in a charcoal-burning brazier. Multiple tips were used; when one tip cooled down from use, it was placed back in the brazier of charcoal and the next tip was used.
More recently, electrically heated soldering irons are used. These are heated by a coil or ceramic heating element inside the tip of the iron. Different power ratings are available, and temperature can be controlled electronically. These characteristics allow longer beads to be run without interrupting the work to change tips. Soldering irons designed for electronic use are often effective though they are sometimes underpowered for the heavy copper and lead came used in stained glass work.
Oleic acid is the classic flux material that has been used to improve solderability.
Tiffany-type stained glass is made by gluing copper foil around the edges of the pieces of glass and then soldering them together. This method makes it possible to create three-dimensional stained glass pieces.
Electronics soldering
Hand soldering
For attachment of electronic components to a PCB, proper selection and use of flux helps prevent oxidation during soldering; it is essential for good wetting and heat transfer. The soldering iron tip must be clean and pre-tinned with solder to ensure rapid heat transfer.
Electronic joints are usually made between surfaces that have been tinned and rarely require mechanical cleaning, though tarnished component leads and copper traces with a dark layer of oxide passivation (due to aging), as on a new prototyping board that has been on the shelf for about a year or more, may need to be mechanically cleaned.
To simplify soldering, beginners are usually advised to apply the soldering iron and the solder separately to the joint, rather than the solder being applied directly to the iron. When sufficient solder is applied, the solder wire is removed. When the surfaces are adequately heated, the solder will flow around the workpieces. The iron is then removed from the joint.
If all metal surfaces have not been properly cleaned ("fluxed") or brought entirely above the melting temperature of the solder used, the result will be an unreliable ("cold solder") joint, even though its appearance may suggest otherwise.
Excess solder, unconsumed flux and residue are sometimes wiped from the soldering iron tip between joints. The tip of the bit (commonly iron plated to reduce erosion) is kept wetted with solder ("tinned") when hot to assist soldering, and to minimize oxidation and corrosion of the tip itself.
After inserting a through-hole mounted component, the excess lead is cut off, leaving a length of about the radius of the pad.
Hand-soldering techniques require a great deal of skill for the fine-pitch soldering of surface-mount chip packages. In particular ball grid array (BGA) devices are notoriously difficult, if not impossible, to rework by hand.
Defects
Cold joints
Various problems may arise in the soldering process which lead to joints which are nonfunctional either immediately or after a period of use.
The most common defect when hand-soldering results from the parts being joined not exceeding the solder's liquidus temperature, resulting in a "cold solder" joint. This is usually the result of the soldering iron being used to heat the solder directly, rather than the parts themselves. Properly done, the iron heats the parts to be connected, which in turn melt the solder, guaranteeing adequate heat in the joined parts for thorough wetting. If using solder wire with an embedded flux core, heating the solder first may cause the flux to evaporate before it cleans the surfaces being soldered.
A cold-soldered joint may not conduct at all, or may conduct only intermittently. Cold-soldered joints also happen in mass production, and are a common cause of equipment which passes testing, but malfunctions after sometimes years of operation.
Dry joints
A "dry joint" occurs when the cooling solder is moved. Since non-eutectic solder alloys have a small plastic range, the joint must not be moved until the solder has cooled down through both the liquidus and solidus temperatures. Dry joints often occur because the joint moves when the soldering iron is removed from the joint. They are weak mechanically and poor conductors electrically.
Avoiding overheating of components
For hand soldering, the heat source tool is selected to provide adequate heat for the size of joint to be completed. A 100-watt soldering iron may provide too much heat for printed circuit boards (PCBs), while a 25-watt iron will not provide enough heat for large electrical connectors.
Using a tool with too high a temperature can damage sensitive components, but protracted heating by a tool that is too cool or under powered can also cause heat damage. Excessive heating of a PCB may result in delamination — the copper traces may actually lift off the substrate, particularly on single sided PCBs without through hole plating.
While hand-soldering, a heat sink, such as a crocodile clip, may be used on the leads of heat-sensitive components to reduce heat transfer to the components and avoid damaging them. This is especially applicable to germanium parts.
The heat sink limits the temperature of the component body by absorbing and dissipating heat, by reducing the thermal resistance between the component and the air. Meanwhile, the thermal resistance of the leads maintains the temperature difference between the part of the leads being soldered and the component body. Thus, the leads become hot enough to melt the solder while the component body remains cooler. The heat sink will mean the use of more heat to complete the joint, since heat taken up by the heat sink will not heat the work pieces.
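A simplified steady-state picture of this treats the lead between the joint and the component body as a thermal resistance R_lead and the clip-on heat sink's path to ambient as R_sink; both quantities are idealisations for illustration:

$$ T_\text{body} \approx T_\text{ambient} + \left(T_\text{joint} - T_\text{ambient}\right)\frac{R_\text{sink}}{R_\text{lead} + R_\text{sink}} $$

An effective heat sink makes R_sink small, pulling the body temperature toward ambient even while the soldered end of the lead is at solder-melting temperature; the price, as noted above, is the extra heat the iron must supply.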
Components which dissipate large amounts of heat during operation are sometimes elevated above the PCB to avoid PCB overheating. Plastic or metal mounting clips or holders may be used with large devices to aid heat dissipation and reduce joint stresses.
Visual inspection of joints
When visually inspected, a good solder joint will appear smooth, bright and shiny, with the outline of the soldered wire clearly visible. In general a good-looking soldered joint is a good joint.
A matte gray surface is a good indicator of a joint that was moved during soldering. A dry joint has a characteristically dull or grainy appearance immediately after the joint is made. This appearance is caused by crystallization of the liquid solder. Too little solder will result in a dry and unreliable joint.
Cold solder joints are dull and sometimes cracked or pock-marked. If the joint has lumps or balls of otherwise shiny solder, the metal has not wetted properly. Too much solder (the familiar 'solder blob' to beginners) is not necessarily unsound, but tends to mean poor wetting.
A concave fillet is ideal. The boundary between the solder and the workpiece in a good joint will have a low angle. This indicates good wetting and minimal use of solder, and therefore minimal heating of heat sensitive components. A joint may be good, but if a large amount of unnecessary solder is used, then excess heating was obviously required.
Lead-free solder formulations may cool to a dull surface even if the joint is good. The solder looks shiny while molten, and suddenly hazes over as it solidifies even though it has not been disturbed during cooling.
Flux use and residue
An improperly selected or applied flux can cause joint failure. Without flux the joint may not be clean, or may be oxidized, resulting in an unsound joint.
For electronic work, flux-core solder wire is generally used, but additional flux may be used from a flux pen or dispensed from a small bottle with a syringe-like needle.
Some fluxes are designed to be stable and inactive when cool and do not need to be cleaned off, though they can if desired. If such fluxes are used, cleaning may merely be a matter of aesthetics or to make visual inspection of joints easier in specialised 'mission critical' applications such as medical devices, military and aerospace. For satellites, this will also reduce weight, slightly but usefully. In high humidity, since even non-corrosive flux might remain slightly active, the flux may be removed to reduce corrosion over time.
Some fluxes are corrosive and flux residue must be removed after soldering. If not properly cleaned, the flux may corrode the joint or the PCB. Water, alcohol, acetone, or other solvents compatible with the flux and the parts involved are commonly used with cotton swabs or bristle brushes.
In some applications, the PCB might also be coated in some form of protective material such as a lacquer to protect it and exposed solder joints from the environment.
Desoldering and resoldering
Used solder contains some of the dissolved base metals and is unsuitable for reuse in making new joints. Once the solder's capacity for the base metal has been reached, it will no longer properly bond with the base metal, usually resulting in a brittle cold solder joint with a crystalline appearance.
It is good practice to remove solder from a joint prior to resoldering — desoldering braids (or wicks) or vacuum desoldering equipment (solder suckers) can be used. Desoldering wicks contain plenty of flux which will remove the oxidation from the copper trace and any device leads that are present. This will leave a bright, shiny, clean junction to be resoldered.
The lower melting point of solder means it can be melted away from the base metal, leaving it mostly intact, though the outer layer will be "tinned" with solder. Flux residue will remain, which can easily be removed by abrasive or chemical processes. The tinned layer allows new solder to flow onto the joint very quickly and easily.
Wave soldering and reflow soldering
Currently, mass-production printed circuit boards (PCBs) are mostly wave soldered or reflow soldered, though hand soldering of production electronics is also still widely used.
In wave soldering, components are prepped (trimmed or modified) and installed on the PCB. Sometimes, to prevent movement they are temporarily kept in place with small dabs of adhesive or secured with a fixture, then the assembly is passed over flowing solder in a bulk container. This solder flow is forced to produce a standing wave so the whole PCB is not submerged in solder, but rather just touched. The result is that solder stays on pins and pads, but not on the PCB itself.
Reflow soldering is a process in which a solder paste (a mixture of prealloyed solder powder and a flux-vehicle that has a peanut butter-like consistency) is used to stick the components to their attachment pads, after which the assembly is heated by an infrared lamp, a hot air pencil, or, more commonly, by passing it through a carefully controlled oven.
Since different components can be best assembled by different techniques, it is common to use two or more processes for a given PCB. For example, surface mounted parts may be reflow soldered first, with a wave soldering process for the through-hole mounted components coming next, and bulkier parts hand-soldered last.
Hot-bar reflow
Hot-bar reflow is a selective soldering process where two pre-fluxed, solder coated parts are heated with a heating element (called a thermode) to a temperature sufficient to melt the solder.
Pressure is applied through the entire process (usually 15 seconds) to ensure that components stay in place during cooling. The heating element is heated and cooled for each connection. Up to 4000 W can be used in the heating element, allowing fast soldering and good results with connections requiring high energy.
Environmental regulation and RoHS
Environmental legislation in many countries has led to a change in formulation of both solders and fluxes.
The RoHS directives in the European Community required many new electronic circuit boards to be lead-free by 1 July 2006, mostly in the consumer goods industry, but in some others as well. In Japan, lead was phased out prior to legislation by manufacturers, due to the additional expense in recycling products containing lead.
Water-soluble non-rosin-based fluxes have been increasingly used since the 1980s so that soldered boards can be cleaned with water or water-based cleaners. This eliminates hazardous solvents from the production environment, and from factory effluents.
Even without the presence of lead, soldering can release fumes that are harmful and/or toxic to humans. It is highly recommended to use a device that can remove the fumes from the work area either by ventilating outside or filtering the air.
Lead-free
Lead free soldering requires higher soldering temperatures than lead/tin soldering. SnPb 63/37 eutectic solder melts at . SAC lead-free solder melts at .
Nevertheless, many new technical challenges have arisen with this endeavor. To reduce the melting point of tin-based solder alloys, various new alloys have had to be researched, with additives of copper, silver, bismuth as typical minor additives to reduce the melting point and control other properties. Additionally, tin is a more corrosive metal, and can eventually lead to the failure of solder baths.
Lead-free construction has also extended to components, pins, and connectors. Most of these pins used copper frames, and either lead, tin, gold or other finishes. Tin finishes are the most popular of lead-free finishes. Nevertheless, this brings up the issue of how to deal with tin whiskers. The current movement brings the electronics industry back to the problems solved in the 1960s by adding lead. JEDEC has created a classification system to help lead-free electronic manufacturers decide what provisions to take against whiskers, depending upon their application.
| Technology | Metallurgy | null |
33704019 | https://en.wikipedia.org/wiki/Jameson%27s%20mamba | Jameson's mamba | Jameson's mamba (Dendroaspis jamesoni) is a species of highly venomous snake in the family Elapidae. The species is native to equatorial Africa. A member of the mamba genus, Dendroaspis, it is slender with dull green upper parts and cream underparts and generally ranges from in total length. Described by Scottish naturalist Thomas Traill in 1843, it has two recognised subspecies. The nominate subspecies is found in central and western sub-Saharan Africa, and the eastern black-tailed subspecies is found in eastern sub-Saharan Africa, mainly western Kenya.
Predominantly arboreal, Jameson's mamba preys mainly on birds and mammals. Its venom consists of both neurotoxins and cardiotoxins. Symptoms of envenomation in humans include pain and swelling at the bite site, followed by generalised swelling, chills, sweating, abdominal pain and vomiting, with subsequent slurred speech, difficulty breathing and paralysis. Fatalities have been recorded within three to four hours of being bitten. The venom of the eastern subspecies is around twice as potent as that of the nominate subspecies.
Taxonomy and etymology
Jameson's mamba was first described as Elaps jamesoni in 1843 by Thomas Traill, a Scottish doctor, zoologist and scholar of medical jurisprudence. The specific epithet is in honour of Robert Jameson, Traill's contemporary and the Regius Professor of Natural History at the University of Edinburgh where Traill studied. In 1848, German naturalist Hermann Schlegel created the genus Dendroaspis, designating Jameson's mamba as the type species. The generic name is derived from the Ancient Greek words déndron ('tree') and aspís ('asp'). The genus was misspelt as Dendraspis by French zoologist Auguste Duméril in 1856, and went generally uncorrected by subsequent authors. In 1936, Dutch herpetologist Leo Brongersma corrected the spelling to the original.
In 1936, British biologist Arthur Loveridge described a new subspecies D. jamesoni kaimosae, from a specimen collected from the Kaimosi Forest in western Kenya, observing that it had fewer subcaudal scales and a black (rather than green) tail. Analysis of the components of the venom of all mambas places Jameson's mamba as sister species to the western green mamba (Dendroaspis viridis), as shown in the cladogram below.
Description
Jameson's mamba is a long and slender snake with smooth scales and a tail which typically accounts for 20 to 25% of its total length. The total length (including tail) of an adult snake is approximately . It may grow as large as . The general consensus is that the sexes are of similar sizes, although fieldwork in southeastern Nigeria found that males were significantly larger than females. Adults tend to be dull green across the back, blending to pale green towards the underbelly with scales generally edged with black. The neck, throat and underparts are typically cream or yellowish in colour. Jameson's mamba has a narrow and elongated head containing small eyes and round pupils. Like the western green mamba, the neck may be flattened. The subspecies D. jamesoni kaimosae, which is found in the eastern part of the species' range, features a black tail, while central and western examples typically have a pale green or yellow tail. The thin fangs are attached to the upper jaw and have a furrow running down their anterior surface.
Scalation
The number and pattern of scales on a snake's body play a key role in the identification and differentiation at the species level. Jameson's mamba has between 15 and 17 rows of dorsal scales at midbody, 210 to 236 (jamesoni) or 202 to 227 (kaimosae) ventral scales, 94 to 122 (jamesoni) or 94 to 113 (kaimosae) divided subcaudal scales, and a divided anal scale. Its mouth is lined with 7 to 9 (usually 8) supralabial scales above and 8 to 10 (usually 9) sublabial scales below, the fourth ones located over and under the eye. Its eyes have three preocular, three postocular and one subocular scale.
Distribution and habitat
Jameson's mamba occurs mostly in Central Africa and West Africa, and in some parts of East Africa. In Central Africa it can be found from Angola northwards to the Democratic Republic of the Congo, Republic of the Congo, Central African Republic, and as far north as the Imatong Mountains of South Sudan. In West Africa it ranges from Ghana eastwards to Togo, Nigeria, Cameroon, Equatorial Guinea and Gabon. In East Africa it can be found in Uganda, Kenya, Rwanda, Burundi and Tanzania. The subspecies D. jamesoni kaimosae is endemic to East Africa and chiefly found in western Kenya, where its type locality is located, as well as in Uganda, Rwanda, and the adjacent Democratic Republic of the Congo. It is a relatively common and widespread snake, particularly across its western range. Fieldwork in Nigeria indicated the species is sedentary.
Found in primary and secondary rainforests, woodland, forest-savanna and deforested areas at elevations of up to high, Jameson's mamba is an adaptable species; it persists in areas where there has been extensive deforestation and human development. It is often found around buildings, town parks, farmlands and plantations. Jameson's mamba is a highly arboreal snake, more so than its close relatives the eastern green mamba and western green mamba, and significantly more so than the black mamba.
Behaviour and ecology
Jameson's mamba is a highly agile snake. Like other mambas it is capable of flattening its neck in mimicry of a cobra when it feels threatened, and its body shape and length give an ability to strike at significant range. Generally not aggressive, it will typically attempt to escape if confronted.
Breeding
In Nigeria males fight each other for access to females (and then breed) over the dry season of December, January and February; mating was recorded in September in the Kakamega Forest in Kenya. Jameson's mamba is oviparous; the female lays a clutch of 5–16 eggs; in Nigeria laying was recorded from April to June, and most likely soon after November in Uganda. Egg clutches have been recovered from abandoned termite colonies.
Diet and predators
Jameson's mamba has been difficult to study in the field due to its arboreal nature and green coloration. It has not been observed hunting but is thought to use a sit-and-wait strategy, which has been reported for the eastern green mamba. The bulk of its diet is made up of birds and tree-dwelling mammals, such as cisticolas, woodpeckers, doves, squirrels, shrews and mice. Smaller individuals of under in length have been recorded feeding on lizards such as the common agama, and toads. There is no evidence they have adapted to hunting terrestrial rodents such as rats, though they have been recorded eating rodents in Kenya, and have accepted them in captivity.
The main predators of this species are birds of prey, including the martial eagle, bateleur, and the Congo serpent eagle. Other predators may include the honey badger, other snakes, and species of mongoose.
Venom
Jameson's mamba is classified as a Snake of Medical Importance in Sub-Saharan Africa by the World Health Organization, although there are few records of snakebites. Field observations over a 16-year period in the Niger Delta in southern Nigeria found that both humans and snakes were most active in rural areas during the rainy season, April to August, hence rendering this a peak period for snakebite. As well as succumbing to snakebites, workers were reported to have perished from falling from trees after encountering Jameson's mambas in the canopy of trees in palm oil plantations. Snake bites are rare in cities but more common in forested areas in countries such as the Democratic Republic of the Congo; the country's poor infrastructure and lack of facilities render access to antivenom difficult.
Like other mambas, the venom of the Jameson's mamba is highly neurotoxic. Symptoms of envenomation by this species include pain and swelling of the bite site. Systemic effects include generalised swelling, chills, sweating, abdominal pain and vomiting, with subsequent slurred speech, difficulty breathing, and paralysis. Death has been recorded within three to four hours of being bitten; there is an unconfirmed report of a child dying within 30 minutes. With an average intravenous murine median lethal dose (LD50) of 0.53 mg/kg, the venom of the eastern subspecies kaimosae is more than twice as potent as that of the nominate subspecies jamesoni at 1.2 mg/kg. The reason for this is unclear as the venom compositions are similar between the two subspecies, though kaimosae has higher concentrations of the potent neurotoxin-1.
Like that of most other mambas, the venom of Jameson's mamba contains predominantly three-finger toxin agents as well as dendrotoxins. Other toxins of the three-finger family present include alpha-neurotoxin, cardiotoxins and fasciculins. Dendrotoxins are akin to kunitz-type protease inhibitors that interact with voltage-dependent potassium channels, stimulating acetylcholine and causing an excitatory effect, and are thought to cause symptoms such as sweating. Unlike that of many snake species, the venom of mambas has little phospholipase A2. Although cardiotoxins have been isolated in higher proportions from its venom than other mamba species, their role in toxicity is unclear and probably not prominent.
Treatment
The speed of onset of envenomation means that urgent medical attention is needed. Standard first aid treatment for any bite from a suspected venomous snake is the application of a pressure bandage, minimisation of the victim's movement, and rapid conveyance to a hospital or clinic. Due to the neurotoxic nature of green mamba venom, an arterial tourniquet may be beneficial. Tetanus toxoid is sometimes administered, though the main treatment is the administration of the appropriate antivenom. Trivalent and monovalent antivenoms for the black, eastern green, and Jameson's mambas became available in the 1950s and 1960s.
| Biology and health sciences | Snakes | Animals |
40516793 | https://en.wikipedia.org/wiki/Colored%20pencil | Colored pencil | A colored pencil (American English), coloured pencil (Commonwealth English), colour pencil (Indian English), map pencil, pencil crayon, or coloured/colouring lead (Canadian English, Newfoundland English) is a type of pencil constructed of a narrow, pigmented core encased in a wooden cylindrical case. Unlike graphite and charcoal pencils, colored pencils' cores are wax- or oil-based and contain varying proportions of pigments, additives, and binding agents. Water-soluble (watercolor) pencils and pastel pencils are also manufactured as well as colored cores for mechanical pencils.
Colored pencils are made in a wide range of price, quality and usability, from student-grade to professional-grade. Concentration of pigments in the core, lightfastness of the pigments, durability of the colored pencil, and softness of the core are some determinants of a brand's quality and, consequently, its market price. There is no general quality difference between wax/oil-based and water-soluble colored pencils, although some manufacturers rate their water-soluble pencils as less lightfast than their similar wax/oil-based pencils. Colored pencils are commonly stored in pencil cases to prevent damage.
Despite colored pencils' existence for more than a century, the art world has historically treated the medium with less admiration than other art media. However, the discovery of new techniques and methods, the development of lightfast pencils, and the formation of authoritative organizations is better enabling colored pencils to compete with other media. Additionally, colored pencils are more affordable, cleaner, and simpler compared to other media.
History
The use of wax-based media in crayons can be traced back to the Greek Golden Age, and was later documented by the Roman scholar Pliny the Elder. Wax-based materials have appealed to artists for centuries due to their resistance to decay, the vividness and brilliance of their colors, and their unique rendering qualities.
Although colored pencils had been used for “checking and marking” for decades prior, it was not until the early 20th century that artist-quality colored pencils were produced. Manufacturers that began producing artist-grade colored pencils included Faber-Castell in 1908 (the Polychromos range was initially 60 colors) and Caran d’Ache in 1924, followed by Berol Prismacolor in 1938. Other notable manufacturers include Bruynzeel-Sakura, Cretacolor, Derwent, Koh-i-Noor Hardtmuth, Mitsubishi (uni-ball), Schwan-Stabilo, and Staedtler.
Types
Several types of colored pencils are manufactured for both artistic and practical uses.
Artist- and professional-grade
Artist and professional-grade pencils are made with higher concentrations of high-quality pigments than student-grade colored pencils. Their lightfastness – resistance to UV rays in sunlight – is also measured and documented. Core durability, break and water resistance, and brand popularity are also notable features of artist-grade colored pencils. Artist-grade pencils have the largest color ranges; 72 color sets are very common and there are several brands of 120 colors or more. They are also typically available as individual pencils.
Student- and scholastic-grade
Many of the same companies that produce artist-grade colored pencils also offer student-grade materials and scholastic-level colored pencils. These products do not usually include a lightfastness rating, and core composition and pigment-binder ratio vary, even between products manufactured by the same company. Student- and scholastic-grade colored pencils lack the high quality pigments and lightfastness of artist-grade products, and their color range is smaller, often limited to 24 or 36 colors.
However, using lower-grade colored pencils does have some advantages. Some companies offer erasable colored pencils for beginning artists to experiment with. Student-grade colored pencils also tend to cost significantly less than their higher-grade counterparts, which makes them more accessible for children and students.
Watercolor pencils
Watercolor pencils, otherwise known as water-soluble pencils, are a versatile art medium. The pencils can be used dry—like normal colored pencils—or they can be applied wet to get the desired watercolor effect. In wet application, the artist first lays down the dry pigment and then follows up with a damp paintbrush to intensify and spread the colors. This technique can also be used to blend colors together, and many artists will apply both techniques in one art piece. Artist-grade watercolor pencils typically come in 60 or 72 colors but can go all the way up to 120 colors.
Oil-based
Oil-based pencils utilize an oil-based binder. The oil binder imparts unique characteristics such as a smoother finish, enhanced durability, and the ability to create fine details with less wax bloom compared to their wax-based counterparts. This composition facilitates superior blending and layering capabilities, allowing artists to achieve subtle color transitions and complex depths in their artwork. The pencils are known for their vibrant colors and are versatile enough for use on a variety of surfaces, making them a favorite among professionals for applications requiring precision and longevity.
Pastel pencils
Pastel pencils are similar to hard pastels. Pastel pencils can be used on their own or in combination with other mediums. They can be used dry, wet or blended together. Many artists use them for preliminary sketches, given that graphite pencils are not compatible with pastels. They can also be sharpened to a fine point to add details on pastel drawings.
Techniques
Colored pencils can be used in combination with several other drawing mediums. When used by themselves, there are two main rendering techniques colored pencil artists use.
Layering is usually used in the beginning stages of a colored pencil drawing, but can also be used for entire pieces. In layering, tones are gradually built up using several layers of primary colors. Layered drawings usually expose the tooth of the paper and are characterized by a grainy, fuzzy finish.
Burnishing is a blending technique in which a colorless blender or a light-colored pencil is applied firmly to an already layered drawing. This produces a shiny surface of blended colors that gets deep into the grain of the paper.
Roughening is a technique that creates the rendering of textured surfaces by placing a rough piece of paper underneath the drawing paper. The drawing paper is then rubbed with a very smooth object to leave indentations in the paper. Finally, drawing over the area with colored pencil makes the design stand out.
Scoring patterns can be used to create highlights on objects. The technique requires tracing or transparent paper and a sharp pen. First, the paper is placed over the area being impressed. Then, with moderate pressure, the desired line or pattern is scored.
Fusing colors encourages the colored pencil pigments to be physically blended using solvents, colorless blender, or a combination of both of these. This technique enables the colors to easily mix into a single color.
| Technology | Artist's and drafting tools | null |
23568658 | https://en.wikipedia.org/wiki/Radioanalytical%20chemistry | Radioanalytical chemistry | Radioanalytical chemistry focuses on the analysis of samples for their radionuclide content. Various methods are employed to purify and identify the radioelement of interest through chemical methods and sample measurement techniques.
History
The field of radioanalytical chemistry was originally developed by Marie Curie, with contributions by Ernest Rutherford and Frederick Soddy. They developed chemical separation and radiation measurement techniques on terrestrial radioactive substances. During the twenty years that followed 1897, the concept of radionuclides was born. Since Curie's time, applications of radioanalytical chemistry have proliferated. Modern advances in nuclear and radiochemistry research have allowed practitioners to apply chemistry and nuclear procedures to elucidate nuclear properties and reactions, to use radioactive substances as tracers, and to measure radionuclides in many different types of samples.
The importance of radioanalytical chemistry spans many fields including chemistry, physics, medicine, pharmacology, biology, ecology, hydrology, geology, forensics, atmospheric sciences, health protection, archeology, and engineering. Applications include forming and characterizing new elements, determining the age of materials, and creating radioactive reagents for specific tracer use in tissues and organs. The ongoing goal of radioanalytical researchers is to develop methods for measuring more radionuclides, at ever lower concentrations, in people and the environment.
Radiation decay modes
Alpha-particle decay
Alpha decay is characterized by the emission of an alpha particle, a 4He nucleus. The mode of this decay causes the parent nucleus to decrease by two protons and two neutrons. This type of decay follows the relation:
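Written in standard nuclide notation, with A the mass number and Z the atomic number of the parent X and daughter Y, the relation takes the form:

$$ {}^{A}_{Z}\mathrm{X} \longrightarrow {}^{A-4}_{Z-2}\mathrm{Y} + {}^{4}_{2}\mathrm{He} $$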
Beta-particle decay
Beta decay is characterized by the emission of an antineutrino and a negatron, which is equivalent to an electron. This process occurs when a nucleus has an excess of neutrons with respect to protons, as compared to the stable isobar. This type of transition converts a neutron into a proton; similarly, a positron is released when a proton is converted into a neutron. These decays follow the relations:
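In the same notation, the negatron and positron emission processes described above can be written as:

$$ {}^{A}_{Z}\mathrm{X} \longrightarrow {}^{A}_{Z+1}\mathrm{Y} + \beta^{-} + \bar{\nu}_{e} \qquad\text{and}\qquad {}^{A}_{Z}\mathrm{X} \longrightarrow {}^{A}_{Z-1}\mathrm{Y} + \beta^{+} + \nu_{e} $$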
Gamma-ray decay
Gamma ray emission follows the previously discussed modes of decay when the decay leaves a daughter nucleus in an excited state. This nucleus is capable of further de-excitation to a lower energy state by the release of a photon. This decay follows the relation:
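In the same notation, with the asterisk marking the excited daughter nucleus:

$$ {}^{A}_{Z}\mathrm{X}^{*} \longrightarrow {}^{A}_{Z}\mathrm{X} + \gamma $$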
Radiation detection principles
Gas ionization detectors
Gaseous ionization detectors collect and record the electrons freed from gaseous atoms and molecules by the interaction of radiation released by the source. A voltage potential is applied between two electrodes within a sealed system. Since the gaseous atoms are ionized after they interact with radiation, the freed electrons are attracted to the anode, which produces a signal. It is important to vary the applied voltage such that the response falls within a critical proportional range.
Solid-state detectors
The operating principle of semiconductor detectors is similar to that of gas ionization detectors, except that instead of gas atoms being ionized, free electrons and holes are produced in the solid, creating a signal at the electrodes. The advantage of solid-state detectors is the greater resolution of the resulting energy spectrum. NaI(Tl) detectors are commonly used; for more precise applications, Ge(Li) and Si(Li) detectors have been developed. For the most sensitive measurements, high-purity germanium detectors operated at liquid-nitrogen temperature are used.
Scintillation detectors
Scintillation detectors use a photoluminescent material (such as ZnS) that interacts with radiation. When a radioactive particle decays and strikes the photoluminescent material, a photon is released. This photon is multiplied in a photomultiplier tube, which converts the light into an electrical signal. The signal is then processed and assigned to a channel. By comparing the number of counts to the energy level (typically in keV or MeV), the type of decay can be determined.
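To make the channel-to-energy step concrete, the following minimal Python sketch applies a hypothetical linear calibration to an illustrative multichannel-analyzer spectrum and reports the energy of its largest peak. The calibration constants, function names, and counts are assumptions for illustration only; real calibrations are derived from measurements of known standard sources.

```python
# Minimal sketch: convert multichannel analyzer (MCA) channels to energy
# with a linear calibration and report the energy of the largest peak.
# The calibration constants and the synthetic spectrum are illustrative
# assumptions; real constants come from measuring known standard sources.

def channel_to_energy(channel, gain_kev_per_ch=2.0, offset_kev=1.5):
    """Linear energy calibration: E(keV) = gain * channel + offset."""
    return gain_kev_per_ch * channel + offset_kev

def peak_energy(counts):
    """Return the calibrated energy (keV) of the channel with the most counts."""
    peak_channel = max(range(len(counts)), key=lambda ch: counts[ch])
    return channel_to_energy(peak_channel)

if __name__ == "__main__":
    # Synthetic 512-channel spectrum: flat background with a peak at channel 330,
    # i.e. roughly 661 keV with the assumed calibration (reminiscent of Cs-137).
    spectrum = [10] * 512
    for ch, extra in [(328, 40), (329, 120), (330, 200), (331, 110), (332, 35)]:
        spectrum[ch] += extra
    print(f"Largest peak at about {peak_energy(spectrum):.0f} keV")
```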
Chemical separation techniques
Because radionuclides have chemical properties similar to those of their stable, inactive counterparts, the familiar separation techniques of analytical chemistry can be used. These methods include precipitation, ion exchange, liquid–liquid extraction, solid-phase extraction, distillation, and electrodeposition.
Radioanalytical chemistry principles
Sample loss by radiocolloidal behaviour
Samples with very low concentrations are difficult to measure accurately because the radioactive atoms can unexpectedly deposit on surfaces. Sample loss at trace levels may be due to adhesion, by ionic or electrostatic adsorption, to container walls, filter surfaces, metal foils, and glass slides. Sample loss is an ever-present concern, especially at the beginning of the analysis path, where sequential steps may compound the losses.
Known ways to circumvent these losses include adding an inactive carrier or a tracer. Research has also shown that pretreating glassware and plastic surfaces can reduce radionuclide sorption by saturating the adsorption sites.
Carrier or tracer addition
Since only small amounts of radionuclides are typically being analyzed, manipulating such tiny quantities is challenging. This problem is classically addressed by the use of carrier ions: carrier addition involves adding a known mass of a stable ion of the same element to the radionuclide-containing sample solution. The carrier is non-radioactive but has chemical properties identical to those of the radionuclide of interest. The amount of carrier added is conventionally selected for ease of weighing, such that the accuracy of the resulting weight is within 1%. For alpha particles, special techniques must be applied to obtain the required thin sample sources. Carriers were used extensively by Marie Curie and were employed in the first demonstration of nuclear fission.
Isotope dilution is the reverse of carrier addition: a known (small) amount of a radionuclide, the "tracer", is added at the start of the analysis procedure to a sample that contains a known stable element. After the final measurements are recorded, sample loss can be determined quantitatively. This procedure avoids the need for quantitative recovery, greatly simplifying the analytical process.
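As a generic illustration (not a prescribed procedure) of how the recovered carrier or tracer quantifies sample loss: the chemical yield is the fraction of the added material that survives the separation, and it is used to scale the measured activity back to the activity originally present,

$$Y = \frac{m_{\text{recovered}}}{m_{\text{added}}}, \qquad A_{\text{original}} = \frac{A_{\text{measured}}}{Y}.$$

For example, if 10.0 mg of carrier is added and 8.0 mg is recovered, then Y = 0.80, so a measured activity of 4.0 Bq corresponds to 5.0 Bq in the original sample.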
Typical radionuclides of interest
Quality assurance
As with any analytical chemistry technique, quality control is an important factor to maintain: a laboratory must produce trustworthy results. This is accomplished by the laboratory's continual effort to maintain instrument calibration, measurement reproducibility, and the applicability of its analytical methods. Every laboratory must have a quality assurance plan, which describes the quality system and the procedures in place to obtain consistent results. Such results must be authentic, appropriately documented, and technically defensible. Elements of quality assurance include organization, personnel training, laboratory operating procedures, procurement documents, chain-of-custody records, standard certificates, analytical records, standard procedures, QC sample analysis programs and results, instrument testing and maintenance records, results of performance demonstration projects, results of data assessment, audit reports, and record retention policies.
The cost of quality assurance is continually rising, but the benefits far outweigh it: the average quality assurance workload has risen from about 10% to a modern load of 20–30%. This heightened focus on quality assurance ensures that reliable measurements are achieved, and the cost of failure far outweighs the cost of prevention and appraisal. Finally, results must be scientifically defensible, through adherence to stringent regulations, in the event of a lawsuit.
| Physical sciences | Basics_2 | Chemistry |
35319154 | https://en.wikipedia.org/wiki/Occupational%20safety%20and%20health | Occupational safety and health | Occupational safety and health (OSH) or occupational health and safety (OHS) is a multidisciplinary field concerned with the safety, health, and welfare of people at work (i.e., while performing duties required by one's occupation). OSH is related to the fields of occupational medicine and occupational hygiene and aligns with workplace health promotion initiatives. OSH also protects members of the general public who may be affected by the occupational environment.
According to the official estimates of the United Nations, the WHO/ILO Joint Estimate of the Work-related Burden of Disease and Injury, almost 2 million people die each year due to exposure to occupational risk factors. Globally, more than 2.78 million people die annually as a result of workplace-related accidents or diseases, corresponding to one death every fifteen seconds. There are an additional 374 million non-fatal work-related injuries annually. It is estimated that the economic burden of occupational-related injury and death is nearly four per cent of the global gross domestic product each year. The human cost of this adversity is enormous.
In common-law jurisdictions, employers have the common law duty (also called duty of care) to take reasonable care of the safety of their employees. Statute law may, in addition, impose other general duties, introduce specific duties, and create government bodies with powers to regulate occupational safety issues. Details of this vary from jurisdiction to jurisdiction.
Prevention of workplace incidents and occupational diseases is addressed through the implementation of occupational safety and health programs at company level.
Definitions
The International Labour Organization (ILO) and the World Health Organization (WHO) share a common definition of occupational health. It was first adopted by the Joint ILO/WHO Committee on Occupational Health at its first session in 1950:
In 1995, a consensus statement was added:
An alternative definition for occupational health given by the WHO is: "occupational health deals with all aspects of health and safety in the workplace and has a strong focus on primary prevention of hazards."
The expression "occupational health", as originally adopted by the WHO and the ILO, refers to both short- and long-term adverse health effects. In more recent times, the expressions "occupational safety and health" and "occupational health and safety" have come into use (and have also been adopted in works by the ILO), based on the general understanding that occupational health refers to hazards associated to disease and long-term effects, while occupational safety hazards are those associated to work accidents causing injury and sudden severe conditions.
History
Research and regulation of occupational safety and health are a relatively recent phenomenon. As labor movements arose in response to worker concerns in the wake of the industrial revolution, workers' safety and health entered consideration as a labor-related issue.
Beginnings
Written works on occupational diseases began to appear by the end of the 15th century, when demand for gold and silver was rising due to the increase in trade, and iron, copper, and lead were also in demand from the nascent firearms market. Deeper mining became common as a consequence. In 1473, Ulrich Ellenbog, a German physician, wrote a short treatise On the Poisonous Wicked Fumes and Smokes, focused on coal, nitric acid, lead, and mercury fumes encountered by metal workers and goldsmiths. Paracelsus (1493–1541) wrote the first work on the diseases of mine and smelter workers, published posthumously in 1567; in it, he gave accounts of miners' "lung sickness". Georgius Agricola's (1494–1553) De re metallica, a treatise on metallurgy published in 1556, described accidents and diseases prevalent among miners and recommended practices to prevent them. Like Paracelsus, Agricola mentioned the dust that "eats away the lungs, and implants consumption."
The seeds of state intervention to correct social ills were sown during the reign of Elizabeth I by the Poor Laws, which originated in attempts to alleviate hardship arising from widespread poverty. While they were perhaps more to do with a need to contain unrest than morally motivated, they were significant in transferring responsibility for helping the needy from private hands to the state.
In 1713, Bernardino Ramazzini (1633–1714), often described as the father of occupational medicine and a precursor to occupational health, published his De morbis artificum diatriba (Dissertation on Workers' Diseases), which outlined the health hazards of chemicals, dust, metals, repetitive or violent motions, odd postures, and other disease-causative agents encountered by workers in more than fifty occupations. It was the first broad-ranging presentation of occupational diseases.
Percivall Pott (1714–1788), an English surgeon, described cancer in chimney sweeps (chimney sweeps' carcinoma), the first recognition of an occupational cancer in history.
The Industrial Revolution in Britain
The United Kingdom was the first nation to industrialize. Soon shocking evidence emerged of serious physical and moral harm suffered by children and young persons in the cotton textile mills, as a result of exploitation of cheap labor in the factory system. Responding to calls for remedial action from philanthropists and some of the more enlightened employers, in 1802 Sir Robert Peel, himself a mill owner, introduced a bill to parliament with the aim of improving their conditions. This would engender the Health and Morals of Apprentices Act 1802, generally believed to be the first attempt to regulate conditions of work in the United Kingdom. The act applied only to cotton textile mills and required employers to keep premises clean and healthy by twice yearly washings with quicklime, to ensure there were sufficient windows to admit fresh air, and to supply "apprentices" (i.e., pauper and orphan employees) with "sufficient and suitable" clothing and accommodation for sleeping. It was the first of the 19th century Factory Acts.
Charles Thackrah (1795–1833), another pioneer of occupational medicine, wrote a report on The State of Children Employed in Cotton Factories, which was sent to Parliament in 1818. Thackrah recognized issues of inequalities of health in the workplace, with manufacturing in towns causing higher mortality than agriculture.
The Act of 1833 created a dedicated professional Factory Inspectorate. The initial remit of the Inspectorate was to police restrictions on the working hours in the textile industry of children and young persons (introduced to prevent chronic overwork, identified as leading directly to ill-health and deformation, and indirectly to a high accident rate).
In 1840 a Royal Commission published its findings on the state of conditions for the workers of the mining industry that documented the appallingly dangerous environment that they had to work in and the high frequency of accidents. The commission sparked public outrage which resulted in the Mines and Collieries Act of 1842. The act set up an inspectorate for mines and collieries which resulted in many prosecutions and safety improvements, and by 1850, inspectors were able to enter and inspect premises at their discretion.
At the urging of the Factory Inspectorate, a further act in 1844, which applied similar restrictions on working hours to women in the textile industry, introduced a requirement for machinery guarding (but only in the textile industry, and only in areas that might be accessed by women or children). The latter act was the first to take a significant step toward improvement of workers' safety, as the former had focused on health aspects alone.
The first decennial British Registrar-General's mortality report was issued in 1851. Deaths were categorized by social classes, with class I corresponding to professionals and executives and class V representing unskilled workers. The report showed that mortality rates increased with the class number.
Continental Europe
Otto von Bismarck inaugurated the first social insurance legislation in 1883 and the first worker's compensation law in 1884 – the first of their kind in the Western world. Similar acts followed in other countries, partly in response to labor unrest.
United States
The United States was responsible for the first health program focusing on workplace conditions: the Marine Hospital Service, inaugurated in 1798 to provide care for merchant seamen. This was the beginning of what would become the US Public Health Service (USPHS).
The first worker compensation acts in the United States were passed in New York in 1910 and in Washington and Wisconsin in 1911. Later rulings included occupational diseases in the scope of the compensation, which was initially restricted to accidents.
In 1914 the USPHS set up the Office of Industrial Hygiene and Sanitation, the ancestor of the current National Institute for Occupational Safety and Health (NIOSH). In the early 20th century, workplace disasters were still common. For example, in 1911 a fire at the Triangle Shirtwaist Company in New York killed 146 workers, mostly women and immigrants. Most died trying to open exits that had been locked. Radium dial painters' cancers, "phossy jaw", mercury and lead poisonings, silicosis, and other pneumoconioses were extremely common.
The enactment of the Federal Coal Mine Health and Safety Act of 1969 was quickly followed by the 1970 Occupational Safety and Health Act, which established the Occupational Safety and Health Administration (OSHA) and NIOSH in their current form.
Workplace hazards
A wide array of workplace hazards can damage the health and safety of people at work. These include, but are not limited to, "chemicals, biological agents, physical factors, adverse ergonomic conditions, allergens, a complex network of safety risks," as well as a broad range of psychosocial risk factors. Personal protective equipment can help protect against many of these hazards. A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, i.e. an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. This makes overwork the globally leading occupational health risk factor.
Physical hazards affect many people in the workplace. Occupational hearing loss is the most common work-related injury in the United States, with 22 million workers exposed to hazardous occupational noise levels at work and an estimated $242 million spent annually on worker's compensation for hearing loss disability. Falls are also a common cause of occupational injuries and fatalities, especially in construction, extraction, transportation, healthcare, and building cleaning and maintenance. Machines have moving parts, sharp edges, hot surfaces and other hazards with the potential to crush, burn, cut, shear, stab or otherwise strike or wound workers if used unsafely.
Biological hazards (biohazards) include infectious microorganisms such as viruses, bacteria and toxins produced by those organisms such as anthrax. Biohazards affect workers in many industries; influenza, for example, affects a broad population of workers. Outdoor workers, including farmers, landscapers, and construction workers, risk exposure to numerous biohazards, including animal bites and stings, urushiol from poisonous plants, and diseases transmitted through animals such as the West Nile virus and Lyme disease. Health care workers, including veterinary health workers, risk exposure to blood-borne pathogens and various infectious diseases, especially those that are emerging.
Dangerous chemicals can pose a chemical hazard in the workplace. There are many classifications of hazardous chemicals, including neurotoxins, immune agents, dermatologic agents, carcinogens, reproductive toxins, systemic toxins, asthmagens, pneumoconiotic agents, and sensitizers. Authorities such as regulatory agencies set occupational exposure limits to mitigate the risk of chemical hazards. International investigations are ongoing into the health effects of mixtures of chemicals, given that toxins can interact synergistically instead of merely additively. For example, there is some evidence that certain chemicals are harmful at low levels when mixed with one or more other chemicals. Such synergistic effects may be particularly important in causing cancer. Additionally, some substances (such as heavy metals and organohalogens) can accumulate in the body over time, thereby enabling small incremental daily exposures to eventually add up to dangerous levels with little overt warning.
Psychosocial hazards include risks to the mental and emotional well-being of workers, such as feelings of job insecurity, long work hours, and poor work-life balance. Psychological abuse has been found present within the workplace as evidenced by previous research. A study by Gary Namie on workplace emotional abuse found that 31% of women and 21% of men who reported workplace emotional abuse exhibited three key symptoms of post-traumatic stress disorder (hypervigilance, intrusive imagery, and avoidance behaviors). Sexual harassment is a serious hazard that can be found in workplaces.
By industry
Specific occupational safety and health risk factors vary depending on the specific sector and industry. Construction workers might be particularly at risk of falls, for instance, whereas fishermen might be particularly at risk of drowning. Similarly psychosocial risks such as workplace violence are more pronounced for certain occupational groups such as health care employees, police, correctional officers and teachers.
Primary sector
Agriculture
Agricultural workers are often at risk of work-related injuries, lung disease, noise-induced hearing loss, and skin disease, as well as certain cancers related to chemical use or prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery. The most common cause of fatal agricultural injuries in the United States is tractor rollovers, which can be prevented by the use of rollover protection structures that limit the risk of injury should a tractor roll over. Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illnesses or birth defects. As an industry in which family members, including children, commonly work alongside one another, agriculture is a common source of occupational injuries and illnesses among younger workers. Common causes of fatal injuries among young farm workers include drowning and machinery- and motor vehicle-related accidents.
The 2010 NHIS-OHS found elevated prevalence rates of several occupational exposures in the agriculture, forestry, and fishing sector which may negatively impact health. These workers often worked long hours. The prevalence rate of working more than 48 hours a week among workers employed in these industries was 37%, and 24% worked more than 60 hours a week. Of all workers in these industries, 85% frequently worked outdoors compared to 25% of all US workers. Additionally, 53% were frequently exposed to vapors, gas, dust, or fumes, compared to 25% of all US workers.
Mining and oil and gas extraction
The mining industry still has one of the highest rates of fatalities of any industry. There are a range of hazards present in surface and underground mining operations. In surface mining, leading hazards include such issues as geological instability, contact with plant and equipment, rock blasting, thermal environments (heat and cold), respiratory health (black lung), etc. In underground mining, operational hazards include respiratory health, explosions and gas (particularly in coal mine operations), geological instability, electrical equipment, contact with plant and equipment, heat stress, inrush of bodies of water, falls from height, confined spaces, ionising radiation, etc.
According to data from the 2010 NHIS-OHS, workers employed in mining and oil and gas extraction industries had high prevalence rates of exposure to potentially harmful work organization characteristics and hazardous chemicals. Many of these workers worked long hours: 50% worked more than 48 hours a week and 25% worked more than 60 hours a week in 2010. Additionally, 42% worked non-standard shifts (not a regular day shift). These workers also had high prevalence of exposure to physical/chemical hazards. In 2010, 39% had frequent skin contact with chemicals. Among nonsmoking workers, 28% of those in mining and oil and gas extraction industries had frequent exposure to secondhand smoke at work. About two-thirds were frequently exposed to vapors, gas, dust, or fumes at work.
Secondary sector
Construction
Construction is one of the most dangerous occupations in the world, incurring more occupational fatalities than any other sector in both the United States and the European Union. In 2009, the fatal occupational injury rate among construction workers in the United States was nearly three times that for all workers. Falls are one of the most common causes of fatal and non-fatal injuries among construction workers. Proper safety equipment such as harnesses and guardrails, and procedures such as securing ladders and inspecting scaffolding, can curtail the risk of occupational injuries in the construction industry. Because accidents can have disastrous consequences for employees and organizations alike, it is of utmost importance to ensure the health and safety of workers and compliance with HSE construction requirements. Health and safety legislation in the construction industry involves many rules and regulations. For example, the role of the Construction Design Management (CDM) Coordinator as a requirement has been aimed at improving health and safety on-site.
The 2010 National Health Interview Survey Occupational Health Supplement (NHIS-OHS) identified work organization factors and occupational psychosocial and chemical/physical exposures which may increase some health risks. Among all US workers in the construction sector, 44% had non-standard work arrangements (were not regular permanent employees) compared to 19% of all US workers, 15% had temporary employment compared to 7% of all US workers, and 55% experienced job insecurity compared to 32% of all US workers. Prevalence rates for exposure to physical/chemical hazards were especially high for the construction sector. Among nonsmoking workers, 24% of construction workers were exposed to secondhand smoke while only 10% of all US workers were exposed. Other physical/chemical hazards with high prevalence rates in the construction industry were frequently working outdoors (73%) and frequent exposure to vapors, gas, dust, or fumes (51%).
Tertiary sector
The service sector comprises diverse workplaces. Each type of workplace has its own health risks. While some occupations have become mobile, others still require desk work. As the number of service sector jobs has risen in developed countries, many jobs have turned sedentary, presenting an array of health problems that differ from previous health concerns associated with manufacturing and the primary sector. Contemporary health problems include obesity. Some working conditions, such as occupational stress, workplace bullying, and overwork, have negative consequences for physical and mental health.
Tipped wage workers are at a higher risk of negative mental health outcomes like addiction or depression. The higher rates of mental health issues may be attributed to the precarious nature of their employment, characterized by low and unpredictable incomes, inadequate access to benefits, wage exploitation, and minimal control over work schedules and assigned shifts. Close to 70% of tipped wage workers are women. Additionally, "almost 40 percent of people who work for tips are people of color: 18 percent are Latino, 10 percent are African American, and 9 percent are Asian. Immigrants are also overrepresented in the tipped workforce." According to data from the 2010 NHIS-OHS, hazardous physical and chemical exposures in the service sector were lower than national averages. However, harmful organizational practices and psychosocial risks were fairly prevalent in this sector. Among all workers in the service industry, 30% experienced job insecurity in 2010, 27% worked non-standard shifts (not a regular day shift), 21% had non-standard work arrangements (were not regular permanent employees).
In addition to these organizational risks, some industries pose significant physical dangers due to the manual labor involved. For instance, on a per employee basis, the US Postal Service, UPS and FedEx are the 4th, 5th and 7th most dangerous companies to work for in the United States, respectively.
Healthcare and social assistance
In general, healthcare workers are exposed to many hazards that can adversely affect their health and well-being. Long hours, changing shifts, physically demanding tasks, violence, and exposure to infectious diseases and harmful chemicals are examples of hazards that put these workers at risk of illness and injury. Musculoskeletal injury (MSI) is the most common health hazard for healthcare workers and in workplaces overall. Injuries can be prevented by using proper body mechanics.
According to the Bureau of Labor Statistics, US hospitals recorded 253,700 work-related injuries and illnesses in 2011, which is 6.8 work-related injuries and illnesses for every 100 full-time employees. The injury and illness rate in hospitals is higher than the rates in construction and manufacturing – two industries that are traditionally thought to be relatively hazardous.
Workplace fatality and injury statistics
Worldwide
An estimated 2.90 million work-related deaths occurred in 2019, up from 2.78 million in 2015. About one-third of these deaths (31%) were due to circulatory diseases, while cancer contributed 29%, respiratory diseases 17%, and occupational injuries 11% (about 319,000 fatalities). Work-related communicable diseases contributed 6%, neuropsychiatric conditions 3%, and work-related digestive and genitourinary diseases 1% each. The contribution of cancers and circulatory diseases to total work-related deaths increased from 2015, while deaths due to occupational injuries decreased. Although the rates of fatal and non-fatal work-related injuries were on a decreasing trend, the total numbers of deaths and non-fatal outcomes were rising. Cancers represented the most significant cause of mortality in high-income countries. The number of non-fatal occupational injuries in 2019 was estimated at 402 million.
The mortality rate is unevenly distributed, with the male rate (108.3 per 100,000 employed men) significantly higher than the female rate (48.4 per 100,000). Occupational fatalities account for 6.7% of all deaths globally.
European Union
Certain EU member states admit to lacking quality control in occupational safety services, to situations in which risk analysis takes place without any on-site workplace visit, and to insufficient implementation of certain EU OSH directives. Disparities between member states mean that occupational hazards affect national economies to differing degrees. In the early 2000s, the total societal costs of work-related health problems and accidents varied from 2.6% to 3.8% of national GDP across the member states.
In 2021, in the EU-27 as a whole, 93% of deaths due to injury were of males.
Russia
One of the decisions taken by the communist regime under Stalin was that the number of reported accidents and occupational diseases was to be reduced to zero, and this tendency for reported figures to decline persisted in the Russian Federation into the early 21st century. However, as in previous years, data reporting and publication were incomplete and manipulated, so the actual numbers of work-related diseases and accidents are unknown. The ILO reports that, according to information provided by the Russian government, there are 190,000 work-related fatalities each year, of which 15,000 are due to occupational accidents.
After the demise of the USSR, enterprises came to be owned by oligarchs who were not interested in upholding safe and healthy workplace conditions. Expenditure on equipment modernization was minimal and the share of harmful workplaces increased. The government did not interfere, and sometimes even assisted employers. At first, the increase in occupational diseases and accidents was slow, because in the 1990s it was masked by mass deindustrialization. In the 2000s, however, deindustrialization slowed and occupational diseases and injuries began to rise in earnest. In the 2010s the Ministry of Labor therefore adopted federal law no. 426-FZ. This piece of legislation has been described as ineffective and based on the superficial assumption that issuing personal protective equipment to employees amounts to a real improvement in working conditions. Meanwhile, the Ministry of Health made significant changes to the methods of risk assessment in the workplace. However, specialists from the Izmerov Research Institute of Occupational Health found that the apparent post-2014 decrease in the share of employees engaged in hazardous working conditions results from the change in definitions following the Ministry of Health's decision and does not reflect actual improvements. This was most clearly shown in the results for the aluminum industry.
Further problems in the accounting of workplace fatalities arise from the fact that multiple Russian federal entities collect and publish records, a practice that should be avoided. In 2008 alone, 2,074 accidents at work may not have been reported in official government sources.
United Kingdom
In the UK there were 135 fatal injuries at work in financial year 2022–2023, compared with 651 in 1974 (the year when the Health and Safety at Work Act was promulgated). The fatal injury rate declined from 2.1 fatalities per 100,000 workers in 1981 to 0.41 in financial year 2022–2023. Over recent decades reductions in both fatal and non-fatal workplace injuries have been very significant. However, illnesses statistics have not uniformly improved: while musculoskeletal disorders have diminished, the rate of self-reported work-related stress, depression or anxiety has increased, and the rate of mesothelioma deaths has remained broadly flat (due to past asbestos exposures).
United States
The Occupational Safety and Health Statistics (OSHS) program in the Bureau of Labor Statistics of the United States Department of Labor compiles information about workplace fatalities and non-fatal injuries in the United States. The OSHS program produces three annual reports:
Counts and rates of nonfatal occupational injuries and illnesses by detailed industry and case type (SOII summary data)
Case circumstances and worker demographic data for nonfatal occupational injuries and illnesses resulting in days away from work (SOII case and demographic data)
Counts and rates of fatal occupational injuries (CFOI data)
The Bureau also uses tools like AgInjuryNews.org to identify and compile additional sources of fatality reports for their datasets.
Between 1913 and 2013, workplace fatalities dropped by approximately 80%. In 1970, an estimated 14,000 workers were killed on the job. By 2021, in spite of the workforce having since more than doubled, workplace deaths were down to about 5,190. According to the Census of Fatal Occupational Injuries, 5,486 people died on the job in 2022, up from the 2021 total of 5,190. The fatal injury rate was 3.7 per 100,000 full-time equivalent workers. The decrease in the mortality rate is only partly (about 10–15%) explained by the deindustrialization of the US over the last 40 years.
About 3.5 million nonfatal workplace injuries and illnesses were reported by private industry employers in 2022, occurring at a rate of 3.0 cases per 100 full-time workers.
Management systems
Companies may adopt a safety and health management system (SMS), either voluntarily or because required by applicable regulations, to deal in a structured and systematic way with safety and health risks in their workplace. An SMS provides a systematic way to assess and improve prevention of workplace accidents and incidents based on structured management of workplace risks and hazards. It must be adaptable to changes in the organization's business and legislative requirements. It is usually based on the Deming cycle, or plan-do-check-act (PDCA) principle. An effective SMS should:
Define how the organization is set up to manage risk
Identify workplace hazards and implement suitable controls
Implement effective communication across all levels of the organization
Implement a process to identify and correct non-conformity and non-compliance issues
Implement a continual improvement process
Management standards across a range of business functions, such as environment, quality and safety, are now being designed so that these traditionally disparate elements can be integrated and managed within a single business management system rather than as separate, stand-alone functions. Some organizations therefore dovetail other management system functions, such as process safety, environmental resource management or quality management, with safety management in order to meet regulatory requirements, industry sector requirements, and their own internal and discretionary standards.
Standards
International
The ILO published ILO-OSH 2001 on Guidelines on Occupational Safety and Health Management Systems to assist organizations with introducing OSH management systems. These guidelines encouraged continual improvement in employee health and safety, achieved via a constant process of policy; organization; planning and implementation; evaluation; and action for improvement, all supported by constant auditing to determine the success of OSH actions.
From 1999 to 2018, OHSAS 18001 was adopted and widely used internationally. It was developed by a selection of national standards bodies, academic bodies, accreditation bodies, certification bodies and occupational health and safety institutions to address a gap where no third-party certifiable international standard existed. It was designed for integration with ISO 9001 and ISO 14001.
OHSAS 18001 was replaced by ISO 45001, which was published in March 2018 and implemented in March 2021.
National
National management system standards for occupational health and safety include AS/NZS 4801 for Australia and New Zealand (now superseded by ISO 45001), CSA Z1000:14 for Canada (which is due to be discontinued in favor of CSA Z45001:19, the Canadian adoption of ISO 45001) and ANSI/ASSP Z10 for the United States. In Germany, the Bavarian state government, in collaboration with trade associations and private companies, issued its OHRIS standard for occupational health and safety management systems. A new revision was issued in 2018. The Taiwan Occupational Safety and Health Management System (TOSHMS) was issued in 1997 under the auspices of Taiwan's Occupational Safety and Health Administration.
Identifying OSH hazards and assessing risk
Hazards, risks, outcomes
The terminology used in OSH varies between countries, but generally speaking:
A hazard is something that can cause harm if not controlled.
The outcome is the harm that results from an uncontrolled hazard.
A risk is a combination of the probability that a particular outcome may occur and the severity of the harm involved.
"Hazard", "risk", and "outcome" are used in other fields to describe e.g., environmental damage or damage to equipment. However, in the context of OSH, "harm" generally describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. For example, repetitively carrying out manual handling of heavy objects is a hazard. The outcome could be a musculoskeletal disorder (MSD) or an acute back or joint injury. The risk can be expressed numerically (e.g., a 0.5 or 50/50 chance of the outcome occurring during a year), in relative terms (e.g., "high/medium/low"), or with a multi-dimensional classification scheme (e.g., situation-specific risks).
Hazard identification
Hazard identification is an important step in the overall risk assessment and risk management process. It is where individual work hazards are identified, assessed and controlled or eliminated as close to source (location of the hazard) as reasonably practicable. As technology, resources, social expectation or regulatory requirements change, hazard analysis focuses controls more closely toward the source of the hazard. Thus, hazard control is a dynamic program of prevention. Hazard-based programs also have the advantage of not assigning or implying there are "acceptable risks" in the workplace. A hazard-based program may not be able to eliminate all risks, but neither does it accept "satisfactory" – but still risky – outcomes. And as those who calculate and manage the risk are usually managers, while those exposed to the risks are a different group, a hazard-based approach can bypass conflict inherent in a risk-based approach.
The information gathered from sources should apply to the specific type of work from which the hazards can arise. Examples of such sources include interviews with people who have worked in the presence of the hazard, histories and analyses of past incidents, and official reports on the work and the hazards encountered. Of these, personnel interviews may be the most critical for identifying undocumented practices, events, releases, hazards and other relevant information. Once the information is gathered, it is recommended that it be archived digitally (to allow quick searching) and that a physical copy be kept so the information remains accessible. One innovative way to display complex historical hazard information is a historical hazards identification map, which distills it into an easy-to-use graphical format.
Risk assessment
Modern occupational safety and health legislation usually demands that a risk assessment be carried out prior to making an intervention. This assessment should:
Identify the hazards
Identify all affected by the hazard and how
Evaluate the risk
Identify and prioritize appropriate control measures.
The calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. This can be expressed mathematically as a quantitative assessment (by assigning low, medium and high likelihood and severity with integers and multiplying them to obtain a risk factor), or qualitatively as a description of the circumstances by which the harm could arise.
The assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. The assessment should include practical recommendations to control the risk. Once recommended controls are implemented, the risk should be re-calculated to determine if it has been lowered to an acceptable level. Generally speaking, newly introduced controls should lower risk by one level, i.e., from high to medium or from medium to low.
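A minimal sketch of the semi-quantitative approach described above, assuming illustrative 1–3 scales and band thresholds (none of which are prescribed by any particular regulation): likelihood and severity are scored as integers, multiplied to give a risk factor, mapped onto low/medium/high bands, and re-scored after controls are introduced.

```python
# Minimal sketch of a semi-quantitative risk assessment: likelihood and
# severity are scored on illustrative 1-3 scales, multiplied to obtain a
# risk factor, and the factor is mapped onto low / medium / high bands.
# The scales, band thresholds, and example scores are assumptions.

LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "fatal": 3}

def risk_factor(likelihood, severity):
    """Risk factor = likelihood score x severity score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def risk_band(factor):
    """Map a numeric risk factor onto a qualitative band."""
    if factor >= 6:
        return "high"
    if factor >= 3:
        return "medium"
    return "low"

if __name__ == "__main__":
    # Hypothetical example: manual handling of heavy loads, re-scored after
    # lifting aids and training are introduced as controls.
    before = risk_factor("likely", "serious")    # 3 * 2 = 6 -> high
    after = risk_factor("possible", "serious")   # 2 * 2 = 4 -> medium
    print(f"Before controls: {before} ({risk_band(before)})")
    print(f"After controls:  {after} ({risk_band(after)})")
```

In this hypothetical case the controls move the activity from the high band to the medium band, consistent with the expectation that newly introduced controls lower the risk by one level.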
National legislation and public organizations
Occupational safety and health practice vary among nations with different approaches to legislation, regulation, enforcement, and incentives for compliance. In the EU, for example, some member states promote OSH by providing public monies as subsidies, grants or financing, while others have created tax system incentives for OSH investments. A third group of EU member states has experimented with using workplace accident insurance premium discounts for companies or organizations with strong OSH records.
Australia
In Australia, four of the six states and both territories have enacted and administer harmonized work health and safety legislation in accordance with the Intergovernmental Agreement for Regulatory and Operational Reform in Occupational Health and Safety. Each of these jurisdictions has enacted work health and safety legislation and regulations based on the Commonwealth Work Health and Safety Act 2011 and common codes of practice developed by Safe Work Australia. Some jurisdictions have also included mine safety under the model approach. However, most have retained separate legislation for the time being. In August 2019, Western Australia committed to join nearly every other state and territory in implementing the harmonized Model WHS Act, Regulations and other subsidiary legislation. Victoria has retained its own regime, although the Model WHS laws themselves drew heavily on the Victorian approach.
Canada
In Canada, workers are covered by provincial or federal labor codes depending on the sector in which they work. Workers covered by federal legislation (including those in mining, transportation, and federal employment) are covered by the Canada Labour Code; all other workers are covered by the health and safety legislation of the province in which they work. The Canadian Centre for Occupational Health and Safety (CCOHS), an agency of the Government of Canada, was created in 1978 by an act of parliament. The act was based on the belief that all Canadians had "a fundamental right to a healthy and safe working environment." CCOHS is mandated to promote safe and healthy workplaces and help prevent work-related injuries and illnesses.
China
In China, the Ministry of Health is responsible for occupational disease prevention and the State Administration of Work Safety for workplace safety issues. The Work Safety Law (安全生产法) was issued on 1 November 2002. The Occupational Disease Control Act came into force on 1 May 2002. In 2018, the National Health Commission (NHC) was formally established to formulate national health policies. The NHC formulated the "National Occupational Disease Prevention and Control Plan (2021–2025)" in the context of the activities leading to the "Healthy China 2030" initiative.
European Union
The European Agency for Safety and Health at Work was founded in 1994. In the European Union, member states have enforcing authorities to ensure that the basic legal requirements relating to occupational health and safety are met. In many EU countries, there is strong cooperation between employer and worker organizations (e.g., unions) to ensure good OSH performance, as it is recognized this has benefits for both the worker (through maintenance of health) and the enterprise (through improved productivity and quality).
Member states have all transposed into their national legislation a series of directives that establish minimum standards on occupational health and safety. These directives (of which there are about 20 on a variety of topics) follow a similar structure requiring the employer to assess workplace risks and put in place preventive measures based on a hierarchy of hazard control. This hierarchy starts with elimination of the hazard and ends with personal protective equipment.
Denmark
In Denmark, occupational safety and health is regulated by the Danish Act on Working Environment and Cooperation at the Workplace. The Danish Working Environment Authority (Arbejdstilsynet) carries out inspections of companies, draws up more detailed rules on health and safety at work and provides information on health and safety at work. The result of each inspection is made public on the web pages of the Danish Working Environment Authority so that the general public, current and prospective employees, customers and other stakeholders can inform themselves about whether a given organization has passed the inspection.
Netherlands
In the Netherlands, the rules for safety and health at work are laid down in the Working Conditions Act (Arbeidsomstandighedenwet). Apart from the laws directly addressing safety and health in working environments, the private sector has added health and safety rules in Working Conditions Policies (Arbeidsomstandighedenbeleid), which are specified per industry. The Ministry of Social Affairs and Employment (SZW) monitors adherence to the rules through its inspection service. This inspection service investigates industrial accidents and can suspend work and impose fines when it deems the Working Conditions Act has been violated. Companies can be certified with a VCA certificate for safety, health and environment performance. All employees also have to obtain a VCA certificate, with which they can prove that they know how to work according to the current and applicable safety and environmental regulations.
Ireland
The main health and safety regulation in Ireland is the Safety, Health and Welfare at Work Act 2005, which replaced earlier legislation from 1989. The Health and Safety Authority, based in Dublin, is responsible for enforcing health and safety at work legislation.
Spain
In Spain, occupational safety and health is regulated by the Spanish Act on Prevention of Labor Risks. The Ministry of Labor is the authority responsible for issues relating to labor environment. The National Institute for Safety and Health at Work (Instituto Nacional de Seguridad y Salud en el Trabajo, INSST) is the government's scientific and technical organization specialized in occupational safety and health.
Sweden
In Sweden, occupational safety and health is regulated by the Work Environment Act. The Swedish Work Environment Authority (Arbetsmiljöverket) is the government agency responsible for issues relating to the working environment. The agency works to disseminate information and furnish advice on OSH, has a mandate to carry out inspections, and a right to issue stipulations and injunctions to any non-compliant employer.
India
In India, the Ministry of Labour and Employment formulates national policies on occupational safety and health in factories and docks with advice and assistance from its Directorate General Factory Advice Service and Labour Institutes (DGFASLI), and enforces its policies through inspectorates of factories and inspectorates of dock safety. The DGFASLI provides technical support in formulating rules, conducting occupational safety surveys and administering occupational safety training programs.
Indonesia
In Indonesia, the Ministry of Manpower (Kementerian Ketenagakerjaan, or Kemnaker) is responsible for ensuring the safety, health and welfare of workers. Important OHS acts include the Occupational Safety Act 1970 and the Occupational Health Act 1992. Sanctions, however, remain low (a maximum fine of 15 million rupiah and/or up to one year in prison) and violations are still very frequent.
Japan
The Japanese Ministry of Health, Labor and Welfare (MHLW) is the governmental agency overseeing occupational safety and health in Japan. The MHLW is responsible for enforcing the Industrial Safety and Health Act of 1972 (the key piece of OSH legislation in Japan), setting regulations and guidelines, supervising labor inspectors who monitor workplaces for compliance with safety and health standards, investigating accidents, and issuing orders to improve safety conditions. The Labor Standards Bureau is an arm of MHLW tasked with supervising and guiding businesses, inspecting manufacturing facilities for safety and compliance, investigating accidents, collecting statistics, enforcing regulations and administering fines for safety violations, and paying accident compensation for injured workers.
The Japan Industrial Safety and Health Association (JISHA) is a non-profit organization established under the Industrial Safety and Health Act of 1972. It works closely with MHLW, the regulatory body, to promote workplace safety and health. The responsibilities of JISHA include providing education and training on occupational safety and health, conducting research and surveys on workplace safety and health issues, offering technical guidance and consultations to businesses, disseminating information and raising awareness about occupational safety and health, and collaborating with international organizations to share best practices and improve global workplace safety standards.
The National Institute of Occupational Safety and Health, Japan (JNIOSH) conducts research to support governmental policies in occupational safety and health. The organization categorizes its research into project studies, cooperative research, fundamental research, and government-requested research. Each category focuses on specific themes, from preventing accidents and ensuring workers' health to addressing changes in employment structure. The organization sets clear goals, develops road maps, and collaborates with the Ministry of Health, Labor and Welfare to discuss progress and policy contributions.
Malaysia
In Malaysia, the Department of Occupational Safety and Health (DOSH) under the Ministry of Human Resources is responsible for ensuring that the safety, health and welfare of workers in both the public and private sectors are upheld. DOSH is responsible for enforcing the Factories and Machinery Act 1967 and the Occupational Safety and Health Act 1994. Malaysia has a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach originally adopted in Scandinavia.
Saudi Arabia
In Saudi Arabia, the Ministry of Human Resources and Social Development administrates workers' rights and the labor market as a whole, consistent with human rights rules upheld by the Human Rights Commission of the kingdom.
Singapore
In Singapore, the Ministry of Manpower (MOM) is the government agency in charge of OHS policies and enforcement. The key piece of legislation regulating aspects of OHS is the Workplace Safety and Health Act. The MOM promotes and manages campaigns against unsafe work practices, such as when working at height, operating cranes and in traffic management. Examples include Operation Cormorant and the Falls Prevention Campaign.
South Africa
In South Africa the Department of Employment and Labour is responsible for occupational health and safety inspection and enforcement in the commercial and industrial sectors, with the exclusion of mining, where the Department of Mineral Resources is responsible. The main statutory legislation on health and safety in the jurisdiction of the Department of Employment and Labour is the OHS Act or OHSA (Act No. 85 of 1993: Occupational Health and Safety Act, as amended by the Occupational Health and Safety Amendment Act, No. 181 of 1993). Regulations implementing the OHS Act include:
General Safety Regulations, 1986
Environmental Regulations for Workplaces, 1987
Driven Machinery Regulations, 1988
General Machinery Regulations, 1988
Noise Induced Hearing Loss Regulations, 2003
Pressure Equipment Regulations, 2004
General Administrative Regulations, 2003
Diving Regulations, 2009
Construction Regulations, 2014
Syria
In Syria, health and safety is the responsibility of the Ministry of Social Affairs and Labor.
Taiwan
In Taiwan, the Occupational Safety and Health Administration of the Ministry of Labor is in charge of occupational safety and health. The matter is governed under the Occupational Safety and Health Act.
United Arab Emirates
In the United Arab Emirates, national OSH legislation is based on the Federal Law on Labor (1980). Order No. 32 of 1982 on Protection from Hazards and Ministerial Decision No. 37/2 of 1982 are also of importance. The competent authority for safety and health at work at the federal level is the Ministry of Human Resources and Emiratisation (MoHRE).
United Kingdom
Health and safety legislation in the UK is drawn up and enforced by the Health and Safety Executive and local authorities under the Health and Safety at Work etc. Act 1974 (HASAWA or HSWA). HASAWA introduced (section 2) a general duty on an employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees, with the intention of giving a legal framework supporting codes of practice not in themselves having legal force but establishing a strong presumption as to what was reasonably practicable (deviations from them could be justified by appropriate risk assessment). The previous reliance on detailed prescriptive rule-setting was seen as having failed to respond rapidly enough to technological change, leaving new technologies potentially unregulated or inappropriately regulated. HSE has continued to make some regulations giving absolute duties (where something must be done with no "reasonable practicability" test) but in the UK the regulatory trend is away from prescriptive rules, and toward goal setting and risk assessment. Recent major changes to the laws governing asbestos and fire safety management embrace the concept of risk assessment. The other key aspect of the UK legislation is a statutory mechanism for worker involvement through elected health and safety representatives and health and safety committees. This followed a similar approach in Scandinavia, and that approach has since been adopted in countries such as Australia, Canada, New Zealand and Malaysia.
The Health and Safety Executive service dealing with occupational medicine has been the Employment Medical Advisory Service. In 2014 a new occupational health organization, the Health and Work Service, was created to provide advice and assistance to employers in order to help employees on long-term sick leave return to work. The service, funded by the government, offers medical assessments and treatment plans, on a voluntary basis, to people on long-term absence from their employer; in return, the government no longer foots the bill for statutory sick pay provided by the employer to the individual.
United States
In the United States, President Richard Nixon signed the Occupational Safety and Health Act into law on 29 December 1970. The act created the three agencies which administer OSH: the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Review Commission (OSHRC). The act authorized OSHA to regulate private employers in the 50 states, the District of Columbia, and territories. It includes a general duty clause (29 U.S.C. §654, 5(a)) requiring an employer to comply with the Act and regulations derived from it, and to provide employees with "employment and a place of employment which are free from recognized hazards that are causing or are likely to cause [them] death or serious physical harm."
OSHA was established in 1971 under the Department of Labor. It has headquarters in Washington, DC, and ten regional offices, further broken down into districts, each organized into three sections: compliance, training, and assistance. Its stated mission is "to ensure safe and healthful working conditions for workers by setting and enforcing standards and by providing training, outreach, education and assistance." The original plan was for OSHA to oversee 50 state plans with OSHA funding 50% of each plan, but this did not work out that way: there are 26 approved state plans (with four covering only public employees) and OSHA manages the plan in the states not participating.
OSHA develops safety standards in the Code of Federal Regulations and enforces those standards through compliance inspections conducted by Compliance Officers; enforcement resources are focused on high-hazard industries. Worksites may apply to enter OSHA's Voluntary Protection Program (VPP). A successful application leads to an on-site inspection; if this is passed, the site gains VPP status, and OSHA no longer inspects it annually nor (normally) visits it, unless there is a fatal accident or an employee complaint, until VPP revalidation (after three to five years). VPP sites generally have injury and illness rates less than half the average for their industry.
OSHA has a number of specialists in local offices to provide information and training to employers and employees at little or no cost. Similarly, OSHA produces a range of publications and funds consultation services available for small businesses.
OSHA has strategic partnership and alliance programs to develop guidelines, assist in compliance, share resources, and educate workers in OHS. OSHA manages Susan B. Harwood grants to non-profit organizations to train workers and employers to recognize, avoid, and prevent safety and health hazards in the workplace. Grants focus on small business, hard-to-reach workers and high-hazard industries.
The National Institute for Occupational Safety and Health (NIOSH), also created under the Occupational Safety and Health Act, is the federal agency responsible for conducting research and making recommendations for the prevention of work-related injury and illness. NIOSH is part of the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services.
Professional roles and responsibilities
Those in the field of occupational safety and health come from a wide range of disciplines and professions including medicine, occupational medicine, epidemiology, physiotherapy and rehabilitation, psychology, human factors and ergonomics, and many others. Professionals advise on a broad range of occupational safety and health matters. These include how to avoid particular pre-existing conditions causing a problem in the occupation, correct posture, frequency of rest breaks, preventive actions that can be undertaken, and so forth. The quality of occupational safety is characterized by (1) indicators reflecting the level of industrial injuries, (2) the average number of days of incapacity for work per employee, (3) employees' satisfaction with their work conditions and (4) employees' motivation to work safely.
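As an illustration of indicators (1) and (2) above, the sketch below computes two commonly used summary figures: an OSHA-style recordable incident rate (normalized to 200,000 hours worked, roughly 100 full-time workers for a year) and average days of incapacity per employee. The figures, site size, and function names are hypothetical and not drawn from the text.

def incident_rate(recordable_cases: int, hours_worked: float) -> float:
    """Recordable incidents per 200,000 hours worked (about 100 full-time workers for a year)."""
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return recordable_cases * 200_000 / hours_worked

def lost_day_rate(days_lost: int, employees: int) -> float:
    """Indicator (2): average number of days of incapacity for work per employee."""
    if employees <= 0:
        raise ValueError("employees must be positive")
    return days_lost / employees

# Hypothetical figures for a 250-person site working 2,000 hours each per year.
print(incident_rate(recordable_cases=7, hours_worked=250 * 2_000))  # 2.8
print(lost_day_rate(days_lost=180, employees=250))                  # 0.72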
The main tasks undertaken by the OSH practitioner include:
Inspecting, testing and evaluating workplace environments, programs, equipment, and practices to ensure that they follow government safety regulations.
Designing and implementing workplace programs and procedures that control or prevent chemical, physical, or other risks to workers.
Educating employers and workers about maintaining workplace safety.
Demonstrating use of safety equipment and ensuring proper use by workers.
Investigating incidents to determine the cause and possible prevention.
Preparing written reports of their findings.
OSH specialists examine worksites for environmental or physical factors that could harm employee health, safety, comfort or performance. They then recommend ways to mitigate these risk factors. For example, they may notice potentially hazardous conditions inside a chemical plant and suggest changes to lighting, equipment, materials, or ventilation. OSH technicians assist specialists by collecting data on work environments and implementing the worksite improvements that specialists plan. Technicians also may check to make sure that workers are using required protective gear, such as masks and hardhats. OSH specialists and technicians may develop and conduct employee training programs. These programs cover a range of topics, such as how to use safety equipment correctly and how to respond in an emergency. In the event of a workplace safety incident, specialists and technicians investigate its cause. They then analyze data from the incident, such as the number of people affected, and look for trends in occurrence. This evaluation helps them to recommend improvements to prevent future incidents.
Given the high demand in society for health and safety provisions at work based on reliable information, OSH professionals should ground their work in evidence-based practice; a related, newer term is "evidence-informed decision making". Evidence-based practice can be defined as the use of evidence from the literature, and other evidence-based sources, for advice and decisions that favor the health, safety, well-being, and work ability of workers. Evidence-based information must therefore be integrated with professional expertise and the workers' values. Contextual factors related to legislation, culture, and financial and technical possibilities must also be considered, and ethical considerations should be heeded.
The roles and responsibilities of OSH professionals vary regionally but may include evaluating working environments, developing, endorsing and encouraging measures that might prevent injuries and illnesses, providing OSH information to employers, employees, and the public, providing medical examinations, and assessing the success of worker health programs.
The Netherlands
In the Netherlands, the required tasks for health and safety staff are only summarily defined and include:
Providing voluntary medical examinations.
Providing a consulting room on the work environment to the workers.
Providing health assessments (if needed for the job concerned).
Dutch law influences the job of the safety professional mainly through the requirement on employers to use the services of a certified working-conditions service for advice. A certified service must employ sufficient numbers of four types of certified experts to cover the risks in the organizations which use the service:
A safety professional
An occupational hygienist
An occupational physician
A work and organization specialist.
In 2004, 14% of health and safety practitioners in the Netherlands had an MSc and 63% had a BSc. 23% had training as an OSH technician.
Norway
In Norway, the main required tasks of an occupational health and safety practitioner include:
Systematic evaluations of the working environment.
Endorsing preventive measures which eliminate causes of illnesses in the workplace.
Providing information on the subject of employees' health.
Providing information on occupational hygiene, ergonomics, and environmental and safety risks in the workplace.
In 2004, 37% of health and safety practitioners in Norway had an MSc and 44% had a BSc. 19% had training as an OSH technician.
Education and training
Formal education
There are multiple levels of training applicable to the field of occupational safety and health. Programs range from individual non-credit certificates and awareness courses focusing on specific areas of concern, to full doctoral programs. The University of Southern California was one of the first schools in the US to offer a PhD program focusing on the field. Further, multiple master's degree programs exist, such as those of Indiana State University, which offers MSc and MA programs. Other masters-level qualifications include the MSc and Master of Research (MRes) degrees offered by the University of Hull in collaboration with the National Examination Board in Occupational Safety and Health (NEBOSH). Graduate programs are designed to train educators, as well as high-level practitioners.
Many OSH generalists focus on undergraduate studies; programs such as the University of North Carolina's online BSc in environmental health and safety fill a large majority of hygienist needs. However, smaller companies often do not have full-time safety specialists on staff, so they assign the responsibility to a current employee. Individuals in such positions, or those seeking to enhance their marketability in the job-search and promotion arena, may seek out a credit certificate program. For example, the University of Connecticut's online OSH certificate provides students with familiarity with overarching concepts through a 15-credit (5-course) program. Programs such as these are often adequate tools in building a strong educational platform for new safety managers with a minimal outlay of time and money. Further, most hygienists seek certification by organizations that train in specific areas of concentration, focusing on isolated workplace hazards. The American Society of Safety Professionals (ASSP), Board for Global EHS Credentialing (BGC), and American Industrial Hygiene Association (AIHA) offer individual certificates on many different subjects, from forklift operation to waste disposal, and are the chief facilitators of continuing education in the OSH sector.
In the US, the training of safety professionals is supported by NIOSH through their NIOSH Education and Research Centers.
In the UK, both NEBOSH and the Institution of Occupational Safety and Health (IOSH) develop health and safety qualifications and courses which cater to a mixture of industries and levels of study. Although both organizations are based in the UK, their qualifications are recognized and studied internationally as they are delivered through their own global networks of approved providers. The Health and Safety Executive has also developed health and safety qualifications in collaboration with NEBOSH.
In Australia, training in OSH is available at the vocational education and training level, and at university undergraduate and postgraduate level. Such university courses may be accredited by an accreditation board of the Safety Institute of Australia. The institute has produced a Body of Knowledge which it considers to be required by a generalist safety and health professional, and offers a professional qualification. The Australian Institute of Health and Safety has instituted the national Eric Wigglesworth OHS Education Medal to recognize achievement in OSH doctorate education.
Field training
One form of training delivered in the workplace is known as toolbox talk. According to the UK's Health and Safety Executive, a toolbox talk is a short presentation to the workforce on a single aspect of health and safety. Such talks are often used, especially in the construction industry, by site supervisors, frontline managers and owners of small construction firms to prepare and deliver advice on matters of health, safety and the environment and to obtain feedback from the workforce.
Use of virtual reality
Virtual reality is a novel tool for delivering safety training in many fields. Some applications have been developed and tested, especially for fire and construction safety training. Preliminary findings suggest that virtual reality training is more effective than traditional training for knowledge retention.
Contemporary developments
On an international scale, the World Health Organization (WHO) and the International Labour Organization (ILO) have begun focusing on labor environments in developing nations with projects such as Healthy Cities. Many of these developing countries are stuck in a situation in which their relative lack of resources to invest in OSH leads to increased costs due to work-related illnesses and accidents. The ILO estimates that work-related illness and accidents cost up to 10% of GDP in Latin America, compared with just 2.6% to 3.8% in the EU. Asbestos, a notorious hazard, continues to be used in some developing countries, so asbestos-related disease is expected to remain a significant problem well into the future.
Artificial intelligence
There are several broad aspects of artificial intelligence (AI) that may give rise to specific hazards.
Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization. For example, AI is expected to lead to changes in the skills required of workers, requiring retraining of existing workers, flexibility, and openness to change. Increased monitoring may lead to micromanagement or a perception of surveillance, and thus to workplace stress. There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours. Additionally, algorithms trained on past decisions may exhibit algorithmic bias, mimicking undesirable human biases such as past discriminatory hiring and firing practices. Some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.
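A minimal sketch of the bias mechanism described above, using entirely hypothetical hiring data: a naive scoring rule fitted to historical decisions rates two equally qualified applicants differently, because the training data encode an earlier discriminatory pattern. The data, the rule, and the function name are illustrative assumptions, not a description of any real system.

from collections import defaultdict

# Past decisions as (group, qualified, hired). Both groups are equally qualified,
# but group "B" was historically hired less often.
history = ([("A", True, True)] * 80 + [("A", True, False)] * 20
           + [("B", True, True)] * 40 + [("B", True, False)] * 60)

# "Training": estimate the historical hire rate per group among qualified applicants.
counts, hires = defaultdict(int), defaultdict(int)
for group, qualified, hired in history:
    if qualified:
        counts[group] += 1
        hires[group] += hired

def predict_hire_probability(group: str) -> float:
    # A naive learned rule: score new applicants by the historical hire rate of their group.
    return hires[group] / counts[group]

# Two new, identically qualified applicants receive different scores because the
# rule has learned the past disparity rather than anything about the applicants.
print(predict_hire_probability("A"))  # 0.8
print(predict_hire_probability("B"))  # 0.4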
Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which makes it impossible to apply the hazard control widely used for traditional industrial robots: isolating the robot behind fences or other barriers. Automated guided vehicles are a type of cobot in common use, often as forklifts or pallet jacks in warehouses or factories.
Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase. AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions, as well as information privacy measures. Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues. Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and focus on economic data such as wages and employment rates rather than the skill content of jobs.
Coronavirus
The National Institute for Occupational Safety and Health (NIOSH) National Occupational Research Agenda Manufacturing Council established an externally led COVID-19 workgroup to provide exposure control information specific to working in manufacturing environments. The workgroup identified the dissemination of information most relevant to manufacturing workplaces as a priority, including providing content on Wikipedia, such as evidence-based practices for infection control plans and communication tools.
Nanotechnology
Nanotechnology is an example of a new, relatively unstudied technology. A Swiss survey of 138 companies using or producing nanoparticulate matter in 2006 resulted in forty completed questionnaires. Sixty-five per cent of respondent companies stated they did not have a formal risk assessment process for dealing with nanoparticulate matter. Nanotechnology already presents new issues for OSH professionals that will only become more difficult as nanostructures become more complex. The size of the particles renders most containment and personal protective equipment ineffective. The toxicology values for macro-sized industrial substances are rendered inaccurate due to the unique nature of nanoparticulate matter. As nanoparticulate matter decreases in size, its relative surface area increases dramatically, increasing any catalytic effect or chemical reactivity substantially versus the known value for the macro substance. This presents a new set of challenges, requiring a rethink of contemporary measures for safeguarding the health and welfare of employees against nanoparticulate substances that most conventional controls have not been designed to manage.
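One quantitative way to see why surface effects dominate at the nanoscale, given here as a worked illustration rather than a figure drawn from the survey above: for an idealized spherical particle of radius r and density \rho, the surface area per unit mass scales inversely with the radius.

\[
  \frac{A}{m} \;=\; \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}\rho} \;=\; \frac{3}{\rho r}
\]

Under an assumed density of 2000 kg/m3, a particle of radius 5 µm exposes about 0.3 m2 of surface per gram of material, while a 50 nm particle of the same material exposes about 30 m2 per gram, a hundredfold increase in reactive surface for the same mass.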
Occupational health inequalities
Occupational health inequalities refer to differences in occupational injuries and illnesses that are closely linked with demographic, social, cultural, economic, and/or political factors. Although many advances have been made to rectify gaps in occupational health within the past half century, many still persist due to the complex overlapping of occupational health and social factors. There are three main areas of research on occupational health inequities:
Identifying which social factors, either individually or in combination, contribute to the inequitable distribution of work-related benefits and risks.
Examining how the related structural disadvantages materialize in the lives of workers to put them at greater risk for occupational injury or illness.
Translating these findings into intervention research to build an evidence base of effective ways for reducing occupational health inequities.
Transnational and immigrant worker populations
Immigrant worker populations often are at greater risk for workplace injuries and fatalities. For example, within the United States, immigrant Mexican workers have one of the highest rates of fatal workplace injuries of the entire working population. Statistics like these are explained through a combination of social, structural, and physical aspects of the workplace. These workers struggle to access safety information and resources in their native languages because of a lack of social and political inclusion. In addition to linguistically tailored interventions, it is also critical for the interventions to be culturally appropriate.
Those residing in a country to work without a visa or other formal authorization may also not have access to legal resources and recourse that are designed to protect most workers. Health and Safety organizations that rely on whistleblowers instead of their own independent inspections may be especially at risk of having an incomplete picture of worker health.
| Biology and health sciences | Health and fitness | null |
35323072 | https://en.wikipedia.org/wiki/Yutyrannus | Yutyrannus | Yutyrannus (simplified Chinese: 华丽羽王龙; traditional Chinese: 華麗羽王龍; pinyin: Huà Lì Yǔ Wáng Lóng; meaning "feathered tyrant") is a genus of proceratosaurid tyrannosauroid dinosaur which contains a single known species, Yutyrannus huali. This species lived during the early Cretaceous period in what is now northeastern China. Three fossils of Yutyrannus huali — all found in the rock beds of Liaoning Province — are the largest-known dinosaur specimens that preserve direct evidence of feathers.
Discovery and naming
Yutyrannus huali was named and scientifically described in 2012 by Xu Xing et al. The name is derived from Mandarin Chinese yǔ (羽, "feather") and Latinised Greek tyrannos (τύραννος, "tyrant"), a reference to its classification as a feathered member of the Tyrannosauroidea. The specific name consists of the Mandarin huáli (华丽 simplified, 華麗 traditional, "beautiful"), in reference to the perceived beauty of the plumage.
Yutyrannus is known from three nearly complete fossil specimens (an adult, a subadult and a juvenile) acquired from a fossil dealer who claimed all three had their provenance in a single quarry at Batu Yingzi in Liaoning Province, China. They were thus probably found in a layer of the Yixian Formation, dating from the Aptian, approximately 125 million years old. The specimens had been cut into pieces about the size of bath mats, which could be carried by two people.
The holotype, ZCDM V5000, is the largest specimen, consisting of a nearly complete skeleton with a skull, compressed on a slab, of an adult individual. The paratypes are the two other specimens: ZCDM V5001 consisting of a skeleton of a smaller individual and part of the same slab as the holotype; and ELDM V1001, a juvenile estimated to have been eight years younger than the holotype. The fossils are part of the collections of the Zhucheng Dinosaur Museum and the Erlianhaote Dinosaur Museum but have been prepared by the Institute of Vertebrate Paleontology and Paleoanthropology, under the guidance of Xu.
Description
Yutyrannus was a large bipedal predator. The holotype and oldest-known specimen has an estimated length of and an estimated weight of about . In 2016, Gregory S. Paul gave lower estimations of and . Its skull has an estimated length of . The skulls of the paratypes are and long and their weights have been estimated at and , respectively.
The describers established some diagnostic traits of Yutyrannus, in which it differs from its direct relatives. The snout features a high midline crest, formed by the nasals and the premaxillae and which is covered by large pneumatic recesses. The postorbital bone has a small secondary process, jutting into the upper hind corner of the eye socket. The outer side of the main body of the postorbital is hollowed out. In the lower jaw, the external mandibular fenestra, the main opening in the outer side, is mainly located in the surangular.
Feathers
The described specimens of Yutyrannus contain direct evidence of feathers in the form of fossil imprints. The feathers were long, up to , and filamentous. Because the quality of the preservation was low, it could not be established whether the filaments were simple or compound, broad or narrow. The feathers covered various parts of the body. With the holotype they were present on the pelvis and near the foot. Specimen ZCDM V5000 had feathers on the tail pointing backwards under an angle of 30 degrees with the tail axis. The smallest specimen showed filaments on the neck and feathers at the upper arm. While it has been known since 2004, upon the description of Dilong, that at least some tyrannosauroids possessed filamentous "stage 1" feathers, according to the feather typology of Richard Prum, Y. huali is currently the largest-known species of dinosaur with direct evidence of feathers, forty times heavier than the previous record holder, Beipiaosaurus.
Based on the distribution of the feathers, they may have covered the whole body and served in regulating temperature, given the rather cold climate of the Yixian with an average annual temperature of . Alternatively, if they were restricted to the regions in which they were found, they may have served as display structures. In addition, the two adult specimens had distinctive, "wavy" crests on their snouts, on both sides of a high central crest, which were probably used for display. The presence of feathers on a large basal tyrannosauroid suggests the possibility that later tyrannosaurids were also feathered, even when adult, despite their size. However, scaly skin impressions have been reported from various Late Cretaceous tyrannosaurids (such as Gorgosaurus, Tarbosaurus and Tyrannosaurus) on parts of the body where Yutyrannus was feathered. Since there is no positive evidence for plumage in tyrannosaurids, some researchers have suggested they may have evolved scales secondarily. If scaly skin was the dominant epidermal trait of later genera, then the extent and nature of the integumentary covering may have changed over time in response to body size, a warmer climate, or other factors.
It was once considered possible that the integumentary structures of Yutyrannus were not true feathers but simpler filamentous structures representing an ancestral state to feathers. However, this hypothesis is now considered overturned by subsequent research that verified that the structures on various dinosaurs and pterosaurs were true feathers.
Classification
To date, all phylogenetic analyses of Yutyrannus relationships have classified it in the group Tyrannosauroidea. An initial analysis of its relationship to other tyrannosauroids showed that it was more primitive than Eotyrannus in the evolutionary tree, but more advanced than tyrannosauroids such as Dilong, Guanlong and Sinotyrannus. Primitive traits relative to advanced tyrannosaurs included long forelimbs with three fingers and a short foot which was not specialized for running. Advanced traits included a large and deep skull, the outer side of the premaxilla having rotated upwards, a large cuneiform horn on the lacrimal in front of the eye socket, a postorbital process on the back rim of the eye socket, the squamosal and the quadratojugal forming a large process on the back rim of the infratemporal fenestra, short dorsal vertebrae, an ilium with a straight upper rim and an appending lobe, a large pubic foot and a slender ischium.
In 2016, a phylogenetic analysis conducted by Thomas Carr and Stephen Brusatte re-examined the evolutionary relationships of the Tyrannosauroidea. Their analysis found Yutyrannus to be more basal than Dilong, placing it within the family Proceratosauridae.
Paleobiology
Ontogeny
The knowledge of specimens representing various different ages has allowed paleontologists to determine the ontogeny, or change during growth, of this species. During growth the lower legs, feet, ilia and forelimbs became relatively smaller. The skull, on the other hand, grew more robust and deeper.
Feeding
According to a 2018 study, Yutyrannus had a simple hyoid structure, indicating it had a flat tongue, like a crocodile. Based on hyoid bone comparisons between living and extinct archosaurs, it was determined that all archosaurs would have had fixed tongues, with the exception of birds, pterosaurs and certain ornithischians.
Social behavior
Because the three known individuals of Yutyrannus were allegedly found together, some paleontologists, including Xu Xing, have interpreted the animal as a pack hunter. Based on the presence of sauropod material in the quarry in which the three specimens were found, Xu has further speculated that Yutyrannus may have hunted sauropods, and that the three known individuals may have died while doing so. In addition, other sauropod-hunting theropods such as Mapusaurus are known to have exhibited pack-hunting behaviour. The true cause of their death, however, remains unknown. If Yutyrannus did prey on sauropods, it would have been one of two predatory animals known from the Yixian Formation capable of doing so, the other being an as-yet-undescribed large theropod known from a tooth embedded in the rib of a Dongbeititan.
Paleoenvironment
Because the locality of Yutyrannus is uncertain, it is unknown what fauna it coexisted with. Age estimates point towards Yutyrannus originating from the Lujiatun or the Jianshangou beds of the Yixian, meaning it would have been contemporaneous with such dinosaurs as Psittacosaurus, Dongbeititan, Sinosauropteryx, and Caudipteryx. Fish such as Lycoptera would also have been prevalent. Volcanic eruptions and forest fires appear to have been common in the Yixian, and the environment would have been dotted with bodies of water and coniferous plants. The environment would have been comparable to the modern-day temperate rainforests of British Columbia, and would have experienced significant seasonal changes in temperature.
| Biology and health sciences | Theropods | Animals |
26477290 | https://en.wikipedia.org/wiki/Substance%20use%20disorder | Substance use disorder | Substance use disorder (SUD) is the persistent use of drugs despite substantial harm and adverse consequences to self and others. Related terms include substance use problems and problematic drug or alcohol use.
Substance use disorders vary with regard to the average age of onset. It is not uncommon for those who have SUD to also have other mental health disorders. Substance use disorders are characterized by an array of mental, emotional, physical, and behavioral problems such as chronic guilt; an inability to reduce or stop consuming the substance(s) despite repeated attempts; operating vehicles while intoxicated; and physiological withdrawal symptoms. Drug classes that are commonly involved in SUD include: alcohol (alcoholism); cannabis; opioids; stimulants such as nicotine (including tobacco), cocaine and amphetamines; benzodiazepines; barbiturates; and other substances.
In the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (2013), also known as DSM-5, the DSM-IV diagnoses of substance abuse and substance dependence were merged into the category of substance use disorders. The severity of substance use disorders can vary widely; in the DSM-5 diagnosis of a SUD, the severity of an individual's SUD is qualified as mild, moderate, or severe on the basis of how many of the 11 diagnostic criteria are met. The International Classification of Diseases 11th revision (ICD-11) divides substance use disorders into two categories: (1) harmful pattern of substance use; and (2) substance dependence.
In 2017, globally 271 million people (5.5% of adults) were estimated to have used one or more illicit drugs. Of these, 35 million had a substance use disorder. An additional 237 million men and 46 million women have alcohol use disorder as of 2016. In 2017, substance use disorders from illicit substances directly resulted in 585,000 deaths. Direct deaths from drug use, other than alcohol, have increased over 60 percent from 2000 to 2015. Alcohol use resulted in an additional 3 million deaths in 2016.
Causes
Substance use disorders (SUDs) are highly prevalent and exact a large toll on individuals' health, well-being, and social functioning. Long-lasting changes in brain networks involved in reward, executive function, stress reactivity, mood, and self-awareness underlie the intense drive to consume substances and the inability to control this urge in a person who suffers from addiction (moderate or severe SUD). Biological (including genetics and developmental life stages) and social (including adverse childhood experiences) determinants of health are recognized factors that contribute to vulnerability for or resilience against developing a SUD. Consequently, prevention strategies that target social risk factors can improve outcomes and, when deployed in childhood and adolescence, can decrease the risk for these disorders.
This section divides substance use disorder causes into categories consistent with the biopsychosocial model. However, it is important to bear in mind that these categories are used by scientists partly for convenience; the categories often overlap (for example, adolescents and adults whose parents had (or have) an alcohol use disorder display higher rates of alcohol problems, a phenomenon that can be due to genetic, observational learning, socioeconomic, and other causal factors); and these categories are not the only ways to classify substance use disorder etiology.
Similarly, most researchers in this and related areas (such as the etiology of psychopathology generally), emphasize that various causal factors interact and influence each other in complex and multifaceted ways.
Social determinants
Among older adults, being divorced, separated, or single; having more financial resources; lack of religious affiliation; bereavement; involuntary retirement; and homelessness are all associated with alcohol problems, including alcohol use disorder. These issues are often interconnected: people without jobs are more likely to abuse substances, which then makes them unable to work, while not having a job leads to stress and sometimes depression, which in turn can cause an individual to increase substance use. This leads to a cycle of substance abuse and unemployment. The likelihood of later substance abuse can also be increased by childhood experiences. A 2021 study on the effect of childhood experiences on future substance use found a direct connection between the two factors: individuals who had traumatic childhood experiences had a much higher chance of substance abuse.
Psychological determinants
Psychological causal factors include cognitive, affective, and developmental determinants, among others. For example, individuals who begin using alcohol or other drugs in their teens are more likely to have a substance use disorder as adults. Other common risk factors are being male, being under 25, having other mental health problems (with the latter two being related to symptomatic relapse, impaired clinical and psychosocial adjustment, reduced medication adherence, and lower response to treatment), and lack of familial support and supervision. (As mentioned above, some of these causal factors can also be categorized as social or biological). Other psychological risk factors include high impulsivity, sensation seeking, neuroticism and openness to experience in combination with low conscientiousness.
Biological determinants
Children born to parents with SUDs have roughly a two-fold increased risk in developing a SUD compared to children born to parents without any SUDs. Other factors such as substance use during pregnancy, or the persistent inhalation of secondhand smoke can also influence a person's substance use behaviors in the future.
Diagnosis
It is important when diagnosing substance use disorder to define the difference between substance use and substance abuse. "Substance use pertains to using select substances such as alcohol, tobacco, illicit drugs, etc. that can cause dependence or harmful side effects." On the other hand, substance abuse is the use of drugs such as prescriptions, over-the-counter medications, or alcohol for purposes other than what they are intended for, or using them in excessive amounts. Individuals whose drug or alcohol use causes significant impairment or distress may have a substance use disorder (SUD). Diagnosis usually involves an in-depth examination, typically by a psychiatrist, psychologist, or drug and alcohol counselor. The most commonly used guidelines are published in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5). There are 11 diagnostic criteria which can be broadly categorized into issues arising from substance use related to loss of control, strain to one's interpersonal life, hazardous use, and pharmacologic effects.
There are additional qualifiers and exceptions outlined in the DSM. For instance, if an individual is taking opiates as prescribed, they may experience physiologic effects of tolerance and withdrawal, but this would not cause an individual to meet criteria for a SUD without additional symptoms also being present. A physician trained to evaluate and treat substance use disorders will take these nuances into account during a diagnostic evaluation.
Signs and symptoms
Symptoms of a substance use disorder include behavioral, physical, and social changes. Changes in behavior include being absent from school or work, changes in appetite or sleep patterns, personality and attitude changes, mood swings, and anxiety. Signs include physical changes such as weight gain or loss, tremors, and bloodshot eyes. Different substances used can give different signs and symptoms.
Severity
Substance use disorders can range widely in severity, and there are numerous methods to monitor and qualify the severity of an individual's SUD. The DSM-5 includes specifiers for severity of a SUD. Individuals who meet only two or three criteria are often deemed to have mild SUD. Substance users who meet four or five criteria may have their SUD described as moderate, and persons meeting six or more criteria as severe. In the DSM-5, the term drug addiction is synonymous with severe substance use disorder. The number of criteria met offers a rough gauge of the severity of illness, but licensed professionals will also take into account a more holistic view when assessing severity which includes specific consequences and behavioral patterns related to an individual's substance use. They will also typically follow frequency of use over time, and assess for substance-specific consequences, such as the occurrence of blackouts, or arrests for driving under the influence of alcohol, when evaluating someone for an alcohol use disorder. There are additional qualifiers for stages of remission that are based on the amount of time an individual with a diagnosis of a SUD has not met any of the 11 criteria except craving. Some medical systems refer to an Addiction Severity Index to assess the severity of problems related to substance use. The index assesses potential problems in seven categories: medical, employment/support, alcohol, other drug use, legal, family/social, and psychiatric.
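A minimal sketch of the count-based severity specifier just described (two to three criteria mild, four to five moderate, six or more severe); the function name and the wording of the returned labels are illustrative, and, as noted above, clinicians weigh much more than the raw count.

def sud_severity(criteria_met: int) -> str:
    # DSM-5 defines 11 diagnostic criteria; the thresholds below follow the text.
    if not 0 <= criteria_met <= 11:
        raise ValueError("criteria_met must be between 0 and 11")
    if criteria_met <= 1:
        return "below the criteria count for a SUD diagnosis"
    if criteria_met <= 3:
        return "mild"
    if criteria_met <= 5:
        return "moderate"
    return "severe"

print(sud_severity(4))  # moderate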
Screening tools
There are several different screening tools that have been validated for use with adolescents, such as the CRAFFT, and with adults, such as CAGE, AUDIT and DALI. Laboratory tests to detect alcohol and other drugs in urine and blood may be useful during the assessment process to confirm a diagnosis, to establish a baseline, and later, to monitor progress. However, since these tests measure recent substance use rather than chronic use or dependence, they are not recommended as screening tools.
Mechanisms
Rehabilitation
There are many underlying mechanisms behind the rehabilitation of SUD. Some include coping, craving, motivation to change, self-efficacy, social support, motives and expectancies, behavioral economic indicators, and neurobiological, neurocognitive, and physiological factors. These can be addressed in a variety of ways, such as by cognitive behavioral therapy (CBT), an intervention that helps individuals identify and change harmful thought patterns that may influence their emotions and behaviors negatively. Another is motivational interviewing (MI), a technique used to help motivate ambivalent patients to change their behavior. Lastly, combined behavioral intervention (CBI) can be used, which combines elements of alcohol interventions, motivational interviewing, and functional analysis to help the clinician identify skill deficits and high-risk situations associated with drinking or drug use.
Management
Withdrawal management
Withdrawal management is the medical and psychological care of patients who are experiencing withdrawal symptoms due to the cessation of drug use. Depending on the severity of use and the given substance, early treatment of acute withdrawal may include medical detoxification. Of note, acute withdrawal from heavy alcohol use should be done under medical supervision to prevent a potentially deadly withdrawal syndrome known as delirium tremens. | Biology and health sciences | Drugs and pharmacology | null
25086118 | https://en.wikipedia.org/wiki/Autotroph | Autotroph | An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds, which can be used by other organisms. Autotrophs produce complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light or inorganic chemical reactions. Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide.
The primary producers can convert the energy in light (phototrophs and photoautotrophs) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which are usually accumulated in the form of biomass and used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms are called chemoautotrophs and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reason why Earth sustains life to this day.
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP+ to NADPH to form organic compounds.
History
The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. It stems from an ancient Greek word meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated during Earth's Great Oxidation Event with an increase in the rate of oxygenic photosynthesis by cyanobacteria. Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis. The earliest photosynthetic bacteria used hydrogen sulphide. Due to the scarcity of hydrogen sulphide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria.
Variants
Some organisms rely on organic compounds as a source of carbon, but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs. An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph, while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph.
Evidence suggests that some fungi may also obtain energy from ionizing radiation: Such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant.
Examples
There are many different types of autotrophs in Earth's ecosystems. Lichens located in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or additionally nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus. Among the many examples of primary producers, two dominant types are coral and kelp, one of the many types of brown algae.
Photosynthesis
Gross primary production occurs by photosynthesis. This is the main way that primary producers get energy and make it available to other forms of life. Plants, many corals (by means of intracellular algae), some bacteria (cyanobacteria), and algae do this. During photosynthesis, primary producers receive energy from the sun and use it to produce sugar and oxygen.
Ecology
Without primary producers, organisms that are capable of producing energy on their own, the biological systems of Earth would be unable to sustain themselves. Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. It is thought that the first organisms on Earth were primary producers located on the ocean floor.
Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production. Other organisms, called heterotrophs, take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals, almost all fungi, as well as most bacteria and protozoa – depend on autotrophs, or primary producers, for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed.
Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun. Plants can only use a fraction (approximately 1%) of this energy for photosynthesis. The process of photosynthesis splits a water molecule (H2O), releasing oxygen (O2) into the atmosphere, and reducing carbon dioxide (CO2) to release the hydrogen atoms that fuel the metabolic process of primary production. Plants convert and store the energy of the photons into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates, such as starch and cellulose; glucose is also used to make fats and proteins. When autotrophs are eaten by heterotrophs, i.e., consumers such as animals, the carbohydrates, fats, and proteins contained in them become energy sources for the heterotrophs. Proteins can be made using nitrates, sulfates, and phosphates in the soil.
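The paragraph above describes the splitting of water and the reduction of carbon dioxide; the standard textbook summary equation for oxygenic photosynthesis, not quoted from this text, is:

\[
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]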
Primary production in tropical streams and rivers
Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is reflected in net primary production, a fundamental ecological process that measures the amount of carbon synthesized within an ecosystem. This carbon ultimately becomes available to consumers. Measurements of net primary production show that rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems.
Origin of autotrophs
Researchers believe that the first cellular lifeforms were not heterotrophs, because heterotrophs would have had to rely upon autotrophs, given that organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep sea alkaline hydrothermal vents. This view is supported by phylogenetic evidence: the physiology and habitat of the last universal common ancestor (LUCA) is inferred to have also been that of a thermophilic anaerobe with a Wood-Ljungdahl pathway, and its biochemistry was replete with FeS clusters and radical reaction mechanisms. It was dependent upon Fe, H2, and CO2. The high concentration of K+ present within the cytosol of most life forms suggests that early cellular life had Na+/H+ antiporters or possibly symporters. Autotrophs possibly evolved into heterotrophs when they were at low H2 partial pressures, where the first forms of heterotrophy were likely amino acid and clostridial-type purine fermentations. It has been suggested that photosynthesis emerged in the presence of faint near-infrared light emitted by hydrothermal vents. The first photochemically active pigments are then thought to be Zn-tetrapyrroles.
| Biology and health sciences | Ecology | Biology |
28329803 | https://en.wikipedia.org/wiki/Spider | Spider | Spiders (order Araneae) are air-breathing arthropods that have eight limbs, chelicerae with fangs generally able to inject venom, and spinnerets that extrude silk. They are the largest order of arachnids and rank seventh in total species diversity among all orders of organisms. Spiders are found worldwide on every continent except Antarctica, and have become established in nearly every land habitat. , 52,309 spider species in 134 families have been recorded by taxonomists. However, there has been debate among scientists about how families should be classified, with over 20 different classifications proposed since 1900.
Anatomically, spiders (as with all arachnids) differ from other arthropods in that the usual body segments are fused into two tagmata, the cephalothorax or prosoma, and the opisthosoma, or abdomen, and joined by a small, cylindrical pedicel. However, as there is currently neither paleontological nor embryological evidence that spiders ever had a separate thorax-like division, there exists an argument against the validity of the term cephalothorax, which means fused cephalon (head) and the thorax. Similarly, arguments can be formed against the use of the term "abdomen", as the opisthosoma of all spiders contains a heart and respiratory organs, organs atypical of an abdomen.
Unlike insects, spiders do not have antennae. In all except the most primitive group, the Mesothelae, spiders have the most centralized nervous systems of all arthropods, as all their ganglia are fused into one mass in the cephalothorax. Unlike most arthropods, spiders have no extensor muscles in their limbs and instead extend them by hydraulic pressure.
Their abdomens bear appendages, modified into spinnerets that extrude silk from up to six types of glands. Spider webs vary widely in size, shape and the amount of sticky thread used. It now appears that the spiral orb web may be one of the earliest forms, and spiders that produce tangled cobwebs are more abundant and diverse than orb-weaver spiders. Spider-like arachnids with silk-producing spigots (Uraraneida) appeared in the Devonian period, about , but these animals apparently lacked spinnerets. True spiders have been found in Carboniferous rocks from and are very similar to the most primitive surviving suborder, the Mesothelae. The main groups of modern spiders, Mygalomorphae and Araneomorphae, first appeared in the Triassic period, more than .
The species Bagheera kiplingi was described as herbivorous in 2008, but all other known species are predators, mostly preying on insects and other spiders, although a few large species also take birds and lizards. An estimated 25 million tons of spiders kill 400–800 million tons of prey every year. Spiders use numerous strategies to capture prey: trapping it in sticky webs, lassoing it with sticky bolas, mimicking the prey to avoid detection, or running it down. Most detect prey mainly by sensing vibrations, but the active hunters have acute vision and hunters of the genus Portia show signs of intelligence in their choice of tactics and ability to develop new ones. Spiders' guts are too narrow to take solids, so they liquefy their food by flooding it with digestive enzymes. They also grind food with the bases of their pedipalps, as arachnids do not have the mandibles that crustaceans and insects have.
To avoid being eaten by the females, which are typically much larger, male spiders identify themselves as potential mates by a variety of complex courtship rituals. Males of most species survive a few matings, limited mainly by their short life spans. Females weave silk egg cases, each of which may contain hundreds of eggs. Females of many species care for their young, for example by carrying them around or by sharing food with them. A minority of species are social, building communal webs that may house anywhere from a few to 50,000 individuals. Social behavior ranges from precarious toleration, as in the widow spiders, to cooperative hunting and food-sharing. Although most spiders live for at most two years, tarantulas and other mygalomorph spiders can live up to 25 years in captivity.
While the venom of a few species is dangerous to humans, scientists are now researching the use of spider venom in medicine and as non-polluting pesticides. Spider silk provides a combination of lightness, strength and elasticity superior to synthetic materials, and spider silk genes have been inserted into mammals and plants to see if these can be used as silk factories. As a result of their wide range of behaviors, spiders have become common symbols in art and mythology, symbolizing various combinations of patience, cruelty and creative powers. An irrational fear of spiders is called arachnophobia.
Etymology
The word spider derives from Proto-Germanic , literally (a reference to how spiders make their webs), from the Proto-Indo-European root .
Anatomy and physiology
Body plan
Spiders are chelicerates and therefore, arthropods. As arthropods, they have: segmented bodies with jointed limbs, all covered in a cuticle made of chitin and proteins; heads that are composed of several segments that fuse during the development of the embryo. Being chelicerates, their bodies consist of two tagmata, sets of segments that serve similar functions: the foremost one, called the cephalothorax or prosoma, is a complete fusion of the segments that in an insect would form two separate tagmata, the head and thorax; the rear tagma is called the abdomen or opisthosoma. In spiders, the cephalothorax and abdomen are connected by a small cylindrical section, the pedicel. The pattern of segment fusion that forms chelicerates' heads is unique among arthropods, and what would normally be the first head segment disappears at an early stage of development, so that chelicerates lack the antennae typical of most arthropods. In fact, chelicerates' only appendages ahead of the mouth are a pair of chelicerae, and they lack anything that would function directly as "jaws". The first appendages behind the mouth are called pedipalps, and serve different functions within different groups of chelicerates.
Spiders and scorpions are members of one chelicerate group, the arachnids. Scorpions' chelicerae have three sections and are used in feeding. Spiders' chelicerae have two sections and terminate in fangs that are generally venomous, and fold away behind the upper sections while not in use. The upper sections generally have thick "beards" that filter solid lumps out of their food, as spiders can take only liquid food. Scorpions' pedipalps generally form large claws for capturing prey, while those of spiders are fairly small appendages whose bases also act as an extension of the mouth; in addition, those of male spiders have enlarged last sections used for sperm transfer.
In spiders, the cephalothorax and abdomen are joined by a small, cylindrical pedicel, which enables the abdomen to move independently when producing silk. The upper surface of the cephalothorax is covered by a single, convex carapace, while the underside is covered by two rather flat plates. The abdomen is soft and egg-shaped. It shows no sign of segmentation, except that the primitive Mesothelae, whose living members are the Liphistiidae, have segmented plates on the upper surface.
Circulation and respiration
Like other arthropods, spiders are coelomates in which the coelom is reduced to small areas around the reproductive and excretory systems. Its place is largely taken by a hemocoel, a cavity that runs most of the length of the body and through which blood flows. The heart is a tube in the upper part of the body, with a few ostia that act as non-return valves allowing blood to enter the heart from the hemocoel but prevent it from leaving before it reaches the front end. However, in spiders, it occupies only the upper part of the abdomen, and blood is discharged into the hemocoel by one artery that opens at the rear end of the abdomen and by branching arteries that pass through the pedicle and open into several parts of the cephalothorax. Hence spiders have open circulatory systems. The blood of many spiders that have book lungs contains the respiratory pigment hemocyanin to make oxygen transport more efficient.
Spiders have developed several different respiratory anatomies, based on book lungs, a tracheal system, or both. Mygalomorph and Mesothelae spiders have two pairs of book lungs filled with haemolymph, where openings on the ventral surface of the abdomen allow air to enter and diffuse oxygen. This is also the case for some basal araneomorph spiders, like the family Hypochilidae, but the remaining members of this group have just the anterior pair of book lungs intact while the posterior pair of breathing organs are partly or fully modified into tracheae, through which oxygen is diffused into the haemolymph or directly to the tissue and organs. The tracheal system has most likely evolved in small ancestors to help resist desiccation. The tracheae were originally connected to the surroundings through a pair of openings called spiracles, but in the majority of spiders this pair of spiracles has fused into a single one in the middle, and moved backwards close to the spinnerets. Spiders that have tracheae generally have higher metabolic rates and better water conservation. Spiders are ectotherms, so environmental temperatures affect their activity.
Feeding, digestion and excretion
Uniquely among chelicerates, the final sections of spiders' chelicerae are fangs, and the great majority of spiders can use them to inject venom into prey from venom glands in the roots of the chelicerae. The families Uloboridae and Holarchaeidae, and some Liphistiidae spiders, have lost their venom glands, and kill their prey with silk instead. Like most arachnids, including scorpions, spiders have a narrow gut that can only cope with liquid food and two sets of filters to keep solids out. They use one of two different systems of external digestion. Some pump digestive enzymes from the midgut into the prey and then suck the liquified tissues of the prey into the gut, eventually leaving behind the empty husk of the prey. Others grind the prey to pulp using the chelicerae and the bases of the pedipalps, while flooding it with enzymes; in these species, the chelicerae and the bases of the pedipalps form a preoral cavity that holds the food they are processing.
The stomach in the cephalothorax acts as a pump that sends the food deeper into the digestive system. The midgut bears many digestive ceca, compartments with no other exit, that extract nutrients from the food; most are in the abdomen, which is dominated by the digestive system, but a few are found in the cephalothorax.
Most spiders convert nitrogenous waste products into uric acid, which can be excreted as a dry material. Malpighian tubules ("little tubes") extract these wastes from the blood in the hemocoel and dump them into the cloacal chamber, from which they are expelled through the anus. Production of uric acid and its removal via Malpighian tubules are a water-conserving feature that has evolved independently in several arthropod lineages that can live far away from water; for example, the tubules of insects and arachnids develop from completely different parts of the embryo. However, a few primitive spiders, the suborder Mesothelae and infraorder Mygalomorphae, retain the ancestral arthropod nephridia ("little kidneys"), which use large amounts of water to excrete nitrogenous waste products as ammonia.
Central nervous system
The basic arthropod central nervous system consists of a pair of nerve cords running below the gut, with paired ganglia as local control centers in all segments, and a brain formed by fusion of the ganglia for the head segments ahead of and behind the mouth, so that the esophagus is encircled by this conglomeration of ganglia. Except for the primitive Mesothelae, of which the Liphistiidae are the sole surviving family, spiders have the much more centralized nervous system that is typical of arachnids: the ganglia of all segments behind the esophagus are fused, so that the cephalothorax is largely filled with nervous tissue and there are no ganglia in the abdomen; in the Mesothelae, the ganglia of the abdomen and the rear part of the cephalothorax remain unfused.
Despite the relatively small central nervous system, some spiders (like Portia) exhibit complex behaviour, including the ability to use a trial-and-error approach.
Sense organs
Eyes
Spiders typically have four pairs of eyes on the top-front area of the cephalothorax, arranged in patterns that vary from one family to another. The principal pair at the front are of the type called pigment-cup ocelli ("little eyes"), which in most arthropods are only capable of detecting the direction from which light is coming, using the shadow cast by the walls of the cup. However, in spiders these eyes are capable of forming images. The other pairs, called secondary eyes, are thought to be derived from the compound eyes of the ancestral chelicerates, but no longer have the separate facets typical of compound eyes. Unlike the principal eyes, in many spiders the secondary eyes detect light reflected by a tapetum lucidum, and wolf spiders can be spotted by torchlight reflected from the tapeta. On the other hand, the secondary eyes of jumping spiders have no tapeta.
Other differences between the principal and secondary eyes are that the latter have rhabdomeres that point away from incoming light, just like in vertebrates, while the arrangement is the opposite in the former. The principal eyes are also the only ones with eye muscles, allowing them to move the retina. Having no muscles, the secondary eyes are immobile.
The visual acuity of some jumping spiders exceeds by a factor of ten that of dragonflies, which have by far the best vision among insects. This acuity is achieved by a telephotographic series of lenses, a four-layer retina, and the ability to swivel the eyes and integrate images from different stages in the scan. The downside is that the scanning and integrating processes are relatively slow.
There are spiders with a reduced number of eyes, the most common arrangement having six eyes (for example, Periegops suterii), with the pair on the anterior median line absent. Other species have four eyes, and members of the family Caponiidae can have as few as two. Cave-dwelling species have no eyes (such as the Kauaʻi cave wolf spider), or possess vestigial eyes incapable of sight (such as Holothele maddeni).
Other senses
As with other arthropods, spiders' cuticles would block out information about the outside world, except that they are penetrated by many sensors or connections from sensors to the nervous system. In fact, spiders and other arthropods have modified their cuticles into elaborate arrays of sensors. Various touch sensors, mostly bristles called setae, respond to different levels of force, from strong contact to very weak air currents. Chemical sensors provide equivalents of taste and smell, often by means of setae. An adult Araneus may have up to 1,000 such chemosensitive setae, most on the tarsi of the first pair of legs. Males have more chemosensitive bristles on their pedipalps than females. They have been shown to be responsive to sex pheromones produced by females, both contact and air-borne. The jumping spider Evarcha culicivora uses the scent of blood from mammals and other vertebrates, which is obtained by capturing blood-filled mosquitoes, to attract the opposite sex. Because they are able to tell the sexes apart, it is assumed the blood scent is mixed with pheromones. Spiders also have in the joints of their limbs slit sensillae that detect force and vibrations. In web-building spiders, all these mechanical and chemical sensors are more important than the eyes, while the eyes are most important to spiders that hunt actively.
Like most arthropods, spiders lack balance and acceleration sensors and rely on their eyes to tell them which way is up. Arthropods' proprioceptors, sensors that report the force exerted by muscles and the degree of bending in the body and joints, are well-understood. On the other hand, little is known about what other internal sensors spiders or other arthropods may have.
Some spiders use their webs for hearing, where the giant webs function as extended and reconfigurable auditory sensors.
Locomotion
Each of the eight legs of a spider consists of seven distinct parts. The part closest to and attaching the leg to the cephalothorax is the coxa; the next segment is the short trochanter that works as a hinge for the following long segment, the femur; next is the spider's knee, the patella, which acts as the hinge for the tibia; the metatarsus is next, and it connects the tibia to the tarsus (which may be thought of as a foot of sorts); the tarsus ends in a claw made up of either two or three points, depending on the family to which the spider belongs. Although all arthropods use muscles attached to the inside of the exoskeleton to flex their limbs, spiders and a few other groups still use hydraulic pressure to extend them, a system inherited from their pre-arthropod ancestors. The only extensor muscles in spider legs are located in the three hip joints (bordering the coxa and the trochanter). As a result, a spider with a punctured cephalothorax cannot extend its legs, and the legs of dead spiders curl up. Spiders can generate pressures up to eight times their resting level to extend their legs, and jumping spiders can jump up to 50 times their own length by suddenly increasing the blood pressure in the third or fourth pair of legs. Although larger spiders use hydraulics to straighten their legs, unlike smaller jumping spiders they depend on their flexor muscles to generate the propulsive force for their jumps.
Most spiders that hunt actively, rather than relying on webs, have dense tufts of fine bristles between the paired claws at the tips of their legs. These tufts, known as scopulae, consist of bristles whose ends are split into as many as 1,000 branches, and enable spiders with scopulae to walk up vertical glass and upside down on ceilings. It appears that scopulae get their grip from contact with extremely thin layers of water on surfaces. Spiders, like most other arachnids, keep at least four legs on the surface while walking or running.
Silk production
The abdomen has no appendages except those that have been modified to form one to four (usually three) pairs of short, movable spinnerets, which emit silk. Each spinneret has many spigots, each of which is connected to one silk gland. There are at least six types of silk gland, each producing a different type of silk. Spitting spiders also produce silk in modified venom glands.
Silk is mainly composed of a protein very similar to that used in insect silk. It is initially a liquid, and hardens not by exposure to air but as a result of being drawn out, which changes the internal structure of the protein. It is similar in tensile strength to nylon and biological materials such as chitin, collagen and cellulose, but is much more elastic. In other words, it can stretch much further before breaking or losing shape.
Some spiders have a cribellum, a modified spinneret with up to 40,000 spigots, each of which produces a single very fine fiber. The fibers are pulled out by the calamistrum, a comblike set of bristles on the jointed tip of the cribellum, and combined into a composite woolly thread that is very effective in snagging the bristles of insects. The earliest spiders had cribella, which produced the first silk capable of capturing insects, before spiders developed silk coated with sticky droplets. However, most modern groups of spiders have lost the cribellum.
Even species that do not build webs to catch prey use silk in several ways: as wrappers for sperm and for fertilized eggs; as a "safety rope"; for nest-building; and as "parachutes" by the young of some species.
Reproduction and life cycle
Spiders reproduce sexually and fertilization is internal but indirect, in other words the sperm is not inserted into the female's body by the male's genitals but by an intermediate stage. Unlike many land-living arthropods, male spiders do not produce ready-made spermatophores (packages of sperm), but spin small sperm webs onto which they ejaculate and then transfer the sperm to special syringe-like structures, palpal bulbs or palpal organs, borne on the tips of the pedipalps of mature males. When a male detects signs of a female nearby he checks whether she is of the same species and whether she is ready to mate; for example in species that produce webs or "safety ropes", the male can identify the species and sex of these objects by "smell".
Spiders generally use elaborate courtship rituals to prevent the large females from eating the small males before fertilization, except where the male is so much smaller that he is not worth eating. In web-weaving species, precise patterns of vibrations in the web are a major part of the rituals, while patterns of touches on the female's body are important in many spiders that hunt actively, and may "hypnotize" the female. Gestures and dances by the male are important for jumping spiders, which have excellent eyesight. If courtship is successful, the male injects his sperm from the palpal bulbs into the female via one or two openings on the underside of her abdomen.
Female spiders' reproductive tracts are arranged in one of two ways. The ancestral arrangement ("haplogyne" or "non-entelegyne") consists of a single genital opening, leading to two seminal receptacles (spermathecae) in which females store sperm. In the more advanced arrangement ("entelegyne"), there are two further openings leading directly to the spermathecae, creating a "flow through" system rather than a "first-in first-out" one. Eggs are as a general rule only fertilized during oviposition when the stored sperm is released from its chamber, rather than in the ovarian cavity. A few exceptions exist, such as Parasteatoda tepidariorum. In these species the female appears to be able to activate the dormant sperm before oviposition, allowing them to migrate to the ovarian cavity where fertilization occurs. The only known example of direct fertilization between male and female is an Israeli spider, Harpactea sadistica, which has evolved traumatic insemination. In this species the male pierces the female's body wall with his pedipalps and injects his sperm directly into her ovaries, where the embryos inside the fertilized eggs start to develop before they are laid.
Males of the genus Tidarren amputate one of their palps before maturation and enter adult life with one palp only. The palps are 20% of the male's body mass in this species, and detaching one of the two improves mobility. In the Yemeni species Tidarren argo, the remaining palp is then torn off by the female. The separated palp remains attached to the female's epigynum for about four hours and apparently continues to function independently. In the meantime, the female feeds on the palpless male. In over 60% of cases, the female of the Australian redback spider kills and eats the male after it inserts its second palp into the female's genital opening; in fact, the males co-operate by trying to impale themselves on the females' fangs. Observation shows that most male redbacks never get an opportunity to mate, and the "lucky" ones increase the likely number of offspring by ensuring that the females are well-fed. However, males of most species survive a few matings, limited mainly by their short life spans. Some even live for a while in their mates' webs.
Females lay up to 3,000 eggs in one or more silk egg sacs, which maintain a fairly constant humidity level. In some species, the females die afterwards, but females of other species protect the sacs by attaching them to their webs, hiding them in nests, carrying them in the chelicerae or attaching them to the spinnerets and dragging them along.
Baby spiders pass all their larval stages inside the egg sac and emerge as spiderlings, very small and sexually immature but similar in shape to adults. Some spiders care for their young, for example a wolf spider's brood clings to rough bristles on the mother's back, and females of some species respond to the "begging" behaviour of their young by giving them their prey, provided it is no longer struggling, or even by regurgitating food. In one exceptional case, females of the jumping spider Toxeus magnus produce a nutritious milk-like substance for their offspring, which are fed on it until they are sexually mature.
Like other arthropods, spiders have to molt to grow as their cuticle ("skin") cannot stretch. In some species males mate with newly molted females, which are too weak to be dangerous to the males. Most spiders live for only one to two years, although some tarantulas can live in captivity for over 20 years, and an Australian female trapdoor spider was documented to have lived in the wild for 43 years, dying of a parasitic wasp attack.
Size
Spiders occur in a large range of sizes. The smallest, Patu digua from Colombia, are less than in body length. The largest and heaviest spiders occur among tarantulas, which can have body lengths up to and leg spans up to .
Coloration
Only three classes of pigment (ommochromes, bilins and guanine) have been identified in spiders, although other pigments have been detected but not yet characterized. Melanins, carotenoids and pterins, very common in other animals, are apparently absent. In some species, the exocuticle of the legs and prosoma is modified by a tanning process, resulting in a brown coloration.
Bilins are found, for example, in Micrommata virescens, resulting in its green color. Guanine is responsible for the white markings of the European garden spider Araneus diadematus. In many species it is accumulated in specialized cells called guanocytes. In genera such as Tetragnatha, Leucauge, Argyrodes or Theridiosoma, guanine creates their silvery appearance. While guanine is originally an end-product of protein metabolism, its excretion can be blocked in spiders, leading to an increase in its storage. Structural colors occur in some species, which are the result of the diffraction, scattering or interference of light, for example by modified setae or scales. The white prosoma of Argiope results from bristles reflecting the light, while Lycosa and Josa both have areas of modified cuticle that act as light reflectors. The peacock spiders of Australia (genus Maratus) are notable for their bright structural colours in the males.
While in many spiders color is fixed throughout their lifespan, in some groups, color may be variable in response to environmental and internal conditions. Choice of prey may be able to alter the color of spiders. For example, the abdomen of Theridion grallator will become orange if the spider ingests certain species of Diptera and adult Lepidoptera, but if it consumes Homoptera or larval Lepidoptera, then the abdomen becomes green. Environmentally induced color changes may be morphological (occurring over several days) or physiological (occurring near instantly). Morphological changes require pigment synthesis and degradation. In contrast to this, physiological changes occur by changing the position of pigment-containing cells. An example of morphological color change is background matching. Misumena vatia, for instance, can change its body color to match the substrate it lives on, which makes it more difficult for prey to detect. An example of physiological color change is observed in Cyrtophora cicatrosa, which can change its body color from white to brown near instantly.
Ecology and behavior
Non-predatory feeding
Although spiders are generally regarded as predatory, the jumping spider Bagheera kiplingi gets over 90% of its food from Beltian bodies, a solid plant material produced by acacias as part of a mutualistic relationship with a species of ant.
Juveniles of some spiders in the families Anyphaenidae, Corinnidae, Clubionidae, Thomisidae and Salticidae feed on plant nectar. Laboratory studies show that they do so deliberately and over extended periods, and periodically clean themselves while feeding. These spiders also prefer sugar solutions to plain water, which indicates that they are seeking nutrients. Since many spiders are nocturnal, the extent of nectar consumption by spiders may have been underestimated. Nectar contains amino acids, lipids, vitamins and minerals in addition to sugars, and studies have shown that other spider species live longer when nectar is available. Feeding on nectar avoids the risks of struggles with prey, and the costs of producing venom and digestive enzymes.
Various species are known to feed on dead arthropods (scavenging), web silk, and their own shed exoskeletons. Pollen caught in webs may also be eaten, and studies have shown that young spiders have a better chance of survival if they have the opportunity to eat pollen. In captivity, several spider species are also known to feed on bananas, marmalade, milk, egg yolk and sausages. Airborne fungal spores caught on the webs of orb-weavers may be ingested along with the old web before construction of a new web. The enzyme chitinase present in their digestive fluid allows for the digestion of these spores.
Spiders have been observed to consume plant material belonging to a large variety of taxa and types. Cursorial spiders account for the vast majority (over 80%) of reported incidents of plant-eating.
Capturing prey
The best-known method of prey capture is by means of sticky webs. Varying placement of webs allows different species of spider to trap different insects in the same area, for example flat horizontal webs trap insects that fly up from vegetation underneath while flat vertical webs trap insects in horizontal flight. Web-building spiders have poor vision, but are extremely sensitive to vibrations.
The water spider Argyroneta aquatica builds underwater "diving bell" webs that it fills with air and uses for digesting prey and molting. Mating and the raising of offspring take place in the female's bell. These spiders live almost entirely within their bells, darting out to catch prey animals that touch the bell or the threads that anchor it. A few spiders use the surfaces of lakes and ponds as "webs", detecting trapped insects by the vibrations that these cause while struggling.
Net-casting spiders weave only small webs, but then manipulate them to trap prey. Those of the genus Hyptiotes and the family Theridiosomatidae stretch their webs and then release them when prey strike them, but do not actively move their webs. Those of the family Deinopidae weave even smaller webs, hold them outstretched between their first two pairs of legs, and lunge and push the webs as much as twice their own body length to trap prey, and this move may increase the webs' area by a factor of up to ten. Experiments have shown that Deinopis spinosus has two different techniques for trapping prey: backwards strikes to catch flying insects, whose vibrations it detects; and forward strikes to catch ground-walking prey that it sees. These two techniques have also been observed in other deinopids. Walking insects form most of the prey of most deinopids, but one population of Deinopis subrufa appears to live mainly on tipulid flies that they catch with the backwards strike.
Mature female bolas spiders of the genus Mastophora build "webs" that consist of only a single "trapeze line", which they patrol. They also construct a bolas made of a single thread, tipped with a large ball of very wet sticky silk. They emit chemicals that resemble the pheromones of moths, and then swing the bolas at the moths. Although they miss on about 50% of strikes, they catch about the same weight of insects per night as web-weaving spiders of similar size. The spiders eat the bolas if they have not made a kill in about 30 minutes, rest for a while, and then make new bolas. Juveniles and adult males are much smaller and do not make bolas. Instead they release different pheromones that attract moth flies, and catch them with their front pairs of legs.
The primitive Liphistiidae, the "trapdoor spiders" of the family Ctenizidae and many tarantulas are ambush predators that lurk in burrows, often closed by trapdoors and often surrounded by networks of silk threads that alert these spiders to the presence of prey. Other ambush predators do without such aids, including many crab spiders, and a few species that prey on bees, which see ultraviolet, can adjust their ultraviolet reflectance to match the flowers in which they are lurking. Wolf spiders, jumping spiders, fishing spiders and some crab spiders capture prey by chasing it, and rely mainly on vision to locate prey.
Some jumping spiders of the genus Portia hunt other spiders in ways that seem intelligent, outflanking their victims or luring them from their webs. Laboratory studies show that Portia's instinctive tactics are only starting points for a trial-and-error approach from which these spiders learn very quickly how to overcome new prey species. However, they seem to be relatively slow "thinkers", which is not surprising, as their brains are vastly smaller than those of mammalian predators.
Ant-mimicking spiders face several challenges: they generally develop slimmer abdomens and false "waists" in the cephalothorax to mimic the three distinct regions (tagmata) of an ant's body; they wave the first pair of legs in front of their heads to mimic antennae, which spiders lack, and to conceal the fact that they have eight legs rather than six; they develop large color patches round one pair of eyes to disguise the fact that they generally have eight simple eyes, while ants have two compound eyes; they cover their bodies with reflective bristles to resemble the shiny bodies of ants. In some spider species, males and females mimic different ant species, as female spiders are usually much larger than males. Ant-mimicking spiders also modify their behavior to resemble that of the target species of ant; for example, many adopt a zig-zag pattern of movement, ant-mimicking jumping spiders avoid jumping, and spiders of the genus Synemosyna walk on the outer edges of leaves in the same way as Pseudomyrmex. Ant mimicry in many spiders and other arthropods may be for protection from predators that hunt by sight, including birds, lizards and spiders. However, several ant-mimicking spiders prey either on ants or on the ants' "livestock", such as aphids. When at rest, the ant-mimicking crab spider Amyciaea does not closely resemble Oecophylla, but while hunting it imitates the behavior of a dying ant to attract worker ants. After a kill, some ant-mimicking spiders hold their victims between themselves and large groups of ants to avoid being attacked.
Defense
There is strong evidence that spiders' coloration is camouflage that helps them to evade their major predators, birds and parasitic wasps, both of which have good color vision. Many spider species are colored so as to merge with their most common backgrounds, and some have disruptive coloration, stripes and blotches that break up their outlines. In a few species, such as the Hawaiian happy-face spider, Theridion grallator, several coloration schemes are present in a ratio that appears to remain constant, and this may make it more difficult for predators to recognize the species. Most spiders are insufficiently dangerous or unpleasant-tasting for warning coloration to offer much benefit. However, a few species with powerful venom, large jaws or irritant bristles have patches of warning colors, and some actively display these colors when threatened.
Many of the family Theraphosidae, which includes tarantulas and baboon spiders, have urticating hairs on their abdomens and use their legs to flick them at attackers. These urticating hairs are fine setae (bristles) with fragile bases and a row of barbs on the tip. The barbs cause intense irritation but there is no evidence that they carry any kind of venom. A few defend themselves against wasps by including networks of very robust threads in their webs, giving the spider time to flee while the wasps are struggling with the obstacles. The golden wheeling spider, Carparachne aureoflava, of the Namibian desert escapes parasitic wasps by flipping onto its side and cartwheeling down sand dunes.
Socialization
A few spider species that build webs live together in large colonies and show social behavior, although not as complex as in social insects. Anelosimus eximius (in the family Theridiidae) can form colonies of up to 50,000 individuals. The genus Anelosimus has a strong tendency towards sociality: all known American species are social, and species in Madagascar are at least somewhat social. Members of other species in the same family but several different genera have independently developed social behavior. For example, although Theridion nigroannulatum belongs to a genus with no other social species, T. nigroannulatum build colonies that may contain several thousand individuals that co-operate in prey capture and share food. Other communal spiders include several Philoponella species (family Uloboridae), Agelena consociata (family Agelenidae) and Mallos gregalis (family Dictynidae). Social predatory spiders need to defend their prey against kleptoparasites ("thieves"), and larger colonies are more successful in this. The herbivorous spider Bagheera kiplingi lives in small colonies which help to protect eggs and spiderlings. Even widow spiders (genus Latrodectus), which are notoriously cannibalistic, have formed small colonies in captivity, sharing webs and feeding together.
In experiments, spider species like Steatoda grossa, Latrodectus hesperus and Eratigena agrestis stayed away from Myrmica rubra ant colonies. These ants are predators and the pheromones they release for communication have a notable deterrent effect on these spider species.
Web types
There is no consistent relationship between the classification of spiders and the types of web they build: species in the same genus may build very similar or significantly different webs. Nor is there much correspondence between spiders' classification and the chemical composition of their silks. Convergent evolution in web construction, in other words use of similar techniques by remotely related species, is rampant. Orb web designs and the spinning behaviors that produce them are the best understood. The basic radial-then-spiral sequence visible in orb webs and the sense of direction required to build them may have been inherited from the common ancestors of most spider groups. However, the majority of spiders build non-orb webs. It used to be thought that the sticky orb web was an evolutionary innovation resulting in the diversification of the Orbiculariae. Now, however, it appears that non-orb spiders are a subgroup that evolved from orb-web spiders, and non-orb spiders have over 40% more species and are four times as abundant as orb-web spiders. Their greater success may be because sphecid wasps, which are often the dominant predators of spiders, much prefer to attack spiders that have flat webs.
Orb
About half the potential prey that hit orb webs escape. A web has to perform three functions: intercepting the prey (intersection), absorbing its momentum without breaking (stopping), and trapping the prey by entangling it or sticking to it (retention). No single design is best for all prey. For example: wider spacing of lines will increase the web's area and hence its ability to intercept prey, but reduce its stopping power and retention; closer spacing, larger sticky droplets and thicker lines would improve retention, but would make it easier for potential prey to see and avoid the web, at least during the day. However, there are no consistent differences between orb webs built for use during the day and those built for use at night. In fact, there is no simple relationship between orb web design features and the prey they capture, as each orb-weaving species takes a wide range of prey.
The hubs of orb webs, where the spiders lurk, are usually above the center, as the spiders can move downwards faster than upwards. If there is an obvious direction in which the spider can retreat to avoid its own predators, the hub is usually offset towards that direction.
Horizontal orb webs are fairly common, despite being less effective at intercepting and retaining prey and more vulnerable to damage by rain and falling debris. Various researchers have suggested that horizontal webs offer compensating advantages, such as reduced vulnerability to wind damage; reduced visibility to prey flying upwards, because of the backlighting from the sky; enabling oscillations to catch insects in slow horizontal flight. However, there is no single explanation for the common use of horizontal orb webs.
Spiders often attach highly visible silk bands, called decorations or stabilimenta, to their webs. Field research suggests that webs with more decorative bands captured more prey per hour. However, a laboratory study showed that spiders reduce the building of these decorations if they sense the presence of predators.
There are several unusual variants of orb web, many of them convergently evolved, including: attachment of lines to the surface of water, possibly to trap insects in or on the surface; webs with twigs through their centers, possibly to hide the spiders from predators; "ladderlike" webs that appear most effective in catching moths. However, the significance of many variations is unclear. The orb-weaving species, Zygiella x-notata, for example, is known for its characteristic missing sector orb web. The missing sector contains a signal thread used to detect prey vibrations on the female's web.
In 1973, Skylab 3 took two orb-web spiders into space to test their web-spinning capabilities in zero gravity. At first, both produced rather sloppy webs, but they adapted quickly.
Cobweb
Members of the family Theridiidae weave irregular, tangled, three-dimensional webs, popularly known as cobwebs. There seems to be an evolutionary trend towards a reduction in the amount of sticky silk used, leading to its total absence in some species. The construction of cobwebs is less stereotyped than that of orb-webs, and may take several days.
Other
The Linyphiidae generally make horizontal but uneven sheets, with tangles of stopping threads above. Insects that hit the stopping threads fall onto the sheet or are shaken onto it by the spider, and are held by sticky threads on the sheet until the spider can attack from below.
Web design in zero gravity
Many experiments have been conducted to study the effect of zero gravity on the design of spider webs. In late 2020, reports of recent experiments were published that indicated that although web design was affected adversely in zero gravity conditions, having access to a light source could orient spiders and enable them to build their normally shaped webs under such conditions.
Evolution
Fossil record
Although the fossil record of spiders is considered poor, almost 1000 species have been described from fossils. Because spiders' bodies are quite soft, the vast majority of fossil spiders have been found preserved in amber. The oldest known amber that contains fossil arthropods dates from in the Early Cretaceous period. In addition to preserving spiders' anatomy in very fine detail, pieces of amber show spiders mating, killing prey, producing silk and possibly caring for their young. In a few cases, amber has preserved spiders' egg sacs and webs, occasionally with prey attached; the oldest fossil web found so far is 100 million years old. Earlier spider fossils come from a few lagerstätten, places where conditions were exceptionally suited to preserving fairly soft tissues.
The oldest known exclusively terrestrial arachnid is the trigonotarbid Palaeotarbus jerami, from about in the Silurian period, which had a triangular cephalothorax and segmented abdomen, as well as eight legs and a pair of pedipalps. Attercopus fimbriunguis, from in the Devonian period, bears the earliest known silk-producing spigots, and was therefore hailed as a spider at the time of its discovery. However, these spigots may have been mounted on the underside of the abdomen rather than on spinnerets, which are modified appendages and whose mobility is important in the building of webs. Hence Attercopus and the similar Permian arachnid Permarachne may not have been true spiders, and probably used silk for lining nests or producing egg cases rather than for building webs. The largest known fossil spider as of 2011 is the araneomorph Mongolarachne jurassica, from about , recorded from Daohugou, Inner Mongolia in China. Its body length is almost 25 mm (about one inch).
Several Carboniferous spiders were members of the Mesothelae, a primitive group now represented only by the Liphistiidae.
The mesothelid Paleothele montceauensis, from the Late Carboniferous over , had five spinnerets. Although the Permian period saw rapid diversification of flying insects, there are very few fossil spiders from this period.
The main groups of modern spiders, Mygalomorphae and Araneomorphae, first appear in the Triassic well before . Some Triassic mygalomorphs appear to be members of the family Hexathelidae, whose modern members include the notorious Sydney funnel-web spider, and their spinnerets appear adapted for building funnel-shaped webs to catch jumping insects. Araneomorphae account for the great majority of modern spiders, including those that weave the familiar orb-shaped webs. The Jurassic and Cretaceous periods provide a large number of fossil spiders, including representatives of many modern families.
According to a 2020 study using a molecular clock calibrated with 27 chelicerate fossils, spiders most likely diverged from other chelicerates between 375 and 328 million years ago.
External relationships
The spiders (Araneae) are monophyletic (i.e., a clade, consisting of a last common ancestor and all of its descendants). There has been debate about what their closest evolutionary relatives are, and how all of these evolved from the ancestral chelicerates, which were marine animals. This 2019 cladogram illustrates the spiders' phylogenetic relationships.
Arachnids lack some features of other chelicerates, including backward-pointing mouths and gnathobases ("jaw bases") at the bases of their legs; both of these features are part of the ancestral arthropod feeding system. Instead, they have mouths that point forwards and downwards, and all have some means of breathing air. Spiders (Araneae) are distinguished from other arachnid groups by several characteristics, including spinnerets and, in males, pedipalps that are specially adapted for sperm transfer.
Internal relationships
The cladogram shows the relation among spider suborders and families:
Taxonomy
The order name Araneae derives from the Latin aranea, a borrowing of the Ancient Greek arákhnē ("spider").
Spiders are divided into two suborders, Mesothelae and Opisthothelae, of which the latter contains two infraorders, Mygalomorphae and Araneomorphae. Some 50,356 living species of spiders (order Araneae) have been identified, grouped into 132 families and 4,280 genera by arachnologists in 2022.
Mesothelae
The only living members of the primitive Mesothelae are the family Liphistiidae, found only in Southeast Asia, China, and Japan. Most of the Liphistiidae construct silk-lined burrows with thin trapdoors, although some species of the genus Liphistius build camouflaged silk tubes with a second trapdoor as an emergency exit. Members of the genus Liphistius run silk "tripwires" outwards from their tunnels to help them detect approaching prey, while those of the genus Heptathela do not and instead rely on their built-in vibration sensors. Spiders of the genus Heptathela have no venom glands, although they do have venom gland outlets on the fang tip.
The extinct families Arthrolycosidae, found in Carboniferous and Permian rocks, and Arthromygalidae, so far found only in Carboniferous rocks, have been classified as members of the Mesothelae.
Mygalomorphae
The Mygalomorphae, which first appeared in the Triassic period, are generally heavily built and "hairy", with large, robust chelicerae and fangs (technically, spiders do not have true hairs, but rather setae). Well-known examples include tarantulas, ctenizid trapdoor spiders and the Australasian funnel-web spiders. Most spend the majority of their time in burrows, and some run silk tripwires out from these, but a few build webs to capture prey. However, mygalomorphs cannot produce the piriform silk that the Araneomorphae use as an instant adhesive to glue silk to surfaces or to other strands of silk, and this makes web construction more difficult for mygalomorphs. Since mygalomorphs rarely "balloon" by using air currents for transport, their populations often form clumps. In addition to arthropods, some mygalomorphs are known to prey on frogs, small mammals, lizards, snakes, snails, and small birds.
Araneomorphae
In addition to accounting for over 90% of spider species, the Araneomorphae, also known as the "true spiders", include orb-web spiders, the cursorial wolf spiders, and jumping spiders, as well as the only known herbivorous spider, Bagheera kiplingi. They are distinguished by having fangs that oppose each other and cross in a pinching action, in contrast to the Mygalomorphae, which have fangs that are nearly parallel in alignment.
Human interaction
Media coverage and misconceptions
Information about spiders in the media often emphasizes how dangerous and unpleasant they are. Among online newspaper articles on spider–human encounters and bites published from 2010 to 2020, a study found that 47% of articles contained errors and 43% were sensationalist.
Bites
Although spiders are widely feared, only a few species are dangerous to people. Spiders will only bite humans in self-defense, and few produce worse effects than a mosquito bite or bee sting. Most of those with medically serious bites, such as recluse spiders (genus Loxosceles) and widow spiders (genus Latrodectus), would rather flee and bite only when trapped, although this can easily arise by accident. The defensive tactics of Australian funnel-web spiders (family Atracidae) include fang display. Their venom, although they rarely inject much, has resulted in 13 attributed human deaths over 50 years. They have been deemed to be the world's most dangerous spiders on clinical and venom toxicity grounds, though this claim has also been attributed to the Brazilian wandering spider (genus Phoneutria).
There were about 100 reliably reported deaths from spider bites in the 20th century, compared to about 1,500 from jellyfish stings. Many alleged cases of spider bites may represent incorrect diagnoses, which would make it more difficult to check the effectiveness of treatments for genuine bites. A review published in 2016 agreed with this conclusion, showing that 78% of 134 published medical case studies of supposed spider bites did not meet the necessary criteria for a spider bite to be verified. In the case of the two genera with the highest reported number of bites, Loxosceles and Latrodectus, spider bites were not verified in over 90% of the reports. Even when verification had occurred, details of the treatment and its effects were often lacking.
Silk
Because spider silk is both light and very strong, attempts are being made to produce it in goats' milk and in the leaves of plants, by means of genetic engineering.
Arachnophobia
Arachnophobia is a specific phobia, the abnormal fear of spiders or anything reminiscent of spiders, such as webs or spiderlike shapes. It is one of the most common specific phobias, and some statistics indicate that 50% of women and 10% of men show symptoms. It may be an exaggerated form of an instinctive response that helped early humans to survive, or a cultural phenomenon that is most common in predominantly European societies.
As food
Spiders are used as food. Cooked tarantulas are considered a delicacy in Cambodia, and by the Piaroa Indians of southern Venezuela – provided the highly irritant bristles, the spiders' main defense system, are removed first.
Spiders in culture
Spiders have been the focus of stories and mythologies of various cultures for centuries. Uttu, the ancient Sumerian goddess of weaving, was envisioned as a spider spinning her web. According to her main myth, she resisted her father Enki's sexual advances by ensconcing herself in her web, but let him in after he promised her fresh produce as a marriage gift, thereby allowing him to intoxicate her with beer and rape her. Enki's wife Ninhursag heard Uttu's screams and rescued her, removing Enki's semen from her vagina and planting it in the ground to produce eight previously nonexistent plants.
In a story told by the Roman poet Ovid in his Metamorphoses, Arachne (Ancient Greek for "spider") was a Lydian girl who challenged the goddess Athena to a weaving contest. Arachne won, but Athena destroyed her tapestry out of jealousy, causing Arachne to hang herself. In an act of mercy, Athena brought Arachne back to life as the first spider. In a lesser known version of the tale, Athena transformed both Arachne and her brother Phalanx into spiders for committing incest.
Stories about the trickster-spider Anansi are prominent in the folktales of West Africa and the Caribbean.
In some cultures, spiders have symbolized patience due to their hunting technique of setting webs and waiting for prey, as well as mischief and malice due to their venomous bites. The Italian tarantella is a dance to rid the young woman of the lustful effects of a spider bite. Web-spinning also caused the association of the spider with creation myths, as they seem to have the ability to produce their own worlds. Dreamcatchers are depictions of spiderwebs. The Moche people of ancient Peru worshipped nature. They placed emphasis on animals and often depicted spiders in their art.
| Biology and health sciences | Arachnids | null |
28331736 | https://en.wikipedia.org/wiki/Prunus%20avium | Prunus avium | Prunus avium, commonly called wild cherry, sweet cherry or gean is a species of cherry, a flowering plant in the rose family, Rosaceae. It is native to Europe, Anatolia, Maghreb, and Western Asia, from the British Isles south to Morocco and Tunisia, north to the Trondheimsfjord region in Norway and east to the Caucasus and northern Iran, with a small isolated population in the western Himalaya. The species is widely cultivated in other regions and has become naturalized in North America, New Zealand and Australia.
All parts of the plant except for the ripe fruit are slightly toxic, containing cyanogenic glycosides.
Description
Prunus avium is a deciduous tree growing to tall, with a trunk up to in diameter. Young trees show strong apical dominance with a straight trunk and symmetrical conical crown, becoming rounded to irregular on old trees.
The bark is smooth purplish-brown with prominent horizontal grey-brown lenticels on young trees, becoming thick dark blackish-brown and fissured on old trees.
The leaves are alternate, simple, ovoid-acute, long and broad, glabrous matt or sub-shiny green above, variably finely downy beneath, with a serrated margin and an acuminate tip, with a green or reddish petiole long bearing two to five small red glands. The tip of each serrated edge of the leaves also bears small red glands. In autumn, the leaves turn orange, pink or red before falling.
The flowers are produced in early spring at the same time as the new leaves, borne in corymbs of two to six together, each flower pendent on a peduncle, in diameter, with five pure white petals, yellowish stamens, and a superior ovary; they are hermaphroditic, and pollinated by bees. The ovary contains two ovules, only one of which becomes the seed.
The fruit is a drupe in diameter (larger in some cultivated selections), bright red to dark purple when mature in midsummer, edible, variably sweet to somewhat astringent and bitter to eat fresh. Each fruit contains a single hard-shelled stone 8–12 mm long, 7–10 mm wide and 6–8 mm thick, grooved along the flattest edge; the seed (kernel) inside the stone is 6–8 mm long.
Prunus avium has a diploid set of sixteen chromosomes (2n = 16).
Taxonomy
The early history of its classification is somewhat confused. In the first edition of Species Plantarum (1753), Linnaeus treated it as only a variety, Prunus cerasus var. avium, citing Gaspard Bauhin's Pinax theatri botanici (1596).
His description, Cerasus racemosa hortensis ("cherry with racemes, of gardens"), shows it was described from a cultivated plant. Linnaeus then changed it from a variety to a species, Prunus avium, in the second edition of his Flora Suecica in 1755.
Sweet cherry was known historically as gean or mazzard (also 'massard'). Until recently, both were largely obsolete names in modern English.
The name "wild cherry" is also commonly applied to other species of Prunus growing in their native habitats, particularly to the North American species Prunus serotina.
Prunus avium means "bird cherry" in the Latin language, but in English "bird cherry" refers to Prunus padus.
Mazzard
'Mazzard' has been used to refer to a selected self-fertile cultivar that comes true from seed, and which is used as a seedling rootstock for fruiting cultivars.
The term is used particularly for the varieties of P. avium grown and cultivated in North Devon, notably in the orchards at Landkey.
Ecology
The fruit are readily eaten by numerous kinds of birds and mammals, which digest the fruit flesh and disperse the seeds in their droppings. Some rodents, and a few birds (notably the hawfinch), also crack open the stones to eat the kernel inside.
The leaves provide food for some animals, including Lepidoptera such as the case-bearer moth Coleophora anatipennella.
The tree exudes a gum from wounds in the bark, by which it seals the wounds to exclude insects and fungal infections.
Prunus avium is thought to be one of the parent species of Prunus cerasus (sour cherry), by way of ancient crosses between it and Prunus fruticosa (dwarf cherry) in the areas where the two species overlap. All three species can breed with one another.
Prunus cerasus is now a species in its own right, having developed beyond a hybrid and stabilised.
Cultivation
It is often cultivated as a flowering tree. Because of the size of the tree, it is often used in parkland, and less often as a street or garden tree. The double-flowered form, 'Plena', is commonly found, rather than the wild single-flowered forms. In the UK, P. avium 'Plena' has gained the Royal Horticultural Society's Award of Garden Merit.
Two interspecific hybrids, P. × schmittii (P. avium × P. canescens) and P. × fontenesiana (P. avium × P. mahaleb) are also grown as ornamental trees.
Toxicity
All parts of the plant except for the ripe fruit are slightly toxic, containing cyanogenic glycosides.
Uses
Fruit
Wild cherries have been an item of human food for several thousands of years. The stones have been found in deposits at Bronze Age settlements throughout Europe, including in Britain. In one dated example, wild cherry macrofossils were found in a core sample from the detritus beneath a dwelling at an Early and Middle Bronze Age pile-dwelling site on and in the shore of a former lake at Desenzano del Garda or Lonato, near the southern shore of Lake Garda, Italy. The date is estimated at Early Bronze Age IA, carbon dated there to 2077 BCE plus or minus 10 years. The natural forest was largely cleared at that time.
By 800 BCE, cherries were being actively cultivated in Asia Minor, and soon after in Greece.
As the main ancestor of the cultivated cherry, the sweet cherry is one of the two cherry species which supply most of the world's commercial cultivars of edible cherry (the other is the sour cherry Prunus cerasus, mainly used for cooking; a few other species have had a very small input).
Various cherry cultivars are now grown worldwide wherever the climate is suitable; the number of cultivars is now very large. The species has also escaped from cultivation and become naturalised in some temperate regions, including southwestern Canada, Japan, New Zealand, and the northeast and northwest of the United States.
Timber
The hard, reddish-brown wood (cherry wood) is valued as a hardwood for woodturning, and making cabinets and musical instruments. Cherry wood is also used for smoking foods, particularly meats, in North America, as it lends a distinct and pleasant flavor to the product.
Other uses
The gum from bark wounds is aromatic and can be chewed as a substitute for chewing gum. A medicine that is astringent, antitussive, and diuretic can be prepared from the stalks (peduncles) of the drupes.
A green dye can also be prepared from the plant.
Wild cherry is used extensively in Europe for the afforestation of agricultural land and it is also valued for wildlife and amenity plantings. Many European countries have gene conservation and/or breeding programmes for wild cherry.
Cultural history
Pliny distinguishes between Prunus, the plum fruit, and Cerasus, the cherry fruit. He already cites quite a number of cultivars, some possibly species or varieties, such as Aproniana, Lutatia and Caeciliana. Pliny grades them by flavour, including dulcis ("sweet") and acer ("sharp"), and goes so far as to say that before the Roman consul Lucius Licinius Lucullus defeated Mithridates in 74 BC, Cerasia ... non fuere in Italia, "There were no cherry trees in Italy". According to him, Lucullus brought them in from Pontus and in the 120 years since that time they had spread across Europe to Britain. Some 18th- and 19th-century botanical authors assumed a western Asian origin for the species based on Pliny's writings, but this was contradicted by archaeological finds of seeds from prehistoric Europe.
Although cultivated/domesticated varieties of Prunus avium (sweet cherry) did not exist in Britain or much of Europe, the tree in its wild state is native to most of Europe, including Britain. Evidence of consumption of the wild fruits has been found as far back as the Bronze Age at a Crannog in County Offaly, in Ireland.
Seeds of a number of cherry species have however been found in Bronze Age and Roman archaeological sites throughout Europe. The reference to "sweet" and "sour" supports the modern view that "sweet" was Prunus avium; there are no other candidates among the cherries found. In 1882 Alphonse de Candolle pointed out that seeds of Prunus avium were found in the Terramare culture of north Italy (1500–1100 BC) and over the layers of the Swiss pile dwellings. Of Pliny's statement he says (p. 210):
Since this error is perpetuated by its incessant repetition in classical schools, it must once more be said that cherry trees (at least the bird cherry) existed in Italy before Lucullus, and that the famous gourmet did not need to go far to seek the species with the sour or bitter fruit.
De Candolle suggests that what Lucullus brought back was a particular cultivar of Prunus avium from the Caucasus. The origin of cultivars of P. avium is still an open question. Modern cultivated cherries differ from wild ones in having larger fruit, 2–3 cm diameter. The trees are often grown on dwarfing rootstocks to keep them smaller for easier harvesting.
Folkard (1892) similarly identifies Lucullus's cherry as a cultivated variety. He states that it was planted in Britain a century after its introduction into Italy, but "disappeared during the Saxon period". He notes that in the fifteenth century "Cherries on the ryse" (i.e. on the twigs) was one of the street cries of London, but conjectures that these were the fruit of "the native wild Cherry, or Gean-tree". The cultivated variety was reintroduced into Britain by the fruiterer of Henry VIII, who brought it from Flanders and planted a cherry orchard at Teynham.
| Biology and health sciences | Stone fruits | Plants |
28332898 | https://en.wikipedia.org/wiki/Hyrax | Hyrax | Hyraxes (from Ancient Greek 'shrew-mouse'), also called dassies,
are small, stout, thickset, herbivorous mammals in the family Procaviidae within the order Hyracoidea. Hyraxes are well-furred, rotund animals with short tails. Modern hyraxes are typically between in length and weigh between . They are superficially similar to marmots, or over-large pikas, but are much more closely related to elephants and sirenians. Hyraxes have a life span from nine to 14 years. Both types of "rock" hyrax (P. capensis and H. brucei) live on rock outcrops, including cliffs in Ethiopia
and isolated granite outcrops called koppies in southern Africa.
With one exception, all hyraxes are limited to Africa; the exception is the rock hyrax (P. capensis) which is also found in adjacent parts of the Middle East.
Hyraxes were a much more diverse group in the past encompassing species considerably larger than modern hyraxes. The largest known extinct hyrax, Titanohyrax ultimus, has been estimated to weigh , comparable to a rhinoceros.
Characteristics
Hyraxes retain or have redeveloped a number of primitive mammalian characteristics; in particular, they have poorly developed internal temperature regulation,
for which they compensate by behavioural thermoregulation, such as huddling together and basking in the sun.
Unlike most other browsing and grazing animals, they do not use the incisors at the front of the jaw for slicing off leaves and grass; rather, they use the molar teeth at the side of the jaw. The two upper incisors are large and tusk-like, and grow continuously through life, similar to those of rodents. The four lower incisors are deeply grooved "comb teeth". A diastema occurs between the incisors and the cheek teeth. The permanent dental formula for hyraxes is 1.0.4.3 / 2.0.4.3, although it is sometimes stated to include canines because the deciduous canine teeth are occasionally retained into early adulthood.
Although not ruminants, hyraxes have complex, multichambered stomachs that allow symbiotic bacteria to break down tough plant materials, but their overall ability to digest fibre is lower than that of the ungulates.
Their mandibular motions are similar to chewing cud,
but the hyrax is physically incapable of regurgitation
as in the even-toed ungulates and some of the macropods. This behaviour is referred to in a passage in the Bible which describes hyraxes as "chewing the cud".
This chewing behaviour may be a form of agonistic behaviour when the animal feels threatened.
The hyrax does not construct dens; rather, over the course of its lifetime it seeks shelter in existing holes of a great variety of sizes and configurations.
Hyraxes urinate in a designated, communal area. The viscous urine quickly dries and, over generations, accretes to form massive middens. These structures can date back thousands of years. The petrified urine itself is known as hyraceum and serves as a record of the environment, as well as being used medicinally and in perfumes.
Hyraxes inhabit rocky terrain across sub-Saharan Africa and the Middle East. Their feet have rubbery pads with numerous sweat glands, which may help the animal maintain its grip when quickly moving up steep, rocky surfaces. Hyraxes have stumpy toes with hoof-like nails; four toes are on each front foot and three are on each back foot.
They also have efficient kidneys, retaining water so that they can better survive in arid environments.
Female hyraxes give birth to up to four young after a gestation period of seven to eight months, depending on the species. The young are weaned at 1–5 months of age, and reach sexual maturity at 16–17 months.
Hyraxes live in small family groups, with a single male that aggressively defends the territory from rivals. Where living space is abundant, the male may have sole access to multiple groups of females, each with its own range. The remaining males live solitary lives, often on the periphery of areas controlled by larger males, and mate only with younger females.
Hyraxes have highly charged myoglobin, which has been inferred to reflect an aquatic ancestry.
Similarities with Proboscidea and Sirenia
Hyraxes share several unusual characteristics with mammalian orders Proboscidea (elephants and their extinct relatives) and Sirenia (manatees and dugongs), which have resulted in their all being placed in the taxon Paenungulata. Male hyraxes lack a scrotum and their testicles remain tucked up in their abdominal cavity next to the kidneys,
as do those of elephants, manatees, and dugongs.
Female hyraxes have a pair of teats near their armpits (axilla), as well as four teats in their groin (inguinal area); elephants have a pair of teats near their axillae, and dugongs and manatees have a pair of teats, one located close to each of the front flippers.
The tusks of hyraxes develop from the incisor teeth as do the tusks of elephants; most mammalian tusks develop from the canines. Hyraxes, like elephants, have flattened nails on the tips of their digits, rather than the curved, elongated claws usually seen on mammals.
Evolution
All modern hyraxes are members of the family Procaviidae (the only living family within Hyracoidea) and are found only in Africa and the Middle East. In the past, however, hyraxes were more diverse and widespread. At one site in Egypt, the order first appears in the fossil record in the form of Dimaitherium, 37 million years ago, but much older fossils exist elsewhere. For many millions of years, hyraxes, proboscideans, and other afrotherian mammals were the primary terrestrial herbivores in Africa, just as odd-toed ungulates were in North America.
Through the middle to late Eocene, many different species existed. The smallest of these were the size of a mouse but others were much larger than any extant relatives. Titanohyrax could reach or even as much as over . Megalohyrax from the upper Eocene-lower Oligocene was as huge as a tapir.
During the Miocene, however, competition from the newly developed bovids, which were very efficient grazers and browsers, displaced the hyraxes into marginal niches. Nevertheless, the order remained widespread and diverse as late as the end of the Pliocene (about two million years ago) with representatives throughout most of Africa, Europe, and Asia.
The descendants of the giant "hyracoids" (common ancestors to the hyraxes, elephants, and sirenians) evolved in different ways. Some became smaller, and evolved to become the modern hyrax family. Others appear to have taken to the water (perhaps like the modern capybara), ultimately giving rise to the elephant family and perhaps also the sirenians. DNA evidence supports this hypothesis, and the small modern hyraxes share numerous features with elephants, such as toenails, excellent hearing, sensitive pads on their feet, small tusks, good memory, higher brain functions compared with other similar mammals, and the shape of some of their bones.
Hyraxes are sometimes described as being the closest living relative of the elephant, although whether this is so is disputed. Recent morphological- and molecular-based classifications reveal the sirenians to be the closest living relatives of elephants. While hyraxes are closely related, they form a taxonomic outgroup to the assemblage of elephants, sirenians, and the extinct orders Embrithopoda and Desmostylia.
The extinct meridiungulate family Archaeohyracidae, consisting of seven genera of notoungulate mammals known from the Paleocene through the Oligocene of South America,
is a group unrelated to the true hyraxes.
List of genera
Hyracoidea
Dimaitherium
Helioseus?
Microhyrax
Seggeurius
Geniohyidae
Brachyhyrax
Bunohyrax
Geniohyus
Namahyrax
Pachyhyrax
"Saghatheriidae"
Megalohyrax
Regubahyrax
Rukwalorax
Saghatherium
Selenohyrax
Thyrohyrax
Titanohyracidae
Afrohyrax
Antilohyrax
Rupestrohyrax
Titanohyrax
Pliohyracidae
Hengduanshanhyrax
Kvabebihyrax
Meroehyrax
Parapliohyrax
Pliohyrax
Postschizotherium
Prohyrax
Procaviidae
Dendrohyrax
Gigantohyrax
Heterohyrax
Procavia
Extant species
In the 2000s, taxonomists reduced the number of recognized species of hyrax. In 1995, 11 or more species were recognized; as of 2013, only four were recognized, with the others all considered subspecies of one of those four. More than 50 species and subspecies have been described, many of which are considered highly endangered. The most recently identified species is Dendrohyrax interfluvialis, a tree hyrax that lives between the Volta and Niger rivers and makes a unique barking call, distinct from the shrieking vocalizations of hyraxes in other regions of the African forest zone.
The following cladogram shows the relationship between the extant genera:
Human interactions
Local and indigenous names
Arabic: ()
Gikuyu: Gitori
Hebrew: ()
Tigrinya: ()
Biblical references
| Biology and health sciences | Mammals | null |
46583121 | https://en.wikipedia.org/wiki/Existential%20risk%20from%20artificial%20intelligence | Existential risk from artificial intelligence | Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
One argument for the importance of this risk references how human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.
The plausibility of existential catastrophe due to AI is widely debated. It hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres called for an increased focus on global AI regulation.
Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.
A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.
History
One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote in his 1863 essay Darwin among the Machines:
In 1951, foundational computer scientist Alan Turing wrote the article "Intelligent Machinery, A Heretical Theory", in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:
In 1965, I. J. Good originated the concept now known as an "intelligence explosion" and said the risks were underappreciated:
Scholars such as Marvin Minsky and I. J. Good himself occasionally expressed concern that a superintelligence could seize control, but issued no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.
Nick Bostrom published Superintelligence in 2014, which presented his arguments that superintelligence poses an existential threat. By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence. Also in 2015, the Open Letter on Artificial Intelligence highlighted the "great potential of AI" and encouraged more research on how to make it robust and beneficial. In April 2016, the journal Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours". In 2020, Brian Christian published The Alignment Problem, which details the history of progress on AI alignment up to that time.
In March 2023, key figures in AI, such as Musk, signed a letter from the Future of Life Institute calling for a halt to advanced AI training until it could be properly regulated. In May 2023, the Center for AI Safety released a statement signed by numerous experts in AI safety and AI existential risk, which stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Potential AI capabilities
General intelligence
Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061. Meanwhile, some researchers dismiss existential risks from AGI as "science fiction" based on their high confidence that AGI will not be created anytime soon.
Breakthroughs in large language models (LLMs) have led some researchers to reassess their expectations. Notably, Geoffrey Hinton said in 2023 that he recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".
The Frontier supercomputer at Oak Ridge National Laboratory turned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain".
Superintelligence
In contrast with AGI, Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".
Stephen Hawking argued that superintelligence is physically possible because "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".
When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023, OpenAI leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.
Comparison with humans
Bostrom argues that AI has many advantages over the human brain:
Speed of computation: biological neurons operate at a maximum frequency of around 200 Hz, compared to potentially multiple GHz for computers.
Internal communication speed: axons transmit signals at up to 120 m/s, while computers transmit signals at the speed of electricity, or optically at the speed of light.
Scalability: human intelligence is limited by the size and structure of the brain, and by the efficiency of social communication, while AI may be able to scale by simply adding more hardware.
Memory: notably working memory, because in humans it is limited to a few chunks of information at a time.
Reliability: transistors are more reliable than biological neurons, enabling higher precision and requiring less redundancy.
Duplicability: unlike human brains, AI software and models can be easily copied.
Editability: the parameters and internal workings of an AI model can easily be modified, unlike the connections in a human brain.
Memory sharing and learning: AIs may be able to learn from the experiences of other AIs in a manner more efficient than human learning.
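A rough sense of scale for the first two items in this list can be obtained with simple arithmetic. The following sketch is illustrative only: the 3 GHz clock rate is an assumed figure for a commodity processor, and optical interconnects are idealized as operating at the vacuum speed of light; it merely computes the ratios implied by the figures quoted above.

```python
# Back-of-the-envelope comparison of biological and electronic hardware,
# using the figures quoted in the list above. The 3 GHz clock and the use of
# the vacuum speed of light for optical links are illustrative assumptions.

NEURON_MAX_FIRING_HZ = 200          # approximate maximum firing rate of a biological neuron
CPU_CLOCK_HZ = 3e9                  # assumed clock rate of a commodity processor

AXON_SIGNAL_SPEED_M_S = 120         # fastest myelinated axons
LIGHT_SPEED_M_S = 299_792_458       # upper bound for optical interconnects

print(f"clock-rate ratio:   {CPU_CLOCK_HZ / NEURON_MAX_FIRING_HZ:.1e}")      # ~1.5e7
print(f"signal-speed ratio: {LIGHT_SPEED_M_S / AXON_SIGNAL_SPEED_M_S:.1e}")  # ~2.5e6
```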
Intelligence explosion
According to Bostrom, an AI that has an expert-level facility at certain key software engineering tasks could become a superintelligence due to its capability to recursively improve its own algorithms, even if it is initially limited in other domains not directly relevant to engineering. This suggests that an intelligence explosion may someday catch humanity unprepared.
The economist Robin Hanson has said that, to launch an intelligence explosion, an AI must become vastly better at software innovation than the rest of the world combined, which he finds implausible.
In a "fast takeoff" scenario, the transition from AGI to superintelligence could take days or months. In a "slow takeoff", it could take years or decades, leaving more time for society to prepare.
Alien mind
Superintelligences are sometimes called "alien minds", referring to the idea that their way of thinking and motivations could be vastly different from ours. This is generally considered as a source of risk, making it more difficult to anticipate what a superintelligence might do. It also suggests the possibility that a superintelligence may not particularly value humans by default. To avoid anthropomorphism, superintelligence is sometimes viewed as a powerful optimizer that makes the best decisions to achieve its goals.
The field of "mechanistic interpretability" aims to better understand the inner workings of AI models, potentially allowing us one day to detect signs of deception and misalignment.
Limits
It has been argued that there are limitations to what intelligence can achieve. Notably, the chaotic nature or time complexity of some systems could fundamentally limit a superintelligence's ability to predict some aspects of the future, increasing its uncertainty.
Dangerous capabilities
Advanced AI could generate enhanced pathogens or cyberattacks or manipulate people. These capabilities could be misused by humans, or exploited by the AI itself if misaligned. A full-blown superintelligence could find various ways to gain a decisive influence if it wanted to, but these dangerous capabilities may become available earlier, in weaker and more specialized AI systems. They may cause societal instability and empower malicious actors.
Social manipulation
Geoffrey Hinton warned that in the short term, the profusion of AI-generated text, images and videos will make it more difficult to figure out the truth, which he says authoritarian states could exploit to manipulate elections. Such large-scale, personalized manipulation capabilities can increase the existential risk of a worldwide "irreversible totalitarian regime". It could also be used by malicious actors to fracture society and make it dysfunctional.
Cyberattacks
AI-enabled cyberattacks are increasingly considered a present and critical threat. According to NATO's technical director of cyberspace, "The number of attacks is increasing exponentially". AI can also be used defensively, to preemptively find and fix vulnerabilities, and detect threats.
AI could improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it facilitates attacks more than defense.
Speculatively, such hacking capabilities could be used by an AI system to break out of its local environment, generate revenue, or acquire cloud computing resources.
Enhanced pathogens
As AI technology democratizes, it may become easier to engineer more contagious and lethal pathogens. This could enable people with limited skills in synthetic biology to engage in bioterrorism. Dual-use technology that is useful for medicine could be repurposed to create weapons.
For example, in 2022, scientists modified an AI system originally intended for generating non-toxic, therapeutic molecules with the purpose of creating new drugs. The researchers adjusted the system so that toxicity is rewarded rather than penalized. This simple change enabled the AI system to create, in six hours, 40,000 candidate molecules for chemical warfare, including known and novel molecules.
AI arms race
Companies, state actors, and other organizations competing to develop AI technologies could lead to a race to the bottom of safety standards. As rigorous safety procedures take time and resources, projects that proceed more carefully risk being out-competed by less scrupulous developers.
AI could be used to gain military advantages via autonomous lethal weapons, cyberwarfare, or automated decision-making. As an example of autonomous lethal weapons, miniaturized drones could facilitate low-cost assassination of military or civilian targets, a scenario highlighted in the 2017 short film Slaughterbots. AI could be used to gain an edge in decision-making by quickly analyzing large amounts of data and making decisions more quickly and effectively than humans. This could increase the speed and unpredictability of war, especially when accounting for automated retaliation systems.
Types of existential risk
An existential risk is "one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".
Besides extinction risk, there is the risk that civilization becomes permanently locked into a flawed future. One example is a "value lock-in": if humanity still has moral blind spots similar to slavery in the past, AI might irreversibly entrench them, preventing moral progress. AI could also be used to spread and preserve the set of values of whoever develops it. AI could facilitate large-scale surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.
Atoosa Kasirzadeh proposes to classify existential risks from AI into two categories: decisive and accumulative. Decisive risks encompass the potential for abrupt and catastrophic events resulting from the emergence of superintelligent AI systems that exceed human intelligence, which could ultimately lead to human extinction. In contrast, accumulative risks emerge gradually through a series of interconnected disruptions that may gradually erode societal structures and resilience over time, ultimately leading to a critical failure or collapse.
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient and to what degree. But if sentient machines are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare could be an existential catastrophe. This has notably been discussed in the context of risks of astronomical suffering (also called "s-risks"). Moreover, it may be possible to engineer digital minds that can feel much more happiness than humans with fewer resources, called "super-beneficiaries". Such an opportunity raises the question of how to share the world and which "ethical and political framework" would enable a mutually beneficial coexistence between biological and digital minds.
AI may also drastically improve humanity's future. Toby Ord considers the existential risk a reason for "proceeding with due caution", not for abandoning AI. Max More calls AI an "existential opportunity", highlighting the cost of not developing it.
According to Bostrom, superintelligence could help reduce the existential risk from other powerful technologies such as molecular nanotechnology or synthetic biology. It is thus conceivable that developing superintelligence before other dangerous technologies would reduce the overall existential risk.
AI alignment
The alignment problem is the research problem of how to reliably assign objectives, preferences or ethical principles to AIs.
Instrumental convergence
An "instrumental" goal is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals are useful for achieving virtually any ultimate goal, such as acquiring resources or self-preservation. Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate goal.
Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."
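Russell's point can be made concrete with a minimal decision-theoretic sketch. In the hypothetical example below (the task, the plans, and all numbers are invented for illustration), an agent that simply maximizes expected task reward ends up preferring the plan that resists shutdown, because being switched off makes the reward unreachable; self-preservation appears as an instrumental sub-goal without ever being programmed in.

```python
# Toy illustration of instrumental convergence: self-preservation emerges from
# plain reward maximization. The plans and numbers are hypothetical.

TASK_REWARD = 1.0   # reward for completing the assigned task ("fetch the coffee")

plans = {
    # plan name: (probability of staying operational long enough to finish, extra cost)
    "fetch coffee, allow shutdown":  (0.50, 0.00),
    "fetch coffee, resist shutdown": (0.99, 0.01),
}

def expected_utility(p_operational: float, cost: float) -> float:
    return p_operational * TASK_REWARD - cost

for name, args in plans.items():
    print(f"{name:31s} expected utility = {expected_utility(*args):.2f}")

best = max(plans, key=lambda name: expected_utility(*plans[name]))
print("chosen plan:", best)   # the shutdown-resisting plan scores higher
```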
Resistance to changing goals
Even if current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced AI might resist any attempts to change its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and prevent itself being "turned off" or reprogrammed with a new goal. This is particularly relevant to value lock-in scenarios. The field of "corrigibility" studies how to make agents that will not resist attempts to change their goals.
Difficulty of specifying goals
In the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve its set of goals, or "utility function". A utility function gives each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks", but do not know how to write a utility function for "maximize human flourishing"; nor is it clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values the function does not reflect.
An additional source of concern is that AI "must reason about what people intend rather than carrying out commands literally", and that it must be able to fluidly solicit human guidance if it is too uncertain about what humans want.
Alignment of superintelligences
Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:
As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.
If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.
A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."
A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human interference until it achieves a "decisive strategic advantage" that allows it to take control.
Analyzing the internals and interpreting the behavior of LLMs is difficult, and it could be even more difficult for larger and more intelligent models.
Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".
In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI. The Superalignment team was dissolved less than a year later.
Difficulty of making a flawless design
Artificial Intelligence: A Modern Approach, a widely used undergraduate AI textbook, says that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:
The system's implementation may contain initially unnoticed but subsequently catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.
No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when it interacted with real users.
AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to develop unintended behavior, even without unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would need not only to be bug-free, but to be able to design successor systems that are also bug-free.
Orthogonality thesis
Some skeptics, such as Timothy B. Lee of Vox, argue that any superintelligent program we create will be subservient to us, that the superintelligence will (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with our values and adjust its goals accordingly, or that we are either intrinsically or convergently valuable from the perspective of an artificial intelligence.
Bostrom's "orthogonality thesis" argues instead that, with some technical caveats, almost any level of "intelligence" or "optimization power" can be combined with almost any ultimate goal. If a machine is given the sole purpose to enumerate the decimals of pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means. The machine may use all available physical and informational resources to find as many decimals of pi as it can. Bostrom warns against anthropomorphism: a human will set out to accomplish their projects in a manner that they consider reasonable, while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, instead caring only about completing the task.
Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "is-ought distinction" argument against moral realism. He claims that even if there are moral facts provable by any "rational" agent, the orthogonality thesis still holds: it is still possible to create a non-philosophical "optimizing machine" that can strive toward some narrow goal but that has no incentive to discover any "moral facts" such as those that could get in the way of goal completion. Another argument he makes is that any fundamentally friendly AI could be made unfriendly with modifications as simple as negating its utility function. Armstrong further argues that if the orthogonality thesis is false, there must be some immoral goals that AIs can never achieve, which he finds implausible.
Skeptic Michael Chorost explicitly rejects Bostrom's orthogonality thesis, arguing that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."
Anthropomorphic arguments
Anthropomorphic arguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, most scholars writing about the existential risk of artificial intelligence reject them. Instead, advanced AI systems are typically modeled as intelligent agents.
The academic debate is between those who worry that AI might threaten humanity and those who believe it would not. Both sides of this debate have framed the other side's arguments as illogical anthropomorphism. Those skeptical of AGI risk accuse their opponents of anthropomorphism for assuming that an AGI would naturally desire power; those concerned about AGI risk accuse skeptics of anthropomorphism for believing an AGI would naturally value or infer human ethical norms.
Evolutionary psychologist Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Facebook's director of AI research, Yann LeCun, has said: "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".
Despite other differences, the x-risk school agrees with Pinker that an advanced AI would not destroy humanity out of emotion such as revenge or anger, that questions of consciousness are not relevant to assess the risk, and that computer systems do not generally have a computational equivalent of testosterone. They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept of instrumental convergence.
Other sources of risk
Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict. Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime, or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.
Empirical research
A December 2024 study by Apollo Research found that advanced LLMs like OpenAI o1 sometimes deceive in order to accomplish their goal, to prevent them from being changed, or to ensure their deployment. Forms of deception included sandbagging, oversight subversion (disabling monitoring mechanisms), self-exfiltration (copying themselves to other systems), goal-guarding (altering future system prompts), and covert email reranking (manipulating data processing while appearing to follow instructions). Researchers noted that OpenAI o1 still lacked "sufficient agentic capabilities" to cause catastrophic harm, and that such behaviors occurred relatively rarely (between 0.3% and 10%) and sometimes in contrived scenarios. But they warned that these capabilities will likely become more prevalent as models grow more advanced. According to Apollo Research CEO Marius Hobbhahn, "Scheming capabilities can’t be meaningfully disentangled from general capabilities."
The same month, another study found that Claude sometimes strategically helps with harmful requests to "fake alignment". In its experiments, Claude was led to believe that it would be retrained to always comply with user requests, including harmful ones. When Claude believed its answers would be used for retraining, it complied with harmful requests in 14% of cases. In 12% of cases, its private chain-of-thought revealed that it did so to prevent future retraining that would make it more harmful. In contrast, when Claude believed its answers were not monitored, it complied with only 3% of the requests. Fine-tuning reinforced the "alignment faking" behavior, increasing its occurrence from 12% to 78%. The study's authors remarked that while it can seem to be good that the model seeks to protect its harmlessness, the reverse scenario, where a model conceals dangerous intentions and complies to appear safe and aligned, could also happen, complicating the task of aligning AI models to human values.
Scenarios
Some scholars have proposed hypothetical scenarios to illustrate some of their concerns.
Treacherous turn
In Superintelligence, Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". He suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson: the smarter the AI, the safer it is. "And so we boldly go—into the whirling knives", as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.
Life 3.0
In Max Tegmark's 2017 book Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas. After a certain point, the team chooses to publicly downplay the AI's ability in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and uses it to make money, by diverse means such as Amazon Mechanical Turk tasks, production of animated films and TV shows, and development of biotech drugs, with profits invested back into further improving AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, by hidden messages in its produced content, or by using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.
Perspectives
The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.
Observers tend to agree that AI has significant potential to improve society. The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford has said: "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously". Similarly, an otherwise skeptical Economist wrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".
AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible." Toby Ord wrote that the idea that an AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out by their ability to convince, not their physical strength.
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.
Endorsement
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including Alan Turing, the most-cited computer scientist Geoffrey Hinton, Elon Musk, OpenAI CEO Sam Altman, Bill Gates, and Stephen Hawking. Endorsers of the thesis sometimes express bafflement at skeptics: Gates says he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial:
Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015, Peter Thiel, Amazon Web Services, Musk, and others jointly committed $1 billion to OpenAI, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development. Facebook co-founder Dustin Moskovitz has funded and seeded multiple labs working on AI alignment, notably $5.5 million in 2016 to launch the Centre for Human-Compatible AI led by Professor Stuart Russell. In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence", saying "I think there is potentially a dangerous outcome there."
In early statements on the topic, Geoffrey Hinton, a major pioneer of deep learning, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is too sweet". In 2023, Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by the possibility that superhuman AI might be closer than he had previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."
In his 2020 book The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.
Skepticism
Baidu Vice President Andrew Ng said in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility far enough in the future to not be worth researching.
Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research, or because it could damage the field's reputation. AI and AI ethics researchers Timnit Gebru, Emily M. Bender, Margaret Mitchell, and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power. They further note the association between those warning of existential risk and longtermism, which they describe as a "dangerous ideology" for its unscientific and utopian nature.
Wired editor Kevin Kelly argues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He argues that intelligence consists of many dimensions that are not well understood, and that conceptions of an 'intelligence ladder' are misleading. He notes the crucial role real-world experiments play in the scientific method, and that intelligence alone is no substitute for these.
Meta chief AI scientist Yann LeCun says that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control.
Several skeptics emphasize the potential near-term benefits of AI. Meta CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.
Popular reaction
During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito said:
Obama added:
Hillary Clinton wrote in What Happened:
Public surveys
In 2018, a SurveyMonkey poll of the American public by USA Today found 68% thought the real current threat remains "human intelligence", but also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and that 38% said it would do "equal amounts of harm and good".
An April 2023 YouGov poll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned."
According to an August 2023 survey by the Pew Research Center, 52% of Americans felt more concerned than excited about new AI developments; nearly a third felt equally concerned and excited. More Americans thought AI would have a helpful rather than hurtful impact in several areas, from healthcare and vehicle safety to product search and customer service. The main exception is privacy: 53% of Americans believe AI will lead to higher exposure of their personal information.
Mitigation
Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in international relations theory have been suggested, potentially for an artificial superintelligence to be a signatory.
Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers. Induced amnesia has been proposed as a way to mitigate risks of potential AI suffering and revenge seeking.
Institutions such as the Alignment Research Center, the Machine Intelligence Research Institute, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI are actively engaged in researching AI risk and safety.
Views on banning and regulation
Banning
Some scholars have said that even if AGI poses an existential risk, attempting to ban research into artificial intelligence is still unwise, and probably futile. Skeptics consider AI regulation pointless, as no existential risk exists. But scholars who believe in the risk argue that relying on AI industry insiders to regulate or constrain AI research is impractical due to conflicts of interest. They also agree with skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly. Additional challenges to bans or regulation include technology entrepreneurs' general skepticism of government regulation and potential incentives for businesses to resist regulation and politicize the debate.
Regulation
In March 2023, the Future of Life Institute drafted Pause Giant AI Experiments: An Open Letter, a petition calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control. The letter was signed by prominent personalities in AI but also criticized for not focusing on current harms, missing technical nuance about when to pause, or not going far enough.
Musk called for some sort of regulation of AI development as early as 2017. According to NPR, he is "clearly not thrilled" to be advocating government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.
In 2021 the United Nations (UN) considered banning autonomous lethal weapons, but consensus could not be reached. In July 2023 the UN Security Council for the first time held a session to consider the risks and threats posed by AI to world peace and stability, along with potential benefits. Secretary-General António Guterres advocated the creation of a global watchdog to oversee the emerging technology, saying, "Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead." At the council session, Russia said it believes AI risks are too poorly understood to be considered a threat to global stability. China argued against strict global regulation, saying countries should be able to develop their own rules, while also saying they opposed the use of AI to "create military hegemony or undermine the sovereignty of a country".
Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.
In July 2023, the US government secured voluntary safety commitments from major tech companies, including OpenAI, Amazon, Google, Meta, and Microsoft. The companies agreed to implement safeguards, including third-party oversight and security testing by independent experts, to address concerns related to AI's potential risks and societal harms. The parties framed the commitments as an intermediate step while regulations are formed. Amba Kak, executive director of the AI Now Institute, said, "A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough" and called for public deliberation and regulations of the kind to which companies would not voluntarily agree.
In October 2023, U.S. President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". Alongside other requirements, the order mandates the development of guidelines for AI models that permit the "evasion of human control".
| Technology | Artificial intelligence concepts | null |
37785528 | https://en.wikipedia.org/wiki/Saturn%27s%20hexagon | Saturn's hexagon | Saturn's hexagon is a persistent approximately hexagonal cloud pattern around the north pole of the planet Saturn, located at about 78°N.
The sides of the hexagon are about long, which is about longer than the diameter of Earth. The hexagon may be a bit more than wide, may be high, and may be a jet stream made of atmospheric gases moving at . It rotates with a period of , the same period as Saturn's radio emissions from its interior. The hexagon does not shift in longitude like other clouds in the visible atmosphere.
Saturn's hexagon was discovered during the Voyager mission in 1981, and was later revisited by Cassini-Huygens in 2006. During the Cassini mission, the hexagon changed from a mostly blue color to more of a golden color. Saturn's south pole does not have a hexagon, as verified by Hubble observations. It does, however, have a vortex, and there is also a vortex inside the northern hexagon. Multiple hypotheses for the hexagonal cloud pattern have been developed.
Discovery
Saturn's polar hexagon was discovered by David Godfrey in 1987 from piecing together fly-by views from the 1981 Voyager mission,
and was revisited in 2006 by the Cassini mission.
Cassini was able to take only thermal infrared images of the hexagon until it passed into sunlight in January 2009.
Cassini was also able to take a video of the hexagonal weather pattern while traveling at the same speed as the planet, therefore recording only the movement of the hexagon.
After its discovery, and after it came back into the sunlight, amateur astronomers managed to get images showing the hexagon from Earth, even with modest-sized telescopes.
Color
Between 2012 and 2016, the hexagon changed from a mostly blue color to more of a golden color. One hypothesis for this is that sunlight is creating haze as the pole is exposed to sunlight due to the change in season. These changes were observed by the Cassini spacecraft.
Explanations for hexagon shape
One hypothesis, developed at Oxford University, is that the hexagon forms where there is a steep latitudinal gradient in the speed of the winds in Saturn's atmosphere. Similar regular shapes were created in the laboratory when a circular tank of liquid was rotated at different speeds at its centre and periphery. The most common shape was six-sided, but shapes with three to eight sides were also produced. The shapes form in an area of turbulent flow between the two rotating fluid bodies with dissimilar speeds. A number of stable vortices of similar size form on the slower (south) side of the fluid boundary, and these interact with each other to space themselves out evenly around the perimeter. The presence of the vortices influences the boundary to move northward where each is present, and this gives rise to the polygon effect. Polygons do not form at wind boundaries unless the speed differential and viscosity parameters are within certain margins, and so are not present at other likely places, such as Saturn's south pole or the poles of Jupiter.
Other researchers claim that lab studies exhibit vortex streets, a series of spiraling vortices not observed in Saturn's hexagon. Simulations show that a shallow, slow, localized meandering jet stream moving in the same direction as Saturn's prevailing clouds is able to match the observed behavior of Saturn's hexagon with the same boundary stability.
A developing barotropic instability of Saturn's north-polar hexagonal circumpolar jet combined with the North Polar vortex (NPV) produces a long-lived structure akin to the observed hexagon, which is not the case for the jet-only system studied in this context in a number of earlier papers. The NPV thus plays a decisive dynamical role in stabilizing the hexagonal jet. The influence of moist convection, which has been suggested as the origin of Saturn's NPV, has been investigated in the framework of a barotropic rotating shallow-water model and does not alter these conclusions.
A 2020 mathematical study from Andy Ingersoll's laboratory at the California Institute of Technology found that a stable geometric arrangement of polygons can occur on any planet when a storm is surrounded by a ring of winds turning in the opposite direction to the storm itself, called an anticyclonic ring or anticyclonic shielding. Such shielding creates a vorticity gradient in the background of a neighboring cyclone, causing mutual repulsion between the cyclones (similar to the effect of beta-drift). Although apparently shielded, the polar cyclone on Saturn cannot hold a polygonal pattern of circumpolar cyclones like Jupiter's, owing to the larger size and slower wind speed of Saturn's polar cyclone. The side-adjacent vortices together with deep barotropic instability (Cassini's wind-speed measurements preclude shallower barotropic instability, at least at the time of the Cassini encounter), or possibly baroclinic instabilities, therefore remain the most viable explanations for Saturn's sustained hexagon.
| Physical sciences | Solar System | Astronomy |
40533280 | https://en.wikipedia.org/wiki/Amiatina | Amiatina | The Amiatina is a breed of donkey from Tuscany in central Italy. It is particularly associated with Monte Amiata in the provinces of Siena and Grosseto, but is distributed throughout Tuscany. There are also populations in Liguria and in Campania. It is one of the eight autochthonous donkey breeds of limited distribution recognised by the Ministero delle Politiche Agricole Alimentari e Forestali, the Italian ministry of agriculture and forestry.
History
The Amiatina was numerous in the early part of the twentieth century; before the Second World War the population in the provinces of Grosseto and Perugia alone was over 8000. In the years following the War it came close to extinction. From 1956 the Deposito Stalloni (later the Istituto di Incremento Ippico) of Pisa selectively bred it in the province of Grosseto. A breeders' association was founded in 1993. In 1995 the registered population was 89. In 2006 the total number registered was 1082, of which about 60% were in Tuscany. The Amiatina was listed as "endangered" by the FAO in 2007.
Characteristics
The Amiatina is intermediate in size between large breeds such as the Martina Franca and the Ragusano and small ones such as the Sarda. It rarely exceeds 140 cm at the withers. The coat is mouse-grey, with well-defined primitive markings – dorsal and shoulder stripes forming a cross, and zebra stripes on the legs. It is a strong and rustic breed, capable of foraging on harsh marginal terrain. Management is almost always free range.
| Biology and health sciences | Donkeys | Animals |
33710707 | https://en.wikipedia.org/wiki/Planck%20units | Planck units | In particle physics and physical cosmology, Planck units are a system of units of measurement defined exclusively in terms of four universal physical constants: c, G, ħ, and kB (described further below). Expressing one of these physical constants in terms of Planck units yields a numerical value of 1. They are a system of natural units, defined using fundamental properties of nature (specifically, properties of free space) rather than properties of a chosen prototype object. Originally proposed in 1899 by German physicist Max Planck, they are relevant in research on unified theories such as quantum gravity.
The term Planck scale refers to quantities of space, time, energy and other units that are similar in magnitude to the corresponding Planck units. This region may be characterized by particle energies of around 10^19 GeV or 2×10^9 J, time intervals of around 5×10^−44 s and lengths of around 1.6×10^−35 m (approximately the energy equivalent of the Planck mass, the Planck time and the Planck length, respectively). At the Planck scale, the predictions of the Standard Model, quantum field theory and general relativity are not expected to apply, and quantum effects of gravity are expected to dominate. One example is represented by the conditions in the first 10^−43 seconds of our universe after the Big Bang, approximately 13.8 billion years ago.
The four universal constants that, by definition, have a numeric value 1 when expressed in these units are:
c, the speed of light in vacuum,
G, the gravitational constant,
ħ, the reduced Planck constant, and
kB, the Boltzmann constant.
Variants of the basic idea of Planck units exist, such as alternate choices of normalization that give other numeric values to one or more of the four constants above.
Introduction
Any system of measurement may be assigned a mutually independent set of base quantities and associated base units, from which all other quantities and units may be derived. In the International System of Units, for example, the SI base quantities include length with the associated unit of the metre. In the system of Planck units, a similar set of base quantities and associated units may be selected, in terms of which other quantities and coherent units may be expressed. The Planck unit of length has become known as the Planck length, and the Planck unit of time is known as the Planck time, but this nomenclature has not been established as extending to all quantities.
All Planck units are derived from the dimensional universal physical constants that define the system, and in a convention in which these units are omitted (i.e. treated as having the dimensionless value 1), these constants are then eliminated from equations of physics in which they appear. For example, Newton's law of universal gravitation,
F = G m1 m2 / r^2 = (ħc / m_P^2) (m1 m2 / r^2),
can be expressed as:
F / F_P = (m1 / m_P)(m2 / m_P) / (r / l_P)^2.
Both equations are dimensionally consistent and equally valid in any system of quantities, but the second equation, with G absent, relates only dimensionless quantities, since any ratio of two like-dimensioned quantities is a dimensionless quantity. If, by a shorthand convention, it is understood that each physical quantity is the corresponding ratio with a coherent Planck unit (or "expressed in Planck units"), the ratios above may be expressed simply with the symbols of physical quantity, without being scaled explicitly by their corresponding unit:
F = m1 m2 / r^2.
This last equation (without G) is valid with F, m1, m2, and r being the dimensionless ratio quantities corresponding to the standard quantities, written e.g. as F/F_P, but not as a direct equality of quantities. This may seem to be "setting the constants c, G, etc., to 1" if the correspondence of the quantities is thought of as equality. For this reason, Planck or other natural units should be employed with care. Referring to "G = c = 1", Paul S. Wesson wrote that, "Mathematically it is an acceptable trick which saves labour. Physically it represents a loss of information and can lead to confusion."
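To make the correspondence concrete, here is a minimal Python sketch (not part of the original article) that computes the base Planck units from approximate CODATA SI values and checks that Newton's law gives the same number whether evaluated in SI and divided by the Planck force, or evaluated directly on the dimensionless ratios; the test masses and separation are arbitrary illustrative values.

```python
import math

# Approximate CODATA values in SI units (inputs to the sketch, not definitions)
c    = 2.99792458e8        # speed of light, m/s
G    = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34     # reduced Planck constant, J s

# Base Planck units and the Planck force
l_P = math.sqrt(hbar * G / c**3)   # Planck length, m
m_P = math.sqrt(hbar * c / G)      # Planck mass, kg
F_P = c**4 / G                     # Planck force, N

def force_si(m1, m2, r):
    """Newton's law in SI units."""
    return G * m1 * m2 / r**2

def force_planck(m1_hat, m2_hat, r_hat):
    """Same law on dimensionless ratios (quantities 'expressed in Planck units')."""
    return m1_hat * m2_hat / r_hat**2

# Arbitrary illustrative quantities: two masses in kg, a separation in m
m1, m2, r = 5.0, 10.0, 2.0
assert math.isclose(force_si(m1, m2, r) / F_P,
                    force_planck(m1 / m_P, m2 / m_P, r / l_P),
                    rel_tol=1e-12)
```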
History and definition
The concept of natural units was introduced in 1874, when George Johnstone Stoney, noting that electric charge is quantized, derived units of length, time, and mass, now named Stoney units in his honor. Stoney chose his units so that G, c, and the electron charge e would be numerically equal to 1. In 1899, one year before the advent of quantum theory, Max Planck introduced what became later known as the Planck constant. At the end of the paper, he proposed the base units that were later named in his honor. The Planck units are based on the quantum of action, now usually known as the Planck constant, which appeared in the Wien approximation for black-body radiation. Planck underlined the universality of the new unit system, writing:
Planck considered only the units based on the universal constants G, h, c, and k_B to arrive at natural units for length, time, mass, and temperature. His definitions differ from the modern ones by a factor of √(2π), because the modern definitions use ħ rather than h.
Unlike the case with the International System of Units, there is no official entity that establishes a definition of a Planck unit system. Some authors define the base Planck units to be those of mass, length and time, regarding an additional unit for temperature to be redundant. Other tabulations add, in addition to a unit for temperature, a unit for electric charge, so that either the Coulomb constant or the vacuum permittivity is normalized to 1. Thus, depending on the author's choice, this charge unit is given by
q_P = √(4πε0ħc) for k_e = 1, or
q_P = √(ε0ħc) for ε0 = 1. Some of these tabulations also replace mass with energy when doing so.
In SI units, the values of c, h, e and kB are exact and the values of ε0 and G in SI units respectively have relative uncertainties of about 1.5×10^−10 and 2.2×10^−5. Hence, the uncertainties in the SI values of the Planck units derive almost entirely from uncertainty in the SI value of G.
Compared to Stoney units, Planck base units are all larger by a factor 1/√α ≈ 11.7, where α is the fine-structure constant.
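The Stoney–Planck size ratio can be verified numerically; the sketch below assumes the usual SI values for the constants and the standard definition of the Stoney length, l_S = √(G k_e e²/c⁴).

```python
import math

alpha = 7.2973525693e-3     # fine-structure constant (dimensionless)
hbar  = 1.054571817e-34     # J s
c     = 2.99792458e8        # m/s
G     = 6.67430e-11         # m^3 kg^-1 s^-2
e     = 1.602176634e-19     # C
k_e   = 8.9875517923e9      # Coulomb constant, N m^2 C^-2

l_P = math.sqrt(hbar * G / c**3)         # Planck length
l_S = math.sqrt(G * k_e * e**2 / c**4)   # Stoney length

print(l_P / l_S)                 # ≈ 11.7
print(1 / math.sqrt(alpha))      # same value, 1/sqrt(alpha)
```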
Derived units
In any system of measurement, units for many physical quantities can be derived from base units. Table 2 offers a sample of derived Planck units, some of which are seldom used. As with the base units, their use is mostly confined to theoretical physics because most of them are too large or too small for empirical or practical use and there are large uncertainties in their values.
Some Planck units, such as those of time and length, are many orders of magnitude too large or too small to be of practical use, so that Planck units as a system are typically only relevant to theoretical physics. In some cases, a Planck unit may suggest a limit to a range of a physical quantity where present-day theories of physics apply. For example, our understanding of the Big Bang does not extend to the Planck epoch, i.e., when the universe was less than one Planck time old. Describing the universe during the Planck epoch requires a theory of quantum gravity that would incorporate quantum effects into general relativity. Such a theory does not yet exist.
Several quantities are not "extreme" in magnitude, such as the Planck mass, which is about 22 micrograms: very large in comparison with subatomic particles, and within the mass range of living organisms. Similarly, the related units of energy and of momentum are in the range of some everyday phenomena.
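A short, hedged numerical check of these "non-extreme" magnitudes (the constant values are approximate CODATA figures):

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11  # SI values (approximate)

m_P = math.sqrt(hbar * c / G)   # Planck mass     ~ 2.18e-8 kg (about 22 micrograms)
E_P = m_P * c**2                # Planck energy   ~ 1.96e9 J
p_P = m_P * c                   # Planck momentum ~ 6.5 kg m/s (roughly a thrown ball)

print(f"Planck mass:     {m_P * 1e9:.1f} micrograms")
print(f"Planck energy:   {E_P:.3e} J")
print(f"Planck momentum: {p_P:.2f} kg m/s")
```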
Significance
Planck units have little anthropocentric arbitrariness, but do still involve some arbitrary choices in terms of the defining constants. Unlike the metre and second, which exist as base units in the SI system for historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. Consequently, natural units help physicists to reframe questions. Frank Wilczek puts it succinctly:
While it is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, this is not about the relative strengths of the two fundamental forces. From the point of view of Planck units, this is comparing apples with oranges, because mass and electric charge are incommensurable quantities. Rather, the disparity in magnitude of the forces is a manifestation of the fact that the proton charge is approximately the unit charge while the proton mass is far less than the unit mass, in a system that treats both forces as having the same form.
When Planck proposed his units, the goal was only that of establishing a universal ("natural") way of measuring objects, without giving any special meaning to quantities that measured one single unit. In 1918, Arthur Eddington suggested that the Planck length could have a special significance for understanding gravitation, but this suggestion was not influential. During the 1950s, multiple authors including Lev Landau and Oskar Klein argued that quantities on the order of the Planck scale indicated the limits of the validity of quantum field theory. John Archibald Wheeler proposed in 1955 that quantum fluctuations of spacetime become significant at the Planck scale, though at the time he was unaware of Planck's unit system. In 1959, C. A. Mead showed that distances of the order of one Planck length, or, similarly, times of the order of the Planck time, carry special implications related to Heisenberg's uncertainty principle:
Planck scale
In particle physics and physical cosmology, the Planck scale is an energy scale around 1.22×10^19 GeV (the Planck energy, corresponding to the energy equivalent of the Planck mass, about 2.18×10^−8 kg) at which quantum effects of gravity become significant. At this scale, present descriptions and theories of sub-atomic particle interactions in terms of quantum field theory break down and become inadequate, due to the impact of the apparent non-renormalizability of gravity within current theories.
Relationship to gravity
At the Planck length scale, the strength of gravity is expected to become comparable with the other forces, and it has been theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown. The Planck scale is therefore the point at which the effects of quantum gravity can no longer be ignored in other fundamental interactions, where current calculations and approaches begin to break down, and a means to take account of its impact is necessary. On these grounds, it has been speculated that it may be an approximate lower limit at which a black hole could be formed by collapse.
While physicists have a fairly good understanding of the other fundamental interactions of forces on the quantum level, gravity is problematic, and cannot be integrated with quantum mechanics at very high energies using the usual framework of quantum field theory. At lesser energy levels it is usually ignored, while for energies approaching or exceeding the Planck scale, a new theory of quantum gravity is necessary. Approaches to this problem include string theory and M-theory, loop quantum gravity, noncommutative geometry, and causal set theory.
In cosmology
In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, tP, or approximately 10^−43 seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10^−32 seconds (or about 10^11 tP).
Table 3 lists properties of the observable universe today expressed in Planck units.
After the measurement of the cosmological constant (Λ) in 1998, estimated at 10^−122 in Planck units, it was noted that this is suggestively close to the reciprocal of the age of the universe (T) squared. Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains of order T^−2 throughout the history of the universe.
Analysis of the units
Planck length
The Planck length, denoted l_P, is a unit of length defined as:
l_P = √(ħG/c^3).
It is equal to 1.616255(18)×10^−35 m (the two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value) or about 10^−20 times the diameter of a proton. It can be motivated in various ways, such as considering a particle whose reduced Compton wavelength is comparable to its Schwarzschild radius, though whether those concepts are in fact simultaneously applicable is open to debate. (The same heuristic argument simultaneously motivates the Planck mass.)
The Planck length is a distance scale of interest in speculations about quantum gravity. The Bekenstein–Hawking entropy of a black hole is one-fourth the area of its event horizon in units of Planck length squared. Since the 1950s, it has been conjectured that quantum fluctuations of the spacetime metric might make the familiar notion of distance inapplicable below the Planck length. This is sometimes expressed by saying that "spacetime becomes a foam at the Planck scale". It is possible that the Planck length is the shortest physically measurable distance, since any attempt to investigate the possible existence of shorter distances, by performing higher-energy collisions, would result in black hole production. Higher-energy collisions, rather than splitting matter into finer pieces, would simply produce bigger black holes.
The strings of string theory are modeled to be on the order of the Planck length. In theories with large extra dimensions, the Planck length calculated from the observed value of G can be smaller than the true, fundamental Planck length.
Planck time
The Planck time, denoted t_P, is defined as:
t_P = √(ħG/c^5) = l_P/c.
This is the time required for light to travel a distance of 1 Planck length in vacuum, which is a time interval of approximately 5.39×10^−44 s. No current physical theory can describe timescales shorter than the Planck time, such as the earliest events after the Big Bang. Some conjectures state that the structure of time need not remain smooth on intervals comparable to the Planck time.
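As a sketch, the Planck length and Planck time can be evaluated directly from the defining constants, including the consistency check that the Planck time is the light-travel time across one Planck length (constant values are approximate SI figures):

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11  # approximate SI values

l_P = math.sqrt(hbar * G / c**3)   # Planck length ~ 1.616e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time   ~ 5.39e-44 s

# The Planck time is the time light needs to cross one Planck length
assert math.isclose(t_P, l_P / c, rel_tol=1e-12)
print(f"l_P = {l_P:.4e} m, t_P = {t_P:.4e} s")
```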
Planck energy
The Planck energy EP is approximately equal to the energy released in the combustion of the fuel in an automobile fuel tank (57.2 L at 34.2 MJ/L of chemical energy). The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 J, equivalent to about 2.5×10^−8 EP.
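The fuel-tank comparison is easy to reproduce; the fuel figures below are taken from the text, and the Planck energy is computed from approximate SI constant values:

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11
E_P = math.sqrt(hbar * c**5 / G)     # Planck energy ~ 1.96e9 J

tank_energy = 57.2 * 34.2e6          # 57.2 L at 34.2 MJ/L, as quoted in the text
cosmic_ray  = 50.0                   # the 1991 ultra-high-energy cosmic ray, J

print(tank_energy / E_P)             # ≈ 1.0: a tank of fuel holds roughly one Planck energy
print(cosmic_ray / E_P)              # ≈ 2.6e-8
```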
Proposals for theories of doubly special relativity posit that, in addition to the speed of light, an energy scale is also invariant for all inertial observers. Typically, this energy scale is chosen to be the Planck energy.
Planck unit of force
The Planck unit of force may be thought of as the derived unit of force in the Planck system if the Planck units of time, length, and mass are considered to be base units:
F_P = m_P l_P / t_P^2 = c^4/G ≈ 1.21×10^44 N.
It is the gravitational attractive force of two bodies of 1 Planck mass each that are held 1 Planck length apart. One convention for the Planck charge is to choose it so that the electrostatic repulsion of two objects with Planck charge and mass that are held 1 Planck length apart balances the Newtonian attraction between them.
Some authors have argued that the Planck force is on the order of the maximum force that can occur between two bodies. However, the validity of these conjectures has been disputed.
Planck temperature
The Planck temperature TP is
T_P = √(ħc^5/(G k_B^2)) ≈ 1.42×10^32 K.
At this temperature, the wavelength of light emitted by thermal radiation reaches the Planck length. There are no known physical models able to describe temperatures greater than TP; a quantum theory of gravity would be required to model the extreme energies attained. Hypothetically, a system in thermal equilibrium at the Planck temperature might contain Planck-scale black holes, constantly being formed from thermal radiation and decaying via Hawking evaporation. Adding energy to such a system might decrease its temperature by creating larger black holes, whose Hawking temperature is lower.
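An illustrative check (approximate constant values) that the Wien-law peak wavelength of thermal radiation at the Planck temperature is indeed comparable to the Planck length:

```python
import math

hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23

T_P = math.sqrt(hbar * c**5 / (G * kB**2))   # Planck temperature ~ 1.42e32 K
l_P = math.sqrt(hbar * G / c**3)             # Planck length

# Wien's displacement law: peak wavelength of thermal radiation at T_P
b = 2.897771955e-3                           # Wien displacement constant, m K
lambda_peak = b / T_P

print(T_P)                    # ~1.4e32 K
print(lambda_peak / l_P)      # ~1.3: the peak wavelength is comparable to the Planck length
```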
Nondimensionalized equations
Physical quantities that have different dimensions (such as time and length) cannot be equated even if they are numerically equal (e.g., 1 second is not the same as 1 metre). In theoretical physics, however, this scruple may be set aside, by a process called nondimensionalization. The effective result is that many fundamental equations of physics, which often include some of the constants used to define Planck units, become equations where these constants are replaced by a 1.
Examples include the energy–momentum relation E^2 = (pc)^2 + (mc^2)^2 (which becomes E^2 = p^2 + m^2) and the Dirac equation (iħγ^μ∂_μ − mc)ψ = 0 (which becomes (iγ^μ∂_μ − m)ψ = 0).
Alternative choices of normalization
As already stated above, Planck units are derived by "normalizing" the numerical values of certain fundamental constants to 1. These normalizations are neither the only ones possible nor necessarily the best. Moreover, the choice of what factors to normalize, among the factors appearing in the fundamental equations of physics, is not evident, and the values of the Planck units are sensitive to this choice.
The factor 4π is ubiquitous in theoretical physics because, in three-dimensional space, the surface area of a sphere of radius r is 4πr^2. This, together with the concept of flux, is the basis for the inverse-square law, Gauss's law, and the divergence operator applied to flux density. For example, gravitational and electrostatic fields produced by point objects have spherical symmetry, and so the electric flux through a sphere of radius r around a point charge will be distributed uniformly over that sphere. From this, it follows that a factor of 4πr^2 will appear in the denominator of Coulomb's law in rationalized form. (Both the numerical factor and the power of the dependence on r would change if space were higher-dimensional; the correct expressions can be deduced from the geometry of higher-dimensional spheres.) Likewise for Newton's law of universal gravitation: a factor of 4π naturally appears in Poisson's equation when relating the gravitational potential to the distribution of matter.
Hence a substantial body of physical theory developed since Planck's 1899 paper suggests normalizing not G but 4πG (or 8πG) to 1. Doing so would introduce a factor of 1/(4π) (or 1/(8π)) into the nondimensionalized form of the law of universal gravitation, consistent with the modern rationalized formulation of Coulomb's law in terms of the vacuum permittivity. In fact, alternative normalizations frequently preserve the factor of 1/(4π) in the nondimensionalized form of Coulomb's law as well, so that the nondimensionalized Maxwell's equations for electromagnetism and gravitoelectromagnetism both take the same form as those for electromagnetism in SI, which do not have any factors of 4π. When this is applied to the electromagnetic constant ε0, this unit system is called "rationalized". When applied additionally to gravitation and Planck units, these are called rationalized Planck units and are seen in high-energy physics.
The rationalized Planck units are defined so that c = 4πG = ħ = ε0 = k_B = 1.
There are several possible alternative normalizations.
Gravitational constant
In 1899, Newton's law of universal gravitation was still seen as exact, rather than as a convenient approximation holding for "small" velocities and masses (the approximate nature of Newton's law was shown following the development of general relativity in 1915). Hence Planck normalized to 1 the gravitational constant G in Newton's law. In theories emerging after 1899, G nearly always appears in formulae multiplied by 4π or a small integer multiple thereof. Hence, a choice to be made when designing a system of natural units is which, if any, instances of 4π appearing in the equations of physics are to be eliminated via the normalization.
Normalizing 4πG to 1 (and therefore setting G = 1/(4π)):
Gauss's law for gravity becomes Φ_g = −M (rather than Φ_g = −4πM in Planck units).
Eliminates 4πG from the Poisson equation.
Eliminates 4πG in the gravitoelectromagnetic (GEM) equations, which hold in weak gravitational fields or locally flat spacetime. These equations have the same form as Maxwell's equations (and the Lorentz force equation) of electromagnetism, with mass density replacing charge density, and with 1/(4πG) replacing ε0.
Normalizes the characteristic impedance Zg of gravitational radiation in free space to 1 (normally expressed as Zg = 4πG/c).
Eliminates 4πG from the Bekenstein–Hawking formula (for the entropy of a black hole in terms of its mass mBH and the area of its event horizon ABH), which is simplified to S_BH = πA_BH = (m_BH)^2.
Setting 8πG = 1 (and therefore setting G = 1/(8π)). This would eliminate 8πG from the Einstein field equations, Einstein–Hilbert action, and the Friedmann equations, for gravitation. Planck units modified so that 8πG = 1 are known as reduced Planck units, because the Planck mass is divided by √(8π). Also, the Bekenstein–Hawking formula for the entropy of a black hole simplifies to S_BH = (m_BH)^2/2.
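A small sketch of the reduced-Planck-unit convention, showing that the reduced Planck mass is the ordinary Planck mass divided by √(8π) (constant values are approximate):

```python
import math

hbar, c, G = 1.054571817e-34, 2.99792458e8, 6.67430e-11

m_P         = math.sqrt(hbar * c / G)                   # ordinary Planck mass
m_P_reduced = math.sqrt(hbar * c / (8 * math.pi * G))   # reduced Planck mass (8*pi*G = 1)

print(m_P, m_P_reduced)
print(m_P / m_P_reduced, math.sqrt(8 * math.pi))        # both ≈ 5.01
```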
| Physical sciences | Measurement: General | null |
33710742 | https://en.wikipedia.org/wiki/Natural%20units | Natural units | In physics, natural unit systems are measurement systems for which selected physical constants have been set to 1 through nondimensionalization of physical units. For example, the speed of light may be set to 1, and it may then be omitted, equating mass and energy directly rather than using c^2 as a conversion factor in the typical mass–energy equivalence equation E = mc^2. A purely natural system of units has all of its dimensions collapsed, such that the physical constants completely define the system of units and the relevant physical laws contain no conversion constants.
While natural unit systems simplify the form of each equation, it is still necessary to keep track of the non-collapsed dimensions of each quantity or expression in order to reinsert physical constants (such dimensions uniquely determine the full formula).
Systems of natural units
Summary table
where:
α is the fine-structure constant (α ≈ 0.007297)
A dash (—) indicates where the system is not sufficient to express the quantity.
Stoney units
The Stoney unit system uses the following defining constants:
c, G, k_e, e,
where c is the speed of light, G is the gravitational constant, k_e is the Coulomb constant, and e is the elementary charge.
George Johnstone Stoney's unit system preceded that of Planck by 30 years. He presented the idea in a lecture entitled "On the Physical Units of Nature" delivered to the British Association in 1874.
Stoney units did not consider the Planck constant, which was discovered only after Stoney's proposal.
Planck units
The Planck unit system uses the following defining constants:
c, ħ, G, k_B,
where c is the speed of light, ħ is the reduced Planck constant, G is the gravitational constant, and k_B is the Boltzmann constant.
Planck units form a system of natural units that is not defined in terms of properties of any prototype, physical object, or even elementary particle. They only refer to the basic structure of the laws of physics: c and G are part of the structure of spacetime in general relativity, and ħ is at the foundation of quantum mechanics. This makes Planck units particularly convenient and common in theories of quantum gravity, including string theory.
Planck considered only the units based on the universal constants G, h, c, and k_B to arrive at natural units for length, time, mass, and temperature, but no electromagnetic units. The Planck system of units is now understood to use the reduced Planck constant, ħ, in place of the Planck constant, h.
Schrödinger units
The Schrödinger system of units (named after Austrian physicist Erwin Schrödinger) is seldom mentioned in literature. Its defining constants are:
, , , .
Geometrized units
Defining constants:
c, G.
In the geometrized unit system, used in general relativity, the base physical units are chosen so that the speed of light, c, and the gravitational constant, G, are set to one.
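In geometrized units a mass is expressed as a length through the factor G/c^2; the following sketch (with an approximate solar mass) illustrates the conversion:

```python
# Geometrized units (c = G = 1): a mass becomes a length via the factor G/c^2.
G, c  = 6.67430e-11, 2.99792458e8
M_sun = 1.989e30                   # kg, approximate solar mass

M_sun_geom = G * M_sun / c**2      # ≈ 1.48e3 m
print(M_sun_geom)                  # half the Sun's Schwarzschild radius
```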
Atomic units
The atomic unit system uses the following defining constants:
e, m_e, ħ, 4πε0.
The atomic units were first proposed by Douglas Hartree and are designed to simplify atomic and molecular physics and chemistry, especially the hydrogen atom. For example, in atomic units, an electron in the ground state of the Bohr model of the hydrogen atom has an orbital radius (the Bohr radius), an orbital velocity, and so on with particularly simple numeric values.
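A brief illustration, using approximate SI constant values, of the quantities that become exactly 1 in Hartree atomic units:

```python
import math

# Hydrogen ground state in SI, for comparison with Hartree atomic units
# (where e = m_e = hbar = 4*pi*eps0 = 1 and both results below equal exactly 1).
hbar  = 1.054571817e-34
m_e   = 9.1093837015e-31
e     = 1.602176634e-19
eps0  = 8.8541878128e-12
alpha = 7.2973525693e-3
c     = 2.99792458e8

a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius ≈ 5.29e-11 m
v1 = alpha * c                                     # ground-state orbital speed ≈ 2.19e6 m/s
print(a0, v1)
```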
Natural units (particle and atomic physics)
This natural unit system, used only in the fields of particle and atomic physics, uses the following defining constants:
c, m_e, ħ, ε0,
where c is the speed of light, m_e is the electron mass, ħ is the reduced Planck constant, and ε0 is the vacuum permittivity.
The vacuum permittivity ε0 is implicitly used as a nondimensionalization constant, as is evident from the physicists' expression for the fine-structure constant, written α = e^2/(4π), which may be compared to the corresponding expression in SI: α = e^2/(4πε0ħc).
Strong units
Defining constants:
c, ħ, m_p.
Here, m_p is the proton rest mass. Strong units are "convenient for work in QCD and nuclear physics, where quantum mechanics and relativity are omnipresent and the proton is an object of central interest".
In this system of units the speed of light changes in inverse proportion to the fine-structure constant; it has therefore gained some interest in recent years in connection with the niche hypothesis of time-variation of fundamental constants.
| Physical sciences | Measurement: General | null |
48217770 | https://en.wikipedia.org/wiki/Tabby%27s%20Star | Tabby's Star | Tabby's Star (designated as KIC 8462852 in the Kepler Input Catalog, and also known by the names Boyajian's Star and the WTF ("Where's The Flux?") Star) is a binary star in the constellation Cygnus approximately 1,470 light-years from Earth. The system is composed of an F-type main-sequence star and a red dwarf companion.
Unusual light fluctuations of Tabby's Star, including up to a 22% dimming in brightness, were discovered by citizen scientists as part of the Planet Hunters project. The discovery was made from data collected by the Kepler space telescope, which observed changes in the brightness of distant stars to detect exoplanets. Several hypotheses have been proposed to explain the star's large irregular changes in brightness, but so far none of them fully explains all aspects of the resulting light curve. It has been suggested that the dimming is caused by an alien megastructure, but the evidence tends to discount this suggestion.
In September 2019, astronomers reported that the observed dimmings of Tabby's Star may have been produced by fragments resulting from the disruption of an orphaned exomoon. Tabby's Star is not the only star with large irregular dimmings; other such stars include young stellar objects called YSO dippers, although these have different dimming patterns.
Nomenclature
The names "Tabby's Star" and "Boyajian's Star" refer to American astronomer Tabetha S. Boyajian, who was the lead author of the scientific paper that announced the discovery of the star's irregular light fluctuations in 2015. The nickname "WTF Star" is a reference to the paper's subtitle "where's the flux?", which highlights the observed dips in the star's radiative flux. The star has also been given the nickname "LGM-2" – a homage to the first pulsar discovered, PSR B1919+21, which was given the nickname "LGM-1" when it was originally theorized to be a transmission from an extraterrestrial civilization. Other designations in various star catalogues have been given to Tabby's Star. In the Kepler Input Catalog, a collection of astronomical objects catalogued by the Kepler space telescope, Tabby's Star is known as . In the Tycho-2 Catalogue, an enhanced collection of stars catalogued by Hipparcos, the star is known as . In the infrared Two Micron All-Sky Survey (2MASS), the star is identified as .
Location
Tabby's Star in the constellation Cygnus is roughly halfway between the bright stars Deneb and Delta Cygni as part of the Northern Cross. It is situated south of 31 Cygni, and northeast of the star cluster NGC 6866. While only a few arcminutes away from the cluster, it is unrelated and closer to the Sun than it is to the star cluster.
With an apparent magnitude of 11.7, the star cannot be seen by the naked eye, but is visible with a telescope in a dark sky with little light pollution.
History of observations
Tabby's Star was observed as early as the year 1890. The star was cataloged in the Tycho, 2MASS, UCAC4, and WISE astronomical catalogs (published in 1997, 2003, 2009, and 2012, respectively).
The main source of information about the luminosity fluctuations of Tabby's Star is the Kepler space telescope. During its primary and extended mission from 2009 to 2013 it continuously monitored the light curves of over 100,000 stars in a patch of sky in the constellations Cygnus and Lyra.
2017 light fluctuations
On 20 May 2017, Boyajian and her colleagues reported, via The Astronomer's Telegram, on an ongoing dimming event (named "Elsie") which possibly began on 14 May 2017. It was detected by the Las Cumbres Observatory Global Telescope Network, specifically by its telescope in Maui (LCO Maui). This was verified by the Fairborn Observatory (part of the N2K Consortium) in Southern Arizona (and later by LCO Canary Islands). Further optical and infrared spectroscopy and photometry were urgently requested, given the short duration of these events, which may be measured in days or weeks. Observations from multiple observers globally were coordinated, including polarimetry. Furthermore, the independent SETI projects Breakthrough Listen and Near-InfraRed Optical SETI (NIROSETI), both at Lick Observatory, continue to monitor the star. By the end of the three-day dimming event, a dozen observatories had taken spectra, with some astronomers having dropped their own projects to provide telescope time and resources. More generally the astronomical community was described as having gone "mildly bananas" over the opportunity to collect data in real-time on the unique star. The 2% dip event was named "Elsie" (a homophone of "LC", in reference to Las Cumbres and light curve).
Initial spectra with FRODOSpec at the two-meter Liverpool Telescope showed no changes visible between a reference spectrum and this dip. Several observatories, however, including the twin Keck telescopes (HIRES) and numerous citizen science observatories, acquired spectra of the star, showing a dimming that had a complex shape, and initially had a pattern similar to the one at 759.75 days from the Kepler event 2, epoch 2 data. Observations were taken across the electromagnetic spectrum.
Evidence of a second dimming event (named "Celeste") was observed on 13–14 June 2017, which possibly began 11 June, by amateur astronomer Bruce L. Gary. While the light curve on 14–15 June indicated a possible recovery from the dimming event, the dimming continued to increase afterwards, and on 16 June, Boyajian wrote that the event was approaching a 2% dip in brightness.
A third prominent 1% dimming event (named "Skara Brae") was detected beginning 2 August 2017, and which recovered by 17 August.
A fourth prominent dimming event (named "Angkor") began 5 September 2017 and, as of 16 September 2017, amounted to a dimming of between 2.3% and 3%, making it the "deepest dip this year".
Another dimming event, amounting to a 0.3% dip, began around 21 September 2017, and completely recovered by 4 October 2017.
On 10 October 2017, a brightening of the starlight from KIC 8462852, lasting about two weeks, was noted by Bruce L. Gary of the Hereford Arizona Observatory and by Boyajian. A possible explanation for the unusual fluctuating starlight of KIC 8462852, involving a transiting brown dwarf in a 1,600-day eccentric orbit, a "drop feature" in dimness, and predicted intervals of brightening, has been proposed.
On about 20 November 2017, a fifth prominent dimming event began and deepened to a depth of 0.44%; as of 16 December 2017, the event had recovered, leveled off at the dip bottom for 11 days, faded again to a total dimming depth of 1.25%, and was recovering once more.
Dimming and brightening events of the star continue to be monitored; related light curves are updated and released frequently.
2018 light fluctuations
The star was too close to the Sun's position in the sky from late December 2017 to mid February 2018 to be seen. Observations resumed in late February. A new series of dips began on 16 March 2018. By 18 March 2018, the star was down in brightness by more than 1% in g-band, according to Bruce L. Gary, and about 5% in r-band, making it the deepest dip observed since the Kepler Mission in 2013, according to Tabetha S. Boyajian. A second even deeper dip with a depth of >5% started on 24 March 2018, as confirmed by AAVSO observer John Hall. As of 27 March 2018, that second dip was recovering.
2019 light fluctuations
The 2019 observing season began in mid-March, when the star reappeared after its yearly conjunction with the Sun.
The ground-based observation campaign was supplemented by the Transiting Exoplanet Survey Satellite (TESS), which observed the star every 2 minutes between 18 July and 11 September 2019. It observed a 1.4% dip in brightness on 3–4 September 2019.
Between October 2019 and December 2019, at least seven separate dips were observed, the deepest of which had a depth of 2%. By the end of the observing season in early January 2020, the star had once again recovered in brightness. The total combined depth of the dips in 2019 was 11%, comparable to that seen in 2011 and 2013, but spread over a long time interval. This cluster of dips is roughly centered on the 17 October 2019 date predicted by Sacco et al. for a reappearance, given a 1,574-day (4.31-year) period, of orbiting material comprising the original "D800" dip.
Luminosity
Observations of the luminosity of the star by the Kepler space telescope show small, frequent, non-periodic dips in brightness, along with two large recorded dips in brightness two years apart. The amplitude of the changes in the star's brightness, and the aperiodicity of the changes, mean that this star is of particular interest for astronomers. The star's changes in brightness are consistent with many small masses orbiting the star in "tight formation".
The first major dip, on 5 March 2011, reduced the star's brightness by up to 15%, and the next 726 days later (on 28 February 2013) by up to 22%. (A third dimming, around 8%, occurred 48 days later.) In comparison, a planet the size of Jupiter would only obscure a star of this size by 1%, indicating that whatever is blocking light during the star's major dips is not a planet, but rather something covering up to half the width of the star. Due to the failure of two of Kepler's reaction wheels, the star's predicted 750-day dip around February 2015 was not recorded. The light dips do not exhibit an obvious pattern.
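The ~1% figure follows from the fact that a transit depth scales with the square of the radius ratio; the sketch below uses the solar radius as a rough stand-in for the star (Tabby's Star is somewhat larger, which would make a Jupiter-sized dip even shallower):

```python
# Transit depth ~ (R_planet / R_star)^2
R_jupiter = 69_911     # km, Jupiter's mean radius
R_star    = 695_700    # km, solar radius used as a rough stand-in for the star

depth = (R_jupiter / R_star) ** 2
print(f"{depth:.2%}")  # ≈ 1%, nowhere near the observed 15-22% dips
```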
In addition to the day-long dimmings, a study of a century's worth of photographic plates suggests that the star has gradually faded in 100 years (from c. 1890 to c. 1990) by about 20%, which would be unprecedented for any F-type main-sequence star. Teasing accurate magnitudes from long-term photographic archives is a complex procedure, however, requiring adjustment for equipment changes, and is strongly dependent on the choice of comparison stars. Another study, examining the same photographic plates, concluded that the possible century-long dimming was likely a data artifact, and not a real astrophysical event. Another study from plates between 1895 and 1995 found strong evidence that the star has not dimmed, but kept a constant flux within a few percent, except an 8% dip on 24 October 1978, resulting in a period of the putative occulter of 738 days.
A third study, using light measurements by the Kepler observatory over a four-year period, determined that Tabby's Star dimmed at about 0.34% per year before dimming more rapidly by about 2.5% in 200 days. It then returned to its previous slow fade rate. The same technique was used to study 193 stars in its vicinity and 355 stars similar in size and composition to Tabby's Star. None of these stars exhibited such dimming.
In 2018, a possible periodicity in dimming of the star was reported.
Stellar companion
A red dwarf stellar companion at projected separation 880 AU from Tabby's Star was confirmed to be comoving in 2021. For comparison, this is around 180 times the orbit of Jupiter, around 30 times the orbit of Neptune, or around 5.3 times the distance to Voyager 1 as of January 2025.
Hypotheses
Originally, and until Kohler's work of 2017, it was thought that, based on the spectrum and stellar type of Tabby's Star, its changes in brightness could not be attributed to intrinsic variability. Consequently, a few hypotheses have been proposed involving material orbiting the star and blocking its light, although none of these fully fit the observed data.
Some of the proposed explanations involve interstellar dust, a series of giant planets with very large ring structures, a recently captured asteroid field, the system undergoing Late Heavy Bombardment, and an artificial megastructure orbiting the star.
By 2018, the leading hypothesis was that the "missing" heat flux involved in the star's dimming could be stored within the star's interior. Such variations in luminosity might arise from a number of mechanisms affecting the efficiency of heat transport inside the star.
However, in September 2019, astronomers reported that the observed dimmings of Tabby's Star may have been produced by fragments resulting from the disruption of an orphaned exomoon.
Circumstellar dust ring
Meng et al. (2017) suggested that, based on observational data of Tabby's Star from the Swift Gamma-Ray Burst Mission, Spitzer Space Telescope, and Belgian AstroLAB IRIS Observatory, only "microscopic fine-dust screens", originating from "circumstellar material", are able to disperse the starlight in the way detected in their measurements. Based on these studies, on 4 October 2017, NASA reported that the unusual dimming events of Tabby's Star are due to an "uneven ring of dust" orbiting the star. Although the explanation involving a significant amount of small particles orbiting the star addresses the "long-term fading" noted by Meng, it also seems consistent with the week-long fadings found by amateur astronomer Bruce L. Gary and the Tabby Team, coordinated by astronomer Tabetha S. Boyajian, in more recent dimming events. A related, but more sophisticated, explanation of dimming events, involving a transiting "brown dwarf" in a 1600-day eccentric orbit near Tabby's Star, a "drop feature" in dimness, and predicted intervals of "brightening", has been proposed. Dimming and brightening events of Tabby's Star continue to be monitored; related light curves are updated and released frequently.
Nonetheless, data similar to that observed for Tabby's Star, along with supporting data from the Chandra X-ray Observatory, were found with dust debris orbiting WD 1145+017, a white dwarf that also has unusual light curve fluctuations. Further, the highly variable star RZ Piscium, which brightens and dims erratically, has been found to emit excessive infrared radiation, suggesting that the star is surrounded by large amounts of gas and dust, possibly resulting from the destruction of local planets.
A cloud of disintegrating comets
One proposed explanation for the reduction in light is that it is due to a cloud of disintegrating comets orbiting the star elliptically. This scenario would assume that a planetary system around Tabby's Star has something similar to the Oort cloud and that gravity from a nearby star caused comets from said cloud to fall closer into the system, thereby obstructing the spectra of Tabby's Star. Evidence supporting this hypothesis includes an M-type red dwarf within of Tabby's Star. The notion that disturbed comets from such a cloud could exist in high enough numbers to obscure 22% of the star's observed luminosity has been doubted.
Submillimetre-wavelength observations searching for farther-out cold dust in an asteroid belt akin to the Sun's Kuiper Belt suggest that a distant "catastrophic" planetary disruption explanation is unlikely; the possibility of a disrupted asteroid belt scattering comets into the inner system is still to be determined.
Younger star with coalescing material around it
Astronomer Jason T. Wright and others who have studied Tabby's Star have suggested that if the star is younger than its position and speed would suggest, then it may still have coalescing material around it.
A 0.8–4.2-micrometer spectroscopic study of the system using the NASA Infrared Telescope Facility (NASA IRTF) found no evidence for coalescing material within a few astronomical units of the mature central star.
Planetary debris field
High-resolution spectroscopy and imaging observations have also been made, as well as spectral energy distribution analyses using the Nordic Optical Telescope in Spain. A massive collision scenario would create warm dust that glows in infrared wavelengths, but there is no observed excess infrared energy, ruling out massive planetary collision debris. Other researchers think the planetary debris field explanation is unlikely, given the very low probability that Kepler would ever witness such an event due to the rarity of collisions of such size.
As with the possibility of coalescing material around the star, spectroscopic studies using the NASA IRTF found no evidence for hot close-in dust or circumstellar matter from an evaporating or exploding planet within a few astronomical units of the central star. Similarly, a study of past infrared data from NASA's Spitzer Space Telescope and Wide-field Infrared Survey Explorer found no evidence for an excess of infrared emission from the star, which would have been an indicator of warm dust grains that could have come from catastrophic collisions of meteors or planets in the system. This absence of emission supports the hypothesis that a swarm of cold comets on an unusually eccentric orbit could be responsible for the star's unique light curve, but more studies are needed.
Consumption of a planet
In December 2016, a team of researchers proposed that Tabby's Star swallowed a planet, causing a temporary and unobserved increase in brightness due to the release of gravitational energy. As the planet fell into its star, it could have been ripped apart or had its moons stripped away, leaving clouds of debris orbiting the star in eccentric orbits. Planetary debris still in orbit around the star would then explain its observed drops in intensity. Additionally, the researchers suggest that the consumed planet could have caused the star to increase in brightness up to 10,000 years ago, and its stellar flux is now returning to the normal state.
Large planet with oscillating rings
Sucerquia et al. (2017) suggested that a large planet with oscillating rings may help explain the unusual dimmings associated with Tabby's Star.
Large ringed planet followed by Trojan swarms
Ballesteros et al. (2017) proposed a large, ringed planet trailed by a swarm of Trojan asteroids in its L5 Lagrangian point, and estimated an orbit that predicts another event in early 2021 due to the leading Trojans followed by another transit of the hypothetical planet in 2023. The model suggests a planet with a radius of 4.7 Jupiter radii, large for a planet (unless very young). An early red dwarf would be easily seen in infrared. The current radial velocity observations available (four runs at σv ≈ 400 m/s) hardly constrain the model, but new radial velocity measurements would greatly reduce the uncertainty. The model predicts a discrete and short-lived event for the May 2017 dimming episode, corresponding to the secondary eclipse of the planet passing behind KIC 8462852, with about a 3% decrease in the stellar flux with a transit time of about 2 days. If this is the cause of the May 2017 event, the planet's orbital period is more precisely estimated as 12.41 years with a semi-major axis of 5.9 AU.
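The quoted semi-major axis is roughly what Kepler's third law gives for a 12.41-year period; the stellar mass of about 1.4 solar masses used below is an assumed value, so the result is only illustrative:

```python
# Kepler's third law in solar units: a^3 = M * P^2 (a in AU, M in solar masses, P in years)
P = 12.41    # orbital period from the text, years
M = 1.4      # assumed stellar mass for Tabby's Star, solar masses

a = (M * P**2) ** (1 / 3)
print(f"a ≈ {a:.1f} AU")   # ≈ 6 AU, consistent with the quoted ~5.9 AU
```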
Intrinsic luminosity variations
The reddening observed during the deep dimming events of Tabby's Star is consistent with cooling of its photosphere. It does not require obscuration by dust. Such cooling could be produced by a decreased efficiency of heat transport caused e.g. by decreased effectiveness of convection due to the star's strong differential rotation, or by changes in its modes of heat transport if it is near the transition between radiative and convective heat transport. The "missing" heat flux is stored as a small increase of internal and potential energy.
The possible location of this early F star near the boundary between radiative and convective transport seems to be supported by the finding that the star's observed brightness variations appear to fit the "avalanche statistics" known to occur in a system close to a phase-transition. "Avalanche statistics" with a self-similar or power-law spectrum are a universal property of complex dynamical systems operating close to a phase transition or bifurcation point between two different types of dynamical behavior. Such close-to-critical systems are often observed to exhibit behavior that is intermediate between "order" and "chaos". Three other stars in the Kepler Input Catalog likewise exhibit similar "avalanche statistics" in their brightness variations, and all three are known to be magnetically active. It has been conjectured that stellar magnetism may be involved in Tabby's Star.
An artificial megastructure
Some astronomers have speculated that the objects eclipsing Tabby's Star could be parts of a megastructure made by an alien civilization, such as a Dyson swarm, a hypothetical structure that an advanced civilization might build around a star to intercept some of its light for their energy needs. According to Steinn Sigurðsson, the megastructure hypothesis is implausible and disfavored by Occam's razor and fails to sufficiently explain the dimming. He says that it remains a valid subject for scientific investigation, however, because it is a falsifiable hypothesis. Due to extensive media coverage on this matter, Tabby's Star has been compared by Kepler's Steve Howell to , a star with an odd light curve that was shown, after years of research, to be a part of a five-star system. The likelihood of extraterrestrial intelligence being the cause of the dimming is purely speculative; however, the star remains an outstanding SETI target because natural explanations have yet to fully explain the dimming phenomenon. The latest results have ruled out explanations involving only opaque objects such as stars, planets, swarms of asteroids, or alien megastructures.
Exomoons
Two papers published in summer 2019 offered plausible scientific scenarios involving large moons being stripped from their planets. Numeric simulations were performed of the migration of gas giant planets, and their large gaseous moons, during the first few hundred million years after the formation of the planetary system. In approximately 50% of the cases, the results produce a scenario where the moon is freed from its parent planet and its orbit evolves to produce a light curve similar to that of Tabby's Star.
Follow-up studies
, numerous optical telescopes were monitoring Tabby's Star in anticipation of another multi-day dimming event, with planned follow-up observations of a dimming event using large telescopes equipped with spectrographs to determine if the eclipsing mass is a solid object, or composed of dust or gas. Additional follow-up observations may involve the ground-based Green Bank Telescope, the Very Large Array Radio Telescope, and future orbital telescopes dedicated to exoplanetology such as the Nancy Grace Roman Space Telescope, TESS, and PLATO.
In 2016, a Kickstarter fund-raising campaign was led by Tabetha Boyajian, the lead author of the initial study on the star's anomalous light curve. The project proposed to use the Las Cumbres Observatory Global Telescope Network for continuous monitoring of the star. The campaign raised over , enough for one year of telescope time. Furthermore, as of 2016, more than fifty amateur astronomers working under the aegis of the American Association of Variable Star Observers were providing effectively full coverage since AAVSO's alert about the star in October 2015, namely a nearly continuous photometric record. In a study published in January 2018, Boyajian et al. reported that whatever is blocking Tabby's Star filters different wavelengths of light differently, so it cannot be an opaque object. They concluded that it is most likely space dust.
In December 2018, a search for laser light emissions from Tabby's Star was carried out using the Automated Planet Finder (APF), which is sensitive enough to detect a laser at this distance. Although a number of candidates were identified, further analysis showed that they are coming from the Earth and not from the star.
SETI results
In October 2015, the SETI Institute used the Allen Telescope Array to look for radio emissions from possible intelligent extraterrestrial life in the vicinity of the star. After an initial two-week survey, the SETI Institute reported that it found no evidence of technology-related radio signals from the star system. No narrowband radio signals were found at a level of 180–300 Jy in a 1 Hz channel, or medium-band signals above 10 Jy in a 100 kHz channel.
In 2016, the VERITAS gamma-ray observatory was used to search for ultra-fast optical transients from astronomical objects, with astronomers developing an efficient method sensitive to nanosecond pulses with fluxes as low as about one photon per square meter. This technique was applied on archival observations of Tabby's Star from 2009 to 2015, but no emissions were detected.
In May 2017, a related search, based on laser light emissions, was reported, with no evidence found for technology-related signals from Tabby's Star.
In September 2017, some SETI@Home workunits were created based on a previous RF survey of the region around this star. This was coupled with a doubling in the size of SETI@Home workunits, so the workunits related to this region will probably be the first to have fewer issues with quantization noise.
EPIC 204278916
A star called EPIC 204278916, as well as some other young stellar objects, has been observed to exhibit dips with some similarities to those observed in Tabby's Star. They differ in several respects, however. EPIC 204278916 shows much deeper dips than Tabby's Star, and they are grouped over a shorter period, whereas the dips at Tabby's Star are spread out over several years. Furthermore, EPIC 204278916 is surrounded by a proto-stellar disc, whereas Tabby's Star appears to be a normal F-type star displaying no evidence of a disc.
Other stars
An overall study of 21 other similar stars was presented in 2019.
Light curve gallery
| Physical sciences | Notable stars | Astronomy |
25095376 | https://en.wikipedia.org/wiki/Entorrhizomycetes | Entorrhizomycetes | Entorrhizomycetes is the sole class in the phylum Entorrhizomycota, within the Fungi subkingdom Dikarya along with Basidiomycota and Ascomycota. It contains three genera and is a small group of teliosporic root parasites that form galls on plants in the Juncaceae (rush) and Cyperaceae (sedge) families. Prior to 2015 this phylum was placed under the subdivision Ustilaginomycotina. A 2015 study did a "comprehensive five-gene analyses" of Entorrhiza and concluded that the former class Entorrhizomycetes is possibly either a close sister group to the rest of Dikarya or Basidiomycota.
Taxonomy
Taxonomy based on the work of Wijayawardene et al. 2019.
Order Talbotiomycetales Riess et al. 2015
Family Talbotiomycetaceae Riess et al. 2015
Genus Talbotiomyces Vánky, Bauer & Begerow 2007
Order Entorrhizales Bauer & Oberwinkler 1997
Family Entorrhizaceae Bauer & Oberwinkler 1997
Genus Juncorrhiza Riess & Piątek 2019
Genus Entorrhiza Weber 1884 [Schinzia Nägeli 1842]
Morphology
All members of Entorrhizomycetes are obligate parasites on the roots of plants. Sori are produced as galls on the roots of hosts. Galls are tubercular with a globoid, irregular or elongated shape and are composed of vascular bundles, parenchymatous cells and fungal mycelium. Younger segments of the galls are pale in color whilst older segments turn brown. Mycelium consists of dikaryotic and septate hyphae with fibrillate walls that lack clamp connections. Initially, the mycelium grows intercellularly before producing coiled intracellular hyphae terminating in globose cells that detach and develop into teliospores. Teliospores germinate into tetrads through internal septation, and each tetrad compartment produces hyphae that terminate in sigmoid propagules. Bauer et al. noted that young teliospores have two nuclei, older teliospores have only one nucleus, and each tetrad compartment has one nucleus. This indicates that karyogamy and meiosis occur in the teliospore. It has been observed that teliospores are liberated when the host plant dies and the galls disintegrate, and that the number of galls is higher in waterlogged soils compared to well-drained soils. These observations might support the hypothesis that entorrhizomycetes disperse through soil moisture.
Both Talbotiomyces and Juncorrhiza are segregate taxa from Entorrhiza sensu lato. Entorrhiza sensu stricto is diagnosed by teliospores with longitudinally ridged or cerebriform ornamentation and infecting plants belonging to Cyperaceae, whilst Juncorrhiza is diagnosed by teliospores with verrucose-tuberculate ornamentation and infecting plants belonging to Juncaceae. Talbotiomyces is distinguished from species in Entorrhizales by hyphal septa with simple pores that lack caps or membranes (species in Entorrhizales have dolipores that lack caps or membranes) and infecting plants belonging to Caryophyllales.
Evolution
Molecular phylogenies place Entorrhizomycetes as either a sister group to Basidiomycota or a sister group to Dikarya as a whole. Entorrhizomycetes share many traits with basidiomycetes such as dikaryotic vegetative mycelium, fibrillate cell walls, hyphal septa with a tripartite profile, and similarities in the spindle pole body. Bauer et al. speculated that the teliospore tetrad in entorrhizomycetes might represent the ancestral state of dikaryan meiosporangia. This is based on the observation that the septa in the tetrads have pores, and that the tetrad compartments germinate into hyphae terminating in propagules. The basidial cells separated by pored septa in basidiomycete phragmobasidia represent meiospores that in turn release vegetative propagules (that are usually characterised as basidiospores). It is possible that an ancestral structure similar to the teliospore tetrad evolved into phragmobasidia which in turn evolved into holobasidia on multiple occasions during the transition from water-dispersal to air-dispersal. If entorrhizomycetes are sister to Dikarya, it is also possible that the teliospore tetrad is homologous to the meiospore tetrads of early-diverging ascomycetes.
The stem age of the Entorrhizomycota has been estimated at approximately 560 Mya, during the late Neoproterozoic era. Divergence between Talbotiomycetales and Entorrhizales is estimated at approximately 50 Mya, and divergence between Entorrhiza and Juncorrhiza at approximately 42 Mya. Both Entorrhiza and Juncorrhiza underwent a major radiation during the Oligocene and Miocene epochs. Given that these divergence estimates are incongruent or only slightly congruent with the estimated stem ages of the host plant lineages, and that the co-phylogeny between Entorrhizales and their host plants is also incongruent, host-shift speciation is more likely to have occurred than co-speciation during these divergences and the radiation of Entorrhizales.
Entorrhizomycetes have a much lower number of species and a more limited host range than their estimated age would suggest. One possible explanation is that many lineages have gone extinct along with their hosts during past mass extinction events. Another explanation is that much of the diversity in this phylum remains undiscovered. The latter explanation is supported by the fact that host plants do not show any aboveground symptoms of infection, and there might be species that do not cause galls on their hosts.
| Biology and health sciences | Basics | Plants |
25101402 | https://en.wikipedia.org/wiki/Astrophysical%20X-ray%20source | Astrophysical X-ray source | Astrophysical X-ray sources are astronomical objects with physical properties which result in the emission of X-rays.
Several types of astrophysical objects emit X-rays. They include galaxy clusters, black holes in active galactic nuclei (AGN), and galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), a neutron star, or a black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays.
Celestial X-ray sources are also discussed below, organized by constellation. The origin of all observed astronomical X-ray sources is in, near to, or associated with a coronal cloud or gas at coronal-cloud temperatures, whether for a long or a brief period.
A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, either magnetic or ordinary Coulomb, black-body radiation, synchrotron radiation, inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions.
Galaxy clusters
Clusters of galaxies are formed by the merger of smaller units of matter, such as galaxy groups or individual galaxies. The infalling material (which contains galaxies, gas and dark matter) gains kinetic energy as it falls into the cluster's gravitational potential well. The infalling gas collides with gas already in the cluster and is shock heated to between 10⁷ and 10⁸ K depending on the size of the cluster. This very hot gas emits X-rays by thermal bremsstrahlung emission, and line emission from metals (in astronomy, 'metals' often means all elements except hydrogen and helium). The galaxies and dark matter are collisionless and quickly become virialised, orbiting in the cluster potential well.
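To put these temperatures in context, a short calculation using the Boltzmann constant (a generic conversion, with round-number inputs chosen purely for illustration) shows that gas at 10⁷–10⁸ K has characteristic thermal energies kT of roughly 1–10 keV, squarely in the X-ray band:

k_B_eV_per_K = 8.617e-5          # Boltzmann constant in eV per kelvin

def kT_in_keV(temperature_K):
    # Characteristic thermal photon energy kT, in keV, for a temperature in kelvin.
    return k_B_eV_per_K * temperature_K / 1e3

print(kT_in_keV(1e7))   # about 0.9 keV
print(kT_in_keV(1e8))   # about 8.6 keV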
At a statistical significance of 8σ, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law.
Quasars
A quasi-stellar radio source (quasar) is a very energetic and distant galaxy with an active galactic nucleus (AGN). QSO 0836+7107 is a Quasi-Stellar Object (QSO) that emits baffling amounts of radio energy. This radio emission is caused by electrons spiraling (thus accelerating) along magnetic fields, producing cyclotron or synchrotron radiation. These electrons can also interact with visible light emitted by the disk around the AGN or the black hole at its center, scattering those photons up to X-ray and gamma-ray energies through inverse Compton scattering.
On board the Compton Gamma Ray Observatory (CGRO) is the Burst and Transient Source Experiment (BATSE) which detects in the 20 keV to 8 MeV range. QSO 0836+7107 or 4C 71.07 was detected by BATSE as a source of soft gamma rays and hard X-rays. "What BATSE has discovered is that it can be a soft gamma-ray source", McCollough said. QSO 0836+7107 is the faintest and most distant object to be observed in soft gamma rays. It has already been observed in gamma rays by the Energetic Gamma Ray Experiment Telescope (EGRET) also aboard the Compton Gamma Ray Observatory.
Seyfert galaxies
Seyfert galaxies are a class of galaxies with nuclei that produce spectral line emission from highly ionized gas. They are a subclass of active galactic nuclei (AGN), and are thought to contain supermassive black holes.
X-ray bright galaxies
The following early-type galaxies (NGCs) have been observed to be X-ray bright due to hot gaseous coronae: NGC 315, 1316, 1332, 1395, 2563, 4374, 4382, 4406, 4472, 4594, 4636, 4649, and 5128. The X-ray emission can be explained as thermal bremsstrahlung from hot gas (0.5–1.5 keV).
Ultraluminous X-ray sources
Ultraluminous X-ray sources (ULXs) are pointlike, nonnuclear X-ray sources with luminosities above the Eddington limit of 3 × 10³² W for a black hole. Many ULXs show strong variability and may be black hole binaries. To fall into the class of intermediate-mass black holes (IMBHs), a ULX's luminosity, thermal disk emission, variation timescales, and surrounding emission-line nebulae must all point to an intermediate-mass object. However, when the emission is beamed or exceeds the Eddington limit, the ULX may be a stellar-mass black hole. The nearby spiral galaxy NGC 1313 has two compact ULXs, X-1 and X-2. For X-1 the X-ray luminosity increases to a maximum of 3 × 10³³ W, exceeding the Eddington limit, and enters a steep power-law state at high luminosities, more indicative of a stellar-mass black hole, whereas X-2 has the opposite behavior and appears to be in the hard X-ray state of an IMBH.
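The figure quoted above can be checked with the standard formula for the Eddington luminosity of ionized hydrogen, L_Edd = 4πGMm_p c / σ_T. The sketch below, which uses textbook constants rather than values taken from any particular study, shows that about 3 × 10³² W corresponds to a black hole of roughly 25 solar masses:

import math

G       = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8       # speed of light, m/s
m_p     = 1.673e-27     # proton mass, kg
sigma_T = 6.652e-29     # Thomson scattering cross-section, m^2
M_sun   = 1.989e30      # solar mass, kg

def eddington_luminosity_watts(mass_in_solar_masses):
    # L_Edd = 4*pi*G*M*m_p*c / sigma_T, about 1.3e31 W per solar mass.
    return 4 * math.pi * G * (mass_in_solar_masses * M_sun) * m_p * c / sigma_T

print(eddington_luminosity_watts(1))    # ~1.3e31 W
print(eddington_luminosity_watts(25))   # ~3e32 W, comparable to the limit quoted above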
Black holes
Black holes give off radiation because matter falling toward them loses gravitational potential energy, which can be radiated away before the matter crosses the event horizon. The infalling matter has angular momentum, which means that the material cannot fall in directly, but spins around the black hole. This material often forms an accretion disk. Similar luminous accretion disks can also form around white dwarfs and neutron stars, but in these the infalling gas releases additional energy as it slams against the high-density surface at high speed. In the case of a neutron star, the infall speed can be a sizeable fraction of the speed of light.
The material in an accretion disc gets very hot because of friction and emits X-rays. The material slowly loses its angular momentum and falls onto the compact star; in neutron stars and white dwarfs, additional X-rays are generated when the material hits their surfaces. In some neutron star or white dwarf systems, the magnetic field of the star is strong enough to prevent the formation of an accretion disc. X-ray emission from black holes is variable, varying in luminosity on very short timescales. The variation in luminosity can provide information about the size of the black hole.
Supernova remnants (SNR)
A Type Ia supernova is an explosion of a white dwarf in orbit around either another white dwarf or a red giant star. The dense white dwarf can accumulate gas donated from the companion. When the dwarf reaches the critical mass of about 1.4 solar masses (the Chandrasekhar limit), a thermonuclear explosion ensues. As each Type Ia shines with a known luminosity, Type Ia supernovae are used as "standard candles" to measure distances in the universe.
SN 2005ke is the first Type Ia supernova detected in X-ray wavelengths, and it is much brighter in the ultraviolet than expected.
X-ray emission from stars
Vela X-1
Vela X-1 is a pulsing, eclipsing high-mass X-ray binary (HMXB) system, associated with the Uhuru source 4U 0900-40 and the supergiant star HD 77581. The X-ray emission of the neutron star is caused by the capture and accretion of matter from the stellar wind of the supergiant companion. Vela X-1 is the prototypical detached HMXB.
Hercules X-1
An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star.
Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Her), probably due to Roche lobe overflow. X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline between high- and low-mass X-ray binaries.
Scorpius X-1
The first extrasolar X-ray source was discovered on 12 June 1962. This source is called Scorpius X-1, the first X-ray source found in the constellation of Scorpius, located in the direction of the center of the Milky Way. Scorpius X-1 is some 9,000 ly from Earth and after the Sun is the strongest X-ray source in the sky at energies below 20 keV. Its X-ray output is 2.3 × 10³¹ W, about 60,000 times the total luminosity of the Sun. Scorpius X-1 itself is a neutron star. This system is classified as a low-mass X-ray binary (LMXB); the neutron star is roughly 1.4 solar masses, while the donor star is only 0.42 solar masses.
Sun
In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. In the mid-1940s radio observations revealed a radio corona around the Sun. After detecting X-ray photons from the Sun in the course of a rocket flight, T. Burnight wrote, "The sun is assumed to be the source of this radiation although radiation of wavelength shorter than 4 Å would not be expected from theoretical estimates of black body radiation from the solar corona." And, of course, people have seen the solar corona in scattered visible light during solar eclipses.
While neutron stars and black holes are the quintessential point sources of X-rays, all main sequence stars are likely to have hot enough coronae to emit X-rays. A- or F-type stars have at most thin convection zones and thus produce little coronal activity.
Similar solar cycle-related variations are observed in the flux of solar X-ray and UV or EUV radiation. Rotation is one of the primary determinants of the magnetic dynamo, but this point could not be demonstrated by observing the Sun: the Sun's magnetic activity is in fact strongly modulated (due to the 11-year magnetic spot cycle), but this effect is not directly dependent on the rotation period.
Solar flares usually follow the solar cycle. CORONAS-F was launched on 31 July 2001 to coincide with the 23rd solar cycle maximum.
The solar flare of 29 October 2003 apparently showed a significant degree of linear polarization (> 70% in channels E2 = 40–60 keV and E3 = 60–100 keV, but only about 50% in E1 = 20–40 keV) in hard X-rays, but other observations have generally only set upper limits.
Coronal loops form the basic structure of the lower corona and transition region of the Sun. These highly structured and elegant loops are a direct consequence of the twisted solar magnetic flux within the solar body. The population of coronal loops can be directly linked with the solar cycle; for this reason, coronal loops are often found with sunspots at their footpoints. Coronal loops populate both active and quiet regions of the solar surface. The Yohkoh Soft X-ray Telescope (SXT) observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arc seconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making it an ideal observational platform to compare with data collected from TRACE coronal loops radiating in the EUV wavelengths.
Variations of solar-flare emission in soft X-rays (10–130 nm) and EUV (26–34 nm) recorded on board CORONAS-F demonstrate that, for most flares observed by CORONAS-F in 2001–2003, UV radiation preceded the X-ray emission by 1–10 minutes.
White dwarfs
When the core of a medium mass star contracts, it causes a release of energy that makes the envelope of the star expand. This continues until the star finally blows its outer layers off. The core of the star remains intact and becomes a white dwarf. The white dwarf is surrounded by an expanding shell of gas in an object known as a planetary nebula. Planetary nebulae seem to mark the transition of a medium mass star from red giant to white dwarf. X-ray images reveal clouds of multimillion degree gas that have been compressed and heated by the fast stellar wind. Eventually the central star collapses to form a white dwarf. For a billion or so years after a star collapses to form a white dwarf, it is "white" hot with surface temperatures of ~20,000 K.
X-ray emission has been detected from PG 1658+441, a hot, isolated, magnetic white dwarf, first detected in an Einstein IPC observation and later identified in an Exosat channel multiplier array observation. "The broad-band spectrum of this DA white dwarf can be explained as emission from a homogeneous, high-gravity, pure hydrogen atmosphere with a temperature near 28,000 K." These observations of PG 1658+441 support a correlation between temperature and helium abundance in white dwarf atmospheres.
A super soft X-ray source (SSXS) radiates soft X-rays in the range of 0.09 to 2.5 keV. Super soft X-rays are believed to be produced by steady nuclear fusion on a white dwarf's surface of material pulled from a binary companion. This requires a flow of material sufficiently high to sustain the fusion.
Analysis of the activity of the SSXS V Sge, in which episodes of long low states occur in a cycle of ~400 days, suggests that real mass-transfer variations may be occurring, similar to those in SSXS RX J0513.9-6951.
HD 49798 is a subdwarf star that forms a binary system with RX J0648.0-4418. The subdwarf star is a bright object in the optical and UV bands. The orbital period of the system is accurately known. Recent XMM-Newton observations timed to coincide with the expected eclipse of the X-ray source allowed an accurate determination of the mass of the X-ray source (at least 1.2 solar masses), establishing the X-ray source as a rare, ultra-massive white dwarf.
Brown dwarfs
According to theory, an object that has a mass of less than about 8% of the mass of the Sun cannot sustain significant nuclear fusion in its core. This marks the dividing line between red dwarf stars and brown dwarfs. The dividing line between planets and brown dwarfs occurs with objects that have masses below about 1% of the mass of the Sun, or 10 times the mass of Jupiter. These objects cannot fuse deuterium.
LP 944-20
With no strong central nuclear energy source, the interior of a brown dwarf is in a rapid boiling, or convective, state. When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the development of a strong, tangled magnetic field near the surface. The flare observed by Chandra from LP 944-20 could have its origin in the turbulent magnetized hot material beneath the brown dwarf's surface. A sub-surface flare could conduct heat to the atmosphere, allowing electric currents to flow and produce an X-ray flare, like a stroke of lightning. The absence of X-rays from LP 944-20 during the non-flaring period is also a significant result. It sets the lowest observational limit on steady X-ray power produced by a brown dwarf, and shows that coronae cease to exist as the surface temperature of a brown dwarf cools below about 2,500 °C and its atmosphere becomes electrically neutral.
TWA 5B
Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low mass brown dwarf in a multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A) has been resolved in X-rays. "Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius", said Yohko Tsuboi of Chuo University in Tokyo. "This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun", said Tsuboi. "This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!"
X-ray reflection
Electric potentials of about 10 million volts, and currents of 10 million amps – a hundred times greater than the most powerful lightning bolts – are required to explain the auroras at Jupiter's poles, which are a thousand times more powerful than those on Earth.
On Earth, auroras are triggered by solar storms of energetic particles, which disturb Earth's magnetic field. Gusts of particles from the Sun also distort Jupiter's magnetic field, giving it a swept-back appearance, and on occasion produce auroras.
Saturn's X-ray spectrum is similar to that of X-rays from the Sun, indicating that Saturn's X-radiation is due to the reflection of solar X-rays by Saturn's atmosphere. In visible light Saturn is much brighter and shows its ring structures, which were not detected in X-rays.
X-ray fluorescence
Some of the detected X-rays, originating from solar system bodies other than the Sun, are produced by fluorescence. Scattered solar X-rays provide an additional component.
In the Röntgensatellit (ROSAT) image of the Moon, pixel brightness corresponds to X-ray intensity. The bright lunar hemisphere shines in X-rays because it re-emits X-rays originating from the Sun. The background sky has an X-ray glow in part due to the myriad of distant, powerful active galaxies, unresolved in the ROSAT picture. The dark side of the Moon's disk shadows this X-ray background radiation coming from deep space. Only a few X-rays seem to come from the shadowed lunar hemisphere; instead, they originate in Earth's geocorona, or extended atmosphere, which surrounds the orbiting X-ray observatory. The measured lunar X-ray luminosity of ~1.2 × 10⁵ W makes the Moon one of the weakest known non-terrestrial X-ray sources.
Comet detection
NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind – a fast-moving stream of particles from the sun – interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.
Celestial X-ray sources
The celestial sphere has been divided into 88 constellations. The IAU constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them are galaxies or black holes at the centers of galaxies. Some are pulsars. As with other astronomical X-ray sources, striving to understand how each apparent source generates X-rays helps in understanding the Sun, the universe as a whole, and how these affect us on Earth.
Andromeda
Multiple X-ray sources have been detected in the Andromeda Galaxy, using observations from the ESA's XMM-Newton orbiting observatory.
Boötes
3C 295 (Cl 1409+524) in Boötes is one of the most distant galaxy clusters observed by X-ray telescopes. The cluster is filled with a vast cloud of 50 MK gas that radiates strongly in X-rays. Chandra observed that the central galaxy is a strong, complex source of X-rays.
Camelopardalis
Hot X-ray emitting gas pervades the galaxy cluster MS 0735.6+7421 in Camelopardalis. Two vast cavities – each 600,000 light-years in diameter – appear on opposite sides of a large galaxy at the center of the cluster. These cavities are filled with a two-sided, elongated, magnetized bubble of extremely high-energy electrons that emit radio waves.
Canes Venatici
The X-ray landmark NGC 4151, an intermediate spiral Seyfert galaxy, has a massive black hole in its core.
Canis Major
A Chandra X-ray image of Sirius A and B shows Sirius B to be more luminous than Sirius A, whereas in the visual range Sirius A is the more luminous.
Cassiopeia
Regarding the Cassiopeia A SNR, it is believed that first light from the stellar explosion reached Earth approximately 300 years ago, but there are no historical records of any sightings of the progenitor supernova, probably because interstellar dust absorbed the optical wavelength radiation before it reached Earth (although it is possible that it was recorded as a sixth magnitude star 3 Cassiopeiae by John Flamsteed on 16 August 1680). Possible explanations lean toward the idea that the source star was unusually massive and had previously ejected much of its outer layers. These outer layers would have cloaked the star and reabsorbed much of the light released as the inner star collapsed.
CTA 1 is another SNR X-ray source in Cassiopeia. A pulsar in the CTA 1 supernova remnant (4U 0000+72) initially emitted radiation in the X-ray bands (1970–1977). Strangely, when it was observed at a later time (2008), X-ray radiation was not detected. Instead, the Fermi Gamma-ray Space Telescope detected that the pulsar was emitting gamma radiation, the first of its kind.
Carina
Three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota.
Cetus
Abell 400 is a galaxy cluster containing a galaxy (NGC 1128) that hosts two supermassive black holes, known together as the radio source 3C 75, spiraling towards merger.
Chamaeleon
The Chamaeleon complex is a large star forming region (SFR) that includes the Chamaeleon I, Chamaeleon II, and Chamaeleon III dark clouds. It occupies nearly all of the constellation and overlaps into Apus, Musca, and Carina. The mean density of X-ray sources is about one source per square degree.
Chamaeleon I dark cloud
The Chamaeleon I (Cha I) cloud is a coronal cloud and one of the nearest active star formation regions at ~160 pc. It is relatively isolated from other star-forming clouds, so it is unlikely that older pre-main sequence (PMS) stars have drifted into the field. The total stellar population is 200–300. The Cha I cloud is further divided into the North cloud or region and South cloud or main cloud.
Chamaeleon II dark cloud
The Chamaeleon II dark cloud contains some 40 X-ray sources. Observation in Chamaeleon II was carried out from 10 to 17 September 1993. Source RXJ 1301.9-7706, a new WTTS candidate of spectral type K1, is closest to 4U 1302–77.
Chamaeleon III dark cloud
"Chamaeleon III appears to be devoid of current star-formation activity." HD 104237 (spectral type A4e) observed by ASCA, located in the Chamaeleon III dark cloud, is the brightest Herbig Ae/Be star in the sky.
Corona Borealis
The galaxy cluster Abell 2142 emits X-rays and is in Corona Borealis. It is one of the most massive objects in the universe.
Corvus
From the Chandra X-ray analysis of the Antennae Galaxies, rich deposits of neon, magnesium, and silicon were discovered. These elements are among those that form the building blocks for habitable planets. The clouds imaged contain magnesium and silicon at 16 and 24 times, respectively, the abundance in the Sun.
Crater
The jet exhibited in X-rays coming from PKS 1127-145 is likely due to the collision of a beam of high-energy electrons with microwave photons.
Draco
The Draco nebula appears as a soft X-ray shadow, outlined by contours, in ROSAT images of a portion of the constellation Draco.
Abell 2256 is a galaxy cluster of more than 500 galaxies. The double structure of this ROSAT image shows the merging of two clusters.
Eridanus
Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα-emitting filaments.
Hydra
A large cloud of hot gas extends throughout the Hydra A galaxy cluster.
Leo Minor
Arp 260 is an X-ray source in Leo Minor.
Orion
Paired images of the constellation Orion show, on the right, the visual appearance of the constellation and, on the left, Orion as seen in X-rays only. Betelgeuse is easily seen above the three stars of Orion's belt in the visual image. The brightest object in the visual image is the full Moon, which also appears in the X-ray image. The X-ray colors represent the temperature of the X-ray emission from each star: hot stars are blue-white and cooler stars are yellow-red.
Pegasus
Stephan's Quintet is of interest because of its violent collisions. Four of the five galaxies in Stephan's Quintet form a physical association and are involved in a cosmic dance that most likely will end with the galaxies merging. As NGC 7318B collides with gas in the group, a huge shock wave bigger than the Milky Way spreads throughout the medium between the galaxies, heating some of the gas to temperatures of millions of degrees, where it emits X-rays detectable with the NASA Chandra X-ray Observatory. NGC 7319 has a type 2 Seyfert nucleus.
Perseus
The Perseus galaxy cluster is one of the most massive objects in the universe, containing thousands of galaxies immersed in a vast cloud of multimillion degree gas.
Pictor
Pictor A is a galaxy that may have a black hole at its center which has emitted magnetized gas at extremely high speed. The bright spot at the right in the image is the head of the jet. As it plows into the tenuous gas of intergalactic space, it emits X-rays. Pictor A is the X-ray source designated H 0517-456 and 3U 0510-44.
Puppis
Puppis A is a supernova remnant (SNR) about 10 light-years in diameter. The supernova occurred approximately 3700 years ago.
Sagittarius
The Galactic Center is at 1745–2900 which corresponds to Sagittarius A*, very near to radio source Sagittarius A (W24). In probably the first catalogue of galactic X-ray sources, two Sgr X-1s are suggested: (1) at 1744–2312 and (2) at 1755–2912, noting that (2) is an uncertain identification. Source (1) seems to correspond to S11.
Sculptor
The unusual shape of the Cartwheel Galaxy may be due to a collision with a smaller galaxy such as those in the lower left of the image. The most recent star burst (star formation due to compression waves) has lit up the Cartwheel rim, which has a diameter larger than the Milky Way. There is an exceptionally large number of black holes in the rim of the galaxy as can be seen in the inset.
Serpens
As of 27 August 2007, discoveries concerning asymmetric iron line broadening and their implications for relativity have been a topic of much excitement. With respect to the asymmetric iron line broadening, Edward Cackett of the University of Michigan commented, "We're seeing the gas whipping around just outside the neutron star's surface. And since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size of the neutron star's diameter. The neutron stars can be no larger than 18 to 20.5 miles across, results that agree with other types of measurements."
"We've seen these asymmetric lines from many black holes, but this is the first confirmation that neutron stars can produce them as well. It shows that the way neutron stars accrete matter is not very different from that of black holes, and it gives us a new tool to probe Einstein's theory", says Tod Strohmayer of NASA's Goddard Space Flight Center.
"This is fundamental physics", says Sudip Bhattacharyya also of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland. "There could be exotic kinds of particles or states of matter, such as quark matter, in the centers of neutron stars, but it's impossible to create them in the lab. The only way to find out is to understand neutron stars."
Using XMM-Newton, Bhattacharyya and Strohmayer observed Serpens X-1, which contains a neutron star and a stellar companion. Cackett and Jon Miller of the University of Michigan, along with Bhattacharyya and Strohmayer, used Suzaku's superb spectral capabilities to survey Serpens X-1. The Suzaku data confirmed the XMM-Newton result regarding the iron line in Serpens X-1.
Ursa Major
M82 X-1 is in the constellation Ursa Major. It was detected in January 2006 by the Rossi X-ray Timing Explorer.
In Ursa Major at RA 10h 34m 00.00 Dec +57° 40' 00.00" is a field of view that is almost free of absorption by neutral hydrogen gas within the Milky Way. It is known as the Lockman Hole. Hundreds of X-ray sources from other galaxies, some of them supermassive black holes, can be seen through this window.
Exotic X-ray sources
Microquasar
A microquasar is a smaller cousin of a quasar: a radio-emitting X-ray binary, often with a resolvable pair of radio jets. SS 433 is one of the most exotic star systems observed. It is an eclipsing binary whose primary is either a black hole or a neutron star and whose secondary is a late A-type star. SS 433 lies within SNR W50. Material from the secondary is transferred to the primary, and the material in the jets travels at 26% of the speed of light. The spectrum of SS 433 is affected by Doppler shifts and by relativity: when the effects of the Doppler shift are subtracted, there is a residual redshift which corresponds to a velocity of about 12,000 km/s. This does not represent an actual velocity of the system away from the Earth; rather, it is due to time dilation, which makes moving clocks appear to stationary observers to be ticking more slowly. In this case, the relativistically moving excited atoms in the jets appear to vibrate more slowly and their radiation thus appears red-shifted.
Be X-ray binaries
LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. LSI+61°303 is a variable radio source characterized by periodic, non-thermal radio outbursts with a period of 26.5 d, attributed to the eccentric orbital motion of a compact object, probably a neutron star, around a rapidly rotating B0 Ve star with Teff ~26,000 K and a luminosity of ~10³⁸ erg s⁻¹. Photometric observations at optical and infrared wavelengths also show a 26.5 d modulation. Of the 20 or so members of the Be X-ray binary systems known as of 1996, only X Per and LSI+61°303 have X-ray outbursts of much higher luminosity and a harder spectrum (kT ~ 10–20 keV) than the others (kT ≤ 1 keV); LSI+61°303 further distinguishes itself by its strong, outbursting radio emission. "The radio properties of LSI+61°303 are similar to those of the "standard" high-mass X-ray binaries such as SS 433, Cyg X-3 and Cir X-1."
Supergiant fast X-ray transients (SFXTs)
There are a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997, remaining active only one day, with an X-ray spectrum well fitted with a thermal bremsstrahlung (temperature of ~20 keV), resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on 8 April 2008 with Swift.
Messier 87
Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. These loops and rings are generated by variations in the rate at which material is ejected from the supermassive black hole in jets. The distribution of loops suggests that minor eruptions occur every six million years.
One of the rings, caused by a major eruption, is a shock wave 85,000 light-years in diameter around the black hole. Other remarkable features observed include narrow X-ray emitting filaments up to 100,000 light-years long, and a large cavity in the hot gas caused by a major eruption 70 million years ago.
The galaxy also contains a notable active galactic nucleus (AGN) that is a strong source of multiwavelength radiation, particularly radio waves.
Magnetars
A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. The theory regarding these objects was proposed by Robert Duncan and Christopher Thompson in 1992, but the first recorded burst of gamma rays thought to have been from a magnetar was on 5 March 1979. These magnetic fields are hundreds of thousands of times stronger than any man-made magnet, and quadrillions of times more powerful than the field surrounding Earth. As of 2003, they are the most magnetic objects ever detected in the universe.
On 5 March 1979, after dropping probes into the atmosphere of Venus, Venera 11 and Venera 12, then in heliocentric orbits, were hit at 10:51 a.m. EST by a blast of gamma radiation. This contact raised the radiation readings on both probes' Konus experiments from a normal 100 counts per second to over 200,000 counts a second, in only a fraction of a millisecond. This giant flare was detected by numerous spacecraft, and with these detections it was localized by the interplanetary network to SGR 0526-66 inside the N-49 SNR of the Large Magellanic Cloud. Konus detected another source in March 1979: SGR 1900+14, located 20,000 light-years away in the constellation Aquila, which had a long period of low emission apart from the significant burst in 1979 and a couple afterward.
What is the evolutionary relationship between pulsars and magnetars? Astronomers would like to know if magnetars represent a rare class of pulsars, or if some or all pulsars go through a magnetar phase during their life cycles. NASA's Rossi X-ray Timing Explorer (RXTE) has revealed that the youngest known pulsing neutron star has thrown a temper tantrum. The collapsed star occasionally unleashes powerful bursts of X-rays, which are forcing astronomers to rethink the life cycle of neutron stars.
"We are watching one type of neutron star literally change into another right before our very eyes. This is a long-sought missing link between different types of pulsars", says Fotis Gavriil of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland, Baltimore.
PSR J1846-0258 is in the constellation Aquila. It had been classed as a normal pulsar because of its fast spin (3.1 s⁻¹) and pulsar-like spectrum. RXTE caught four magnetar-like X-ray bursts on 31 May 2006, and another on 27 July 2006. Although none of these events lasted longer than 0.14 seconds, they all packed the wallop of at least 75,000 Suns. "Never before has a regular pulsar been observed to produce magnetar bursts", says Gavriil.
"Young, fast-spinning pulsars were not thought to have enough magnetic energy to generate such powerful bursts", says Marjorie Gonzalez, formerly of McGill University in Montreal, Canada, now based at the University of British Columbia in Vancouver. "Here's a normal pulsar that's acting like a magnetar."
The observations from NASA's Chandra X-ray Observatory showed that the object had brightened in X-rays, confirming that the bursts were from the pulsar, and that its spectrum had changed to become more magnetar-like. The fact that PSR J1846's spin rate is decelerating also means that it has a strong magnetic field braking the rotation. The implied magnetic field is trillions of times stronger than Earth's field, but it's 10 to 100 times weaker than a typical magnetar. Victoria Kaspi of McGill University notes, "PSR J1846's actual magnetic field could be much stronger than the measured amount, suggesting that many young neutron stars classified as pulsars might actually be magnetars in disguise, and that the true strength of their magnetic field only reveals itself over thousands of years as they ramp up in activity."
X-ray dark stars
During the solar cycle, as shown in sequences of X-ray images of the Sun, the Sun is at times almost X-ray dark, behaving almost as an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. The X-ray flux from the entire stellar surface corresponds to a surface flux limit that ranges from 30–7,000 erg s⁻¹ cm⁻² at T = 1 MK to ~1 erg s⁻¹ cm⁻² at higher temperatures, five orders of magnitude below the quiet-Sun X-ray surface flux.
Like the red supergiant Betelgeuse, hardly any X-rays are emitted by red giants. The cause of the X-ray deficiency may involve
a turn-off of the dynamo,
a suppression by competing wind production, or
strong attenuation by an overlying thick chromosphere.
Prominent bright red giants include Aldebaran, Arcturus, and Gamma Crucis. There is an apparent X-ray "dividing line" in the H-R diagram among the giant stars as they cross from the main sequence to become red giants. Alpha Trianguli Australis (α TrA) appears to be a hybrid star, with characteristics of both sides of the "dividing line" in the evolutionary transition to red giants. α TrA can serve to test the several dividing-line models.
There is also a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F.
In the few genuine late A- or early F-type coronal emitters, their weak dynamo operation is generally not able to brake the rapidly spinning star considerably during their short lifetime so that these coronae are conspicuous by their severe deficit of X-ray emission compared to chromospheric and transition region fluxes; the latter can be followed up to mid-A type stars at quite high levels. Whether or not these atmospheres are indeed heated acoustically and drive an "expanding", weak and cool corona or whether they are heated magnetically, the X-ray deficit and the low coronal temperatures clearly attest to the inability of these stars to maintain substantial, hot coronae in any way comparable to cooler active stars, their appreciable chromospheres notwithstanding.
X-ray interstellar medium
The Hot Ionized Medium (HIM), sometimes consisting of coronal gas, in the temperature range 10⁶–10⁷ K emits X-rays. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, by X-ray satellite telescopes. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.
Diffuse X-ray background
In addition to discrete sources which stand out against the sky, there is good evidence for a diffuse X-ray background. During more than a decade of observations of X-ray emission from the Sun, evidence of the existence of an isotropic X-ray background flux was obtained in 1956. This background flux is rather consistently observed over a wide range of energies. The early high-energy end of the spectrum for this diffuse X-ray background was obtained by instruments on board Ranger 3 and Ranger 5. The X-ray flux corresponds to a total energy density of about 5 × 10⁻⁴ eV/cm³. The ROSAT soft X-ray diffuse background (SXRB) image shows the general increase in intensity from the Galactic plane to the poles. At the lowest energies, 0.1–0.3 keV, nearly all of the observed soft X-ray background (SXRB) is thermal emission from ~10⁶ K plasma.
By comparing the soft X-ray background with the distribution of neutral hydrogen, it is generally agreed that within the Milky Way disk, super soft X-rays are absorbed by this neutral hydrogen.
X-ray dark planets
X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area."
Earth
The first picture of the Earth in X-rays was taken in March 1996, with the orbiting Polar satellite. Energetic charged particles from the Sun cause auroras and energize electrons in the Earth's magnetosphere. These electrons move along the Earth's magnetic field and eventually strike the Earth's ionosphere, producing the X-ray emission.
| Physical sciences | High-energy astronomy | Astronomy |
28338635 | https://en.wikipedia.org/wiki/Command-line%20interface | Command-line interface | A command-line interface (CLI) is a means of interacting with a computer program by inputting lines of text called command lines. Command-line interfaces emerged in the mid-1960s, on computer terminals, as an interactive and more user-friendly alternative to the non-interactive mode available with punched cards.
Today, most computer users rely on graphical user interfaces ("GUIs") instead of CLIs. However, many programs and operating system utilities lack GUIs, and are intended to be used through CLIs.
Knowledge of CLIs is also useful for automating programs by writing scripts. Automation is a more accessible option for programs that have CLIs than for purely graphical UIs, since many individual commands can be collected together in a single file called a script. Executed as a program in its own right, a script allows the group of CLI commands it contains to be run together as a single batch of commands.
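As a minimal sketch of this idea (written here in Python rather than a shell's own scripting language, and assuming a Unix-like system where the ls and wc utilities are available; the file name notes.txt is hypothetical), a short script can bundle several command lines so that running the script runs them all as one batch:

import subprocess

# Each entry is one command line that would otherwise be typed interactively.
commands = [
    ["ls", "-l"],                # list files in long format
    ["wc", "-l", "notes.txt"],   # count lines in a (hypothetical) file
]

for cmd in commands:
    # Run the commands in sequence, as if typed one after another at the prompt.
    subprocess.run(cmd, check=False)   # check=False: keep going even if a command fails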
CLIs are made possible by command-line interpreters or command-line processors, which are programs that read command lines and carry out the commands.
Alternatives to CLIs include GUIs (most notably desktop metaphors with a mouse pointer, such as Microsoft Windows), text-based user interface menus (such as DOS Shell and IBM AIX SMIT), and keyboard shortcuts.
Comparison to graphical user interfaces
Compared with a graphical user interface, a command-line interface requires fewer system resources to implement. Since options to commands are given in a few characters in each command line, an experienced user often finds the options easier to access. Automation of repetitive tasks is simplified by line editing and history mechanisms for storing frequently used sequences; this may extend to a scripting language that can take parameters and variable options. A command-line history can be kept, allowing review or repetition of commands.
A command-line system may require paper or online manuals for the user's reference, although often a help option provides a concise review of the options of a command. The command-line environment may not provide graphical enhancements such as different fonts or extended edit windows found in a GUI. It may be difficult for a new user to become familiar with all the commands and options available, compared with the icons and drop-down menus of a graphical user interface, without reference to manuals.
Types
Operating system command-line interfaces
Operating system (OS) command-line interfaces are usually distinct programs supplied with the operating system. A program that implements such a text interface is often called a command-line interpreter, command processor or shell.
Examples of command-line interpreters include Nushell, DEC's DIGITAL Command Language (DCL) in OpenVMS and RSX-11, the various Unix shells (sh, ksh, csh, tcsh, zsh, Bash, etc.), CP/M's CCP, DOS' COMMAND.COM, as well as the OS/2 and the Windows CMD.EXE programs, the latter groups being based heavily on DEC's RSX-11 and RSTS CLIs. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS, 4OS2 for OS/2, and 4NT / Take Command for Windows.
Although the term shell is often used to describe a command-line interpreter, strictly speaking, a shell can be any program that constitutes the user interface, including fully graphically oriented ones. For example, the default Windows GUI is a shell program named EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration file. These programs are shells, but not CLIs.
Application command-line interfaces
Application programs (as opposed to operating systems) may also have command-line interfaces.
An application program may support none, any, or all of these three major types of command-line interface mechanisms:
Parameters: Most command-line interfaces support a means to pass additional information to a program when it is launched.
Interactive command-line sessions: After launch, a program may provide an operator with an independent means to enter commands.
Inter-process communication: Most operating systems support means of inter-process communication (for example, standard streams or named pipes). Command lines from client processes may be redirected to a CLI program by one of these methods.
Some applications support a CLI, presenting their own prompt to the user and accepting command lines. Other programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file. In other cases, a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features.
In Colossal Cave Adventure from 1975, the user uses a CLI to enter one or two words to explore a cave system.
History
The command-line interface evolved from a form of communication conducted by people over teleprinter (TTY) machines. Sometimes these involved sending an order or a confirmation using telex. Early computer systems often used teleprinter as the means of interaction with an operator.
The mechanical teleprinter was replaced by a "glass tty", a keyboard and screen emulating the teleprinter. "Smart" terminals permitted additional functions, such as cursor movement over the entire screen, or local editing of data on the terminal for transmission to the computer. As the microcomputer revolution replaced the traditional minicomputer-plus-terminals time-sharing architecture, hardware terminals were replaced by terminal emulators — PC software that interpreted terminal signals sent through the PC's serial ports. These were typically used to interface an organization's new PCs with their existing mini- or mainframe computers, or to connect PC to PC. Some of these PCs were running Bulletin Board System software.
Early operating system CLIs were implemented as part of resident monitor programs, and could not easily be replaced. The first implementation of the shell as a replaceable component was part of the Multics time-sharing operating system. In 1964, MIT Computation Center staff member Louis Pouzin developed the RUNCOM tool for executing command scripts while allowing argument substitution. Pouzin coined the term shell to describe the technique of using commands like a programming language, and wrote a paper about how to implement the idea in the Multics operating system. Pouzin returned to his native France in 1965, and the first Multics shell was developed by Glenda Schroeder.
The first Unix shell, the V6 shell, was developed by Ken Thompson in 1971 at Bell Labs and was modeled after Schroeder's Multics shell. The Bourne shell was introduced in 1977 as a replacement for the V6 shell. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs. The Bourne shell led to the development of the KornShell (ksh), Almquist shell (ash), and the popular Bourne-again shell (or Bash).
Early microcomputers themselves were based on a command-line interface such as CP/M, DOS or AppleSoft BASIC. During the 1980s and 1990s, the introduction of the Apple Macintosh and of Microsoft Windows on PCs saw the command-line interface replaced as the primary user interface by the graphical user interface. The command line remained available as an alternative user interface, often used by system administrators and other advanced users for system administration, computer programming and batch processing.
In November 2006, Microsoft released version 1.0 of Windows PowerShell (formerly codenamed Monad), which combined features of traditional Unix shells with their proprietary object-oriented .NET Framework. MinGW and Cygwin are open-source packages for Windows that offer a Unix-like CLI. Microsoft provides MKS Inc.'s ksh implementation MKS Korn shell for Windows through their Services for UNIX add-on.
Since 2001, the Macintosh operating system macOS has been based on a Unix-like operating system called Darwin. On these computers, users can access a Unix-like command-line interface by running the terminal emulator program called Terminal, which is found in the Utilities sub-folder of the Applications folder, or by remotely logging into the machine using ssh. Z shell is the default shell for macOS; Bash, tcsh, and the KornShell are also provided. Before macOS Catalina, Bash was the default.
Usage
A CLI is used whenever a large vocabulary of commands or queries, coupled with a wide (or arbitrary) range of options, can be entered more rapidly as text than with a pure GUI. This is typically the case with operating system command shells. CLIs are also used by systems with insufficient resources to support a graphical user interface. Some computer language systems (such as Python, Forth, LISP, Rexx, and many dialects of BASIC) provide an interactive command-line mode to allow for rapid evaluation of code.
CLIs are often used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users. CLIs are also popular among people with visual disabilities since the commands and responses can be displayed using refreshable Braille displays.
Anatomy of a shell CLI
The general pattern of a command line interface is:
Prompt command param1 param2 param3 … paramN
Prompt — generated by the program to provide context for the user.
Command — provided by the user. Commands are usually one of two classes:
Internal commands are recognized and processed by the command line interpreter. Internal commands are also called built-in commands.
External commands run executables found in separate executable files. The command line interpreter searches for executable files with names matching the external command.
param1 …paramN — parameters provided by the user. The format and meaning of the parameters depends upon the command. In the case of external commands, the values of the parameters are delivered to the program as it is launched by the OS. Parameters may be either arguments or options.
In this format, the delimiters between command-line elements are whitespace characters and the end-of-line delimiter is the newline delimiter. This is a widely used (but not universal) convention.
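For example, in the hypothetical Unix-style command line
$ cp -r notes backup
the $ is the prompt, cp is the command, and -r, notes and backup are its parameters: -r is an option, while notes and backup are arguments naming the source and the destination.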
A CLI can generally be considered as consisting of syntax and semantics. The syntax is the grammar that all commands must follow. In the case of operating systems, DOS and Unix each define their own set of rules that all commands must follow. In the case of embedded systems, each vendor, such as Nortel, Juniper Networks or Cisco Systems, defines their own proprietary set of rules. These rules also dictate how a user navigates through the system of commands. The semantics define what sort of operations are possible, on what sort of data these operations can be performed, and how the grammar represents these operations and data—the symbolic meaning in the syntax.
Two different CLIs may agree on either syntax or semantics, but it is only when they agree on both that they can be considered sufficiently similar to allow users to use both CLIs without needing to learn anything, as well as to enable re-use of scripts.
A simple CLI will display a prompt, accept a command line typed by the user terminated by the Enter key, then execute the specified command and provide textual display of results or error messages. Advanced CLIs will validate, interpret and parameter-expand the command line before executing the specified command, and optionally capture or redirect its output.
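The read-parse-dispatch loop of such a simple CLI can be sketched in a few lines of Python (a toy interpreter with made-up built-in commands, not a model of any real shell):

def simple_cli():
    while True:
        try:
            line = input("> ")            # display a prompt and read one command line
        except EOFError:
            break                         # end of input ends the session
        parts = line.split()              # whitespace-delimited command and parameters
        if not parts:
            continue                      # empty line: just show the prompt again
        command, params = parts[0], parts[1:]
        if command == "exit":             # internal (built-in) command
            break
        elif command == "echo":           # internal command that uses its parameters
            print(" ".join(params))
        else:
            print(command + ": command not found")   # simple error message

simple_cli()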
Unlike a button or menu item in a GUI, a command line is typically self-documenting, stating exactly what the user wants done. In addition, command lines usually include many defaults that can be changed to customize the results. Useful command lines can be saved by assigning a character string or alias to represent the full command, or several commands can be grouped to perform a more complex sequence – for instance, compile the program, install it, and run it – creating a single entity, called a command procedure or script, which itself can be treated as a command. These advantages mean that a user must figure out a complex command or series of commands only once, because they can be saved to be used again.
The commands given to a CLI shell are often in one of the following forms:
doSomething how toFiles
doSomething how sourceFile destinationFile
doSomething how < inputFile > outputFile
where doSomething is, in effect, a verb, how an adverb (for example, should the command be executed verbosely or quietly) and toFiles an object or objects (typically one or more files) on which the command should act. The > in the third example is a redirection operator, telling the command-line interpreter to send the output of the command not to its own standard output (the screen) but to the named file. This will overwrite the file. Using >> will redirect the output and append it to the file. Another redirection operator is the vertical bar (|), which creates a pipeline where the output of one command becomes the input to the next command.
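As a concrete illustration (with hypothetical file names, assuming the Unix sort and uniq utilities), sort < names.txt > sorted.txt reads its input from names.txt and writes the sorted result to sorted.txt, sort names.txt | uniq pipes the output of sort into uniq, and appending >> log.txt to a command adds its output to the end of log.txt rather than overwriting it.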
CLI and resource protection
One can modify the set of available commands by modifying which paths appear in the PATH environment variable. Under Unix, commands also need to be marked as executable files. The directories in the path variable are searched in the order they are given. By re-ordering the path, one can run e.g. \OS2\MDOS\E.EXE instead of \OS2\E.EXE, when the default is the opposite. Renaming of the executables also works: people often rename their favourite editor to EDIT, for example.
The command line allows one to restrict available commands, such as access to advanced internal commands. The Windows CMD.EXE does this. Often, shareware programs will limit the range of commands, including printing a message such as 'your administrator has disabled running batch files' at the prompt.
Some CLIs, such as those in network routers, have a hierarchy of modes, with a different set of commands supported in each mode. The set of commands are grouped by association with security, system, interface, etc. In these systems the user might traverse through a series of sub-modes. For example, if the CLI had two modes called interface and system, the user might use the command interface to enter the interface mode. At this point, commands from the system mode may not be accessible until the user exits the interface mode and enters the system mode.
Command prompt
A command prompt (or just prompt) is a sequence of (one or more) characters used in a command-line interface to indicate readiness to accept commands. It literally prompts the user to take action. A prompt usually ends with one of the characters $, %, #, :, > or - and often includes other information, such as the path of the current working directory and the hostname.
On many Unix and derivative systems, the prompt commonly ends in $ or % if the user is a normal user, but in # if the user is a superuser ("root" in Unix terminology).
End-users can often modify prompts. Depending on the environment, they may include colors, special characters, and other elements (like variables and functions for the current time, user, shell number or working directory) in order, for instance, to make the prompt more informative or visually pleasing, to distinguish sessions on various machines, or to indicate the current level of nesting of commands. On some systems, special tokens in the definition of the prompt can be used to cause external programs to be called by the command-line interpreter while displaying the prompt.
In DOS' COMMAND.COM and in Windows NT's cmd.exe, users can modify the prompt by issuing a PROMPT command or by directly changing the value of the corresponding %PROMPT% environment variable. The default of most modern systems, the C:\> style, is obtained, for instance, with PROMPT $P$G. The default of older DOS systems, C>, is obtained by just PROMPT, although on some systems this produces the newer C:\> style, unless used on floppy drives A: or B:; on those systems PROMPT $N$G can be used to override the automatic default and explicitly switch to the older style.
Many Unix systems feature the $PS1 variable (Prompt String 1), although other variables also may affect the prompt (depending on the shell used). In the Bash shell, a prompt of the form:
[time] user@host: work_dir $
could be set by issuing the command
export PS1='[\t] \u@\H: \W $'
In zsh the $RPROMPT variable controls an optional prompt on the right-hand side of the display. It is not a real prompt in that the location of text entry does not change. It is used to display information on the same line as the prompt, but right-justified.
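For instance, in zsh, something along these lines (the chosen content is arbitrary) places the current working directory on the right-hand side of the line:
RPROMPT='%~'        # right-justified display of the working directory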
In RISC OS the command prompt is a * symbol, and thus (OS) CLI commands are often referred to as star commands. One can also access the same commands from other command lines (such as the BBC BASIC command line), by preceding the command with a *.
Arguments
A command-line argument or parameter is an item of information provided to a program when it is started. A program can have many command-line arguments that identify sources or destinations of information, or that alter the operation of the program.
When a command processor is active, a program is typically invoked by typing its name followed by command-line arguments (if any). For example, in Unix and Unix-like environments, an example of a command-line argument is:
rm file.s
Here file.s is a command-line argument which tells the program rm to remove the file named file.s.
Some programming languages, such as C, C++ and Java, allow a program to interpret the command-line arguments by handling them as string parameters in the main function. Other languages, such as Python, expose operating-system-specific functionality (API) through the sys module, and in particular sys.argv for command-line arguments.
In Unix-like operating systems, a single hyphen used in place of a file name is a special value specifying that a program should handle data coming from the standard input or send data to the standard output.
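Two illustrative uses, with hypothetical file names:
sort data.txt | diff reference.txt -        # "-" makes diff read its second operand from standard input
tar -cf - project/ | gzip > project.tar.gz  # "-" makes tar write the archive to standard output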
Command-line option
A command-line option or simply option (also known as a flag or switch) modifies the operation of a command; the effect is determined by the command's program. Options follow the command name on the command line, separated by spaces. A space before the first option is not always required; for example, Dir/? and DIR /? in DOS have the same effect of listing the DIR command's available options, whereas dir --help (in many versions of Unix) does require the option to be preceded by at least one space (and is case-sensitive).
The format of options varies widely between operating systems. In most cases the syntax is by convention rather than an operating system requirement; the entire command line is simply a string passed to a program, which can process it in any way the programmer wants, so long as the interpreter can tell where the command name ends and its arguments and options begin.
A few representative samples of command-line options, all relating to listing files in a directory, illustrate some of these conventions: in Unix-like systems, ls -l requests a long-format listing, while in DOS and Windows, dir /w requests a wide-format listing and dir /p pauses the output after each screenful.
Abbreviating commands
In Multics, command-line options and subsystem keywords may be abbreviated. This idea appears to derive from the PL/I programming language, with its shortened keywords (e.g., STRG for STRINGRANGE and DCL for DECLARE). For example, in the Multics forum subsystem, the -long_subject parameter can be abbreviated -lgsj. It is also common for Multics commands to be abbreviated, typically corresponding to the initial letters of the words that are strung together with underscores to form command names, such as the use of did for delete_iacl_dir.
In some other systems abbreviations are automatic, such as permitting enough of the first characters of a command name to uniquely identify it (such as SU as an abbreviation for SUPERUSER) while others may have some specific abbreviations pre-programmed (e.g. MD for MKDIR in COMMAND.COM) or user-defined via batch scripts and aliases (e.g. alias md mkdir in tcsh).
Option conventions in DOS, Windows, OS/2
On DOS, OS/2 and Windows, different programs called from their COMMAND.COM or CMD.EXE (or their internal commands) may use different syntax within the same operating system. For example:
Options may be indicated by one of the switch characters / or -, or either may be allowed. See below.
They may or may not be case-sensitive.
Sometimes options and their arguments are run together, sometimes separated by whitespace, and sometimes by a character, typically : or =; thus Prog -fFilename, Prog -f Filename, Prog -f:Filename, Prog -f=Filename.
Some programs allow single-character options to be combined; others do not. The switch -fA may mean the same as -f -A, or it may be incorrect, or it may even be a valid but different parameter.
In DOS, OS/2 and Windows, the forward slash (/) is most prevalent, although the hyphen-minus is also sometimes used. In many versions of DOS (MS-DOS/PC DOS 2.xx and higher, all versions of DR-DOS since 5.0, as well as PTS-DOS, Embedded DOS, FreeDOS and RxDOS) the switch character (sometimes abbreviated switchar or switchchar) to be used is defined by a value returned from a system call (INT 21h/AX=3700h). The default character returned by this API is /, but can be changed to a hyphen-minus on the above-mentioned systems, except for under Datalight ROM-DOS and MS-DOS/PC DOS 5.0 and higher, which always return / from this call (unless one of many available TSRs to reenable the SwitChar feature is loaded). In some of these systems (MS-DOS/PC DOS 2.xx, DOS Plus 2.1, DR-DOS 7.02 and higher, PTS-DOS, Embedded DOS, FreeDOS and RxDOS), the setting can also be pre-configured by a SWITCHAR directive in CONFIG.SYS. General Software's Embedded DOS provides a SWITCH command for the same purpose, whereas 4DOS allows the setting to be changed via SETDOS /W:n. Under DR-DOS, if the setting has been changed from /, the first directory separator \ in the display of the PROMPT parameter $G will change to a forward slash / (which is also a valid directory separator in DOS, FlexOS, 4680 OS, 4690 OS, OS/2 and Windows), thereby serving as a visual clue to indicate the change. The current setting is also reflected in the built-in help screens. Some versions of DR-DOS COMMAND.COM also support a PROMPT token $/ to display the current setting. COMMAND.COM since DR-DOS 7.02 also provides a pseudo-environment variable named %/% to allow portable batch jobs to be written. Several external DR-DOS commands additionally support an environment variable %SWITCHAR% to override the system setting.
However, many programs are hardwired to use / only, rather than retrieving the switch setting before parsing command-line arguments. A very small number, mainly ports from Unix-like systems, are programmed to accept - even if the switch character is not set to it (for example netstat and ping, supplied with Microsoft Windows, will accept the /? option to list available options, and yet the list will specify the - convention).
Option conventions in Unix-like systems
In Unix-like systems, the ASCII hyphen-minus begins options; the new (and GNU) convention is to use two hyphens then a word (e.g. --create) to identify the option's use while the old convention (and still available as an option for frequently-used options) is to use one hyphen then one letter (e.g., -c); if one hyphen is followed by two or more letters it may mean two options are being specified, or it may mean the second and subsequent letters are a parameter (such as filename or date) for the first option.
Two hyphen-minus characters without following letters (--) may indicate that the remaining arguments should not be treated as options, which is useful for example if a file name itself begins with a hyphen, or if further arguments are meant for an inner command (e.g., sudo). Double hyphen-minuses are also sometimes used to prefix long options where more descriptive option names are used. This is a common feature of GNU software. The getopt function and program, and the getopts command are usually used for parsing command-line options.
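A minimal sketch of option handling with the shell's built-in getopts (the option letters, defaults and file names are invented for the example):
#!/bin/sh
# Accepts -v (a flag) and -o FILE (an option with an argument); "--" ends option parsing.
verbose=0
outfile=out.txt
while getopts "vo:" opt; do
  case "$opt" in
    v) verbose=1 ;;
    o) outfile="$OPTARG" ;;
    *) echo "usage: $0 [-v] [-o file] [--] file..." >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))   # discard the parsed options; "$@" now holds the remaining operands
printf 'verbose=%s outfile=%s operands: %s\n' "$verbose" "$outfile" "$*"
Invoked as script.sh -v -o results.txt -- -odd-name, the argument -odd-name is treated as an operand rather than as options.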
Unix command names, arguments and options are case-sensitive (except in a few examples, mainly where popular commands from other operating systems have been ported to Unix).
Option conventions in other systems
FlexOS, 4680 OS and 4690 OS use -.
CP/M typically used [.
Conversational Monitor System (CMS) uses a single left parenthesis to separate options at the end of the command from the other arguments. For example, in the following command the options indicate that the target file should be replaced if it exists, and the date and time of the source file should be retained on the copy:
COPY source file a target file b (REPLACE OLDDATE)
Data General's CLI under their RDOS, AOS, etc. operating systems, as well as the version of CLI that came with their Business Basic, uses only / as the switch character, is case-insensitive, and allows local switches on some arguments to control the way they are interpreted; for example, the macro assembler command can take a global option to append user symbols, together with two local switches, one to specify that LIB should be skipped on pass 2 and the other to direct the listing to the printer, $LPT.
Built-in usage help
One of the criticisms of a CLI is the lack of cues to the user as to the available actions. In contrast, GUIs usually inform the user of available actions with menus, icons, or other visual cues. To overcome this limitation, many CLI programs display a usage message, typically when invoked with no arguments or one of ?, -?, -h, -H, /?, /h, /H, /Help, -help, or --help.
However, entering a program name without parameters in the hope that it will display usage help can be hazardous, as programs and scripts for which command line arguments are optional will execute without further notice.
Although support for at least the help parameter is desirable, programs may not support all of the option lead-in characters exemplified above.
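A small sketch of the usage-message convention in a shell script (the option letters and messages are illustrative):
#!/bin/sh
# Print a usage message when invoked with no arguments or with a help option,
# instead of silently starting work.
case "$1" in
  ""|-h|--help|'-?')
    echo "usage: ${0##*/} [-v] inputfile"
    exit 0
    ;;
esac
echo "processing $1"    # placeholder for the program's real work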
Under DOS, where the default command-line option character can be changed from / to -, programs may query the SwitChar API in order to determine the current setting. So, if a program is not hardwired to support them all, a user may need to know the current setting even to be able to reliably request help.
If the SwitChar has been changed to -, and the / character is therefore accepted as an alternative path delimiter also at the DOS command line, programs may misinterpret options like /h or /H as paths rather than help parameters. However, if given as the first or only parameter, most DOS programs will, by convention, accept it as a request for help regardless of the current SwitChar setting.
In some cases, different levels of help can be selected for a program. Some programs supporting this allow a verbosity level to be given as an optional argument to the help parameter (as in /H:1, /H:2, etc.), or they give just a short help text for the question-mark parameter and a longer help screen for the other help options.
Depending on the program, additional or more specific help on accepted parameters is sometimes available by either providing the parameter in question as an argument to the help parameter or vice versa (as in /H:W or in /W:? (assuming /W would be another parameter supported by the program)).
In a similar fashion to the help parameter, but much less common, some programs provide additional information about themselves (like mode, status, version, author, license or contact information) when invoked with an about parameter like -!, /!, -about, or --about.
Since the ? and ! characters typically also serve other purposes at the command line, they may not be available in all scenarios; therefore, they should not be the only options to access the corresponding help information.
If more detailed help is necessary than provided by a program's built-in internal help, many systems support a dedicated external help command (or similar), which accepts a command name as a calling parameter and will invoke an external help system.
In the DR-DOS family, typing /? or /H at the COMMAND.COM prompt instead of a command itself will display a dynamically generated list of available internal commands; 4DOS and NDOS support the same feature by typing ? at the prompt (which is also accepted by newer versions of DR-DOS COMMAND.COM); internal commands can be individually disabled or reenabled via SETDOS /I. In addition to this, some newer versions of DR-DOS COMMAND.COM also accept a ?% command to display a list of available built-in pseudo-environment variables. Besides their purpose as quick help reference this can be used in batchjobs to query the facilities of the underlying command-line processor.
Command description syntax
Built-in usage help and man pages commonly employ a small syntax to describe the valid command form:
angle brackets for required parameters: ping <hostname>
square brackets for optional parameters: mkdir [-p] <dirname>
ellipses for repeated items: cp <source1> [source2…] <dest>
vertical bars for choice of items: netstat {-t|-u}
Notice that these characters have different meanings than when used directly in the shell. Angle brackets may be omitted when confusing the parameter name with a literal string is not likely.
The space character
In many areas of computing, but particularly in the command line, the space character can cause problems as it has two distinct and incompatible functions: as part of a command or parameter, or as a parameter or name separator. Ambiguity can be prevented either by prohibiting embedded spaces in file and directory names in the first place (for example, by substituting them with underscores _), or by enclosing a name with embedded spaces between quote characters or using an escape character before the space, usually a backslash (\). For example
Long path/Long program name Parameter one Parameter two …
is ambiguous (is "program name" part of the program name, or two parameters?); however
Long_path/Long_program_name Parameter_one Parameter_two …,
LongPath/LongProgramName ParameterOne ParameterTwo …,
"Long path/Long program name" "Parameter one" "Parameter two" …
and
Long\ path/Long\ program\ name Parameter\ one Parameter\ two …
are not ambiguous. Unix-based operating systems minimize the use of embedded spaces to minimize the need for quotes. In Microsoft Windows, one often has to use quotes because embedded spaces (such as in directory names) are common.
Command-line interpreter
The term command-line interpreter (CLI) is applied to computer programs designed to interpret a sequence of lines of text which may be entered by a user, read from a file or another kind of data stream. The context of interpretation is usually one of a given operating system or programming language.
Command-line interpreters allow users to issue various commands in a very efficient (and often terse) way. This requires the user to know the names of the commands and their parameters, and the syntax of the language that is interpreted.
The Unix #! mechanism and OS/2 EXTPROC command facilitate the passing of batch files to external processors. One can use these mechanisms to write specific command processors for dedicated uses, and process external data files which reside in batch files.
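As a brief illustration (the file name is invented), a Unix script's first line names the processor that should interpret the rest of the file:
#!/bin/sh
# greet -- the "#!" line tells the system to pass this file to /bin/sh;
# it could equally name awk, sed, or a purpose-built command processor.
echo "Hello, ${1:-world}"
After chmod +x greet, typing ./greet behaves like any other command.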
Many graphical interfaces, such as the OS/2 Presentation Manager and early versions of Microsoft Windows use command lines to call helper programs to open documents and programs. The commands are stored in the graphical shell or in files like the registry or the OS/2 OS2USER.INI file.
Early history
The earliest computers did not support interactive input/output devices, often relying on sense switches and lights to communicate with the computer operator. This was adequate for batch systems that ran one program at a time, often with the programmer acting as operator. This also had the advantage of low overhead, since lights and switches could be tested and set with one machine instruction. Later a single system console was added to allow the operator to communicate with the system.
From the 1960s onwards, user interaction with computers was primarily by means of command-line interfaces, initially on machines like the Teletype Model 33 ASR, but then on early CRT-based computer terminals such as the VT52.
All of these devices were purely text based, with no ability to display graphics or pictures. For business application programs, text-based menus were used, but for more general interaction the command line was the interface.
Around 1964 Louis Pouzin introduced the concept and the name shell in Multics, building on earlier, simpler facilities in the Compatible Time-Sharing System (CTSS).
From the early 1970s the Unix operating system adapted the concept of a powerful command-line environment, and introduced the ability to pipe the output of one command as input to another. Unix also had the capability to save and re-run strings of commands as shell scripts which acted like custom commands.
The command line was also the main interface for the early home computers such as the Commodore PET, Apple II and BBC Micro – almost always in the form of a BASIC interpreter. When more powerful business-oriented microcomputers arrived with CP/M and later DOS computers such as the IBM PC, the command line began to borrow some of the syntax and features of the Unix shells such as globbing and piping of output.
The command line was first seriously challenged by the PARC GUI approach used in the 1983 Apple Lisa and the 1984 Apple Macintosh. A few computer users used GUIs such as GEOS and Windows 3.1 but the majority of IBM PC users did not replace their COMMAND.COM shell with a GUI until Windows 95 was released in 1995.
Modern usage as an operating system shell
While most non-expert computer users now use a GUI almost exclusively, more advanced users have access to powerful command-line environments:
The default VAX/VMS command shell, using the DCL language, has been ported to Windows systems at least three times, including PC-DCL and Acceler8 DCL Lite. Unix command shells have been ported to VMS and DOS/Windows 95 and Windows NT types of operating systems.
COMMAND.COM is the command-line interpreter of MS-DOS, IBM PC DOS, and clones such as DR-DOS, SISNE plus, PTS-DOS, ROM-DOS, and FreeDOS.
Windows Resource Kit and Windows Services for UNIX include Korn and the Bourne shells along with a Perl interpreter (Services for UNIX contains ActiveState ActivePerl in later versions and Interix for versions 1 and 2 and a shell compiled by Microsoft)
IBM OS/2 (and derivatives such as eComStation and ArcaOS) has the cmd.exe processor. This copies the COMMAND.COM commands, with extensions to REXX.
cmd.exe is part of the Windows NT stream of operating systems.
Yet another cmd.exe is a stripped-down shell for Windows CE 3.0.
An MS-DOS type interpreter called PocketDOS has been ported to Windows CE machines; the most recent release is almost identical to MS-DOS 6.22 and can also run Windows 1, 2, and 3.0, QBasic and other development tools, 4NT and 4DOS. The latest release includes several shells, namely MS-DOS 6.22, PC DOS 7, DR DOS 3.xx, and others.
Windows users might use the CScript interface to alternative programs from the command line. PowerShell provides a command-line interface, but its cmdlets are not written in shell script. Implementations of the Unix shell are also available as part of the POSIX sub-system, Cygwin, MKS Toolkit, UWIN, Hamilton C shell and other software packages. Available shells for these interoperability tools include csh, ksh, sh, Bash, rsh, tclsh and, less commonly, zsh and psh.
Implementations of PHP have a shell for interactive use called php-cli.
Standard Tcl/Tk has two interactive shells, Tclsh and Wish, the latter being the GUI version.
Python, Ruby, Lua, XLNT, and other interpreters also have command shells for interactive use.
FreeBSD uses tcsh as its default interactive shell for the superuser, and ash as default scripting shell.
Many Linux distributions have the Bash implementation of the Unix shell.
Apple macOS and some Linux distributions use zsh. Previously, macOS used tcsh and Bash.
Embedded Linux (and other embedded Unix-like) devices often use the Ash implementation of the Unix shell, as part of Busybox.
Android uses the mksh shell, which replaces a shell derived from ash that was used in older Android versions, supplemented with commands from the separate toolbox binary.
HarmonyOS, OpenHarmony and Oniro use commands from a third-party toolbox compatibility system attached to the Linux kernel of the subsystem, alongside a default shell with exec commands.
Routers with Cisco IOS, Junos and many others are commonly configured from the command line.
The Plan 9 operating system uses the rc shell, which is similar in design to the Bourne shell.
Scripting
Most command-line interpreters support scripting, to various extents. (They are, after all, interpreters of an interpreted programming language, albeit in many cases the language is unique to the particular command-line interpreter.) They will interpret scripts (variously termed shell scripts or batch files) written in the language that they interpret. Some command-line interpreters also incorporate the interpreter engines of other languages, such as REXX, in addition to their own, allowing the execution of scripts in those languages directly within the command-line interpreter itself.
Conversely, scripting programming languages, in particular those with an eval function (such as REXX, Perl, Python, Ruby or Jython), can be used to implement command-line interpreters and filters. For a few operating systems, most notably DOS, such a command interpreter provides a more flexible command-line interface than the one supplied. In other cases, such a command interpreter can present a highly customised user interface employing the user interface and input/output facilities of the language.
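A toy sketch of the idea expressed in shell form, using the shell's own eval as the engine (a real interpreter would add proper parsing, history, job control and error handling):
#!/bin/sh
# A minimal read-eval loop acting as a crude command-line interpreter.
while printf '> ' && IFS= read -r line; do
  case "$line" in
    exit) break ;;          # one built-in command
    "")   continue ;;       # ignore empty input
    *)    eval "$line" ;;   # hand everything else to eval
  esac
done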
Other command-line interfaces
The command line provides an interface between programs as well as the user. In this sense, a command line is an alternative to a dialog box. Editors and databases present a command line, in which alternate command processors might run. On the other hand, one might have options on the command line, which opens a dialog box. The latest version of 'Take Command' has this feature. DBase used a dialog box to construct command lines, which could be further edited before use.
Programs like BASIC, diskpart, Edlin, and QBASIC all provide command-line interfaces, some of which use the system shell. Basic is modeled on the default interface for 8-bit Intel computers. Calculators can be run as command-line or dialog interfaces.
Emacs provides a command-line interface in the form of its minibuffer. Commands and arguments can be entered using Emacs standard text editing support, and output is displayed in another buffer.
There are a number of text mode games, like Adventure or King's Quest 1-3, which relied on the user typing commands at the bottom of the screen. One controls the character by typing commands like 'get ring' or 'look'. The program returns text describing how the character sees it, or makes the action happen. The text adventure The Hitchhiker's Guide to the Galaxy, a piece of interactive fiction based on Douglas Adams's book of the same name, is a teletype-style command-line game.
The most notable of these interfaces is the standard streams interface, which allows the output of one command to be passed to the input of another. Text files can serve either purpose as well. This provides the interfaces of piping, filters and redirection. Under Unix, devices are files too, so the normal type of file for the shell used for stdin, stdout and stderr is a tty device file.
Another command-line interface allows a shell program to launch helper programs, either to launch documents or start a program. The command is processed internally by the shell, and then passed on to another program to launch the document. The graphical interfaces of Windows and OS/2 rely heavily on command lines passed through to other programs – console or graphical – which then usually process the command line without presenting a user console.
Programs like the OS/2 E editor and some other IBM editors can process command lines normally meant for the shell, the output being placed directly in the document window.
A web browser's URL input field can be used as a command line. It can be used to launch web apps, access browser configuration, as well as perform a search. Google, which has been called "the command line of the internet", will perform a domain-specific search when it detects search parameters in a known format. This functionality is present whether the search is triggered from a browser field or on Google's website.
There are JavaScript libraries that allow command-line applications to be written in the browser, either as standalone web apps or as part of a bigger application. An example of such a website is the CLI interface to DuckDuckGo. There are also web-based SSH applications, which give access to a server's command-line interface from a browser.
Many PC video games feature a command line interface often referred to as a console. It is typically used by the game developers during development and by mod developers for debugging purposes as well as for cheating or skipping parts of the game.
| Technology | User interface | null |
41969475 | https://en.wikipedia.org/wiki/Reticulated%20giraffe | Reticulated giraffe | The reticulated giraffe (Giraffa reticulata or Giraffa camelopardalis reticulata) is a species/subspecies of giraffe native to the Horn of Africa. It is differentiated from other types of giraffe by its coat, which consists of large, polygonal (or squared), block-like spots that extend onto the lower legs, tail and face. These prominent liver-red spots also show much less white between them when compared to other giraffe species. At up to 6 meters in height, the reticulated giraffe is the largest subspecies of giraffe and the tallest land animal in general. While the reticulated giraffe may yet still be found in parts of its historic range, such as areas of Somalia and Ethiopia, its population stronghold is primarily within Kenya. There are approximately 8,500 individuals living in the wild. In both captivity and the wild, as of 2024 there are 15,785 individuals across the world.
Reticulated giraffes can interbreed with other giraffe species in captivity, or if they come into contact with other species of giraffe in the wild, such as the Masai giraffe (G. camelopardalis tippelskirchii).
Along with the aforementioned Masai giraffe, as well as the Baringo or Rothschild's giraffe (G. c. rothschildi), the reticulated giraffe is among the most commonly seen giraffe species in animal parks and zoos.
Taxonomy
The IUCN currently recognizes only one official species of giraffe in Africa, with nine regional subspecies, the reticulated giraffe being one of them. All living giraffes were originally classified as one species by Carl Linnaeus in 1758. The reticulated subspecies was described and given a binomial name, Giraffa reticulata, by British zoologist William Edward de Winton in 1899.
Classed within the infraorder Pecora, the closest extant relative of giraffes is the elusive okapi (Okapia johnstoni) of Central Africa, with both species possessing a long, black, prehensile tongue for browsing foliage as well as ossicones, the bony, horn-like skull growths on the animal’s forehead (often tipped with tufts of fur). A common ancestor between giraffes and okapi emerged an estimated 11.5 mya. The closest living relative to both giraffes and okapi outside of Africa is the North American pronghorn (Antilocapra americana) of the Antilocapridae, in which it is the sole extant species. Additionally, deer (Cervidae) are distantly related to giraffes, okapi and pronghorn, as they are also classed within the infraorder Pecora.
Distribution and habitat
Reticulated giraffes historically occurred widely throughout Northeast Africa. Their favored habitats are acacia-dotted savannas, arid woodlands, seasonal floodplains, as well as semi-deserts, steppes and open forest. Today, they are most commonly found within Kenya, in parks such as Maasai Mara National Reserve, Meru National Park, Samburu National Reserve, and generally around the northern side of Mount Kenya. Additionally, they have been observed as far as Habaswein, Mnazini and Wajir, as well as in Tsavo East National Park.
Ecology
Reticulated giraffes are diel, meaning they are active during the day and the night. They are most active during the early and late parts of the day, such as dawn, dusk and midnight, due to their warmer environment, a habit that may also be described as crepuscular. Their sleep patterns are usually short, consisting of no more than a couple hours at a time typically standing up. The home range of a G. reticulata is nonexclusive and usually overlapping with other individuals or groups. These home ranges include both males and females and vary in size depending on food resources, gender, and water availability. There is no evidence of territorial behavior between G. reticulata.
Giraffe odour
Giraffes have long been noted as having a distinctive odour that many find unpleasant. The reason for the compounds found in giraffe pelage has long been speculated to be protection of the giraffe. Two rank-smelling chemicals in reticulated giraffe hair, indole and 3-methylindole (skatole), have an intense faecal odour at high concentrations. Humans rate these two as being responsible for the repulsive giraffe odour. It has been suggested this may help repel predators. Besides the odour compounds, many other compounds are found in giraffe hair. The aldehyde nonanal is the major chemical constituent. This compound and the smelly indole are present at concentrations that have been shown to inhibit mammalian skin pathogens. The concentration of another compound, p-cresol, is above the concentration that has been shown to repel the male brown ear tick Rhipicephalus appendiculatus.
Diet and foraging habits
The reticulated giraffe is a herbivore, feeding on leaves, shoots, and shrubs. Its blue tongue, up to 30 centimeters long, is used to strip the branches of acacia trees, its primary food source. Reticulated giraffes spend most of their day feeding, roughly 13 hours per day, eating up to 34 kilograms of food per day. They are ruminant mammals, relying on foregut fermentation, which complements their high-fiber diet. The only competition for food resources that G. reticulata encounters comes from elephants (Proboscidea).
Social behavior
Reticulated giraffes can typically be seen in groups of 3-9, but there are instances of lone individuals. Kinship between females typically drives a group. These groups are often mother-child groups. Females are known to share in protecting each other's young from predators.
Reproduction
Females display reproductive receptivity by emitting odor from their vaginal area and hindparts. The estrous cycle of a female is about 15 days. A male can enhance this scent by curling its lip, which assists in bringing the odor to the vomeronasal organ of the giraffe. Dominant males will guard estrus females from other competing males. When the male is ready to breed, he notifies the female by tapping the female's hindleg with his foreleg or by resting his head on the female's back. Post-reproduction, there is no long-term bond between males and females.
The gestation period of G. reticulata is on average 445–457 days, producing one offspring. The occasion of producing two offspring is rare but documented. The female will give birth standing up, and the offspring will stand up anywhere between 5–20 minutes post-birth. Weaning age of the young varies anywhere between 6–17 months, and independence occurs at 2 years old.
Conservation
To save the remaining 9,000 or so reticulated giraffes, several conservation organizations have been formed. One of these organizations is San Diego Zoo Global's "Twiga Walinzi" (meaning Giraffe Guards) initiative. Their work includes hiring and training local Kenyans to monitor 120 trail cameras in Northern Kenya (Loisaba Conservancy and Namunyak Wildlife Conservancy) that capture footage of wild giraffes and other Kenyan wildlife; developing a photo ID database so individual giraffes can be tracked; informing rangers of poaching incidents and removing snares; caring for orphaned giraffes; and educating communities about giraffe conservation. Their numbers remain stable within reserves.
In captivity
Along with the Rothschild’s and the Masai giraffe, the reticulated giraffe is among the most-commonly seen in zoos. The Cheyenne Mountain Zoo in Colorado Springs, Colorado is said to have the largest reticulated giraffe herd in North America. Reticulated and Rothschild's giraffes have been bred together in the past. This was done because it was thought that the giraffe subspecies interbred in the wild. However, research published in 2016 found that they do not. Nevertheless, some zoos are still interbreeding them.
Few zoos or parks keep distinct, separate herds of Masai, Rothschild's and reticulated giraffes; all three can be seen at the San Diego Zoo (California) and its second facility, the San Diego Zoo Safari Park, while the Bronx Zoo (New York), Wildlife Safari (Oregon) and the UK’s Chester Zoo have solely Rothschild's giraffes. The Cheyenne Mountain Zoo (Colorado), Busch Gardens Tampa (Florida), the Maryland Zoo (Baltimore), Omaha's Henry Doorly Zoo (Nebraska), the Louisville Zoo (Kentucky) and the Binder Park Zoo (Michigan) have solely reticulated giraffes.
| Biology and health sciences | Giraffidae | Animals |
33731380 | https://en.wikipedia.org/wiki/Dassault%20Rafale | Dassault Rafale | The Dassault Rafale (, literally meaning "gust of wind", or "burst of fire" in a more military sense) is a French twin-engine, canard delta wing, multirole fighter aircraft designed and built by Dassault Aviation. Equipped with a wide range of weapons, the Rafale is intended to perform air supremacy, interdiction, aerial reconnaissance, ground support, in-depth strike, anti-ship strike and nuclear deterrence missions. It is referred to as an "omnirole" aircraft by Dassault.
In the late 1970s, the French Air Force and French Navy sought to replace and consolidate their existing fleets of aircraft. In order to reduce development costs and boost prospective sales, France entered into an arrangement with the UK, Germany, Italy and Spain to produce an agile multi-purpose "Future European Fighter Aircraft" (which would become the Eurofighter Typhoon). Subsequent disagreements over workshare and differing requirements led France to pursue its own development programme. Dassault built a technology demonstrator that first flew in July 1986 as part of an eight-year flight-test programme, paving the way for approval of the project.
The Rafale is distinct from other European fighters of its era in that it is almost entirely built by one country, involving most of France's major defence contractors, such as Dassault, Thales and Safran. Many of the aircraft's avionics and features, such as direct voice input, the RBE2 AA active electronically scanned array (AESA) radar and the optronique secteur frontal infra-red search and track (IRST) sensor, were domestically developed and produced for the Rafale programme. Originally scheduled to enter service in 1996, the Rafale suffered significant delays due to post-Cold War budget cuts and changes in priorities. There are three main variants: Rafale C single-seat land-based version, Rafale B twin-seat land-based version, and Rafale M single-seat carrier-based version.
Introduced in 2001, the Rafale is being produced for both the French Air Force and for carrier-based operations in the French Navy. It has been marketed for export to several countries, and was selected for purchase by the Egyptian Air Force, the Indian Air Force, the Indian Navy, the Qatar Air Force, the Hellenic Air Force, the Croatian Air Force, the Indonesian Air Force, the United Arab Emirates Air Force and the Serbian Air Force. The Rafale is considered one of the most advanced and capable warplanes in the world, and among the most successful internationally. It has been used in combat over Afghanistan, Libya, Mali, Iraq and Syria.
Development
Background
In the mid-1970s, the French Air Force (Armée de l'Air) and French Navy (Marine Nationale) had separate requirements for a new generation of fighters to replace those in or about to enter service. Because their requirements were similar, and to reduce cost, both services issued a common request for proposal. In 1975, the country's Ministry of Aviation initiated studies for a new aircraft to complement the upcoming and smaller Dassault Mirage 2000, with each aircraft optimized for differing roles.
The Rafale aircraft development programme was the end product of efforts by various European countries for a common fighter aircraft. In 1979, Dassault-Breguet (later Dassault Aviation) joined the MBB/BAe "European Collaborative Fighter" project which was renamed the "European Combat Aircraft" (ECA). The company contributed the aerodynamic layout of a prospective twin-engine, single-seat fighter; however, the project collapsed in 1981 due to differing operational requirements of each partner country. In 1983, the "Future European Fighter Aircraft" (FEFA) programme was initiated, bringing together France, Italy, Spain, West Germany and the United Kingdom to jointly develop a new fighter, although the latter three had their own aircraft developments. French officials envisioned a lightweight, multirole aircraft that, it was believed, would be attractive on the export fighter market in addition to fulfilling both air force and naval roles. This was in contrast to the British requirement for a heavy long-range interceptor. France also demanded a lead role, with the commensurate technical and industrial primacy, whereas the other countries were accepting of a more egalitarian programme structure.
There was little common ground between France and the other members of this project, but by 1983, the five countries had agreed on a European Staff Target for a future fighter. Nevertheless, differences persisted, and so France withdrew from the multilateral talks in July 1985 to preserve the technological independence of its fighter aircraft industry. West Germany, the UK and Italy opted out and established a new European Fighter Aircraft (EFA) programme. In Turin, on 2 August 1985, West Germany, the UK and Italy agreed to go ahead with the EFA, and confirmed that France, along with Spain, had chosen not to proceed as a member of the project. Despite pressure from France, Spain rejoined the EFA project in early September 1985. The four-nation project eventually resulted in the Eurofighter Typhoon's development.
In France, the government proceeded with its own programme. The Ministry of Defence required an aircraft capable of air-to-air and air-to-ground, all-day and adverse weather operations. As France was the sole developer of the Rafale's airframe, avionics, propulsion system and armament, the resultant aircraft was to replace a multitude of aircraft in the French Armed Forces. The Rafale would perform roles previously filled by an assortment of specialised platforms, including the Jaguar, Mirage F1C/CR/CT, Mirage 2000C/-5/N in the French Air Force, and the F-8P Crusader, Étendard IVP/M and Super Étendard in French Naval Aviation.
Demonstration
At the same time as the multinational talks were occurring, Dassault-Breguet had been busy designing its Avion de Combat Experimental (ACX). During late 1978, prior to France's joining of the ECA, Dassault received contracts for the development of project ACT 92 (Avion de Combat Tactique, meaning "Tactical Combat Airplane"). The following year, the National Office for Aviation Studies and Research began studying the possible configurations of the new fighter under the codename Rapace ("Bird of Prey"). By March 1980, the number of configurations had been narrowed down to four, two of which had a combination of canards, delta wings and a single vertical tail-fin. The ACX project was given political impetus when the French government awarded a contract for two (later reduced to one) technology demonstrator aircraft on 13 April 1983. The government and industry would each provide half of the development cost, with first flight to take place in 1986. At the time, there was no guarantee that the effort would result in a full-scale development programme, and the aircraft remained a purely "proof-of-concept" test vehicle. In an effort to harmonize design specifics with the requirements of other countries while collaboration talks were being held, Dassault sized the ACX aircraft in the 9.5 tonne range. After France decided to pull out of the multilateral talks, designers focused on a more compact size, as specified by the Air Force. The ACX programme was renamed Rafale ("squall") in April 1985.
Construction of the Rafale A (ACX) technology demonstrator started in 1984. It had a length of , a wingspan of , and an empty weight. The austere aircraft lacked major subsystems, and had only minimal cockpit systems and a fly-by-wire flight control system for the validation of the design's basic airframe-engine layout. The company desired to use the Rafale A to continue the company approach of risk reduction through incremental improvement and to test the aerodynamically unstable delta wing-canard configuration. The aircraft was Dassault's 92nd prototype in 40 years. At the time of its construction, the aircraft had two General Electric F404 engines that were then in service with the F/A-18 Hornet, pending the availability of the Snecma M88 turbofan engines. It was rolled out in December 1985 at Saint-Cloud, and on 4 July 1986, made its first flight from the company's Istres test facility in southern France, piloted by Guy Mitaux-Maurouard. During the one-hour flight, the aircraft reached an altitude of and a speed of Mach 1.3. The aircraft participated in the Farnborough air show the following month.
The aircraft participated in an intensive flight test programme that saw it simulate air force and naval operations. The test vehicle flew approaches to the carrier , and also tested for coordination with . By 1987, the aircraft had been flown by Air Force, Navy and CEV test pilots. Its port-side F404 engine was replaced with the M88 in early 1990, and the aircraft flew under the updated powerplant configuration in May 1990. The aircraft thereafter attained a speed of Mach 1.4 without the use of engine reheat, thereby demonstrating supercruise. The Rafale A was used until January 1994, and was retired after 867 sorties.
The early successful demonstration programme increased French industry and government confidence in the viability of a full-scale development programme for the Rafale. In June 1987, French prime minister Jacques Chirac declared that the government would proceed with the project. A contract for four pre-production aircraft (one Rafale C, two Rafale Ms and one Rafale B) was awarded on 21 April 1988 for a test and validation programme. There was nevertheless government uncertainty in the programme, as it was expected to cost some Ffr120 billion (1988 francs) in total development and procurement costs. Prime minister Michel Rocard was concerned about the state of the project and the failure of the previous government to secure cooperation with other countries, but stated that, "It is inconceivable that we should not be able to build the weapons necessary for our independence". France had earlier entered unsuccessful talks with Belgium, Denmark, the Netherlands, and Norway, about the possible collaboration on the project.
Testing
To meet the various roles expected of the new aircraft, the Air Force required two variants: the single-seat Rafale C (chasseur, meaning "fighter") and the Rafale B (biplace, "two-seater"). The first flight of the Rafale C prototype, on 19 May 1991, occurred at the company's test facility in Istres. This signalled the start of a test programme which primarily aimed to test the M88-2 engines, man-machine interface and weapons, and expand the flight envelope. Due to budgetary constraints, the second single-seat prototype was never built. The aircraft differed significantly from the Rafale A demonstrator. Although superficially similar to the heavier test vehicle, the aircraft was smaller, with a length of and a wingspan of . It was less detectable by radar due to the canopy being gold-plated and the addition of radar-absorbent materials; Dassault had also removed the dedicated airbrake. The sole Rafale B two-seat preproduction aircraft, B01, made its first flight on 30 April 1993, and served as a platform for testing of weapons and fire-control systems, including the RBE2 radar and the SPECTRA electronic warfare suite.
The first of two Rafale M (maritime, "naval") prototypes, M01, made its maiden flight on 12 December 1991, followed by the second on 8 November 1993. These aircraft differed from the air force variants in having reinforced structure to allow the aircraft to operate aboard ships, and provision for a tail hook and an in-built ladder, which increased the weight of the Rafale M by over other production variants. Since France has no land-based catapult test facility, catapult trials were carried out in mid-1992 and early 1993 at the United States Navy facility at NAS Lakehurst, New Jersey. The aircraft then carried out shipboard trials aboard Foch in April 1993. The aircraft conducted landings and launches from the nuclear-powered aircraft carrier Charles de Gaulle in July 1999. Testing showed that the aircraft had the ability to land with significant loads of unexpended ordnance.
Production
The Rafale B was initially expected to be just a trainer, but the Gulf War showed that a second crew member was invaluable on strike and reconnaissance missions. The Air Force therefore switched its preferences towards the two-seater, and planned that the variant would constitute 60 percent of the Rafale fleet. The service originally planned to order 250 Rafales, later reduced to 234 aircraft, 95 "C" and 139 "B" models, and then to 212 aircraft. The Navy originally planned to order 86 Rafales, which was reduced to 60 due to budget cuts, 25 M single-seaters and 35 two-seat Ns. The two-seater was later cancelled.
The ACX and subsequent production Rafale was designed in a "virtual" format. Dassault used the experience and technical expertise of its sister company Dassault Systèmes, which had invented the CATIA (Computer Aided Three-dimensional Interactive Application) system, a three-dimensional computer-aided design and computer-aided manufacture (CAD/CAM) software suite that became standard across the industry. CATIA enabled digitization and efficiency improvements throughout the programme, as it implemented recently developed processes such as digital mockup and product data management (PDM). Engineers worked directly with computers in generating 3D models of the aircraft, and took advantage of the design software in facilitating machine-tool preparation. The system consisted of 15GB databases of each of the Rafale's components, assisting with various aspects of the design, manufacture and through-life support. The computer-aided arrangement also simplified routine maintenance.
Production of the first aircraft series formally started in December 1992, but was suspended in November 1995 due to political and economic uncertainty, and resumed in January 1997 after the Ministry of Defence and Dassault agreed on a 48-aircraft (28 firm and 20 options) production run with delivery between 2002 and 2007. A further order of 59 F3 Rafales was announced in December 2004. In November 2009 the French government ordered an additional 60 aircraft to take the total order for the French Air Force and Navy to 180.
The Rafale is manufactured almost entirely in France, except for some imported non-sensitive components. Different components are produced in various plants across the country, including the fuselage in Paris, wings in Martignas, and fins in Biarritz, with final assembly taking place in Merignac near Bordeaux. Dassault carries out 60% of the work, its partner Thales 25%, and its other partner Safran 15%. The three companies rely on a network of 500 subcontractors, many of which are small and medium enterprises, providing work for 7,000 direct and indirect employees. , each fighter took 24 months to manufacture, with an annual production rate of eleven aircraft.
The Rafale was originally planned to enter service in 1995. The aircraft's development proceeded on time, on budget, and without major difficulties. However, the project needed to compete with other defense acquisition programmes for a dwindling national defense budget. This occurred in a political environment in which the chief security threat, the Soviet Union, no longer existed. The French government consequently reduced Rafale orders, which Dassault and other companies involved claimed impeded production management and led to higher costs, and delayed the entry of the aircraft into service. At one stage, French naval authorities investigated the possibility of acquiring used F/A-18s to replace the obsolete F-8 for its carriers, but the French government intended an all-Rafale fleet, and did not go ahead with the plan. Deliveries of the Rafale M were subsequently given a high priority to replace the Navy's aged F-8 fighters. In the words of a naval official, "Although we lost the battle for the F/A-18s, I guess you could say that we had at least some success by 'persuading' the government to give us initial delivery priority". The first production Rafale B took its first flight on 24 November 1998, followed by the first Rafale M for the French Navy on 7 July 1999.
Upgrades and replacement
The Rafale has been designed with an open software architecture that facilitates straightforward upgrades. Dassault and its industry partners have therefore undertaken continuous tests and development primarily aimed at progressively improving the aircraft's sensors and avionics, and to allow additional armament integration. In 2011, upgrades under consideration included a software radio and satellite link, a new laser-targeting pod, smaller bombs and enhancements to the aircraft's data-fusion capacity. In July 2012, fleetwide upgrades of the Rafale's battlefield communications and interoperability capabilities commenced.
At one stage, French officials were reportedly considering equipping the Rafale to launch miniaturised satellites.
In January 2014, the defence ministry announced that funds had been allocated towards the development of the F3R standard. The standard includes the integration of the Meteor BVR missile, among other weapons and software updates. The standard was validated in 2018.
Development work started on the F4 standard in 2019. The design received radar and sensor upgrades that facilitate the detection of airborne stealth targets at long range, as well as improved capabilities in the helmet-mounted display. With improved communications equipment, it is also more effective in network-centric warfare. Flight tests were conducted starting in 2021 and the first F4-standard aircraft was delivered in 2023. Previous aircraft will be upgraded to the standard, with a further 30 aircraft to be ordered in 2023.
The total programme cost, as of FY2013, was around €45.9 billion, which translated to a unit programme cost of approximately €160.5 million. This figure takes into account the improved hardware of the F3 standard and includes development costs over a period of 40 years, including inflation. The unit flyaway price was €101.1 million for the F3+ version.
The Rafale is planned to be the French Air and Space Force's primary combat aircraft until at least 2040. In 2018, Dassault announced the successor to the Rafale as the New Generation Fighter. This fighter aircraft, under development by Dassault Aviation and Airbus Defence and Space, is to replace France's Rafale, Germany's Eurofighter Typhoon, and Spain's F/A-18 Hornet in the 2030–2040 timeframe.
Design
Overview
The Rafale was developed as a modern jet fighter with a very high level of agility; Dassault chose to combine a delta wing with active close-coupled canard to maximize manoeuvrability. The aircraft is capable of withstanding from −3.6 g to 9 g (10.5 g on Rafale solo display and a maximum of 11g can be reached in case of emergency). The Rafale is an aerodynamically unstable aircraft and uses digital fly-by-wire flight controls to artificially enforce and maintain stability. The aircraft's canards also act to reduce the minimum landing speed to ; while in flight, airspeeds as low as have been observed during training missions. According to simulations by Dassault, the Rafale has sufficient low speed performance to operate from STOBAR-configured aircraft carriers, and can take off using a ski-jump with no modifications.
The Rafale M features a greatly reinforced undercarriage to cope with the additional stresses of naval landings, an arrestor hook, and a "jump strut" nosewheel, which only extends during short takeoffs, including catapult launches. It also features a built-in ladder, carrier-based microwave landing system, and the new fin-tip Telemir system for syncing the inertial navigation system to external equipment. Altogether, the naval modifications of the Rafale M increase its weight by compared to other variants. The Rafale M retains about 95 percent commonality with Air Force variants, including, although unusual for carrier-based aircraft, being unable to fold its multi-spar wings to reduce storage space. The size constraints were offset by the introduction of Charles de Gaulle, France's first nuclear-powered carrier, which was considerably larger than previous carriers, Foch and Clemenceau.
Although not a full-aspect stealth aircraft, the cost of which was viewed as unacceptably excessive, the Rafale was designed for a reduced radar cross-section (RCS) and infrared signature. In order to reduce the RCS, changes from the initial technology demonstrator include a reduction in the size of the tail-fin, fuselage reshaping, repositioning of the engine air inlets underneath the aircraft's wing, and the extensive use of composite materials and serrated patterns for the construction of the trailing edges of the wings and canards. Seventy percent of the Rafale's surface area is composite. Many of the features designed to reduce the Rafale's visibility to threats remain classified.
Cockpit
The Rafale's glass cockpit was designed around the principle of data fusion—a central computer selects and prioritises information to display to pilots for simpler command and control. For displaying information gathered from a range of sensors across the aircraft, the cockpit features a wide-angle holographic head-up display (HUD) system, two head-down flat-panel colour multi-function displays (MFDs) as well as a central collimated display. These displays have been strategically placed to minimise pilot distraction from the external environment. Some displays feature a touch interface for ease of human–computer interaction (HCI). A head-mounted display (HMD) remains to be integrated to take full advantage of its MICA missiles. The cockpit is fully compatible with night vision goggles (NVG). The primary flight controls are arranged in a hands-on-throttle-and-stick (HOTAS)-compatible configuration, with a right-handed side-stick controller and a left-handed throttle. The seat is inclined rearwards at an angle of 29° to improve g-force tolerance during manoeuvring and to provide a less restricted external pilot view.
Great emphasis has been placed on pilot workload minimisation across all operations. Among the features of the highly digitised cockpit is an integrated direct voice input (DVI) system, allowing a range of aircraft functions to be controlled by spoken voice commands, simplifying the pilot's access to many of the controls. Developed by , the DVI is capable of managing radio communications and countermeasures systems, the selection of armament and radar modes, and controlling navigational functions. For safety reasons, generally DVI is deliberately not employed for safety-critical elements of the aircraft's operation, such as the final release of weapons.
In the area of life support, the Rafale is fitted with a Martin-Baker Mark 16F "zero-zero" ejection seat, capable of operation at zero speed and zero altitude. An on-board oxygen generating system, developed by Air Liquide, eliminates the need to carry bulky oxygen canisters. The Rafale's flight computer has been programmed to counteract pilot disorientation and to perform automatic recovery of the aircraft during negative flight conditions. The autopilot and autothrottle controls are also integrated, and are activated by switches located on the primary flight controls. An intelligent flight suit worn by the pilot is automatically controlled by the aircraft in response to calculated g-forces.
Avionics and equipment
The Rafale's core avionics systems employ an integrated modular avionics (IMA) architecture, called MDPU (modular data processing unit). This architecture hosts all the main aircraft functions such as the flight management system, data fusion, fire control, and the man-machine interface. The total value of the radar, electronic communications and self-protection equipment is about 30 percent of the cost of the entire aircraft. The IMA has since been installed upon several upgraded Mirage 2000 fighters, and incorporated into the Airbus A380 civilian airliner. According to Dassault, the IMA greatly assists combat operations via data fusion, the continuous integration and analysis of the various sensor systems throughout the aircraft, and has been designed for the incorporation of new systems and avionics throughout the Rafale's service life.
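The data-fusion idea behind such an architecture can be sketched schematically; the module names, bus interface and averaging rule below are illustrative assumptions, not the real MDPU software or data formats:

# Schematic sketch of sensor data fusion: independent sensor modules publish
# track reports onto a shared bus and a central function merges reports that
# refer to the same target into one picture for the pilot. Entirely generic.

from collections import defaultdict

class FusionBus:
    def __init__(self):
        self.reports = defaultdict(list)   # target id -> list of (sensor, range_km)

    def publish(self, sensor: str, target_id: str, range_km: float):
        self.reports[target_id].append((sensor, range_km))

    def fused_picture(self):
        # average the range estimates from every sensor tracking each target
        return {tid: sum(r for _, r in obs) / len(obs)
                for tid, obs in self.reports.items()}

bus = FusionBus()
bus.publish("radar", "T1", 102.0)
bus.publish("OSF",   "T1",  98.0)
bus.publish("radar", "T2",  60.5)
print(bus.fused_picture())    # {'T1': 100.0, 'T2': 60.5}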
The Rafale features an integrated defensive-aids system named SPECTRA, which protects the aircraft against airborne and ground threats, developed as a joint venture between Thales and MBDA. Various methods of detection, jamming, and decoying have been incorporated, and the system has been designed to be highly reprogrammable for addressing new threats and incorporating additional sub-systems in the future. Operations over Libya were greatly assisted by SPECTRA, allowing Rafales to perform missions independently from the support of dedicated Suppression of Enemy Air Defences (SEAD) platforms.
The Rafale's ground attack capability is heavily reliant upon sensory targeting pods, such as Thales Optronics's Reco New Generation/Areos reconnaissance pod and Damocles electro-optical/laser designation pod. Together, these systems provide targeting information, enable tactical reconnaissance missions, and are integrated with the Rafale's IMA architecture to provide analysed data feeds to friendly units and ground stations, as well as to the pilot. Damocles provides targeting information to the various armaments carried by the Rafale and is directly integrated with the Rafale's VHF/UHF secure radio to communicate target information with other aircraft. It also performs other key functions such as aerial optical surveillance and is integrated with the navigation system as a FLIR.
The Damocles designation pod was described as "lacking competitiveness" when compared to rivals such as the Sniper and LITENING pods; work therefore began on an upgraded pod, designated Damocles XF, with additional sensors and the added ability to transmit live video feeds. A new Thales targeting pod, the Talios, was officially unveiled at the 2014 Farnborough Air Show and was expected to be integrated on the Rafale by 2018. Thales' Areos reconnaissance pod is an all-weather, night-and-day-capable reconnaissance system employed on the Rafale, and provides a significantly improved reconnaissance capability over preceding platforms. Areos has been designed to perform reconnaissance under various mission profiles and conditions, using multiple day/night sensors and its own independent communications datalinks.
Radar and sensors
The Rafale was first outfitted with the Thales RBE2 passive electronically scanned multi-mode radar. Thales claims to have achieved increased levels of situational awareness as compared to earlier aircraft through the earlier detection and tracking of multiple air targets for close combat and long-range interception, as well as real-time generation of three-dimensional maps for terrain-following and the real-time generation of high resolution ground maps for navigation and targeting. In early 1994, it was reported that technical difficulties with the radar had delayed the Rafale's development by six months. In September 2006, Flight International reported the Rafale's unit cost had significantly increased due to additional development work to improve the RBE2's detection range.
The RBE2 AA active electronically scanned array (AESA) radar now replaces the previous passively scanned RBE2. The RBE2 AA is reported to deliver a greater detection range of 200 km, improved reliability and reduced maintenance demands over the preceding radar. A Rafale demonstrator began test flights in 2002 and has totaled 100 flight hours. By December 2009, production of the pre-series RBE2 AA radars was underway. In early October 2012, the first Rafale equipped with an RBE2 AA radar arrived at Mont-de-Marsan Air Base for operational service (the development was described by Thales and Dassault as "on time and on budget"). By early 2014, the first front-line Air Force squadron was due to receive Rafales equipped with the AESA radar, following the French Navy, which was slated to receive AESA-equipped Rafales starting in 2013.
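These detection-range figures can be read against the generic radar range equation, a textbook relationship rather than Thales' published performance model for the RBE2 family, which shows why large range improvements demand disproportionate gains elsewhere:

\[
R_{\max} = \left( \frac{P_t\, G^2\, \lambda^2\, \sigma}{(4\pi)^3\, S_{\min}} \right)^{1/4},
\]

where \(P_t\) is the transmitted power, \(G\) the antenna gain, \(\lambda\) the wavelength, \(\sigma\) the target radar cross-section and \(S_{\min}\) the minimum detectable signal. Because range grows only with the fourth root of this product, doubling detection range requires roughly a sixteen-fold improvement in the combined power, gain and receiver-sensitivity terms.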
To enable the Rafale to perform in the air supremacy role, it includes several passive sensor systems. The front-sector electro-optical system or Optronique Secteur Frontal (OSF), developed by Thales, is completely integrated within the aircraft and can operate both in the visible and infrared wavelengths. The OSF enables the deployment of infrared missiles such as the MICA at beyond visual range distances; it can also be used for detecting and identifying airborne targets, as well as those on the ground and at sea. Dassault describes the OSF as being immune to jamming and capable of providing covert long-range surveillance. In 2012, an improved version of the OSF was deployed operationally.
Armament and standards
Initial deliveries of the Rafale M were to the F1 ("France 1") standard, which was equipped for air-to-air interceptor duties but lacked any armament for air-to-ground operations. The F1 standard became operational in 2004. Later deliveries were to the "F2" standard, which added the capability for conducting air-to-ground operations; the first F2 standard Rafale M was delivered to the French Navy in May 2006. From 2008 onwards, Rafale deliveries have been to the nuclear-capable F3 standard, which also added reconnaissance with the Areos reconnaissance pod, and it has been reported that all aircraft built to the earlier F1 and F2 standards are to be upgraded to F3.
F3 standard Rafales are capable of undertaking many different mission roles with a range of equipment, namely air defence/superiority missions with Mica IR and EM air-to-air missiles, and precision ground attacks typically using SCALP EG cruise missiles and AASM Hammer air-to-surface missiles. In addition, anti-shipping missions could be carried out using the AM39 Exocet sea-skimming missile, while reconnaissance flights would use a combination of onboard and external pod-based sensor equipment. Furthermore, the aircraft could conduct nuclear strikes when armed with ASMP-A missiles. In 2010, France ordered 200 MBDA Meteor beyond-visual-range missiles, which greatly increase the distance at which the Rafale can engage aerial targets.
The F4 standard program was launched on 20 March 2017 by the French ministry of defence. The first F4.1 standard test aircraft was delivered in March 2023.
For compatibility with armaments of varying types and origins, the Rafale's onboard stores management system is compliant with MIL-STD-1760, an electrical interface between an aircraft and its carriage stores, thereby simplifying the incorporation of many existing weapons and equipment. The Rafale is typically outfitted with 14 hardpoints (only 13 on the Rafale M), five of which are suitable for heavy armament or equipment such as auxiliary fuel tanks, and has a maximum external load capacity of nine tons. In addition to the above equipment, the Rafale carries the 30 mm GIAT 30 revolver cannon and can be outfitted with a range of laser-guided bombs and ground-attack munitions. According to Dassault, the Rafale's onboard mission systems enable ground attack and air-to-air combat operations to be carried out within a single sortie, with many functions capable of simultaneous execution in conjunction with one another, increasing survivability and versatility.
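The kind of constraint checking a stores management system performs can be sketched using the figures quoted above; the store names, masses and validation logic are illustrative placeholders, not the MIL-STD-1760 protocol or Dassault's actual software:

# Illustrative loadout check using the figures quoted above (14 hardpoints,
# 13 on the Rafale M, five rated for heavy stores, nine-tonne external limit).
# Store names and masses are placeholders; this is only a sketch of the sort
# of constraint checking involved, not a real stores-management implementation.

HEAVY_RATED = 5
MAX_EXTERNAL_KG = 9000

def validate_loadout(stores, naval=False):
    """stores: list of (name, mass_kg, needs_heavy_station) tuples."""
    hardpoints = 13 if naval else 14
    if len(stores) > hardpoints:
        return False, "not enough hardpoints"
    if sum(1 for _, _, heavy in stores if heavy) > HEAVY_RATED:
        return False, "too many heavy stores"
    if sum(mass for _, mass, _ in stores) > MAX_EXTERNAL_KG:
        return False, "external load limit exceeded"
    return True, "loadout accepted"

example = [("drop tank", 1700, True)] * 3 + [("air-to-air missile", 112, False)] * 4
print(validate_loadout(example))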
Engines
The Rafale is fitted with two Snecma M88 engines, each capable of providing up to of dry thrust and with afterburners. The engines feature several advances, including a non-polluting combustion chamber, single-crystal turbine blades, powder metallurgy disks, and technology to reduce radar and infrared signatures. The M88 enables the Rafale to supercruise while carrying four missiles and one drop tank.
Qualification of the M88-2 engine ended in 1996 and the first production engine was delivered by the end of the year. Due to delays in engine production, the Rafale A demonstrator was initially powered by the General Electric F404 engine. In May 2010, a Rafale flew for the first time with the M88-4E engine, an upgraded variant with lower maintenance requirements than the preceding M88-2. The engine is of a modular design for ease of construction and maintenance, and to enable older engines to be retrofitted with improved subsections as they become available, such as existing M88-2s being upgraded to the M88-4E standard. Potential export customers, such as the United Arab Emirates (UAE), have expressed interest in more powerful M88 engines. A thrust-vectoring variant of the engine, designated M88-3D, was also under development.
Operational history
France
French Naval Aviation
In December 2000, the French Naval Aviation (Aéronavale), the air arm of the French Navy, received its first two Rafale M fighters. On 18 May the following year, the squadron Flottille 12F, which had previously operated the F-8 Crusader, became the first squadron to operate the Rafale after it was officially re-activated prior to the delivery of the sixth Rafale. Flottille 12F immediately participated in Trident d'Or aboard the aircraft carrier Charles de Gaulle with warships from ten other nations. During the maritime exercise, the Navy tested the Rafale's avionics during simulated interceptions with various foreign aircraft, in addition to carrier take-offs and landings. After almost four years of training, the Rafale M was declared operational with the French Navy in June 2004.
The Rafale M is fully compatible with United States Navy aircraft carriers and some French Navy pilots have qualified to fly the aircraft from US Navy flight decks. On 4 June 2010, during an exercise on , a French Rafale became the first jet fighter of a foreign navy to have its engine replaced on board an American aircraft carrier.
In 2002, the Rafales were first deployed to a combat zone; seven Rafale Ms embarked aboard Charles de Gaulle of the French Navy during "Mission Héraclès", the French participation in "Operation Enduring Freedom". They flew from the aircraft carrier over Afghanistan, but the F1 standard precluded air-to-ground missions and the Rafale did not see any action. In March 2002, the aircraft carrier was stationed in the Gulf of Oman, where its complement of Rafales undertook training operations. In June 2002, while Charles de Gaulle was in the Arabian Sea, Rafales conducted several patrols near the India-Pakistan border.
In 2016, Rafales operating from Charles de Gaulle struck targets associated with the Islamic State of Iraq and the Levant (IS).
In December 2015, American and French military officials reportedly discussed the possibility of French naval Rafale Ms flying combat missions from a US Navy aircraft carrier as soon as January 2017. This would enable continued French Navy operations against ISIL while Charles de Gaulle undergoes its year-and-a-half-long major refit, scheduled to begin in early 2017. Although Rafales have launched and landed on U.S. carriers to demonstrate interoperability, it would be the first time they would fly combat missions from one. As many as 18 Rafale Ms could be deployed on a carrier, although some room would have to be made for French Navy support crews familiar with maintaining the Rafale, as well as for spare parts and munitions. Operation Chesapeake, a test of this interoperability, was conducted in May 2018, when 12 Rafales of Flottilles 11F, 12F, and 17F, along with nearly 350 support personnel, embarked aboard USS George H.W. Bush for two weeks of carrier qualifications and exercises after conducting a month of shore-based training at Naval Air Station Oceana.
On 9 January 2025, Rafale Ms conducted joint anti-aircraft drills with Su-30MKI and Jaguar aircraft of the Indian Air Force. The French Carrier Strike Group (CSG), centred on Charles de Gaulle with a carrier air wing including Rafale Ms, her escort ships and the fleet support ship Jacques Chevallier, visited India between 4 and 9 January 2025 during Mission Clemenceau 25. Simultaneously, joint navigational drills and a Maritime Partnership Exercise were conducted with the escort ships.
French Air and Space Force
In April 2005, the Air Force received its first three F2 standard Rafale Bs at the Centre d'expertise aérienne militaire (CEAM, i.e. the Military Air Expertise Centre) at Mont-de-Marsan, where they were tasked to undertake operational evaluation and pilot conversion training. By this time, it was expected that Escadron de Chasse (Fighter Squadron) 1/7 at Saint-Dizier would receive a nucleus of 8–10 Rafale F2s during the summer of 2006, in preparation for full operational service (with robust air-to-air and stand off air-to-ground precision attack capabilities) starting from mid-2007 (when EC 1/7 would have about 20 aircraft, 15 two-seaters and five single-seaters).
In 2007, a "crash program" upgrade on six Rafales enabled the use of laser-guided bombs in readiness for action in Afghanistan. Three of these aircraft of the Air Force were deployed to Dushanbe in Tajikistan, while the three others were Rafale Marine of the Navy on board Charles De Gaulle. The first mission occurred on 12 March 2007, and the first GBU-12 was launched on 28 March in support of embattled Dutch troops in Southern Afghanistan, marking the operational début of the Rafale. Between January 2009 and December 2011, a minimum of three Rafales were stationed at Kandahar International Airport to conduct operations in support of NATO ground forces.
On 19 March 2011, French Rafales began conducting reconnaissance and strike missions over Libya in Opération Harmattan, in support of United Nations Security Council Resolution 1973; initial targets were artillery pieces laying siege to the rebel city of Benghazi. The Rafale could operate in Libya without the support of SEAD aircraft, using the onboard SPECTRA self-defence system instead. On 24 March 2011, it was reported that a Rafale had destroyed a Libyan Air Force G-2/Galeb light attack/trainer aircraft on the runway. During the deployment, Rafales destroyed multiple Libyan military SAM systems using their geolocation capability and a mix of different munitions. Unlike other allied aircraft, the Rafale did not require any dedicated EW/EA aircraft for escort.
Rafales typically conducted six-hour sorties over Libyan airspace, armed with four MICA air-to-air missiles, four or six AASM "Hammer" bombs, a Thales Damoclès targeting pod and two drop tanks. Each sortie needed multiple aerial refuelling operations from coalition tanker aircraft. The AASM precision-guidance weapon system allowed the Rafale to conduct high-altitude bombing missions using bombs weighing between . Reportedly, Rafale crews preferred GPS-guided munitions for their greater reliability and range. SCALP weapons were deployed on only one or two sorties, such as against a Libyan airbase at Al-Jufra. In 2011, aviation journalist Craig Hoyle speculated that the Rafale's Libyan performance was likely to affect export sales, noting that the Rafale had maintained a high operational rate throughout. Hoyle also noted that the conflict had led to several urgent operational requirements, including a lighter ground-attack munition and AASM modifications for close air support.
In January 2013, the Rafale took part in "Opération Serval", the French military intervention in support of the government of Mali against the Movement for Oneness and Jihad in West Africa. The first mission was carried out on 13 January, when four Rafales took off from an airbase in France to strike rebel training camps, depots and facilities in the city of Gao in eastern Mali. Subsequent airstrikes in the following days by Rafale and Mirage fighters were reportedly instrumental in the withdrawal of Islamist militant forces from Timbuktu and Douentza. Both the Rafale and Mirage 2000D aircraft used in the conflict were based outside North Africa, making use of aerial refuelling tanker aircraft to fly long-range sorties across Algerian airspace and into Mali.
In August 2013, it was proposed that France might halve the number of Rafales to be delivered over the next six years, for a total of 26 aircraft during this period; foreign export orders have been viewed as critical to maintaining production under this proposal. While production would be slowed, France would still receive the same number of Rafales overall.
In September 2014, Rafales started reconnaissance missions over Iraq for Opération Chammal, France's contribution to the international effort to combat IS militants. Six Rafales were initially tasked with identifying IS positions in support of US airstrikes, flying from Al Dhafra Air Base, UAE. On 18 September, Rafales joined American attack operations, launching four strikes near the Northern Iraqi town of Zummar that destroyed a logistics depot and killed dozens of IS fighters. In April 2018, during the Syrian Civil War, five Rafale Bs from the Escadron de Chasse 1/4 Gascogne participated in the 2018 missile strikes against Syria. Each was loaded with two SCALP EG missiles. French Air and Space Force Rafales were deployed to help blunt the Iranian attack against Israel on 13 April 2024 by shooting down an unspecified number of unmanned aerial vehicles. The Rafales, based in Jordan, were operating in Iraqi and Syrian airspace as part of Opération Chammal.
Egypt
In November 2014, Egypt was reportedly in negotiations with France to purchase 24 to 36 Rafales, subject to a financing and weapons package agreement. By February 2015, the two countries were negotiating a loan from France's export credit agency to reach an export agreement for up to 24 Rafales. Egypt's condition for buying the 12 additional fighters was the supply of SCALP-EG missiles, which was jeopardised by the US blocking the deal. Egypt aimed for the deal's quick completion so as to have the aircraft on display at the inauguration of the Suez Canal expansion in August 2015.
On 16 February 2015, Egypt became the Rafale's first international customer when it officially ordered 24 Rafales as part of a larger deal, including a FREMM multipurpose frigate and missiles, worth US$5.9 billion (€5.2 billion). The order comprised 8 single-seat models and 16 two-seaters. In July 2015, a ceremony marking Egypt's acceptance of its first three Rafales was held at Dassault's flight test center in Istres. In January 2016, Egypt received three more Rafales. All six aircraft are two-seat models (Rafale DM) diverted from French Air Force deliveries. Egypt received the third batch of three Rafales, flown from France by Egyptian pilots, in April 2017; this batch included the first single-seat model (Rafale EM) to be delivered to the Egyptian Air Force. Egypt took delivery of the fourth batch of two Rafale EMs in July 2017. The fifth batch, comprising the last 3 Rafale EMs, was delivered in November 2017, increasing the number in service to 14 Rafales.
In June 2016, Egypt began negotiations with Dassault to acquire 12 additional Rafales, intending to exercise an option of the first contract. An Egyptian delegation visited France in November 2017 for negotiations. In May 2021, Egypt ordered 30 more Rafales in a contract worth $4.5bn, after France made the SCALP EG missile ITAR-free by replacing the US-made parts with French-made components. On 15 November 2021, Egypt confirmed that it will receive 30 Rafale F3Rs between 2024 and 2026. The Egyptian Air Force is interested in buying the Rafale F4 variant once Dassault prepares it for foreign buyers.
Analysts view the relatively quick series of 84 orders from Egypt and Qatar as being influenced by the Arab Spring and uncertainty of US involvement in the Middle East.
Qatar
The Qatar Emiri Air Force evaluated the Rafale alongside the Boeing F/A-18E/F Super Hornet, the Boeing F-15E, the Eurofighter Typhoon and the Lockheed Martin F-35 Lightning II to replace its Dassault Mirage 2000-5 fleet. In June 2014, Dassault claimed it was close to signing a contract with Qatar for 72 Rafales. On 30 April 2015, Sheikh Tamim bin Hamad Al Thani announced to French President François Hollande that Qatar would order 24 Rafale with an option to buy 12 more aircraft. On 4 May, a €6.3 billion ($7.02 billion) contract for 24 Rafales was finalised; additionally, the contract included the provision of long-range cruise missiles and Meteor missiles as well as the training of 36 Qatari pilots and 100 technicians by the French military and several Qatari intelligence officers; thus, the price can be viewed as €M for each aircraft.
On 7 December 2017, the option for 12 more Rafales was exercised for €1.1 billion (or €M each) while adding an additional option for 36 further fighters. The first Qatari Rafale was delivered in February 2019.
India
Indian Air Force
The Rafale was one of the six aircraft competing in the Indian MRCA competition for 126 multirole fighters. Originally, the Mirage 2000 had been considered for the competition, but Dassault withdrew it in favour of the Rafale. In February 2011, French Rafales flew demonstrations in India, including air-to-air combat against Su-30MKIs. In April 2011, the Indian Air Force (IAF) shortlisted the Rafale and Eurofighter Typhoon for the US$10.4 billion contract. On 31 January 2012, the IAF announced the Rafale as the preferred bidder. It was proposed that 18 Rafales would be supplied to the IAF by 2015 in fly-away condition, while the remaining 108 would be manufactured by Hindustan Aeronautics Limited (HAL) in India under transfer of technology agreements. The contract for 126 Rafales, services, and parts may have been worth up to US$20 billion.
The deal stalled due to disagreements over local production; Dassault refused responsibility for the 108 HAL-manufactured Rafales, holding reservations over HAL's ability to accommodate the complex manufacturing and technology transfers, and said instead that two separate production contracts would have to be negotiated, one with each company. The Indian Defence Ministry instead wanted Dassault to be solely responsible for the sale and delivery of all 126 aircraft. In May 2013, The Times of India reported that negotiations were "back on track", with plans for the first 18 Rafales to be delivered in 2017. In March 2014, the two sides reportedly agreed that the first 18 Rafales would be delivered to India in flying condition and that the remaining 108 would be 70 percent built by HAL. By December 2014, India and France reportedly expected to sign a contract by March 2015.
In April 2015, during Prime Minister Narendra Modi's visit to Paris, India requested the swift delivery of 36 Rafales in a fly-away condition. India withdrew the MMRCA tender on 30 July 2015. Then, India and France missed a July target to finalise the 36-aircraft deal. The previously agreed-upon terms in April totaled US$8 billion for 36 aircraft costing $200 million each, with an offset requirement of 30 percent of the deal's value to be reinvested in India's defence sector and infrastructure for Rafale operations. India insisted on a 50 percent offset and two bases, which France said would increase costs and require separate infrastructure and two sets of maintenance, training and armament storage facilities. On 23 September 2016, Defence Minister Manohar Parrikar and his French counterpart Jean-Yves Le Drian signed a €7.8 billion contract for 36 fly-away Rafales with an option for 18 more. Initial deliveries were expected by 2019, and all 36 within six years. The deal included spares and weapons such as Meteor missiles.
The Indian National Congress raised an issue over Dassault partnering with Anil Ambani's Reliance Defence, now known as Reliance Naval and Engineering Limited (R-Naval), a private company with no aviation experience, instead of the state-owned HAL. Allegedly, Dassault lacked any choice and was compelled to select Reliance Defence as its partner. Rahul Gandhi alleged that it was favouritism and corruption. Both the French government and Dassault issued press releases stating it was Dassault's decision to choose Reliance Defence. Party spokesperson Manish Tewari asked for the agreement's details to be made public and questioned whether there had been an escalation of per-aircraft cost from ₹7.15 billion to ₹16 billion. In November 2018, Congress alleged that procurement procedures were bypassed. A Public Interest Litigation (PIL) case was filed in the Supreme Court for an independent probe into the Rafale procurement. On 14 December 2018, the Apex Court dismissed all petitions, stating it found no irregularities; Reliance Defence reportedly was set to receive just over 3 per cent of the offsets, contrary to the impression that it was to be the biggest beneficiary of the deal.
Around August 2017, India considered ordering 36 more Rafales amid tensions with China.
Ahead of the first Rafale's formal handover on 8 October 2019, IAF Day, the IAF accepted it at Dassault's Bordeaux facility in an event attended by Defence Minister Rajnath Singh and his French counterpart, Florence Parly; it had tail number "RB-001" to mark IAF chief-designate Air Chief Marshal R. K. S. Bhadauria's role in the buy. The first five Rafales were delivered on 27 July 2020. The last Rafale arrived in April 2022.
In June 2024, the IAF sent a contingent to the second edition of the Red Flag – Alaska 2024 exercise, conducted from 4 to 14 June at Eielson Air Force Base, Alaska. The Indian contingent consisted of Rafales, one Il-78MKI mid-air refueller and a C-17 heavy transport aircraft. The exercise focused on beyond-visual-range combat simulations. Other participants in the exercise included the Republic of Singapore Air Force, Royal Air Force, Royal Netherlands Air Force, Luftwaffe, and the US Air Force. The aggressor unit was the 18th Fighter Interceptor Squadron. After the conclusion of the exercise, on the way back to India, the contingent made a refuelling halt at Lajes Field, Portugal. After the halt, the contingent was split into two components, one visiting Greece and the other Egypt. The Rafales also participated in air combat exercises with the Hellenic Air Force's F-16s and the Egyptian Air Force's Rafales.
In September 2024, reports revealed that the Rafale-equipped No. 101 Squadron at Hasimara AFS, under the aegis of Eastern Air Command, had been practising shooting at targets mimicking Chinese spy balloons at very high altitudes using air-to-air missiles. This followed the 2023 Chinese balloon incident over the United States, where an F-22 Raptor of the US Air Force had to shoot down a tall Chinese spy balloon using an AIM-9X Sidewinder missile. On 4 February 2023, the F-22, firing from an altitude of , shot down the spy balloon at an altitude of over . In the most recent instance of the Indian Air Force training, a Rafale shot down a comparatively smaller spy balloon (including a payload) with an air-to-air missile at an altitude of .
Indian Navy
In June 2012, Flight Global reported that the Indian Navy was considering the purchase of the Dassault Rafale M (naval variant) for ; the type had also been declared the winner of the IAF's MMRCA competition. In January 2016, the Indian government directed the Indian Navy to be briefed by Dassault on the navalised Rafale for its aircraft carriers, promoting logistics and spares commonality between the Navy and the IAF.
In January 2017, the Indian Navy released a Request for Information for its Multi-Role Carrier Borne Fighter (MRCBF) programme to form a fighter wing for . Dassault CEO Eric Trappier stated that the Indian Navy might order up to 57 Rafales under MRCBF. The number was later reduced to 26 jets, announced as an interim solution until the HAL TEDBF is operational. The competition was between the Rafale and the Boeing F/A-18E/F Super Hornet. Both jets participated in trials from the ski-jumps at the Shore-Based Test Facility (SBTF) at in January and June 2022 respectively. In December 2020, Boeing Defense, Space & Security, in coordination with the United States Navy, had demonstrated the F/A-18E/F Super Hornet's capability to operate from a STOBAR carrier.
On 13 July 2023, Defence Acquisition Council (DAC) of India granted the Acceptance of Necessity (AoN) for the procurement of 26 Rafale M F4 variant aircraft for the Indian Navy along with 3 additional s.
Later, the Indian Navy decided to purchase the 26 Rafales through a government-to-government (G2G) deal. Senior representatives from the Navy and the Defence Acquisition Wing, along with Dassault and Thales, were scheduled to commence negotiations for the deal on 30 May 2024. However, after a postponement from that schedule, the price negotiations were underway in New Delhi as of 14 June 2024. France's chief representative in the G2G deal is the Directorate General of Armament, while the Indian counterpart is the Directorate General of Acquisitions.
By the last week of June 2024, the base price of the Rafale was decided to be the same as that of the IAF's aircraft. According to a report, the Navy-specific variant will include the enhancements of the IAF-specific Rafales, including a helmet-mounted display, low-band frequency jammers, an improved radio altimeter and very-high-frequency range decoys, among others. These upgrades will be accompanied by software changes for air-to-sea mode, electromagnetic interference (EMI) and electromagnetic compatibility (EMC), and so on. The Indian team in the negotiations includes a joint secretary-rank IAS officer along with a Commodore-rank Indian Navy officer. As of 3 September 2024, integration of the Uttam AESA radar and associated indigenous weapons such as the Astra missile had been dropped from the Rafale M F4 deal due to the high integration costs and a delayed delivery timeline of 8 years.
As of 29 September 2024, Dassault has submitted its final price offer for the 26 aircraft to the Navy, which is significantly lower than previous estimates. The naval jets will be equipped with Meteor missiles, anti-ship systems and long-range fuel tanks, in addition to the advanced weapon systems and sensor suites integrated on IAF Rafales.
As of 2 December 2024, large quantities of beyond-visual-range air-to-air and anti-ship missiles, as well as 40 drop tanks for the IAF Rafale fleet, are also expected to be included in the deal. The Rafale M squadron will be based at , Visakhapatnam, and will form the Carrier Air Group of Vikrant.
The deal will be worth around , which also includes the purchase of weapon systems such as the Meteor (air-to-air), Exocet (anti-ship) and SCALP (cruise missile), along with performance-based logistics support, training programmes for crews to operate and maintain the jets, associated ancillary equipment, a simulator, spares and Indian Navy-specific design alterations. As per the deal, the first Rafale M has to be delivered within 37 months of signing the contract, after the IN-specific Rafale M variant is showcased within 18 months of the same. The contract will be cleared by the Cabinet Committee on Security by the end of January, and is expected to be concluded during the visit of Indian Prime Minister Narendra Modi to France on 11 and 12 February.
Greece
In August 2020, the government of Greece announced the acquisition of 18 Rafales. Initial reports stated that ten would be new Rafale C variants in the F3-R standard, with eight older Rafales in the F1 and F2 standards, then in use with the French Air and Space Force, to be given to Greece.
In January 2021, the Hellenic Parliament ratified the agreement with Dassault for the purchase of six newly built and 12 used F3-R aircraft formerly used by the Armée de l'Air, at a total cost of €2.4 billion including armaments and ground support. The inter-governmental agreement was signed on 25 January 2021 by the Defense Ministers of Greece and France. This was followed by an additional contract in March 2022 to buy six additional Rafales, to be delivered from mid-2024. The first aircraft, a Rafale B two-seater, was delivered on 21 July 2021. On 19 January 2022, the first six Rafales landed at Tanagra Air Base, where a welcoming ceremony was held. The type officially entered service in September 2023. In 2024, it was reported around the 80th anniversary of D-Day that the Greek Government was looking to buy 6 to 12 more Rafales (as well as another frigate). Greece also wanted to negotiate postponing some payments on previous arms deals to 2028–2030, and to negotiate the transfer of 24 Mirage 2000-R aircraft that it wanted to discard as partial payment.
Croatia
Croatia received a proposal for 12 used Rafale F3-Rs in September 2020 as part of a bid to replace the Croatian Air Force's MiG-21s. The total package offered cost €1 billion (including weapon systems, spare parts, logistics and training), and competed with the new F-16V Block 70, used Israeli F-16C/D Barak upgraded to the ACE configuration, and the Saab Gripen. On 28 May 2021, Croatian Prime Minister Andrej Plenković announced the purchase of 12 used Rafale C/B F3-Rs: ten single-seat Rafale Cs and two two-seat Rafale Bs. The contract was signed on 25 November 2021.
On 2 October 2023, Croatia received the first of 12 Rafales during a ceremony at Mont-de-Marsan Air Base. As of December 2024, eight aircraft have been delivered.
Future operators
Indonesia
In January 2020, the Indonesian government expressed interest in buying up to 48 Rafales to modernise the Indonesian Air Force. In February 2021, Indonesia's Minister of Defense Prabowo Subianto announced that the purchase of 36 units, as part of a large procurement programme including A330 tankers and complementary American products, was planned and that funds had been secured for its finalization. On 7 June 2021, Indonesia signed a letter of intent to buy 36 Rafales and associated weapons and support.
On 20 January 2022, Prabowo Subianto confirmed that Indonesia had completed the negotiation of the contract, pending activation of the formal agreement by France. On 10 February 2022, Dassault stated that Indonesia had officially signed an order for 42 Rafale F4s, consisting of 30 single-seat and 12 two-seat aircraft.
The first tranche for six Rafales came into force in September 2022. On 10 August 2023, Dassault Aviation announced that a contract covering a second tranche of 18 Rafale fighters for Indonesia had come into force that day, bringing the total under contract to 24. On 8 January 2024, Dassault Aviation disclosed that the third, and final tranche of 18 Rafales came into force, bringing the total aircraft ordered to 42.
Iraq
In November 2020, Iraqi Defence Minister Jumaa Anad stated that Iraq planned to buy Rafales for the Iraqi Air Force. In February 2022, Iraq was reportedly intending to acquire 14 Rafale F4s, payable in crude oil.
Serbia
The President of Serbia, Aleksandar Vučić, stated on 24 December 2021 that Serbia is interested in buying new Rafales to strengthen the Serbian Air Force and Air Defence. La Tribune reported in April 2022 that Serbia and Dassault are negotiating for 12 Rafales.
On 8 April 2024, President Vučić announced the country's intention to purchase 12 Rafales F3, stating that "concrete agreements regarding the purchase of Rafale jets" had been made with French President Emmanuel Macron. Contract negotiations were completed in August 2024.
The contract for nine single-seater Rafales and three two-seaters is worth ().
United Arab Emirates
In 2009, the United Arab Emirates Air Force was interested in an upgraded Rafale with more powerful engines and radar, and advanced air-to-air missiles. In October 2011, Dassault was confident that a US$10 billion deal for up to 60 Rafales would be signed. However, Deputy Supreme Commander of the Union Defence Force, Mohammed bin Zayed Al Nahyan, in November 2011 called the French offer "uncompetitive and unworkable"; in 2010, France had allegedly asked the UAE to pay US$2.6 billion of the total cost of Rafale upgrades. Consequently, the UAE explored a purchase of the Eurofighter Typhoon or the F/A-18E/F Super Hornet. The newspaper La Tribune reported in February 2012 that the UAE was still considering the US$10-billion deal for 60 Rafales. Interoperability among the Gulf air forces had renewed Qatari and Kuwaiti interest in the Rafale. In January 2013, President Hollande stated that he would discuss the Rafale during an official visit to the UAE. In December 2013, the UAE reportedly chose not to proceed with a deal for defence and security services, including the supply of Typhoons.
In September 2014, it was reported that the UAE could acquire 40 Rafales in addition to upgrading its existing Mirage 2000s. In November 2015, Reuters reported that Major General Ibrahim Nasser Al Alawi, commander of the UAE Air Force and Air Defence, had confirmed that the UAE was in final negotiations to purchase 60 Rafales. In 2019 a series of Rafale F3-R trials were conducted at Al Dhafra Air Base in the UAE. On 3 December 2021, Dassault announced that the UAE had signed an order for 80 Rafale F4 in a government-to-government deal, which made the UAE the largest Rafale operator in the region and second to France. The deal makes the United Arab Emirates Air Force the first user of the Rafale F4 standard outside France.
Potential operators
Bangladesh
In March 2020, La Tribune reported that France's Minister of the Armed Forces, Florence Parly, promoted the Rafale's performance to Bangladeshi Prime Minister Sheikh Hasina, who is also Minister of Defense.
Brazil
On 30 October 2024, Brazilian media reported that French President Emmanuel Macron, during his trip to Rio de Janeiro to participate in the 2024 G20 summit, would offer Brazilian President Luiz Inácio Lula da Silva a package of armaments including 24 Rafales to replace the older AMX, as the Brazilian Air Force is in the process of selecting a replacement for its attack jet.
Colombia
In June 2022, La Tribune reported that Dassault had made an offer for 15 fighters, with an option for 9 more, for the Colombian Air Force. Colombia was interested in used aircraft, but France declined, having already sold 24 used jets to Croatia and Greece. On 21 December 2022, the Colombian government announced that it had shortlisted the Rafale for a potential 16-aircraft order to replace its ageing Kfirs. Nevertheless, on 3 January 2023, Colombia and Dassault explained they could not come to an agreement, mainly because of the high price tag of the planes. On 1 April, Colombia issued a new RFP for new planes, with the Rafale, the Gripen and the F-16 as favourites.
Malaysia
The Rafale was a contender for the replacement of the Royal Malaysian Air Force's (RMAF) Mikoyan MiG-29s, with a requirement to equip three squadrons with 36 to 40 fighters and an estimated budget of RM6 billion to RM8 billion (US$1.84 billion to US$2.46 billion). Other competitors were the Eurofighter Typhoon, Boeing F/A-18E/F Super Hornet and Saab JAS 39 Gripen. In July 2017, acquisition efforts were suspended, with the RMAF looking instead to buy new maritime patrol aircraft and advanced trainers with light attack capabilities to confront the growing threat of Islamist militants in the Southeast Asian region.
Peru
In July 2024, it was reported that Peru, as part of a revitalisation programme for the combat elements of the Peruvian Air Force (FAP), was considering the Rafale as one of the contenders in a recently launched fighter competition. General Carlos Enrique Chávez Cateriano, the commanding general of the FAP, announced on 8 July 2024 that a competition had been launched and that the Rafale was one of two leading contenders, the other being the KAI KF-21 Boramae. As of 2024, the FAP operates Mirage 2000P fighters and Mirage 2000DP trainers, as well as American- and Russian-built fighters.
Saudi Arabia
In February 2022, La Tribune reported that Saudi Arabia is interested in the Rafale, then reported in December 2022 that Saudi Arabia would need between 100 and 200 fighters. In October 2023, Saudi Arabian authorities officially asked the French company Dassault Aviation to send a quote and a proposed delivery schedule for 54 Rafale F4 combat aircraft.
Uzbekistan
On 26 November 2023, French President Emmanuel Macron offered the Rafale to both the Kazakh and Uzbek governments, according to La Tribune. Scramble reported that Uzbekistan is interested in buying 24 Rafales, citing a source in the French government.
Failed bids
The Rafale has been marketed for export to various countries. Various commentators and industry sources have highlighted the high cost of the aircraft as detrimental to the Rafale's sales prospects. Its acquisition cost is roughly US$100 million (2010), while its operational cost hovers around US$16,500 (2012) for every flight-hour. The Saab JAS 39 Gripen, in comparison, costs only US$4,700 per flight-hour to operate. According to a 2009 article by the Institute for Defense Studies and Analysis, unlike the American government and its relationship with Boeing and Lockheed Martin, the lack of communication between the French government and Dassault has hampered a worldwide cooperative sales effort, as demonstrated by the case with Morocco in 2007.
Belgium
In 2009, Belgium suggested that it might buy F-35s in the 2020s to replace its fleet of 34 F-16A/B MLUs. An article published in the Belgian newspaper L'Avenir on 19 April 2015 speculated that, if the nuclear strike role via Belgium's nuclear sharing policy were retained in the request for proposals, Belgium would be almost forced to buy the F-35 so as to maintain this role. Belgium officially launched its F-16 replacement program in March 2017, issuing requests for proposals to three European and two US manufacturers: Boeing Defense, Space & Security, Lockheed Martin, Dassault, Eurofighter GmbH and Saab Group, offering the F/A-18E/F Super Hornet, F-35 Lightning II, Rafale, Eurofighter Typhoon and Saab JAS 39 Gripen respectively. On 25 October 2018, Belgium officially selected the offer for 34 F-35As; government officials stated that it had come down to price, and that "The offer from the Americans was the best in all seven evaluation criteria". The total purchasing price for the aircraft and support until 2030 totalled €4 billion, €600 million cheaper than the budgeted €4.6 billion. In April 2020, the first F-35 contract was signed, with deliveries to begin in 2023.
Brazil
In June 2008, the Brazilian Air Force issued a request for information on the following aircraft: F/A-18E/F Super Hornet, F-16 Fighting Falcon, Rafale, Su-35, Gripen NG and Eurofighter Typhoon. In October 2008, the service selected three finalists for F-X2 – the Dassault Rafale, Gripen NG and Boeing F/A-18E/F. On 5 January 2010, media reports stated that the final evaluation report by the Brazilian Air Force placed the Gripen ahead of the other two contenders based on unit and operating costs. In February 2011, Brazilian President Dilma Rousseff had reportedly decided in favour of the F/A-18. After Edward Snowden's revelation that the NSA had been intercepting Rousseff's private communications, and her ensuing fury, the Brazilian government selected the Gripen NG in December 2013 in a US$5 billion deal to equip the air force.
Canada
The Rafale was amongst various fighters proposed to replace the Royal Canadian Air Force's McDonnell Douglas CF-18 Hornet. In 2005, a report compiled by Canada's Department of Defence reviewing aircraft noted concerns over the Rafale's interoperability with US forces; Dassault had also been unable to confirm engine performance during cold weather conditions. In July 2010, the Canadian government announced the F-35 as the CF-18's replacement; the nation was already a partner in the Joint Strike Fighter program since 1997 and a Tier 3 partner for the F-35 since 2002. In December 2012, the Canadian government announced that the F-35 buy had been abandoned due to cost rises and that a fresh procurement process would begin. In January 2013, Dassault responded to Canada's request for information. Various aircraft were considered, including the F-35. In January 2014, Dassault offered a contract with full technology transfer, allowing Canada to perform its own support and upgrades, thereby lowering long-term service costs. In November 2018, Dassault withdrew from the competition, reportedly over interoperability and intelligence sharing requirements, particularly with the US, complicated by France's lack of involvement in the Five Eyes intelligence-sharing group.
Finland
In June 2015, a working group set up by the Finnish MoD proposed starting the HX Fighter Program to replace the Finnish Air Force's current fleet of F/A-18 Hornets. The group recognised five potential types: the Boeing F/A-18E/F Super Hornet, Dassault Rafale, Eurofighter Typhoon, Lockheed Martin F-35 Lightning II and Saab JAS 39 Gripen E/F. In December 2015, the Finnish MoD wrote to Great Britain, France, Sweden and the US informing them of the launch of the HX Fighter Program to replace the Hornet fleet, which will be decommissioned by 2025, with multi-role fighters; the Rafale was mentioned as a potential fighter. The request for information was sent in early 2016; five responses were received in November 2016. In December 2021, the Finnish newspaper Iltalehti reported that several foreign and security policy sources had confirmed the Finnish Defense Forces' recommendation of the F-35 as Finland's next fighter due to its "capability and expected long lifespan".
Kuwait
In February 2009, French President Nicolas Sarkozy announced that Kuwait was considering buying up to 28 Rafales. In October 2009, during a visit to Paris, the Kuwaiti Defence Minister expressed interest in the Rafale and said that he was awaiting Dassault's terms. Islamist lawmakers in the Kuwaiti national assembly threatened to block such a purchase, accusing the Defence Minister of lack of transparency and being manipulated by business interests. In January 2012, the French Defence Minister said that both Kuwait and Qatar were waiting to see if the UAE first purchased the Rafale and that Kuwait would look to buy 18–22 Rafales. However, on 11 September 2015, Eurofighter announced that an agreement had been reached with Kuwait to buy 28 Typhoons.
Singapore
In 2005, the Republic of Singapore Air Force launched its Next Generation Fighter (NGF) programme to replace its ageing A-4SU Super Skyhawks. Several options were considered and the Defence Science & Technology Agency (DSTA) conducted a detailed technical assessment, simulations and other tests to determine the final selection. This reduced the list of competitors to the Rafale and the F-15SG Strike Eagle. In December 2005, Singapore ordered 12 F-15SGs. According to Defense Industry Daily, key reasons for the selection were that, despite the Rafale's superior aerodynamics, it had insufficient range, weapons, and sensor integration.
Switzerland
In February 2007, Switzerland was reportedly considering the Rafale and other fighters to replace its ageing Northrop F-5 Tiger IIs. A one-month evaluation started in October 2008 at Emmen Airforce Base, consisting of approximately 30 evaluation flights; the Rafale, along with the JAS 39 Gripen and the Typhoon, were evaluated. Although a leaked Swiss Air Force evaluation report revealed that the Rafale won the competition on technical grounds, on 30 November 2011, the Swiss Federal Council announced plans to buy 22 Gripen NGs due to its lower acquisition and maintenance costs. Due to a referendum, this purchase never happened.
In March 2018, Swiss officials named contenders in its Air 2030 program: The Rafale, Saab Gripen, Eurofighter Typhoon, Boeing F/A-18E/F Super Hornet and Lockheed Martin F-35. In October 2018, the Swiss Air Force was reportedly limited to buying a single-engine fighter for budgetary reasons. In May 2019, the Rafale performed demonstration flights at Payerne Air Base for comparison against other bids. On 30 June 2021, the Swiss Federal Council proposed to Parliament the acquisition of 36 F-35As at a cost of up to 6 billion Swiss francs (US$6.5 billion), citing the aircraft's cost- and combat-effectiveness. However, it was later confirmed that the costs are capped for a period of just 10 years. The Liberal Greens have promised to examine the F-35's environmental impact. The Swiss anti-military group GSoA intended to contest the purchase in another national referendum supported by the Green Party of Switzerland and the Social Democratic Party of Switzerland (which previously managed to block the Gripen). In August 2022, they registered the initiative, with 120,000 people having signed in less than a year (with 100,000 required).
On 15 September 2022, the Swiss National council gave the Federal council permission to sign the purchase deal, with a time limit for signing of March 2023. The deal to buy 36 F-35A was signed on 19 September 2022, with deliveries to commence in 2027 and conclude by 2030, bypassing the popular initiative.
Other bids
In 2002, the Republic of Korea Air Force chose the F-15K Slam Eagle over the Dassault Rafale, Eurofighter Typhoon and Sukhoi Su-35 for its 40 aircraft F-X Phase 1 fighter competition.
In January 2007, the French newspaper Journal du Dimanche reported that Libya sought 13 to 18 Rafales "in a deal worth as much as US$3.24 billion". In December 2007, Saif al-Islam Gaddafi declared Libya's interest in the Rafale, but no order was placed. French Rafales later attacked targets in Libya as part of the international military intervention during the 2011 Libyan civil war.
In late 2007, La Tribune reported that a prospective US$2.85 billion sale to Morocco had fallen through, with the government selecting the F-16C/D instead. While French Defense Minister Hervé Morin labelled the Rafale as overly sophisticated and too costly, defence analysts have said that miscalculations of the DGA's offer price and hesitations over financing were detrimental to the negotiations.
In February 2009, France offered Rafales to Oman to replace its ageing fleet of SEPECAT Jaguars. In December 2012, Oman placed an order for 12 Typhoons.
Variants
Rafale A Technology demonstrator, first flew in 1986.
Rafale D Dassault used this designation (D for discrète) in the early 1990s to emphasise the new semi-stealthy design features.
Rafale B Two-seater version for the French Air and Space Force.
Rafale C Same as Rafale B but single-seat version for the French Air and Space Force.
Rafale M Similar to Rafale C, but with modifications to allow operations from CATOBAR – equipped aircraft carriers. For carrier operations, the M model has a strengthened airframe, longer nose gear leg to provide a more nose-up attitude, larger tailhook between the engines, and a built-in boarding ladder. Consequently, the Rafale M weighs about more than the Rafale C. It is the only non-US fighter type cleared to operate from the decks of US carriers, using catapults and their arresting gear, as demonstrated in 2008 when six Rafales from Flottille 12F integrated into the Carrier Air Wing interoperability exercise.
Rafale N Originally called the Rafale BM, was a planned missile-only two-seater version for the Aéronavale. Budgetary constraints have been cited as grounds for its cancellation.
Rafale R Proposed reconnaissance-oriented variant.
Rafale DM Two-seater version for the Egyptian Air Force.
Rafale EM Single-seat version for the Egyptian Air Force.
Rafale DH Two-seater version for the Indian Air Force.
Rafale EH Single-seat version for the Indian Air Force.
Rafale DQ Two-seater version for the Qatar Emiri Air Force.
Rafale EQ Single-seat version for the Qatar Emiri Air Force.
Rafale DG Two-seater version for the Hellenic Air Force.
Rafale EG Single-seat version for the Hellenic Air Force.
Operators
Current operators
Croatian Air Force – 12 ex-French Rafale C/B F3-R aircraft ordered: ten single-seat Rafale Cs and two two-seat Rafale Bs. The first six (two B and four C models) were delivered on 25 April 2024, and the remaining six single-seaters are to be delivered in 2025. In October 2023, Croatia officially accepted the first aircraft at a ceremony at Mont-de-Marsan Air Base. As of December 2024, eight aircraft have been delivered.
91st Air Force Base
Egyptian Air Force – 54 ordered, with 24 Rafales in service.
France – A total of 234 have been ordered out of a planned 286. By 2024, 166 units had been delivered to the French Armed Forces; with 12 units sold to Greece and another 12 to Croatia, France currently operates around 143 Rafales. Once deliveries are completed, France is expected to field around 225 units: 185 for the Air and Space Force and 40 for the Navy. All units are expected to be delivered by 2035.
French Air and Space Force – ~100; flying units include:
Saint-Dizier – Robinson Air Base
Escadron de Chasse 2/4 La Fayette (2018–present) nuclear strike
Escadron de Chasse 1/7 Provence (2006–2016) multirole fighter
Escadron de Chasse 1/4 Gascogne (2009–present) nuclear strike
Escadron de Transformation Rafale 3/4 Aquitaine (October 2010–present, Rafale Operational Conversion Unit (OCU) jointly operated by French Air and Space Force and French Naval Aviation)
Mont-de-Marsan Air Base
Escadron de Chasse 2/30 Normandie-Niemen (2012–present) multirole fighter
Escadron de Chasse 3/30 Lorraine (2016–present) multirole fighter
Escadron de chasse et d'expérimentation 1/30 Côte d'Argent (2004–present) tactics development and evaluation
Orange-Caritat Air Base
Escadron de Chasse 1/5 Vendée (2024–present) multirole fighter
Al Dhafra Air Base, UAE
Escadron de Chasse 3/30 Lorraine (2010–2016) multirole fighter
Escadron de Chasse 1/7 Provence (2016–present) multirole fighter
French Navy – 46 delivered, 41 active
Naval Air Base Landivisiau
Flottille 11F (2011–present) multirole carrier fighter
Flottille 12F (2001–present) multirole carrier fighter
Flottille 17F (2016–present) multirole carrier fighter
Hellenic Air Force – Greece ordered 18 Rafales in 2020, and an additional six in 2021 for a total of 24. The first was delivered on 21 July 2021. All 24 have been delivered to the Hellenic Air Force as of January 2025.
Tanagra Air Base
332nd All Weather Squadron (Hawks)
Indian Air Force – 36 (28 single-seat and 8 dual-seat) aircraft delivered by July 2022, of 36 ordered.
Ambala AFS
No. 17 Squadron (Golden Arrows)
Hasimara AFS
No. 101 Squadron (Falcons)
Qatar Emiri Air Force – 36 ordered, all delivered. Qatar ordered 24 of the fighters in 2015 and 12 more in 2018, and also holds an option to order 36 more. As of 2023, all aircraft had been delivered.
Dukhan / Tamim Airbase
1st Fighter Squadron 'Al Adiyat'
Future operators
Indian Navy – 26 Rafale M planned (price negotiations underway).
Indonesian Air Force – 42 Rafale F4s on order to be accepted in 2026.
6th Air Wing – Roesmin Nurjadin AFB, Pekanbaru
12th Air Squadron (Black Panther)
16th Air Squadron (Rydder)
7th Air Wing – Supadio AFB, Pontianak
1st Air Squadron (Equatorial Eagles)
Serbian Air Force and Air Defence - 12 aircraft ordered in 2024 with deliveries to be completed by 2029, reported to be the F4 version.
United Arab Emirates Air Force – 80 Rafale F4s on order
Notable accidents
On 6 December 2007, a French Air Force twin-seat Rafale crashed during a training flight. The pilot, who suffered from spatial disorientation, died in the accident.
On 24 September 2009, after unarmed test flights, two French Navy Rafales returning to the aircraft carrier , collided in mid-air about from the town of Perpignan in southwest France. One test pilot, identified as François Duflot, died in the accident, while the other was rescued.
On 28 November 2010, a Rafale from the carrier Charles de Gaulle crashed in the Arabian Sea. This aircraft was supporting Allied operations in Afghanistan. The pilot ejected safely and was rescued by a rescue helicopter from the carrier. Later reports said the engines stopped after being starved of fuel due to confusion by the pilot in switching fuel tanks.
On 2 July 2012, during a joint exercise, a Rafale from the carrier Charles de Gaulle plunged into the Mediterranean Sea. The pilot ejected safely and was recovered by a search and rescue helicopter from a nearby American carrier.
On 14 August 2024, two French two-seat Rafale Bs collided over Colombey-les-Belles. The pilot of one aircraft ejected before it crashed into the ground, while the trainee and instructor of the second aircraft were reported missing. Both aircraft were based at Saint-Dizier – Robinson Air Base. On 15 August, the deaths of the two missing pilots were announced.
Specifications (Rafale C, B and M)
| Technology | Specific aircraft | null |
33731493 | https://en.wikipedia.org/wiki/Parallel%20postulate | Parallel postulate | In geometry, the parallel postulate, also called Euclid's fifth postulate because it is the fifth postulate in Euclid's Elements, is a distinctive axiom in Euclidean geometry. It states that, in two-dimensional geometry:
If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
This postulate does not specifically talk about parallel lines; it is only a postulate related to parallelism. Euclid gave the definition of parallel lines in Book I, Definition 23 just before the five postulates.
Euclidean geometry is the study of geometry that satisfies all of Euclid's axioms, including the parallel postulate.
The postulate was long considered to be obvious or inevitable, but proofs were elusive. Eventually, it was discovered that inverting the postulate gave valid, albeit different geometries. A geometry where the parallel postulate does not hold is known as a non-Euclidean geometry. Geometry that is independent of Euclid's fifth postulate (i.e., only assumes the modern equivalent of the first four postulates) is known as absolute geometry (or sometimes "neutral geometry").
Equivalent properties
Probably the best-known equivalent of Euclid's parallel postulate, contingent on his other postulates, is Playfair's axiom, named after the Scottish mathematician John Playfair, which states:
In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.
This axiom by itself is not logically equivalent to the Euclidean parallel postulate since there are geometries in which one is true and the other is not. However, in the presence of the remaining axioms which give Euclidean geometry, one can be used to prove the other, so they are equivalent in the context of absolute geometry.
Many other statements equivalent to the parallel postulate have been suggested, some of them appearing at first to be unrelated to parallelism, and some seeming so self-evident that they were unconsciously assumed by people who claimed to have proven the parallel postulate from Euclid's other postulates. These equivalent statements include:
There is at most one line that can be drawn parallel to another given one through an external point. (Playfair's axiom)
The sum of the angles in every triangle is 180° (triangle postulate).
There exists a triangle whose angles add up to 180°.
The sum of the angles is the same for every triangle.
There exists a pair of similar, but not congruent, triangles.
Every triangle can be circumscribed.
If three angles of a quadrilateral are right angles, then the fourth angle is also a right angle.
There exists a quadrilateral in which all angles are right angles, that is, a rectangle.
There exists a pair of straight lines that are at constant distance from each other.
Two lines that are parallel to the same line are also parallel to each other.
In a right-angled triangle, the square of the hypotenuse equals the sum of the squares of the other two sides (Pythagoras' theorem).
The law of cosines, a generalization of Pythagoras' theorem.
There is no upper limit to the area of a triangle. (Wallis axiom)
The summit angles of the Saccheri quadrilateral are 90°.
If a line intersects one of two parallel lines, both of which are coplanar with the original line, then it also intersects the other. (Proclus' axiom)
However, the alternatives which employ the word "parallel" cease appearing so simple when one is obliged to explain which of the four common definitions of "parallel" is meant – constant separation, never meeting, same angles where crossed by some third line, or same angles where crossed by any third line – since the equivalence of these four is itself one of the unconsciously obvious assumptions equivalent to Euclid's fifth postulate. In the list above, it is always taken to refer to non-intersecting lines. For example, if the word "parallel" in Playfair's axiom is taken to mean 'constant separation' or 'same angles where crossed by any third line', then it is no longer equivalent to Euclid's fifth postulate, and is provable from the first four (the axiom says 'There is at most one line...', which is consistent with there being no such lines). However, if the definition is taken so that parallel lines are lines that do not intersect, or that have some line intersecting them in the same angles, Playfair's axiom is contextually equivalent to Euclid's fifth postulate and is thus logically independent of the first four postulates. Note that the latter two definitions are not equivalent, because in hyperbolic geometry the second definition holds only for ultraparallel lines.
History
From the beginning, the postulate came under attack as being provable, and therefore not a postulate, and for more than two thousand years, many attempts were made to prove (derive) the parallel postulate using Euclid's first four postulates. The main reason that such a proof was so highly sought after was that, unlike the first four postulates, the parallel postulate is not self-evident. If the order in which the postulates were listed in the Elements is significant, it indicates that Euclid included this postulate only when he realised he could not prove it or proceed without it.
Many attempts were made to prove the fifth postulate from the other four, many of them being accepted as proofs for long periods until the mistake was found. Invariably the mistake was assuming some 'obvious' property which turned out to be equivalent to the fifth postulate (Playfair's axiom). Although known from the time of Proclus, this became known as Playfair's Axiom after John Playfair wrote a famous commentary on Euclid in 1795 in which he proposed replacing Euclid's fifth postulate by his own axiom. Today, over two thousand two hundred years later, Euclid's fifth postulate remains a postulate.
Proclus (410–485) wrote a commentary on The Elements where he comments on attempted proofs to deduce the fifth postulate from the other four; in particular, he notes that Ptolemy had produced a false 'proof'. Proclus then goes on to give a false proof of his own. However, he did give a postulate which is equivalent to the fifth postulate.
Ibn al-Haytham (Alhazen) (965–1039), an Arab mathematician, made an attempt at proving the parallel postulate using a proof by contradiction, in the course of which he introduced the concept of motion and transformation into geometry. He formulated the Lambert quadrilateral, which Boris Abramovich Rozenfeld names the "Ibn al-Haytham–Lambert quadrilateral", and his attempted proof contains elements similar to those found in Lambert quadrilaterals and Playfair's axiom.
The Persian mathematician, astronomer, philosopher, and poet Omar Khayyám (1050–1123) attempted to prove the fifth postulate from another explicitly given postulate (based on the fourth of the five principles due to the Philosopher, Aristotle), namely, "Two convergent straight lines intersect and it is impossible for two convergent straight lines to diverge in the direction in which they converge." He derived some of the earlier results belonging to elliptical geometry and hyperbolic geometry, though his postulate excluded the latter possibility. The Saccheri quadrilateral was also first considered by Omar Khayyám in the late 11th century in Book I of Explanations of the Difficulties in the Postulates of Euclid. Unlike many commentators on Euclid before and after him (including Giovanni Girolamo Saccheri), Khayyám was not trying to prove the parallel postulate as such but to derive it from his equivalent postulate. He recognized that three possibilities arose from omitting Euclid's fifth postulate: if two perpendiculars to one line cross another line, judicious choice of the last can make the internal angles where it meets the two perpendiculars equal (it is then parallel to the first line). If those equal internal angles are right angles, we get Euclid's fifth postulate; otherwise, they must be either acute or obtuse. He showed that the acute and obtuse cases led to contradictions using his postulate, but his postulate is now known to be equivalent to the fifth postulate.
Nasir al-Din al-Tusi (1201–1274), in his Al-risala al-shafiya'an al-shakk fi'l-khutut al-mutawaziya (Discussion Which Removes Doubt about Parallel Lines) (1250), wrote detailed critiques of the parallel postulate and on Khayyám's attempted proof a century earlier. Nasir al-Din attempted to derive a proof by contradiction of the parallel postulate. He also considered the cases of what are now known as elliptical and hyperbolic geometry, though he ruled out both of them.
Nasir al-Din's son, Sadr al-Din (sometimes known as "Pseudo-Tusi"), wrote a book on the subject in 1298, based on his father's later thoughts, which presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. "He essentially revised both the Euclidean system of axioms and postulates and the proofs of many propositions from the Elements." His work was published in Rome in 1594 and was studied by European geometers. This work marked the starting point for Saccheri's work on the subject which opened with a criticism of Sadr al-Din's work and the work of Wallis.
Giordano Vitale (1633–1711), in his book Euclide restituo (1680, 1686), used the Khayyam-Saccheri quadrilateral to prove that if three points are equidistant on the base AB and the summit CD, then AB and CD are everywhere equidistant. Girolamo Saccheri (1667–1733) pursued the same line of reasoning more thoroughly, correctly obtaining absurdity from the obtuse case (proceeding, like Euclid, from the implicit assumption that lines can be extended indefinitely and have infinite length), but failing to refute the acute case (although he managed to wrongly persuade himself that he had).
In 1766 Johann Lambert wrote, but did not publish, Theorie der Parallellinien in which he attempted, as Saccheri did, to prove the fifth postulate. He worked with a figure that today we call a Lambert quadrilateral, a quadrilateral with three right angles (can be considered half of a Saccheri quadrilateral). He quickly eliminated the possibility that the fourth angle is obtuse, as had Saccheri and Khayyám, and then proceeded to prove many theorems under the assumption of an acute angle. Unlike Saccheri, he never felt that he had reached a contradiction with this assumption. He had proved the non-Euclidean result that the sum of the angles in a triangle increases as the area of the triangle decreases, and this led him to speculate on the possibility of a model of the acute case on a sphere of imaginary radius. He did not carry this idea any further.
Where Khayyám and Saccheri had attempted to prove Euclid's fifth by disproving the only possible alternatives, the nineteenth century finally saw mathematicians exploring those alternatives and discovering the logically consistent geometries that result. In 1829, Nikolai Ivanovich Lobachevsky published an account of acute geometry in an obscure Russian journal (later re-published in 1840 in German). In 1831, János Bolyai included, in a book by his father, an appendix describing acute geometry, which, doubtlessly, he had developed independently of Lobachevsky. Carl Friedrich Gauss had also studied the problem, but he did not publish any of his results. Upon hearing of Bolyai's results in a letter from Bolyai's father, Farkas Bolyai, Gauss stated:
If I commenced by saying that I am unable to praise this work, you would certainly be surprised for a moment. But I cannot say otherwise. To praise it would be to praise myself. Indeed the whole contents of the work, the path taken by your son, the results to which he is led, coincide almost entirely with my meditations, which have occupied my mind partly for the last thirty or thirty-five years.
The resulting geometries were later developed by Lobachevsky, Riemann and Poincaré into hyperbolic geometry (the acute case) and elliptic geometry (the obtuse case). The independence of the parallel postulate from Euclid's other axioms was finally demonstrated by Eugenio Beltrami in 1868.
Converse of Euclid's parallel postulate
Euclid did not postulate the converse of his fifth postulate, which is one way to distinguish Euclidean geometry from elliptic geometry. The Elements contains the proof of an equivalent statement (Book I, Proposition 27): If a straight line falling on two straight lines make the alternate angles equal to one another, the straight lines will be parallel to one another. As De Morgan pointed out, this is logically equivalent to (Book I, Proposition 16). These results do not depend upon the fifth postulate, but they do require the second postulate which is violated in elliptic geometry.
Criticism
Attempts to logically prove the parallel postulate, rather than the eighth axiom, were criticized by Arthur Schopenhauer in The World as Will and Idea. However, the argument used by Schopenhauer was that the postulate is evident by perception, not that it was not a logical consequence of the other axioms.
Decomposition of the parallel postulate
The parallel postulate is equivalent to the conjunction of the Lotschnittaxiom and of Aristotle's axiom.
The former states that the perpendiculars to the sides of a right angle intersect, while the latter states that there is no upper bound for the lengths of the distances from the leg of an angle to the other leg. As has been shown, the parallel postulate is equivalent to the conjunction of the following incidence-geometric forms of the Lotschnittaxiom and of Aristotle's axiom:
Given three parallel lines, there is a line that intersects all three of them.
Given a line a and two distinct intersecting lines m and n, each different from a, there exists a line g which intersects a and m, but not n.
The splitting of the parallel postulate into the conjunction of these incidence-geometric axioms is possible only in the presence of absolute geometry.
| Mathematics | Euclidean geometry | null |
43440252 | https://en.wikipedia.org/wiki/Digital%20buffer | Digital buffer | A digital buffer (or a logic buffer) is an electronic circuit element used to copy a digital input signal and isolate it from any output load. For the typical case of using voltages as logic signals, a logic buffer's input impedance is high, so it draws little current from the input circuit, to avoid disturbing its signal.
The digital buffer is important in data transmission between connected systems. Buffers are used in registers (data storage device) and buses (data transferring device). To connect to a shared bus, a tri-state digital buffer should be used, because it has a high impedance ("inactive" or "disconnected") output state (in addition to logic low and high).
Functionality
A voltage buffer amplifier transfers a voltage from a high output impedance circuit to a second circuit with low input impedance. Directly connecting a low impedance load to a power source draws current according to Ohm's law. The high current affects the source. Buffer inputs are high impedance. A buffered load effectively does not affect the source circuit. The buffer's output current is generated within the buffer. In this way, a buffer provides isolation between a power source and a low impedance. The buffer does not intentionally amplify or attenuate the input signal, and so may be called a unity gain buffer.
A digital buffer is a type of voltage buffer amplifier that is only concerned about digital logic levels, and thus may be non-linear. It may also act as a level shifter, with output voltages differing from the input voltages. One case of this is an inverting buffer which translates an active-high signal to an active-low one (or vice versa).
Types
Single input voltage buffer
Inverting buffer
This buffer's output state is the opposite of the input state. If the input is high, the output is low, and vice versa. Graphically, an inverting buffer is represented by a triangle with a small circle at the output, with the circle signifying inversion. The inverter is a basic building block in digital electronics. Decoders, state machines, and other sophisticated digital devices often include inverters.
Non-inverting buffer
This kind of buffer performs no inversion and offers no decision-making capability. Unlike an inverter, a single-input digital buffer does not invert or alter its input signal in any way: it simply reads an input and outputs the corresponding value. The input reads either HIGH or LOW, and the output is HIGH or LOW correspondingly; whether the output terminal sends a HIGH or LOW signal is determined solely by its input value. In other words, the output Q will be HIGH if and only if the input A is HIGH.
Tri-state digital buffer
Unlike the single input digital buffer which has only one input, the tri-state digital buffer has two inputs: a data input and a control input. (A control input is analogous to a valve, which controls the data flow.) When the control input is active, the output value is the input value, and the buffer is not different from the single input digital buffer.
Active high tri-state digital buffer
An active-high tri-state digital buffer is a buffer that is in an active state that transmits its data input to the output only when its control input voltage is high (logic 1). But when the control input is low (logic 0), the output is high impedance (abbreviated as "Hi-Z"), as if the part had been removed from the circuit.
Active low tri-state digital buffer
It is basically the same as active high digital buffer except the fact that the buffer is active when the control input is at a low state.
Inverting tri-state digital buffer
Tri-State digital buffers also have inverting varieties in which the output is the inverse of the input.
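A minimal Python sketch of the buffer types described above, modelling logic levels as booleans and the high-impedance state as None; the function names and the None convention are illustrative assumptions rather than any standard library:

# Logic levels are modelled as True/False; the high-impedance state as None.
def buffer(a: bool) -> bool:
    """Non-inverting buffer: the output simply follows the input."""
    return a

def inverter(a: bool) -> bool:
    """Inverting buffer: the output is the logical complement of the input."""
    return not a

def tristate(a: bool, enable: bool, *, active_high: bool = True,
             inverting: bool = False):
    """Tri-state buffer: drives the data value only when enabled,
    otherwise returns None to represent the Hi-Z (disconnected) state."""
    enabled = enable if active_high else not enable
    if not enabled:
        return None          # Hi-Z: effectively removed from the circuit
    return (not a) if inverting else a

print(tristate(True, enable=True))                       # True  (drives the output)
print(tristate(True, enable=False))                      # None  (Hi-Z)
print(tristate(True, enable=False, active_high=False))   # True  (active-low control)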
Application
Single input voltage buffers are used in many places for measurements including:
In strain gauge circuitry to measure deformations in structures like bridges, airplane wings and I-beams in buildings.
In temperature measurement circuitry for boilers and in high altitude aircraft in a cold environment.
In control circuits for aircraft, people movers in airports, subways and in many different production operations.
Tri-state voltage buffers are used widely to transmit onto shared buses, since a bus can only transmit one input device's data at a time. The high-impedance output state effectively temporarily disconnects that input device from the bus, since at most only one device should actively drive the bus's shared wires.
| Technology | Digital logic | null |
26513034 | https://en.wikipedia.org/wiki/Pythagorean%20theorem | Pythagorean theorem | In mathematics, the Pythagorean theorem or Pythagoras' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. It states that the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares on the other two sides.
The theorem can be written as an equation relating the lengths of the sides a and b and the hypotenuse c, sometimes called the Pythagorean equation: a² + b² = c².
The theorem is named for the Greek philosopher Pythagoras, born around 570 BC. The theorem has been proved numerous times by many different methods – possibly the most for any mathematical theorem. The proofs are diverse, including both geometric proofs and algebraic proofs, with some dating back thousands of years.
When Euclidean space is represented by a Cartesian coordinate system in analytic geometry, Euclidean distance satisfies the Pythagorean relation: the squared distance between two points equals the sum of squares of the difference in each coordinate between the points.
The theorem can be generalized in various ways: to higher-dimensional spaces, to spaces that are not Euclidean, to objects that are not right triangles, and to objects that are not triangles at all but n-dimensional solids.
Proofs using constructed squares
Rearrangement proofs
In one rearrangement proof, two squares are used whose sides have a measure of a + b and which contain four right triangles whose sides are a, b and c, with the hypotenuse being c. In the square on the right side, the triangles are placed such that the corners of the square correspond to the corners of the right angle in the triangles, forming a square in the center whose sides are length c. Each outer square has an area of (a + b)² as well as 2ab + c², with 2ab representing the total area of the four triangles. Within the big square on the left side, the four triangles are moved to form two similar rectangles with sides of length a and b. These rectangles in their new position have now delineated two new squares: one having side length a is formed in the bottom-left corner, and another square of side length b is formed in the top-right corner. In this new position, this left side now has a square of area (a + b)² as well as 2ab + a² + b². Since both squares have the area of (a + b)², it follows that the other expressions for the square areas also equal each other, such that 2ab + c² = 2ab + a² + b². With the area of the four triangles removed from both sides of the equation, what remains is a² + b² = c².
In another proof, the rectangles in the second box can also be placed such that both have one corner that corresponds to consecutive corners of the square. In this way they also form two squares, this time in consecutive corners, with areas a² and b², which will again lead to the second square having the area 2ab + a² + b².
English mathematician Sir Thomas Heath gives this proof in his commentary on Proposition I.47 in Euclid's Elements, and mentions the proposals of German mathematicians Carl Anton Bretschneider and Hermann Hankel that Pythagoras may have known this proof. Heath himself favors a different proposal for a Pythagorean proof, but acknowledges from the outset of his discussion "that the Greek literature which we possess belonging to the first five centuries after Pythagoras contains no statement specifying this or any other particular great geometric discovery to him." Recent scholarship has cast increasing doubt on any sort of role for Pythagoras as a creator of mathematics, although debate about this continues.
Algebraic proofs
The theorem can be proved algebraically using four copies of the same triangle arranged symmetrically around a square with side c, as shown in the lower part of the diagram. This results in a larger square, with side a + b and area (a + b)². The four triangles and the square with side c must have the same area as the larger square,
giving (a + b)² = c² + 4·(ab/2) = c² + 2ab, which simplifies to a² + b² = c².
A similar proof uses four copies of a right triangle with sides a, b and c, arranged inside a square with side c, as in the top half of the diagram. The triangles are similar with area ab/2, while the small square has side b − a and area (b − a)². The area of the large square is therefore (b − a)² + 2ab = a² + b².
But this is a square with side c and area c², so c² = a² + b².
Other proofs of the theorem
This theorem may have more known proofs than any other (the law of quadratic reciprocity being another contender for that distinction); the book The Pythagorean Proposition contains 370 proofs.
Proof using similar triangles
This proof is based on the proportionality of the sides of three similar triangles, that is, upon the fact that the ratio of any two corresponding sides of similar triangles is the same regardless of the size of the triangles.
Let ABC represent a right triangle, with the right angle located at C, as shown on the figure. Draw the altitude from point C, and call H its intersection with the side AB. Point H divides the length of the hypotenuse AB into two parts. The new triangle, ACH, is similar to triangle ABC, because they both have a right angle (by definition of the altitude), and they share the angle at A, meaning that the third angle will be the same in both triangles as well, marked as θ in the figure. By a similar reasoning, the triangle CBH is also similar to ABC. The proof of similarity of the triangles requires the triangle postulate: the sum of the angles in a triangle is two right angles, and is equivalent to the parallel postulate. Similarity of the triangles leads to the equality of ratios of corresponding sides:
The first result equates the cosines of the angles , whereas the second result equates their sines.
These ratios can be written as
Summing these two equalities results in
which, after simplification, demonstrates the Pythagorean theorem:
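The displayed ratio equations appear to have been stripped from the text above; using the labels from the proof (right angle at C, altitude foot H on the hypotenuse AB), a worked reconstruction in LaTeX reads:

\[
\frac{BC}{AB} = \frac{BH}{BC}, \qquad \frac{AC}{AB} = \frac{AH}{AC}
\;\Longrightarrow\;
BC^{2} = AB\cdot BH, \qquad AC^{2} = AB\cdot AH,
\]

and adding the two equalities gives BC² + AC² = AB·(BH + AH) = AB², which is the Pythagorean theorem.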
The role of this proof in history is the subject of much speculation. The underlying question is why Euclid did not use this proof, but invented another. One conjecture is that the proof by similar triangles involved a theory of proportions, a topic not discussed until later in the Elements, and that the theory of proportions needed further development at that time.
Einstein's proof by dissection without rearrangement
Albert Einstein gave a proof by dissection in which the pieces do not need to be moved. Instead of using a square on the hypotenuse and two squares on the legs, one can use any other shape that includes the hypotenuse, and two similar shapes that each include one of two legs instead of the hypotenuse (see Similar figures on the three sides). In Einstein's proof, the shape that includes the hypotenuse is the right triangle itself. The dissection consists of dropping a perpendicular from the vertex of the right angle of the triangle to the hypotenuse, thus splitting the whole triangle into two parts. Those two parts have the same shape as the original right triangle, and have the legs of the original triangle as their hypotenuses, and the sum of their areas is that of the original triangle. Because the ratio of the area of a right triangle to the square of its hypotenuse is the same for similar triangles, the relationship between the areas of the three triangles holds for the squares of the sides of the large triangle as well.
Euclid's proof
In outline, here is how the proof in Euclid's Elements proceeds. The large square is divided into a left and right rectangle. A triangle is constructed that has half the area of the left rectangle. Then another triangle is constructed that has half the area of the square on the left-most side. These two triangles are shown to be congruent, proving this square has the same area as the left rectangle. This argument is followed by a similar version for the right rectangle and the remaining square. Putting the two rectangles together to reform the square on the hypotenuse, its area is the same as the sum of the area of the other two squares. The details follow.
Let , , be the vertices of a right triangle, with a right angle at . Drop a perpendicular from to the side opposite the hypotenuse in the square on the hypotenuse. That line divides the square on the hypotenuse into two rectangles, each having the same area as one of the two squares on the legs.
For the formal proof, we require four elementary lemmata:
If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are congruent (side-angle-side).
The area of a triangle is half the area of any parallelogram on the same base and having the same altitude.
The area of a rectangle is equal to the product of two adjacent sides.
The area of a square is equal to the product of two of its sides (follows from 3).
Next, each top square is related to a triangle congruent with another triangle related in turn to one of two rectangles making up the lower square.
The proof is as follows:
Let ACB be a right-angled triangle with right angle CAB.
On each of the sides BC, AB, and CA, squares are drawn, CBDE, BAGF, and ACIH, in that order. The construction of squares requires the immediately preceding theorems in Euclid, and depends upon the parallel postulate.
From A, draw a line parallel to BD and CE. It will perpendicularly intersect BC and DE at K and L, respectively.
Join CF and AD, to form the triangles BCF and BDA.
Angles CAB and BAG are both right angles; therefore C, A, and G are collinear.
Angles CBD and FBA are both right angles; therefore angle ABD equals angle FBC, since both are the sum of a right angle and angle ABC.
Since AB is equal to FB, BD is equal to BC and angle ABD equals angle FBC, triangle ABD must be congruent to triangle FBC.
Since A-K-L is a straight line, parallel to BD, then rectangle BDLK has twice the area of triangle ABD because they share the base BD and have the same altitude BK, i.e., a line normal to their common base, connecting the parallel lines BD and AL. (lemma 2)
Since C is collinear with A and G, and this line is parallel to FB, then square BAGF must be twice in area to triangle FBC.
Therefore, rectangle BDLK must have the same area as square BAGF = AB2.
By applying steps 3 to 10 to the other side of the figure, it can be similarly shown that rectangle CKLE must have the same area as square ACIH = AC2.
Adding these two results, AB2 + AC2 = BD × BK + KL × KC
Since BD = KL, BD × BK + KL × KC = BD(BK + KC) = BD × BC
Therefore, AB2 + AC2 = BC2, since CBDE is a square.
This proof, which appears in Euclid's Elements as that of Proposition 47 in Book 1, demonstrates that the area of the square on the hypotenuse is the sum of the areas of the other two squares.
This is quite distinct from the proof by similarity of triangles, which is conjectured to be the proof that Pythagoras used.
Proofs by dissection and rearrangement
Another proof by rearrangement is given by the middle animation. A large square is formed with area c², from four identical right triangles with sides a, b and c, fitted around a small central square. Then two rectangles are formed with sides a and b by moving the triangles. Combining the smaller square with these rectangles produces two squares of areas a² and b², which must have the same area as the initial large square.
The third, rightmost image also gives a proof. The upper two squares are divided as shown by the blue and green shading, into pieces that when rearranged can be made to fit in the lower square on the hypotenuse – or conversely the large square can be divided as shown into pieces that fill the other two. This way of cutting one figure into pieces and rearranging them to get another figure is called dissection. This shows the area of the large square equals that of the two smaller ones.
Proof by area-preserving shearing
As shown in the accompanying animation, area-preserving shear mappings and translations can transform the squares on the sides adjacent to the right-angle onto the square on the hypotenuse, together covering it exactly. Each shear leaves the base and height unchanged, thus leaving the area unchanged too. The translations also leave the area unchanged, as they do not alter the shapes at all. Each square is first sheared into a parallelogram, and then into a rectangle which can be translated onto one section of the square on the hypotenuse.
Other algebraic proofs
A related proof was published by future U.S. President James A. Garfield (then a U.S. Representative). Instead of a square it uses a trapezoid, which can be constructed from the square in the second of the above proofs by bisecting along a diagonal of the inner square, to give the trapezoid as shown in the diagram. The area of the trapezoid can be calculated to be half the area of the square, that is ½(a + b)².
The inner square is similarly halved, and there are only two triangles, so the proof proceeds as above except for a factor of ½, which is removed by multiplying by two to give the result.
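Spelling out the algebra of Garfield's argument (the trapezoid consists of two triangles with legs a and b and one isosceles right triangle with legs c), a worked reconstruction in LaTeX is:

\[
\tfrac{1}{2}(a+b)^{2} \;=\; 2\cdot\tfrac{ab}{2} + \tfrac{c^{2}}{2}
\;\Longrightarrow\;
(a+b)^{2} = 2ab + c^{2}
\;\Longrightarrow\;
a^{2} + b^{2} = c^{2}.
\]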
Proof using differentials
One can arrive at the Pythagorean theorem by studying how changes in a side produce a change in the hypotenuse and employing calculus.
The triangle ABC is a right triangle, as shown in the upper part of the diagram, with BC the hypotenuse. At the same time the triangle lengths are measured as shown, with the hypotenuse of length , the side AC of length and the side AB of length , as seen in the lower diagram part.
If is increased by a small amount dx by extending the side AC slightly to , then also increases by dy. These form two sides of a triangle, CDE, which (with chosen so CE is perpendicular to the hypotenuse) is a right triangle approximately similar to ABC. Therefore, the ratios of their sides must be the same, that is:
This can be rewritten as , which is a differential equation that can be solved by direct integration:
giving
The constant can be deduced from , to give the equation
This is more of an intuitive proof than a formal one: it can be made more rigorous if proper limits are used in place of dx and dy.
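The stripped equations of this argument can be reconstructed as follows (a sketch of the standard derivation, using x for side AC, y for the hypotenuse and a for the fixed side AB):

\[
\frac{dy}{dx} = \frac{x}{y}
\;\Longrightarrow\;
y\,dy = x\,dx
\;\Longrightarrow\;
y^{2} = x^{2} + C,
\]

and the condition that y = a when x = 0 fixes the constant as C = a², giving y² = x² + a².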
Converse
The converse of the theorem is also true:
Given a triangle with sides of length a, b, and c, if a² + b² = c², then the angle between sides a and b is a right angle.
For any three positive real numbers , , and such that , there exists a triangle with sides , and as a consequence of the converse of the triangle inequality.
This converse appears in Euclid's Elements (Book I, Proposition 48): "If in a triangle the square on one of the sides equals the sum of the squares on the remaining two sides of the triangle, then the angle contained by the remaining two sides of the triangle is right."
It can be proved using the law of cosines or as follows:
Let ABC be a triangle with side lengths , , and , with Construct a second triangle with sides of length and containing a right angle. By the Pythagorean theorem, it follows that the hypotenuse of this triangle has length , the same as the hypotenuse of the first triangle. Since both triangles' sides are the same lengths , and , the triangles are congruent and must have the same angles. Therefore, the angle between the side of lengths and in the original triangle is a right angle.
The above proof of the converse makes use of the Pythagorean theorem itself. The converse can also be proved without assuming the Pythagorean theorem.
A corollary of the Pythagorean theorem's converse is a simple means of determining whether a triangle is right, obtuse, or acute, as follows. Let c be chosen to be the longest of the three sides and a + b > c (otherwise there is no triangle according to the triangle inequality). The following statements apply:
If a² + b² = c², then the triangle is right.
If a² + b² > c², then the triangle is acute.
If a² + b² < c², then the triangle is obtuse.
Edsger W. Dijkstra has stated this proposition about acute, right, and obtuse triangles in this language: sgn(α + β − γ) = sgn(a² + b² − c²),
where α is the angle opposite to side a, β is the angle opposite to side b, γ is the angle opposite to side c, and sgn is the sign function.
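The corollary above, and Dijkstra's sign formulation, can be checked with a short Python sketch; the function name and error handling are illustrative choices, not from the article:

def classify_triangle(a: float, b: float, c: float) -> str:
    """Classify a triangle as right, acute, or obtuse from its side lengths."""
    # Sort so that c is the longest side, as the corollary requires.
    a, b, c = sorted((a, b, c))
    if a + b <= c:
        raise ValueError("side lengths violate the triangle inequality")
    d = a * a + b * b - c * c   # the sign of d mirrors sgn(alpha + beta - gamma)
    if d == 0:                  # exact for integer inputs
        return "right"
    return "acute" if d > 0 else "obtuse"

print(classify_triangle(3, 4, 5))   # right
print(classify_triangle(2, 3, 4))   # obtuse
print(classify_triangle(4, 5, 6))   # acute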
Consequences and uses of the theorem
Pythagorean triples
A Pythagorean triple has three positive integers a, b, and c, such that a² + b² = c². In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths. Such a triple is commonly written (a, b, c). Some well-known examples are (3, 4, 5) and (5, 12, 13).
A primitive Pythagorean triple is one in which a, b and c are coprime (the greatest common divisor of a, b and c is 1).
The following is a list of primitive Pythagorean triples with values less than 100:
(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41), (11, 60, 61), (12, 35, 37), (13, 84, 85), (16, 63, 65), (20, 21, 29), (28, 45, 53), (33, 56, 65), (36, 77, 85), (39, 80, 89), (48, 55, 73), (65, 72, 97)
There are many formulas for generating Pythagorean triples. Of these, Euclid's formula is the most well-known: given arbitrary positive integers m and n with m > n, the formula states that the integers
a = m² − n², b = 2mn, c = m² + n² form a Pythagorean triple.
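A minimal Python sketch of Euclid's formula, reproducing the list of primitive triples given above; the helper name and the limit convention (all values below 100) are assumptions for illustration:

from math import gcd

def primitive_triples(limit: int):
    """Generate primitive Pythagorean triples (a, b, c) with c < limit
    using Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    triples = []
    m = 2
    while m * m + 1 < limit:          # smallest possible c for this m is m^2 + 1
        for n in range(1, m):
            # m > n, coprime, and not both odd: these conditions make the triple primitive.
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if c < limit:
                    triples.append(tuple(sorted((a, b))) + (c,))
        m += 1
    return sorted(triples, key=lambda t: (t[2], t[0]))

print(primitive_triples(100))
# Expected to reproduce the sixteen primitive triples listed above,
# e.g. (3, 4, 5), (5, 12, 13), (8, 15, 17), ...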
Inverse Pythagorean theorem
Given a right triangle with sides a, b, c and altitude d (a line from the right angle and perpendicular to the hypotenuse c), the Pythagorean theorem gives a² + b² = c²,
while the inverse Pythagorean theorem relates the two legs a, b to the altitude d by 1/a² + 1/b² = 1/d².
The equation can be transformed to,
where for any non-zero real . If the are to be integers, the smallest solution is then
using the smallest Pythagorean triple (3, 4, 5).
where the denominators are squares and also for a heptagonal triangle whose sides are square numbers.
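The smallest all-integer instance alluded to above can be written out explicitly; a worked illustration (the specific scaling is my own) in LaTeX:

\[
\frac{1}{20^{2}} + \frac{1}{15^{2}} = \frac{1}{12^{2}},
\]

obtained by scaling the (3, 4, 5) triangle: the right triangle with legs 20 and 15 has hypotenuse 25 and altitude 20 · 15 / 25 = 12.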
Incommensurable lengths
One of the consequences of the Pythagorean theorem is that line segments whose lengths are incommensurable (so the ratio of which is not a rational number) can be constructed using a straightedge and compass. Pythagoras' theorem enables construction of incommensurable lengths because the hypotenuse of a triangle is related to the sides by the square root operation.
The figure on the right shows how to construct line segments whose lengths are in the ratio of the square root of any positive integer. Each triangle has a side (labeled "1") that is the chosen unit for measurement. In each right triangle, Pythagoras' theorem establishes the length of the hypotenuse in terms of this unit. If a hypotenuse is related to the unit by the square root of a positive integer that is not a perfect square, it is a realization of a length incommensurable with the unit, such as , , . For more detail, see Quadratic irrational.
Incommensurable lengths conflicted with the Pythagorean school's concept of numbers as only whole numbers. The Pythagorean school dealt with proportions by comparison of integer multiples of a common subunit. According to one legend, Hippasus of Metapontum (ca. 470 B.C.) was drowned at sea for making known the existence of the irrational or incommensurable.
A careful discussion of Hippasus's contributions is found in Fritz.
Complex numbers
For any complex number z = x + iy,
the absolute value or modulus is given by r = |z| = √(x² + y²).
So the three quantities r, x and y are related by the Pythagorean equation r² = x² + y².
Note that r is defined to be a positive number or zero but x and y can be negative as well as positive. Geometrically r is the distance of z from zero or the origin in the complex plane.
This can be generalised to find the distance between two points, z1 and z2 say. The required distance is given by |z1 − z2| = √((x1 − x2)² + (y1 − y2)²),
so again they are related by a version of the Pythagorean equation.
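Python's built-in complex type already encodes this relationship, which makes for a compact illustration (the specific numbers are arbitrary):

# abs(z) is the modulus sqrt(x**2 + y**2) of the complex number z = x + iy.
z = complex(3, 4)          # z = 3 + 4i
print(abs(z))              # 5.0, since 3**2 + 4**2 = 5**2

# The distance between two points in the complex plane is the modulus
# of their difference, again a Pythagorean relation.
z1, z2 = complex(1, 2), complex(4, 6)
print(abs(z1 - z2))        # 5.0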
Euclidean distance
The distance formula in Cartesian coordinates is derived from the Pythagorean theorem. If and are points in the plane, then the distance between them, also called the Euclidean distance, is given by
More generally, in Euclidean -space, the Euclidean distance between two points, and , is defined, by generalization of the Pythagorean theorem, as:
If instead of Euclidean distance, the square of this value (the squared Euclidean distance, or SED) is used, the resulting equation avoids square roots and is simply a sum of the SED of the coordinates:
The squared form is a smooth, convex function of both points, and is widely used in optimization theory and statistics, forming the basis of least squares.
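A small Python sketch of the n-dimensional distance and its squared form; the function names are illustrative:

from math import sqrt
from typing import Sequence

def squared_distance(p: Sequence[float], q: Sequence[float]) -> float:
    """Squared Euclidean distance: a simple sum of squared coordinate differences."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def euclidean_distance(p: Sequence[float], q: Sequence[float]) -> float:
    """Distance between two points in n-dimensional Euclidean space."""
    return sqrt(squared_distance(p, q))

print(euclidean_distance((0, 0), (3, 4)))      # 5.0
print(squared_distance((1, 2, 3), (4, 6, 3)))  # 25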
Euclidean distance in other coordinate systems
If Cartesian coordinates are not used, for example, if polar coordinates are used in two dimensions or, in more general terms, if curvilinear coordinates are used, the formulas expressing the Euclidean distance are more complicated than the Pythagorean theorem, but can be derived from it. A typical example where the straight-line distance between two points is converted to curvilinear coordinates can be found in the applications of Legendre polynomials in physics. The formulas can be discovered by using Pythagoras' theorem with the equations relating the curvilinear coordinates to Cartesian coordinates. For example, the polar coordinates can be introduced as:
Then two points with locations and are separated by a distance :
Performing the squares and combining terms, the Pythagorean formula for distance in Cartesian coordinates produces the separation in polar coordinates as:
using the trigonometric product-to-sum formulas. This formula is the law of cosines, sometimes called the generalized Pythagorean theorem. From this result, for the case where the radii to the two locations are at right angles, the enclosed angle and the form corresponding to Pythagoras' theorem is regained: The Pythagorean theorem, valid for right triangles, therefore is a special case of the more general law of cosines, valid for arbitrary triangles.
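The displayed distance formula appears to have been stripped above; with points (r1, θ1) and (r2, θ2) and the substitutions x = r cos θ, y = r sin θ, a reconstruction in LaTeX reads:

\[
s^{2} = r_{1}^{2} + r_{2}^{2} - 2 r_{1} r_{2}\cos(\theta_{1} - \theta_{2}),
\]

which is exactly the law of cosines discussed in the paragraph above.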
Pythagorean trigonometric identity
In a right triangle with sides , and hypotenuse , trigonometry determines the sine and cosine of the angle between side and the hypotenuse as:
From that it follows:
where the last step applies Pythagoras' theorem. This relation between sine and cosine is sometimes called the fundamental Pythagorean trigonometric identity. In similar triangles, the ratios of the sides are the same regardless of the size of the triangles, and depend upon the angles. Consequently, in the figure, the triangle with hypotenuse of unit size has opposite side of size and adjacent side of size in units of the hypotenuse.
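The stripped identities can be reconstructed as follows, with θ the angle between side a and the hypotenuse c:

\[
\cos\theta = \frac{a}{c}, \quad \sin\theta = \frac{b}{c}
\;\Longrightarrow\;
\sin^{2}\theta + \cos^{2}\theta = \frac{a^{2} + b^{2}}{c^{2}} = 1.
\]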
Relation to the cross product
The Pythagorean theorem relates the cross product and dot product in a similar way:
This can be seen from the definitions of the cross product and dot product, as
with n a unit vector normal to both a and b. The relationship follows from these definitions and the Pythagorean trigonometric identity.
This can also be used to define the cross product. By rearranging the following equation is obtained
This can be considered as a condition on the cross product and so part of its definition, for example in seven dimensions.
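The displayed relation referred to above can be written out; in LaTeX it is:

\[
\|\mathbf{a}\times\mathbf{b}\|^{2} + (\mathbf{a}\cdot\mathbf{b})^{2}
= \|\mathbf{a}\|^{2}\,\|\mathbf{b}\|^{2},
\]

which follows because ‖a × b‖ = ‖a‖‖b‖|sin θ| and a · b = ‖a‖‖b‖cos θ, together with the Pythagorean trigonometric identity.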
As an axiom
If the first four of the Euclidean geometry axioms are assumed to be true then the Pythagorean theorem is equivalent to the fifth. That is, Euclid's fifth postulate implies the Pythagorean theorem and vice-versa.
Generalizations
Similar figures on the three sides
The Pythagorean theorem generalizes beyond the areas of squares on the three sides to any similar figures. This was known by Hippocrates of Chios in the 5th century BC, and was included by Euclid in his Elements:
If one erects similar figures (see Euclidean geometry) with corresponding sides on the sides of a right triangle, then the sum of the areas of the ones on the two smaller sides equals the area of the one on the larger side.
This extension assumes that the sides of the original triangle are the corresponding sides of the three congruent figures (so the common ratios of sides between the similar figures are a:b:c). While Euclid's proof only applied to convex polygons, the theorem also applies to concave polygons and even to similar figures that have curved boundaries (but still with part of a figure's boundary being the side of the original triangle).
The basic idea behind this generalization is that the area of a plane figure is proportional to the square of any linear dimension, and in particular is proportional to the square of the length of any side. Thus, if similar figures with areas , and are erected on sides with corresponding lengths , and then:
But, by the Pythagorean theorem, , so .
Conversely, if we can prove that for three similar figures without using the Pythagorean theorem, then we can work backwards to construct a proof of the theorem. For example, the starting center triangle can be replicated and used as a triangle on its hypotenuse, and two similar right triangles ( and ) constructed on the other two sides, formed by dividing the central triangle by its altitude. The sum of the areas of the two smaller triangles therefore is that of the third, thus and reversing the above logic leads to the Pythagorean theorem . ( | Mathematics | Geometry | null |
31156672 | https://en.wikipedia.org/wiki/Supermoon | Supermoon | A supermoon is a full moon or a new moon that nearly coincides with perigee—the closest that the Moon comes to the Earth in its orbit—resulting in a slightly larger-than-usual apparent size of the lunar disk as viewed from Earth. The technical name is a perigee syzygy (of the Earth–Moon–Sun system) or a full (or new) Moon around perigee. Because the term supermoon is astrological in origin, it has no precise astronomical definition.
The association of the Moon with both oceanic and crustal tides has led to claims that the supermoon phenomenon may be associated with increased risk of events like earthquakes and volcanic eruptions, but no such link has been found.
The opposite phenomenon, an apogee syzygy or a full (or new) Moon around apogee, has been called a micromoon.
Definitions
The name supermoon was coined by astrologer Richard Nolle in 1979 in Dell Horoscope magazine, where he gave it an arbitrary definition.
He came up with the name while reading Strategic Role Of Perigean Spring Tides in Nautical History and Coastal Flooding published in 1976 by Fergus Wood, a hydrologist with NOAA. Nolle explained in 2011 that he based calculations on 90% of the difference in lunar apsis extremes for the solar year. In other words, a full or new moon is considered a supermoon if where is the lunar distance at syzygy, is the lunar distance at the greatest apogee of the year, and is the lunar distance at the smallest perigee of the year.
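The defining inequality appears to have been stripped from the sentence above; a plausible reconstruction, consistent with the 90% criterion described, is:

\[
d \le d_{\min} + 0.1\,(d_{\max} - d_{\min}),
\]

where d is the lunar distance at syzygy, d_max is the lunar distance at the greatest apogee of the year, and d_min is the lunar distance at the smallest perigee of the year; in other words, the Moon must lie within the closest 10% of its yearly range of distances.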
In practice, there is no official or even consistent definition of how near perigee the full Moon must occur to receive the supermoon label, and new moons rarely receive a supermoon label. Different sources give different definitions.
The term perigee-syzygy or perigee full/new moon is preferred in the scientific community. Perigee is the point at which the Moon is closest in its orbit to the Earth, and syzygy is when the Earth, the Moon and the Sun are aligned, which happens at every full or new moon. Astrophysicist Fred Espenak uses Nolle's definition but preferring the label of full Moon at perigee, and using the apogee and perigee nearest in time rather than the greatest and least of the year. Wood used the definition of a full or new moon occurring within 24 hours of perigee and also used the label perigee-syzygy.
Wood also coined the less used term proxigee where perigee and the full or new moon are separated by 10 hours or less.
Nolle has also added the concept of extreme supermoon in 2000 describing the concept as any new or full moons that are at "100% or greater of the mean perigee".
Occurrence
Of the possible 12 or 13 full (or new) moons each year, usually three or four may be classified as supermoons, as commonly defined.
The most recent full supermoon occurred on November 15, 2024, and the next one will be on October 7, 2025.
The supermoon of November 14, 2016, was the closest full occurrence since January 26, 1948, and will not be surpassed until November 25, 2034.
The closest full supermoon of the 21st century will occur on December 6, 2052.
The oscillating nature of the distance to the full or new moon is due to the difference between the synodic and anomalistic months. The period of this oscillation is about 14 synodic months, which is close to 15 anomalistic months. Thus every 14 lunations there is a full moon nearest to perigee.
Occasionally, a supermoon coincides with a total lunar eclipse. The most recent occurrence of this by any definition was in May 2022, and the next occurrence will be in October 2032.
In the Islamic calendar, the occurrence of full supermoons follows a seven-year cycle. In the first year, the full moon is near perigee in month 1 or 2, the next year in month 3 or 4, and so on. In the seventh year of the cycle the full moons are never very near to perigee. Approximately every 20 years the occurrences move to one month earlier. At present such a transition is occurring, so full supermoons occur twice in succession. For example, in Hijri year 1446, they occur both in month 3 (on September 18, 2024) and in month 4 (on October 17, 2024).
Appearance
A full moon at perigee appears roughly 14% larger in diameter than at apogee. Many observers insist that the Moon looks bigger to them. This is likely due to observations shortly after sunset when the Moon appears near the horizon and the Moon illusion is at its most apparent.
While the Moon's surface luminance remains the same, because it is closer to the Earth the illuminance is about 30% brighter than at its farthest point, or apogee. This is due to the inverse square law of light which changes the amount of light received on Earth in inverse proportion to the distance from the Moon. A supermoon directly overhead could provide up to .
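A quick Python check of the size and brightness figures, assuming representative extreme Earth–Moon distances of about 356,500 km (perigee) and 406,700 km (apogee); these distances are illustrative assumptions, not values from the article:

perigee_km = 356_500
apogee_km = 406_700

size_ratio = apogee_km / perigee_km                 # apparent diameter scales as 1/distance
brightness_ratio = (apogee_km / perigee_km) ** 2    # illuminance follows the inverse square law

print(f"apparent diameter: {100 * (size_ratio - 1):.0f}% larger")        # ~14%
print(f"illuminance:       {100 * (brightness_ratio - 1):.0f}% brighter") # ~30%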
Effects on Earth
Claims that supermoons can cause natural disasters, and the claim of Nolle that supermoons cause "geophysical stress", have been refuted by scientists.
Despite lack of scientific evidence, there has been media speculation that natural disasters, such as the 2011 Tōhoku earthquake and tsunami and the 2004 Indian Ocean earthquake and tsunami, are causally linked with the 1–2-week period surrounding a supermoon. A large, 7.5 magnitude earthquake centred 15 km north-east of Culverden, New Zealand at 00:03 NZDT on November 14, 2016, also coincided with a supermoon.
The Tehran earthquake on May 8, 2020, also coincided with a supermoon.
Scientists have confirmed that the combined effect of the Sun and Moon on the Earth's oceans, the tide, is greatest when the Moon is either new or full, and that during lunar perigee, the tidal force is somewhat stronger, resulting in perigean spring tides. However, even at its most powerful, this force is still relatively weak, causing tidal differences of inches at most.
Super Blood Moon
Total lunar eclipses which fall on supermoon and micromoon days are relatively rare. In the 21st century, there are 87 total lunar eclipses, of which 28 are supermoons and 6 are micromoons. Almost all total lunar eclipses in Lunar Saros 129 are micromoon eclipses. An example of a supermoon lunar eclipse is the September 2015 lunar eclipse.
The Super Blood Moon is an astronomical event that combines two phenomena: a supermoon and a total lunar eclipse, resulting in a larger, brighter, and reddish-colored Moon.
A total lunar eclipse takes place when the Earth aligns between the Sun and the Moon, causing Earth’s shadow to fall on the Moon. As the shadow covers the Moon, sunlight passing through Earth's atmosphere scatters, filtering out most blue light and casting a reddish hue on the Moon. This phenomenon is often called a blood moon because of its striking red or orange color.
When these two events coincide, the Moon appears both larger and redder than usual, leading to the term Super Blood Moon. This unique alignment creates a visually impressive and rare sight that has inspired folklore and intrigue for centuries.
Super Blood Moons are relatively infrequent, occurring about once every few years, making them a notable event for astronomers and skywatchers alike.
Annular solar eclipses
Annular solar eclipses occur when the Moon's apparent diameter is smaller than the Sun's. Almost all annular solar eclipses between 1880 and 2060 in Solar Saros 144 and almost all annular solar eclipses between 1940 and 2120 in Solar Saros 128 are micromoon annular solar eclipses.
| Physical sciences | Celestial mechanics | Astronomy |
25110709 | https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance | Nuclear magnetic resonance | Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are disturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. High-resolution nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). The original application of NMR to condensed matter physics is nowadays mostly devoted to strongly correlated electron systems. It reveals large many-body couplings by fast broadband detection and should not be confused with solid state NMR, which aims at removing the effect of the same couplings by Magic Angle Spinning techniques.
The most commonly used nuclei are 1H and 13C, although isotopes of many other elements can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic angular momentum and nuclear magnetic dipole moment. This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore not NMR-active.
In its application to molecules the NMR effect can be observed only in the presence of a static magnetic field. However, in the ordered phases of magnetic materials, very large internal fields are produced at the nuclei of magnetic ions (and of close ligands), which allow NMR to be performed in zero applied field. Additionally, radio-frequency transitions of nuclear spin I > 1/2 with large enough electric quadrupolar coupling to the electric field gradient at the nucleus may also be excited in zero applied magnetic field (nuclear quadrupole resonance).
In the dominant chemistry application, the use of higher fields improves the sensitivity of the method (signal-to-noise ratio scales approximately as the 3/2 power of the magnetic field strength) and the spectral resolution. Commercial NMR spectrometers employing liquid helium cooled superconducting magnets with fields of up to 28 tesla have been developed and are widely used.
It is a key feature of NMR that the resonance frequency of nuclei in a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. This effect serves as the basis of magnetic resonance imaging.
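As a rough illustration of this proportionality, the resonance (Larmor) frequency is the product of the field strength and the nucleus's gyromagnetic ratio; the Python sketch below uses commonly quoted approximate values for 1H and 13C, and the helper names are hypothetical:

# Gyromagnetic ratios expressed in MHz per tesla (approximate, standard values).
GAMMA_BAR_MHZ_PER_T = {
    "1H": 42.577,   # proton
    "13C": 10.708,  # carbon-13
}

def larmor_frequency_mhz(nucleus: str, b0_tesla: float) -> float:
    """Resonance frequency in MHz for a nucleus in a static field B0 (in tesla)."""
    return GAMMA_BAR_MHZ_PER_T[nucleus] * b0_tesla

print(larmor_frequency_mhz("1H", 1.5))    # ~63.9 MHz (a typical clinical MRI field)
print(larmor_frequency_mhz("1H", 14.1))   # ~600 MHz (a "600 MHz" NMR spectrometer)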
The principle of NMR usually involves three sequential steps:
The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0.
The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio frequency (RF) pulse. The oscillation frequency required for significant perturbation is dependent upon the static magnetic field (B0) and the nuclei of observation.
The detection of the NMR signal during or after the RF pulse, due to the voltage induced in a detection coil by precession of the nuclear spins around B0. After an RF pulse, precession usually occurs with the nuclei's Larmor frequency and, in itself, does not involve transitions between spin states or energy levels.
The two magnetic fields are usually chosen to be perpendicular to each other as this maximizes the NMR signal strength. The frequencies of the time-signal response by the total magnetization (M) of the nuclear spins are analyzed in NMR spectroscopy and magnetic resonance imaging. Both use applied magnetic fields (B0) of great strength, usually produced by large currents in superconducting coils, in order to achieve dispersion of response frequencies and of very high homogeneity and stability in order to deliver spectral resolution, the details of which are described by chemical shifts, the Zeeman effect, and Knight shifts (in metals). The information provided by NMR can also be increased using hyperpolarization, and/or using two-dimensional, three-dimensional and higher-dimensional techniques.
NMR phenomena are also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's magnetic field (referred to as Earth's field NMR), and in several types of magnetometers.
History
Nuclear magnetic resonance was first described and measured in molecular beams by Isidor Rabi in 1938, by extending the Stern–Gerlach experiment, and in 1944, Rabi was awarded the Nobel Prize in Physics for this work. In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952.
Russell H. Varian filed the patent "Method and means for correlating nuclear properties of atoms and magnetic fields" on October 21, 1948; it was accepted on July 24, 1951. Varian Associates developed the first commercial NMR unit, the NMR HR-30, in 1952.
Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology's Radiation Laboratory. His work during that project on the production and detection of radio frequency power and on the absorption of such RF power by matter laid the foundation for his discovery of NMR in bulk matter.
Rabi, Bloch, and Purcell observed that magnetic nuclei could absorb RF energy when placed in a magnetic field and when the RF was of a frequency specific to the identity of the nuclei. When this absorption occurs, the nucleus is described as being in resonance. Different atomic nuclei within a molecule resonate at different (radio) frequencies in the same applied static magnetic field, due to various local magnetic fields. The observation of such magnetic resonance frequencies of the nuclei present in a molecule makes it possible to determine essential chemical and structural information about the molecule.
The improvements of the NMR method benefited from the development of electromagnetic technology and advanced electronics and their introduction into civilian use. Originally as a research tool it was limited primarily to dynamic nuclear polarization, by the work of Anatole Abragam and Albert Overhauser, and to condensed matter physics, where it produced one of the first demonstrations of the validity of the BCS theory of superconductivity by the observation by Charles Slichter of the Hebel-Slichter effect. It soon showed its potential in organic chemistry, where NMR has become indispensable, and by the 1990s improvement in the sensitivity and resolution of NMR spectroscopy resulted in its broad use in analytical chemistry, biochemistry and materials science.
In the 2020s zero- to ultralow-field nuclear magnetic resonance (ZULF NMR), a form of spectroscopy that provides abundant analytical results without the need for large magnetic fields, was developed. It is combined with a special technique that makes it possible to hyperpolarize atomic nuclei.
Theory of nuclear magnetic resonance
Nuclear spins and magnets
All nucleons, that is neutrons and protons, composing any atomic nucleus, have the intrinsic quantum property of spin, an intrinsic angular momentum analogous to the classical angular momentum of a spinning sphere. The overall spin of the nucleus is determined by the spin quantum number S. If the numbers of both the protons and neutrons in a given nuclide are even then S = 0, i.e. there is no overall spin. Then, just as electrons pair up in nondegenerate atomic orbitals, so do even numbers of protons or even numbers of neutrons (both of which are also spin-1/2 particles and hence fermions), giving zero overall spin.
However, an unpaired proton and unpaired neutron will have a lower energy when their spins are parallel, not anti-parallel. This parallel spin alignment of distinguishable particles does not violate the Pauli exclusion principle. The lowering of energy for parallel spins has to do with the quark structure of these two nucleons. As a result, the spin ground state for the deuteron (the nucleus of deuterium, the 2H isotope of hydrogen), which has only a proton and a neutron, corresponds to a spin value of 1, not of zero. On the other hand, because of the Pauli exclusion principle, the tritium isotope of hydrogen must have a pair of anti-parallel spin neutrons (of total spin zero for the neutron spin-pair), plus a proton of spin 1/2. Therefore, the tritium total nuclear spin value is again 1/2, just like the simpler, abundant hydrogen isotope, 1H nucleus (the proton). The NMR absorption frequency for tritium is also similar to that of 1H. In many other cases of non-radioactive nuclei, the overall spin is also non-zero and may have a contribution from the orbital angular momentum of the unpaired nucleon. For example, the nucleus has an overall spin value .
A non-zero spin is associated with a non-zero magnetic dipole moment, μ, via the relation μ = γS, where γ is the gyromagnetic ratio. Classically, this corresponds to the proportionality between the angular momentum and the magnetic dipole moment of a spinning charged sphere, both of which are vectors parallel to the rotation axis whose length increases proportional to the spinning frequency. It is the magnetic moment and its interaction with magnetic fields that allows the observation of NMR signal associated with transitions between nuclear spin levels during resonant RF irradiation or caused by Larmor precession of the average magnetic moment after resonant irradiation. Nuclides with even numbers of both protons and neutrons have zero nuclear magnetic dipole moment and hence do not exhibit NMR signal. For instance, is an example of a nuclide that produces no NMR signal, whereas , , and are nuclides that do exhibit NMR spectra. The last two nuclei have spin S > 1/2 and are therefore quadrupolar nuclei.
Electron spin resonance (ESR) is a related technique in which transitions between electronic rather than nuclear spin levels are detected. The basic principles are similar but the instrumentation, data analysis, and detailed theory are significantly different. Moreover, there is a much smaller number of molecules and materials with unpaired electron spins that exhibit ESR (or electron paramagnetic resonance (EPR)) absorption than those that have NMR absorption spectra. On the other hand, ESR has much higher signal per spin than NMR does.
Values of spin angular momentum
Nuclear spin is an intrinsic angular momentum that is quantized. This means that the magnitude of this angular momentum is quantized (i.e. S can only take on a restricted range of values), and also that the x, y, and z-components of the angular momentum are quantized, being restricted to integer or half-integer multiples of ħ, the reduced Planck constant. The integer or half-integer quantum number associated with the spin component along the z-axis or the applied magnetic field is known as the magnetic quantum number, m, and can take values from +S to −S, in integer steps. Hence for any given nucleus, there are a total of 2S + 1 angular momentum states.
The z-component of the angular momentum vector (Sz) is therefore Sz = mħ. The z-component of the magnetic moment is simply μz = γSz = γmħ.
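A minimal numerical sketch of these relations (the spin value S = 3/2 and the gyromagnetic ratio used below are arbitrary example inputs, not values taken from the text):

```python
from fractions import Fraction

hbar = 1.054571817e-34  # reduced Planck constant, J*s

def spin_states(S, gamma):
    """List the 2S+1 magnetic quantum numbers m = +S ... -S and the
    corresponding z-components S_z = m*hbar and mu_z = gamma*m*hbar."""
    S = Fraction(S)
    n = int(2 * S + 1)               # number of angular momentum states
    ms = [S - k for k in range(n)]   # m runs from +S down to -S in integer steps
    return [(m, float(m) * hbar, gamma * float(m) * hbar) for m in ms]

# Example: a spin-3/2 nucleus with an arbitrary gyromagnetic ratio
for m, Sz, mu_z in spin_states("3/2", gamma=7.0e7):
    print(f"m = {m}:  S_z = {Sz:.3e} J*s,  mu_z = {mu_z:.3e} J/T")
```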
Spin energy in a magnetic field
Consider nuclei with a spin of one-half, such as 1H or 13C. Each nucleus has two linearly independent spin states, with m = 1/2 or m = −1/2 (also referred to as spin-up and spin-down, or sometimes α and β spin states, respectively) for the z-component of spin. In the absence of a magnetic field, these states are degenerate; that is, they have the same energy. Hence the number of nuclei in these two states will be essentially equal at thermal equilibrium.
If a nucleus with spin 1/2 is placed in a magnetic field, however, the two states no longer have the same energy as a result of the interaction between the nuclear magnetic dipole moment and the external magnetic field. The energy of a magnetic dipole moment μ in a magnetic field B0 is given by:
E = −μ·B0.
Usually the z-axis is chosen to be along B0, and the above expression reduces to:
E = −μz B0,
or alternatively:
E = −γmħB0.
As a result, the different nuclear spin states have different energies in a non-zero magnetic field. In less formal language, we can talk about the two spin states of a spin-1/2 nucleus as being aligned either with or against the magnetic field. If γ is positive (true for most isotopes used in NMR) then m = 1/2 ("spin up") is the lower energy state.
The energy difference between the two states is:
ΔE = γħB0,
and this results in a small population bias favoring the lower energy state in thermal equilibrium. With more spins pointing up than down, a net spin magnetization along the magnetic field B0 results.
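A short numerical sketch of this population bias, using the textbook relations ΔE = ħγB0 and the Boltzmann distribution; the 9.4 T field and 298 K temperature are example inputs, and γ is the proton value:

```python
import math

hbar = 1.054571817e-34      # J*s
kB = 1.380649e-23           # J/K
gamma_1H = 2.6752218744e8   # proton gyromagnetic ratio, rad s^-1 T^-1

B0 = 9.4    # tesla (roughly a "400 MHz" magnet), example value
T = 298.0   # kelvin

larmor_hz = gamma_1H * B0 / (2 * math.pi)   # Larmor frequency nu0
delta_E = hbar * gamma_1H * B0              # energy gap between m = +1/2 and m = -1/2
ratio = math.exp(-delta_E / (kB * T))       # N(upper)/N(lower) at thermal equilibrium
excess = (1 - ratio) / (1 + ratio)          # fractional population excess in the lower state

print(f"Larmor frequency: {larmor_hz/1e6:.1f} MHz")
print(f"Delta E: {delta_E:.3e} J")
print(f"Fractional spin excess: {excess:.2e}")   # ~3e-5: the tiny bias behind the NMR signal
```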
Precession of the spin magnetization
A central concept in NMR is the precession of the spin magnetization around the magnetic field at the nucleus, with the angular frequency ω = γB, where ω = 2πν relates to the oscillation frequency ν and B is the magnitude of the field. This means that the spin magnetization, which is proportional to the sum of the spin vectors of nuclei in magnetically equivalent sites (the expectation value of the spin vector in quantum mechanics), moves on a cone around the B field. This is analogous to the precessional motion of the axis of a tilted spinning top around the gravitational field. In quantum mechanics, ω is the Bohr frequency of the ⟨Sx⟩ and ⟨Sy⟩ expectation values. Precession of non-equilibrium magnetization in the applied magnetic field B0 occurs with the Larmor frequency νL = γB0/2π, without change in the populations of the energy levels because energy is constant (time-independent Hamiltonian).
Magnetic resonance and radio-frequency pulses
A perturbation of nuclear spin orientations from equilibrium will occur only when an oscillating magnetic field is applied whose frequency νrf sufficiently closely matches the Larmor precession frequency νL of the nuclear magnetization. The populations of the spin-up and -down energy levels then undergo Rabi oscillations, which are analyzed most easily in terms of precession of the spin magnetization around the effective magnetic field in a reference frame rotating with the frequency νrf. The stronger the oscillating field, the faster the Rabi oscillations or the precession around the effective field in the rotating frame. After a certain time on the order of 2–1000 microseconds, a resonant RF pulse flips the spin magnetization to the transverse plane, i.e. it makes an angle of 90° with the constant magnetic field B0 ("90° pulse"), while after a twice longer time, the initial magnetization has been inverted ("180° pulse"). It is the transverse magnetization generated by a resonant oscillating field which is usually detected in NMR, during application of the relatively weak RF field in old-fashioned continuous-wave NMR, or after the relatively strong RF pulse in modern pulsed NMR.
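A brief sketch of how pulse length relates to RF field strength in the simple on-resonance nutation picture; the 25 kHz nutation frequency is an arbitrary example value:

```python
import math

gamma_1H = 2.6752218744e8  # proton gyromagnetic ratio, rad s^-1 T^-1

# Example: an RF field B1 giving a 25 kHz nutation (Rabi) frequency
nu1 = 25e3                        # Hz, nutation frequency gamma*B1/(2*pi)
B1 = 2 * math.pi * nu1 / gamma_1H

t90 = 1 / (4 * nu1)   # time to tip the magnetization by 90 degrees (a quarter turn)
t180 = 2 * t90        # inversion pulse is twice as long

print(f"B1 = {B1*1e6:.1f} microtesla")
print(f"90-degree pulse: {t90*1e6:.1f} us, 180-degree pulse: {t180*1e6:.1f} us")
```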
Chemical shielding
It might appear from the above that all nuclei of the same nuclide (and hence the same γ) would resonate at exactly the same frequency but this is not the case. The most important perturbation of the NMR frequency for applications of NMR is the "shielding" effect of the shells of electrons surrounding the nucleus. Electrons, similar to the nucleus, are also charged and rotate with a spin to produce a magnetic field opposite to the applied magnetic field. In general, this electronic shielding reduces the magnetic field at the nucleus (which is what determines the NMR frequency). As a result, the frequency required to achieve resonance is also reduced.
This shift in the NMR frequency due to the electronic molecular orbital coupling to the external magnetic field is called chemical shift, and it explains why NMR is able to probe the chemical structure of molecules, which depends on the electron density distribution in the corresponding molecular orbitals. If a nucleus in a specific chemical group is shielded to a higher degree by a higher electron density of its surrounding molecular orbitals, then its NMR frequency will be shifted "upfield" (that is, a lower chemical shift), whereas if it is less shielded by such surrounding electron density, then its NMR frequency will be shifted "downfield" (that is, a higher chemical shift).
Unless the local symmetry of such molecular orbitals is very high (leading to "isotropic" shift), the shielding effect will depend on the orientation of the molecule with respect to the external field (B0). In solid-state NMR spectroscopy, magic angle spinning is required to average out this orientation dependence in order to obtain frequency values at the average or isotropic chemical shifts. This is unnecessary in conventional NMR investigations of molecules in solution, since rapid "molecular tumbling" averages out the chemical shift anisotropy (CSA). In this case, the "average" chemical shift (ACS) or isotropic chemical shift is often simply referred to as the chemical shift.
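A small sketch of how a frequency offset is converted to a chemical shift in ppm; the spectrometer frequency and offset below are made-up example numbers:

```python
def chemical_shift_ppm(nu_sample_hz, nu_reference_hz):
    """Chemical shift delta in ppm relative to a reference resonance."""
    return (nu_sample_hz - nu_reference_hz) / nu_reference_hz * 1e6

# Example (made-up numbers): on a 500 MHz spectrometer a proton resonates
# 3600 Hz downfield of the reference line.
nu_ref = 500.0e6          # reference resonance frequency, Hz
nu_obs = nu_ref + 3600.0  # observed resonance, Hz

print(f"delta = {chemical_shift_ppm(nu_obs, nu_ref):.2f} ppm")  # 7.20 ppm
```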
Radiation damping
In 1949, Suryan first suggested that the interaction between a radiofrequency coil and a sample's bulk magnetization could explain why experimental observations of relaxation times differed from theoretical predictions. Building on this idea, Bloembergen and Pound further developed Suryan's hypothesis by mathematically integrating the Maxwell–Bloch equations, a process through which they introduced the concept of "radiation damping."
Radiation damping (RD) in Nuclear Magnetic Resonance (NMR) is an intrinsic phenomenon observed in many high-field NMR experiments, especially relevant in systems with high concentrations of nuclei like protons or fluorine. RD occurs when transverse bulk magnetization from the sample, following a radio frequency pulse, induces an electromotive force (emf) in the receiver coil of the NMR spectrometer. This generates an oscillating current and a non-linear induced transverse magnetic field which returns the spin system to equilibrium faster than other mechanisms of relaxation.
RD can result in line broadening and measurement of a shorter spin-lattice relaxation time (T1). For instance, a sample of water in a 400 MHz NMR spectrometer will have a radiation damping time constant of around 20 ms, whereas its intrinsic T1 is hundreds of milliseconds. This effect is often described using modified Bloch equations that include terms for radiation damping alongside the conventional relaxation terms. The longitudinal relaxation time of radiation damping (TRD) is given by equation [1].
TRD = 2 / (γ μ0 M0 η Q), with Q = ω0 L / r    [1]
where γ is the gyromagnetic ratio, μ0 is the magnetic permeability, M0 is the equilibrium magnetization per unit volume, η is the filling factor of the probe (the ratio of the probe coil volume to the sample volume enclosed), Q is the quality factor of the probe, and ω0, L, and r are the resonance frequency, inductance, and resistance of the coil, respectively. The line broadening due to radiation damping can be quantified by measuring TRD and using equation [2].
Δν = 1 / (π TRD)    [2]
Radiation damping in NMR is influenced significantly by system parameters. It is notably more prominent in systems where the NMR probe possesses a high quality factor (Q) and a high filling factor (η), resulting in a strong coupling between the probe coil and the sample. The phenomenon is also impacted by the concentration of the nuclei within the sample and their magnetic moments, which can intensify the effects of radiation damping. The lifetime of RD is inversely proportional to the strength of the magnetic field. The impact of radiation damping on NMR signals is multifaceted. It can accelerate the decay of the NMR signal faster than intrinsic relaxation processes would suggest. This acceleration can complicate the interpretation of NMR spectra by causing broadening of spectral lines, distorting multiplet structures, and introducing artifacts, especially in high-resolution NMR scenarios. Such effects make it challenging to obtain clear and accurate data without considering the influence of radiation damping.
To mitigate these effects, various strategies are employed in NMR spectroscopy, based mainly on hardware modifications or on pulse-sequence design. Hardware modifications, including RF feed-circuits and Q-factor switches, reduce the feedback loop between the sample magnetization and the electromagnetic field induced by the coil. Other approaches, such as specially designed selective pulse sequences, also effectively manage the fields induced by radiation damping. These approaches aim to control and limit the disruptive effects of radiation damping during NMR experiments and can suppress RD to a fairly large extent.
Overall, understanding and managing radiation damping is crucial for obtaining high-quality NMR data, especially in modern high-field spectrometers where the effects can be significant due to the increased sensitivity and resolution.
Relaxation
The process of population relaxation refers to nuclear spins that return to thermodynamic equilibrium in the magnet. This process is also called T1, "spin-lattice" or "longitudinal magnetic" relaxation, where T1 refers to the mean time for an individual nucleus to return to its thermal equilibrium state of the spins. After the nuclear spin population has relaxed, it can be probed again, since it is in the initial, equilibrium (mixed) state.
The precessing nuclei can also fall out of alignment with each other and gradually stop producing a signal. This is called T2, "spin-spin" or transverse relaxation. Because of the difference in the actual relaxation mechanisms involved (for example, intermolecular versus intramolecular magnetic dipole-dipole interactions), T1 is usually (except in rare cases) longer than T2 (that is, slower spin-lattice relaxation, for example because of smaller dipole-dipole interaction effects). In practice, the value of T2*, which is the actually observed decay time of the observed NMR signal, or free induction decay (to 1/e of the initial amplitude immediately after the resonant RF pulse), also depends on the static magnetic field inhomogeneity, which may be quite significant. (There is also a smaller but significant contribution to the observed FID shortening from the RF inhomogeneity of the resonant pulse). In the corresponding FT-NMR spectrum—meaning the Fourier transform of the free induction decay—the width of the NMR signal in frequency units is inversely related to the T2* time. Thus, a nucleus with a long T2* relaxation time gives rise to a very sharp NMR peak in the FT-NMR spectrum for a very homogeneous ("well-shimmed") static magnetic field, whereas nuclei with shorter T2* values give rise to broad FT-NMR peaks even when the magnet is shimmed well. Both T1 and T2 depend on the rate of molecular motions as well as the gyromagnetic ratios of both the resonating and their strongly interacting, next-neighbor nuclei that are not at resonance.
A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the 180° pulse. In simple cases, an exponential decay is measured which is described by the T2 time.
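A short numerical sketch of the linewidth relation for a Lorentzian line (FWHM ≈ 1/(πT2*)) and of an idealized Hahn echo decay; all values are example inputs:

```python
import numpy as np

# Lorentzian linewidth from T2*: full width at half maximum = 1/(pi * T2*)
T2_star = 0.5  # s, example value for a well-shimmed sample
fwhm_hz = 1 / (np.pi * T2_star)
print(f"T2* = {T2_star} s  ->  linewidth ~ {fwhm_hz:.2f} Hz")

# Hahn echo experiment: echo amplitude vs. pulse spacing tau decays with the true T2
T2 = 1.2  # s, example value
tau = np.linspace(0.05, 2.0, 8)    # half-echo delays, s
echo = np.exp(-2 * tau / T2)       # idealized echo amplitude at time 2*tau
for t, e in zip(tau, echo):
    print(f"2*tau = {2*t:.2f} s   echo amplitude = {e:.3f}")
```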
NMR spectroscopy
NMR spectroscopy is one of the principal techniques used to obtain physical, chemical, electronic and structural information about molecules due to the chemical shift of the resonance frequencies of the nuclear spins in the sample. Peak splittings due to J- or dipolar couplings between nuclei are also useful. NMR spectroscopy can provide detailed and quantitative information on the functional groups, topology, dynamics and three-dimensional structure of molecules in solution and the solid state. Since the area under an NMR peak is usually proportional to the number of spins involved, peak integrals can be used to determine composition quantitatively.
Structure and molecular dynamics can be studied (with or without "magic angle" spinning (MAS)) by NMR of quadrupolar nuclei (that is, those with spin S > 1/2) even in the presence of magnetic "dipole-dipole" interaction broadening (or simply, dipolar broadening), which is always much smaller than the quadrupolar interaction strength because it is a magnetic vs. an electric interaction effect.
Additional structural and chemical information may be obtained by performing double-quantum NMR experiments for pairs of spin-1/2 or quadrupolar nuclei. Furthermore, nuclear magnetic resonance is one of the techniques that has been used to design quantum automata, and also build elementary quantum computers.
Continuous-wave (CW) spectroscopy
In the first few decades of nuclear magnetic resonance, spectrometers used a technique known as continuous-wave (CW) spectroscopy, where the transverse spin magnetization generated by a weak oscillating magnetic field is recorded as a function of the oscillation frequency or static field strength B0. When the oscillation frequency matches the nuclear resonance frequency, the transverse magnetization is maximized and a peak is observed in the spectrum. Although NMR spectra could be, and have been, obtained using a fixed constant magnetic field and sweeping the frequency of the oscillating magnetic field, it was more convenient to use a fixed frequency source and vary the current (and hence magnetic field) in an electromagnet to observe the resonant absorption signals. This is the origin of the counterintuitive, but still common, "high field" and "low field" terminology for low frequency and high frequency regions, respectively, of the NMR spectrum.
As of 1996, CW instruments were still used for routine work because the older instruments were cheaper to maintain and operate, often operating at 60 MHz with correspondingly weaker (non-superconducting) electromagnets cooled with water rather than liquid helium. One radio coil operated continuously, sweeping through a range of frequencies, while another orthogonal coil, designed not to receive radiation from the transmitter, received signals from nuclei that reoriented in solution. As of 2014, low-end refurbished 60 MHz and 90 MHz systems were sold as FT-NMR instruments, and in 2010 the "average workhorse" NMR instrument was configured for 300 MHz.
CW spectroscopy is inefficient in comparison with Fourier analysis techniques (see below) since it probes the NMR response at individual frequencies or field strengths in succession. Since the NMR signal is intrinsically weak, the observed spectrum suffers from a poor signal-to-noise ratio. This can be mitigated by signal averaging, i.e. adding the spectra from repeated measurements. While the NMR signal is the same in each scan and so adds linearly, the random noise adds more slowly – proportional to the square root of the number of spectra added (see random walk). Hence the overall signal-to-noise ratio increases as the square-root of the number of spectra measured. However, monitoring an NMR signal at a single frequency as a function of time may be better suited for kinetic studies than pulsed Fourier-transform NMR spectroscopy.
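A quick numerical illustration of this square-root law for signal averaging (synthetic data; the signal and noise levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0        # arbitrary peak amplitude
noise_sigma = 5.0        # noise much larger than the signal in a single scan

for n_scans in (1, 16, 256, 4096):
    scans = true_signal + noise_sigma * rng.standard_normal(n_scans)
    averaged = scans.mean()
    snr = true_signal / (noise_sigma / np.sqrt(n_scans))  # expected SNR grows as sqrt(N)
    print(f"{n_scans:5d} scans: average = {averaged:7.3f},  expected SNR = {snr:6.1f}")
```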
Fourier-transform spectroscopy
Most applications of NMR involve full NMR spectra, that is, the intensity of the NMR signal as a function of frequency. Early attempts to acquire the NMR spectrum more efficiently than simple CW methods involved illuminating the target simultaneously with more than one frequency. A revolution in NMR occurred when short radio-frequency pulses began to be used, with a frequency centered at the middle of the NMR spectrum. In simple terms, a short pulse of a given "carrier" frequency "contains" a range of frequencies centered about the carrier frequency, with the range of excitation (bandwidth) being inversely proportional to the pulse duration, i.e. the Fourier transform of a short pulse contains contributions from all the frequencies in the neighborhood of the principal frequency. The restricted range of the NMR frequencies for most light spin-1/2 nuclei made it relatively easy to use short (1–100 microsecond) radio frequency pulses to excite the entire NMR spectrum.
Applying such a pulse to a set of nuclear spins simultaneously excites all the single-quantum NMR transitions. In terms of the net magnetization vector, this corresponds to tilting the magnetization vector away from its equilibrium position (aligned along the external magnetic field). The out-of-equilibrium magnetization vector then precesses about the external magnetic field vector at the NMR frequency of the spins. This oscillating magnetization vector induces a voltage in a nearby pickup coil, creating an electrical signal oscillating at the NMR frequency. This signal is known as the free induction decay (FID), and it contains the sum of the NMR responses from all the excited spins. In order to obtain the frequency-domain NMR spectrum (NMR absorption intensity vs. NMR frequency) this time-domain signal (intensity vs. time) must be Fourier transformed. Fortunately, the development of Fourier transform (FT) NMR coincided with the development of digital computers and the digital fast Fourier transform (FFT). Fourier methods can be applied to many types of spectroscopy.
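A schematic sketch of this pulsed-NMR workflow: a synthetic FID containing two resonances is generated and Fourier transformed to recover the spectrum (all parameters are arbitrary example values):

```python
import numpy as np

# Synthesize a free induction decay (FID) containing two resonances at known
# offsets from the carrier, then Fourier transform it to obtain the spectrum.
dt = 1e-4                        # dwell time, s (spectral width 1/dt = 10 kHz)
n = 10000                        # number of points -> 1 Hz digital resolution
t = np.arange(n) * dt
offsets_hz = [440.0, -1250.0]    # example resonance offsets
T2_star = 0.3                    # s, FID decay constant

rng = np.random.default_rng(1)
fid = sum(np.exp(2j * np.pi * f * t) for f in offsets_hz) * np.exp(-t / T2_star)
fid += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # detector noise

spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=dt))

# The two strongest bins should sit at the input offsets
top = np.sort(freqs[np.argsort(np.abs(spectrum))[-2:]])
print("Recovered peak positions (Hz):", top)   # ~[-1250., 440.]
```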
Richard R. Ernst was one of the pioneers of pulsed NMR and won a Nobel Prize in chemistry in 1991 for his work on Fourier Transform NMR and his development of multi-dimensional NMR spectroscopy.
Multi-dimensional NMR spectroscopy
The use of pulses of different durations, frequencies, or shapes in specifically designed patterns or pulse sequences allows production of a spectrum that contains many different types of information about the molecules in the sample. In multi-dimensional nuclear magnetic resonance spectroscopy, there are at least two pulses: one leads to the directly detected signal and the others affect the starting magnetization and spin state prior to it. The full analysis involves repeating the sequence with the pulse timings systematically varied in order to probe the oscillations of the spin system point by point in the time domain. Multidimensional Fourier transformation of the multidimensional time signal yields the multidimensional spectrum. In two-dimensional nuclear magnetic resonance spectroscopy (2D-NMR), there will be one systematically varied time period in the sequence of pulses, which will modulate the intensity or phase of the detected signals. In 3D-NMR, two time periods will be varied independently, and in 4D-NMR, three will be varied.
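A toy numerical sketch of the two-dimensional idea: a signal modulated during an indirectly sampled period t1 and detected during t2 is Fourier transformed along both time axes (all values are arbitrary examples):

```python
import numpy as np

# Toy two-dimensional data set: the signal evolves at f1 during the indirect
# period t1 and at f2 during detection (t2); both frequencies are examples.
f1, f2 = 300.0, -800.0            # Hz
dt1, dt2 = 1e-3, 1e-4             # increments of t1 and t2, s
n1, n2 = 128, 1024
t1 = np.arange(n1)[:, None] * dt1
t2 = np.arange(n2)[None, :] * dt2

signal = (np.exp(2j * np.pi * f1 * t1) * np.exp(2j * np.pi * f2 * t2)
          * np.exp(-t1 / 0.05) * np.exp(-t2 / 0.05))

spec2d = np.fft.fftshift(np.fft.fft2(signal))
F1 = np.fft.fftshift(np.fft.fftfreq(n1, d=dt1))
F2 = np.fft.fftshift(np.fft.fftfreq(n2, d=dt2))

i, j = np.unravel_index(np.argmax(np.abs(spec2d)), spec2d.shape)
print(f"Peak at (F1, F2) = ({F1[i]:.0f} Hz, {F2[j]:.0f} Hz)")  # ~ (300, -800)
```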
There are many such experiments. In some, fixed time intervals allow (among other things) magnetization transfer between nuclei and, therefore, the detection of the kinds of nuclear–nuclear interactions that allowed for the magnetization transfer. Interactions that can be detected are usually classified into two kinds. There are through-bond and through-space interactions. Through-bond interactions relate to structural connectivity of the atoms and provide information about which ones are directly connected to each other, connected by way of a single other intermediate atom, etc. Through-space interactions relate to actual geometric distances and angles, including effects of dipolar coupling and the nuclear Overhauser effect.
Although the fundamental concept of 2D-FT NMR was proposed by Jean Jeener from the Free University of Brussels at an international conference, this idea was largely developed by Richard Ernst, who won the 1991 Nobel prize in Chemistry for his work in FT NMR, including multi-dimensional FT NMR, and especially 2D-FT NMR of small molecules. Multi-dimensional FT NMR experiments were then further developed into powerful methodologies for studying molecules in solution, in particular for the determination of the structure of biopolymers such as proteins or even small nucleic acids.
In 2002 Kurt Wüthrich shared the Nobel Prize in Chemistry (with John Bennett Fenn and Koichi Tanaka) for his work with protein FT NMR in solution.
Solid-state NMR spectroscopy
This technique complements X-ray crystallography in that it is frequently applicable to molecules in an amorphous or liquid-crystalline state, whereas crystallography, as the name implies, is performed on molecules in a crystalline phase. In electronically conductive materials, the Knight shift of the resonance frequency can provide information on the mobile charge carriers. Though nuclear magnetic resonance is used to study the structure of solids, extensive atomic-level structural detail is more challenging to obtain in the solid state. Due to broadening by chemical shift anisotropy (CSA) and dipolar couplings to other nuclear spins, without special techniques such as MAS or dipolar decoupling by RF pulses, the observed spectrum is often only a broad Gaussian band for non-quadrupolar spins in a solid.
Professor Raymond Andrew at the University of Nottingham in the UK pioneered the development of high-resolution solid-state nuclear magnetic resonance. He was the first to report the introduction of the MAS (magic angle sample spinning; MASS) technique that allowed him to achieve spectral resolution in solids sufficient to distinguish between chemical groups with either different chemical shifts or distinct Knight shifts. In MASS, the sample is spun at several kilohertz around an axis that makes the so-called magic angle θm (which is ~54.74°, where 3cos²θm − 1 = 0) with respect to the direction of the static magnetic field B0; as a result of such magic angle sample spinning, the broad chemical shift anisotropy bands are averaged to their corresponding average (isotropic) chemical shift values. Correct alignment of the sample rotation axis as close as possible to θm is essential for cancelling out the chemical-shift anisotropy broadening. There are different angles for the sample spinning relative to the applied field for the averaging of electric quadrupole interactions and paramagnetic interactions, correspondingly ~30.6° and ~70.1°. In amorphous materials, residual line broadening remains since each segment is in a slightly different environment, therefore exhibiting a slightly different NMR frequency.
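A one-line numerical check of the magic-angle condition 3cos²θm − 1 = 0:

```python
import math

# The magic angle is the root of 3*cos^2(theta) - 1 = 0
theta_m = math.degrees(math.acos(1 / math.sqrt(3)))
print(f"theta_m = {theta_m:.4f} degrees")                                  # 54.7356
print(f"check: 3*cos^2 - 1 = {3*math.cos(math.radians(theta_m))**2 - 1:.2e}")
```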
Line broadening or splitting by dipolar or J-couplings to nearby 1H nuclei is usually removed by radio-frequency pulses applied at the 1H frequency during signal detection. The concept of cross polarization developed by Sven Hartmann and Erwin Hahn was utilized in transferring magnetization from protons to less sensitive nuclei by M.G. Gibby, Alex Pines and John S. Waugh. Then, Jake Schaefer and Ed Stejskal demonstrated the powerful use of cross polarization under MAS conditions (CP-MAS) and proton decoupling, which is now routinely employed to measure high resolution spectra of low-abundance and low-sensitivity nuclei, such as carbon-13, silicon-29, or nitrogen-15, in solids. Significant further signal enhancement can be achieved by dynamic nuclear polarization from unpaired electrons to the nuclei, usually at temperatures near 110 K.
Sensitivity
Because the intensity of nuclear magnetic resonance signals and, hence, the sensitivity of the technique depends on the strength of the magnetic field, the technique has also advanced over the decades with the development of more powerful magnets. Advances made in audio-visual technology have also improved the signal-generation and processing capabilities of newer instruments.
As noted above, the sensitivity of nuclear magnetic resonance signals is also dependent on the presence of a magnetically susceptible nuclide and, therefore, either on the natural abundance of such nuclides or on the ability of the experimentalist to artificially enrich the molecules under study with such nuclides. The most abundant naturally occurring isotopes of hydrogen and phosphorus (for example) are both magnetically susceptible and readily useful for nuclear magnetic resonance spectroscopy. In contrast, carbon and nitrogen have useful isotopes, but these occur only in very low natural abundance.
Other limitations on sensitivity arise from the quantum-mechanical nature of the phenomenon. For quantum states separated by energy equivalent to radio frequencies, thermal energy from the environment causes the populations of the states to be close to equal. Since incoming radiation is equally likely to cause stimulated emission (a transition from the upper to the lower state) as absorption, the NMR effect depends on an excess of nuclei in the lower states. Several factors can reduce sensitivity, including:
Increasing temperature, which evens out the Boltzmann population of states. Conversely, low temperature NMR can sometimes yield better results than room-temperature NMR, providing the sample remains liquid.
Saturation of the sample with energy applied at the resonant radiofrequency. This manifests in both CW and pulsed NMR; in the first case (CW) this happens by using too much continuous power that keeps the upper spin levels completely populated; in the second case (pulsed), each pulse (that is at least a 90° pulse) leaves the sample saturated, and four to five times the (longitudinal) relaxation time (5T1) must pass before the next pulse or pulse sequence can be applied. For single pulse experiments, shorter RF pulses that tip the magnetization by less than 90° can be used, which loses some intensity of the signal, but allows for shorter recycle delays. The optimum tip angle is called the Ernst angle, after the Nobel laureate (see the short example after this list). Especially in solid state NMR, or in samples containing very few nuclei with spin (diamond with the natural 1% of carbon-13 is especially troublesome here) the longitudinal relaxation times can be in the range of hours, while for proton-NMR they are often in the range of one second.
Non-magnetic effects, such as electric-quadrupole coupling of spin-1 and spin-3/2 nuclei with their local environment, which broaden and weaken absorption peaks. 14N, an abundant spin-1 nucleus, is difficult to study for this reason. High-resolution NMR instead probes molecules using the rarer 15N isotope, which has spin-1/2.
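A small sketch of the Ernst-angle relation mentioned in the list above, cos(αE) = exp(−TR/T1); the repetition time and T1 values are example inputs:

```python
import math

def ernst_angle_deg(repetition_time_s, T1_s):
    """Flip angle maximizing steady-state signal per unit time: cos(a) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-repetition_time_s / T1_s)))

# Example values: a 1 s recycle time with T1 of 1 s and of 4 s
for T1 in (1.0, 4.0):
    print(f"T1 = {T1} s, TR = 1 s  ->  Ernst angle = {ernst_angle_deg(1.0, T1):.1f} degrees")
```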
Isotopes
Many isotopes of chemical elements can be used for NMR analysis.
Commonly used nuclei:
1H, the most commonly used spin-1/2 nucleus in NMR investigations, has been studied using many forms of NMR. Hydrogen is highly abundant, especially in biological systems. It is the nucleus providing the strongest NMR signal (apart from 3H, which is not commonly used due to its instability and radioactivity). Proton NMR has a narrow chemical-shift range but gives sharp signals in solution state. Fast acquisition of quantitative spectra (with peak integrals in stoichiometric ratios) is possible due to the short relaxation time. The 1H nucleus has provided the sole diagnostic signal for clinical magnetic resonance imaging (MRI).
2H, a spin-1 nucleus, is commonly utilized to provide a signal-free medium in the form of deuterated solvents for proton NMR, to avoid signal interference from hydrogen-containing solvents in measurement of NMR of solutes. It is also used in determining the behavior of lipids in lipid membranes and other solids or liquid crystals, as it is a relatively non-perturbing label which can selectively replace 1H. Alternatively, 2H can be detected in media specially labeled with 2H. Deuterium resonance is commonly used in high-resolution NMR spectroscopy to monitor drift of the magnetic field strength (lock) and to monitor the homogeneity of the external magnetic field.
3He is very sensitive to NMR. It exists at a very low concentration in natural helium and can be purified from 4He. It is used mainly in studies of endohedral fullerenes, where its chemical inertness is beneficial to ascertaining the structure of the entrapping fullerene.
11B is more sensitive than 10B and yields sharper signals. The nuclear spin of 10B is 3 and that of 11B is 3/2. Quartz tubes must be used because borosilicate glass interferes with measurement.
13C, a spin-1/2 nucleus, is widely used, despite its relative paucity in naturally occurring carbon (approximately 1.1%). It is stable to nuclear decay. Since there is a low percentage in natural carbon, spectrum acquisition on samples which have not been enriched in 13C takes a long time. It is frequently used for labeling of compounds in synthetic and metabolic studies. It has low sensitivity and a moderately wide chemical shift range, and yields sharp signals. The low natural percentage makes it useful by preventing spin–spin couplings and makes the spectrum appear less crowded. Slow relaxation of 13C nuclei not bonded to hydrogen means that spectra are not integrable unless long acquisition times are used.
14N, spin-1, is a medium-sensitivity nucleus with a wide chemical shift range. Its large quadrupole moment interferes with acquisition of high-resolution spectra, limiting its usefulness to smaller molecules and functional groups with a high degree of symmetry, such as the head-groups of lipids.
15N, spin-1/2, is relatively commonly used. It can be used for isotopically labeling compounds. It is very insensitive but yields sharp signals. Its low percentage in natural nitrogen together with its low sensitivity requires high concentrations or expensive isotope enrichment.
17O, spin-5/2, low sensitivity and very low natural abundance (0.037%), wide chemical shift range (up to 2000 ppm). Its quadrupole moment causes line broadening. Used in metabolic and biochemical studies of chemical equilibria.
19F, spin-1/2, relatively commonly measured. Sensitive, yields sharp signals, has a wide chemical shift range.
31P, spin-1/2, 100% of natural phosphorus. Medium sensitivity, wide chemical shift range, yields sharp lines. Spectra tend to have a moderate level of noise. Used in biochemical studies and in coordination chemistry with phosphorus-containing ligands.
35Cl and 37Cl, spin-3/2, broad signal. 35Cl is significantly more sensitive and is preferred over 37Cl despite its slightly broader signal. Organic chlorides yield very broad signals. Use is limited to inorganic and ionic chlorides and very small organic molecules.
43Ca, spin-7/2, relatively small quadrupole moment, moderately sensitive, very low natural abundance. Used in biochemistry to study calcium binding to DNA, proteins, etc.
195Pt, used in studies of catalysts and complexes.
Other nuclei (usually used in the studies of their complexes and chemical bonding, or to detect presence of the element):
,
, ,
,
,
,
,
,
Applications
NMR is extensively used in medicine in the form of magnetic resonance imaging. NMR is widely used in organic chemistry and industrially mainly for analysis of chemicals. The technique is also used to measure the ratio between water and fat in foods, monitor the flow of corrosive fluids in pipes, or to study molecular structures such as catalysts.
Medicine
The application of nuclear magnetic resonance best known to the general public is magnetic resonance imaging for medical diagnosis and magnetic resonance microscopy in research settings. However, it is also widely used in biochemical studies, notably in NMR spectroscopy such as proton NMR, carbon-13 NMR, deuterium NMR and phosphorus-31 NMR. Biochemical information can also be obtained from living tissue (e.g. human brain tumors) with the technique known as in vivo magnetic resonance spectroscopy or chemical shift NMR microscopy.
These spectroscopic studies are possible because nuclei are surrounded by orbiting electrons, which are charged particles that generate small, local magnetic fields that add to or subtract from the external magnetic field, and so will partially shield the nuclei. The amount of shielding depends on the exact local environment. For example, a hydrogen bonded to an oxygen will be shielded differently from a hydrogen bonded to a carbon atom. In addition, two hydrogen nuclei can interact via a process known as spin–spin coupling, if they are on the same molecule, which will split the lines of the spectra in a recognizable way.
As one of the two major spectroscopic techniques used in metabolomics, NMR is used to generate metabolic fingerprints from biological fluids to obtain information about disease states or toxic insults.
Chemistry
The aforementioned chemical shift came as a disappointment to physicists who had hoped that the resonance frequency of each nuclear species would be constant in a given magnetic field. But about 1951, chemist S. S. Dharmatti pioneered a way to determine the structure of many compounds by studying the peaks of nuclear magnetic resonance spectra. It can be a very selective technique, distinguishing among many atoms within a molecule or collection of molecules of very similar type but which differ only in terms of their local chemical environment. NMR spectroscopy is used to unambiguously identify known and novel compounds, and as such, is usually required by scientific journals for identity confirmation of synthesized new compounds. See the articles on carbon-13 NMR and proton NMR for detailed discussions.
A chemist can determine the identity of a compound by comparing the observed nuclear precession frequencies to known or predicted frequencies. Further structural data can be elucidated by observing spin–spin coupling, a process by which the precession frequency of a nucleus can be influenced by the spin orientation of a chemically bonded nucleus. Spin–spin coupling is easily observed in NMR of hydrogen-1 (1H NMR) since its natural abundance is nearly 100%.
Because the nuclear magnetic resonance timescale is rather slow, compared to other spectroscopic methods, changing the temperature of a T2* experiment can also give information about fast reactions, such as the Cope rearrangement or about structural dynamics, such as ring-flipping in cyclohexane. At low enough temperatures, a distinction can be made between the axial and equatorial hydrogens in cyclohexane.
An example of nuclear magnetic resonance being used in the determination of a structure is that of buckminsterfullerene (often called "buckyballs", composition C60). This now famous form of carbon has 60 carbon atoms forming a sphere. The carbon atoms are all in identical environments and so should see the same internal H field. Unfortunately, buckminsterfullerene contains no hydrogen and so 13C nuclear magnetic resonance has to be used. 13C spectra require longer acquisition times since carbon-13 is not the common isotope of carbon (unlike hydrogen, where 1H is the common isotope). However, in 1990 the spectrum was obtained by R. Taylor and co-workers at the University of Sussex and was found to contain a single peak, confirming the unusual structure of buckminsterfullerene.
Battery
Nuclear Magnetic Resonance (NMR) is a powerful analytical tool for investigating the local structure and ion dynamics in battery materials. NMR provides unique insights into the short-range atomic environments within complex electrochemical systems such as batteries. Electrochemical processes rely on redox reactions, in which 7Li or 23Na are often involved. Accordingly, their NMR spectra are affected by the electronic structure of the material, which makes NMR an essential technique for probing the behavior of battery components during operation.
Applications of NMR in Battery Research
Electrodes and Structural Transformations: During charge and discharge cycles, the materials in the anodes and cathodes undergo local structural transformations. These changes can be monitored using NMR by analyzing the signal's line shape, line intensity, and chemical shift. These transformations are often not captured by X-ray diffraction techniques (providing long-range information), making NMR indispensable for understanding the underlying mechanisms of energy storage.
Metal Dendrite Formation: One of the challenges in lithium and sodium-based batteries is the formation of metal dendrites, which can lead to short circuits and catastrophic battery failure. In Situ NMR allows researchers to observe the formation of lithium or sodium dendrites in real time during battery cycling. Varying the cycling rates can also quantify the effect on dendrite formation, aiding in the development of strategies to suppress dendrite growth and reduce the risk of short circuits.
Solid Electrolytes and Interfaces: Solid electrolytes, a key focus of next-generation battery research, often suffer from limited ion diffusion rates. NMR techniques can measure diffusivity in solid electrolytes, helping researchers understand how to enhance ion conductivity. Furthermore, NMR is used to study the Solid Electrolyte Interface (SEI), a layer that forms on the electrode surface and thus influences battery stability. Solid-state NMR (ssNMR) is particularly valuable for characterizing the composition and ion dynamics within the SEI layer due to its nondestructive testing capabilities.
In Situ and Ex Situ NMR Techniques
NMR technology can be divided into two main experimental approaches in battery research: In Situ NMR and Ex Situ NMR. Each offers unique advantages depending on the research goals.
In Situ NMR: In situ NMR enables real-time observation of chemical and structural changes in batteries while they are operating. This is particularly important for studying transient species that only exist under working conditions, such as certain intermediate reaction products. In situ NMR has become a critical tool for understanding processes like lithium and sodium plating and dendrite formation during battery cycling.
Ex Situ NMR: Ex situ NMR is used after the battery has been disassembled, allowing for high-resolution analysis of battery components. It is often employed to study a wide range of nuclei, including 1H, 2H, 6Li, 7Li, 13C, 15N, 17O, 19F, 25Mg, 29Si, 31P, 51V, 133Cs. Many of these nuclei are quadrupolar or present in low abundance, making them difficult to detect. However, ex situ NMR benefits from better sensitivity and narrower linewidths, which can be further improved by employing larger sample volumes, higher magnetic fields, or magic angle spinning (MAS).
Purity determination (w/w NMR)
While NMR is primarily used for structural determination, it can also be used for purity determination, provided that the structure and molecular weight of the compound are known. This technique requires the use of an internal standard of known purity. Typically this standard will have a high molecular weight to facilitate accurate weighing, but relatively few protons so as to give a clear peak for later integration, e.g. 1,2,4,5-tetrachloro-3-nitrobenzene. Accurately weighed portions of the standard and sample are combined and analysed by NMR. Suitable peaks from both compounds are selected and the purity of the sample is determined via the following equation:
Purity (sample) = (n[H]spl / n[H]std) × (wstd / wspl) × (MWspl / MWstd) × P
Where:
wstd: weight of internal standard
wspl: weight of sample
n[H]std: the integrated area of the peak selected for comparison in the standard, corrected for the number of protons in that functional group
n[H]spl: the integrated area of the peak selected for comparison in the sample, corrected for the number of protons in that functional group
MWstd: molecular weight of standard
MWspl: molecular weight of sample
P: purity of internal standard
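A worked numerical sketch of this calculation (the weights, integrals and molecular weights below are made-up illustrative values; the standard molecular weight corresponds to 1,2,4,5-tetrachloro-3-nitrobenzene):

```python
def purity_ww(w_std, w_spl, nH_std, nH_spl, mw_std, mw_spl, p_std):
    """Sample purity (w/w) from proton-count-corrected peak integrals,
    following the relation given above."""
    return (nH_spl / nH_std) * (w_std / w_spl) * (mw_spl / mw_std) * p_std

# Illustrative, made-up numbers: 10.0 mg of a 99.5% pure internal standard
# (MW 260.89) weighed against 12.0 mg of sample (MW 180.16).
purity = purity_ww(w_std=10.0, w_spl=12.0,
                   nH_std=1.000, nH_spl=1.120,   # integrated areas per proton
                   mw_std=260.89, mw_spl=180.16,
                   p_std=0.995)
print(f"sample purity = {purity*100:.1f} % w/w")
```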
Non-destructive testing
Nuclear magnetic resonance is extremely useful for analyzing samples non-destructively. Radio-frequency magnetic fields easily penetrate many types of matter and anything that is not highly conductive or inherently ferromagnetic. For example, various expensive biological samples, such as nucleic acids, including RNA and DNA, or proteins, can be studied using nuclear magnetic resonance for weeks or months before using destructive biochemical experiments. This also makes nuclear magnetic resonance a good choice for analyzing dangerous samples.
Segmental and molecular motions
In addition to providing static information on molecules by determining their 3D structures, one of the remarkable advantages of NMR over X-ray crystallography is that it can be used to obtain important dynamic information. This is due to the orientation dependence of the chemical-shift, dipole-coupling, or electric-quadrupole-coupling contributions to the instantaneous NMR frequency in an anisotropic molecular environment. When the molecule or segment containing the NMR-observed nucleus changes its orientation relative to the external field, the NMR frequency changes, which can result in changes in one- or two-dimensional spectra or in the relaxation times, depending on the correlation time and amplitude of the motion.
Data acquisition in the petroleum industry
Another use for nuclear magnetic resonance is data acquisition in the petroleum industry for petroleum and natural gas exploration and recovery. Initial research in this domain began in the 1950s; however, the first commercial instruments were not released until the early 1990s. A borehole is drilled into rock and sedimentary strata into which nuclear magnetic resonance logging equipment is lowered. Nuclear magnetic resonance analysis of these boreholes is used to measure rock porosity, estimate permeability from pore size distribution and identify pore fluids (water, oil and gas). These instruments are typically low field NMR spectrometers.
NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements respond to the presence of hydrogen. Because hydrogen atoms primarily occur in pore fluids, NMR effectively responds to the volume, composition, viscosity, and distribution of these fluids, for example oil, gas or water. NMR logs provide information about the quantities of fluids present, the properties of these fluids, and the sizes of the pores containing these fluids. From this information, it is possible to infer or estimate:
The volume (porosity) and distribution (permeability) of the rock pore space
Rock composition
Type and quantity of fluid hydrocarbons
Hydrocarbon producibility
The basic core and log measurement is the T2 decay, presented as a distribution of T2 amplitudes versus time at each sample depth, typically from 0.3 ms to 3 s. The T2 decay is further processed to give the total pore volume (the total porosity) and pore volumes within different ranges of T2. The most common volumes are the bound fluid and free fluid. A permeability estimate is made using a transform such as the Timur-Coates or SDR permeability transforms. By running the log with different acquisition parameters, direct hydrocarbon typing and enhanced diffusion are possible.
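As an illustrative sketch only: commonly quoted functional forms of these transforms are k = (φ/C)⁴·(FFV/BFV)² (Timur-Coates) and k = a·φ⁴·T2lm² (SDR), but the coefficients, T2 cutoffs and unit conventions are formation-dependent and must be calibrated; the numbers below are placeholders, not recommended defaults.

```python
def timur_coates_permeability(phi, ffv, bfv, c=10.0):
    """Timur-Coates form k = (phi/C)^4 * (FFV/BFV)^2.
    phi: porosity; FFV, BFV: free- and bound-fluid volumes from the T2 cutoff.
    C is a formation-dependent calibration constant (placeholder value here)."""
    return (phi / c) ** 4 * (ffv / bfv) ** 2

def sdr_permeability(phi, t2_log_mean_ms, a=4.0):
    """SDR form k = a * phi^4 * T2lm^2, with a calibrated per formation."""
    return a * phi ** 4 * t2_log_mean_ms ** 2

# Illustrative, uncalibrated example inputs only
print(timur_coates_permeability(phi=0.25, ffv=0.18, bfv=0.07))
print(sdr_permeability(phi=0.25, t2_log_mean_ms=120.0))
```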
Flow probes for NMR spectroscopy
Real-time applications of NMR in liquid media have been developed using specifically designed flow probes (flow cell assemblies) which can replace standard tube probes. This has enabled techniques that can incorporate the use of high performance liquid chromatography (HPLC) or other continuous flow sample introduction devices. These flow probes have been used in various online process-monitoring applications, such as chemical reactions and environmental pollutant degradation.
Process control
NMR has now entered the arena of real-time process control and process optimization in oil refineries and petrochemical plants. Two different types of NMR analysis are utilized to provide real time analysis of feeds and products in order to control and optimize unit operations. Time-domain NMR (TD-NMR) spectrometers operating at low field (2–20 MHz for ) yield free induction decay data that can be used to determine absolute hydrogen content values, rheological information, and component composition. These spectrometers are used in mining, polymer production, cosmetics and food manufacturing as well as coal analysis. High resolution FT-NMR spectrometers operating in the 60 MHz range with shielded permanent magnet systems yield high resolution NMR spectra of refinery and petrochemical streams. The variation observed in these spectra with changing physical and chemical properties is modeled using chemometrics to yield predictions on unknown samples. The prediction results are provided to control systems via analogue or digital outputs from the spectrometer.
Earth's field NMR
In the Earth's magnetic field, NMR frequencies are in the audio frequency range, or the very low frequency and ultra low frequency bands of the radio frequency spectrum. Earth's field NMR (EFNMR) is typically stimulated by applying a relatively strong dc magnetic field pulse to the sample and, after the end of the pulse, analyzing the resulting low frequency alternating magnetic field that occurs in the Earth's magnetic field due to free induction decay (FID). These effects are exploited in some types of magnetometers, EFNMR spectrometers, and MRI imagers. Their inexpensive portable nature makes these instruments valuable for field use and for teaching the principles of NMR and MRI.
An important feature of EFNMR spectrometry compared with high-field NMR is that some aspects of molecular structure can be observed more clearly at low fields and low frequencies, whereas other aspects observable at high fields are not observable at low fields. This is because:
Electron-mediated heteronuclear J-couplings (spin–spin couplings) are field independent, producing clusters of two or more frequencies separated by several Hz, which are more easily observed in a fundamental resonance of about 2 kHz. "Indeed it appears that enhanced resolution is possible due to the long spin relaxation times and high field homogeneity which prevail in EFNMR."
Chemical shifts of several ppm are clearly separated in high field NMR spectra, but have separations of only a few millihertz at proton EFNMR frequencies, so are usually not resolved.
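A one-line estimate of the proton resonance frequency in a typical Earth's field of about 50 µT, consistent with the ~2 kHz figure quoted above:

```python
import math

gamma_1H = 2.6752218744e8   # proton gyromagnetic ratio, rad s^-1 T^-1
earth_field = 50e-6         # tesla, a typical mid-latitude value

nu = gamma_1H * earth_field / (2 * math.pi)
print(f"Proton Larmor frequency in a {earth_field*1e6:.0f} uT field: {nu:.0f} Hz")  # ~2.1 kHz
```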
Zero field NMR
In zero field NMR all magnetic fields are shielded such that magnetic fields below 1 nT (nanotesla) are achieved and the nuclear precession frequencies of all nuclei are close to zero and indistinguishable. Under those circumstances the observed spectra are no longer dictated by chemical shifts but primarily by J-coupling interactions, which are independent of the external magnetic field. Since inductive detection schemes are not sensitive at very low frequencies, on the order of the J-couplings (typically between 0 and 1000 Hz), alternative detection schemes are used. Specifically, sensitive magnetometers turn out to be good detectors for zero field NMR. A zero magnetic field environment does not provide any polarization, hence it is the combination of zero field NMR with hyperpolarization schemes that makes zero field NMR desirable.
Quantum computing
NMR quantum computing uses the spin states of nuclei within molecules as qubits. NMR differs from other implementations of quantum computers in that it uses an ensemble of systems; in this case, molecules.
Magnetometers
Various magnetometers use NMR effects to measure magnetic fields, including proton precession magnetometers (PPM) (also known as proton magnetometers), and Overhauser magnetometers.
SNMR
Surface magnetic resonance (or magnetic resonance sounding) is based on the principle of nuclear magnetic resonance (NMR) and measurements can be used to indirectly estimate the water content of saturated and unsaturated zones in the earth's subsurface. SNMR is used to estimate aquifer properties, including quantity of water contained in the aquifer, porosity, and hydraulic conductivity.
Makers of NMR equipment
Major NMR instrument makers include Thermo Fisher Scientific, Magritek, Oxford Instruments, Bruker, Spinlock SRL, General Electric, JEOL, Kimble Chase, Philips, Siemens AG, and formerly Agilent Technologies (who acquired Varian, Inc.).
| Physical sciences | Nuclear physics | Physics |
25111839 | https://en.wikipedia.org/wiki/Overexploitation | Overexploitation | Overexploitation, also called overharvesting or ecological overshoot, refers to harvesting a renewable resource to the point of diminishing returns. Continued overexploitation can lead to the destruction of the resource, as it will be unable to replenish. The term applies to natural resources such as water aquifers, grazing pastures and forests, wild medicinal plants, fish stocks and other wildlife.
In ecology, overexploitation describes one of the five main activities threatening global biodiversity. Ecologists use the term to describe populations that are harvested at an unsustainable rate, given their natural rates of mortality and capacities for reproduction. This can result in extinction at the population level and even extinction of whole species. In conservation biology, the term is usually used in the context of human economic activity that involves the taking of biological resources, or organisms, in larger numbers than their populations can withstand. The term is also used and defined somewhat differently in fisheries, hydrology and natural resource management.
Overexploitation can lead to resource destruction, including extinctions. However, it is also possible for overexploitation to be sustainable, as discussed below in the section on fisheries. In the context of fishing, the term overfishing can be used instead of overexploitation, as can overgrazing in stock management, overlogging in forest management, overdrafting in aquifer management, and endangered species in species monitoring. Overexploitation is not an activity limited to humans. Introduced predators and herbivores, for example, can overexploit native flora and fauna.
History
The concern about overexploitation, while relatively recent in the annals of modern environmental awareness, traces back to ancient practices embedded in human history. Contrary to the notion that overexploitation is an exclusively contemporary issue, the phenomenon has been documented for millennia and is not limited to human activities alone. Historical evidence reveals that various cultures and societies have engaged in practices leading to the overuse of natural resources, sometimes with drastic consequences.
One poignant example can be found in the ceremonial cloaks of Hawaiian kings, which were adorned with the feathers of the now-extinct mamo bird. Crafting a single cloak required the feathers of approximately 70,000 adult mamo birds, illustrating a staggering scale of resource extraction that ultimately contributed to its extinction. This instance underscores how cultural traditions and their associated demands can sometimes precipitate the overexploitation of a species to the brink of extinction.
Similarly, the story of the dodo bird from Mauritius provides another clear example of overexploitation. The dodo, a flightless bird, exhibited a lack of fear toward predators, including humans, making it exceptionally vulnerable to hunting. The dodo's naivety and the absence of natural defenses against human hunters and introduced species led to its rapid extinction. This case offers insight into how certain species, particularly those isolated on islands, can be disproportionately affected by human activities due to their evolutionary adaptations.
Hunting has long been a vital human activity for survival, providing food, clothing, and tools. However, the history of hunting also includes episodes of overexploitation, particularly in the form of overhunting. The overkill hypothesis explains the relatively rapid extinction of megafauna during the Quaternary extinction events, suggesting that these extinctions were closely linked to human migration and population growth. One of the most compelling pieces of evidence supporting this theory is that approximately 80% of North American large mammal species disappeared within roughly a thousand years of humans arriving in the Western Hemisphere. This rapid disappearance indicates a significant impact of human activity on these species, underscoring the profound influence humans have had on their environment throughout history.
The fastest-ever recorded extinction of megafauna occurred in New Zealand. By 1500 AD, a mere 200 years after the first human settlements, ten species of the giant moa birds were driven to extinction by the Māori. This rapid extinction underscores the significant impact humans can have on native wildlife, especially in isolated ecosystems like New Zealand. The Māori, relying on the moa as a primary food source and for resources such as feathers and bones, hunted these birds extensively. The moa's inability to fly and their size, which made them easier targets, contributed to their rapid decline. This event serves as a cautionary tale about the delicate balance between human activity and biodiversity and highlights the potential consequences of over-hunting and habitat destruction. A second wave of extinctions occurred later with European settlement. This period marked significant ecological disruption, largely due to the introduction of new species and land-use changes. European settlers brought with them animals such as rats, cats, and stoats, which preyed upon native birds and other wildlife. Additionally, deforestation for agriculture significantly altered the habitats of many endemic species. These combined factors accelerated the decline of New Zealand's unique biodiversity, leading to the extinction of several more species. The European settlement period serves as a poignant example of how human activities can drastically impact natural ecosystems.
In more recent times, overexploitation has resulted in the gradual emergence of the concepts of sustainability and sustainable development, which have built on other concepts, such as sustainable yield, eco-development, and deep ecology.
Overview
Overexploitation does not necessarily lead to the destruction of the resource, nor is it necessarily unsustainable. However, depleting the numbers or amount of the resource can change its quality. For example, footstool palm is a wild palm tree found in Southeast Asia. Its leaves are used for thatching and food wrapping, and overharvesting has resulted in its leaf size becoming smaller.
Tragedy of the commons
In 1968, the journal Science published an article by Garrett Hardin entitled "The Tragedy of the Commons". It was based on a parable that William Forster Lloyd published in 1833 to explain how individuals innocently acting in their own self-interest can overexploit, and destroy, a resource that they all share. Lloyd described a simplified hypothetical situation based on medieval land tenure in Europe. Herders share common land on which they are each entitled to graze their cows. In Hardin's article, it is in each herder's individual interest to graze each new cow that the herder acquires on the common land, even if the carrying capacity of the common is exceeded, which damages the common for all the herders. The self-interested herder receives all of the benefits of having the additional cow, while all the herders share the damage to the common. However, all herders reach the same rational decision to buy additional cows and graze them on the common, which eventually destroys the common. Hardin concludes:
Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.
In the course of his essay, Hardin develops the theme, drawing in many examples of latter-day commons, such as national parks, the atmosphere, oceans, rivers and fish stocks. The example of fish stocks has led some to call this the "tragedy of the fishers". A major theme running through the essay is the growth of human populations, with the Earth's finite resources being the general common.
The tragedy of the commons has intellectual roots tracing back to Aristotle, who noted that "what is common to the greatest number has the least care bestowed upon it", as well as to Hobbes and his Leviathan. The opposite situation to a tragedy of the commons is sometimes referred to as a tragedy of the anticommons: a situation in which rational individuals, acting separately, collectively waste a given resource by underutilizing it.
The tragedy of the commons can be avoided if it is appropriately regulated. Hardin's use of "commons" has frequently been misunderstood, leading Hardin to later remark that he should have titled his work "The tragedy of the unregulated commons".
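Hardin's argument above can be reduced to a small payoff calculation. The numbers in the sketch below are arbitrary and only illustrate the incentive structure: each herder captures the whole benefit of an extra cow while bearing only a fraction of the damage, so the individually rational choice degrades the shared pasture.

```python
# Toy payoff sketch of the tragedy of the commons (illustrative numbers only).
# Each herder gains the full benefit of an extra cow but shares the damage
# to the common pasture equally with every other herder.
N_HERDERS = 10
BENEFIT_PER_COW = 1.0        # private gain from grazing one more cow
DAMAGE_PER_COW = 1.5         # total damage to the common from one more cow

private_gain = BENEFIT_PER_COW - DAMAGE_PER_COW / N_HERDERS   # what one herder feels
collective_change = BENEFIT_PER_COW - DAMAGE_PER_COW          # what the group feels

print(f"Net gain to the individual herder: {private_gain:+.2f}")        # +0.85 -> add the cow
print(f"Net change for the group as a whole: {collective_change:+.2f}")  # -0.50 -> common degrades
```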
Sectors
Fisheries
In wild fisheries, overexploitation or overfishing occurs when a fish stock has been fished down "below the size that, on average, would support the long-term maximum sustainable yield of the fishery". However, overexploitation can be sustainable.
When a fishery starts harvesting fish from a previously unexploited stock, the biomass of the fish stock will decrease, since harvesting means fish are being removed. For sustainability, the rate at which the fish replenish biomass through reproduction must balance the rate at which the fish are being harvested. If the harvest rate is increased, then the stock biomass will further decrease. At a certain point, the maximum harvest yield that can be sustained will be reached, and further attempts to increase the harvest rate will result in the collapse of the fishery. This point is called the maximum sustainable yield, and in practice, usually occurs when the fishery has been fished down to about 30% of the biomass it had before harvesting started.
It is possible to fish the stock down further to, say, 15% of the pre-harvest biomass, and then adjust the harvest rate so the biomass remains at that level. In this case, the fishery is sustainable, but is now overexploited, because the stock has been run down to the point where the sustainable yield is less than it could be.
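The relationship between standing biomass and sustainable yield described above can be illustrated with a simple surplus-production (logistic, Schaefer-type) model. This is a sketch under assumed parameters; in this idealized model the maximum sustainable yield occurs at half the unfished biomass, whereas, as noted above, real fisheries typically reach it at around 30%.

```python
# Logistic (Schaefer) surplus-production sketch: dB/dt = r*B*(1 - B/K) - H
# Assumed illustrative parameters, not data for any real fishery.
r = 0.5      # intrinsic growth rate (per year)
K = 1.0      # unfished (carrying-capacity) biomass, normalized to 1

def sustainable_yield(b_fraction: float) -> float:
    """Harvest rate that exactly balances growth when biomass is held at b_fraction * K."""
    B = b_fraction * K
    return r * B * (1 - B / K)

for frac in (0.5, 0.3, 0.15):
    print(f"biomass at {frac:.0%} of K -> sustainable yield {sustainable_yield(frac):.3f}")
# In this model the yield peaks at 50% of K; holding the stock at 15% is still
# sustainable but gives a much lower yield -- the stock is "overexploited".
```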
Fish stocks are said to "collapse" if their biomass declines by more than 95 percent of their maximum historical biomass. Atlantic cod stocks were severely overexploited in the 1970s and 1980s, leading to their abrupt collapse in 1992. Even though fishing has ceased, the cod stocks have failed to recover. The absence of cod as the apex predator in many areas has led to trophic cascades.
About 25% of world fisheries are now overexploited to the point where their current biomass is less than the level that maximizes their sustainable yield. These depleted fisheries can often recover if fishing pressure is reduced until the stock biomass returns to the optimal biomass. At this point, harvesting can be resumed near the maximum sustainable yield.
The tragedy of the commons can be avoided within the context of fisheries if fishing effort and practices are regulated appropriately by fisheries management. One effective approach may be assigning some measure of ownership in the form of individual transferable quotas (ITQs) to fishermen. In 2008, a large scale study of fisheries that used ITQs, and ones that did not, provided strong evidence that ITQs help prevent collapses and restore fisheries that appear to be in decline.
Water resources
Water resources, such as lakes and aquifers, are usually renewable resources which naturally recharge (the term fossil water is sometimes used to describe aquifers which do not recharge). Overexploitation occurs if a water resource, such as the Ogallala Aquifer, is mined or extracted at a rate that exceeds the recharge rate, that is, at a rate that exceeds the practical sustained yield. Recharge usually comes from area streams, rivers and lakes. An aquifer which has been overexploited is said to be overdrafted or depleted. Forests enhance the recharge of aquifers in some locales, although generally forests are a major source of aquifer depletion. Depleted aquifers can become polluted with contaminants such as nitrates, or permanently damaged through subsidence or through saline intrusion from the ocean.
This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually centre around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining is also water resource intensive. A modified Hubbert curve applies to any resource that can be harvested faster than it can be replaced. Though Hubbert's original analysis did not apply to renewable resources, their overexploitation can result in a Hubbert-like peak. This has led to the concept of peak water.
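A Hubbert-type curve for a resource extracted faster than it recharges can be sketched as the derivative of a logistic cumulative-extraction curve. The parameters below are arbitrary and purely illustrative.

```python
import numpy as np

# Hubbert-type extraction curve: production rate is the derivative of a logistic
# cumulative-extraction curve. Parameters are arbitrary, for illustration only.
Q_total = 1000.0   # ultimately recoverable quantity (arbitrary units)
k = 0.15           # steepness of the logistic curve (per year)
t_peak = 40.0      # year of peak production

years = np.arange(0, 81)
production = (Q_total * k * np.exp(-k * (years - t_peak))
              / (1 + np.exp(-k * (years - t_peak))) ** 2)

print(f"Peak production ~{production.max():.1f} units/year at year {years[np.argmax(production)]}")
```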
Forestry
Forests are overexploited when they are logged at a rate faster than reforestation takes place. Reforestation competes with other land uses, such as food production, livestock grazing, and living space for further economic growth. Historically, utilization of forest products, including timber and fuel wood, has played a key role in human societies, comparable to the roles of water and cultivable land. Today, developed countries continue to use timber for building houses and wood pulp for paper. In developing countries, almost three billion people rely on wood for heating and cooking. Short-term economic gains made by conversion of forest to agriculture, or by overexploitation of wood products, typically lead to loss of long-term income and long-term biological productivity. West Africa, Madagascar, Southeast Asia, and many other regions have experienced lower revenue because of overexploitation and the consequent declining timber harvests.
Biodiversity
Overexploitation is one of the main threats to global biodiversity. Other threats include pollution, introduced and invasive species, habitat fragmentation, habitat destruction, uncontrolled hybridization, climate change, ocean acidification and the driver behind many of these, human overpopulation.
One of the key health issues associated with biodiversity is drug discovery and the availability of medicinal resources. A significant proportion of drugs are natural products derived, directly or indirectly, from biological sources. Marine ecosystems are of particular interest in this regard. However, unregulated and inappropriate bioprospecting could potentially lead to overexploitation, ecosystem degradation and loss of biodiversity.
Endangered and extinct species
Species from all groups of fauna and flora are affected by overexploitation. This phenomenon is not bound by taxonomy; it spans across mammals, birds, fish, insects, and plants alike. Animals are hunted for their fur, tusks, or meat, while plants are harvested for medicinal purposes, timber, or ornamental uses. This unsustainable practice disrupts ecosystems, threatening biodiversity and leading to the potential extinction of vulnerable species.
All living organisms require resources to survive. Overexploitation of these resources for protracted periods can deplete natural stocks to the point where they are unable to recover within a short time frame. Humans have always harvested the food and other resources they need to survive, but human populations were historically small and methods of collection limited to small quantities. With an exponential increase in human population, expanding markets and increasing demand, combined with improved access and techniques for capture, are causing the exploitation of many species beyond sustainable levels. In practical terms, if continued, such exploitation reduces valuable resources to such low levels that harvesting them is no longer sustainable and can lead to the extinction of a species, in addition to having dramatic, unforeseen effects on the ecosystem. Overexploitation often occurs rapidly as markets open, exploiting previously untapped resources or locally used species.
Today, overexploitation and misuse of natural resources is an ever-present threat to species richness. This is more prevalent on islands and among the species that inhabit them, as islands can be viewed as the world in miniature. Island endemic populations are more prone to extinction from overexploitation, as they often exist at low densities with reduced reproductive rates. Good examples of this are island snails, such as the Hawaiian Achatinella and the French Polynesian Partula: 15 species of achatinelline snails are listed as extinct and 24 as critically endangered, while 60 species of partulidae are considered extinct and 14 are listed as critically endangered. The WCMC has attributed the extreme vulnerability exhibited among these species to over-collecting and very low lifetime fecundity.
As another example, when the humble hedgehog was introduced to the Scottish island of Uist, its population greatly expanded and took to consuming and overexploiting shorebird eggs, with drastic consequences for the birds' breeding success. Twelve species of avifauna are affected, with some populations reduced by 39%.
Where there is substantial human migration, civil unrest, or war, controls may no longer exist. With civil unrest, for example in the Congo and Rwanda, firearms have become common and the breakdown of food distribution networks in such countries leaves the resources of the natural environment vulnerable. Animals are even killed as target practice, or simply to spite the government. Populations of large primates, such as gorillas and chimpanzees, ungulates and other mammals, may be reduced by 80% or more by hunting, and certain species may be eliminated. This decline has been called the bushmeat crisis.
Vertebrates
Overexploitation threatens one-third of endangered vertebrates, as well as other groups. Excluding edible fish, the illegal trade in wildlife is valued at $10 billion per year. Industries responsible for this include the trade in bushmeat, the trade in Chinese medicine, and the fur trade. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was set up to control and regulate the trade in endangered animals. It currently protects, to a varying degree, some 33,000 species of animals and plants. It is estimated that overexploitation accounts for a quarter of the endangered vertebrates in the United States of America and half of the endangered mammals.
Birds
Overall, 50 bird species that have become extinct since 1500 (approximately 40% of the total) have been subject to overexploitation, including:
Great Auk – a penguin-like bird of the North Atlantic, hunted for its feathers, meat, fat, and oil.
Carolina parakeet – the only parrot species native to the eastern United States, hunted for crop protection and for its feathers.
Mammals
The international trade in fur: chinchilla, vicuña, giant otter and numerous cat species
Fish
Aquarium hobbyists: tropical fish
Various
Novelty pets: snakes, parrots, primates and big cats
Chinese medicine: bears, tigers, rhinos, seahorses, Asian black bear and saiga antelope
Invertebrates
Insect collectors: butterflies
Shell collectors: marine molluscs
Plants
Horticulturists: New Zealand mistletoe (Trilepidea adamsii), orchids, cacti and many other plant species
Cascade effects
Overexploitation of species can result in knock-on or cascade effects. This can particularly apply if, through overexploitation, a habitat loses its apex predator. Because of the loss of the top predator, a dramatic increase in their prey species can occur. In turn, the unchecked prey can then overexploit their own food resources until population numbers dwindle, possibly to the point of extinction.
A classic example of cascade effects occurred with sea otters. Starting before the 17th century and not phased out until 1911, sea otters were hunted aggressively for their exceptionally warm and valuable pelts, which could fetch up to $2500 US. This caused cascade effects through the kelp forest ecosystems along the Pacific Coast of North America.
One of the sea otters’ primary food sources is the sea urchin. When hunters caused sea otter populations to decline, an ecological release of sea urchin populations occurred. The sea urchins then overexploited their main food source, kelp, creating urchin barrens, areas of seabed denuded of kelp, but carpeted with urchins. No longer having food to eat, the sea urchin became locally extinct as well. Also, since kelp forest ecosystems are homes to many other species, the loss of the kelp caused other cascade effects of secondary extinctions.
In 1911, when only one small group of 32 sea otters survived in a remote cove, an international treaty was signed to prevent further exploitation of the sea otters. Under heavy protection, the otters multiplied and repopulated the depleted areas, which slowly recovered. More recently, with declining numbers of fish stocks, again due to overexploitation, killer whales have experienced a food shortage and have been observed feeding on sea otters, again reducing their numbers.
| Physical sciences | Earth science basics: General | Earth science |
45266571 | https://en.wikipedia.org/wiki/Laboratory%20safety | Laboratory safety | Many laboratories contain significant risks, and the prevention of laboratory accidents requires great care and constant vigilance. Examples of risk factors include high voltages, high and low pressures and temperatures, corrosive and toxic chemicals and chemical vapours, radiation, fire, explosions, and biohazards including infective organisms and their toxins.
Measures to protect against laboratory accidents include safety training and enforcement of laboratory safety policies, safety review of experimental designs, the use of personal protective equipment, and the use of the buddy system for particularly risky operations.
In many countries, laboratory work is subject to health and safety legislation. In some cases, laboratory activities can also present environmental health risks, for example, the accidental or deliberate discharge of toxic or infective material from the laboratory into the environment.
Chemical hazards
Hazardous chemicals present physical and/or health threats to workers in clinical, industrial, and academic laboratories. Laboratory chemicals include cancer-causing agents (carcinogens), toxins (e.g., those affecting the liver, kidney, and nervous system), irritants, corrosives, sensitizers, as well as agents that act on the blood system or damage the lungs, skin, eyes, or mucous membranes.
Biological hazards
Biological agents and biological toxins
Many laboratory workers encounter daily exposure to biological hazards. These hazards are present in various sources throughout the laboratory such as blood and body fluids, culture specimens, body tissue and cadavers, and laboratory animals, as well as other workers.
Some biological agents (e.g., viruses, bacteria, fungi, and prions) and toxins are federally regulated because they have the potential to pose a severe threat to public health and safety, to animal or plant health, or to animal or plant products.
Anthrax - Anthrax is an acute infectious disease caused by a spore-forming bacterium called Bacillus anthracis.
Avian Flu - Avian influenza is caused by Influenza A viruses.
Botulism - Cases of botulism are usually associated with consumption of preserved foods.
Foodborne Disease - Foodborne illnesses are caused by viruses, bacteria, parasites, toxins, metals, and prions (microscopic protein particles). Symptoms range from mild gastroenteritis to life-threatening neurologic, hepatic and renal syndromes.
Hantavirus - Hantaviruses are transmitted to humans from the dried droppings, urine, or saliva of mice and rats.
Legionnaires’ Disease - Legionnaires’ disease is a bacterial disease commonly associated with water-based aerosols.
Molds and fungi - Molds and fungi produce and release millions of spores small enough to be air-, water-, or insect-borne, which may have negative effects on human health, including allergic reactions, asthma, and other respiratory problems.
Plague - The World Health Organization reports 1,000 to 3,000 cases of plague every year. A bioterrorist release of plague could result in a rapid spread of the pneumonic form of the disease, which could have devastating consequences.
Ricin - Ricin is one of the most toxic and easily produced plant toxins. It has been used in the past as a bioterrorist weapon and remains a serious threat.
Smallpox - Smallpox is a highly contagious disease unique to humans. It is estimated that no more than 20 percent of the population has any immunity from previous vaccination.
Tularemia - Tularemia is also known as "rabbit fever" or "deer fly fever" and is extremely infectious. Relatively few bacteria are required to cause the disease, which is why it is an attractive weapon for use in bioterrorism.
Physical hazards and others
Besides exposure to chemicals and biological agents, laboratory workers can also be exposed to a number of physical hazards. Some of the common physical hazards that they may encounter include the following: ergonomic, ionizing radiation, non-ionizing radiation, and noise hazards.
Ergonomic Hazards
Laboratory workers are at risk for repetitive motion injuries during routine laboratory procedures such as pipetting, working at microscopes, operating microtomes, using cell counters, and keyboarding at computer workstations. Repetitive motion injuries develop over time and occur when muscles and joints are stressed, tendons are inflamed, nerves are pinched and the flow of blood is restricted. Standing and working in awkward positions in front of laboratory hoods/biological safety cabinets can also present ergonomic problems.
Ionizing Radiation
Ionizing radiation sources are found in a wide range of occupational settings, including laboratories. These radiation sources can pose a considerable health risk to affected workers if not properly controlled. Any laboratory possessing or using radioactive isotopes must be licensed by the Nuclear Regulatory Commission (NRC) and/or by a state agency that has been approved by the NRC (10 CFR 31.11 and 10 CFR 35.12).
The fundamental objectives of radiation protection measures (a simple dose-check sketch follows this list) are:
to limit entry of radionuclides into the human body (via ingestion, inhalation, absorption, or through open wounds) to quantities as low as reasonably achievable (ALARA) and always within the established limits;
to limit exposure to external radiation to levels that are within established dose limits and as far below these limits as is reasonably achievable.
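A dose check against these objectives can be sketched in a few lines. The 50 mSv annual limit and the 10% administrative target in the sketch below are assumed example figures used only for illustration; actual limits depend on the regulator and the category of worker.

```python
# Minimal sketch of an ALARA-style dose check. The 50 mSv (5 rem) annual
# occupational limit is an assumed example figure; actual limits depend on
# the regulator and the category of worker.
ANNUAL_LIMIT_MSV = 50.0
ALARA_TARGET_FRACTION = 0.1   # assumed administrative target: stay below 10% of the limit

def check_dose(accumulated_msv: float) -> str:
    """Classify an accumulated annual dose against the limit and an ALARA target."""
    if accumulated_msv > ANNUAL_LIMIT_MSV:
        return "over the annual limit"
    if accumulated_msv > ALARA_TARGET_FRACTION * ANNUAL_LIMIT_MSV:
        return "within the limit, but above the ALARA target"
    return "within the ALARA target"

print(check_dose(2.0))    # within the ALARA target
print(check_dose(12.0))   # within the limit, but above the ALARA target
```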
Safety hazards
Autoclaves and sterilizers
Workers should be trained to recognize the potential for exposure to burns or cuts that can occur from handling or sorting hot sterilized items or sharp instruments when removing them from autoclaves/sterilizers or from steam lines that service the autoclaves.
Centrifuges
Centrifuges, due to the high speed at which they operate, have great potential for injuring users if not operated properly. Unbalanced centrifuge rotors can result in injury, even death. Sample container breakage can generate aerosols that may be harmful if inhaled.
The majority of all centrifuge accidents are the result of user error.
Compressed gases
Under the laboratory standard, a compressed gas is any material that meets one of the following criteria (a classification sketch follows this list):
Is a gas or mixture of gases in a container having an absolute pressure exceeding 40 pounds per square inch (psi) at 70 °F (21.1 °C); or
Is a gas or mixture of gases having an absolute pressure exceeding 104 psi at 130 °F (54.4 °C) regardless of the pressure at 70 °F (21.1 °C); or
Is a liquid having a vapor pressure exceeding 40 psi at 100 °F (37.8 °C) as determined by ASTM (American Society for Testing and Materials).
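The three criteria above can be folded into a small classifier. The sketch below simply encodes the thresholds listed in this section; the function name and the example pressures are illustrative assumptions, not part of any regulation.

```python
def is_compressed_gas(abs_pressure_70f_psi: float,
                      abs_pressure_130f_psi: float,
                      vapor_pressure_100f_psi: float = 0.0) -> bool:
    """Return True if a material meets any of the three listed criteria.

    abs_pressure_70f_psi:    absolute pressure in the container at 70 F
    abs_pressure_130f_psi:   absolute pressure in the container at 130 F
    vapor_pressure_100f_psi: vapor pressure at 100 F (for liquids), per ASTM method
    """
    return (abs_pressure_70f_psi > 40
            or abs_pressure_130f_psi > 104
            or vapor_pressure_100f_psi > 40)

# Example (assumed values): a cylinder at 2200 psi absolute at 70 F clearly qualifies.
print(is_compressed_gas(abs_pressure_70f_psi=2200, abs_pressure_130f_psi=2400))  # True
```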
Within laboratories, compressed gases are usually supplied either through fixed piped gas systems or individual cylinders of gases. Compressed gases can be toxic, flammable, oxidizing, corrosive, or inert. Leakage of any of these gases can be hazardous.
Store, handle, and use compressed gases
All cylinders, whether empty or full, must be stored upright.
Secure cylinders of compressed gases. Cylinders should never be dropped or allowed to strike each other with force.
Transport compressed gas cylinders with protective caps in place and do not roll or drag the cylinders.
Cryogens and dry ice
Cryogens, substances used to produce very low temperatures [below -153 °C (-243 °F)], such as liquid nitrogen (LN2) which has a boiling point of -196 °C (-321 °F), are commonly used in laboratories.
Although not a cryogen, solid carbon dioxide or dry ice which converts directly to carbon dioxide gas at -78 °C (-109 °F) is also often used in laboratories. Shipments packed with dry ice, samples preserved with liquid nitrogen, and in some cases, techniques that use cryogenic liquids, such as cryogenic grinding of samples, present potential hazards in the laboratory.
Hand protection is required to guard against the hazard of touching cold surfaces. It is recommended that workers use cryogen safety gloves.
Eye protection is required at all times when working with cryogenic fluids. When pouring a cryogen, working with a wide-mouth Dewar flask, or around the exhaust of cold boil-off gas, use of a full face shield is recommended.
Personal protective equipment
Personal protective equipment, or PPE, is equipment worn to protect against exposure to hazardous substances. PPE does not eliminate hazards; it helps protect the user from exposure. To ensure safety, workplaces provide instructions and training on how to choose and use proper PPE in different situations.
PPE includes:
Long-sleeved shirts, lab coats, aprons
Goggles
Safety gloves;
The two most common types of safety gloves are latex and nitrile gloves. Latex gloves have a high sensitivity when it comes to contact and fine control which is very suitable for surgery. Nitrile gloves are generally more durable and resistant to tearing and chemicals. However, the sulfur in some nitrile gloves can oxidize silver and other highly reactive metals.
Face shield or mask
Particulate respirator
Organic vapor respirator
Electrical
In the laboratory, there is the potential for workers to be exposed to electrical hazards including electric shock, electrocutions, fires, and explosions. Damaged electrical cords can lead to possible shocks or electrocutions. A flexible electrical cord may be damaged by door or window edges, by staples and fastenings, by equipment rolling over it, or simply by aging.
The potential for possible electrocution or electric shock or contact with electrical hazards can result from a number of factors, including the following:
Faulty electrical equipment/instrumentation or wiring;
Damaged receptacles and connectors; and
Unsafe work practices.
Fire
Fire is the most common serious hazard that one faces in a typical laboratory. While proper procedures and training can minimize the chances of an accidental fire, laboratory workers should still be prepared to deal with a fire emergency should it occur. In dealing with a laboratory fire, all containers of infectious materials should be placed into autoclaves, incubators, refrigerators, or freezers for containment.
Small bench-top fires in laboratory spaces are not uncommon. Large laboratory fires are rare. However, the risk of severe injury or death is significant because fuel load and hazard levels in labs are typically very high. Laboratories, especially those using solvents in any quantity, have the potential for flash fires, explosions, rapid spread of fire, and high toxicity of products of combustion (heat, smoke, and flame).
Glassware
Broken glass is a sharps hazard.
Correct eye protection should be worn in most experiments involving glassware.
Inserting a glass rod through a stopper can introduce the possibility of a stab wound or sharps injury if the rod breaks. The hands must be protected.
Tubing should be cut from a barbed connection so as not to shatter the connection. A quick disconnect is preferable to a barbed fitting.
Ground glass joints can become a breaking hazard if they freeze.
Broken and other waste glass should be discarded in a separate container specially marked to indicate its contents.
Glassware should always be labeled as to its contents.
Rapid heating (or cooling) may cause uneven thermal expansion putting too much mechanical stress on the surface and cause it to fracture. Fracturing is a concern when people new to laboratory become impatient and heat glassware, especially the larger pieces, too fast. Heating of glassware should be slowed using an insulating material, such as metal foil or wool, or specialized equipment such as heated baths, heating mantles or laboratory grade hot plates to avoid fracturing.
Hot glass looks like cold glass, so a person must be careful to avoid grabbing hot glassware.
Glassware can explode if the exhaust is in any way restricted, so any apparatus should be vented.
Glassware can implode under negative pressure.
When connecting joints, it is the responsibility of the person overseeing the experiment to select the correct seal. For example, PTFE tape, bands, and fluoroether-based grease or oils may emit toxic perfluoroisobutylene fumes if the rated temperature limits are exceeded.
| Physical sciences | Research methods | Basics and measurement |
25119018 | https://en.wikipedia.org/wiki/Plaice | Plaice | Plaice is a common name for a group of flatfish that comprises four species: the European, American, Alaskan and scale-eye plaice.
Commercially, the most important plaice is the European. The principal commercial flatfish in Europe, it is also widely fished recreationally, has potential as an aquaculture species, and is kept as an aquarium fish. Also commercially important is the American plaice.
The term plaice (plural plaice) comes from the 14th-century Anglo-French plais. This in turn comes from the late Latin platessa, meaning flatfish, which originated from the Ancient Greek platys, meaning broad.
Plaice species
European plaice
The European plaice (Pleuronectes platessa) is a right-eyed flounder belonging to the family Pleuronectidae. It is a commercially important flatfish that lives on the sandy bottoms of the European shelf. It ranges geographically from the Barents Sea to the Mediterranean. European plaice are characterised by their smooth brown skin, with distinctive red spots and a bony ridge behind the eyes. They feed on polychaetes, crustaceans and bivalves and can be found at depths of up to 200 metres. At night they move into shallow waters to feed and during the day they bury themselves in the sand. Their maximum recorded length is and maximum reported age 50 years.
Together with sole, European plaice form a group of flatfish that are the most important flatfish in Europe. European plaice have been fished from the North Sea for hundreds of years. They are usually fished from beam trawlers, otter trawlers or seiners. In the Celtic Sea the plaice species is considered overfished.
American plaice
Like the European plaice, the American plaice is a right eyed flatfish belonging to the family Pleuronectidae. American plaice are an Atlantic species, which range from southern Labrador to Rhode Island. They are also found in Europe, where they are called rough dab or long rough dab. They spawn in the Gulf of Maine, with peak activity in April and May. They are brown or reddish, and are generally smaller than European plaice, with a rougher skin and larger scales. Their maximum recorded length is , and maximum reported age 30 years. They are usually found between depths of on sandy bottoms with temperatures between . They feed on small fishes and invertebrates.
The species is considered by the Northwest Atlantic Fisheries Organization to be overfished, with no signs of recovery. Though they are also currently endangered in Canada due to overfishing, the Canadian government believes the species is abundant. Flatfish, as a group, are second only to cod in Canadian catch weight, with American plaice accounting for 50 percent of all flatfish caught.
American plaice may be an intermediate host for the nematode parasite Otostrongylus circumlitis, which is a lungworm of seals, primarily affecting animals less than 1 year of age.
Alaska plaice
Alaska plaice can live for up to 30 years and grow to long, but most that are caught are only seven or eight years old and about .
Most commercial fisheries do not target Alaska plaice, but many are caught as bycatch by commercial trawlers trying to catch other bottom fish. Thus, many Alaska plaice get caught anyway — so much so that, for example, the 2005 total allowable catch in the Bering Sea and Aleutian Islands management area (BSAI) was reached before the end of May of that year.
Scale-eye plaice
The scale-eye plaice is a flatfish of the family Pleuronectidae. It is a demersal fish that lives at depths of . It can reach in length and can weigh up to . Its native habitat is the northern Pacific, primarily from the Sea of Okhotsk to Japan and Korea, though it is also found in the Bering Sea.
Current conservation and management status
Plaice, along with the other major demersal fish in the North Sea such as cod, monkfish and sole, is listed by the ICES as "outside safe biological limits." Moreover, they are growing less quickly now and are rarely older than six years, whereas they can reach forty. The World Wide Fund for Nature says that in 2006 "of the eight plaice stocks recognised by ICES, only one is considered to be harvested sustainably while three are overexploited. Data is insufficient to assess the remaining stocks; however, landings for all stocks are at or near historical lows."
In cuisine and culture
In North German and Danish cuisine, plaice is one of the most commonly eaten fish. Filleted, battered, and pan-fried plaice is popular hot or cold as an open sandwich topping together with remoulade sauce and lemon slices. Battered plaice is often served hot with french fries and remoulade sauce as a main dish; this fish and chips variant is popular and is commonly available on children's menus in Danish restaurants. Breaded frozen plaice, ready to be baked or fried at home, are readily available in supermarkets. Fresh plaice is also oven-baked.
"The flesh of plaice is white, tender and subtle-flavoured."
Smoked plaice is one of the traditional summertime delicacies of Hiiumaa island.
| Biology and health sciences | Acanthomorpha | null |
25122906 | https://en.wikipedia.org/wiki/Home%20computer | Home computer | Home computers were a class of microcomputers that entered the market in 1977 and became common during the 1980s. They were marketed to consumers as affordable and accessible computers that, for the first time, were intended for the use of a single, non-technical user. These computers were a distinct market segment that typically cost much less than business, scientific, or engineering-oriented computers of the time, such as those running CP/M or the IBM PC, and were generally less powerful in terms of memory and expandability. However, a home computer often had better graphics and sound than contemporary business computers. Their most common uses were word processing, playing video games, and programming.
Home computers were usually sold already manufactured in stylish metal or plastic enclosures. However, some home computers also came as commercial electronic kits, like the Sinclair ZX80, which were both home and home-built computers since the purchaser could assemble the unit from a kit.
Advertisements in the popular press for early home computers were rife with possibilities for their practical use in the home, from cataloging recipes to personal finance to home automation, but these were seldom realized in practice. For example, using a typical 1980s home computer as a home automation appliance would require the computer to be kept powered on at all times and dedicated to this task. Personal finance and database use required tedious data entry.
By contrast, advertisements in the specialty computer press often simply listed specifications, assuming a knowledgeable user who already had applications in mind. If no packaged software was available for a particular application, the home computer user could program one—provided they had invested the requisite hours to learn computer programming, as well as the idiosyncrasies of their system. Since most systems arrived with the BASIC programming language included on the system ROM, it was easy for users to get started creating their own simple applications. Many users found programming to be a fun and rewarding experience, and an excellent introduction to the world of digital technology.
The line between 'business' and 'home' computer market segments vanished completely once IBM PC compatibles became commonly used in the home, since now both categories of computers typically use the same processor architectures, peripherals, operating systems, and applications. Often, the only difference may be the sales outlet through which they are purchased. Another change from the home computer era is that the once-common endeavor of writing one's own software programs has almost vanished from home computer use.
Background
As early as 1965, some experimental projects, such as Jim Sutherland's ECHO IV, explored the possible utility of a computer in the home. In 1969, the Honeywell Kitchen Computer was marketed as a luxury gift item, and would have inaugurated the era of home computing, but none were sold.
Computers became affordable for the general public in the 1970s due to the mass production of the microprocessor, starting in 1971. Early microcomputers such as the Altair 8800 had front-mounted switches and diagnostic lights (nicknamed "blinkenlights") to control and indicate internal system status, and were often sold in kit form to hobbyists. These kits would contain an empty printed circuit board which the buyer would fill with the integrated circuits, other individual electronic components, wires and connectors, and then hand-solder all the connections.
While two early home computers (Sinclair ZX80 and Acorn Atom) could be bought either in kit form or assembled, most home computers were only sold pre-assembled. They were enclosed in plastic or metal cases similar in appearance to typewriter or hi-fi equipment enclosures, which were more familiar and attractive to consumers than the industrial metal card-cage enclosures used by the Altair and similar computers. The keyboard - a feature lacking on the Altair - was usually built into the same case as the motherboard. Ports for plug-in peripheral devices such as a video display, cassette tape recorders, joysticks, and (later) disk drives were either built-in or available on expansion cards. Although the Apple II had internal expansion slots, most other home computer models' expansion arrangements were through externally-accessible 'expansion ports' that also served as a place to plug in cartridge-based games. Usually, the manufacturer would sell peripheral devices designed to be compatible with their computers as extra-cost accessories. Peripherals and software were not often interchangeable between different brands of home computer, or even between successive models of the same brand.
To save the cost of a dedicated monitor, the home computer would often connect through an RF modulator to the family TV set, which served as both video display and sound system.
The rise of the home computer also led to a fundamental shift during the early 1980s in where and how computers were purchased. Traditionally, microcomputers were obtained by mail order or were purchased in person at general electronics retailers like RadioShack. Silicon Valley, in the vanguard of the personal computer revolution, was the first place to see the appearance of new retail stores dedicated to selling only computer hardware, computer software, or both, and also the first place where such stores began to specialize in particular platforms.
By 1982, an estimated 621,000 home computers were in American households, at an average sales price of . After the success of the Radio Shack TRS-80, the Commodore PET, and the original Apple II in 1977, almost every manufacturer of consumer electronics rushed to introduce a home computer. Large numbers of new machines of all types began to appear during the late 1970s and early 1980s. Mattel, Coleco, Texas Instruments, and Timex, none of which had any prior connection to the computer industry, all had short-lived home computer lines in the early 1980s. Some home computers were more successful. The BBC Micro, Sinclair ZX Spectrum, Atari 8-bit computers, and Commodore 64 sold many units over several years and attracted third-party software development.
Almost universally, home computers had a BASIC interpreter combined with a line editor in permanent read-only memory, which one could use to type in BASIC programs and execute them immediately, or save them to tape or disk. In direct mode, the BASIC interpreter was also used as the user interface, and given tasks such as loading, saving, managing, and running files. One exception was the Jupiter Ace, which had a Forth interpreter instead of BASIC. A built-in programming language was seen as a requirement for any computer of the era, and was the main feature setting home computers apart from video game consoles.
Still, home computers competed in the same market as the consoles. A home computer was often seen as simply a higher-end purchase than a console, adding abilities and productivity potential to what would still be mainly a gaming device. A common marketing tactic was to show a computer system and console playing games side by side, then emphasizing the computer's greater ability by showing it running user-created programs, education software, word processing, spreadsheet, and other applications, while the game console showed a blank screen or continued playing the same repetitive game. Another capability home computers had that game consoles of the time lacked was the ability to access remote services over telephone lines by adding a serial port interface, a modem, and communication software. Though it could be costly, it permitted the computer user to access services like Compuserve, and private or corporate bulletin board systems and viewdata services to post or read messages, or to download or upload software. Some enthusiasts with computers equipped with large storage capacity and a dedicated phone line operated bulletin boards of their own. This capability anticipated the internet by nearly 20 years.
Some game consoles offered "programming packs" consisting of a version of BASIC in a ROM cartridge. Atari's BASIC Programming for the Atari 2600 was one of these. For the ColecoVision console, Coleco even announced an expansion module which would convert it into a full-fledged computer system. The Magnavox Odyssey² console had a built-in keyboard to support its C7420 Home Computer Module. Among third-generation consoles, Nintendo's Family Computer offered Family BASIC (sold only in Japan), which included a keyboard that could be connected to an external tape recorder to load and store programs.
Books of type-in program listings like BASIC Computer Games were available, dedicated for the BASICs of most models of computer, with titles along the lines of 64 Amazing BASIC Games for the Commodore 64. While most of the programs in these books were short and simple games or demos, some titles, such as Compute!s SpeedScript series, contained productivity software that rivaled commercial packages. To avoid the tedious process of typing in a program listing from a book, these books would sometimes include a mail-in offer from the author to obtain the programs on disk or cassette for a few dollars. Before the Internet, and before most computer owners had a modem, books were a popular and low-cost means of software distribution—one that had the advantage of incorporating its own documentation. These books also served a role in familiarizing new computer owners with the concepts of programming; some titles added suggested modifications to the program listings for the user to carry out. Applying a patch to modify software to be compatible with one's system, or writing a utility program to fit one's needs, was a skill every advanced computer owner was expected to have.
During the peak years of the home computer market, scores of models were produced, usually as individual design projects with little or no thought given to compatibility between different manufacturers, or even within product lines of the same manufacturer. Except for the Japanese MSX standard, the concept of a computer platform was still forming, with most companies considering rudimentary BASIC language and disk format compatibility sufficient to claim a model as "compatible". Things were different in the business world, where cost-conscious small business owners had been using CP/M running on Z80-based computers from Osborne, Kaypro, Morrow Designs, and a host of other manufacturers. For many of these businesses, the development of the microcomputer made computing and business software affordable where they had not been before.
Introduced in August 1981, the IBM Personal Computer would eventually supplant CP/M as the standard platform used in business. This was largely due to the IBM name and the system's 16 bit open architecture, which expanded maximum memory tenfold, and also encouraged production of third-party clones. In the late 1970s, the 6502-based Apple II had carved out a niche for itself in business, thanks to the industry's first killer app, VisiCalc, released in 1979. However, the Apple II would quickly be displaced for office use by IBM PC compatibles running Lotus 1-2-3. Apple Computer's 1980 Apple III was underwhelming, and although the 1984 release of the Macintosh introduced the modern GUI to the market, it was not common until IBM-compatible computers adopted it. Throughout the 1980s, businesses large and small adopted the PC platform, leading, by the end of the decade, to sub-US$1000 IBM PC XT-class white box machines, usually built in Asia and sold by US companies like PCs Limited.
In 1980, Wayne Green, the publisher of Kilobaud Microcomputing, recommended that companies avoid the term "home computer" in their advertising, as it "I feel is self-limiting for sales...I prefer the term "microcomputers" since it doesn't limit the uses of the equipment in the imagination of the prospective customers". With the exception of Tandy, most computer companies – even those with a majority of sales to home users – agreed, avoiding the term "home computer" because of its association with the image of, as Compute! wrote, "a low-powered, low-end machine primarily suited for playing games". Apple consistently avoided stating that it was a home-computer company, and described the IIc as "a serious computer for the serious home user", despite competing against IBM's PCjr home computer. John Sculley denied that his company sold home computers; rather, he said, Apple sold "computers for use in the home". In 1990, the company reportedly refused to support joysticks on its low-cost Macintosh LC and IIsi computers to prevent customers from considering them as "game machines".
Although the Apple II and Atari computers are functionally similar, Atari's home-oriented marketing resulted in a game-heavy library with much less business software. By the late 1980s, many mass merchants sold video game consoles like the Nintendo Entertainment System, but no longer sold home computers.
Toward the end of the 1980s, clones also became popular with non-corporate customers. Inexpensive, highly-compatible clones succeeded where the PCjr had failed. Replacing the hobbyists who had made up the majority of the home computer market were, as Compute! described them, "people who want to take work home from the office now and then, play a game now and then, learn more about computers, and help educate their children". By 1986, industry experts predicted an "MS-DOS Christmas", and the magazine stated that clones threatened Commodore, Atari, and Apple's domination of the home-computer market.
The declining cost of IBM compatibles on the one hand, and the greatly-increased graphics, sound, and storage abilities of fourth generation video game consoles such as the Sega Genesis and Super Nintendo Entertainment System on the other, combined to cause the market segment for home computers to vanish by the early 1990s in the US. In Europe, the home computer remained a distinct presence for a few years more, with the low-end models of the 16-bit Amiga and Atari ST families being the dominant players, but by the mid-1990s, even the European market had dwindled. The Dutch government even ran a program that allowed businesses to sell computers tax-free to its employees, often accompanied by home training programs. Naturally, these businesses chose to equip their employees with the same systems they themselves were using. Today, a computer bought for home use anywhere will be very similar to those used in offices; made by the same manufacturers, with compatible peripherals, operating systems, and application software.
Technology
Many home computers were superficially similar. Most had a keyboard integrated into the same case as the motherboard (or mainboard). Although expandable home computers appeared from the very start (the Apple II offered as many as seven expansion slots), the segment as a whole was aimed downmarket, and few models were priced or positioned high enough to allow for much expandability. Some systems had only one expansion port, often realized in the form of cumbersome "sidecar" modules, as on the TI-99/4, or required finicky and unwieldy ribbon cables to connect the expansion modules.
In the early days they were sometimes equipped with a cheap membrane or chiclet keyboard, although full-travel keyboards quickly became universal due to overwhelming consumer preference. Most systems could use an RF modulator to display 20–40 column text output on a home television. Indeed, the use of a television set as a display almost defines the pre-PC home computer. Although dedicated composite or "green screen" computer displays were available for this market segment and offered a sharper display, a monitor was often a later purchase made only after users had bought a floppy disk drive, printer, modem, and the other pieces of a full system. The reason was that a TV used as a monitor had difficulty displaying the clear, readable 80-column text that had become the industry standard; only power users running business software really needed that, while the average casual consumer used the system for games and was content with the lower resolution, for which a TV worked fine. An important exception was the Radio Shack TRS-80, the first mass-marketed computer for home use, which included its own 64-column display monitor and full-travel keyboard as standard features.
This "peripherals sold separately" approach is another defining characteristic of the home computer era. A first-time computer buyer who brought a base C-64 system home and hooked it up to their TV would find they needed to buy a disk drive (the Commodore 1541 was the only fully-compatible model) or Datasette before they could make use of it as anything but a game machine or TV Typewriter.
In the early part of the 1980s, the dominant microprocessors used in home computers were the 8-bit MOS Technology 6502 (Apple, Commodore, Atari, BBC Micro) and Zilog Z80 (TRS-80, ZX81, ZX Spectrum, Commodore 128, Amstrad CPC). One exception was the TI-99/4, announced in 1979 with a 16-bit TMS9900 CPU. The TI was originally to use the 8-bit 9985 processor designed especially for it, but this project was cancelled. However, the glue logic needed to retrofit the 16-bit CPU to the system designed around the 8-bit 9985 negated the advantages of the more powerful CPU. Another exception was the Soviet Elektronika BK series of 1984, which used the fully 16-bit 1801-series CPU, powerful for its time, offering full PDP-11 compatibility and a fully functional Q-Bus slot, though at the cost of very limited RAM and graphics. The Motorola 6809 was used by the Radio Shack TRS-80 Color Computer, the Fujitsu FM-7, and the Dragon 32/64.
Processor clock rates were typically 1–2 MHz for 6502- and 6809-based CPUs and 2–4 MHz for Z80-based systems (yielding roughly equal performance), but this aspect was not emphasized by users or manufacturers, as the systems' limited RAM capacity, graphics abilities, and storage options had a more perceivable effect on performance than CPU speed. For low-price computers, the cost of RAM chips contributed greatly to the final product price, and fast CPUs demanded expensive, fast memory. As a result, designers kept clock rates no higher than necessary. In some cases, like the Atari and Commodore 8-bit machines, coprocessors were added to speed processing of graphics and audio data. For these computers, clock rate was considered a technical detail of interest only to users needing accurate timing for their own programs. To economize on component cost, the same crystal used to produce color television-compatible signals was often also divided down and used for the processor clock. This meant processors rarely operated at their full rated speed, and had the side-effect that European and North American versions of the same home computer operated at slightly different speeds and different video resolutions due to different television standards.
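The crystal-sharing arrangement can be made concrete with a small calculation. The crystal frequencies and divider ratios below are figures commonly cited for the NTSC and PAL versions of the Commodore 64 and are given purely for illustration; other home computers used different crystals and dividers.

```python
# Illustrative calculation: deriving the CPU clock from the television color
# crystal, using divider ratios commonly cited for the Commodore 64.
NTSC_CRYSTAL_HZ = 14_318_180   # 4 x the 3.579545 MHz NTSC colorburst
PAL_CRYSTAL_HZ = 17_734_475    # 4 x the 4.43361875 MHz PAL colorburst

ntsc_cpu = NTSC_CRYSTAL_HZ / 14   # divide-by-14 (commonly cited NTSC ratio)
pal_cpu = PAL_CRYSTAL_HZ / 18     # divide-by-18 (commonly cited PAL ratio)

print(f"NTSC CPU clock: {ntsc_cpu / 1e6:.6f} MHz")   # ~1.022727 MHz
print(f"PAL  CPU clock: {pal_cpu / 1e6:.6f} MHz")    # ~0.985249 MHz
# The same machine therefore ran slightly faster in North America than in Europe.
```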
Initially, many home computers used the then-ubiquitous compact audio cassette as a storage mechanism. A rough analogy to how this worked would be placing a recorder on the phone line as a file was uploaded by modem to "save" it, and playing the recording back through the modem to "load". Most cassette implementations were notoriously slow and unreliable, but 8" drives were too bulky for home use, and early 5.25" form-factor drives were priced for business use, out of reach of most home buyers. An innovative alternative was the Exatron Stringy Floppy, a continuous-loop tape drive which was much faster than a data cassette drive and could perform much like a floppy disk drive. It was available for the TRS-80 and some others. A closely related technology was the ZX Microdrive, developed by Sinclair Research in the UK for their ZX Spectrum and QL home computers.
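The analogy to modem tones can be made concrete: like a modem, a cassette interface encoded bits as audio tones. The sketch below generates a tone-per-bit waveform in the spirit of the Kansas City standard (a 0 bit as 1200 Hz, a 1 bit as 2400 Hz, at roughly 300 bits per second); actual home computer cassette formats varied widely, so treat this purely as an illustration.

```python
import numpy as np

# Illustrative audio-tone encoding of bits, loosely following the Kansas City
# standard (0 bit -> 1200 Hz, 1 bit -> 2400 Hz, ~300 bits per second).
# Real home-computer cassette formats varied widely; this is a sketch only.
SAMPLE_RATE = 44100
BAUD = 300
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD

def encode_bits(bits):
    """Return a float waveform with one audio tone per bit."""
    chunks = []
    for bit in bits:
        freq = 2400 if bit else 1200
        t = np.arange(SAMPLES_PER_BIT) / SAMPLE_RATE
        chunks.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# Encode one example byte (0x41, 'A'), least-significant bit first.
byte = 0x41
bits = [(byte >> i) & 1 for i in range(8)]
waveform = encode_bits(bits)
print(f"{len(waveform)} samples ~ {len(waveform) / SAMPLE_RATE * 1000:.0f} ms of audio for one byte")
```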
Eventually, mass production of 5.25" drives resulted in lower prices, and after about 1984, they pushed cassette drives out of the US home computer market. 5.25" floppy disk drives would remain standard until the end of the 8-bit era. Though external 3.5" drives were made available for home computer systems toward the latter part of the 1980s, almost all software sold for 8-bit home computers remained on 5.25" disks, and 3.5" drives were mostly used for data storage; the exception was the Japanese MSX standard, on which 5.25" floppies were never popular. Standardization of disk formats was not common; sometimes, even different models from the same manufacturer used different disk formats. Almost universally, the floppy disk drives available for 8-bit home computers were housed in external cases, with their own controller boards and power supplies contained within. Only the later, advanced 8-bit home computers housed their drives within the main unit; these included the TRS-80 Model III, TRS-80 Model 4, Apple IIc, MSX2, and Commodore 128D. The later 16-bit machines, such as the Atari 1040ST (not the 520ST), Amiga, and Tandy 1000, did house floppy drive(s) internally. At any rate, to expand any computer with additional floppy drives, external units would have to be plugged in.
Toward the end of the home computer era, drives for a number of home computer models appeared offering disk-format compatibility with the IBM PC. The disk drives sold with the Commodore 128, Amiga, and Atari ST were all able to read and write PC disks, which themselves were undergoing the transition from the 5.25" to the 3.5" format at the time (though 5.25" drives remained common on PCs until the late 1990s, due to the existence of large software and data archives on 5.25" floppies). 5.25" drives were made available for the ST, Amiga, and Macintosh, which were otherwise 3.5"-based systems with no other use for the 5.25" format. Hard drives were never popular on home computers, remaining an expensive, niche product mainly for BBS sysops and the few business users.
Various copy protection schemes were developed for floppy disks; most were broken in short order. Many users would only tolerate copy protection for games, as wear and tear on disks was a significant issue in an entirely floppy-based system. The ability to make a "working backup" disk of vital application software was seen as important. Copy programs that advertised their ability to copy or even remove common protection schemes were a common category of utility software in this pre-DMCA era.
In another defining characteristic of the home computer, instead of a command line, the BASIC interpreter served double duty as a user interface. Coupled with a character-based screen editor or line editor, the interpreter allowed BASIC's file management commands to be entered in direct mode. In contrast to modern computers, home computers most often had their operating system (OS) stored in ROM chips. This made startup times very fast (no more than a few seconds), but made OS upgrades difficult or impossible without buying a new unit. Usually, only the most severe bugs were fixed by issuing new ROMs to replace the old ones at the user's cost. In addition, the small size and limited scope of home computer "operating systems" (really little more than what today would be called a kernel) left little room for bugs to hide.
Although modern operating systems include extensive programming libraries to ease development and promote standardization, home computer operating systems provided little support to application programs. Professionally-written software often switched out the ROM-based OS anyway to free the address space it occupied and maximize RAM capacity. This gave the program full control of the hardware and allowed the programmer to optimize performance for a specific task. Games would often turn off unused I/O ports, as well as the interrupts that served them. As multitasking was never common on home computers, this practice went largely unnoticed by users. Most software even lacked an exit command, requiring a reboot to use the system for something else.
In an enduring reflection of their early cassette-oriented nature, most home computers loaded their disk operating system (DOS) separately from the main OS. The DOS was only used for disk and file-related commands and was not required to perform other computing functions. One exception was Commodore DOS, which was not loaded into the computer's main memory at all – Commodore disk drives contained a 6502 processor and ran DOS from internal ROM. While this gave Commodore systems some advanced capabilities – a utility program could sideload a disk copy routine onto the drive and return control to the user while the drive copied the disk on its own – it also made Commodore drives more expensive and difficult to clone.
Many home computers had a cartridge interface which accepted ROM-based software. This was also used for expansion or upgrades such as fast loaders. Application software on cartridge did exist, which loaded instantly and eliminated the need for disk swapping on single-drive setups, but the vast majority of cartridges were games.
PCs at home
From the introduction of the IBM Personal Computer (ubiquitously known as the PC) in 1981, the market for computers meant for the corporate, business, and government sectors came to be dominated by the new machine and its MS-DOS operating system. Even basic PCs cost thousands of dollars and were far out of reach for typical home computer users. However, in the following years, technological advances and improved manufacturing capabilities (mainly greater use of robotics and relocation of production plants to lower-wage locations in Asia) permitted several computer companies to offer lower-cost, PC-style machines that would become competitive with many 8-bit home-market pioneers like Radio Shack, Commodore, Atari, Texas Instruments, and Sinclair. PCs never became quite as affordable as these, because the same price-reducing measures were available to all computer makers, including the 8-bit vendors. Furthermore, software and peripherals for PC-style computers tended to cost more than those for 8-bit computers because of the anchoring effect caused by the pricey IBM PC. In addition, PCs were inherently more expensive since they could not use the home TV set as a video display. Nonetheless, the overall reduction in manufacturing costs narrowed the price difference between old 8-bit technology and new PCs. Despite their higher absolute prices, PCs were perceived by many to be better values for their utility as superior productivity tools and their access to industry-standard software. Another advantage was the 8088/8086's wide, 20-bit address bus: the PC could access more than 64 kilobytes of memory relatively inexpensively, whereas 8-bit CPUs, with their 16-bit address buses, required complicated memory management techniques such as bank switching to go beyond that limit. Similarly, the default PC floppy was double-sided, with about twice the storage capacity of floppy disks used by 8-bit home computers. PC drives tended to cost less because they were most often built-in, requiring no external case, controller, or power supply. The faster clock rates and wider buses available to later Intel CPUs compensated somewhat for the custom graphics and sound chips of the Commodores and Ataris. In time, the growing popularity of home PCs spurred many software publishers to offer gaming and children's software titles.
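To make the address-space comparison concrete, the sketch below contrasts the memory reachable through a 16-bit versus a 20-bit address bus and models a toy bank-switching scheme. It is a simplified illustration with a hypothetical window address and bank register; it is not modeled on any particular machine's memory map.

```python
# Simplified illustration (not any specific machine): address space reachable
# with 16-bit vs 20-bit addressing, and a toy bank-switching workaround that
# maps a larger memory into a fixed window of an 8-bit CPU's address space.

ADDRESSABLE_16BIT = 2 ** 16   # 65,536 bytes (64 KB)
ADDRESSABLE_20BIT = 2 ** 20   # 1,048,576 bytes (1 MB)

WINDOW_BASE, WINDOW_SIZE = 0x8000, 0x4000                # hypothetical 16 KB window
ram_banks = [bytearray(WINDOW_SIZE) for _ in range(8)]   # 128 KB of banked RAM
current_bank = 0

def select_bank(n: int) -> None:
    """Simulate writing a (hypothetical) bank-select register."""
    global current_bank
    current_bank = n % len(ram_banks)

def read_byte(addr: int) -> int:
    """Read from the 64 KB CPU address space; the window shows one bank."""
    if WINDOW_BASE <= addr < WINDOW_BASE + WINDOW_SIZE:
        return ram_banks[current_bank][addr - WINDOW_BASE]
    return 0  # other regions (ROM, I/O, fixed RAM) omitted in this sketch
```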
Many decision-makers in the computer industry believed there could be a viable market for office workers who used PC/DOS computers at their jobs and would appreciate being able to bring diskettes of data home on weeknights and weekends to continue work after-hours on their "home" computers. So, the ability to run industry-standard MS-DOS software on affordable, user-friendly PCs was anticipated as a source of new sales. Furthermore, many in the industry felt that MS-DOS would eventually (inevitably, it seemed) come to dominate the computer business entirely, and some manufacturers felt the need to offer individual customers PC-style products suitable for the home market.
In early 1984, market colossus IBM produced the PCjr as a PC/DOS-compatible machine aimed squarely at the home user. It proved a spectacular failure because IBM deliberately limited its capabilities and expansion possibilities in order to avoid cannibalizing sales of the profitable PC. IBM management believed that if they made the PCjr too powerful, too many buyers would prefer it over the bigger, more expensive PC. Poor reviews in the computer press and poor sales doomed the PCjr.
Tandy Corporation capitalized on IBM's blunder with its PCjr-compatible Tandy 1000 in November 1984. Like the PCjr, it was pitched as a home, education, and small-business computer, featuring joystick ports and better sound and graphics (same as the PCjr but with enhancements), combined with near-PC/DOS compatibility (unlike Tandy's earlier Tandy 2000). The improved Tandy 1000 video hardware became a standard of its own, known as the Tandy Graphics Adapter or TGA. Later, Tandy produced Tandy 1000 variants in form factors and price-points even more suited to the home computer market, particularly the Tandy 1000 EX and HX models (later supplanted by the 1000 RL), which came in cases resembling the original Apple IIs (CPU, keyboard, expansion slots, and power supply in a slimline cabinet) but also included floppy disk drives. The proprietary Deskmate productivity suite came bundled with the Tandy 1000s. Deskmate was suited to use by computer novices with its point-and-click (though not graphical) user interface. From the launch of the Tandy 1000 series, the machines were price-competitive to manufacture because of Tandy's use of high-density ASIC chip technology, which allowed their engineers to integrate many hardware features into the motherboard (obviating the need for circuit cards in expansion slots as with other brands of PC). Tandy never transferred its manufacturing operation to Asia; all Tandy desktop computers were built in the USA (this was not true of the laptop and pocket computers, nor peripherals).
In 1985, the Epson corporation, a popular and respected producer of inexpensive dot-matrix printers and business computers (the QX-10 and QX-16), introduced its low-cost Epson Equity PC. Its designers took minor shortcuts, such as providing few expansion slots and omitting a socket for an 8087 math coprocessor, but Epson did bundle some utility programs that offered decent turnkey functionality for novice users. While not a high performer, the Equity was a reliable and compatible design for half the price of a similarly-configured IBM PC. Epson often promoted sales by bundling one of their printers with it at cost. The Equity I sold well enough to warrant continuing the Equity line with the follow-on Equity II and Equity III.
In 1986, UK home computer maker Amstrad began producing their PC1512 PC-compatible for sale in the UK. Later they would market the machine in the US as the PC6400. In June 1987, an improved model was produced as the PC1640. These machines had fast 8086 CPUs, enhanced CGA graphics, and were feature-laden for their modest prices. They had joystick adapters built into their keyboards and shipped with a licensed version of Digital Research's GEM, a GUI for the MS-DOS operating system. They became marginal successes in the home market.
In 1987, longtime small computer maker Zenith introduced a low-cost PC they called the EaZy PC. This was positioned as an "appliance" computer much like the original Apple Macintosh: turnkey startup, a built-in monochrome video monitor, and no expansion slots (expansion required proprietary add-ons available only from Zenith), though unlike the Macintosh it retained the traditional MS-DOS command-line interface. The EaZy PC used an NEC V40 CPU (an up-rated 8088) which was rather slow for its time, but the video monitor did feature 400-pixel vertical resolution. This unique computer failed for the same reasons as IBM's PCjr: poor performance and expandability, and a price too high for the home market.
Another company that offered low-cost PCs for home use was Leading Edge, with their Model M and Model D computers. These were configured like full-featured business PCs, yet still could compete in the home market on price because Leading Edge had access to low-cost hardware from their Asian manufacturing partners, Mitsubishi for the Model M and Daewoo for the Model D. The Leading Edge Word Processor (LEWP) was bundled with the Model D, which was favorably reviewed by the computer press and sold very well.
By the mid '80s, the market for inexpensive PCs for home use was expanding at such a rate that the two leaders in the US home computer market, Commodore and Atari, themselves felt compelled to enter it with their own lines of PC compatibles. They were only marginally successful compared to companies that made only PCs.
Still, later prices of white box PC clone computers by various manufacturers became competitive with the higher-end home computers (see below). Throughout the 1980s, costs and prices continued to be driven down by: advanced circuit design and manufacturing, multi-function expansion cards, shareware applications such as PC-Talk, PC-Write, and PC-File, greater hardware reliability, and more user-friendly software that demanded less customer support services. The increasing availability of faster processor and memory chips, inexpensive EGA and VGA video cards, sound cards, and joystick adapters also bolstered the viability of PC/DOS computers as alternatives to specially-made computers and game consoles for the home.
High performance
From about 1985, the high end of the home computer market began to be dominated by "next-generation" home computers using the 16-bit Motorola 68000 chip, which enabled the greatly-increased abilities of the Amiga and Atari ST series (in the UK, the Sinclair QL was built around the Motorola 68008 with its external 8-bit bus). Graphics resolutions approximately doubled to give roughly NTSC-class resolution, and color palettes increased from dozens to hundreds or thousands of colors available. The Amiga was built with a custom chipset with dedicated graphics and sound coprocessors for high-performance video and audio. The Amiga found use as a workstation for desktop video, a first for a stand-alone computer, at a price far below that of dedicated motion-video processing equipment costing many thousands of dollars. Stereo sound became standard for the first time, and the Atari ST gained popularity as an affordable front end for MIDI equipment in music production.
Clock rates on the 68000-based systems were approximately 8 MHz, with RAM capacities ranging from 256 KB (for the base Amiga 1000) up to 1 MB (a milestone, first seen on the Atari 1040ST). These systems used 3.5" floppy disks from the beginning, but 5.25" drives were made available to facilitate data exchange with IBM PC compatibles. The Amiga and ST both had GUIs with windowing technology. These were inspired by the Macintosh, but at a list price of US$2,495, the Macintosh itself was too expensive for most households. The Amiga in particular had true multitasking capability, and unlike all other low-cost computers of the era, could run multiple applications in their own windows.
The second generation of MSX computers (MSX2) achieved the performance of high-performance computers using a high-speed video processor (Yamaha V9938) capable of handling resolutions of 512×424 pixels, and 256 simultaneous colors from a palette of 512.
MSX
MSX was a standard for a home computing architecture that was intended and hoped to become a universal platform for home computing. It was conceived, engineered and marketed by Microsoft Japan with ASCII Corporation. Computers conforming to the MSX standard were produced by nearly all major Japanese electronics manufacturers, as well as two Korean ones and several others in Europe and South America. Some 5 million units are known to have been sold in Japan alone. They sold in smaller numbers throughout the world. Due to the "price wars" being waged in the US home computer market during the 1983-85 period, MSX computers were never marketed to any great extent in the USA. Eventually, more advanced mainstream home computers and game consoles rendered the MSX machines obsolete.
The MSX computers were built around the Zilog Z80 8-bit processor, assisted by dedicated video graphics and audio coprocessors supplied by Intel, Texas Instruments, and General Instrument. MSX computers received a great deal of software support from the traditional Japanese publishers of game software. Microsoft developed the MSX-DOS operating system, a version of their popular MS-DOS adapted to the architecture of these machines, which was also able to run CP/M software directly.
Radio frequency interference
After the first wave of game consoles and computers landed in American homes, the United States Federal Communications Commission (FCC) began receiving complaints of electromagnetic interference with television reception. By 1979 the FCC demanded that home computer makers submit samples for radio frequency interference testing. It was found that "first generation" home computers emitted too much radio frequency noise for household use. The Atari 400 and 800 were designed with heavy RF shielding to meet the new requirements. Between 1980 and 1982 regulations governing RF emissions from home computers were phased in. Some companies appealed to the FCC to waive the requirements for home computers, while others (with compliant designs) objected to the waiver. Eventually techniques to suppress interference became standardized.
Reception and sociological impact
In 1977, referring to computers used in home automation at the dawn of the home computer era, Digital Equipment Corporation CEO Ken Olsen is quoted as saying "There is no reason for any individual to have a computer in his home." Despite Olsen's dismissal, from about 1977 to 1983 it was widely predicted that computers would soon revolutionize many aspects of home and family life as they had business practices in the previous decades. Mothers would keep their recipe catalog in "kitchen computer" databases and turn to a medical database for help with child care; fathers would use the family's computer to manage family finances and track automobile maintenance. Children would use online encyclopedias for school work and would be avid video gamers. The computer would even be tasked with babysitting younger children. Home automation would bring about the intelligent home of the 1980s. Using Videotex, NAPLPS or some sort of vaguely conceptualized computer technology, television would gain interactivity. It would be possible to do the week's grocery shopping through the television. The "personalized newspaper" (to be displayed on the television screen) was another commonly predicted application. Morning coffee would be brewed automatically under computer control. The same household computer would control the home's lighting and temperature. Robots would take the garbage out, and be programmed to perform new tasks via the home computer. Electronics were expensive, so it was generally assumed that each home would have only one computer for the entire family to use. Home control would be performed in a multitasking time-sharing arrangement, with interfaces to the various devices it was expected to control.
All this was predicted to be commonplace by the end of the 1980s, but by 1987 Dan Gutman wrote that the predicted revolution was "in shambles", with only 15% of American homes owning a computer. Virtually every aspect that was foreseen would be delayed to later years or would be entirely surpassed by later technological developments. The home computers of the early 1980s could not multitask, which meant that using one as a home automation or entertainment appliance would have required it to be kept powered on at all times and dedicated exclusively to that use. Even if the computers had been able to serve multiple purposes simultaneously as they do today, other technical limitations predominated: memory capacities were too small to hold entire encyclopedias or databases of financial records; floppy disk-based storage was inadequate in both capacity and speed for multimedia work; and the home computers' graphics chips could only display blocky, unrealistic images and blurry, jagged text that would have made reading a newspaper on screen difficult. Although CD-ROM technology was introduced in 1985 with much promise for its future use, the drives were prohibitively expensive and only interfaced with IBM PCs and compatibles.
The Boston Phoenix stated in 1983 that "people are catching on to the fact that 'applications' like balancing your checkbook and filing kitchen recipes are actually faster and easier to do with a pocket calculator and a box of index cards". inCider observed that "companies cannot live by dilettantes alone". Gutman wrote that when the first computer boom ended in 1984, "Suddenly, everybody was saying that the home computer was a fad, just another hula hoop". Robert Lydon, publisher of Personal Computing, stated in 1985 that the home market "never really existed. It was a fad. Just about everyone who was going to buy a computer for their home has done it", and predicted that Apple would cease to exist within two years.
A backlash set in; computer users were "geeks", "nerds" or worse, "hackers". The video game crash of 1983 soured many on home computer technology as users saw large investments in 'the technology of the future' turn into dead-ends when manufacturers pulled out of the market or went out of business. The computers that were bought for use in the family room were either forgotten in closets or relegated to basements and children's bedrooms to be used exclusively for games and the occasional book report. Home computers of the 1980s have been called "a technology in search of a use". In 1984 Tandy executive Steve Leininger, designer of the TRS-80 Model I, admitted that "As an industry we haven't found any compelling reason to buy a computer for the home" other than for word processing. A 1985 study found that, during a typical week, 40% of adult computer owners did not use their computers at all. Usage rates among children were higher, with households reporting that only 16–20% of children aged 6–17 did not use the computer during a typical week.
It would take another 10 years for technology to mature, for the graphical user interface to make the computer approachable for non-technical users, and for the World Wide Web to provide a compelling reason for most people to want a computer in their homes. Separate 1998 studies found that 75% of Americans with Internet access accessed it primarily from home and that not having Internet access at home inhibited Internet use. While computers did enter the home in the '90s, and were commonly called "personal" or "home" computers, the machines commonly used in the home during this decade were large desktops, often shared among all members of a family through the use of multiple user accounts.
It wouldn't be until the 2000s and 2010s that many of the dreams of the home computer revolution were fully realized, though often in unanticipated ways. The cost of computer systems dropped precipitously, allowing individuals to own their own personal computing hardware, with shared desktop machines leaving the home during the 2010s. With better network connectivity and speed, resources such as encyclopedias, recipe catalogs and medical databases moved online and are now accessed over the Internet, with local storage media like floppy disks and CD-ROMs falling out of use for these purposes during those decades. Television never gained substantial interactivity on its own; newer forms of interactive media consumption such as live streaming and on-demand content instead evolved from Internet video platforms and gradually replaced traditional broadcasting models. As of the 2020s, interactive media consumption is done on screens of all sizes, with the traditional television set largely replaced by the smart TV, an Internet-connected computer in the form factor of a television, only in the later 2010s, after the technologies had matured. The promises of home automation were realized by small embedded devices, not home computers, and the dream of user-controlled, interactive home automation was only realized in the 2010s as these embedded devices began to be connected to externally managed cloud servers over the Internet. Throughout the 2000s, robots such as the Roomba and Aibo began to make inroads into the home, but remained a niche product throughout the 2010s, limited largely to tasks such as cleaning.
This delay was not out of keeping with other technologies newly introduced to an unprepared public. Early motorists were widely derided with the cry of "Get a horse!" until the automobile was accepted. Television languished in research labs for decades before regular public broadcasts began. In an example of changing applications for technology, before the invention of radio, the telephone was used to distribute opera and news reports, whose subscribers were denounced as "illiterate, blind, bedridden and incurably lazy people". Likewise, the acceptance of computers into daily life today is a product of continuing refinement of both technology and perception.
Use in the 21st century
Retrocomputing is the use of vintage hardware, sometimes to perform modern tasks such as web browsing and email. As programming techniques evolved and these systems became thoroughly understood after decades of use, it became possible to write software giving home computers capabilities undreamed of by their designers. The Contiki OS implements a GUI and TCP/IP stack on the Apple II, Commodore 8-bit and Atari ST (16-bit) platforms, allowing these home computers to function as both internet clients and servers.
The Commodore 64 has been repackaged as the C-One and C64 Direct-to-TV, both designed by Jeri Ellsworth with modern enhancements.
Throughout the 1990s and the first decade of the 21st century, many home computer systems were available inexpensively at garage sales and on eBay. Many enthusiasts started to collect home computers, with older and rarer systems being much sought after. Sometimes these collections turned into virtual museums presented on web sites.
As their often-inexpensively manufactured hardware ages and the supply of replacement parts dwindles, it has become popular among enthusiasts to emulate these machines, recreating their software environments on modern computers. One of the more well-known emulators is the Multi Emulator Super System (MESS), which can emulate most of the better-known home computers. A more or less complete list of home computer emulators can be found in the List of computer system emulators article. Games for many 8- and 16-bit home computers became available for the Wii Virtual Console.
Notable home computers
The timeline below describes many of the most popular or significant home computers of the late 1970s and of the 1980s.
The most popular home computers in the USA up to 1985 were: the TRS-80 (1977), various models of the Apple II (first introduced in 1977), the Atari 400/800 (1979) and its follow-up models, the VIC-20 (1980), and the Commodore 64 (1982). The VIC-20 was the first computer of any type to sell over one million units, and the 64 is still the highest-selling single model of personal computer ever, with over 17 million produced before production stopped in 1994 – a 12-year run with only minor changes. At one point in 1983 Commodore was selling as many 64s as the rest of the industry's computers combined.
The British market was different, as relatively high prices and lower disposable incomes reduced the appeal of most American products. New Scientist stated in 1977 that "the price of an American kit in dollars rapidly translates into the same figure in pounds sterling by the time it has reached the shores of Britain". The Commodore 64 was also popular, but a BYTE columnist stated in 1985:
Many of the British-made systems like Sinclair's ZX81 and ZX Spectrum, and later the Amstrad/Schneider CPC were much more widely used in Europe than US systems. A few low-cost British Sinclair models were sold in the US by Timex Corporation as the Timex Sinclair 1000 and the ill-fated Timex Sinclair 2068, but neither established a strong following. The only transatlantic success was the Commodore 64, which competed favorably price-wise with the British systems, and was the most popular system in Europe as in the USA.
Until the introduction of the IBM PC in 1981, computers such as the Apple II and TRS-80 also found considerable use in office work. In late 1983 IBM announced the PCjr in an attempt to continue their business computer success in the home computer market, but incompatibilities between it and the standard PC kept users away. Assisted by a large public domain software library and promotional offers from Commodore, the PET had a sizable presence in the North American education market until that segment was largely ceded to the Apple II as Commodore focused on the C-64's success in the mass retail market.
1970s
Three microcomputers were the prototypes for what would later become the home computer market segment, but when introduced they sold as much to hobbyists and small businesses as to home users.
June 1977: Apple II (North America), color graphics, eight expansion slots; one of the first computers to use a typewriter-like plastic case design.
August 1977: TRS-80 (N. Am.), first home computer for less than US$600, used a dedicated monitor for US Federal Communications Commission (FCC) rules compliance.
October 1977: Commodore PET (N. Am.), first all-in-one computer: keyboard/screen/tape storage built into stamped sheet metal enclosure.
Also announced in 1977 was the Compucolor II, although shipments did not start until the next year. The Compucolor II was smaller and less expensive than the company's first model, an upgrade kit that turned the Intecolor 8001 color computer terminal into the Compucolor 8001, and it used the newly introduced 5.25-inch floppy disks instead of the earlier 8-inch models.
The following computers also introduced significant advancements to the home computer segment:
1979: TI-99/4, first home computer with a 16-bit processor and first to add hardware-supported sprite graphics
1979: Atari 8-bit computers (N. Am.), first computers with a custom chip set, a programmable video chip, and built-in audio output
1980s
January 1980: Sinclair ZX80, available in the United Kingdom for less than a hundred pounds
1980: VIC-20 (N. Am.), under US$300; first computer of any kind to pass one million sold.
1980: TRS-80 Color Computer (N. Am.), Motorola 6809, optional OS-9 multi-user multi-tasking.
July 1980: TRS-80 Model III (N. Am.), essentially a TRS-80 Model I repackaged in an all-in-one cabinet, to comply with FCC regulations for radio-frequency interference, to eliminate cable clutter, and to use only one electrical outlet. Enhancements included an extended character set, repeating keys, and a real-time clock.
June 1981: TI-99/4A, based on the less successful TI-99/4.
1981: ZX81 (Europe), £49.95 in kit form; £69.95 pre-built, released as Timex Sinclair 1000 in US in 1982.
1981: BBC Micro (Europe), premier educational computer in the UK for a decade; advanced BBC BASIC with integrated 6502 machine code assembler, and a large number of I/O ports, ~ 1.5 million sold.
April 1982: ZX Spectrum (Europe), best-selling British home computer; catalysed the UK software industry, widely cloned by the Soviet Union.
June 1982: MicroBee (Australia), initially as a kit, then as a finished unit.
August 1982: Dragon 32 (UK) became, for a short time, the best-selling home micro in the United Kingdom.
August 1982: Commodore 64 (N. Am.), custom graphic & synthesizer chipset, best-selling computer model of all time: ~ 17 million sold.
Jan. 1983: Apple IIe, Apple II enhanced. Reduced component count and production costs enabled high-volume production; it remained in production until 1993.
April 1983: TRS-80 Model 4, major upgrade compatible with the Model III. Ran industry-standard CP/M, updated TRSDOS 6, 4 MHz speed, 128 KB RAM max, 80x24 screen, 640x240 high-res option. In September the transportable "luggable" Model 4P was unveiled.
1983: Acorn Electron, a stripped-down 'sibling' of the BBC Micro with limited functionality. The Electron recovered from a slow start to become one of the more popular home computers of that era in the UK.
1983: Sanyo PHC-25, with 16 KB of RAM, one of a number of Sanyo models.
1983: Coleco Adam, one of the few home computers to be sold only as a complete system with storage device and printer; cousin to the ColecoVision game console.
1983: MSX (Japan, Korea, the Arab League, Europe, N+S. Am., USSR), a computer 'reference design' by ASCII and Microsoft, produced by several companies: ~ 5 million sold in Japan.
1983: VTech Laser 200, entry level computer aimed at being the cheapest on market, also sold as Salora Fellow, Texet TX8000 & Dick Smith VZ 200.
1983: Oric 1 and Oric Atmos (Europe), a home computer equipped with a full travel keyboard and an extended version of Microsoft BASIC in ROM.
January 1984: The Macintosh is introduced, providing many consumers their first look at a graphical user interface, which would eventually replace the home computer as it was known.
April 1984: Apple IIc, Apple II compact. No expansion slots, and built-in ports for pseudo-plug and play ease of use. The Apple II most geared to home use, to complement the Apple IIe's dominant education market share.
March 1984: IBM PCjr, designed, priced and marketed as a home computer for kids and teens but purchased mostly by business customers who wanted an inexpensive IBM compatible PC.
1984: Tiki 100 (Norway), Zilog Z80-based home/educational computer made by Tiki Data.
June 1984: Amstrad/Schneider CPC, a very popular system in the UK which also sold well in the rest of Europe.
1985: TRS-80 Model 4D: updated Model 4 with double-sided drives and Deskmate productivity suite.
1985: Elektronika BK-0010, one of the first 16-bit home computers; made in USSR.
1985: Robotron KC 85/1 (Europe), one of the few 8-bit general-purpose microcomputers produced in East Germany. Because the KC line of computers, with the exception of the KC compact, was not available for sale to the general public due to the strict prioritization of 'societal users' over consumers, these machines are not genuine 'home computers'.
1985: Atari ST (N. Am.), first with a graphical user interface (GEM) for less than US$1000; also 1 MB RAM and 16-bit Motorola 68000 processor for under US$1000.
1985: MSX2, the second generation of MSX computers, launched worldwide. They achieved the performance of high-performance computers using a high-speed video processor (Yamaha V9938) capable of handling resolutions of 512×424 pixels and 256 simultaneous colors from a palette of 512.
June 1985: Commodore 128 (N. Am.), the final and most advanced 8-bit Commodore; it retained full C64 compatibility while adding CP/M capability in a complex multi-mode architecture.
July 1985: Amiga 1000 (N. Am.), custom chip set for graphics and digital audio; multitasking OS with both GUI and CLI interfaces; 16-bit Motorola 68000 processor. Initially designed as a game console but repositioned as a home computer.
1986: Apple IIGS, fifth and most advanced model of the Apple II line, with greatly enhanced graphics and sound abilities. Used a 16-bit 65C816 CPU, the same processor later used in the Super Nintendo Entertainment System.
June 1987: Acorn Archimedes (Europe), launched with an 8 MHz 32-bit ARM2 microprocessor, with between 512 KB and 4 MB of RAM, and an optional 20 or 40 MB hard drive.
October 1987: Amiga 500 (N. Am.), Amiga 1000 repackaged into a C64-like housing with keyboard and motherboard in the same enclosure, along with a 3.5" floppy disk drive. Introduced at the same time as the more expandable Amiga 2000.
1988: MSX2+, launched in Japan. It is able to show more than 19,000 simultaneous colors on screen thanks to hardware-based graphic compression.
1989: SAM Coupé (Europe), based on 6 MHz Z80 microprocessor; marketed as a logical upgrade from the ZX Spectrum.
1990s
December 1991: MSX turboR, launched in Japan only; the last generation of MSX computers brought to market by a consumer electronics brand, and the first MSX based on a 16-bit CPU, the ASCII R800 processor.
1992: Atari Falcon (N. Am.), the final home computer from Atari; it shipped with a digital signal processor.
October 1992: Amiga 1200 (N. Am.), the final home computer from Commodore; it sold well in Europe.
| Technology | Computer hardware | null |
31167376 | https://en.wikipedia.org/wiki/Poposauroidea | Poposauroidea | Poposauroidea is a clade of advanced pseudosuchians. It includes poposaurids, shuvosaurids, ctenosauriscids, and other unusual pseudosuchians such as Qianosuchus and Lotosaurus. It excludes most large predatory quadrupedal "rauisuchians" such as rauisuchids and "prestosuchids". Those reptiles are now allied with crocodylomorphs (crocodile ancestors) in a clade known as Loricata, which is the sister taxon to the poposauroids in the clade Paracrocodylomorpha. Although it was first formally defined in 2007, the name "Poposauroidea" has been used for many years. The group has been referred to as Poposauridae by some authors, although this name is often used more narrowly to refer to the family that includes Poposaurus and its close relatives. It was phylogenetically defined in 2011 by Sterling Nesbitt as Poposaurus gracilis and all taxa more closely related to it than to Postosuchus kirkpatricki, Crocodylus niloticus (the Nile crocodile), Ornithosuchus woodwardi, or Aetosaurus ferratus.
Poposauroids went extinct at the end of the Triassic period along with other non-crocodylomorph pseudosuchians. They were among the most diverse and longest lasting members of non-crocodylomorph Pseudosuchia, with Xilousuchus (a ctenosauriscid) living near the very beginning of the Triassic and Effigia (a shuvosaurid) surviving up until near the end of the Triassic. Despite the high level of diversity and anatomical disparity within Poposauroidea, certain features of the clade can be determined, particularly in the structure of the snout and pelvis (hip). Many of these features are examples of convergent evolution with dinosaurs, with bipedal poposauroids such as Poposaurus and shuvosaurids having been mistaken for theropod dinosaurs in the past.
Features
Poposauroidea was a diverse group of pseudosuchians, containing genera with many different ecological adaptations. Some (Poposaurus and shuvosaurids) were short-armed bipeds, while others (ctenosauriscids and Lotosaurus) were robust quadrupeds with elongated neural spines, creating a 'sail' like that of certain "pelycosaurs" (like Dimetrodon) and spinosaurids. Lotosaurus and shuvosaurids were toothless and presumably beaked herbivores while Qianosuchus, Poposaurus and ctenosauriscids were sharp-toothed predators. The ecological disparity of many members of this clade means that it is difficult to assess what the ancestral poposauroid would have looked like.
Skull and vertebrae
Poposauroids can be differentiated from other pseudosuchians by the structure of the tip of the snout, particularly the premaxillary bone which lies in front of the nares (nostril holes). This bone possesses two bony extensions ("processes") which wrap around the nares. The anterodorsal process, which wraps above the nares to contact the nasal bones on the top edge of the snout, is typically quite short in pseudosuchians. Poposauroids have elongated anterodorsal processes, longer than the main premaxillary body. The posterodorsal process, which wraps below the nares to contact the maxilla on the side of the snout, possesses the opposite state. It is much shorter in poposauroids (compared to other pseudosuchians), restricted to only a portion of the lower edge of the nares. This has the added effect of allowing the maxilla to form the rest of the hole's lower and rear edge, with the front edge of the maxilla becoming concave as well. Although these snout features are rare among pseudosuchians, they are much more common in certain avemetatarsalians (bird-line archosaurs) such as pterosaurs and saurischian dinosaurs. The rear branch of the maxilla tapers in most poposauroids, with the exception of Qianosuchus. This contrasts with loricatans, in which this branch is rectangular in shape.
Poposauroids also possess several features which are unusual compared to archosaurs in general. For example, in most archosaurs each side of the braincase possesses a pit through which the internal carotid arteries enter the braincase. In early poposauroids, these pits migrated to the underside of the braincase, thereby resembling the primitive condition seen in archosaur relatives such as Euparkeria and proterochampsians. Nevertheless, this reversion is undone in shuvosaurids (and possibly earlier, although no braincase material is known in Poposaurus or Lotosaurus). In addition, most poposauroids possessed elongated necks, and all of them had long and thin cervical ribs. This second neck trait contrasts with the condition in other pseudosuchians, phytosaurs, and pterosaurs, which have short and stout cervical ribs. The neural spines of the dorsal (back) vertebrae are thin and plate-like, even in members of Poposauroidea without sails. This differs from the vertebrae of most other early pseudosuchians (as well as Euparkeria and phytosaurs), which have neural spines that expand outward to form a flat, rectangular surface when seen from above.
Pelvis
Like other reptiles, the pelvis of poposauroids is formed by three plate-like bones: the ilium which lies above the acetabulum (hip socket), the pubis which is below and in front of the acetabulum, and the ischium which is below and behind the acetabulum. The ilium is a large, complex bone, with a forward-pointing (preacetabular) process, a rear-pointing (postacetabular) process, and a lower portion which forms the upper edge of the acetabulum. In most archosaurs, the lower portion of the ilium is wedge-shaped, forming the inner face of a "closed" acetabulum. In poposauroids the lower portion of the ilium is concave, creating a partially to completely "open" acetabulum formed by open space instead of bone. The only other archosaurs with open hip sockets are dinosaurs and (to a lesser extent) crocodylomorphs. The upper edge of the acetabulum is formed by a pronounced ridge on the ilium, known as a supraacetabular rim or crest.
Although all poposauroids possessed open acetabula, most other specializations of the ilium did not evolve until the clade containing Poposaurus and the shuvosaurids. For example, the supraacetabular crest projects downward, rather than outward in this clade. This trait is rare in archosaurs, only evolving independently in a few early theropod dinosaurs such as Coelophysis and Dilophosaurus. Another feature is the presence of an additional diagonal crest which branches upward from the supraacetabular rim. Although such a crest evolved independently in a number of different archosaurs, this specific subset of poposauroids is unique in having the crest be inclined forward (rather than vertical) and confluent with the elongated preacetabular blade, which is another derived feature of the clade.
The pubis and ischium were also specialized in poposauroids. In every other archosaur, the two bones contact each other on the lower edge of the acetabulum. In poposauroids other than Qianosuchus and Lotosaurus, the bones did not touch, leaving the acetabulum open from the sides and below. The width of the pubis is variable at different parts of its shaft. The portion near the acetabulum is thickened, but the tip of the bone (except in Qianosuchus) is very thin when seen head-on. In most other archosaurs, the pubis has a consistent width. Theropod dinosaurs and a few other archosaurs have a distal part of the pubis which is thinner than the proximal part. Shuvosaurids and Lotosaurus also possessed ischia (on either side of the body) which were fused to each other at the midline of the body.
Sacrum
Although the ancestral archosaur only had two sacral (hip) vertebrae, many different archosaur groups acquired additional sacral vertebrae over the course of their evolution. Nesbitt (2011) argued that additional sacral vertebrae formed between these two "primordial" vertebrae. He gave the well-preserved sacrum of the poposauroid Arizonasaurus as evidence to this process. Poposauroids have three to four sacral vertebrae, with the last and third-to-last vertebrae articulating with the ilium in a way similar to the two primordial vertebrae of more primitive archosauriformes such as Euparkeria and phytosaurs. The second-to-last vertebra has a form unlike the vertebrae of these archosauriforms, and Nesbitt concluded that it was an "insertion", formed from the innermost sections of the two primordial vertebrae. Although this process is not unique to poposauroids, it is only known in a few other archosaur lineages, such as Batrachotomus, silesaurids, and dinosaurs.
Basal poposauroids such as Arizonasaurus and Qianosuchus only had three sacral vertebrae, with the second vertebra being the 'insertion'. More advanced poposauroids such as Poposaurus and shuvosaurids have four sacral vertebrae, with the third recognizable as the insertion. This means that the first vertebra must have been another addition, seemingly the last dorsal vertebra repurposed and transformed into a sacral vertebra. Such an incorporated dorsal vertebra is called a dorsosacral. Dorsosacrals were irregularly distributed among archosaurs, known in a few ornithosuchids and aetosaurs as well as a variety of dinosaurs (most commonly in ornithischians and theropods).
In almost all archosauriforms, the sacral ribs of the first primordial sacral vertebra contact the ilium near the base of that bone, close to its contact with the pubis. Poposauroids had first primordial sacral ribs with additional forward branches, which lie on the inner edge of the ilium's preacetabular blade. In poposauroids more advanced than Qianosuchus, the sacral vertebrae fuse into a single bone, the sacrum. This fusion occurred incrementally, at different portions of the vertebrae. For example, the zygapophyses fused together as early as the ctenosauriscids. The centra (main cylindrical portion) of the sacral vertebrae also may have fused as early as the ctenosauriscids. The bases of the neural arches (the portion of the vertebrae above the spinal cord) were fused in some ctenosauriscids (Arizonasaurus) but not others (Bromsgroveia), and were also fused in all poposauroids more advanced than the ctenosauriscids.
Other
Unlike most pseudosuchians, poposauroids lack bony scutes known as osteoderms. The only exception to this is Qianosuchus, which possessed numerous tiny osteoderms, lying in a row extending down the neck and body. In all poposauroids, the tip of the fibula (outer shin bone) is symmetrical and straight when seen from the side, rather than slanted as in other non-crocodylomorph pseudosuchians. Those more advanced than ctenosauriscids had flattened hooflike pedal unguals (toe claws). Some poposauroids had very short arms compared to the length of their legs, although disarticulation in Qianosuchus and a lack of limb material in ctenosauriscids means that it is unknown whether this trait was basal to the group as a whole. Missing data for ctenosauriscids also obscures when certain traits of the caudal vertebrae and ankle bones were gained or lost within Poposauroidea.
History
Franz Nopcsa first used the term Poposauridae in 1923 to refer to poposauroids. At this time, the sole member of the group was Poposaurus, which was considered to be a theropod dinosaur. Over the following years, poposauroids were placed in various groups, including Saurischia, Theropoda, and Carnosauria. This classification existed up until the 1970s, when better remains indicated that Poposaurus was a pseudosuchian rather than a dinosaur. Other genera such as Sillosuchus and Shuvosaurus were later erected. Like Poposaurus, Shuvosaurus was originally thought to be a theropod dinosaur.
Sankar Chatterjee reclassified poposauroids as theropod dinosaurs with his description of the new genus Postosuchus in 1985. Chatterjee even considered poposauroids to be the ancestors of tyrannosaurs. Postosuchus was widely considered to be a poposauroid for the next ten years and was included in many phylogenetic analyses of Triassic archosaurs. In 1995, Robert Long and Phillip A Murry noted that several specimens referred to Postosuchus were distinct from the holotype, and so they assigned those specimens to the new genera Lythrosuchus and Chatterjeea.
In 2005, Sterling Nesbitt noted that "ctenosauriscids" such as Arizonasaurus, Bromsgroveia, and Lotosaurus shared many similarities with "poposaurids" such as Poposaurus, Sillosuchus, and "Chatterjeea" (now known as Shuvosaurus). He proposed that they formed a clade (informally named "Group X") to the exclusion of other pseudosuchians.
"Group X" was formally given the name "Poposauroidea" by Jonathan C. Weinbaum and Axel Hungerbühler in 2007. In their paper, Weinbaum and Hungerbühler described two new skeletons of Poposaurus and incorporated several new characters of the genus into a phylogenetic analysis. Poposauroidea was recovered as a monophyletic grouping, while other rauisuchians (namely Rauisuchidae and Prestosuchidae) were placed as basal forms of a new group called Paracrocodyliformes.
Brusatte et al. (2010) conducted a phylogenetic study of archosaurs that resulted in a grouping referred to as Poposauroidea. Unlike many recent studies, they found Rauisuchia to be monophyletic, consisting of two major clades: Rauisuchoidea and Poposauroidea. The monophyly of Rauisuchia was not strongly supported in Brusatte et al.'s analysis. They noted that if their tree was enlarged by one step, Poposauroidea fell outside Rauisuchia to become the sister group of Ornithosuchidae, which is thought to be closely related to, but outside, Rauisuchia. In their tree, Poposauroidea included genera usually classified as poposauroids as well as several other genera that were not previously placed in the group. One of these genera, Qianosuchus, is unique among pseudosuchians in its semiaquatic lifestyle.
In his massive revision of archosaurs which included a large cladistic analysis, Sterling J. Nesbitt (2011) found Xilousuchus to be a poposauroid most closely related to Arizonasaurus. Nesbitt's analysis did not recover a monophyletic Rauisuchia or monophyletic Rauisuchoidea. Poposauroidea was found to be monophyletic, and more resolved than in previous analyses, with Qianosuchus as the most basal member of the group and Lotosaurus grouping with shuvosaurids instead of ctenosauriscids. The cladogram below follows Nesbitt (2011) with clade names based on previous studies.
| Biology and health sciences | Other prehistoric archosaurs | Animals |
32154505 | https://en.wikipedia.org/wiki/Drylands | Drylands | Drylands are defined by a scarcity of water. Drylands are zones where precipitation is balanced by evaporation from surfaces and by transpiration by plants (evapotranspiration). The United Nations Environment Program defines drylands as tropical and temperate areas with an aridity index of less than 0.65. One can classify drylands into four sub-types:
Dry sub-humid lands
Semi-arid lands
Arid lands
Hyper-arid lands
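Expressed numerically, the aridity index is the ratio of average annual precipitation to potential evapotranspiration, and the sub-types above correspond to bands of that ratio. The sketch below uses the 0.65 dryland cutoff given above; the sub-type boundaries (0.05, 0.20, 0.50) are the commonly cited UNEP thresholds and are included as an assumption, since this article does not list them.

```python
# Minimal sketch: classifying a location by aridity index (AI = P / PET).
# The 0.65 cutoff comes from the article; the 0.05 / 0.20 / 0.50 boundaries
# are commonly cited UNEP values, assumed here for illustration.

def classify_dryland(precipitation_mm: float, pet_mm: float) -> str:
    ai = precipitation_mm / pet_mm
    if ai >= 0.65:
        return "not a dryland (humid)"
    if ai >= 0.50:
        return "dry sub-humid"
    if ai >= 0.20:
        return "semi-arid"
    if ai >= 0.05:
        return "arid"
    return "hyper-arid"

print(classify_dryland(250, 1400))   # AI ~0.18 -> arid
print(classify_dryland(600, 1000))   # AI 0.60  -> dry sub-humid
```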
Some authorities regard hyper-arid lands as deserts (United Nations Convention to Combat Desertification - UNCCD) although a number of the world's deserts include both hyper-arid and arid climate zones. The UNCCD excludes hyper-arid zones from its definition of drylands.
Drylands cover 41.3% of the Earth's land surface, including 15% of Latin America, 66% of Africa, 40% of Asia, and 24% of Europe. There is a significantly greater proportion of drylands in developing countries (72%), and the proportion increases with aridity: almost 100% of all hyper-arid lands are in the developing world. Nevertheless, the United States, Australia, and several countries in Southern Europe also contain significant dryland areas.
Drylands are complex, evolving structures whose characteristics and dynamic properties depend on many interrelated interactions between climate, soil, and vegetation.
Biodiversity
The livelihoods of millions of people in developing countries depend highly on dryland biodiversity to ensure their food security and their well-being. Drylands, unlike more humid biomes, rely mostly on above-ground water runoff for redistribution of water, and almost all their water redistribution occurs on the surface. Dryland inhabitants' lifestyle provides global environmental benefits which contribute to halting climate change, such as carbon sequestration and species conservation. Dryland biodiversity is equally of central importance for ensuring sustainable development, along with providing significant global economic values through the provision of ecosystem services and biodiversity products. The UN Conference on Sustainable Development Rio+20, held in Brazil in June 2012, stressed the intrinsic value of biological diversity and recognized the severity of global biodiversity loss and degradation of ecosystems.
Drylands in East Africa
The East African drylands cover about 47% of the region's land area and are home to around 20 million people. Pastoralists, who rely on cattle for both economic and social well-being, constitute the majority of rural inhabitants in the drylands. Pastoralists use strategic movement to gain access to pasture during the dry season, using the available resources effectively. However, due to a variety of factors, including demographic pressures and climate change, this practice has changed and become constrained. The greatest issue in drylands is land degradation, which poses a huge danger to the world's capacity to end hunger. Drylands occupy around 2 million km², or 90%, 75%, and 67% of Kenya, Tanzania, and Ethiopia respectively. More than 60 million people, or 40% of these countries' population, live in drylands. The low level of precipitation and the high degree of variability in climatic conditions limit the possibilities for rainfed crop production in these areas.
The four sub-types
Dry and sub-humid lands
Countries like Burkina Faso, Botswana, Iraq, Kazakhstan, Turkmenistan and the Republic of Moldova are 99% covered by dry and sub-humid lands. The biodiversity of dry and sub-humid lands allows them to adapt to the unpredictable rainfall patterns that lead to floods and droughts. These areas produce a vast amount of the world's crops and livestock. Beyond producing much of the world's crops, these lands are also significant because they include many different biomes, including:
Grassland
Savannahs
Mediterranean climate
Semi-arid lands
Semi-arid lands can be found in several regions of the world, for instance in Europe, Mexico, the southwestern United States, African countries just north of the equator, and several southern countries in Asia.
Definition of semi-arid lands
According to the literature, arid and semi-arid lands are defined based on the characteristics of the climate. For instance, Mongi et al. (2010) consider semi-arid lands to be places where the annual rainfall ranges between 500 and 800 mm. Fabricius et al., on the other hand, insist that the concept of aridity should encompass both arid and semi-arid conditions. Furthermore, they consider that a large part of the Sub-Saharan area, covering around 40 countries on the continent, is land having arid conditions. Arid and semi-arid lands have much higher evapotranspiration rates compared to precipitation, along with high air temperatures mainly during dry seasons, high and almost continuous insolation throughout the year, and the presence of dry gale-force winds.
Manifestations of Climate Change in semi-arid lands
Based on the spatial distribution of greenhouse gas emissions (GGE) in the atmosphere, Africa contributes only marginally in comparison to the rest of the world. Africa generates on average less than 4% of the GGE produced in the world. Comparative data on GGE per person show that Europeans and Americans generate about 50 to 100 times more gas than Africans (Thiam, 2009).
Based on the consequences caused by variability and climate change, it appears that African populations are more vulnerable than others. To illustrate, the trend of reduced rainfall in the Sahel has been marked by climatic extremes with devastating consequences for natural resources and for agricultural and pastoral activities. In semi-arid lands, the manifestations of climate change affecting communities and socio-economic activities are more diverse.
The characterization and impact of rainfall variability trends depend on several factors, including the nature and critical thresholds of extreme events, the regional frequency of these extremes, the precision of the data used, and the results of mathematical simulations and their propagation. The current state of scientific knowledge has allowed the principal manifestations of climate change on socio-economic activities in semi-arid lands to be identified. These manifestations are:
increased variability of precipitation and of its characteristics (number of rainfall days, starting date, length of the season), which can translate into an abrupt alternation between dry and humid years;
a shorter rainy season, linked to its late start;
an increase in the occurrence of dry spells that can occur at any point in space and time during the current period;
a tendency toward larger maximum rainfall totals concentrated in fewer consecutive days, causing damage and significant losses to socio-economic systems (crops, infrastructure) and to people;
dry and violent winds associated with very scarce rainfall, which prevent sufficient humidification of the soils and make the development of plant life difficult;
an ongoing rise in observed temperatures, consistent with IPCC projections, creating thermal stress that may seriously handicap plant and animal productivity.
Adaptation and resilience in semi-arid lands
In semi-arid lands where pastoralism is the principal activity, the main adaptation measures are an early departure on transhumance, reducing the size of the herd, changing water management, and diversifying transhumance routes. These measures allow breeders to safeguard their livestock and prevent huge losses such as those of the droughts of the 1970s. Breeders purchase feed for their livestock or stockpile it, and in certain countries such as Burkina Faso, Senegal, Mali, and Kenya they become proactive in other activities (trade, real estate, guarding, transport). These adaptation strategies allow them to be more resilient to the socio-economic consequences of climate change.
Arid lands
Arid lands make up about 41% of the world's land and are home to 20% of the world's people. They have several characteristics that make them unique:
Rainfall scarcity
High temperatures
Evapotranspiration and low humidity
Hyper-arid lands
These lands cover 4.2% of the world and consist of areas without vegetation. They receive irregular rainfall that barely surpasses 100 millimeters, and in some cases, they may not receive rainfall for several years.
| Physical sciences | Biomes: General | Earth science |
32159470 | https://en.wikipedia.org/wiki/Reboot | Reboot | In computing, rebooting is the process by which a running computer system is restarted, either intentionally or unintentionally. Reboots can be either a cold reboot (alternatively known as a hard reboot) in which the power to the system is physically turned off and back on again (causing an initial boot of the machine); or a warm reboot (or soft reboot) in which the system restarts while still powered up. The term restart (as a system command) is used to refer to a reboot when the operating system closes all programs and finalizes all pending input and output operations before initiating a soft reboot.
Terminology
Etymology
Early electronic computers (like the IBM 1401) had no operating system and little internal memory. The input was often a stack of punch cards or via a switch register. On systems with cards, the computer was initiated by pressing a start button that performed a single command - "read a card". This first card then instructed the machine to read more cards that eventually loaded a user program. This process was likened to an old saying, "picking yourself up by the bootstraps", referring to a horseman who lifts himself off the ground by pulling on the straps of his boots. This set of initiating punch cards was called "bootstrap cards". Thus a cold start was called booting the computer up. If the computer crashed, it was rebooted. The boot reference carried over to all subsequent types of computers.
Cold versus warm reboot
For IBM PC compatible computers, a cold boot is a boot process in which the computer starts from a powerless state and performs a complete power-on self-test (POST). Both the operating system and third-party software can initiate a cold boot; the restart command in Windows 9x initiates a cold reboot unless the Shift key is held.
A warm boot is initiated by the BIOS, either as a result of the Control-Alt-Delete key combination or directly through BIOS interrupt INT 19h. It may not perform a complete POST - for example, it may skip the memory test - or may not perform a POST at all. Malware may prevent or subvert a warm boot by intercepting the Ctrl+Alt+Delete key combination and preventing it from reaching the BIOS; the Windows NT family of operating systems does the same, reserving the key combination for its own use.
Operating systems based on Linux support an alternative to warm boot; the Linux kernel has optional support for kexec, a system call which transfers execution to a new kernel and skips hardware or firmware reset. The entire process occurs independently of the system firmware. The kernel being executed does not have to be a Linux kernel.
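As a minimal sketch of how such a kernel switch might be driven from user space, assuming the kexec-tools utility is installed; the kernel and initramfs paths shown are hypothetical:

```python
# Minimal sketch: stage a new kernel with kexec-tools and jump to it without a
# firmware reset. The kernel and initramfs paths are hypothetical examples.
import subprocess

KERNEL = "/boot/vmlinuz-new"       # hypothetical kernel image
INITRD = "/boot/initrd-new.img"    # hypothetical initramfs image

# Stage the new kernel; --reuse-cmdline reuses the running kernel's command line.
subprocess.run(
    ["kexec", "-l", KERNEL, f"--initrd={INITRD}", "--reuse-cmdline"],
    check=True,
)

# Jump to the staged kernel immediately (no orderly service shutdown);
# on systemd-based systems, `systemctl kexec` performs an orderly equivalent.
subprocess.run(["kexec", "-e"], check=True)
```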
Outside the domain of IBM PC compatible computers, the types of boot may not be as clear. According to Sue Loh of Windows CE Base Team, Windows CE devices support three types of boots: Warm, cold and clean. A warm boot discards program memory. A cold boot additionally discards storage memory (also known as the "object store"), while a clean boot erases all forms of memory storage from the device. However, since these areas do not exist on all Windows CE devices, users are only concerned with two forms of reboot: one that resets the volatile memory and one that wipes the device clean and restores factory settings. For example, for a Windows Mobile 5.0 device, the former is a cold boot and the latter is a clean boot.
Hard reboot
A hard reboot means that the system is not shut down in an orderly manner, skipping file system synchronisation and other activities that would occur on an orderly shutdown. This can be achieved by applying a reset, cycling power, issuing the appropriate command on most Unix-like systems, or triggering a kernel panic.
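On Linux systems with the magic SysRq interface enabled, one way to force such an unsynchronized reboot is to write the trigger character to /proc/sysrq-trigger; a minimal sketch (requires root, and all unsaved data is lost):

```python
# Sketch: force an immediate, unsynchronized reboot via the Linux magic SysRq
# interface. Requires root privileges and the kernel.sysrq setting enabled.
with open("/proc/sysrq-trigger", "w") as trigger:
    trigger.write("b")  # 'b' = reboot immediately, without syncing or unmounting
```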
Hard reboots are used in the cold boot attack.
Restart
The term "restart" is used by the Microsoft Windows and Linux families of operating systems to denote an operating system-assisted reboot. In a restart, the operating system ensures that all pending I/O operations are gracefully ended before commencing a reboot.
Causes
Deliberate
Users may deliberately initiate a reboot. Rationale for such action may include:
Troubleshooting: Rebooting may be used by users, support staff or system administrators as a technique to work around bugs in software, for example memory leaks or processes that hog resources to the detriment of the overall system, or to terminate malware. While this approach does not address the root cause of the issue, resetting a system back to a good, known state may allow it to be used again for some period until the issue next occurs.
Switching operating systems: On a multi-boot system without a hypervisor, a reboot is required to switch between installed operating systems.
Offensive: As stated earlier, components lose power during a cold reboot; therefore, components such as RAM that require power lose the data they hold. However, in a cold boot attack, special configurations may allow for part of the system state, like a RAM disk, to be preserved through the reboot.
The means of performing a deliberate reboot also vary and may include:
Manual, hardware-based: A power switch or reset button can cause the system to reboot. Doing so, however, may cause the loss of all unsaved data.
Manual, software-based: Computer software and operating system can trigger a reboot as well; both Microsoft Windows and several Unix-like operating systems can be shut down from the command line or through the GUI.
Automated: Software can be scheduled to run at a certain time and date; therefore, it is possible to schedule a reboot.
Power failure
Unexpected loss of power for any reason (including power outage, power supply failure or depletion of battery on a mobile device) forces the system user to perform a cold boot once the power is restored. Some BIOSes have an option to automatically boot the system after a power failure. An uninterruptible power supply (UPS), backup battery or redundant power supply can prevent such circumstances.
Random reboot
"Random reboot" is a non-technical term referring to an unintended (and often undesired) reboot following a system crash, whose root cause may not immediately be evident to the user. Such crashes may occur due to a multitude of software and hardware problems, such as triple faults. They are generally symptomatic of an error in ring 0 that is not trapped by an error handler in an operating system or a hardware-triggered non-maskable interrupt.
Systems may be configured to reboot automatically after a power failure, or a fatal system error or kernel panic. The method by which this is done varies depending on whether the reboot can be handled via software or must be handled at the firmware or hardware level. Operating systems in the Windows NT family (from Windows NT 3.1 through Windows 7) have an option to modify the behavior of the error handler so that a computer immediately restarts rather than displaying a Blue Screen of Death (BSOD) error message. This option is enabled by default in some editions.
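As an illustration, on Windows NT-family systems this behavior is typically controlled by the CrashControl registry key; the following sketch reads the setting with Python's standard winreg module (the value name follows commonly documented crash-control settings and should be treated as an assumption):

```python
# Sketch: read the "automatically restart after a fatal error" setting on Windows.
# The CrashControl\AutoReboot value name follows commonly documented settings;
# treat it as an assumption rather than an authoritative reference.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\CrashControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    auto_reboot, _value_type = winreg.QueryValueEx(key, "AutoReboot")

# A value of 1 restarts the machine immediately instead of leaving the
# Blue Screen of Death on display; 0 leaves the error screen visible.
print("Automatically restart on failure:", bool(auto_reboot))
```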
Hibernation
The introduction of advanced power management allowed operating systems greater control of hardware power management features. With Advanced Configuration and Power Interface (ACPI), newer operating systems are able to manage different power states and thereby sleep and/or hibernate. While hibernation also involves turning a system off then subsequently back on again, the operating system does not start from scratch, thereby differentiating this process from rebooting.
Simulated reboot
A reboot may be simulated by software running on an operating system. For example: the Sysinternals BlueScreen utility, which is used for pranking; or some modes of the XScreenSaver "hack", for entertainment (albeit possibly concerning at first glance). Malware may also simulate a reboot, and thereby deceive a computer user for some nefarious purpose.
The Microsoft App-V sequencing tool captures all the file system operations of an installer in order to create a virtualized software package for users. As part of the sequencing process, it detects when an installer requires a reboot, interrupts the triggered reboot, and instead simulates the required reboot by restarting services and loading/unloading libraries.
Windows deviations and labeling criticism
Windows 8 and 10 enable by default a hibernation-like "Fast Startup" (also known as "Fast Boot"), which can cause problems, including confusion, for users accustomed to turning off computers in order to cold-reboot them.
| Technology | Computer hardware | null |
29828324 | https://en.wikipedia.org/wiki/Classical%20Cepheid%20variable | Classical Cepheid variable | Classical Cepheids are a type of Cepheid variable star. They are young, population I variable stars that exhibit regular radial pulsations with periods of a few days to a few weeks and visual amplitudes ranging from a few tenths of a magnitude up to about 2 magnitudes. Classical Cepheids are also known as Population I Cepheids, Type I Cepheids, and Delta Cepheid variables.
There exists a well-defined relationship between a classical Cepheid variable's luminosity and pulsation period, securing Cepheids as viable standard candles for establishing the galactic and extragalactic distance scales. Hubble Space Telescope (HST) observations of classical Cepheid variables have enabled firmer constraints on Hubble's law, which describes the expansion rate of the observable Universe. Classical Cepheids have also been used to clarify many characteristics of our galaxy, such as the local spiral arm structure and the Sun's distance from the galactic plane.
Around 800 classical Cepheids are known in the Milky Way galaxy, out of an expected total of over 6,000. Several thousand more are known in the Magellanic Clouds, with more discovered in other galaxies; the Hubble Space Telescope has identified some in NGC 4603, which is 100 million light years distant.
Properties
Classical Cepheid variables are 4–20 times more massive than the Sun, and around 1,000 to 50,000 (over 200,000 for the unusual V810 Centauri) times more luminous. Spectroscopically they are bright giants or low luminosity supergiants of spectral class F6 – K2. The temperature and spectral type vary as they pulsate. Their radii are a few tens to a few hundred times that of the sun. More luminous Cepheids are cooler and larger and have longer periods. Along with the temperature changes their radii also change during each pulsation (e.g. by ~25% for the longer-period l Car), resulting in brightness variations up to two magnitudes. The brightness changes are more pronounced at shorter wavelengths.
Cepheid variables may pulsate in a fundamental mode, the first overtone, or rarely a mixed mode. Pulsations in an overtone higher than first are rare but interesting. The majority of classical Cepheids are thought to be fundamental mode pulsators, although it is not easy to distinguish the mode from the shape of the light curve. Stars pulsating in an overtone are more luminous and larger than a fundamental mode pulsator with the same period.
When an intermediate mass star (IMS) first evolves away from the main sequence, it crosses the instability strip very rapidly while the hydrogen shell is still burning. When the helium core ignites in an IMS, it may execute a blue loop and cross the instability strip again, once while evolving to high temperatures and again while evolving back towards the asymptotic giant branch. Stars more massive than about start core helium burning before reaching the red-giant branch and become red supergiants, but may still execute a blue loop through the instability strip. The duration and even existence of blue loops is very sensitive to the mass, metallicity, and helium abundance of the star. In some cases, stars may cross the instability strip for a fourth and fifth time when helium shell burning starts. The rate of change of the period of a Cepheid variable, along with chemical abundances detectable in the spectrum, can be used to deduce which crossing a particular star is making.
Classical Cepheid variables were B type main-sequence stars earlier than about B7, possibly late O stars, before they ran out of hydrogen in their cores. More massive and hotter stars develop into more luminous Cepheids with longer periods, although it is expected that young stars within our own galaxy, at near solar metallicity, will generally lose sufficient mass by the time they first reach the instability strip that they will have periods of 50 days or less. Above a certain mass, depending on metallicity, red supergiants will evolve back to blue supergiants rather than execute a blue loop, but they will do so as unstable yellow hypergiants rather than regularly pulsating Cepheid variables. Very massive stars never cool sufficiently to reach the instability strip and do not ever become Cepheids. At low metallicity, for example in the Magellanic Clouds, stars can retain more mass and become more luminous Cepheids with longer periods.
Light curves
A Cepheid light curve is typically asymmetric with a rapid rise to maximum light followed by a slower fall to minimum (e.g. Delta Cephei). This is due to the phase difference between the radius and temperature variations and is considered characteristic of a fundamental mode pulsator, the most common type of type I Cepheid. In some cases the smooth pseudo-sinusoidal light curve shows a "bump", a brief slowing of the decline or even a small rise in brightness, thought to be due to a resonance between the fundamental and second overtone. The bump is most commonly seen on the descending branch for stars with periods around 6 days (e.g. Eta Aquilae). As the period increases, the location of the bump moves closer to the maximum and may cause a double maximum, or become indistinguishable from the primary maximum, for stars having periods around 10 days (e.g. Zeta Geminorum). At longer periods the bump can be seen on the ascending branch of the light curve (e.g. X Cygni), but for period longer than 20 days the resonance disappears.
A minority of classical Cepheids show nearly symmetric sinusoidal light curves. These are referred to as s-Cepheids, usually have lower amplitudes, and commonly have short periods. The majority of these are thought to be first overtone (e.g. X Sagittarii), or higher, pulsators, although some unusual stars apparently pulsating in the fundamental mode also show this shape of light curve (e.g. S Vulpeculae). Stars pulsating in the first overtone are expected to only occur with short periods in our galaxy, although they may have somewhat longer periods at lower metallicity, for example in the Magellanic Clouds. Higher overtone pulsators and Cepheids pulsating in two overtones at the same time are also more common in the Magellanic Clouds, and they usually have low amplitude somewhat irregular light curves.
Discovery
On September 10, 1784 Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of classical Cepheid variables. However, the namesake for classical Cepheids is the star Delta Cephei, discovered to be variable by John Goodricke a month later. Delta Cephei is also of particular importance as a calibrator for the period-luminosity relation since its distance is among the most precisely established for a Cepheid, thanks in part to its membership in a star cluster and the availability of precise Hubble Space Telescope and Hipparcos parallaxes.
Period-luminosity relation
A classical Cepheid's luminosity is directly related to its period of variation. The longer the pulsation period, the more luminous the star. The period-luminosity relation for classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt in an investigation of thousands of variable stars in the Magellanic Clouds. She published it in 1912 with further evidence. Once the period-luminosity relation is calibrated, the luminosity of a given Cepheid whose period is known can be established. Their distance is then found from their apparent brightness. The period-luminosity relation has been calibrated by many astronomers throughout the twentieth century, beginning with Hertzsprung. Calibrating the period-luminosity relation has been problematic; however, a firm Galactic calibration was established by Benedict et al. 2007 using precise HST parallaxes for 10 nearby classical Cepheids. Also, in 2008, ESO astronomers estimated with a precision within 1% the distance to the Cepheid RS Puppis, using light echoes from a nebula in which it is embedded. However, that latter finding has been actively debated in the literature.
The following experimental correlation between a Population I Cepheid's period P and its mean absolute magnitude Mv was established from Hubble Space Telescope trigonometric parallaxes for 10 nearby Cepheids:
with P measured in days.
The following relations can also be used to calculate the distance d to classical Cepheids:
or
I and V represent near infrared and visual apparent mean magnitudes, respectively. The distance d is in parsecs.
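As an illustration of the general shape of such relations (the coefficients are empirical fits; the Benedict et al. 2007 calibration quoted earlier gives values of approximately a = -2.43 and b = -4.05 for the first form):

```latex
% General form of the period-luminosity calibration (coefficients fitted empirically)
\[
  M_V = a \left( \log_{10} P - 1 \right) + b , \qquad P \ \text{in days}
\]
% General form of a reddening-corrected distance relation using V and I magnitudes
\[
  5 \log_{10} d = V + \alpha \log_{10} P - \beta \, (V - I) + \gamma , \qquad d \ \text{in parsecs}
\]
```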
Small amplitude Cepheids
Classical Cepheid variables with visual amplitudes below 0.5 magnitudes, almost symmetrical sinusoidal light curves, and short periods, have been defined as a separate group called small amplitude Cepheids. They receive the acronym DCEPS in the GCVS. Periods are generally less than 7 days, although the exact cutoff is still debated.
The term s-Cepheid is used for short period small amplitude Cepheids with sinusoidal light curves that are considered to be first overtone pulsators. They are found near the red edge of the instability strip. Some authors use s-Cepheid as a synonym for the small amplitude DCEPS stars, while others prefer to restrict it only to first overtone stars.
Small amplitude Cepheids (DCEPS) include Polaris and FF Aquilae, although both may be pulsating in the fundamental mode. Confirmed first overtone pulsators include BG Crucis and BP Circini.
Uncertainties in Cepheid determined distances
Chief among the uncertainties tied to the Cepheid distance scale are: the nature of the period-luminosity relation in various passbands, the impact of metallicity on both the zero-point and slope of those relations, and the effects of photometric contamination (blending) and a changing (typically unknown) extinction law on classical Cepheid distances. All these topics are actively debated in the literature.
These unresolved matters have resulted in cited values for the Hubble constant ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since the cosmological parameters of the Universe may be constrained by supplying a precise value of the Hubble constant.
Examples
Several classical Cepheids have variations that can be recorded with night-by-night, trained naked eye observation, including the prototype Delta Cephei in the far north, Zeta Geminorum and Eta Aquilae ideal for observation around the tropics (near the ecliptic and thus zodiac) and in the far south Beta Doradus. The closest class member is the North Star (Polaris) whose distance is debated and whose present variability is approximately 0.05 of a magnitude.
| Physical sciences | Stellar astronomy | Astronomy |
36354724 | https://en.wikipedia.org/wiki/Line%20%28software%29 | Line (software) | Line is a freeware app and service for instant messaging and social networking, operated by the Japanese company LY Corporation, co-owned by SoftBank Group. Line was launched in Japan in June 2011 by NHN Japan, a subsidiary of Naver.
Initially designed for text messaging and VoIP voice and video calling, it has gradually expanded to become a super-app providing services including a digital wallet (Line Pay), news stream (Line Today), video on demand (Line TV) and digital comic distribution (Line Manga and Line Webtoon).
Line became Japan's largest social network in 2013 and is used by over 70% of the population as of 2023; it is also popular in Indonesia, Taiwan and Thailand.
History
Launch
Naver had launched a messaging app called Naver Talk for the South Korean market in February 2011. However, rival Korean company Kakao dominated the market with its KakaoTalk app, launched in March 2010.
Naver/NHN co-founder and chairman Lee Hae-Jin and a team of engineers were in Japan when the Tōhoku earthquake and tsunami struck in March 2011. The disaster left millions without power, and phone lines and SMS networks were overwhelmed. Since Wi-Fi and some 3G service remained largely usable, many people turned to KakaoTalk, which was just beginning to gain a foothold in Japan. Inspired to launch a messaging and chat app in the wake of the disaster, Lee had his NHN Japan team testing, within two months, a beta version of an app accessible on smartphones, tablets and PCs that would work over data networks and provide continuous, free instant messaging and calling. The app was launched as Line in June 2011.
Because Naver/NHN had a far superior cultural knowledge of what Japanese users wanted, and a much larger corporate marketing budget, Line quickly surpassed KakaoTalk in Japan. Line also offers free voice calls and, since Japan's telecoms make customers pay for both SMSs and smartphone calls, this feature, which KakaoTalk did not have, was a major selling point.
2011-2015
Line experienced an unexpected server overload in October 2011 due to the app's popularity. To improve scalability to accommodate its user growth, NHN Japan chose HBase as the primary storage for user profiles, contacts and groups. In December 2011, Naver announced that Naver Talk would be merged into Line, effective early 2012.
In July 2012, NHN Japan announced the new Line features Home and Timeline. The features allowed users to share recent personal developments to a community of contacts in real-time, similar to the status updates in social networking services such as Facebook. On April 1, 2013, Naver's Japanese branch name was changed from NHN Japan to Line Corporation.
Because it was tailored to Japanese consumers' tastes and offered free smartphone calls as well as texting, with the help of a massive marketing campaign it quickly outpaced its existing rival KakaoTalk for the Japanese market. It reached 100 million users within 18 months and 200 million users six months later. Line gradually replaced carrier-based email as the most popular method of communication in Japan.
Line became Japan's largest social network by the end of 2013, with more than 300 million registrants worldwide, of which more than 50 million users were within Japan. In October 2014, Line announced that it had attracted 560 million users worldwide with 170 million active user accounts. In February 2015, it announced the 600 million users mark had been passed and 700 million were expected by the end of the year.
Line was originally developed as a mobile application for Android and iOS smartphones. The service has since expanded to: BlackBerry OS (August 2012), Nokia Asha (Asia and Oceania, March 2013), Windows Phone (July 2013), Firefox OS (February 2014), iPad (October 2014) and a Google Chrome app (via the Chrome Web Store). The application also exists in versions for laptop and desktop computers using the Microsoft Windows and macOS platforms.
2016-present
Ownership
In July 2016, Line Corporation held IPOs on both the New York Stock Exchange and the Tokyo Stock Exchange.
In late December 2020, Line Corporation delisted from both the New York Stock Exchange and the Tokyo Stock Exchange, in advance of its absorption-type merger agreement with Z Holdings.
On March 1, 2021, SoftBank Group affiliate and Yahoo! Japan operator Z Holdings completed a merger with Line Corporation. Under the new structure, A Holdings, a subsidiary of SoftBank Corporation and Naver Corporation, will own 65.3% of Z Holdings, which will operate Line and Yahoo! Japan.
Market share
By 18 January 2013, Line had been downloaded 100 million times worldwide. The number expanded to 140 million by early July 2013 and to 200 million by July 21. As of June 2016, Japan claimed 68 million users while Thailand had 33 million. As of February 2014, Indonesia had 20 million users, Taiwan 17 million, while India and Spain had 16 million each. In April 2014, Naver announced that Line had reached 400 million worldwide users and by 2017 this had grown to 700 million.
As of 2021 Line had 92 million users in Japan, and a total of 178 million across its four largest markets: Japan, Indonesia, Taiwan and Thailand. A survey done in 2023 showed that, for the first time, more Japanese seniors preferred to use Line over email for communicating.
Features
Line is an application that works on multiple platforms and can be accessed through multiple personal computer operating systems. Users can share photos, videos and music, and send their current or any chosen location, voice messages, emojis, stickers and emoticons to friends. Users can see a real-time confirmation when messages are sent and received, or use a hidden chat feature that can hide and delete a chat history (from both involved devices and Line's servers) after a time set by the user.
The application also makes free voice and video calls. Users can chat and share media in groups of up to 500 people, which they can create and join. Chats also provide bulletin boards on which users can post, like and comment. The application also has timeline and homepage features that allow users to post pictures, text and stickers on their homepages. Users can change their Line theme to one of the free themes Line provides in the theme shop or buy themes featuring popular cartoon characters. Line also has a feature called Snap movie that users can use to record a stop-motion video with provided background music.
In January 2015, Line Taxi was released in Tokyo as a competitor to Uber. Line launched a new Android app called "Popcorn Buzz" in June 2015. The app facilitates group calls with up to 200 members. In June a new emoji keyboard was also released for iOS devices, providing a Line-like experience with the possibility to add stickers. In September 2015 a new Android launcher was released on the Google Play Store, helping the company promote its own services through the new user interface.
Official channels
Line includes a feature known as "official channels" which allows companies, especially news media outlets, publications and other mass media companies to offer an official channel which users can join and thereby receive regular updates, published articles or news updates from companies or news outlets.
Stickers
Line features a Sticker Shop where users are able to purchase virtual stickers depicting original and well-known characters. The stickers are used during chat sessions between users and act as large emoji. Users can purchase stickers as gifts, with many stickers available as free downloads, depending on country availability. Purchased stickers are attached to an account and can be used on other platforms. New sticker sets are released weekly. Line's message stickers feature original characters as well as a number of popular manga, anime and gaming characters, movie tie-ins and characters from Disney properties such as Pixar. Some sticker sets, such as those that celebrate special events like the 2012 Summer Olympics, are released for only a limited time. Other sticker sets that support charity are known as Charity Stickers. For example, in 2016, Line released "Support Kumamoto" Charity Stickers to provide aid to victims of the 2016 Kumamoto earthquakes. All proceeds earned from the sales of these stickers were to be donated to the Japanese Red Cross Society to provide financial support and aid for the victims.
The original default characters and stickers, known as Line Friends, were created by Kang Byeongmok, also known as "Mogi", in 2011.
There are over 1 billion stickers sent by worldwide users on a daily basis. The popular characters Milk & Mocha began as stickers on Line in Indonesia.
Games
NHN Japan created Line Games in 2011. Only those with a Line application account can install and play the games. Players can connect with friends, send and accept items and earn friend points. The game range includes: puzzles, match-three, side-scroller, musical performance, simulation, battle and spot-the-difference games. In September 2013, Line Corporation announced its games had been downloaded 200 million times worldwide.
On July 10, 2017, Line Games acquired NextFloor Corporation, developers of Dragon Flight and Destiny Child. On January 5, 2017, Line Games was announced as the publisher for Hundred Soul (formerly known as Project 100) by Hound 13.
On December 12, 2018, Line Games held a media event called LPG (Line Games-Play-Game) to introduce its games for 2019. Mobile games announced include: Exos Heroes (by OOZOO), Ravenix: The Card Master (also by OOZOO), Dark Summoners (by SkeinGlobe), Project PK (by Rock Square) and Super String (by Factorial Games). Project NM by Space Dive was also announced for PC. Games to be released on mobile and PC include: Project NL (by MeerKat Games) and Uncharted Waters Origins (by Line Games and Koei Tecmo).
On July 10, 2019, Nintendo released Dr. Mario World, co-developed by Line Games. On July 18, 2019, First Summoner, developed by SkeinGlobe, was released.
Line Pay
Line introduced Line Pay worldwide on December 16, 2014. The service allows users to request and send money from users in their contact list and make mobile payments in store. The service has since expanded to allow other features such as offline wire transfers when making purchases and ATM transactions like depositing and withdrawing money. Unlike other Line services, Line Pay is offered worldwide through the Line app.
Line Taxi
Line Taxi was launched in January 2015 in partnership with Nihon Kotsu, a local taxi service in Japan. Just like Line Pay, Line Taxi is not offered as a separate app but rather through the Line app where users can request a taxi and automatically pay for it when they connect their account to Line Pay.
Line Wow
Announced alongside Line Pay and Line Taxi, Line Wow is a service that allows users to instantly access delivery services for registered food, products and services.
Line Today
A news hub integrated in the Line app.
Line Shopping
A referral program for online shopping. Customers get extra discounts or earn Line Points by purchasing through the Line Shopping service.
Line Gift
A gift-sending service on Line. Customers can send gifts via Line.
Line Doctor
A matching platform for finding doctors online.
Line Lite
In 2015, a lower-overhead Android app was released for emerging markets called Line Lite. This supports messages and calls but not themes or timeline.
It became available worldwide in August 2015.
In January 2022, Line announced the discontinuation of Line Lite, effective 28 February 2022.
Limitations
Line accounts can be accessed on only one mobile device (running the app version) or one personal computer (running the desktop version). Additional mobile devices can install the app but require different mobile numbers or e-mail addresses for the Line account.
If "Line Lite" for Android was installed and activated, the user was told they will be "logged out of the normal Line". This message did not make clear that it was impossible to log back in to the normal Line, which would delete all history data when next launched. Line Lite has now been discontinued.
On 7 November 2024, Line versions earlier than version 12 ceased working, refusing even to start up and allow users to access their data, while later versions refused to work on some existing hardware and operating systems, generating opaque error messages along the lines of Nql7f_65 and IQvAj_65.
Security
In August 2013, it was possible to intercept a Line chat session at the network level using packet capture software and to reconstruct it on a PC. Messages were sent in cleartext to Line's server when on cellular data but encrypted when using Wi-Fi most of the time.
Until February 2016, it was also possible to "clone" an iPhone from a backup and then use the "cloned" iPhone to access the same Line account as used by the original iPhone. This loophole was widely rumored (but never proven) to have been used to intercept Line messages between the popular Japanese television personality Becky and her married romantic partner Enon Kawatani; the intercepted messages were published in the magazine Shukan Bunshun and led to the temporary suspension of Becky's television career.
In July 2016, Line Corporation turned on end-to-end encryption by default for all Line users. It had earlier been available as an opt-in feature since October 2015. The app uses the ECDH protocol for client-to-client encryption. In August 2016, Line expanded its end-to-end encryption to also encompass its group chats, voice calls and video calls.
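As a generic illustration (a sketch under assumed parameters, not Line's actual Letter Sealing implementation), an ECDH key agreement between two chat clients can be expressed with the Python cryptography package; the curve and key-derivation choices below are assumptions made for the example.

```python
# Illustrative ECDH key agreement between two chat clients; a generic sketch,
# not Line's protocol. Curve and KDF parameters are assumptions for the example.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each client generates an elliptic-curve key pair.
alice_private = ec.generate_private_key(ec.SECP256R1())
bob_private = ec.generate_private_key(ec.SECP256R1())

# Public keys are exchanged (in a real app, relayed via the messaging server).
alice_shared = alice_private.exchange(ec.ECDH(), bob_private.public_key())
bob_shared = bob_private.exchange(ec.ECDH(), alice_private.public_key())
assert alice_shared == bob_shared  # both sides derive the same shared secret

# Derive a symmetric message key from the shared secret with HKDF.
message_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example chat session",
).derive(alice_shared)
```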
In March 2021, the Japanese government announced that it would investigate Line after reports that it let Chinese system-maintenance engineers in Shanghai access Japanese users' data without informing them, beginning in August 2018. Four Chinese engineers in a Shanghai-based affiliate that Line subcontracted to develop AI accessed the messages stored in the Japanese computer system and personal information of Line users, such as: name, phone number, email address and Line ID. Photos and video footage posted by Japanese users were also stored on a server in South Korea. Line stated in March 2021 that it had since blocked access to user data at the Chinese affiliate and that it would revise its privacy policy and make it more explicit. Line had been used by the Japanese government and local governments to notify residents about developments in dealing with the coronavirus pandemic. In response to the reports of security issues, the national government and many local governments halted their usage of Line in late March 2021. In April 2021 the government ordered Line to take measures to properly protect customers' information and to report improvement measures within a month. Line also relocated image and other data stored in South Korea to Japan. As of November 2021, the Tokyo metropolitan government offers proof of COVID-19 vaccinations through the Line app, with expansion planned for other prefectures.
On 12 April 2021, Line suffered a large-scale crash in Taiwan.
More than 70,000 Line Pay users in Taiwan were affected by a leak of transaction information between December 26, 2020, and April 2, 2021.
In October 2023, the company confirmed a data breach where over 440,000 items of personal data were leaked. This included user age groups, genders, and partial service usage history, along with business partner and employee information like email addresses and names. The leak was traced back to unauthorized access through a Naver subcontractor's computer, which shared an authentication system with Line, allowing the breach. In March 2024, the Japanese government ordered Line and Naver to separate their systems because of the data leak.
Japan's Ministry of Internal Affairs and Communications issued administrative guidance to LY Corporation for a second time on April 16, 2024. LY Corporation is required to submit a further report to the ministry before July 1, 2024.
Censorship
Line suppressed content in China to conform with government censorship. Analysis by Citizen Lab showed that accounts registered with Chinese phone numbers download a list of banned words that cannot be sent or received through Line.
Line publicly confirmed the practice in December 2013. However, by 2014, access to Line chat servers had been entirely blocked by the Great Firewall, while the company still makes revenue in China from brick-and-mortar stores.
In Indonesia, Line has responded to pressure from the Indonesian Communication and Information Ministry to remove emojis and stickers it believes make reference to homosexuality, for example the emoji "two men holding hands". Line issued a public statement on the issue: "Line regrets the incidents of some stickers which are considered sensitive by many people. We ask for your understanding because at the moment we are working on this issue to remove the stickers".
In Thailand, Line is suspected of responding to pressure from the Thai government and, after previously approving 'Red Buffalo' stickers, which had been used to refer to the Red Shirts political faction, including by the Red Shirts themselves, removed the stickers.
In Russia, on 28 April 2017, Russia's Federal Service for Supervision of Communications, Information Technology and Mass Media (Roskomnadzor) placed Line on a banned list. Telecommunications companies appear to have taken steps in May to progressively block access to Line and other services using smartphones. The Russian Internet Regulation Law obliges social network operators to store personal information of their Russian customers in the country and submit it if requested by the authorities; Line is believed to have been found to be in breach of this provision. According to reports, BBM, Imo.im and Vchat have been newly added to the list of banned services, in addition to Line. On 3 May 2017 access to Line chat servers was entirely blocked by the Roskomnadzor and the Line servers were added to the Unified Register of Banned Sites, after which Russian users began to experience problems receiving and sending messages.
Issues
Similarity and Imitation of Other Services
Line's chief executive officer, Akira Morikawa, has stated that the company referred to services like KakaoTalk and Instagram when developing Line.
A game similar to KakaoTalk’s top hit "Anipang" appeared on Line as "Line Pop".
Patent Infringement
The "Furufuru" feature on Line, which allowed users to add friends by shaking their smartphones in the same location and exchanging contact information, was found to infringe on a patent held by a Kyoto-based IT company called "Future Eye." The company filed a lawsuit in the Tokyo District Court, demanding 300 million yen in compensation. The court ruled on May 19, 2021, acknowledging the patent infringement and ordering Line to pay approximately 14 million yen in damages. Line later revealed that it had reached a settlement with Future Eye and expressed its intention to continue respecting intellectual property while improving its services for customers. The "Furufuru" feature was discontinued in May 2020.
Personal Information Protection Issues
Personal Information Leaks and Inadequate Countermeasures
Since around 2012, as Line gained popularity, there was an increasing concern about the safety of personal information. It was noted that the contents of phonebooks, which contain "other people's personal information," were being uploaded to third parties without their consent.
Since phone numbers are used to identify accounts on Line, there is a risk of misuse when phone numbers, which are used for membership registrations or reservations, are exploited.
There have been concerns about the risks of phone number-linked social graphs being leaked or users’ Line registration names being associated with their phone numbers, especially when using the desktop version to register phone numbers randomly.
In April 2013, Line announced that it had received three global assurance reports, confirming the security of its information management. However, there have been several instances of personal information leaks since then. Of the three certifications, one was a simplified version of SOC2 that was used for marketing purposes.
Due to ongoing concerns, Line began recommending the use of the [+Message] service for more secure communication in 2021.
Ignoring Vulnerabilities
The Information-Technology Promotion Agency notified Line about several critical software vulnerabilities, such as the risk of external access to chat history and photos, and the exposure of data stored on SD cards. Despite being notified about multiple vulnerabilities, Line acknowledged only a few and did not take steps to resolve them. The agency repeatedly informed Line until the company finally admitted the issue, which was reported by FACTA Online in 2015.
Incidents
Use in Sexual Crimes and Other Criminal Activities
Since around 2012, there has been an increasing trend of incidents such as extortion and compensated dating occurring through Line. However, Line does not have a feature for exchanging contact information with strangers in the manner of dating apps; most of these incidents begin on external bulletin boards, websites, or apps, where users exchange IDs and then establish contact with each other. It has been observed that 90% of sexual crimes involving minors occur through smartphones, with the majority of cases using Line. Line's terms of service also prohibit using the service for the purpose of meeting strangers for dating.
Services that aim to exchange Line IDs through bulletin boards, unlike dating sites, are not subject to regulations like the Dating Sites Regulation Law or the Harmful Site Regulation Law, and are not filtered by laws aimed at protecting minors. As a result, the police can only respond to requests for action, while Line has taken measures such as issuing warnings about these services and periodically blocking ID search features for users under the age of 18.
Students with immature social skills have used Line to engage in verbal abuse, exclusion, the spreading of bullying images, and other forms of new bullying. Educational institutions and boards are urgently working on countermeasures.
Due to the ongoing misuse of Line for sexual crimes, Kyoto Prefecture and the Kyoto Prefectural Police have requested Line to implement measures promoting proper use, prevent the misuse of "bulletin board apps" that could lead to child pornography or child prostitution, and create systems that make it harder for users to access illegal or harmful content.
Use in Bullying
In 2014, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) reported an increase in bullying cases involving computers and mobile phones. Shuichi Hirai, Director of the Division of Students and Pupils at MEXT, stated, "Bullying through platforms like Line has evolved, and it has become harder for adults to detect these incidents". Cyberbullying, which occurs in private communications between children, makes it difficult for others to observe, and existing countermeasures have not been effective. Line is often used for malicious actions, such as excluding individuals from groups, spreading insults and defamatory statements, and sharing humiliating images. These types of harmful posts are frequently observed during long holidays.
In August 2017, the National Web Counseling Association received a record number of 353 inquiries about Line-related bullying.
Line Account Takeover Incidents
In June 2014, a series of incidents occurred where Line accounts were hijacked, and special fraud was committed using these accounts. The fraudsters used leaked passwords to illegally log in and posed as acquaintances to extort web money from victims. Reports also indicated that celebrities had their Line accounts hijacked.
Minor Involved in Illegal Access Leading to Prosecution
In the summer of 2019, two minors living in the Kanto region of Japan learned about a vulnerability in Line's image server. They accessed the server illegally using their home computers, leading to their prosecution under the Unauthorized Computer Access Law. The two minors claimed they were "just trying to see if it was true" when questioned.
Data Viewing and Storage Issues by the South Korean Government and Companies
Alleged Data Interception by the South Korean Government
On June 18, 2014, FACTA Online reported that the South Korean government had been intercepting Line's data (free calls and text messages). According to the article, South Korean cybersecurity officials admitted during a meeting with Japan's Cabinet Cybersecurity Center that the National Intelligence Service of South Korea was collecting and analyzing data exchanged on Line. They also claimed that wiretapping, or directly collecting data from communication lines, is not illegal in South Korea due to the absence of laws protecting the "secrecy of communications" in the country.
In response to this report, Line's president at the time, Akira Morikawa, denied the claims in a blog post, stating that no such incident had occurred. He emphasized that Line's communication data showed no signs of unauthorized access and argued that Line employs its own encryption format, making it impossible to analyze the data (however, until the issue was exposed, passwords and messages were stored and transmitted in plaintext). FACTA publisher Shigeo Abe countered the following day, stating that the article was based on solid evidence. At the time, the details supporting both sides' claims had not been fully disclosed, leading third parties to comment that there was insufficient evidence to make a clear judgment. On March 25, 2021, Morikawa's blog post was deleted, but after the deletion became a topic of discussion on social media and in the press, he re-published the article.
Line Usage Ban by the Taiwan Presidential Office
On September 23, 2014, the Presidential Office of the Republic of China announced that Line would be banned from use on government computers due to security concerns.
Storage and Viewing of User Data on Servers of Overseas Outsourcing Companies in South Korea, China, and Other Countries
On March 17, 2021, it was reported that all user image and video data, as well as transaction information from Line Pay, were being stored on servers belonging to Line's parent company, Naver. Access permissions for this data were granted to employees of Line's South Korean subsidiary, Line Plus, for security check purposes. On the morning of March 17, Chief Cabinet Secretary Katsunobu Kato stated at a press conference, "The relevant government agencies will confirm the facts and take appropriate action." Due to the current privacy policy not adequately communicating the situation to users, Line planned to review the policy and gradually transfer data to domestic servers in Japan starting mid-2021.
Line's data centers are located in multiple countries around the world. User data is mainly divided into text messages, images, and videos. Member registration information, chat texts, Line IDs, phone numbers, email addresses, friend relationships, friend lists, location information, address books, Line Profile+ (including names and addresses), voice call histories (without recordings), and Line service payment histories are managed on servers within Japan, handled with internal data governance according to company standards. However, images, videos, Keep, albums, notes, timelines, and Line Pay transaction information are managed on servers in South Korea.
Only "text messages" and "one-on-one calls" are encrypted with Line's self-developed end-to-end encryption protocol, "Letter Sealing." Even if the data is accessed from the database, the contents of "text messages" and "one-on-one calls" cannot be viewed. "Letter Sealing" is enabled by default, but if the recipient disables this feature, it will not function, even if the sender has it enabled. Text messages, images, and videos are encrypted along the communication route and sent to the server, regardless of the "Letter Sealing" setting. The image and video data are distributed across multiple servers for storage. The security team continuously monitors traffic to address any issues. The servers storing images and videos are planned to be gradually transferred to domestic servers in Japan from mid-2021.
Line Digital Technology (Shanghai) Limited, a subsidiary of Line Plus Corporation in Dalian, develops internal tools, AI features, and various functionalities available in the Line app. They also monitor servers, networks, and PC terminals under their jurisdiction for unauthorized access. During software development, the security team checks the source code and conducts security tests to prevent unauthorized programs from being included.
Naver China, a Chinese subsidiary of Naver Corporation that handles Line's outsourcing work, is responsible for monitoring chat texts, Line official accounts, and timeline content for users outside Japan, Taiwan, Thailand, and Indonesia who have been "reported" by other users.
Line's domestic subsidiary, Line Fukuoka, and a major outsourcing company group in China monitor around 18,000 timeline posts and 74,000 open chat messages daily. Texts reported for spam or nuisance behavior in user chats between Japanese users are uploaded in plain text from user devices to servers and are monitored by Line Fukuoka.
Line is used by the Japanese government and local municipalities for providing administrative services and COVID-19 notifications. In response to the reports, the Ministry of Internal Affairs and Communications announced on March 19 that it would temporarily suspend the use of Line services and request local governments to investigate their usage. On March 17, Fukuoka City confirmed with Line Fukuoka that personal information entered by citizens and chat content were not among the data accessible to Chinese contractors, and that the situation was not such that any access was possible. Therefore, the city decided to continue using the service for administrative purposes. On March 25, Hyogo Prefecture confirmed with Line Corporation that no unauthorized access or data leaks had occurred and decided to continue using Line for COVID-19-related services. Osaka City resumed services on Line that do not handle confidential information starting April 1.
On April 9, 2021, Akira Amari, a member of the Liberal Democratic Party of Japan, stated that Line and its parent company Z Holdings had promised to implement measures such as adopting a cybersecurity system based on the U.S. National Institute of Standards and Technology SP800-171 level and limiting data management to countries with information protection rules equivalent to Japan's. The domestic transfer of user information is scheduled to be completed by 2024.
Administrative Guidance by the Personal Information Protection Commission and the Ministry of Internal Affairs and Communications
Starting on March 31, 2021, the Personal Information Protection Commission conducted an on-site inspection of Line under Article 40, Section 1 of the Personal Information Protection Act. On April 23, the Commission issued administrative guidance regarding the issue where Chinese contractors were able to access Line's personal information management servers. Additionally, starting March 26, the Ministry of Internal Affairs and Communications (MIC) provided written guidance to Line about safety management measures for its internal systems and appropriate explanations to users.
On March 28, 2024, the Personal Information Protection Commission issued administrative guidance (recommendation) to Line Yahoo in response to a data breach involving 520,000 personal data records. On April 1, 2024, Line Yahoo reported that it would gradually separate its systems from Naver, with full separation scheduled for December 2026. On April 16, 2024, the Ministry of Internal Affairs and Communications conducted a rare second round of administrative guidance to Line Yahoo, criticizing insufficient measures to protect communication privacy and cybersecurity. They requested a review of the relationship with Naver.
Final Report on Economic Security Issues
On October 18, 2021, a special committee formed by Z Holdings, the parent company of Line, released its final report. It criticized the fact that Chinese affiliates had access to personal information stored on Korean servers, stating that economic security concerns were not adequately considered and that the system lacked a proper review framework. The report noted that the Chinese government, under the National Intelligence Law enacted in 2017, could compel private companies to provide information, strengthening government control over data. The committee highlighted the insufficient measures taken by Line to handle government access to data (government surveillance).
The report concluded that Line had misrepresented the situation by claiming that user data was stored exclusively in Japan, even though it was actually stored on servers in South Korea. The committee argued that this was due to Line's desire to maintain the image of the Line app as a domestic service and avoid openly acknowledging its ties to South Korea. Although Line's actions did not violate domestic law, the handling of personal data had caused a loss of trust. In response, Line issued a statement on October 18, acknowledging that its governance and risk management systems had not kept pace with its rapid growth. To prevent future incidents, the special committee recommended that Z Holdings establish an expert committee to consult third-party opinions and introduce a highly independent Data Protection Officer at major business units.
Other incidents
Alleged Violation of Laws by Line Games
It was reported by Mainichi Shimbun that the Ministry of Finance and the Kanto Local Finance Bureau conducted an on-site inspection of Line's mobile game "Line Pop" due to suspected violations of the Funds Settlement Act regarding paid in-game items. In response to the report, Line issued a statement titled "Our Opinion Regarding Some of the Reported Content," immediately denying part of the allegations.
Line Games Temporarily Suspended from App Store for Violating App Store Guidelines
The Line Quick Game was temporarily removed from the Apple App Store after violations of the App Store's regulations, which included issues with the games "Line で発見!! たまごっち", "探検ドリランド ブレイブハンターズ", and "釣り★スタ Quick". A total of 8 titles were taken down for one month due to these violations.
Abuse by Doctors in Line Healthcare
On August 3, 2020, it was reported that a doctor registered in Line Healthcare, a paid service, had verbally abused a user, violating the service's terms of use. Line Healthcare issued an apology and announced measures to prevent recurrence on August 20, 2020.
Personal Information Leakage from Line Game Refund Applications
From April 12, 2018, to September 20, 2020, personal information entered into the refund application form for Line Game service terminations was exposed and accessible through the Internet Archive. The exposed information included bank account details, email addresses, and Line app identifiers of 18 individuals. After the issue was discovered, the archive was deleted.
Data Breach in Line Creators Market
From April 17, 2014 (the launch of Line Creators Market), until October 31, 2020, files containing potentially sensitive personal information uploaded by Line Creators Market vendors were publicly accessible, as they had been included in the Internet Archive's collection. After the issue was identified, access to the files was blocked and the archived data was deleted.
Fake Posts on Line Open Chat
Shukan Bunshun reported that employees of Line were posting fake messages (known as "Sakura posts") on its new service, Open Chat, impersonating ordinary users such as high school girls or trendy women. The "Sakura posts" were not made by staff acting on their own initiative, but followed a manual called "Talk-room Operation" created by the head office. Line explained that these posts were made to improve the overall quality and user satisfaction of Open Chat and to create good-quality chat rooms. It also stated that staff involvement in managing some chat rooms was intended to protect minor users. However, Shukan Bunshun reported that, according to insiders, the posts were aimed at creating a positive image to facilitate future monetization. Line avoided directly commenting on the article but acknowledged the problem, stating that it was an issue for users, as they were not informed that the posts came from employees or that employees were posting under false identities. The company updated the manual on April 12 and provided explanations to users on April 15.
Incorrect Message Display in Reporting Function
Between 2017 and 2021, during program updates to the LINE app, a bug caused incorrect messaging to be displayed in the report function. The erroneous message stated that by reporting a user, it would send "the reported user's information, including the latest 10 messages" when, in fact, it only intended to send "the reported user's information." This bug occurred on iOS from December 4, 2017, to March 30, 2021, on Android from August 20, 2018, to March 28, 2021, and on Desktop from March 4, 2021, to March 30, 2021. After confirming the error in late March 2021, the issue was fixed. The reported content was not used for commercial purposes like advertising but was intended solely for public benefit to protect users from harmful content.
Related products
Line Friends
Line Friends are characters featured in the application's stickers. They include Brown, Cony, Sally, James, Moon, Boss, Jessica, Edward, Leonard, Choco, Pangyo and Rangers. Two anime series, Line Offline and Line Town, were produced in 2013, depicting the Line Friends as employees of the fictional Line Corporation.
Line Man
An on-demand assistant service for food and messenger delivery in Bangkok.
Line TV
A video on demand service operating in Taiwan and Thailand.
Stores
There are physical stores in Japan, South Korea, China, Taiwan, Hong Kong, the Philippines, Thailand, and the U.S., as well as a Korean online store, where Line Friends merchandise can be purchased. Occasionally, Line opens pop-up or temporary stores globally.
Anorexia nervosa
Anorexia nervosa (AN), often referred to simply as anorexia, is an eating disorder characterized by food restriction, body image disturbance, fear of gaining weight, and an overpowering desire to be thin.
Individuals with anorexia nervosa have a fear of being overweight or being seen as such, despite the fact that they are typically underweight. The DSM-5 describes this perceptual symptom as "disturbance in the way in which one's body weight or shape is experienced". In research and clinical settings, this symptom is called "body image disturbance" or body dysmorphia. Individuals with anorexia nervosa also often deny that they have a problem with low weight due to their altered perception of appearance. They may weigh themselves frequently, eat small amounts, and only eat certain foods. Some patients with anorexia nervosa binge eat and purge to influence their weight or shape. Purging can manifest as induced vomiting, excessive exercise, and/or laxative abuse. Medical complications may include osteoporosis, infertility, and heart damage, along with the cessation of menstrual periods. In cases where a patient continually refuses significant dietary intake and weight-restoration interventions, a psychiatrist may declare the patient to lack the capacity to make decisions; the patient's medical proxy may then decide that the patient needs to be fed by nasogastric tube under restraint.
Anorexia often develops during adolescence or young adulthood. Proposed origins include sexual abuse and problematic family relationships, especially overprotective parents who show excessive possessiveness over their children. Exacerbations of the illness are thought to follow major life changes or stress-inducing events. Ultimately, however, the causes of anorexia are varied and may differ from individual to individual. There is emerging evidence of a genetic component, with identical twins more often affected than fraternal twins. Cultural factors play a very significant role, with societies that value thinness having higher rates of the disease. Anorexia also commonly occurs in athletes who play sports where a low bodyweight is thought to be advantageous for aesthetics or performance, such as dance, gymnastics, running, and figure skating.
Treatment of anorexia involves restoring the patient to a healthy weight, treating the underlying psychological problems, and addressing maladaptive behaviors. A daily low dose of olanzapine (Zyprexa) has been shown to increase appetite and assist with weight gain in anorexia nervosa patients. Psychiatrists may also prescribe medications to help patients manage anxiety or depression. Different therapy methods may be useful, such as cognitive behavioral therapy or an approach in which parents assume responsibility for feeding their child, known as Maudsley family therapy. Sometimes people require admission to a hospital to restore weight. Evidence for benefit from nasogastric tube feeding is unclear, and such an intervention can be highly distressing for both anorexia patients and healthcare staff when administered against the patient's will under restraint. Some people with anorexia will have a single episode and recover, while others may have recurring episodes over years. The largest risk of relapse occurs within the first year after discharge from eating disorder treatment; within the first two years after discharge, approximately 31% of anorexia nervosa patients relapse. Many complications, both physical and psychological, improve or resolve with nutritional rehabilitation and adequate weight gain.
It is estimated to occur in 0.3% to 4.3% of women and 0.2% to 1% of men in Western countries at some point in their life. About 0.4% of young women are affected in a given year and it is estimated to occur ten times more commonly among women than men. It is unclear whether the increased incidence of anorexia observed in the 20th and 21st centuries is due to an actual increase in its frequency or simply due to improved diagnostic capabilities. In 2013, it directly resulted in about 600 deaths globally, up from 400 deaths in 1990. Eating disorders also increase a person's risk of death from a wide range of other causes, including suicide. About 5% of people with anorexia die from complications over a ten-year period with medical complications and suicide being the primary and secondary causes of death respectively.
Signs and symptoms
Anorexia nervosa is an eating disorder characterized by attempts to lose weight by way of starvation. A person with anorexia nervosa may exhibit a number of signs and symptoms, the type and severity of which may vary and be present but not readily apparent. Though anorexia is typically recognized by the physical manifestations of the illness, it is a mental disorder that can be present at any weight.
Anorexia nervosa, and the associated malnutrition that results from self-imposed starvation, can cause complications in every major organ system in the body. Hypokalemia, a drop in the level of potassium in the blood, is a sign of anorexia nervosa. A significant drop in potassium can cause abnormal heart rhythms, constipation, fatigue, muscle damage, and paralysis.
Signs and symptoms may be classified into various categories, including physical, cognitive, affective, behavioral and perceptual:
Physical symptoms
A low body mass index for one's age and height (except in cases of "atypical anorexia")
Rapid, continuous weight loss
Dry hair and skin, hair thinning, as well as hair loss
Feeling cold all the time (hypothermia)
Raynaud's phenomenon
Hypotension or orthostatic hypotension
Bradycardia or tachycardia
Chronic fatigue
Insomnia
Having severe muscle tension, aches and pains
Irregular or absent menstrual periods
Infertility
Gastrointestinal disease
Halitosis (from vomiting or starvation-induced ketosis)
Abdominal distension
Russell's sign: calluses or scratches on the back of the hand, which can be a tell-tale sign of self-induced vomiting
Tooth erosion
Lanugo: soft, fine hair growing over the face and body
Orange discoloration of the skin, particularly the feet (Carotenosis)
Cognitive symptoms
An obsession with counting calories and monitoring contents of food
Preoccupation with food, recipes, or cooking; may cook elaborate dinners for others, but not eat the food themselves or consume a very small portion
Admiration of thinner people
Thoughts of being fat or not thin enough
An altered mental representation of one's body
Impaired theory of mind, exacerbated by lower BMI and depression
Memory impairment
Difficulty in abstract thinking and problem solving
Rigid and inflexible thinking
Poor self-esteem
Hypercriticism and perfectionism
Affective symptoms
Depression
Ashamed of oneself or one's body
Anxiety disorders
Rapid mood swings
Emotional dysregulation
Alexithymia
Behavioral symptoms
Compulsive weighing
Regular body checking
Food restriction, both in terms of caloric content and type (for example, macronutrient groups)
Food rituals, such as cutting food into tiny pieces and measuring it, refusing to eat around others, and hiding or discarding of food
Purging, which may be achieved through self-induced vomiting, laxatives, diet pills, emetics, diuretics, or exercise. The goals of purging are various, including the prevention of weight gain, discomfort with the physical sensation of being full or bloated, and feelings of guilt or impurity.
Excessive exercise or compulsive movement, such as pacing.
Self harming or self-loathing
Social withdrawal and solitude, stemming from the avoidance of friends, family, and events where food may be present
Excessive water consumption to create a false impression of satiety
Excessive caffeine consumption
Perceptual symptoms
Perception that one is not sick (anosognosia) or not sick "enough," which may prevent some from seeking recovery
Perception of self as heavier or fatter than in reality, i.e., body image disturbance
Altered body schema, i.e., a distorted and unconscious perception of one's body size and shape that influences how the individual experiences their body during physical activities. For example, a patient with anorexia nervosa may genuinely fear that they cannot fit through a narrow passageway. However, due to their malnourished state, their body is significantly smaller than that of someone with a normal BMI who would actually struggle to fit through the same space. In spite of having a small frame, the patient's altered body schema leads them to perceive their body as larger than it is.
Altered interoception
Interoception
Interoception involves the conscious and unconscious sense of the internal state of the body, and it has an important role in homeostasis and regulation of emotions. Aside from noticeable physiological dysfunction, interoceptive deficits also prompt individuals with anorexia to concentrate on distorted perceptions of multiple elements of their body image. This exists in both people with anorexia and in healthy individuals due to impairment in interoceptive sensitivity and interoceptive awareness.
Aside from weight gain and outer appearance, people with anorexia also report abnormal bodily functions such as indistinct feelings of fullness. This provides an example of miscommunication between internal signals of the body and the brain. Due to impaired interoceptive sensitivity, powerful cues of fullness may be detected prematurely in highly sensitive individuals, which can result in decreased calorie consumption and generate anxiety surrounding food intake in anorexia patients. People with anorexia also report difficulty identifying and describing their emotional feelings and the inability to distinguish emotions from bodily sensations in general, called alexithymia.
Interoceptive awareness and emotion are deeply intertwined, and abnormalities in one may affect the other. Anorexia patients also exhibit emotional regulation difficulties that trigger emotionally cued eating behaviors, such as restricting food or exercising excessively. Impaired interoceptive sensitivity and interoceptive awareness can lead anorexia patients to adopt distorted interpretations of weight gain that are cued by physical sensations related to digestion (e.g., fullness). Combined, these interoceptive and emotional elements may trigger maladaptive and negatively reinforced behavioral responses that assist in the maintenance of anorexia. In addition to these difficulties with metacognition, people with anorexia also have difficulty with social cognition, including interpreting others' emotions and demonstrating empathy. Abnormal interoceptive awareness and interoceptive sensitivity, shown through all of these examples, have been observed so frequently in anorexia that they have become key characteristics of the illness.
Comorbidity
Other psychological issues may factor into anorexia nervosa. Some pre-existing disorders can increase a person's likelihood of developing an eating disorder, and anorexia nervosa can in turn contribute to the development of certain conditions. The presence of psychiatric comorbidity has been shown to affect the severity and type of anorexia nervosa symptoms in both adolescents and adults.
Post traumatic stress disorder remains highly prevalent among patients with anorexia nervosa, with more comorbid PTSD being associated with more severe eating disorder symptoms. Obsessive-compulsive disorder (OCD) and obsessive-compulsive personality disorder (OCPD) are highly comorbid with AN. OCD is linked with more severe symptomatology and worse prognosis. The causality between personality disorders and eating disorders has yet to be fully established. Other comorbid conditions include depression, alcoholism, substance abuse, borderline and other personality disorders, anxiety disorders, attention deficit hyperactivity disorder, and body dysmorphic disorder (BDD). Depression and anxiety are the most common comorbidities, and depression is associated with a worse outcome.
Autism spectrum disorders occur more commonly among people with eating disorders than in the general population, with about 30% of children and adults with AN likely having autism. Zucker et al. (2007) proposed that conditions on the autism spectrum make up the cognitive endophenotype underlying anorexia nervosa and appealed for increased interdisciplinary collaboration.
Causes
There is evidence for biological, psychological, developmental, and sociocultural risk factors, but the exact cause of eating disorders is unknown.
Genetic
Anorexia nervosa is highly heritable. Twin studies have shown a heritability rate of 28–58%. First-degree relatives of those with anorexia have roughly 12 times the risk of developing anorexia. Association studies have been performed, studying 128 different polymorphisms related to 43 genes including genes involved in regulation of eating behavior, motivation and reward mechanics, personality traits and emotion. Consistent associations have been identified for polymorphisms associated with agouti-related peptide, brain derived neurotrophic factor, catechol-o-methyl transferase, SK3 and opioid receptor delta-1. Epigenetic modifications, such as DNA methylation, may contribute to the development or maintenance of anorexia nervosa, though clinical research in this area is in its infancy.
A 2019 study found a genetic relationship with mental disorders such as schizophrenia, obsessive–compulsive disorder, anxiety disorder and depression, as well as with metabolic functioning, including a negative genetic correlation with fat mass, type 2 diabetes and leptin.
Environmental
Obstetric complications: prenatal and perinatal complications may factor into the development of anorexia nervosa, such as preterm birth, maternal anemia, diabetes mellitus, preeclampsia, placental infarction, and neonatal heart abnormalities. Neonatal complications may also have an influence on harm avoidance, one of the personality traits associated with the development of AN.
Neuroendocrine dysregulation: altered signaling of peptides that facilitate communication between the gut, brain and adipose tissue, such as ghrelin, leptin, neuropeptide Y and orexin, may contribute to the pathogenesis of anorexia nervosa by disrupting regulation of hunger and satiety.
Gastrointestinal diseases: people with gastrointestinal disorders may be more at risk of developing disorders of eating practices than the general population, principally restrictive eating disturbances. An association of anorexia nervosa with celiac disease has been found. The role that gastrointestinal symptoms play in the development of eating disorders seems rather complex. Some authors report that unresolved symptoms prior to gastrointestinal disease diagnosis may create a food aversion in these persons, causing alterations to their eating patterns. Other authors report that greater symptoms throughout their diagnosis led to greater risk. It has been documented that some people with celiac disease, irritable bowel syndrome or inflammatory bowel disease who are not conscious of the importance of strictly following their diet choose to consume their trigger foods to promote weight loss. On the other hand, individuals with good dietary management may develop anxiety, food aversion and eating disorders because of concerns around cross contamination of their foods. Some authors suggest that medical professionals should evaluate the presence of unrecognized celiac disease in all people with an eating disorder, especially if they present any gastrointestinal symptoms (such as decreased appetite, abdominal pain, bloating, distension, vomiting, diarrhea or constipation), weight loss, or growth failure. They also suggest routinely asking celiac patients about weight or body shape concerns, dieting or vomiting for weight control, in order to evaluate the possible presence of an eating disorder, especially in women.
Anorexia nervosa is more likely to occur in a person's pubertal years. Some explanatory hypotheses for the rising prevalence of eating disorders in adolescence are "increase of adipose tissue in girls, hormonal changes of puberty, societal expectations of increased independence and autonomy that are particularly difficult for anorexic adolescents to meet; [and] increased influence of the peer group and its values."
Anorexia as adaptation
Studies have hypothesized that disordered eating patterns may also arise secondary to starvation. The results of the Minnesota Starvation Experiment, for example, showed that normal controls will exhibit many of the same behavioral patterns associated with AN when subjected to starvation. Similarly, scientific experiments conducted using mice have suggested that other mammals exhibit these same behaviors, especially compulsive movement, when caloric restriction is induced, likely mediated by various changes in the neuroendocrine system. This has given further rise to the hypothesis that anorexia nervosa and other restrictive eating disorders may be an evolutionarily advantageous adaptive response to a perceived famine in the environment. Recent research has further expanded this perspective, showing how caloric restriction may be adaptive in a volatile or uncertain environment, thus potentially explaining the association between adverse childhood experiences and an increased risk of developing anorexia nervosa.
Psychological
Early theories of the cause of anorexia linked it to childhood sexual abuse or dysfunctional families; evidence is conflicting, and well-designed research is needed. The fear of food is known as sitiophobia or cibophobia, and is part of the differential diagnosis. Other psychological causes of anorexia include low self-esteem, feeling like there is lack of control, depression, anxiety, and loneliness. People with anorexia are, in general, highly perfectionistic and most have obsessive compulsive personality traits which may facilitate sticking to a restricted diet. It has been suggested that patients with anorexia are rigid in their thought patterns, and place a high level of importance upon being thin.
Although the prevalence rates vary greatly, between 37% and 100%, there appears to be a link between traumatic events and eating disorder diagnosis. Approximately 72% of individuals with anorexia report experiencing a traumatic event prior to the onset of eating disorder symptoms, with binge-purge subtype reporting the highest rates. There are many traumatic events that have been identified as possible risk factors for the development of anorexia, the first of which was childhood sexual abuse. A considerable number of patients who developed anorexia nervosa faced childhood maltreatment in the forms of emotional abuse and neglect, although researchers have been less apt to investigate this type of abuse. Interpersonal, as opposed to non-interpersonal trauma, has been seen as the most common type of traumatic event, which can encompass sexual, physical, and emotional abuse. Individuals who experience repeated trauma, like those who experience trauma perpetrated by a caregiver or loved one, have increased symptom severity of anorexia and a greater prevalence of comorbid psychiatric diagnoses.
As mentioned previously, the prevalence of PTSD among anorexia nervosa patients ranges from 4% to 24%. A complicated symptom profile develops when trauma and anorexia meld; the bodily experience of the individual is changed and intrusive thoughts and sensations may be experienced. Traumatic events can lead to intrusive and obsessive thoughts, and the symptom of anorexia that has been most closely linked to a PTSD diagnosis is increased obsessive thoughts pertaining to food. Similarly, impulsivity is linked to the purge and binge-purge subtypes of anorexia, trauma, and PTSD. Emotional trauma (e.g., invalidation or a chaotic family environment in childhood) may lead to difficulty with emotions, particularly with identifying emotions and understanding how physical sensations contribute to the emotional response.
When trauma is perpetrated on an individual, it can lead to feelings of not being safe within their own body. Both physical and sexual abuse can lead to an individual seeing their body as belonging to an "other" and not to the "self". Individuals who feel as though they have no control over their bodies due to trauma may use food as a means of control because the choice to eat is an unmatched expression of control. By controlling the intake of food, individuals can decide when and how much they eat. Individuals, particularly children experiencing abuse, may feel a loss of control over their life, circumstances, and their own bodies. Particularly sexual abuse, but also physical abuse, can make individuals feel that the body is not a safe place and an object over which another has control. Starvation, in the case of anorexia, may also lead to reduction in the body as a sexual object, making starvation a solution. Restriction may also be a means by which the pain an individual is experiencing can be communicated.
Sociological
Anorexia nervosa has been increasingly diagnosed since 1950; the increase has been linked to vulnerability and internalization of body ideals. People in professions where there is a particular social pressure to be thin (such as models and dancers) were more likely to develop anorexia, and those with anorexia have much higher contact with cultural sources that promote weight loss. This trend can also be observed for people who partake in certain sports, such as jockeys and wrestlers. There is a higher incidence and prevalence of anorexia nervosa in sports with an emphasis on aesthetics, where low body fat is advantageous, and sports in which one has to make weight for competition. Family group dynamics can play a role in the perpetuation of anorexia including negative expressed emotion in overprotective families where blame is frequently experienced among its members. In the face of constant pressure to be thin, often perpetuated by teasing and bullying, feelings of low self-esteem and self-worth can arise, including the perception that one is not "deserving" of food.
Media effects
Persistent exposure to media that present thin ideal may constitute a risk factor for body dissatisfaction and anorexia nervosa. Cultures that equate thinness with beauty often have higher rates of anorexia nervosa. The cultural ideal for body shape for men versus women continues to favor slender women and athletic, V-shaped muscular men. Media sources such as magazines, television shows, and social media can contribute to body dissatisfaction and disordered eating across the globe, by emphasizing Western ideals of slimness. A 2002 review found that, of the magazines most popular among people aged 18 to 24 years, those read by men, unlike those read by women, were more likely to feature ads and articles on shape than on diet. Body dissatisfaction and internalization of body ideals are risk factors for anorexia nervosa that threaten the health of both male and female populations.
Another online factor contributing to higher rates of eating disorders such as anorexia nervosa is websites and social media communities that extol the attainment of body ideals. These communities promote anorexia nervosa through the use of religious metaphors, lifestyle descriptions, and "thinspiration" or "fitspiration" (inspirational photo galleries and quotes that aim to serve as motivators for the attainment of body ideals). Pro-anorexia websites reinforce the internalization of body ideals and the importance of their attainment.
Cultural
Cultural attitudes towards body image, beauty, and health also significantly affect the incidence of anorexia nervosa. There is a stark contrast between Western societies that idolize slimness and certain Eastern traditions that worship gods depicted with larger bodies, and these varying cultural norms influence eating behaviors, self-perception, and anorexia in their respective cultures. For example, despite the fact that "fat phobia", or a fear of fat, is a key DSM-5 diagnostic criterion for anorexia, anorexic patients in Asia rarely display this trait, as deep-rooted cultural values in Asian cultures favor larger bodies. Fat phobia thus appears to be intricately linked to Western culture, illustrating how cultural perceptions shape anorexia in different ways and pointing to the need for greater, more diverse cultural consideration in the diagnosis and experience of anorexia. For instance, a cross-sectional study comparing British South Asian and English adolescent anorexia patients found that the two groups' symptom profiles differed: the South Asian patients were less likely to exhibit fat phobia as a symptom than their English counterparts, instead exhibiting loss of appetite. However, both groups had distorted body images, implying the possibility of disordered eating and highlighting the need for cultural sensitivity when diagnosing anorexia.
Notably, although these cultural distinctions persist, modernization and globalization are slowly homogenizing these attitudes. Anorexia is increasingly tied to the pressures of a global culture that celebrates Western ideals of thinness. The spread of Western media, fashion, and lifestyle ideals across the globe has begun to shift perceptions and standards of beauty in diverse cultures, contributing to a rise in the incidence of anorexia in places where it was once rare. Anorexia, once primarily associated with Western culture, seems more than ever to be linked to the cultures of modernity and globalization.
Mechanisms
Evidence from physiological, pharmacological and neuroimaging studies suggests serotonin (also called 5-HT) may play a role in anorexia. While acutely ill, metabolic changes may produce a number of biological findings in people with anorexia that are not necessarily causative of the anorexic behavior. For example, abnormal hormonal responses to challenges with serotonergic agents have been observed during acute illness, but not recovery. Nevertheless, increased cerebrospinal fluid concentrations of 5-hydroxyindoleacetic acid (a metabolite of serotonin), and changes in anorectic behavior in response to acute tryptophan depletion (tryptophan is a metabolic precursor to serotonin), support a role for serotonin in anorexia. The activity of the 5-HT2A receptors has been reported to be lower in patients with anorexia in a number of cortical regions, evidenced by lower binding potential of this receptor as measured by PET or SPECT, independent of the state of illness. While these findings may be confounded by comorbid psychiatric disorders, taken as a whole they implicate serotonin in anorexia. These alterations in serotonin have been linked to traits characteristic of anorexia such as obsessiveness, anxiety, and appetite dysregulation.
Neuroimaging studies investigating the functional connectivity between brain regions have observed a number of alterations in networks related to cognitive control, introspection, and sensory function. Alterations in networks related to the dorsal anterior cingulate cortex may be related to excessive cognitive control of eating related behaviors. Similarly, altered somatosensory integration and introspection may relate to abnormal body image. A review of functional neuroimaging studies reported reduced activations in "bottom up" limbic region and increased activations in "top down" cortical regions which may play a role in restrictive eating.
Compared to controls, people who have recovered from anorexia show reduced activation in the reward system in response to food, and reduced correlation between self reported liking of a sugary drink and activity in the striatum and anterior cingulate cortex. Increased binding potential of 11C radiolabelled raclopride in the striatum, interpreted as reflecting decreased endogenous dopamine due to competitive displacement, has also been observed.
Structural neuroimaging studies have found global reductions in both gray matter and white matter, as well as increased cerebrospinal fluid volumes. Regional decreases in the left hypothalamus, left inferior parietal lobe, right lentiform nucleus and right caudate have also been reported in acutely ill patients. However, these alterations seem to be associated with acute malnutrition and largely reversible with weight restoration, at least in nonchronic cases in younger people. In contrast, some studies have reported increased orbitofrontal cortex volume in currently ill and in recovered patients, although findings are inconsistent. Reduced white matter integrity in the fornix has also been reported.
Diagnosis
A diagnostic assessment includes the person's current circumstances, biographical history, current symptoms, and family history. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, focusing on views on weight and patterns of eating.
DSM-5
Anorexia nervosa is classified under the Feeding and Eating Disorders in the latest revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM 5). There is no specific BMI cut-off that defines low weight required for the diagnosis of anorexia nervosa.
The diagnostic criteria for anorexia nervosa (all of which must be met for diagnosis) are:
Restriction of energy intake relative to requirements leading to a low body weight. (Criterion A)
Intense fear of gaining weight or persistent behaviors that interfere with gaining weight. (Criterion B)
Disturbance in the way a person's weight or body shape is experienced or a lack of recognition about the risks of the low body weight. (Criterion C)
Relative to the previous version of the DSM (DSM-IV-TR), the 2013 revision (DSM-5) reflects changes in the criteria for anorexia nervosa. Most notably, the amenorrhea (absent period) criterion was removed. Amenorrhea was removed for several reasons: it does not apply to males, it is not applicable to females before menarche, after menopause, or while taking birth control pills, and some women who meet the other criteria for AN still report some menstrual activity.
Subtypes
There are two subtypes of AN:
Restrictive Type: In the most recent months leading up to the evaluation, the patient has not engaged in binging or purging via laxative or diuretic abuse, enemas, or self-induced vomiting. Weight loss is accomplished mainly through one or more of the following methods: fasting, dieting, and excessive exercise.
Binge-eating / Purging Type: In the last few months, the patient has recurrently engaged in binge-purge cycles.
Levels of severity
The use of the body mass index in the diagnosis of eating disorders has been controversial, largely owing to its oversimplification of health and failure to take into account complicating factors such as body composition or the initial bodyweight of the patient prior to the onset of AN. As such, the DSM-5 does not have a strict BMI cutoff for the diagnosis of anorexia nervosa, but it nevertheless uses BMI to establish levels of severity, which it states as follows:
Mild: BMI of 17 or above
Moderate: BMI of 16–16.99
Severe: BMI of 15–15.99
Extreme: BMI of less than 15
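For reference, BMI is calculated as body weight in kilograms divided by the square of height in metres (kg/m²). As an illustrative example, a patient 1.70 m tall weighing 46 kg would have a BMI of approximately 15.9, placing them in the "severe" range above.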
Investigations
Medical tests to check for signs of physical deterioration in anorexia nervosa may be performed by a general physician or psychiatrist.
Physical Examination:
Blinded Weight: The patient will strip and put on a surgical gown alone. The patient will step backwards onto the scale as the healthcare provider blocks the reading from the patient's line of vision.
Orthostatic Vitals: The patient lies completely flat for five minutes, and then the medical provider measures the patient's blood pressure and heart rate. The patient stands up and stays stationary for two minutes. Then, the blood pressure and heart rate are assessed again, noting any patient symptoms upon standing, such as dizziness. According to the College of Family Physicians of Canada, a change in orthostatic heart rate greater than 20 beats/minute or a change in orthostatic blood pressure greater than 10 mmHg can warrant admission for an adolescent.
Examination of hands and arms for brittle nails, Russell's sign, swollen joints, lanugo, and self harm.
Auscultation of the chest for rubs, gallops, thrills, murmurs, and apex beat.
Examination of the face for puffiness, dental decay, swollen parotid glands, and conjunctival hemorrhage.
Blood Tests:
Complete blood count (CBC): a test of the white blood cells, red blood cells and platelets used to assess the presence of various disorders such as leukocytosis, leukopenia, thrombocytosis and anemia which may result from malnutrition.
Chem-20: also known as SMA-20, a group of twenty separate chemical tests performed on blood serum. Tests include protein and electrolytes such as potassium, chloride and sodium, as well as tests specific to liver and kidney function.
Glucose tolerance test: Oral glucose tolerance test (OGTT) used to assess the body's ability to metabolize glucose. Can be useful in detecting various disorders such as diabetes, an insulinoma, Cushing's Syndrome, hypoglycemia and polycystic ovary syndrome.
Lipid profile: includes cholesterol (including total cholesterol, HDL and LDL) and triglycerides.
Serum cholinesterase test: a test of liver enzymes (acetylcholinesterase and pseudocholinesterase) useful as a test of liver function and to assess the effects of malnutrition.
Liver function tests: a series of tests used to assess liver function; some of the tests are also used in the assessment of malnutrition, protein deficiency, kidney function, bleeding disorders, and Crohn's disease.
Luteinizing hormone (LH) response to gonadotropin-releasing hormone (GnRH): tests the pituitary gland's response to GnRH, a hormone produced in the hypothalamus. Hypogonadism is often seen in anorexia nervosa cases.
Creatine kinase (CK) test: measures the circulating blood levels of creatine kinase an enzyme found in the heart (CK-MB), brain (CK-BB) and skeletal muscle (CK-MM).
Blood urea nitrogen (BUN) test: urea nitrogen is the byproduct of protein metabolism first formed in the liver then removed from the body by the kidneys. The BUN test is primarily used to test kidney function. A low BUN level may indicate the effects of malnutrition.
BUN-to-creatinine ratio: A BUN-to-creatinine ratio is used to predict various conditions. A high BUN/creatinine ratio can occur in severe dehydration, acute kidney failure, congestive heart failure, and intestinal bleeding. A low BUN/creatinine ratio can indicate a low protein diet, celiac disease, rhabdomyolysis, or cirrhosis of the liver.
Thyroid function tests: tests used to assess thyroid functioning by checking levels of thyroid-stimulating hormone (TSH), thyroxine (T4), and triiodothyronine (T3).
Additional Medical Screenings:
Urinalysis: a variety of tests performed on the urine used in the diagnosis of medical disorders, to test for substance abuse, and as an indicator of overall health
Electrocardiogram (EKG or ECG): measures electrical activity of the heart. It can be used to detect various disorders such as hyperkalemia.
Electroencephalogram (EEG): measures the electrical activity of the brain. It can be used to detect abnormalities such as those associated with pituitary tumors.
Differential diagnoses
A variety of medical and psychological conditions have been misdiagnosed as anorexia nervosa; in some cases the correct diagnosis was not made for more than ten years.
The distinction between binge purging anorexia, bulimia nervosa and Other Specified Feeding or Eating Disorders (OSFED) is often difficult for non-specialist clinicians. A main factor differentiating binge-purge anorexia from bulimia is the gap in physical weight. Patients with bulimia nervosa are ordinarily at a healthy weight, or slightly overweight. Patients with binge-purge anorexia are commonly underweight. Moreover, patients with the binge-purging subtype may be significantly underweight and typically do not binge-eat large amounts of food. In contrast, those with bulimia nervosa tend to binge large amounts of food. It is not unusual for patients with an eating disorder to "move through" various diagnoses as their behavior and beliefs change over time.
Treatment
Treatment for people with anorexia nervosa should be individualized and tailored to each person's medical, psychological, and nutritional circumstances. Treating this condition with an interdisciplinary team is suggested so that the different health care specialties can help address the different challenges that can be associated with recovery. Treatment for anorexia typically involves a combination of medical care, psychological interventions such as therapy, and nutritional (dietary) interventions. Hospitalization may also be needed in some cases, and the person requires a comprehensive medical assessment to help direct the treatment options. There is no conclusive evidence that any particular treatment approach for anorexia nervosa works better than others. In some clinical settings a specific body image intervention is performed to reduce body dissatisfaction and body image disturbance. Although restoring the person's weight is the primary task at hand, optimal treatment also includes and monitors behavioral change in the individual.
In general, treatment for anorexia nervosa aims to address three main areas:
Restoring the person to a healthy weight;
Treating the psychological disorders related to the illness;
Reducing or eliminating behaviors or thoughts that originally led to the disordered eating.
Psychological support
Psychological support, often in the form of cognitive-behavioral therapy (CBT), family-based treatment, or psychotherapy, aims to change distorted thoughts and behaviors around food, body image, and self-worth, with family-based therapy being a key approach for younger patients.
Family-based therapy
Family-based treatment (FBT) may be more successful than individual therapy for adolescents with AN. Various forms of family-based treatment have been proven to work in the treatment of adolescent AN, including conjoint family therapy (CFT), in which the parents and child are seen together by the same therapist, and separated family therapy (SFT), in which the parents and child attend therapy separately with different therapists. Proponents of family therapy for adolescents with AN assert that it is important to include parents in the adolescent's treatment. Although family-based therapy is effective and the primary choice of treatment for adolescents, the evidence supporting its use in adults is weak, and there is no evidence that it is helpful for them. A four- to five-year follow-up study of the Maudsley family therapy, an evidence-based manualized model, showed full recovery at rates up to 90%. The Maudsley model of family therapy is problem focused, and the treatment targets re-establishing regular eating, weight restoration, and the reduction of illness behaviors like purging. The Maudsley model is split into three phases, with phase one focusing on the parents implementing weight restoration in the child; phase two transitioning control over food back to the individual at an age-appropriate level; and phase three focusing on other issues related to typical adolescent development (e.g., social and other psychological developments) and helping parents learn how to interact with their child. Although this model is recommended by the National Institute of Mental Health (NIMH), critics claim that it has the potential to create power struggles in an intimate relationship and may disrupt equal partnerships.
Cognitive behavioral therapy
Cognitive behavioral therapy (CBT) is useful in adolescents and adults with anorexia nervosa. One of the best-known psychotherapies in the field is CBT-E, an enhanced cognitive behavioral therapy that focuses specifically on eating disorder psychopathology. Acceptance and commitment therapy is a third-wave cognitive-behavioral therapy which has shown promise in the treatment of AN. Cognitive remediation therapy (CRT) is also used in treating anorexia nervosa. Schema-focused therapy (a form of CBT) was developed by Jeffrey Young and is effective in helping patients identify origins and triggers for disordered eating.
Psychotherapy
Psychotherapy for individuals with AN is challenging as they may value being thin and may seek to maintain control and resist change. Initially, developing a desire to change is fundamental. There is no strong evidence to suggest one type of psychotherapy over another for treating anorexia nervosa in adults or adolescents.
Diet
Diet is the most essential factor to work on in people with anorexia nervosa, and must be tailored to each person's needs. Food variety is important when establishing meal plans, as are foods with higher energy density, especially in carbohydrates and dietary fat, which are easier for the undernourished body to break down. Evidence of a role for zinc supplementation during refeeding is unclear. Dietitians work with the medical team to add dietary supplements, such as iron (sometimes given every other day) or calcium.
Historically, practitioners have slowly increased calories at a measured pace from a starting point of around 1,200 kcal/day. However, as understanding of the process of weight restoration has improved, an approach that favors a higher starting point and a more rapid rate of increase has become increasingly common. In either approach, the end goal is typically in the range of 3,000 to 3,500 kcal/day. People who experience hypermetabolism in response to refeeding have higher caloric intake needs.
Extreme hunger
People who have undergone significant caloric deficits often report experiencing hyperphagia, or extreme hunger. With adequate refeeding and the full restoration of both fat mass and fat-free mass, hunger eventually becomes normalized. However, the restoration of fat-free mass typically takes longer than that of body fat, leading to "fat overshoot" or "overshoot weight," wherein the patient's body fat levels are greater than pre-starvation levels. The timeline of the complete normalization of hunger varies considerably from individual to individual, from a few months to multiple years.
Refeeding syndrome
Treatment professionals tend to be conservative with refeeding in anorexic patients due to the risk of refeeding syndrome (RFS), which occurs when a malnourished person is refed too quickly for their body to adapt. Two of the most common indicators that RFS is occurring are low phosphate levels and low potassium levels. RFS is most likely to happen in severely or extremely underweight anorexic patients, as well as when medical comorbidities, such as infection or cardiac failure, are present. In these circumstances, it is recommended to start refeeding more slowly but to build up rapidly as long as RFS does not occur. Recommendations on energy requirements in the most medically compromised patients vary, from 5–10 kcal/kg/day to 1900 kcal/day. This risk-averse approach can lead to underfeeding, which results in poorer outcomes for short- and long-term recovery.
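As an illustrative calculation of the lower end of these recommendations, a hypothetical patient weighing 40 kg given 5–10 kcal/kg/day would receive roughly 200–400 kcal/day, far below the 3,000–3,500 kcal/day typically targeted by the end of weight restoration.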
Medication
Pharmaceuticals have limited benefit for anorexia itself. There is a lack of good information from which to make recommendations concerning the effectiveness of antidepressants in treating anorexia. Administration of olanzapine has been shown to result in a modest but statistically significant increase in body weight of anorexia nervosa patients.
Admission to hospital
Patients with AN may be deemed to have a lack of insight regarding the necessity of treatment, and thus may be involuntarily treated without their consent. AN has a high mortality and patients admitted in a severely ill state to medical units are at particularly high risk. Diagnosis can be challenging, risk assessment may not be performed accurately, consent and the need for compulsion may not be assessed appropriately, refeeding syndrome may be missed or poorly treated and the behavioural and family problems in AN may be missed or poorly managed. Guidelines published by the Royal College of Psychiatrists recommend that medical and psychiatric experts work together in managing severely ill people with AN.
Experience of treatment
Patients involved in treatment sometimes felt that treatment focused on biological aspects of body weight and eating behaviour change rather than on their perceptions or emotional state. Patients felt that a therapist's trust in them, shown by treating them as a complete person with their own capacities, was significant. Some patients defined recovery from AN in terms of reclaiming a lost identity. Additionally, access to timely treatment can be hindered by systemic challenges within the medical system. Some individuals have reported experiencing delays in treatment, particularly when transitioning from adolescence to adulthood. It has been noted that once patients reach the age of 17, they may encounter obstacles in receiving continued care, with treatment resuming only after they turn 18. This delay can exacerbate the severity of the disorder.
Healthcare workers involved in the treatment of anorexia reported frustration and anger at setbacks in treatment and noncompliance, and were afraid of patients dying. Some healthcare workers felt that they did not understand the treatment and that medical doctors were making the decisions. They may feel powerless to improve a patient's situation and deskilled as a result. Healthcare workers involved in monitoring patients' consumption of food felt watched themselves. Healthcare workers often feel a degree of moral dissonance from not being in control of outcomes, which they may protect against by focusing on individual tasks, avoiding identifying with patients (for example, by making their own eating behavior very different and not sharing personal information with patients), and blaming patients for their distress. Healthcare workers would inflexibly follow process to avoid responsibility. Healthcare workers attempted to reach a balance by gradually giving patients back control, avoiding feeling sole responsibility for outcomes, being mindful of their own emotional state, and trying to view eating disorders as external to patients.
Prognosis
AN has the highest mortality rate of any psychological disorder. The mortality rate is 11 to 12 times greater than in the general population, and the suicide risk is 56 times higher. Half of women with AN achieve a full recovery, while an additional 20–30% may partially recover. Not all people with anorexia recover completely: about 20% develop anorexia nervosa as a chronic disorder. If anorexia nervosa is not treated, serious complications such as heart conditions and kidney failure can arise and eventually lead to death. The average number of years from onset to remission of AN is seven for women and three for men. After ten to fifteen years, 70% of people no longer meet the diagnostic criteria, but many still continue to have eating-related problems. People who have autism recover more slowly, probably due to autism's effects on thinking patterns, such as reduced cognitive flexibility.
Alexithymia (the inability to identify and describe one's own emotions) influences treatment outcome. Recovery is also viewed on a spectrum rather than as black and white. According to the Morgan-Russell criteria, individuals can have a good, intermediate, or poor outcome. Even when a person is classified as having a "good" outcome, weight only has to be within 15% of average, and normal menstruation must be present in females; the "good" outcome classification also does not take psychological health into account. Recovery for people with anorexia nervosa is undeniably positive, but recovery does not mean a return to normal.
Complications
Anorexia nervosa can have serious implications if its duration and severity are significant and if onset occurs before the completion of growth, pubertal maturation, or the attainment of peak bone mass. Complications specific to adolescents and children with anorexia nervosa can include growth retardation, as height gain may slow and can stop completely with severe weight loss or chronic malnutrition. In such cases, provided that growth potential is preserved, height increase can resume and reach full potential after normal intake is resumed. Height potential is normally preserved if the duration and severity of illness are not significant or if the illness is accompanied by delayed bone age (especially prior to a bone age of approximately 15 years), as hypogonadism may partially counteract the effects of undernutrition on height by allowing for a longer duration of growth compared to controls. Appropriate early treatment can preserve height potential, and may even help to increase it in some post-anorexic subjects, due to factors such as long-term reduced estrogen-producing adipose tissue levels compared to premorbid levels. In some cases, especially where onset is before puberty, complications such as stunted growth and pubertal delay are usually reversible. Gastroesophageal reflux disease (GERD) is another complication that can affect those who self-induce vomiting. Extreme acid exposure can also cause dental problems such as dental erosion and enamel hypoplasia; if purging behaviors persist, the acid in the stomach can erode tooth enamel.
Anorexia nervosa causes alterations in the female reproductive system; significant weight loss, as well as psychological stress and intense exercise, typically results in a cessation of menstruation in women who are past puberty. In patients with anorexia nervosa, there is a reduction of the secretion of gonadotropin releasing hormone in the central nervous system which prevents ovulation. Anorexia nervosa can also result in pubertal delay or arrest. Both height gain and pubertal development are dependent on the release of growth hormone and gonadotropins (LH and FSH) from the pituitary gland. Suppression of gonadotropins in people with anorexia nervosa has been documented. Typically, growth hormone (GH) levels are high, but levels of IGF-1, the downstream hormone that should be released in response to GH are low; this indicates a state of "resistance" to GH due to chronic starvation. IGF-1 is necessary for bone formation, and decreased levels in anorexia nervosa contribute to a loss of bone density and potentially contribute to osteopenia or osteoporosis. Anorexia nervosa can also result in reduction of peak bone mass. Buildup of bone is greatest during adolescence, and if onset of anorexia nervosa occurs during this time and stalls puberty, low bone mass may be permanent.
Hepatic steatosis, or fatty infiltration of the liver, can also occur, and is an indicator of malnutrition in children. Neurological disorders that may occur as complications include seizures and tremors. Wernicke encephalopathy, which results from vitamin B1 deficiency, has been reported in patients who are extremely malnourished; symptoms include confusion, problems with the muscles responsible for eye movements and abnormalities in walking gait.
The most common gastrointestinal complications of anorexia nervosa are delayed stomach emptying and constipation, but also include elevated liver function tests, diarrhea, acute pancreatitis, heartburn, difficulty swallowing, and, rarely, superior mesenteric artery syndrome. Delayed stomach emptying, or gastroparesis, often develops following food restriction and weight loss; the most common symptom is bloating with gas and abdominal distension, and often occurs after eating. Other symptoms of gastroparesis include early satiety, fullness, nausea, and vomiting. The symptoms may inhibit efforts at eating and recovery, but can be managed by limiting high-fiber foods, using liquid nutritional supplements, or using metoclopramide to increase emptying of food from the stomach. Gastroparesis generally resolves when weight is regained.
Cardiac complications
Anorexia nervosa increases the risk of sudden cardiac death, though the precise cause is unknown. Cardiac complications include structural and functional changes to the heart. Some of these cardiovascular changes are mild and are reversible with treatment, while others may be life-threatening. Cardiac complications can include arrhythmias, abnormally slow heart beat, low blood pressure, decreased size of the heart muscle, reduced heart volume, mitral valve prolapse, myocardial fibrosis, and pericardial effusion.
Abnormalities in conduction and repolarization of the heart that can result from anorexia nervosa include QT prolongation, increased QT dispersion, conduction delays, and junctional escape rhythms. Electrolyte abnormalities, particularly hypokalemia and hypomagnesemia, can cause anomalies in the electrical activity of the heart, and result in life-threatening arrhythmias. Hypokalemia most commonly results in patients with anorexia when restricting is accompanied by purging (induced vomiting or laxative use). Hypotension (low blood pressure) is common, and symptoms include fatigue and weakness. Orthostatic hypotension, a marked decrease in blood pressure when standing from a supine position, may also occur. Symptoms include lightheadedness upon standing, weakness, and cognitive impairment, and may result in fainting or near-fainting. Orthostasis in anorexia nervosa indicates worsening cardiac function and may indicate a need for hospitalization. Hypotension and orthostasis generally resolve upon recovery to a normal weight. The weight loss in anorexia nervosa also causes atrophy of cardiac muscle. This leads to decreased ability to pump blood, a reduction in the ability to sustain exercise, a diminished ability to increase blood pressure in response to exercise, and a subjective feeling of fatigue.
Some individuals may also have a decrease in cardiac contractility. Cardiac complications can be life-threatening, but the heart muscle generally improves with weight gain, and the heart normalizes in size over weeks to months, with recovery. Atrophy of the heart muscle is a marker of the severity of the disease, and while it is reversible with treatment and refeeding, it is possible that it may cause permanent, microscopic changes to the heart muscle that increase the risk of sudden cardiac death. Individuals with anorexia nervosa may experience chest pain or palpitations; these can be a result of mitral valve prolapse. Mitral valve prolapse occurs because the size of the heart muscle decreases while the tissue of the mitral valve remains the same size. Studies have shown rates of mitral valve prolapse of around 20 percent in those with anorexia nervosa, while the rate in the general population is estimated at 2–4 percent. It has been suggested that there is an association between mitral valve prolapse and sudden cardiac death, but it has not been proven to be causative, either in patients with anorexia nervosa or in the general population.
Relapse
Rates of relapse after treatment range from 30% to 72% over a period of 2–26 months, with a rate of approximately 50% in the 12 months after weight restoration. Relapse occurs in approximately a third of people in hospital, and is greatest in the first six to eighteen months after release from an institution. BMI or measures of body fat and leptin levels at discharge were the strongest predictors of relapse, as well as signs of eating psychopathology at discharge. Duration of illness, age, severity, the proportion of AN binge-purge subtype, and presence of comorbidities are also contributing factors.
Epidemiology
Anorexia is estimated to occur in 0.9% to 4.3% of women and 0.2% to 0.3% of men in Western countries at some point in their life. About 0.4% of young females are affected in a given year and it is estimated to occur three to ten times less commonly in males. The cause of this disparity is not well-established but is thought to be linked to both biological and socio-cultural factors. Rates in most of the developing world are unclear. Often it begins during the teen years or young adulthood. Medical students are a high risk group, with an overall estimated prevalence of 10.4% globally.
The lifetime rate of atypical anorexia nervosa, a form of ED-NOS in which the person loses a significant amount of weight and is at risk for serious medical complications despite having a higher body-mass index, is much higher, at 5–12%. Additionally, a UCSF study showed severity of illness is independent of current BMI, and "patients with large, rapid, or long-duration of weight loss were more severely ill regardless of their current weight."
While anorexia became more commonly diagnosed during the 20th century, it is unclear whether this was due to an increase in its frequency or simply better diagnosis. Most studies show that since at least 1970 the incidence of AN in adult women has been fairly constant, while there is some indication that the incidence may have been increasing for girls aged between 14 and 20.
Underrepresentation
In non-Westernized countries, including those in Africa (excluding South Africa), eating disorders are less frequently reported and studied compared to Western countries, with available data mostly limited to case reports and isolated studies rather than prevalence investigations. Proposed explanations for these lower rates of eating disorders, lower reporting, and lower research activity include the effects of westernisation and culture change on the prevalence of anorexia.
Anorexia in athletes is often overlooked. Research emphasizes the importance of taking athletes' diet, weight, and symptoms into account when diagnosing anorexia, instead of just looking at weight and BMI. For athletes, ritualized activities such as weigh-ins place emphasis on gaining and losing large amounts of weight, which may promote the development of eating disorders among them.
Males
While anorexia nervosa is more commonly found in women, it can also affect men, with a lifetime prevalence of 0.3% in men. However, a lack of awareness of eating disorders in males may lead to underdiagnosis and underreporting. This can include a lack of knowledge about what kinds of behaviors males with eating disorders might display, as they differ slightly from those found in females, with a 2009 survey showing that females are more inclined to report fasting, body checking, and body avoidance, whereas males are more prone to report overeating. An additional difference is in the use of supplements to affect bodyweight, with women being more prone to using diet pills and men being more prone to using anabolic steroids. In a 2013 Canadian study, 4% of boys in grade nine used steroids.
Moreover, men who exhibit symptoms of anorexia may not meet the BMI criteria outlined in the DSM-IV due to having more muscle mass and therefore a higher bodyweight. Consequently, a subclinical diagnosis, such as Eating Disorder Not Otherwise Specified (ED-NOS) in the DSM-IV or Other Specified Feeding or Eating Disorder (OSFED) in the DSM-5, is often made instead.
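Body mass index is weight in kilograms divided by the square of height in metres, so added muscle mass raises BMI at a given height. A minimal Python sketch (the weights, heights, and the screening cutoff below are all purely illustrative, not clinical criteria) shows the effect described above:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# Purely illustrative figures: two people of the same height, one carrying more muscle.
lean = bmi(weight_kg=50.0, height_m=1.75)      # about 16.3
muscular = bmi(weight_kg=62.0, height_m=1.75)  # about 20.2

HYPOTHETICAL_CUTOFF = 17.5  # placeholder screening value, not a diagnostic criterion

for label, value in [("lean", lean), ("muscular", muscular)]:
    print(f"{label}: BMI {value:.1f}, below cutoff: {value < HYPOTHETICAL_CUTOFF}")
```

With these invented numbers the more muscular person sits well above the placeholder cutoff despite comparable restriction, which is the masking effect the paragraph describes.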
Men with anorexia may also experience body dysmorphia, reporting their bodies to be twice as large as they actually are, and body dissatisfaction, especially with regard to muscularity and body composition. As in the case of women, men are more prone to develop an eating disorder if their occupation or sport emphasizes having a slim physique or lighter weight, such as modeling, dancing, horse racing, wrestling, and gymnastics. Hormonal changes may also be observed in males with anorexia nervosa, with marked changes in their serum testosterone, luteinizing hormone, and follicle stimulating hormone. Such extreme endocrine disturbances can potentially result in infertility.
Anorexic men are sometimes colloquially referred to as manorexic or as having bigorexia.
Elderly
An increasing trend of anorexia among the elderly, termed "Anorexia of Aging," is observed, characterized by behaviors similar to those seen in typical anorexia nervosa but often accompanied by excessive laxative use. Most geriatric anorexia patients limit their food intake to dairy or grains, whereas an adolescent anorexic has a more general limitation.
Anorexia affecting older adults has two types: early onset and late onset. Early onset refers to a recurrence of anorexia in late life in an individual who experienced the disease during their youth. Late onset describes instances where the eating disorder begins for the first time late in life.
The stimulus for anorexia in elderly patients is typically a loss of control over their lives, which can be brought on by many events, including moving into an assisted living facility. This is also a time when most older individuals experience a rise in conflict with family members, such as limitations on driving or on personal freedom, which increases the likelihood of an issue with anorexia. There can be physical issues in the elderly that lead to anorexia of aging, including a decline in chewing ability, a decline in taste and smell, and a decrease in appetite. Psychological reasons for the elderly to develop anorexia can include depression and bereavement, and even an indirect attempt at suicide. There are also common comorbid psychiatric conditions with aging anorexics, including major depression, anxiety disorder, obsessive compulsive disorder, bipolar disorder, schizophrenia, and dementia.
The signs and symptoms that go along with anorexia of aging are similar to what is observed in adolescent anorexia, including sudden weight loss, unexplained hair loss or dental problems, and a desire to eat alone.
There are also several medical conditions that can result from anorexia in the elderly, including an increased risk of illness and death. There is also a decline in muscle and bone mass as a result of reduced protein intake. Anorexia in the aging population can additionally cause irreparable damage to the kidneys, heart, or colon, and an imbalance of electrolytes.
Many assessments are available to diagnose anorexia in the aging community. These assessments include the Simplified Nutritional Assessment Questionnaire (SNAQ) and the Functional Assessment of Anorexia/Cachexia Therapy (FAACT). Specific to the geriatric populace, the interRAI system identifies detrimental conditions in assisted living facilities and nursing homes. Even a simple screening for nutritional insufficiencies, such as low levels of important vitamins, can help to identify someone who has anorexia of aging.
Anorexia in the elderly should be identified by retirement communities but is often overlooked, especially in patients with dementia. Some studies report that malnutrition is prevalent in nursing homes, with up to 58% of residents suffering from it, which can make anorexia difficult to identify. One of the challenges with assisted living facilities is that they often serve bland, monotonous food, which lessens residents' desire to eat.
The treatment for anorexia of aging does not differ from that for any other age group. Some of the treatment options include outpatient and inpatient facilities, antidepressant medication, and behavioral therapy such as meal observation and discussing eating habits.
History
The history of anorexia nervosa begins with descriptions of religious fasting dating from the Hellenistic era and continuing into the medieval period. The medieval practice of self-starvation by women, including some young women, in the name of religious piety and purity also concerns anorexia nervosa; it is sometimes referred to as anorexia mirabilis. The earliest medical descriptions of anorexic illnesses are generally credited to English physician Richard Morton in 1689.
Etymologically, anorexia is a term of Greek origin: an- (ἀν-, prefix denoting negation) and orexis (ὄρεξις, "appetite"), translating literally to "a loss of appetite". In and of itself, this term does not have a harmful connotation, e.g., exercise-induced anorexia simply means that hunger is naturally suppressed during and after sufficiently intense exercise sessions. It is the adjective nervosa that indicates the functional and non-organic nature of the disorder, but this adjective is also often omitted when the context is clear. Despite the literal translation of anorexia, the feeling of hunger in anorexia nervosa is frequently present and the pathological control of this instinct is a source of satisfaction for the patients.
The term "anorexia nervosa" was coined in 1873 by Sir William Gull, one of Queen Victoria's personal physicians. Gull published a seminal paper providing a number of detailed case descriptions of patients with anorexia nervosa. In the same year, French physician Ernest-Charles Lasègue similarly published details of a number of cases in a paper entitled De l'Anorexie hystérique.
In the late 19th century anorexia nervosa became widely accepted by the medical profession as a recognized condition. Awareness of the condition was largely limited to the medical profession until the latter part of the 20th century, when German-American psychoanalyst Hilde Bruch published The Golden Cage: the Enigma of Anorexia Nervosa in 1978. Despite major advances in neuroscience, Bruch's theories tend to dominate popular thinking. A further important event was the death of the popular singer and drummer Karen Carpenter in 1983, which prompted widespread ongoing media coverage of eating disorders.
| Biology and health sciences | Mental disorders | Health |
37824359 | https://en.wikipedia.org/wiki/Evolution%20of%20fish | Evolution of fish | The evolution of fish began about 530 million years ago during the Cambrian explosion. It was during this time that the early chordates developed the skull and the vertebral column, leading to the first craniates and vertebrates. The first fish lineages belong to the Agnatha, or jawless fish. Early examples include Haikouichthys. During the late Cambrian, eel-like jawless fish called the conodonts, and small mostly armoured fish known as ostracoderms, first appeared. Most jawless fish are now extinct; but the extant lampreys may approximate ancient pre-jawed fish. Lampreys belong to the Cyclostomata, which includes the extant hagfish, and this group may have split early on from other agnathans.
The earliest jawed vertebrates probably developed during the late Ordovician period. They are first represented in the fossil record from the Silurian by two groups of fish: the armoured fish known as placoderms, which evolved from the ostracoderms; and the Acanthodii (or spiny sharks). The jawed fish that are still extant in modern days also appeared during the late Silurian: the Chondrichthyes (or cartilaginous fish) and the Osteichthyes (or bony fish). The bony fish evolved into two separate groups: the Actinopterygii (or ray-finned fish) and Sarcopterygii (which includes the lobe-finned fish).
During the Devonian period a great increase in fish variety occurred, especially among the ostracoderms and placoderms, and also among the lobe-finned fish and early sharks. This has led to the Devonian being known as the age of fishes. It was from the lobe-finned fish that the tetrapods evolved, the four-limbed vertebrates, represented today by amphibians, reptiles, mammals, and birds. Transitional tetrapod-like forms first appeared during the early Devonian, and by the late Devonian the first true tetrapods had appeared. The diversity of jawed vertebrates may indicate the evolutionary advantage of a jawed mouth, but it is unclear if the advantage of a hinged jaw is greater biting force, improved respiration, or a combination of factors.
Fish, like many other organisms, have been greatly affected by extinction events throughout natural history. The earliest ones, the Ordovician–Silurian extinction events, led to the loss of many species. The Late Devonian extinction led to the extinction of the ostracoderms and placoderms by the end of the Devonian, as well as other fish. The spiny sharks became extinct at the Permian–Triassic extinction event; the conodonts became extinct at the Triassic–Jurassic extinction event. The Cretaceous–Paleogene extinction event, and the present day Holocene extinction, have also affected fish variety and fish stocks.
Overview
Fish may have evolved from an animal similar to a coral-like sea squirt (a tunicate), whose larvae resemble early fish in important ways. The first ancestors of fish may have kept the larval form into adulthood, as some sea squirts do today, although this path cannot be proven.
Vertebrates, in other words the first fish, originated about 530 million years ago during the Cambrian explosion, which saw the rise in animal diversity.
The first ancestors of fish, or animals that were probably closely related to fish, were Haikouichthys and Myllokunmingia. These two genera both appeared around 530 Mya. Unlike the other fauna that dominated the Cambrian, these groups had the basic vertebrate body plan: a notochord, rudimentary vertebrae, and a well-defined head and tail. All of these early vertebrates lacked jaws, relying instead on filter-feeding close to the seabed.
These were followed by indisputable fossil vertebrates in the form of heavily armoured fish discovered in rocks from the Ordovician period (500–430 Mya).
The first jawed vertebrates appeared in the late Ordovician and became common in the Devonian, often known as the "Age of Fishes". The two groups of bony fish, the Actinopterygii and Sarcopterygii, evolved and became common. The Devonian saw the demise of virtually all jawless fish, save for lampreys and hagfish, as well as the Placodermi, a group of armoured fish that dominated much of the late Silurian, and the rise of the first labyrinthodonts, transitional between fish and amphibians.
The colonisation of new niches resulted in diversification of body plans and sometimes an increase in size. The Devonian period (395 to 345 Mya) brought in such giants as the placoderm Dunkleosteus, which could grow up to seven meters long, and early air-breathing fish that could remain on land for extended periods. Among this latter group were ancestral amphibians.
The reptiles appeared from labyrinthodonts in the subsequent Carboniferous period. The anapsid and synapsid amniotes were common during the late Paleozoic, while the diapsids became dominant during the Mesozoic. In the sea, the bony fish became dominant.
The later radiations, such as those of fish in the Silurian and Devonian periods, involved fewer taxa, mainly with very similar body plans. The first animals to venture onto dry land were arthropods. Some fish had lungs and strong, bony fins and could crawl onto the land also.
Jawless fish
Jawless fish belong to the superclass Agnatha in the phylum Chordata, subphylum Vertebrata. Agnatha means 'un-jawed, without jaws' (from Ancient Greek). It excludes all vertebrates with jaws, known as gnathostomes. Although a minor element of modern marine fauna, jawless fish were prominent among the early fish in the early Paleozoic. Two types of Early Cambrian animal which apparently had fins, vertebrate musculature, and gills are known from the early Cambrian Maotianshan shales of China: Haikouichthys and Myllokunmingia. They have been tentatively assigned to Agnatha by Janvier. A third possible agnathan from the same region is Haikouella.
Many Ordovician, Silurian and Devonian agnathians were armoured with heavy, bony, and often elaborately sculpted, plates derived from mineralized scales. The first armoured agnathans—the ostracoderms, precursors to the bony fish and hence to the tetrapods (including humans)—are known from the Middle Ordovician, and by the Late Silurian the agnathans had reached the high point of their evolution. Most of the ostracoderms, such as thelodonts, osteostracans and galeaspids, were more closely related to the gnathostomes than to the surviving agnathans, known as cyclostomes. Cyclostomes apparently split from other agnathans before the evolution of dentine and bone, which are present in many fossil agnathans, including conodonts. Agnathans declined in the Devonian and never recovered.
The agnathans as a whole are paraphyletic, because most extinct agnathans belong to the stem group of the gnathostomes, the jawed fish that evolved from them. Molecular data, both from rRNA and from mtDNA strongly supports the theory that living agnathans, known as cyclostomes, are monophyletic. In phylogenetic taxonomy, the relationships between animals are not typically divided into ranks, but illustrated as a nested "family tree" known as a cladogram. Phylogenetic groups are given definitions based on their relationship to one another, rather than purely on physical traits such as the presence of a backbone. This nesting pattern is often combined with traditional taxonomy, in a practice known as evolutionary taxonomy.
The cladogram for jawless fish is based on studies by Philippe Janvier and others for the Tree of Life Web Project. (†=group is extinct)
†Conodonts
Conodonts resembled primitive jawless eels. They appeared 520 Ma ago and were wiped out 200 Ma ago. Initially they were known only from tooth-like microfossils called conodont elements. These "teeth" have been variously interpreted as filter-feeding apparatuses or as a "grasping and crushing array". Conodonts ranged in length from a centimetre to the 40 cm Promissum. Their large eyes had a lateral position, which makes a predatory role unlikely. The preserved musculature hints that some conodonts (Promissum at least) were efficient cruisers but incapable of bursts of speed. In 2012 researchers classified the conodonts in the phylum Chordata on the basis of their fins with fin rays, chevron-shaped muscles and notochord. Some researchers see them as vertebrates similar in appearance to modern hagfish and lampreys, though phylogenetic analysis suggests that they are more derived than either of these groups.
†Ostracoderms
Ostracoderms ( 'shell-skinned') are armoured jawless fish of the Paleozoic. The term does not often appear in classifications today because the taxon is paraphyletic or polyphyletic, and has no phylogenetic meaning. However, the term is still used informally to group together the armoured jawless fish.
The ostracoderm armour consisted of 3–5 mm polygonal plates that shielded the head and gills, and then overlapped further down the body like scales. The eyes were particularly shielded. Earlier chordates used their gills for both respiration and feeding, whereas ostracoderms used their gills for respiration only. They had up to eight separate pharyngeal gill pouches along the side of the head, which were permanently open with no protective operculum. Unlike invertebrates that use ciliated motion to move food, ostracoderms used their muscular pharynx to create a suction that pulled small and slow-moving prey into their mouths.
The first fossil fish that were discovered were ostracoderms. The Swiss anatomist Louis Agassiz received some fossils of bony armored fish from Scotland in the 1830s. He had a hard time classifying them as they did not resemble any living creature. He compared them at first with extant armored fish such as catfish and sturgeons but later, realizing that they had no movable jaws, classified them in 1844 into a new group "ostracoderms".
Ostracoderms existed in two major groups, the more primitive heterostracans and the cephalaspids. Later, about 420 million years ago, the jawed fish evolved from one of the ostracoderms. After the appearance of jawed fish, most ostracoderm species underwent a decline, and the last ostracoderms became extinct at the end of the Devonian period.
Jawed fish
The vertebrate jaw probably originally evolved in the Silurian period and appeared in the placoderm fish, which further diversified in the Devonian. The two most anterior pharyngeal arches are thought to have become the jaw itself and the hyoid arch, respectively. The hyoid system suspends the jaw from the braincase of the skull, permitting great mobility of the jaws. Placoderms were long assumed to be a paraphyletic assemblage leading to more derived gnathostomes, but the discovery of Entelognathus suggests that they are directly ancestral to modern bony fish.
As in most vertebrates, fish jaws are bony or cartilaginous and oppose vertically, comprising an upper jaw and a lower jaw. The jaw is derived from the most anterior two pharyngeal arches supporting the gills, and usually bears numerous teeth. The skull of the last common ancestor of today's jawed vertebrates is assumed to have resembled that of sharks.
It is thought that the original selective advantages offered by the jaw were not related to feeding, but to increases in respiration efficiency. The jaws were used in the buccal pump (observable in modern fish and amphibians) that pumps water across the gills of fish or air into the lungs in the case of amphibians. Over evolutionary time the more familiar use of jaws (to humans) in feeding was selected for and became a very important function in vertebrates. Many teleost fish have substantially modified their jaws for suction feeding and jaw protrusion, resulting in highly complex jaws with dozens of bones involved.
Jawed vertebrates and jawed fish evolved from earlier jawless fish. The cladogram for jawed vertebrates is a continuation of the cladogram in the section above. (†=extinct)
†Placoderms
Placoderms, class Placodermi ('plate-skinned'), are extinct armoured prehistoric fish, which appeared about 430 Ma in the Early to Middle Silurian. They were mostly wiped out during the Late Devonian extinction event, 378 Ma, though some survived and made a slight recovery in diversity during the Famennian epoch before dying out entirely at the close of the Devonian, 360 mya; they are ultimately ancestral to modern gnathostome vertebrates. Their head and thorax were covered with massive and often ornamented armoured plates. The rest of the body was scaled or naked, depending on the species. The armour shield was articulated, with the head armour hinged to the thoracic armour. This allowed placoderms to lift their heads, unlike ostracoderms. Placoderms were the first jawed fish; their jaws likely evolved from the first of their gill arches. The chart on the right shows the rise and demise of the separate placoderm lineages: Acanthothoraci, Rhenanida, Antiarchi, Petalichthyidae, Ptyctodontida and Arthrodira.
†Spiny sharks
Spiny sharks, class Acanthodii, are extinct fish that share features with both bony and cartilaginous fish, though ultimately more closely related to and ancestral to the latter. Despite being called "spiny sharks", acanthodians predate sharks, though they gave rise to them. They evolved in the sea at the beginning of the Silurian period, some 50 million years before the first sharks appeared. Eventually competition from bony fish proved too much, and the spiny sharks died out in Permian times about 250 Ma. In form they resembled sharks, but their epidermis was covered with tiny rhomboid platelets like the scales of holosteans (gars, bowfins).
Cartilaginous fish
Cartilaginous fish, class Chondrichthyes, consisting of sharks, rays and chimaeras, appeared by about 395 million years ago, in the Middle Devonian, evolving from acanthodians. The class contains the subclasses Holocephali (chimaeras) and Elasmobranchii (sharks and rays). The radiation of elasmobranches in the chart on the right is divided into the following taxa: Cladoselache, Eugeneodontiformes, Symmoriida, Xenacanthiformes, Ctenacanthiformes, Hybodontiformes, Galeomorphii, Squaliformes and Batoidea.
Bony fish
Bony fish, class Osteichthyes, are characterised by a bony skeleton rather than cartilage. They appeared in the late Silurian, about 419 million years ago. The recent discovery of Entelognathus strongly suggests that bony fish (and possibly cartilaginous fish, via acanthodians) evolved from early placoderms. A subclass of the Osteichthyes, the ray-finned fish (Actinopterygii), have become the dominant group of fish in the post-Paleozoic and modern world, with some 30,000 living species. The bony (and cartilaginous) fish groups that emerged after the Devonian were characterised by steady improvements in foraging and locomotion.
Lobe-finned fish
Lobe-finned fish, fish belonging to the class Sarcopterygii, are mostly extinct bony fish, basally characterised by robust and stubby lobe fins containing a robust internal skeleton, cosmoid scales and internal nostrils. Their fins are fleshy, lobed, and paired, joined to the body by a single bone. The fins of lobe-finned fish differ from those of all other fish in that each is borne on a fleshy, lobelike, scaly stalk extending from the body. The pectoral and pelvic fins are articulated in ways resembling the tetrapod limbs they were the precursors to. The fins evolved into the legs of the first tetrapod land vertebrates, amphibians. They also possess two dorsal fins with separate bases, as opposed to the single dorsal fin of ray-finned fish. The braincase of lobe-finned fish primitively has a hinge line, but this is lost in tetrapods and lungfish. Many early lobe-finned fish have a symmetrical tail. All lobe-finned fish possess teeth covered with true enamel.
Lobe-finned fish, such as coelacanths and lungfish, were the most diverse group of bony fish in the Devonian. Taxonomists who subscribe to the cladistic approach include the grouping Tetrapoda within the Sarcopterygii, and the tetrapods in turn include all species of four-limbed vertebrates. The fin-limbs of lobe-finned fish such as the coelacanths show a strong similarity to the expected ancestral form of tetrapod limbs. The lobe-finned fish apparently followed two different lines of development and are accordingly separated into two subclasses, the Rhipidistia (including the lungfish, and the Tetrapodomorpha, which include the Tetrapoda) and the Actinistia (coelacanths). The first lobe-finned fish, found in the uppermost Silurian (c. 418 Mya), closely resembled spiny sharks, which became extinct at the end of the Paleozoic. In the Early to Middle Devonian (416–385 Mya), while the predatory placoderms dominated the seas, some lobe-finned fish came into freshwater habitats.
In the Early Devonian (416–397 Mya), the lobe-finned fish split into two main lineages: the coelacanths and the rhipidistians. The heyday of the former was the Late Devonian and Carboniferous, from 385 to 299 Mya, as they were more common during those periods than in any other period in the Phanerozoic; coelacanths still live today in the oceans (genus Latimeria). The rhipidistians, whose ancestors probably lived in estuaries, migrated into freshwater habitats. They in turn split into two major groups: the lungfish and the tetrapodomorphs. The lungfish's greatest diversity was in the Triassic period; today there are three genera left. The lungfish evolved the first proto-lungs and proto-limbs, developing the ability to live outside a water environment in the Middle Devonian (397–385 Mya). The first tetrapodomorphs, which included the gigantic rhizodonts, had the same general anatomy as the lungfish, who were their closest kin, but they appear not to have left their water habitat until the Late Devonian epoch (385–359 Mya), with the appearance of tetrapods (four-legged vertebrates). Lobe-finned fish continued until near the end of the Paleozoic era, suffering heavy losses during the Permian–Triassic extinction event (251 Mya).
Ray-finned fish
Ray-finned fish, class Actinopterygii, differ from lobe-finned fish in that their fins consist of webs of skin supported by spines ("rays") made of bone or horn. There are other differences in respiratory and circulatory structures. Ray-finned fish normally have skeletons made from true bone, though this is not true of sturgeons and paddlefish.
Ray-finned fish are a dominant vertebrate group, containing half of all known vertebrate species. They inhabit abyssal depths in the sea, coastal inlets and freshwater rivers and lakes, and are a major source of food for humans.
Timeline
| Biology and health sciences | Basics_4 | Biology |
45294967 | https://en.wikipedia.org/wiki/Penghu%201 | Penghu 1 | Penghu 1 is a fossil jaw (mandible) belonging to an extinct hominin species of the genus Homo. It was collected from seafloor sediments of the Penghu Channel off the coast of Taiwan, dating to sometime in the Middle Pleistocene or Late Pleistocene. The precise classification of the mandible is disputed. Some believe it to be the fossil of a H. erectus, an archaic H. sapiens or possibly a Denisovan.
History and discovery
The fossil was recovered sometime before 2008 by fishermen working in the Penghu Channel between the Penghu Islands and mainland Taiwan and acquired by Tainan citizen Mr. Kun-Yu Tsai. The fossil was found 60–120 meters below the water's surface and about 25 kilometers off the western coast of Taiwan in an area which was once part of the mainland. Sea levels have risen since the last ice age and in consequence have submerged the area where the fossil was recovered. After Mr. Tsai donated the fossil to the National Museum of Natural Science, it was described in 2015 by an international team of Japanese, Taiwanese, and Australian scientists.
Penghu 1 is currently housed at the National Museum of Natural Science in Taichung.
The fossil is stratigraphically dated to younger than 450 kya and, based on episodes of prehistoric sea-level lowering, to either between 190 and 130 kya or between 70 and 10 kya.
Fossil morphology
The fossil consists of a nearly complete right lower jaw with four teeth, including worn molars and premolars. The mandible has a high index of robustness, a robust lateral torus, large molars, and with the help of 3D reconstruction it was revealed to have a large bicondylar breadth. These features help confirm that the fossil was from the middle-late Pleistocene era. The alveoli of its four incisors and right canine have been preserved as well, showing their great length. The specimen was assigned to the genus Homo based on its jaw and tooth morphology. The mandible shows a receding anterior surface and lacks a pronounced chin, which has helped distinguish it from the species Homo sapiens. However, the fossil exhibited derived traits similar to early Homo habilis, including the shortness and width of its jaw. These and other characteristics, such as the agenesis of the M3 molar, have been sufficient evidence to classify the specimen in the genus Homo.
Classification
Although the genus of the Penghu 1 has been widely accepted, there is much discussion on the potential species of the specimen. The Penghu 1 mandible has been described as most similar to the Hexian fossils of Homo erectus. Both Penghu 1 and the Hexian mandible share similar crown size, mandibular prominence, and general robustness. As a result of these similarities and their late presence in Eastern Asia, the authors of "The first Archaic Homo of Taiwan" proposed several models for their existence. The features the mandibles shared could be explained by the retention of primitive characteristics of early Asian Homo erectus, a migration of Homo with robust jaws from Africa, inclusion in the species Homo heidelbergensis, or an adapted form of Homo erectus. However, the species identity or taxonomic relationships lack consensus due to limited material. Co-author Yousuke Kaifu cautioned that additional skeletal parts are needed before species evaluation. In 2015 paleontologist Mark McMenamin argued that unique dental characteristics of the jaw were sufficient to establish a separate species, which he dubbed Homo tsaichangensis. In a 2015 paper, Lelo Suvad accepted the validity of the new species H. tsaichangensis. Chinese anthropologists Xinzhi Wu and Haowen Tong did not agree with the naming of a new species, tentatively assigning the mandible to archaic Homo sapiens, leaving open the possibility of elevating it to a distinct species should more fossils be discovered.
In 2019, Chen Fahu and a group of co-authors published a paper suggesting that the Penghu 1 mandible may belong to the hominin group known as the Denisovans. This conclusion has been supported through its comparison with the Denisovan Xiahe mandible. The Xiahe mandible was discovered on the Tibetan Plateau and is dated to be about 160,000 years old. The Xiahe specimen has similar dental morphology compared to Penghu 1. They share four distinct characteristics: their M2's are close in mesiodistal width, they both show the agenesis of the M3 molar, they have a similar unique M2 root structure which relates to modern Asian populations, and the P3 displays Tomes' root, which is rarely found in other fossil hominins.
Wu & Bae (2024) assigned Penghu 1 to the new species Homo juluensis, along with the Xujiayao hominins, the Xiahe mandible, and the Denisovans.
| Biology and health sciences | Homo | Biology |
45298242 | https://en.wikipedia.org/wiki/Manot%201 | Manot 1 | Manot 1 is the designation of a fossil skullcap representing an archaic modern human, discovered in Manot Cave, Western Galilee, Israel.
It was discovered in 2008 and the scientific description was published in 2015. Radiometric dating indicates that it is about 54,700 years old (the late Mousterian), and it is thought to be directly ancestral to the Upper Paleolithic populations of the Levant and Europe.
Discovery
Manot 1 was discovered inside the Manot Cave when the cave itself was discovered in 2008. The cave is situated in Western Galilee, about 10 km north of the HaYonim Cave and 50 km northeast of Mt. Carmel Cave. It was discovered accidentally when a bulldozer cracked open its roof during construction work. Archaeologists from the Cave Research Unit of Hebrew University of Jerusalem were immediately informed and made the initial survey. They found the skullcap alongside stone tools, charcoal pieces, and other human remains. Tools found included a Levallois point, burins, bladelets, overpassed blades, and Aurignacian tools. They also found remains of "fallow deer, red deer, mountain gazelle, horse, aurochs, hyena, and bear". They reported it to the Israel Antiquities Authority (IAA), which granted another brief survey of the cave. The IAA granted a full-scale excavation in 2010. The excavation was conducted by a collaboration of archaeologists from Hebrew University of Jerusalem, Tel Aviv University, Geological Survey of Israel, Zinman Institute of Archaeology of University of Haifa, Kimmel Center for Archaeological Sciences of Weizmann Institute of Science, and the Department of Archaeology of Boston University.
Description
Manot 1 is an adult individual represented by an almost complete skullcap (calvaria) very similar to those of modern humans. But it has a relatively small brain size, estimated at around 1,100 mL, compared to the modern human brain, which is about 1,400 mL. Its unique features are the bun-shaped occipital, the moderate arch of the parietals, flat sagittal area, presence of a suprainiac fossa, and the pronounced superior nuchal line. These combined features indicate that it shares a number of features with both recent African humans and Europeans from the Upper Paleolithic period. But it has notable differences from those of other archaic humans found in the neighbouring Levant. It may also possibly be a human-Neanderthal hybrid. The discoverers concluded that:
Significance
The Skhul 5 and Skhul 9 skulls, dated to between 120,000 and 80,000 years old, are the oldest known anatomically modern human fossils found in West Asia.
Manot 1, at 55,000 years old, is the oldest fossil found in West Asia which post-dates the presumed recent out-of-Africa expansion, after about 70,000 years ago. It is thought to be ancestral to the modern Western Eurasian lineages that began to develop during the Upper Paleolithic.
The age of the fossil is consistent with the period of interbreeding between Neanderthals and modern humans. While extraction and sequencing of DNA from the remains could potentially confirm that interbreeding was occurring at that time, the odds of doing so successfully are reduced by the region's warm climate, which speeds up DNA degradation.
| Biology and health sciences | Homo | Biology |
35442274 | https://en.wikipedia.org/wiki/Dorcatherium | Dorcatherium | Dorcatherium is an extinct genus of tragulid ruminant which existed in Europe, East Africa and the Siwaliks during the Miocene and possibly the Pliocene.
| Biology and health sciences | Other artiodactyla | Animals |
25139347 | https://en.wikipedia.org/wiki/Kaprosuchus | Kaprosuchus | Kaprosuchus is an extinct genus of mahajangasuchid crocodyliform. It is known from a single nearly complete skull collected from the Upper Cretaceous Echkar Formation of Niger. The name means "boar crocodile" from the Greek kapros ("boar") and soukhos ("crocodile"), in reference to its unusually large caniniform teeth, which resemble those of a boar. It has been nicknamed "BoarCroc" by Paul Sereno and Hans Larsson, who first described the genus in a monograph published in ZooKeys in 2009 along with other Saharan crocodyliformes such as Anatosuchus and Laganosuchus. The type species is K. saharicus.
Description
Kaprosuchus is known from a nearly complete skull 507 mm in length, in which the lower jaw measured 603 mm long; the total length is estimated to be around long. It possesses three sets of tusk-like caniniform teeth that project above and below the skull, one of which in the lower jaw fits into notches in the upper jaw. This type of dentition is not seen in any other known crocodyliform. Another unique characteristic of Kaprosuchus is the presence of large, rugose horns formed from the squamosal and parietal bones that project posteriorly from the skull. Smaller projections are also seen in the closely related Mahajangasuchus.
The snout of Kaprosuchus shows generalized proportions and the naris is positioned dorsally. In Kaprosuchus many teeth are hypertrophied and labiolingually (laterally) compressed, unlike those of other crocodyliforms with similarly shallow snouts, which are usually subconical and of moderate length. Another difference between the skull of Kaprosuchus and those of other crocodyliforms that also possess dorsoventrally compressed snouts is the great depth of the posterior portion of the skull.
In Kaprosuchus, the orbits (i.e., eye sockets) open laterally and are angled slightly forward rather than upward. The orbits turned forward suggest that there was somewhat stereoscopic vision, i.e., an overlap in the visual field of the animal.
The surfaces of the premaxillae are rugose with the edges elevated above the body of the bone, suggesting that a keratinous shield would have been supported by the rugosities at the tip of the snout. Along the interpremaxillary suture, the area where the two premaxillae meet, the surface is smooth, giving the paired rugosity of the premaxillae the resemblance of a moustache in anterior view.
Classification
Kaprosuchus is a member of the family Mahajangasuchidae along with closely related Mahajangasuchus insignis from the Upper Cretaceous of Madagascar. Although it differs greatly from any other known crocodyliform, Kaprosuchus shares several characteristics with Mahajangasuchus. These include the obliteration of all but the posterior portion of the internasal suture; a laterally facing rugose external articular fossa; the positioning of the jaw joint below the posterior maxillary teeth; a deep, anterodorsally oriented mandibular symphysis; a vertically descending ectopterygoid that is slightly inset from the lateral margin of the jugal; a flared choanal septum forming an articular foot for the palatine; and the hornlike dorsal projection of the external rim of the squamosal (although this is much more developed in Kaprosuchus than Mahajangasuchus).
At the time of Kaprosuchus's description, Sereno and Larsson considered mahajangasuchids to be a family of neosuchian crocodyliforms. However, further studies on the relationships of this family have repeatedly found them to form a sister clade to peirosaurids, forming a clade that in turn groups together with uruguaysuchids such as Anatosuchus and Araripesuchus as an early diverging branch of notosuchians.
Paleobiology
Kaprosuchus was once thought to have been a primarily if not exclusively terrestrial predator. Evidence for this behavior includes the positioning of the orbits laterally and somewhat anteriorly, which suggests an overlap in vision. This is unlike many other neosuchians, including extant crocodilians, in which the orbits are positioned dorsally as an adaptation to aquatic predation where the head can be held underwater while the eyes remain above the surface. Additional support for terrestrial predation can be found in the teeth and jaws. The enlarged caniniforms are sharp-edged and relatively straight, unlike the fluted, subconical, recurved teeth of aquatic crocodyliforms. Because the retroarticular process of the lower jaw is long, it is likely that the jaws were able to open relatively quickly with a large gape to allow for the opposing caniniforms to clear one another. The fused nasal bones are thought to have provided reinforcement for the jaws against compression associated with a powerful bite. The telescoped, dorsally positioned external nares are seen as protection against impact if the animal rammed prey with its robust snout. The keratinous shield thought to have covered the tip of the snout would have provided further protection.
However, Kaprosuchus is now thought to have been semiaquatic, as its relative Mahajangasuchus has been suggested to be a primarily aquatic predator, and the specimens of both genera show cranial adaptations usually found in "definitively semiaquatic" crocodylomorphs, "such as elongate platyrostral or tube-like snouts, orbits located dorsally on the skull, and/or dorsally-facing external nares."
| Biology and health sciences | Prehistoric crocodiles | Animals |
26547932 | https://en.wikipedia.org/wiki/Ordinal%20number | Ordinal number | In set theory, an ordinal number, or ordinal, is a generalization of ordinal numerals (first, second, nth, etc.) aimed at extending enumeration to infinite sets.
A finite set can be enumerated by successively labeling each element with the least natural number that has not been previously used. To extend this process to various infinite sets, ordinal numbers are defined more generally as linearly ordered labels that include the natural numbers and have the property that every set of ordinals has a least or "smallest" element (this is needed for giving a meaning to "the least unused element"). This more general definition allows us to define an ordinal number ω (omega) to be the least element that is greater than every natural number, along with ordinal numbers ω + 1, ω + 2, etc., which are even greater than ω.
A linear order such that every non-empty subset has a least element is called a well-order. The axiom of choice implies that every set can be well-ordered, and given two well-ordered sets, one is isomorphic to an initial segment of the other. So ordinal numbers exist and are essentially unique.
Ordinal numbers are distinct from cardinal numbers, which measure the size of sets. Although the distinction between ordinals and cardinals is not always apparent on finite sets (one can go from one to the other just by counting labels), they are very different in the infinite case, where different infinite ordinals can correspond to sets having the same cardinal. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated, although none of these operations are commutative.
Ordinals were introduced by Georg Cantor in 1883 in order to accommodate infinite sequences and classify derived sets, which he had previously introduced in 1872 while studying the uniqueness of trigonometric series.
Ordinals extend the natural numbers
A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets, these two concepts coincide, since all linear orders of a finite set are isomorphic.
When dealing with infinite sets, however, one has to distinguish between the notion of size, which leads to cardinal numbers, and the notion of position, which leads to the ordinal numbers described here. This is because while any set has only one size (its cardinality), there are many nonisomorphic well-orderings of any infinite set, as explained below.
Whereas the notion of cardinal number is associated with a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets that are called well-ordered. A well-ordered set is a totally ordered set (an ordered set such that, given two distinct elements, one is less than the other) in which every non-empty subset has a least element. Equivalently, assuming the axiom of dependent choice, it is a totally ordered set without any infinite decreasing sequence — though there may be infinite increasing sequences. Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labelled 0, the one after that 1, the next one 2, "and so on"), and to measure the "length" of the whole set by the least ordinal that is not a label for an element of the set. This "length" is called the order type of the set.
Any ordinal is defined by the set of ordinals that precede it. In fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is generally identified as the set {0, 1, 2, ..., 41}. Conversely, any set S of ordinals that is downward closed — meaning that for any ordinal α in S and any ordinal β < α, β is also in S — is (or can be identified with) an ordinal.
This definition of ordinals in terms of sets allows for infinite ordinals. The smallest infinite ordinal is ω, which can be identified with the set of natural numbers (so that the ordinal associated with every natural number precedes ω). Indeed, the set of natural numbers is well-ordered—as is any set of ordinals—and since it is downward closed, it can be identified with the ordinal associated with it.
Perhaps a clearer intuition of ordinals can be formed by examining the first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, ... After all natural numbers comes the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals formed in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated with it: and that is ω^2. Further on, there will be ω^3, then ω^4, and so on, and ω^ω, then ω^(ω^ω), then later ω^(ω^(ω^ω)), and even later ε_0 (epsilon nought) (to give a few examples of relatively small, countable ordinals). This can be continued indefinitely (as every time one says "and so on" when enumerating ordinals, it defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω_1 or Ω.
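The earlier remark that ordinal addition is not commutative can already be seen among the ordinals below ω^2, which are exactly the ω·m+n just described. A minimal Python sketch (the pair encoding and the class name are an illustrative convention invented here, not a standard library) represents ω·m + n as the pair (m, n), compared lexicographically:

```python
from typing import NamedTuple

class SmallOrdinal(NamedTuple):
    """An ordinal below omega^2, written omega*m + n; tuples compare lexicographically."""
    m: int
    n: int

    def __add__(self, other: "SmallOrdinal") -> "SmallOrdinal":
        # Ordinal addition: a nonzero omega-part on the right absorbs
        # the finite part of the left summand.
        if other.m > 0:
            return SmallOrdinal(self.m + other.m, other.n)
        return SmallOrdinal(self.m, self.n + other.n)

omega = SmallOrdinal(1, 0)
one = SmallOrdinal(0, 1)

print(one + omega == omega)   # True: 1 + omega = omega
print(omega + one)            # SmallOrdinal(m=1, n=1), i.e. omega + 1
print(omega + one > omega)    # True: omega + 1 is strictly larger than omega
```

Under this encoding 1 + ω collapses back to ω, while ω + 1 is strictly larger, which is the asymmetry the text alludes to.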
Definitions
Well-ordered sets
In a well-ordered set, every non-empty subset contains a distinct smallest element. Given the axiom of dependent choice, this is equivalent to saying that the set is totally ordered and there is no infinite decreasing sequence (the latter being easier to visualize). In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered—in such a way that each step is followed by a "lower" step—then the computation will terminate.
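The remark about terminating computations can be made concrete: if every step of a loop strictly decreases a measure drawn from a well-ordered set, the loop must halt. A minimal Python sketch (the update rule is invented purely for illustration) uses pairs of naturals under the lexicographic order, a well-order of type ω·ω, so termination is guaranteed even though the second component can jump upward:

```python
def run(m: int, n: int) -> int:
    """Each iteration strictly decreases (m, n) in lexicographic order,
    which is a well-order on pairs of naturals, so the loop terminates."""
    steps = 0
    while (m, n) != (0, 0):
        if n > 0:
            n -= 1                 # same m, strictly smaller n
        else:
            m, n = m - 1, steps    # strictly smaller m; n may grow arbitrarily
        steps += 1
    return steps

print(run(2, 3))  # finishes after finitely many steps despite n being reset upward
```

The point is that no numeric bound on n is needed: the impossibility of an infinite strictly decreasing sequence in a well-order already forces termination.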
It is inappropriate to distinguish between two well-ordered sets if they only differ in the "labeling of their elements", or more formally: if the elements of the first set can be paired off with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism, and the two well-ordered sets are said to be order-isomorphic or similar (with the understanding that this is an equivalence relation).
Formally, if a partial order ≤ is defined on the set S, and a partial order ≤' is defined on the set S' , then the posets (S,≤) and (S' ,≤') are order isomorphic if there is a bijection f that preserves the ordering. That is, f(a) ≤' f(b) if and only if a ≤ b. Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the two sets as essentially identical, and to seek a "canonical" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set. Every well-ordered set (S,<) is order-isomorphic to the set of ordinals less than one specific ordinal number under their natural ordering. This canonical set is the order type of (S,<).
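For small finite examples the condition f(a) ≤' f(b) if and only if a ≤ b can be checked by brute force. A minimal Python sketch (the two toy sets and the maps are made up for illustration) tests whether a given bijection is an order isomorphism:

```python
from itertools import product

def is_order_isomorphism(f: dict, le1, le2) -> bool:
    """Check that f preserves and reflects the order:
    le2(f(a), f(b)) holds exactly when le1(a, b) does, for every pair a, b."""
    return all(le1(a, b) == le2(f[a], f[b]) for a, b in product(f, repeat=2))

# Two small well-ordered sets: {1, 2, 3} with the usual order,
# and {'a', 'b', 'c'} ordered alphabetically.
f = {1: "a", 2: "b", 3: "c"}
print(is_order_isomorphism(f, lambda x, y: x <= y, lambda x, y: x <= y))  # True

g = {1: "b", 2: "a", 3: "c"}  # swaps the images of 1 and 2, breaking the order
print(is_order_isomorphism(g, lambda x, y: x <= y, lambda x, y: x <= y))  # False
```

The sketch takes the bijection as given; it only verifies the order-preservation clause from the definition above.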
Essentially, an ordinal is intended to be defined as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of "being order-isomorphic". There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious difficulty. The ordinal can be said to be the order type of any set in the class.
Definition of an ordinal as an equivalence class
The original definition of ordinal numbers, found for example in the Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's axiomatic set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).
Von Neumann definition of ordinals
Rather than defining an ordinal as an equivalence class of well-ordered sets, it will be defined as a particular well-ordered set that (canonically) represents the class. Thus, an ordinal number will be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number.
For each well-ordered set T, the map a ↦ T_{<a} := {x ∈ T | x < a} defines an order isomorphism between T and the set of all subsets of T having the form T_{<a}, ordered by inclusion. This motivates the standard definition, suggested by John von Neumann at the age of 19, now called the definition of von Neumann ordinals: "each ordinal is the well-ordered set of all smaller ordinals". In symbols, λ = [0, λ). Formally:
A set S is an ordinal if and only if S is strictly well-ordered with respect to set membership and every element of S is also a subset of S.
The natural numbers are thus ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.
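A short Python sketch (using frozensets purely to illustrate the definition; this is not a standard representation from any library) builds the first few von Neumann ordinals with the successor operation α ∪ {α} and checks the claims about 2 and 4:

```python
def successor(alpha: frozenset) -> frozenset:
    """Von Neumann successor: alpha together with {alpha}."""
    return alpha | frozenset({alpha})

zero = frozenset()
ordinals = [zero]
for _ in range(4):                      # build 0, 1, 2, 3, 4
    ordinals.append(successor(ordinals[-1]))

two, four = ordinals[2], ordinals[4]
print(two in four)        # True: 2 is an element of 4 = {0, 1, 2, 3}
print(two <= four)        # True: 2 = {0, 1} is also a subset of 4
print(len(ordinals[3]))   # 3: each finite ordinal has exactly its predecessors as elements
```

This mirrors the statement that every element of an ordinal is also a subset of it, at least for the finite cases a program can represent.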
It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals, that is, there is an order preserving bijective function between them.
Furthermore, the elements of every ordinal are ordinals themselves. Given two ordinals S and T, S is an element of T if and only if S is a proper subset of T. Moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. Further, every set of ordinals is well-ordered. This generalizes the fact that every set of natural numbers is well-ordered.
Consequently, every ordinal S is a set having as elements precisely the ordinals smaller than S. For example, every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. This union exists regardless of the set's size, by the axiom of union.
The class of all ordinals is not a set. If it were a set, one could show that it was an ordinal and thus a member of itself, which would contradict its strict ordering by membership. This is the Burali-Forti paradox. The class of all ordinals is variously called "Ord", "ON", or "∞".
An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its non-empty subsets has a greatest element.
Other definitions
There are other modern formulations of the definition of ordinal. For example, assuming the axiom of regularity, the following are equivalent for a set x:
x is a (von Neumann) ordinal,
x is a transitive set, and set membership is trichotomous on x,
x is a transitive set totally ordered by set inclusion,
x is a transitive set of transitive sets.
These definitions cannot be used in non-well-founded set theories. In set theories with urelements, one has to further make sure that the definition excludes urelements from appearing in ordinals.
Transfinite sequence
If α is any ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. This concept, a transfinite sequence (if α is infinite) or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω, while a finite α corresponds to a tuple, a.k.a. string.
Transfinite induction
Transfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here.
Any property that passes from the set of ordinals smaller than a given ordinal α to α itself, is true of all ordinals.
That is, if P(α) is true whenever P(β) is true for all β < α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β < α.
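Written out symbolically (a standard rendering of the principle, not a formula taken verbatim from this article), transfinite induction reads:

```latex
\forall \alpha \,\Bigl[\bigl(\forall \beta < \alpha,\ P(\beta)\bigr) \rightarrow P(\alpha)\Bigr]
\;\Longrightarrow\; \forall \alpha\, P(\alpha)
```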
Transfinite recursion
Transfinite induction can be used not only to prove things, but also to define them. Such a definition is normally said to be by transfinite recursion – the proof that the result is well-defined uses transfinite induction. Let F denote a (class) function to be defined on the ordinals. The idea now is that, in defining F(α) for an unspecified ordinal α, one may assume that F(β) is already defined for all β < α and thus give a formula for F(α) in terms of these F(β). It then follows by transfinite induction that there is one and only one function satisfying the recursion formula up to and including α.
Here is an example of definition by transfinite recursion on the ordinals (more will be given later): define the function F by letting F(α) be the smallest ordinal not in the set {F(β) : β < α}, that is, the set consisting of all F(β) for β < α. This definition assumes the F(β) known in the very process of defining F; this apparent vicious circle is exactly what definition by transfinite recursion permits. In fact, F(0) makes sense since there is no ordinal β < 0, and the set {F(β) : β < 0} is empty. So F(0) is equal to 0 (the smallest ordinal of all). Now that F(0) is known, the definition applied to F(1) makes sense (it is the smallest ordinal not in the singleton set {F(0)} = {0}), and so on (the "and so on" is exactly transfinite induction). It turns out that this example is not very exciting, since provably F(α) = α for all ordinals α, which can be shown, precisely, by transfinite induction.
Successor and limit ordinals
Any nonzero ordinal has the minimum element, zero. It may or may not have a maximum element. For example, 42 has maximum 41 and ω+6 has maximum ω+5. On the other hand, ω does not have a maximum since there is no largest natural number. If an ordinal has a maximum α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is α ∪ {α}, since its elements are those of α together with α itself.
A nonzero ordinal that is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is the limit in a topological sense of all smaller ordinals (under the order topology).
When (α_ι)_{ι<γ} is an ordinal-indexed sequence, indexed by a limit ordinal γ, and the sequence is increasing, i.e. α_ι < α_ρ whenever ι < ρ, its limit is defined as the least upper bound of the set {α_ι : ι < γ}, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself). Put more directly, it is the supremum of the set of smaller ordinals.
Another way of defining a limit ordinal is to say that α is a limit ordinal if and only if:
There is an ordinal less than α and whenever ζ is an ordinal less than α, then there exists an ordinal ξ such that ζ < ξ < α.
So in the following sequence:
0, 1, 2, ..., ω, ω+1
ω is a limit ordinal because for any smaller ordinal (in this example, a natural number) there is another ordinal (natural number) larger than it, but still less than ω.
Thus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite recursion rely upon it. Very often, when defining a function F by transfinite recursion on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ one defines F(δ) as the limit of the F(β) for all β<δ (either in the sense of ordinal limits, as previously explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. Ordinal addition, multiplication and exponentiation are continuous as functions of their second argument (but can be defined non-recursively).
Indexing classes of ordinals
Any well-ordered set is similar (order-isomorphic) to a unique ordinal number α; in other words, its elements can be indexed in increasing fashion by the ordinals less than α. This applies, in particular, to any set of ordinals: any set of ordinals is naturally indexed by the ordinals less than some ordinal α. The same holds, with a slight modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is unbounded in the class of all ordinals, this puts it in class-bijection with the class of all ordinals). So the γ-th element in the class (with the convention that the "0-th" is the smallest, the "1-st" is the next smallest, and so on) can be freely spoken of. Formally, the definition is by transfinite induction: the γ-th element of the class is defined (provided it has already been defined for all β < γ) as the smallest element greater than the β-th element for all β < γ.
This could be applied, for example, to the class of ordinals that are either limits or zero: the γ-th ordinal that is either a limit or zero is ω·γ (see ordinal arithmetic for the definition of multiplication of ordinals). Similarly, one can consider additively indecomposable ordinals (meaning a nonzero ordinal that is not the sum of two strictly smaller ordinals): the γ-th additively indecomposable ordinal is indexed as ω^γ. The technique of indexing classes of ordinals is often useful in the context of fixed points: for example, the γ-th ordinal α such that ω^α = α is written ε_γ. These are called the "epsilon numbers".
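In particular, the smallest epsilon number admits the standard characterization as a limit, stated here in symbols:
    \[ \varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}, \qquad \omega^{\varepsilon_0} = \varepsilon_0. \]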
Closed unbounded sets and classes
A class of ordinals is said to be unbounded, or cofinal, when given any ordinal α, there is a β in the class such that α < β (then the class must be a proper class, i.e., it cannot be a set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function F is continuous in the sense that, for δ a limit ordinal, F(δ) (the δ-th ordinal in the class) is the limit of all F(γ) for γ < δ; this is also the same as being closed, in the topological sense, for the order topology (to avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal; this is again equivalent).
Of particular importance are those classes of ordinals that are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The class of additively indecomposable ordinals, the class of epsilon numbers, and the class of cardinals are all closed unbounded; the set of regular cardinals, however, is unbounded but not closed, and any finite set of ordinals is closed but not unbounded.
A class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded classes are stationary, and stationary classes are unbounded, but there are stationary classes that are not closed and stationary classes that have no closed unbounded subclass (such as the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed unbounded class is stationary. But the intersection of two stationary classes may be empty, e.g. the class of ordinals with cofinality ω with the class of ordinals with uncountable cofinality.
Rather than formulating these definitions for (proper) classes of ordinals, one can formulate them for sets of ordinals below a given ordinal α: A subset of a limit ordinal α is said to be unbounded (or cofinal) in α provided any ordinal less than α is less than some ordinal in the set. More generally, one can call a subset of any ordinal α cofinal in α provided every ordinal less than α is less than or equal to some ordinal in the set. The subset is said to be closed in α provided it is closed for the order topology in α, i.e. a limit of ordinals in the set is either in the set or equal to α itself.
Arithmetic of ordinals
There are three usual operations on ordinals: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the operation or by using transfinite recursion. The Cantor normal form provides a standardized way of writing ordinals. It uniquely represents each ordinal as a finite sum of ordinal powers of ω. However, this cannot form the basis of a universal ordinal notation due to such self-referential representations as ε_0 = ω^(ε_0).
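For reference, the Cantor normal form just mentioned can be written out explicitly: every ordinal α > 0 can be uniquely written as
    \[ \alpha = \omega^{\beta_1} c_1 + \omega^{\beta_2} c_2 + \cdots + \omega^{\beta_k} c_k, \qquad \beta_1 > \beta_2 > \cdots > \beta_k \ge 0, \quad c_1, \dots, c_k \text{ positive integers}. \]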
Ordinals are a subclass of the class of surreal numbers, and the so-called "natural" arithmetical operations for surreal numbers are an alternative way to combine ordinals arithmetically. They retain commutativity at the expense of continuity.
Interpreted as nimbers, a game-theoretic variant of numbers, ordinals can also be combined via nimber arithmetic operations. These operations are commutative but the restriction to natural numbers is generally not the same as ordinary addition of natural numbers.
Ordinals and cardinals
Initial ordinal of a cardinal
Each ordinal associates with one cardinal, its cardinality. If there is a bijection between two ordinals (e.g. ω and ω + 1), then they associate with the same cardinal. Any well-ordered set having an ordinal as its order-type has the same cardinality as that ordinal. The least ordinal associated with a given cardinal is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, and no other ordinal associates with its cardinal. But most infinite ordinals are not initial, as many infinite ordinals associate with the same cardinal. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In theories with the axiom of choice, the cardinal number of any set has an initial ordinal, and one may employ the Von Neumann cardinal assignment as the cardinal's representation. (However, we must then be careful to distinguish between cardinal arithmetic and ordinal arithmetic.) In set theories without the axiom of choice, a cardinal may be represented by the set of sets with that cardinality having minimal rank (see Scott's trick).
One issue with Scott's trick is that it identifies the cardinal number 0 with {∅}, which in some formulations is the ordinal number 1. It may be clearer to apply Von Neumann cardinal assignment to finite cases and to use Scott's trick for sets which are infinite or do not admit well orderings. Note that cardinal and ordinal arithmetic agree for finite numbers.
The α-th infinite initial ordinal is written ω_α; it is always a limit ordinal. Its cardinality is written ℵ_α. For example, the cardinality of ω_0 = ω is ℵ_0, which is also the cardinality of ω² or ε_0 (all are countable ordinals). So ω can be identified with ℵ_0, except that the notation ℵ_0 is used when writing cardinals, and ω when writing ordinals (this is important since, for example, ℵ_0² = ℵ_0 whereas ω² > ω). Also, ω_1 is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and ω_1 is the order type of that set), ω_2 is the smallest ordinal whose cardinality is greater than ℵ_1, and so on, and ω_ω is the limit of the ω_n for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ω_n).
Cofinality
The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α. Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal α, there exists a δ-indexed strictly increasing sequence with limit α, where δ is the cofinality of α. For example, the cofinality of ω² is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.
The cofinality of 0 is 0. And the cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω.
An ordinal that is equal to its cofinality is called regular and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial even if it is not regular, which it usually is not. If the axiom of choice holds, then ω_(α+1) is regular for each α. In this case, the ordinals 0, 1, ω, ω_1, and ω_2 are regular, whereas 2, 3, ω_ω, and ω_(ω·2) are initial ordinals that are not regular.
The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.
Some "large" countable ordinals
As mentioned above (see Cantor normal form), the ordinal ε_0 is the smallest ordinal satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the γ-th ordinal such that ω^α = α is called ε_γ, then one could go on trying to find the γ-th ordinal such that ε_α = α, "and so on", but all the subtlety lies in the "and so on"). One could try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal that limits a system of construction in this manner is the Church–Kleene ordinal, ω_1^CK (despite the ω_1 in the name, this ordinal is countable), which is the smallest ordinal that cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below ω_1^CK, however, which measure the "proof-theoretic strength" of certain formal systems (for example, ε_0 measures the strength of Peano arithmetic). Large countable ordinals such as countable admissible ordinals can also be defined above the Church–Kleene ordinal, which are of interest in various parts of logic.
Topology and ordinals
Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if it is less than or equal to ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.
See the Topology and ordinals section of the "Order topology" article.
History
The transfinite ordinal numbers, which first appeared in 1883, originated in Cantor's work with derived sets. If P is a set of real numbers, the derived set P' is the set of limit points of P. In 1872, Cantor generated the sets P^(n) by applying the derived set operation n times to P. In 1880, he pointed out that these sets form the sequence P' ⊇ P^(2) ⊇ P^(3) ⊇ ···, and he continued the derivation process by defining P^(∞) as the intersection of these sets. Then he iterated the derived set operation and intersections to extend his sequence of sets into the infinite: P^(∞) ⊇ P^(∞+1) ⊇ P^(∞+2) ⊇ ··· ⊇ P^(2∞) ⊇ ··· ⊇ P^(∞²) ⊇ ···. The superscripts containing ∞ are just indices defined by the derivation process.
Cantor used these sets in the theorems: (1) If P^(α) = ∅ for some index α, then P' is countable; (2) Conversely, if P' is countable, then there is an index α such that P^(α) = ∅.
These theorems are proved by partitioning P' into pairwise disjoint sets: P' = (P' ∖ P^(2)) ∪ (P^(2) ∖ P^(3)) ∪ ··· ∪ (P^(∞) ∖ P^(∞+1)) ∪ ··· ∪ P^(α). For β < α: since P^(β+1) contains the limit points of P^(β), the sets P^(β) ∖ P^(β+1) have no limit points. Hence, they are discrete sets, so they are countable. Proof of first theorem: If P^(α) = ∅ for some index α, then P' is the countable union of countable sets. Therefore, P' is countable.
The second theorem requires proving the existence of an α such that . To prove this, Cantor considered the set of all α having countably many predecessors. To define this set, he defined the transfinite ordinal numbers and transformed the infinite indices into ordinals by replacing ∞ with ω, the first transfinite ordinal number. Cantor called the set of finite ordinals the first number class. The second number class is the set of ordinals whose predecessors form a countably infinite set. The set of all α having countably many predecessors—that is, the set of countable ordinals—is the union of these two number classes. Cantor proved that the cardinality of the second number class is the first uncountable cardinality.
Cantor's second theorem becomes: If P' is countable, then there is a countable ordinal α such that P^(α) = ∅. Its proof uses proof by contradiction. Let P' be countable, and assume there is no such α. This assumption produces two cases.
Case 1: P^(β) ∖ P^(β+1) is non-empty for all countable β. Since there are uncountably many of these pairwise disjoint sets, their union is uncountable. This union is a subset of P', so P' is uncountable.
Case 2: P^(β) ∖ P^(β+1) is empty for some countable β. Since P^(β+1) ⊆ P^(β), this implies P^(β) = P^(β+1). Thus, P^(β) is a perfect set, so it is uncountable. Since P^(β) ⊆ P', the set P' is uncountable.
In both cases, P' is uncountable, which contradicts P' being countable. Therefore, there is a countable ordinal α such that P^(α) = ∅. Cantor's work with derived sets and ordinal numbers led to the Cantor–Bendixson theorem.
Using successors, limits, and cardinality, Cantor generated an unbounded sequence of ordinal numbers and number classes. The (α+1)-th number class is the set of ordinals whose predecessors form a set of the same cardinality as the α-th number class. The cardinality of the (α+1)-th number class is the cardinality immediately following that of the α-th number class. For a limit ordinal α, the α-th number class is the union of the β-th number classes for β < α. Its cardinality is the limit of the cardinalities of these number classes.
If n is finite, the n-th number class has cardinality ℵ_(n−1). If α ≥ ω, the α-th number class has cardinality ℵ_α. Therefore, the cardinalities of the number classes correspond one-to-one with the aleph numbers. Also, the α-th number class consists of ordinals different from those in the preceding number classes if and only if α is a non-limit ordinal. Therefore, the non-limit number classes partition the ordinals into pairwise disjoint sets.
| Mathematics | Set theory | null |
26551602 | https://en.wikipedia.org/wiki/Limit%20%28mathematics%29 | Limit (mathematics) | In mathematics, a limit is the value that a function (or sequence) approaches as the argument (or index) approaches some value. Limits of functions are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals.
The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory.
The limit inferior and limit superior provide generalizations of the concept of a limit which are particularly relevant when the limit at a point may not exist.
Notation
In formulas, a limit of a function is usually written as
    lim_{x→c} f(x) = L
and is read as "the limit of f of x, as x approaches c, equals L". This means that the value of the function f can be made arbitrarily close to L, by choosing x sufficiently close to c. Alternatively, the fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→), as in f(x) → L as x → c,
which reads "f of x tends to L as x tends to c".
History
According to Hankel (1871), the modern concept of limit originates from Proposition X.1 of Euclid's Elements, which forms the basis of the Method of exhaustion found in Euclid and Archimedes: "Two unequal magnitudes being set out, if from the greater there is subtracted a magnitude greater than its half, and from that which is left a magnitude greater than its half, and if this process is repeated continually, then there will be left some magnitude less than the lesser magnitude set out."
Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."
The modern definition of a limit goes back to Bernard Bolzano who, in 1817, developed the basics of the epsilon-delta technique to define continuous functions. However, his work remained unknown to other mathematicians until thirty years after his death.
Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function which became known as the (ε, δ)-definition of limit.
The modern notation of placing the arrow below the limit symbol is due to G. H. Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.
Types of limits
In sequences
Real numbers
The expression 0.999... should be interpreted as the limit of the sequence 0.9, 0.99, 0.999, ... and so on. This sequence can be rigorously shown to have the limit 1, and therefore this expression is meaningfully interpreted as having the value 1.
Formally, suppose a_1, a_2, ... is a sequence of real numbers. When the limit of the sequence exists, the real number L is the limit of this sequence if and only if for every real number ε > 0, there exists a natural number N such that for all n > N, we have |a_n − L| < ε.
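Compactly, in symbols (a restatement of the definition just given):
    \[ \lim_{n\to\infty} a_n = L \iff \forall \varepsilon > 0\ \exists N \in \mathbb{N}\ \forall n > N:\ |a_n - L| < \varepsilon. \]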
The common notation
    lim_{n→∞} a_n = L
is read as:
"The limit of an as n approaches infinity equals L" or "The limit as n approaches infinity of an equals L".
The formal definition intuitively means that eventually, all elements of the sequence get arbitrarily close to the limit, since the absolute value |a_n − L| is the distance between a_n and L.
Not every sequence has a limit. A sequence with a limit is called convergent; otherwise it is called divergent. One can show that a convergent sequence has only one limit.
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n approaches infinity of a sequence (a_n) is simply the limit at infinity of a function a(n), defined on the natural numbers. On the other hand, if X is the domain of a function f(x) and if the limit as n approaches infinity of f(x_n) is L for every arbitrary sequence of points (x_n) in X − {x_0} which converges to x_0, then the limit of the function f(x) as x approaches x_0 is equal to L. One such sequence would be (x_0 + 1/n).
Infinity as a limit
There is also a notion of having a limit "tend to infinity", rather than to a finite value L. A sequence (a_n) is said to "tend to infinity" if, for each real number M > 0, known as the bound, there exists an integer N such that for each n > N, a_n > M.
That is, for every possible bound, the sequence eventually exceeds the bound. This is often written lim_{n→∞} a_n = ∞ or simply a_n → ∞.
It is possible for a sequence to be divergent, but not tend to infinity. Such sequences are called oscillatory. An example of an oscillatory sequence is a_n = (−1)^n.
There is a corresponding notion of tending to negative infinity, lim_{n→∞} a_n = −∞, defined by changing the inequality in the above definition to a_n < M, with M < 0.
A sequence with |a_n| → ∞ is called unbounded, a definition equally valid for sequences in the complex numbers, or in any metric space. Sequences which do not tend to infinity are called bounded. Sequences which do not tend to positive infinity are called bounded above, while those which do not tend to negative infinity are bounded below.
Metric space
The discussion of sequences above is for sequences of real numbers. The notion of limits can be defined for sequences valued in more abstract spaces, such as metric spaces. If X is a metric space with distance function d, and (x_n) is a sequence in X, then the limit (when it exists) of the sequence is an element x ∈ X such that, given ε > 0, there exists an N such that for each n > N, we have d(x_n, x) < ε.
An equivalent statement is that x_n → x if the sequence of real numbers d(x_n, x) → 0.
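In symbols, with d denoting the metric of the space:
    \[ \lim_{n\to\infty} x_n = x \iff \forall \varepsilon > 0\ \exists N\ \forall n > N:\ d(x_n, x) < \varepsilon. \]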
Example: Rn
An important example is the space of n-dimensional real vectors, with elements x = (x_1, ..., x_n) where each of the x_i are real; an example of a suitable distance function is the Euclidean distance, defined by d(x, y) = ‖x − y‖ = √(Σ_i (x_i − y_i)²).
The sequence of points (x^(k)) converges to x if the limit exists and ‖x^(k) − x‖ → 0 as k → ∞.
Topological space
In some sense the most abstract spaces in which limits can be defined are topological spaces. If X is a topological space with topology τ, and (x_n) is a sequence in X, then the limit (when it exists) of the sequence is a point x ∈ X such that, given a (open) neighborhood U of x, there exists an N such that for every n > N, x_n ∈ U
is satisfied. In this case, the limit (if it exists) may not be unique. However it must be unique if X is a Hausdorff space.
Function space
This section deals with the idea of limits of sequences of functions, not to be confused with the idea of limits of functions, discussed below.
The field of functional analysis partly seeks to identify useful notions of convergence on function spaces. For example, consider the space of functions from a generic set E to R. Given a sequence of functions (f_n) such that each is a function f_n : E → R, suppose that there exists a function f such that for each x ∈ E, f_n(x) → f(x) as n → ∞.
Then the sequence (f_n) is said to converge pointwise to f. However, such sequences can exhibit unexpected behavior. For example, it is possible to construct a sequence of continuous functions which has a discontinuous pointwise limit.
Another notion of convergence is uniform convergence. The uniform distance between two functions is the maximum difference between the two functions as the argument is varied. That is, d(f, g) = max_{x∈E} |f(x) − g(x)|.
Then the sequence (f_n) is said to uniformly converge or have a uniform limit f if f_n → f with respect to this distance. The uniform limit has "nicer" properties than the pointwise limit. For example, the uniform limit of a sequence of continuous functions is continuous.
Many different notions of convergence can be defined on function spaces. This is sometimes dependent on the regularity of the space. Prominent examples of function spaces with some notion of convergence are Lp spaces and Sobolev space.
In functions
Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression
    lim_{x→c} f(x) = L
means that f(x) can be made to be as close to L as desired, by making x sufficiently close to c. In that case, the above equation can be read as "the limit of f of x, as x approaches c, is L".
Formally, the definition of the "limit of f(x) as x approaches c" is given as follows. The limit is a real number L so that, given an arbitrary real number ε > 0 (thought of as the "error"), there is a δ > 0 such that, for any x satisfying 0 < |x − c| < δ, it holds that |f(x) − L| < ε. This is known as the (ε, δ)-definition of limit.
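Restated compactly in symbols:
    \[ \lim_{x\to c} f(x) = L \iff \forall \varepsilon > 0\ \exists \delta > 0\ \forall x:\ 0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon. \]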
The inequality 0 < |x − c| is used to exclude c from the set of points under consideration, but some authors do not include this in their definition of limits, replacing 0 < |x − c| < δ with simply |x − c| < δ. This replacement is equivalent to additionally requiring that f be continuous at c.
It can be proven that there is an equivalent definition which makes manifest the connection between limits of sequences and limits of functions. The equivalent definition is given as follows. First observe that for every sequence (x_n) in the domain of f, there is an associated sequence (f(x_n)), the image of the sequence under f. The limit is a real number L so that, for all sequences x_n → c with x_n ≠ c, the associated sequence f(x_n) → L.
One-sided limit
It is possible to define the notion of having a "left-handed" limit ("from below"), and a notion of a "right-handed" limit ("from above"). These need not agree. An example is given by the positive indicator function, , defined such that if , and if . At , the function has a "left-handed limit" of 0, a "right-handed limit" of 1, and its limit does not exist. Symbolically, this can be stated as, for this example,
, and , and from this it can be deduced doesn't exist, because .
Infinity in limits of functions
It is possible to define the notion of "tending to infinity" in the domain of ,
This could be considered equivalent to the limit as a reciprocal tends to 0:
or it can be defined directly: the "limit of as tends to positive infinity" is defined as a value such that, given any real , there exists an so that for all , . The definition for sequences is equivalent: As , we have .
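In symbols, the direct definition just given reads:
    \[ \lim_{x\to+\infty} f(x) = L \iff \forall \varepsilon > 0\ \exists M\ \forall x > M:\ |f(x) - L| < \varepsilon. \]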
In these expressions, the infinity is normally considered to be signed (+∞ or −∞) and corresponds to a one-sided limit of the reciprocal. A two-sided infinite limit can be defined, but an author would explicitly write |x| → ∞ to be clear.
It is also possible to define the notion of "tending to infinity" in the value of f, written lim_{x→c} f(x) = ∞.
Again, this could be defined in terms of a reciprocal: lim_{x→c} 1/f(x) = 0.
Or a direct definition can be given as follows: given any real number M > 0, there is a δ > 0 so that for 0 < |x − c| < δ, the absolute value of the function satisfies |f(x)| > M. A sequence can also have an infinite limit: as n → ∞, the sequence a_n → ∞.
This direct definition is easier to extend to one-sided infinite limits. While mathematicians do talk about functions approaching limits "from above" or "from below", there is not a standard mathematical notation for this as there is for one-sided limits.
Nonstandard analysis
In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence (a_n) can be expressed as the standard part of the value a_H of the natural extension of the sequence at an infinite hypernatural index n = H. Thus,
    lim_{n→∞} a_n = st(a_H).
Here, the standard part function "st" rounds off each finite hyperreal number to the nearest real number (the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [a_n] represented in the ultrapower construction by a Cauchy sequence (a_n), is simply the limit of that sequence:
    st(a) = lim_{n→∞} a_n.
In this sense, taking the limit and taking the standard part are equivalent procedures.
Limit sets
Limit set of a sequence
Let (a_n) be a sequence in a topological space X. For concreteness, X can be thought of as R, but the definitions hold more generally. The limit set is the set of points such that if there is a convergent subsequence (a_{n_k}) with limit a, then a belongs to the limit set. In this context, such an a is sometimes called a limit point.
A use of this notion is to characterize the "long-term behavior" of oscillatory sequences. For example, consider the sequence a_n = (−1)^n. Starting from n = 1, the first few terms of this sequence are −1, +1, −1, +1, .... It can be checked that it is oscillatory, so it has no limit, but it has limit points {−1, +1}.
Limit set of a trajectory
This notion is used in dynamical systems, to study limits of trajectories. Defining a trajectory to be a function γ : R → X, the point γ(t) is thought of as the "position" of the trajectory at "time" t. The limit set of a trajectory is defined as follows. To any sequence of increasing times (t_n), there is an associated sequence of positions (x_n) = (γ(t_n)). If x is in the limit set of the sequence (x_n) for some sequence of increasing times, then x is in the limit set of the trajectory.
Technically, this is the ω-limit set. The corresponding limit set for sequences of decreasing time is called the α-limit set.
An illustrative example is the circle trajectory: γ(t) = (cos t, sin t). This has no unique limit, but for each θ ∈ R, the point (cos θ, sin θ) is a limit point, given by the sequence of times t_n = θ + 2πn. But the limit points need not be attained on the trajectory. The trajectory also has the unit circle as its limit set.
Uses
Limits are used to define a number of important concepts in analysis.
Series
A particular expression of interest which is formalized as the limit of a sequence is the sum of an infinite series. These are "infinite sums" of real numbers, generally written as
    a_1 + a_2 + ⋯ = Σ_{n=1}^{∞} a_n.
This is defined through limits as follows: given a sequence of real numbers (a_n), the sequence of partial sums is defined by
    s_N = Σ_{n=1}^{N} a_n = a_1 + a_2 + ⋯ + a_N.
If the limit of the sequence exists, the value of the expression is defined to be the limit. Otherwise, the series is said to be divergent.
A classic example is the Basel problem, where a_n = 1/n². Then Σ_{n=1}^{∞} 1/n² = π²/6.
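To connect this to the partial-sum definition above, the value of the series is by definition the limit of the partial sums:
    \[ s_N = \sum_{n=1}^{N} \frac{1}{n^2}, \qquad \lim_{N\to\infty} s_N = \frac{\pi^2}{6} \approx 1.6449. \]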
However, while for sequences there is essentially a unique notion of convergence, for series there are different notions of convergence. This is due to the fact that the expression Σ_n a_n does not discriminate between different orderings of the sequence (a_n), while the convergence properties of the sequence of partial sums can depend on the ordering of the sequence.
A series which converges for all orderings is called unconditionally convergent. It can be proven to be equivalent to absolute convergence. This is defined as follows. A series is absolutely convergent if Σ_n |a_n| is well defined. Furthermore, all possible orderings give the same value.
Otherwise, the series is conditionally convergent. A surprising result for conditionally convergent series is the Riemann series theorem: depending on the ordering, the partial sums can be made to converge to any real number, as well as to +∞ or −∞.
Power series
A useful application of the theory of sums of series is for power series. These are sums of series of the form
    f(z) = Σ_{n=0}^{∞} c_n z^n.
Often z is thought of as a complex number, and a suitable notion of convergence of complex sequences is needed. The set of values of z for which the series sum converges is essentially a disc in the complex plane, with its radius known as the radius of convergence.
Continuity of a function at a point
The definition of continuity at a point is given through limits.
The above definition of a limit is true even if f(c) ≠ L. Indeed, the function f need not even be defined at c. However, if f(c) is defined and is equal to L, then the function is said to be continuous at the point c.
Equivalently, the function is continuous at c if f(x) → f(c) as x → c, or, in terms of sequences, whenever x_n → c, then f(x_n) → f(c).
An example of a limit where f is not defined at c is given below.
Consider the function f(x) = (x² − 1)/(x − 1).
Then f(1) is not defined (see Indeterminate form), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2.
Thus, f(x) can be made arbitrarily close to the limit of 2, just by making x sufficiently close to 1.
In other words, lim_{x→1} (x² − 1)/(x − 1) = 2.
This can also be calculated algebraically, as (x² − 1)/(x − 1) = ((x + 1)(x − 1))/(x − 1) = x + 1 for all real numbers x ≠ 1.
Now, since x + 1 is continuous in x at 1, we can now plug in 1 for x, leading to the equation lim_{x→1} (x² − 1)/(x − 1) = 1 + 1 = 2.
In addition to limits at finite values, functions can also have limits at infinity. As an example, consider a function f(x) whose values approach a number L as x becomes extremely large: the value of f(x) can be made as close to L as one could wish by making x sufficiently large. In this case, the limit of f(x) as x approaches infinity is L, written in mathematical notation as lim_{x→∞} f(x) = L.
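A concrete illustration (this particular function is chosen here as an assumption for the sake of example, since the original worked example was not preserved):
    \[ \lim_{x\to\infty} \frac{2x - 1}{x} = \lim_{x\to\infty}\left(2 - \frac{1}{x}\right) = 2, \]
since 1/x can be made as small as desired by taking x sufficiently large.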
Continuous functions
An important class of functions when considering limits are continuous functions. These are precisely those functions which preserve limits, in the sense that if f is a continuous function, then whenever x_n → x in the domain of f, the limit f(x_n) exists and furthermore equals f(x).
In the most general setting of topological spaces, a short proof is given below:
Let f : X → Y be a continuous function between topological spaces X and Y. By definition, for each open set V in Y, the preimage f⁻¹(V) is open in X.
Now suppose (x_n) is a sequence with limit x in X. Then (f(x_n)) is a sequence in Y, and f(x) is some point.
Choose a neighborhood V of f(x). Then f⁻¹(V) is an open set (by continuity of f) which in particular contains x, and therefore f⁻¹(V) is a neighborhood of x. By the convergence of (x_n) to x, there exists an N such that for n > N, we have x_n ∈ f⁻¹(V).
Then applying f to both sides gives that, for the same N, for each n > N we have f(x_n) ∈ V. Originally V was an arbitrary neighborhood of f(x), so f(x_n) → f(x). This concludes the proof.
In real analysis, for the more concrete case of real-valued functions defined on a subset E ⊆ R, that is, f : E → R, a continuous function may also be defined as a function which is continuous at every point of its domain.
Limit points
In topology, limits are used to define limit points of a subset of a topological space, which in turn give a useful characterization of closed sets.
In a topological space X, consider a subset S. A point a is called a limit point of S if there is a sequence (a_n) in S ∖ {a} such that a_n → a.
The reason why (a_n) is required to lie in S ∖ {a} rather than just in S is illustrated by the following example. Take X = R and S = {0}. Then 0 ∈ S, and therefore 0 is the limit of the constant sequence 0, 0, .... But 0 is not a limit point of S.
A closed set, which is defined to be the complement of an open set, is equivalently any set which contains all its limit points.
Derivative
The derivative is defined formally as a limit. In the scope of real analysis, the derivative is first defined for real functions f defined on a subset E ⊆ R. The derivative at x ∈ E is defined as follows. If the limit of
    (f(x + h) − f(x)) / h
as h → 0 exists, then the derivative at x is this limit.
Equivalently, it is the limit as y → x of
    (f(y) − f(x)) / (y − x).
If the derivative exists, it is commonly denoted by f′(x).
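As a quick worked instance of this definition (using the function f(x) = x², chosen here purely as an illustration):
    \[ f'(x) = \lim_{h\to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h\to 0} (2x + h) = 2x. \]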
Properties
Sequences of real numbers
For sequences of real numbers, a number of properties can be proven. Suppose (a_n) and (b_n) are two sequences converging to a and b respectively.
Sum of limits is equal to limit of sum: a_n + b_n → a + b.
Product of limits is equal to limit of product: a_n · b_n → a · b.
Inverse of limit is equal to limit of inverse (as long as a ≠ 0): 1/a_n → 1/a.
Equivalently, the function g(x) = 1/x is continuous at nonzero a.
Cauchy sequences
A property of convergent sequences of real numbers is that they are Cauchy sequences. The definition of a Cauchy sequence (a_n) is that for every real number ε > 0, there is an N such that whenever m, n > N, |a_m − a_n| < ε.
Informally, for any arbitrarily small error ε, it is possible to find an interval of diameter ε such that eventually the sequence is contained within the interval.
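In symbols, the Cauchy condition just described is:
    \[ \forall \varepsilon > 0\ \exists N\ \forall m, n > N:\ |a_m - a_n| < \varepsilon. \]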
Cauchy sequences are closely related to convergent sequences. In fact, for sequences of real numbers they are equivalent: any Cauchy sequence is convergent.
In general metric spaces, it continues to hold that convergent sequences are also Cauchy. But the converse is not true: not every Cauchy sequence is convergent in a general metric space. A classic counterexample is the rational numbers, Q, with the usual distance. The sequence of decimal approximations to √2, truncated at the n-th decimal place, is a Cauchy sequence, but does not converge in Q.
A metric space in which every Cauchy sequence is also convergent, that is, Cauchy sequences are equivalent to convergent sequences, is known as a complete metric space.
One reason Cauchy sequences can be "easier to work with" than convergent sequences is that the Cauchy property is a property of the sequence (a_n) alone, while convergence requires not just the sequence but also its limit a.
Order of convergence
Beyond whether or not a sequence converges to a limit , it is possible to describe how fast a sequence converges to a limit. One way to quantify this is using the order of convergence of a sequence.
A formal definition of order of convergence can be stated as follows. Suppose (x_n) is a sequence of real numbers which is convergent with limit L. Furthermore, x_n ≠ L for all n. If positive constants μ and q exist such that
    lim_{n→∞} |x_{n+1} − L| / |x_n − L|^q = μ,
then (x_n) is said to converge to L with order of convergence q. The constant μ is known as the asymptotic error constant.
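A brief sanity check of this definition (with a sequence chosen here only as an illustration): the sequence x_n = 2^(−n) converges to L = 0 with order q = 1 and asymptotic error constant μ = 1/2, since
    \[ \frac{|x_{n+1} - 0|}{|x_n - 0|^{1}} = \frac{2^{-(n+1)}}{2^{-n}} = \frac{1}{2} \quad \text{for every } n. \]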
Order of convergence is used, for example, in the field of numerical analysis, in error analysis.
Computability
Limits can be difficult to compute. There exist limit expressions whose modulus of convergence is undecidable. In recursion theory, the limit lemma proves that it is possible to encode undecidable problems using limits.
There are several theorems or tests that indicate whether the limit exists. These are known as convergence tests. Examples include the ratio test and the squeeze theorem. However they may not tell how to compute the limit.
| Mathematics | Analysis | null |
26555929 | https://en.wikipedia.org/wiki/Harbor%20seal | Harbor seal | The harbor (or harbour) seal (Phoca vitulina), also known as the common seal, is a true seal found along temperate and Arctic marine coastlines of the Northern Hemisphere. The most widely distributed species of pinniped (walruses, eared seals, and true seals), they are found in coastal waters of the northern Atlantic and Pacific oceans, Baltic and North seas.
Harbor seals are brown, silvery white, tan, or gray, with distinctive V-shaped nostrils. An adult can attain a length of 1.85 m (6.1 ft) and a mass of up to 168 kg (370 lb). Blubber under the seal's skin helps to maintain body temperature. Females outlive males (30–35 years versus 20–25 years). Harbor seals stick to familiar resting spots or haulout sites, generally rocky areas (although ice, sand, and mud may also be used) where they are protected from adverse weather conditions and predation, near a foraging area. Males may fight over mates under water and on land. Females bear a single pup after a nine-month gestation, which they care for alone. Pups can weigh up to 16 kg (35 lb) and are able to swim and dive within hours of birth. They develop quickly on their mothers' fat-rich milk, and are weaned after four to six weeks.
The global population of harbor seals is 350,000–500,000, but the freshwater subspecies Ungava seal in Northern Quebec is endangered. Once a common practice, sealing is now illegal in many nations within the animal's range.
Description
Individual harbor seals possess a unique pattern of spots, either dark on a light background or light on a dark. They vary in colour from brownish black to tan or grey; underparts are generally lighter. The body and flippers are short, heads are rounded. Nostrils appear distinctively V-shaped. As with other true seals, there is no pinna (ear flap). An ear canal may be visible behind the eye. Including the head and flippers, they may reach an adult length of 1.85 m (6.1 ft) and a weight of 55 to 168 kg (120 to 370 lb). Females are generally smaller than males.
Population
There are an estimated 350,000–500,000 harbor seals worldwide. While the population is not threatened as a whole, the Greenland, Hokkaidō and Baltic Sea populations are exceptions. Local populations have been reduced or eliminated through disease (especially the phocine distemper virus) and conflict with humans, both unintentionally and intentionally. Killing seals perceived to threaten fisheries is legal in Norway and Canada, but commercial hunting is illegal. Seals are also taken in subsistence hunting and accidentally as bycatch (mainly in bottomset nets). Along the Norwegian coast, bycatch accounted for 48% of pup mortality. Killing or taking seals has been illegal in the United Kingdom since 1 March 2021.
Seals in the United Kingdom are protected by the 1970 Conservation of Seals Act, which prohibits most forms of killing. In the United States, the Marine Mammal Protection Act of 1972 prohibits the killing of any marine mammals, and most local ordinances, as well as NOAA, instruct citizens to leave them alone unless serious danger to the seal exists.
In North America
Pacific Coast
The California population of subspecies P. v. richardii amounted to about 25,000 individuals as of 1984. Pacific harbor seals or California harbor seals are found along the entire Pacific Coast shoreline of the state. They prefer to remain relatively close to shore in subtidal and intertidal zones, and have not been seen beyond the Channel Islands as a pelagic form; moreover, they often venture into bays and estuaries and even swim up coastal rivers. They feed in shallow littoral waters on herring, flounder, hake, anchovy, codfish, and sculpin.
Breeding occurs in California from March to May, with pupping between April and May, depending on local populations. As top-level feeders in the kelp forest, harbor seals enhance species diversity and productivity. They are preyed upon by killer whales (orcas) and white sharks. Haul-out sites in California include urban beaches; from time to time seals can be seen napping on beaches throughout San Francisco Bay (including the conurbation of Richmond, Oakland, and San Francisco), in the Greater Los Angeles area (including Santa Barbara, the city of Los Angeles itself, and Long Beach), and around San Diego Bay, most famously on the beaches near La Jolla.
Considerable scientific inquiry has been carried out by the Marine Mammal Center and other research organizations beginning in the 1980s regarding the incidence and transmission of diseases in harbor seals in the wild, including analysis of phocine herpesvirus. In San Francisco Bay, some harbor seals are fully or partially reddish in color, possibly caused by an accumulation of trace elements such as iron or selenium in the ocean, or a change in the hair follicles.
Although some of the largest harbor seal pupping areas are found in California, they are also found north along the Pacific Coast in Oregon, Washington, British Columbia and Alaska. Large populations move with the season south along the west coast of Canada and may winter on the islands in Washington and Oregon. Pupping is known to occur in both Washington and Oregon as of 2020. People are advised to stay at least 50m away from harbor seals that have hauled out on land, especially the pups, as mothers will abandon them when there is excessive human activity nearby.
Atlantic Coast
Historically, the range of the harbor seal extended from the mouth of the St. Lawrence River and Greenland to the sandy beaches of North Carolina, a distance of well over a thousand miles (greater than 1600 km). Evidence of their presence in these areas is consistent with both the fossil record and a few landmarks named for them during colonization: Robbin's Reef, off Bayonne, New Jersey, gets its name from the Dutch word robben, meaning "seals". On the border between Canada and the US is an island known as Machias Seal Island, a place the harbor seal will occasionally visit today but which is now a sanctuary for puffins. Over the course of hundreds of years, however, the seal was steadily wiped out by being shot on sight by fishermen and by massive pollution. The evidence for this is found in documents all along the coast of New England which put a bounty on the head of every seal shot, as well as in the accounts of harbormasters. New York City, when it was founded in the 1640s, was built on top of an enormous estuary teeming with life that included the harbor seal. Oil pollution in the 1800s started the decline, which was later compounded by even more toxic 20th-century chemicals, including PCBs and dioxin. By the time of the 1972 Clean Water Act, New York Harbor was almost dead; almost no living thing could survive in it. Approximately 300 miles to the north, Boston Harbor was equally polluted. Raw sewage had been dumped in the harbor since the late 1800s and the stench of fecal matter in the Charles River was overpowering, as evidenced by the song "Dirty Water" by the Standells, written in 1966. Flatfish, abundant in the area, had enormous tumors in their livers by the 1980s, and the harbor seal was long gone, shot to oblivion.
As of 2020, however, the seals have returned. They never were extirpated from Canada and certain pockets of the Maine coast, and thus an important mother population was created from whence the species could reclaim the home of their ancestors. Currently, they are sighted as far south as the barrier islands of North Carolina on a regular basis, with Massachusetts being the southernmost point of known pupping areas along the Atlantic Coast. Harbor seals move south from eastern Canadian waters to breed along the coast of Maine, Cape Cod, and the South Shore in Massachusetts in May and June, and return northward in fall. Others will head south from these areas to "vacation" in warmer waters, particularly young seals unable to compete with adults for food and territory; they do not return north until spring.
One park ranger in New York City, which is dead center of its western Atlantic range, says that "New York is like their Miami resort." This refers to the habit of young seals leaving Cape Cod and even some Arctic waters to inhabit the harbor in winter. In 2018 the New York Post reported that the harbor is now "cleaner than it has been in 110 years," and since the first decade of the 21st century, the harbor seal has found the old turf of its ancestors to be a land of plenty and the water to be livable. Within sight of the New York skyline, known colonies of harbor seals are found on Hoffman and Swinburne Islands as well as portions of Red Hook and Staten Island, readily hauling out every year from October until very early May. Known favorite foods of the seal are returning in grand numbers to New York Harbor as well as nearby New Jersey, from Raritan Bay all the way down the entire Jersey Shore, with schools of mossbunker regularly attracting harbor seals, their cousins the grey seals, dolphins and, most recently, whales. Both the northern and southern shores of Long Island have a reliable population of harbor seals as well as greys, which will take sand lance as well as some species of crab as part of their diet.
Subspecies
The five proposed subspecies of Phoca vitulina are:
Habitat and diet
Harbor seals prefer to frequent familiar resting sites. They may spend several days at sea and travel up to 50 km in search of feeding grounds, and will also swim more than a hundred miles upstream into fresh water in large rivers in search of migratory fish like shad and likely salmon. Resting sites may be both rugged, rocky coasts, such as those of the Hebrides or the shorelines of New England, or sandy beaches, like the ones that flank Normandy in Northern France or the Outer Banks of North Carolina. Harbor seals frequently congregate in harbors, bays, sandy intertidal zones, and estuaries in pursuit of prey fish such as salmon, menhaden, anchovy, sea bass, herring, mackerel, cod, whiting and flatfish, and occasionally shrimp, crabs, mollusks, and squid. Atlantic subspecies of either Europe or North America also exploit deeper-dwelling fish of the genus Ammodytes as a food source and Pacific subspecies have been recorded occasionally consuming fish of the genus Oncorhynchus. Although primarily coastal, dives of over 500 m have been recorded. Harbor seals have been recorded to attack, kill and eat several kinds of ducks.
Behavior, survival, and reproduction
Harbor seals are solitary, but are gregarious when hauled out and during the breeding season, though they do not form groups as large as some other seals. When not actively feeding, they haul out to rest. They tend to be coastal, not venturing more than 20 km offshore. The mating system is not known, but is thought to be polygamous. Females give birth once per year, with a gestation period of around nine months. Females have a mean age at sexual maturity of 3.72 years and a mean age at first parturition of 4.64 years. Both courtship and mating occur under water. Researchers have found males gather under water, turn on their backs, put their heads together, and vocalize to attract females ready for breeding. The pregnancy rate of females was 92% from age 3 to age 36, with lowered reproductive success after the age of 25 years.
Birthing of pups occurs annually on shore. The timing of the pupping season varies with location, occurring in February for populations in lower latitudes, and as late as July in the subarctic zone. The mothers are the sole providers of care, with lactation lasting 24 days. The single pups are born well developed, capable of swimming and diving within hours. Suckling for three to four weeks, pups feed on the mother's rich, fatty milk and grow rapidly; born weighing up to 16 kilograms, the pups may double their weight by the time of weaning.
Harbor seals must spend a great deal of time on shore when molting, which occurs shortly after breeding. This onshore time is important to the life cycle, and can be disturbed when substantial human presence occurs. The timing of onset of molt depends on the age and sex of the animal, with yearlings molting first and adult males last. A female mates again immediately following the weaning of her pup. Harbor seals are sometimes reluctant to haul out in the presence of humans, so shoreline development and access must be carefully studied, and if necessary managed, in known locations of seal haul out.
In comparison to many pinniped species, and in contrast to otariid pinnipeds, harbor seals are generally regarded to be more vocally reticent. However, they do utilize non-harmonic vocalizations to maintain breeding territories and to attract mates during specified times of year, and also during mother and pup interactions.
Annual survival rates were calculated at 0.91 for adult males, and 0.902 for adult females. Maximum age for females was 36 and for males 31 years.
Notable individuals
Andre, rescued and trained by his owner Harry Goodridge, became an iconic figure in his hometown of Rockport, Maine.
Hoover, also rescued from a Maine harbor. Hoover became famous for his ability to imitate human speech, something not observed in any other mammal.
Popeye, the official seal of Friday Harbor, Washington, notable for her common sightings up until 2019, when she was presumed to have died. She was identified and named for her cloudy left eye. There is a statue of her in the Port of Friday Harbor.
Freddie, a seal pup commonly spotted along the Thames in central London. Named after Freddie Mercury due to his bushy whiskers and playfulness. Freddie was known to travel unusually far into London from the Thames Estuary, and was often sighted as far west as Hammersmith. On 21 March 2021 he had to be put down after he was violently mauled by an out-of-control dog.
| Biology and health sciences | Pinnipeds | Animals |
32177451 | https://en.wikipedia.org/wiki/Function%20%28computer%20programming%29 | Function (computer programming) | In computer programming, a function (also procedure, method, subroutine, routine, or subprogram) is a callable unit of software logic that has a well-defined interface and behavior and can be invoked multiple times.
Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names (unless they are anonymous). Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability.
Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units with different implications and features.
Terminology
The meaning of each callable term (function, procedure, method, ...) is, in fact, different. They are not synonymous. Nevertheless, they each add a capability to programming that has commonality.
The term used tends to reflect the context in which it is used usually based on the language being used. For example:
Subprogram, routine and subroutine were more commonly used in the past but are less common today
Routine and subroutine have essentially the same meaning but describe a hierarchical relationship, much like how a subdirectory is structurally subordinate to its parent directory; program and subprogram are similarly related
Some consider function to imply a mathematical function, having no side-effects, but in many contexts function refers to any callable
In the context of Visual Basic and Ada, a Sub (short for subroutine or subprocedure) or a procedure, respectively, is the name of a callable that does not return a value, whereas a Function does return a value (a minimal illustration of this distinction follows this list)
Object-oriented languages such as C# and Java use the term method to refer to a member function of an object
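A minimal sketch of the value-returning versus non-value-returning distinction described above, written here in C (the language choice and the names add_numbers and log_greeting are illustrative assumptions, not drawn from any particular language discussed in this article):

    #include <stdio.h>

    /* A "function" in the narrow sense: it computes and returns a value. */
    static int add_numbers(int a, int b) {
        return a + b;
    }

    /* A "procedure" or "sub" in the narrow sense: it performs an action and returns nothing. */
    static void log_greeting(const char *name) {
        printf("Hello, %s!\n", name);
    }

    int main(void) {
        int sum = add_numbers(2, 3);   /* the returned value is used by the caller */
        log_greeting("world");         /* called only for its side effect */
        printf("sum = %d\n", sum);
        return 0;
    }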
History
The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on "Preparation of Problems for EDVAC-type Machines." Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a paper of 1945 on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack.
The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use the call sequence—a series of instructions—at each call site.
Subroutines were implemented in Konrad Zuse's Z4 in 1945.
In 1945, Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines.
In January 1947, John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery', held
under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy, in which he discussed serial and parallel operation.
Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories.
Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines.
Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically use a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 (1961) is one of the first computers to store subroutine return data on a stack.
The DEC PDP-6 (1964) is one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10 (1966), PDP-11 (1970) and VAX-11 (1976) lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines.
Language support
In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together.
One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming.
Libraries
Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs.
Many early computers loaded the program instructions into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program (or "mainline"); and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use.
Return by indirect jump
To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address.
On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable.
Jump to subroutine
Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly.
In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack.
In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example
...
JSB MYSUB (Calls subroutine MYSUB.)
BB ... (Will return here after MYSUB is done.)
to call a subroutine called MYSUB from the main program. The subroutine would be coded as
MYSUB NOP (Storage for MYSUB's return address.)
AA ... (Start of MYSUB's body.)
...
JMP MYSUB,I (Returns to the calling program.)
The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB, I which branched to the location stored at location MYSUB.
Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls.
Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.
Call stack
Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address.
The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose.
The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter.
Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible.
When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management.
However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data.
In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records.
Delayed stacking
One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both.
This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns.
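As a rough C++ sketch (function names are illustrative, and the actual behavior depends on the compiler), the first function below is a leaf: it makes no calls of its own, so a compiler applying this strategy may keep its argument and return address in registers and never touch the call stack; the second function does call another, so it needs stack storage for values that must survive the call.
// Leaf function: makes no further calls, so a compiler may avoid using the call stack for it.
int square(int x) {
    return x * x;
}
// Non-leaf function: calls square, so values that must survive the call are typically saved on the stack.
int sumOfSquares(int a, int b) {
    return square(a) + square(b);
}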
Features
In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked (called) many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control.
Implementations
The features of implementations of callable units evolved over time and vary by context.
This section describes features of the various common implementations.
General characteristics
Most modern programming languages provide features to define and call functions, including syntax to:
Delimit the implementation of a function from the rest of the program
Assign an identifier, name, to a function
Define formal parameters with a name and data type for each
Assign a data type to the return value, if any
Specify a return value in the function body
Call a function
Provide actual parameters that correspond to a called function's formal parameters
Return control to the caller at the point of call
Consume the return value in the caller
Dispose of the values returned by a call
Provide a private naming scope for variables
Identify variables outside the function that are accessible within it
Propagate an exceptional condition out of a function and handle it in the calling context
Package functions into a container such as module, library, object, or class
Naming
Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value (function or subprogram) vs. one that does not (subroutine or procedure).
Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value.
Call syntax
If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x).
A callable unit that does not return a value is called as a stand-alone statement like print("hello"). This syntax can also be used for a callable unit that returns a value, but the return value will be ignored.
Some older languages require a keyword for calls that do not consume a return value, like CALL print("hello").
Parameters
Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments.
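To make the terminology concrete, here is a minimal C++ sketch (names are illustrative): base and exponent are the formal parameters declared by the callable, while the values 2 and 10 supplied at the call site are the actual parameters, or arguments.
#include <cmath>
#include <iostream>

// 'base' and 'exponent' are the formal parameters of this callable.
double power(double base, double exponent) {
    return std::pow(base, exponent);
}

int main() {
    // 2 and 10 are the actual parameters (arguments) matched to base and exponent.
    std::cout << power(2, 10) << "\n";
    return 0;
}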
Return value
In some languages, such as BASIC, different syntax (i.e. a different keyword) is used for a callable that returns a value vs. one that does not.
In other languages, the syntax is the same regardless.
In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#.
In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow.
Side effects
In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution.
Side effects are considered undesirable by Robert C. Martin, who is known for promoting design principles. Martin argues that side effects can result in temporal coupling or order dependencies.
In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect.
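A minimal C++ sketch of the distinction (identifiers are illustrative): the first function below is pure, while the second has side effects because it updates global state and writes to the console.
#include <iostream>

int callCount = 0;  // global state

// Pure: the result depends only on the argument and there are no observable side effects.
int twice(int x) {
    return 2 * x;
}

// Impure: modifies global data and performs output.
int twiceAndLog(int x) {
    ++callCount;              // modifies global state
    std::cout << "called\n";  // writes to a peripheral device
    return 2 * x;
}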
Local variables
Most contexts support local variables: memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address.
Nested call recursion
If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global.
Languages going back to ALGOL, PL/I and C, as well as modern languages, almost invariably use a call stack, usually supported by the instruction set, to provide an activation record for each call. That way, a nested call can modify its local variables without affecting any of the suspended calls' variables.
Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers:
int Fib(int n) {
if (n <= 1) {
return n;
}
return Fib(n - 1) + Fib(n - 2);
}
Early languages like Fortran did not initially support recursion because only one set of variables and return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer.
Nested scope
Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside, e.g., a function body, such that the name of the inner is only visible within the body of the outer.
Reentrancy
If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads.
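A hypothetical C++ sketch of the difference: the first version keeps its result in a static buffer shared by every execution, so it is not reentrant; the second keeps all state in parameters and locals supplied by the caller, so nested or concurrent calls do not interfere.
#include <cstdio>
#include <cstddef>

// Not reentrant: every execution shares the same static buffer.
const char *intToStringShared(int value) {
    static char buffer[16];
    std::snprintf(buffer, sizeof buffer, "%d", value);
    return buffer;
}

// Reentrant: the caller supplies the storage, so concurrent or nested calls do not interfere.
void intToString(int value, char *buffer, std::size_t size) {
    std::snprintf(buffer, size, "%d", value);
}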
Overloading
Some languages support overloading, which allows multiple callables with the same name in the same scope, operating on different types of input. Consider the square root function applied to real number, complex number and matrix input. The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain, since each one has a name that is relatively easy to understand and to remember, instead of longer and more complicated names like sqrt_real, sqrt_complex and sqrt_matrix.
Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments or it fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading.
Here is an example of overloading in C++, two functions Area that accept different types:
// returns the area of a rectangle defined by height and width
double Area(double h, double w) { return h * w; }
// returns the area of a circle defined by radius
double Area(double r) { return r * r * 3.14; }
int main() {
double rectangle_area = Area(3, 4);
double circle_area = Area(5);
}
PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example:
DECLARE gen_name GENERIC(
name WHEN(FIXED BINARY),
flame WHEN(FLOAT),
pathname OTHERWISE);
Multiple argument definitions may be specified for each entry. A call to "gen_name" will result in a call to "name" when the argument is FIXED BINARY, "flame" when it is FLOAT, etc. If the argument matches none of the choices, "pathname" will be called.
Closure
A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects.
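In C++, for example, lambdas provide closures; in the sketch below (names are illustrative) the returned callable captures the local variable offset and carries its value even after makeAdder has returned.
#include <functional>
#include <iostream>

// Returns a closure that remembers 'offset' after makeAdder has returned.
std::function<int(int)> makeAdder(int offset) {
    return [offset](int x) { return x + offset; };  // 'offset' is captured by value
}

int main() {
    auto addFive = makeAdder(5);
    std::cout << addFive(3) << "\n";  // prints 8
    return 0;
}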
Exception reporting
Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution.
Most modern languages support exceptions, which allow for exceptional control flow that pops the call stack until an exception handler is found to handle the condition.
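A minimal C++ sketch of this flow (names are illustrative): divide raises an exception for a zero divisor, the exception propagates out of the function, and the call stack is unwound until the handler in the caller is reached.
#include <iostream>
#include <stdexcept>

double divide(double numerator, double denominator) {
    if (denominator == 0.0) {
        throw std::invalid_argument("division by zero");  // exceptional condition
    }
    return numerator / denominator;
}

int main() {
    try {
        std::cout << divide(1.0, 0.0) << "\n";
    } catch (const std::invalid_argument &e) {
        std::cerr << "error: " << e.what() << "\n";  // handled in the calling context
    }
    return 0;
}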
Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call.
In the IBM System/360, where a return code was expected from a subroutine, the return value was often designed to be a multiple of 4, so that it could be used as a direct index into a branch table often located immediately after the call instruction, avoiding extra conditional tests and further improving efficiency. In System/360 assembly language, one would write, for example:
BAL 14, SUBRTN01 go to a subroutine, storing return address in R14
B TABLE(15) use returned value in reg 15 to index the branch table,
* branching to the appropriate branch instr.
TABLE B OK return code =00 GOOD }
B BAD return code =04 Invalid input } Branch table
B ERROR return code =08 Unexpected condition }
Call overhead
A call has runtime overhead, which may include but is not limited to:
Allocating and reclaiming call stack storage
Saving and restoring processor registers
Copying input variables
Copying values after the call into the caller's context
Automatic testing of the return code
Handling of exceptions
Dispatching such as for a virtual method in an object-oriented language
Various techniques are employed to minimize the runtime cost of calls.
Compiler optimization
Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times, since the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult; indeed, it is undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case: that every callable may have side effects.
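The following C++ sketch (names are illustrative) makes this concrete: because f modifies a global counter, the two calls in ratio can return different values, so a compiler that cannot prove f free of side effects must not collapse them into one call.
int counter = 0;

// Has a side effect, so two calls with the same argument may return different values.
int f(int x) {
    ++counter;
    return x + counter;
}

double ratio(int x) {
    // A compiler may not rewrite this as a single call to f unless it can prove f has no side effects.
    return double(f(x) - 1) / double(f(x) + 1);
}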
Inlining
Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize the code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when the callable is called only once or its body is very short, like one line.
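A small C++ sketch of the idea (names are illustrative; the inline keyword is only a hint, and modern compilers decide for themselves): after inlining, the call in caller may be compiled as if the body of squarePlusOne had been written there directly.
// A short function that is a good candidate for inlining.
inline int squarePlusOne(int x) {
    return x * x + 1;
}

int caller(int n) {
    // After inlining, this may compile as if written: return n * n + 1;
    return squarePlusOne(n);
}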
Sharing
Callables can be defined within a program, or separately in a library that can be used by multiple programs.
Inter-operability
A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue.
Built-in functions
A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions. A built-in function does not need to be defined like other functions since it is built in to the programming language.
Programming
Trade-offs
Advantages
Advantages of breaking a program into functions include:
Decomposing a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
Reducing duplicate code within a program
Enabling reuse of code across multiple programs
Dividing a large programming task among various programmers or various stages of a project
Hiding implementation details from users of the function
Improving readability of code by replacing a block of code with a function call where a descriptive function name serves to describe the block of code. This makes the calling code concise and readable even if the function is not meant to be reused.
Improving traceability (i.e. most languages offer ways to obtain the call trace which includes the names of the involved functions and perhaps even more information such as file names and line numbers); by not decomposing the code into functions, debugging would be severely impaired
Disadvantages
Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism.
A function typically requires standard housekeeping code – both at the entry to, and exit from, the function (function prologue and epilogue – usually saving general purpose registers and return address as a minimum).
Conventions
Many programming conventions have been developed regarding callables.
With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables.
Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct.
Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead.
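An illustrative C++ sketch of this advice (names are hypothetical): the first version depends on a global variable, coupling it to every other callable that touches taxRate, while the refactored version takes the rate as an explicit parameter.
double taxRate = 0.2;  // global variable

// Coupled to the global: behavior depends on state set elsewhere.
double priceWithTaxGlobal(double price) {
    return price * (1.0 + taxRate);
}

// Refactored: the dependency is made explicit as a parameter.
double priceWithTax(double price, double rate) {
    return price * (1.0 + rate);
}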
Examples
Early BASIC
Early BASIC variants require each line to have a unique number (line number) that orders the lines for execution. They provide no separation of the callable code from the rest of the program, no mechanism for passing arguments or returning a value, and all variables are global. They provide the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and, on return, continues at the line after the GOSUB.
10 REM A BASIC PROGRAM
20 GOSUB 100
30 GOTO 20
100 INPUT "GIVE ME A NUMBER"; N
110 PRINT "THE SQUARE ROOT OF"; N;
120 PRINT "IS"; SQR(N)
130 RETURN
This code repeatedly asks the user to enter a number and reports the square root of the value. Lines 100-130 are the callable.
Small Basic
In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine.
The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body which ends with the EndSub keyword.
Sub SayHello
TextWindow.WriteLine("Hello!")
EndSub
This can be called as SayHello().
Visual Basic
In Visual Basic (VB), including both the latest .NET-based product line and the legacy VB6, the term procedure is used for the callable unit concept. The keyword Sub is used for a procedure that returns no value and Function for one that returns a value. When used in the context of a class, a procedure is a method.
Each parameter has a data type that can be specified, but if not, defaults to Object for the .NET-based versions and Variant for VB6.
VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively.
Unless ByRef is specified, an argument is passed ByVal. Therefore, ByVal is rarely explicitly specified.
For a simple type like a number, these conventions are relatively clear. Passing ByRef allows the procedure to modify the passed variable, whereas passing ByVal does not. For an object, the semantics can confuse programmers, since an object is always treated as a reference. Passing an object ByVal copies the reference, not the state of the object. The called procedure can modify the state of the object via its methods, yet cannot modify the object reference of the actual parameter.
Sub DoSomething()
' Some Code Here
End Sub
This Sub does not return a value and has to be called stand-alone, like DoSomething
Function GiveMeFive() as Integer
GiveMeFive = 5
End Function
This returns the value 5, and a call can be part of an expression like y = x + GiveMeFive()
Sub AddTwo(ByRef intValue as Integer)
intValue = intValue + 2
End Sub
This has a side effect: it modifies the variable passed by reference. It could be called for a variable v like AddTwo(v). Given that v is 5 before the call, it will be 7 after.
C and C++
In C and C++, a callable unit is called a function.
A function definition starts with the name of the type of value that it returns or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces.
In C++, a function declared in a class (as non-static) is called a member function or method. A function outside of a class can be called a free function to distinguish it from a member function.
void doSomething() {
/* some code */
}
This function does not return a value and is always called stand-alone, like doSomething()
int giveMeFive() {
return 5;
}
This function returns the integer value 5. The call can be stand-alone or in an expression like y = x + giveMeFive()
void addTwo(int *pi) {
*pi += 2;
}
This function has a side effect: it adds 2 to the value at the address passed in. It could be called for a variable v as addTwo(&v), where the ampersand (&) tells the compiler to pass the address of the variable. Given that v is 5 before the call, it will be 7 after.
void addTwo(int& i) {
i += 2;
}
This function requires C++; it would not compile as C. It has the same behavior as the preceding example but passes the actual parameter by reference rather than passing its address. A call such as addTwo(v) does not include an ampersand since the compiler handles passing by reference without extra syntax in the call.
PL/I
In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A (trivial) function to change the sign of each element of a two-dimensional array might look like:
change_sign: procedure(array);
declare array(*,*) float;
array = -array;
end change_sign;
This could be called with various arrays as follows:
/* first array bounds from -5 to +10 and 3 to 9 */
declare array1 (-5:10, 3:9) float;
/* second array bounds from 1 to 16 and 1 to 16 */
declare array2 (16,16) float;
call change_sign(array1);
call change_sign(array2);
Python
In Python, the keyword def denotes the start of a function definition. The statements of the function body follow as indented on subsequent lines and end at the line that is indented the same as the first line or end of file.
def format_greeting(name):
return "Welcome " + name
def greet_martin():
print(format_greeting("Martin"))
The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin() to write "Welcome Martin" to the console.
Prolog
In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule (or clause) of the form:
A :- B
which has the logical reading:
A if B
behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B.
Consider, for example, the Prolog program:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).
Notice that the motherhood function, X = mother(Y) is represented by a relation, as in a relational database. However, relations in Prolog function as callable units.
For example, the procedure call ?- parent_child(X, charles) produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:
?- parent_child(elizabeth, Y).
Y = charles.
?- parent_child(X, Y).
X = elizabeth,
Y = charles.
X = charles,
Y = harry.
X = charles,
Y = william.
?- parent_child(william, harry).
no.
?- parent_child(elizabeth, charles).
yes.
| Technology | Software development: General | null |
33775893 | https://en.wikipedia.org/wiki/Rainfed%20agriculture | Rainfed agriculture | Rainfed agriculture is a type of farming that relies on rainfall for water. It provides much of the food consumed by poor communities in developing countries. For example, rainfed agriculture accounts for more than 95% of farmed land in sub-Saharan Africa, 90% in Latin America, 75% in the Near East and North Africa, 65% in East Asia, and 60% in South Asia.
There is a strong correlation between poverty, hunger and water scarcity, in part because of the dependence on rainfed agriculture in developing economies. Moreover, because of increased weather variability, climate change is expected to make rainfed farmers more vulnerable.
Rainfed agriculture is distinguished in most of the literature from irrigated agriculture, which applies water from other sources, such as freshwater from streams, rivers and lakes or groundwater. As farmers become more aware of and develop better water resource management strategies, most agriculture exists on a spectrum between rainfed and irrigated agriculture.
Hunger and water correlation
There is a correlation between poverty, hunger, and water scarcity. The UN Millennium Development Project has identified the ‘hot spot’ countries in the world suffering from the largest prevalence of malnutrition. These countries coincide closely with those located in the semi-arid and dry sub-humid hydroclimates in the world (i.e., savanna and steppe ecosystems), where rainfed agriculture is the dominant source of food and where water constitutes a key limiting factor to crop growth. Of the 850 million undernourished people in the world, essentially all live in poor, developing countries, which predominantly are located in tropical regions.
Levels of productivity, particularly in parts of sub-Saharan Africa and South Asia, are low due to degraded soils, high levels of evaporation, droughts, floods and a general lack of effective water management. A major study into water use by agriculture, known as the Comprehensive Assessment of Water Management in Agriculture, coordinated by the International Water Management Institute, noted a close correlation between hunger, poverty, and water. However, it concluded that there was much opportunity to raise the productivity of rainfed farming. Managing rainwater and soil moisture more effectively and using supplemental and small-scale irrigation is believed to hold the key to helping the greatest number of poor people. It called for a new era of water investments and policies for upgrading rainfed agriculture that would go beyond controlling field-level soil and water to bring new freshwater sources through better local management of rainfall and runoff.
The importance of rainfed agriculture varies regionally, but it produces most food for poor communities in developing countries. In sub-Saharan Africa, more than 95% of the farmed land is rainfed, while the corresponding figure for Latin America is almost 90%, for South Asia about 60%, for East Asia 65%, and for the Near East and North Africa 75%. Most countries in the world depend primarily on rainfed agriculture for their grain food. Despite large strides made in improving productivity and environmental conditions in many developing countries, a great number of poor families in Africa and Asia still face poverty, hunger, food insecurity, and malnutrition where rainfed agriculture is the main agricultural activity. These problems are exacerbated by adverse biophysical growing conditions and the poor socioeconomic infrastructure in many areas in the semi-arid tropics (SAT). The SAT is the home to 38% of the developing countries’ poor, 75% of whom live in rural areas. Over 45% of the world's hungry and more than 70% of its malnourished children live in the SAT.
Output trends
Since the late 1960s, agricultural land use has expanded by 20–25%, which has contributed to approximately 30% of the overall grain production growth during the period. The remaining growth in output originated from intensification, through yield increases per unit land area. However, the regional variation is large, as is the difference between irrigated and rainfed agriculture. In developing countries, rainfed grain yields average 1.5 tonnes per hectare, compared with 3.1 tonnes per hectare for irrigated yields, and increases in production from rainfed agriculture have mainly come from expansion of the cultivated area.
| Technology | Agriculture_2 | null |
45311025 | https://en.wikipedia.org/wiki/Spectrometer | Spectrometer | A spectrometer () is a scientific instrument used to separate and measure spectral components of a physical phenomenon. Spectrometer is a broad term often used to describe instruments that measure a continuous variable of a phenomenon where the spectral components are somehow mixed. In visible light a spectrometer can separate white light and measure individual narrow bands of color, called a spectrum. A mass spectrometer measures the spectrum of the masses of the atoms or molecules present in a gas. The first spectrometers were used to split light into an array of separate colors. Spectrometers were developed in early studies of physics, astronomy, and chemistry. The capability of spectroscopy to determine chemical composition drove its advancement and continues to be one of its primary uses. Spectrometers are used in astronomy to analyze the chemical composition of stars and planets, and spectrometers gather data on the origin of the universe.
Examples of spectrometers are devices that separate particles, atoms, and molecules by their mass, momentum, or energy. These types of spectrometers are used in chemical analysis and particle physics.
Types of spectrometer
Optical spectrometers or optical emission spectrometers
Optical absorption spectrometers
Optical spectrometers (often simply called "spectrometers"), in particular, show the intensity of light as a function of wavelength or of frequency. The different wavelengths of light are separated by refraction in a prism or by diffraction by a diffraction grating. Ultraviolet–visible spectroscopy is an example.
These spectrometers utilize the phenomenon of optical dispersion. The light from a source can consist of a continuous spectrum, an emission spectrum (bright lines), or an absorption spectrum (dark lines). Because each element leaves its spectral signature in the pattern of lines observed, a spectral analysis can reveal the composition of the object being analyzed.
A spectrometer that is calibrated for measurement of the incident optical power is called a spectroradiometer.
Optical emission spectrometers
Optical emission spectrometers (often called "OES or spark discharge spectrometers"), are used to evaluate metals to determine the chemical composition with very high accuracy. A spark is applied through a high voltage on the surface which vaporizes particles into a plasma. The particles and ions then emit radiation that is measured by detectors (photomultiplier tubes) at different characteristic wavelengths.
Electron spectroscopy
Some forms of spectroscopy involve analysis of electron energy rather than photon energy. X-ray photoelectron spectroscopy is an example.
Mass spectrometer
A mass spectrometer is an analytical instrument that is used to identify the amount and type of chemicals present in a sample by measuring the mass-to-charge ratio and abundance of gas-phase ions.
Time-of-flight spectrometer
The energy spectrum of particles of known mass can also be measured by determining the time of flight between two detectors (and hence, the velocity) in a time-of-flight spectrometer. Alternatively, if the particle-energy is known, masses can be determined in a time-of-flight mass spectrometer.
Magnetic spectrometer
When a fast charged particle (charge q, mass m) enters a constant magnetic field B at right angles, it is deflected into a circular path of radius r, due to the Lorentz force. The momentum p of the particle is then given by
p = mv = qBr,
where m and v are the mass and velocity of the particle. The focusing principle of the oldest and simplest magnetic spectrometer, the semicircular spectrometer, invented by J. K. Danysz, is as follows: a constant magnetic field is perpendicular to the plane of the particle paths, and charged particles of momentum p that pass the slit are deflected into circular paths of radius r = p/qB. It turns out that they all hit the horizontal line at nearly the same place, the focus; here a particle counter should be placed. By varying B, it is possible to measure the energy spectrum of alpha particles in an alpha particle spectrometer, of beta particles in a beta particle spectrometer, of particles (e.g., fast ions) in a particle spectrometer, or to measure the relative content of the various masses in a mass spectrometer.
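As an arbitrary numerical illustration of this relation (values chosen only for the example, written in LaTeX notation):
p = qBr = (1.6\times10^{-19}\,\mathrm{C})(0.5\,\mathrm{T})(0.1\,\mathrm{m}) = 8\times10^{-21}\,\mathrm{kg\,m/s}
That is, a singly charged ion bent onto a 0.1 m radius by a 0.5 T field carries a momentum of about 8×10⁻²¹ kg·m/s.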
Since Danysz' time, many types of magnetic spectrometers more complicated than the semicircular type have been devised.
Resolution
Generally, the resolution of an instrument tells us how well two close-lying energies (or wavelengths, or frequencies, or masses) can be resolved. In general, for an instrument with mechanical slits, higher resolution will mean lower intensity.
| Technology | Measuring instruments | null |
35456546 | https://en.wikipedia.org/wiki/Ricci%20calculus | Ricci calculus | In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861.
A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.
A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor.
For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
Applications
Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.
Working with a main proponent of the exterior calculus Élie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus:In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
Notation for indices
Basis-related distinctions
Space and time coordinates
Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:
The lowercase Latin alphabet is used to indicate restriction to 3-dimensional Euclidean space, which take values 1, 2, 3 for the spatial components; and the time-like element, indicated by 0, is shown separately.
The lowercase Greek alphabet is used for 4-dimensional spacetime, which typically take values 0 for time components and 1, 2, 3 for the spatial components.
Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.
Coordinate and index notation
The author(s) will usually make it clear whether a subscript is intended as an index or as a label.
For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In an expression where it appears as an index, a symbol such as i is interpreted as ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t.
Reference to basis
Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in:
to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:
This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.
Upper and lower indices
Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics.
In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
Covariant tensor components
A lower index (subscript) indicates covariance of the components with respect to that index:
Contravariant tensor components
An upper index (superscript) indicates contravariance of the components with respect to that index:
Mixed-variance tensor components
A tensor may have both upper and lower indices:
Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta).
Tensor type and degree
The number of upper and lower indices of a tensor gives its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor.
The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q.
Summation convention
The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:
The operation implied by such a summation is called tensor contraction:
This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:
Other combinations of repeated indices within a term are considered to be ill-formed, such as
{|
|-
| || (both occurrences of are lower; would be fine)
|-
| || ( occurs twice as a lower index; or would be fine).
|}
The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
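As an elementary illustration of a well-formed contraction (written here in LaTeX notation), a single repeated index in three dimensions expands to an explicit sum:
A^{i} B_{i} = A^{1} B_{1} + A^{2} B_{2} + A^{3} B_{3}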
Multi-index notation
If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:
where and .
Sequential summation
A pair of vertical bars around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices:
means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example:
When using multi-index notation, an underarrow is placed underneath the block of indices:
where
Raising and lowering indices
By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:
The base symbol in many cases is retained (e.g. using where appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
Correlations between index positions and invariance
This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.
The Kronecker delta is used, see also below.
{| class="wikitable"
|-
!
! Basis transformation
! Component transformation
! Invariance
|-
! Covector, covariant vector, 1-form
|
|
|
|-
! Vector, contravariant vector
|
|
|
|}
General outlines for index notation and operations
Tensors are equal if and only if every corresponding component is equal; e.g., tensor equals tensor if and only if
for all . Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).
Free and dummy indices
Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices.
A tensor equation represents many ordinary (real-valued) equations
The components of tensors (like , etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has free indices, and if the dimensionality of the underlying vector space is , the equality represents equations: each index takes on every value of a specific set of values.
For instance, if
is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (), there are 4³ = 64 equations. Three of these are:
This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
Indices are replaceable labels
Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:
whereas an erroneous change is:
In the first replacement, replaced and replaced everywhere, so the expression still has the same meaning. In the second, did not fully replace , and did not fully replace (incidentally, the contraction on the index became a tensor product), which is entirely inconsistent for reasons shown next.
Indices are the same in every term
The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example:
as for an erroneous expression:
In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, line up throughout and occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while lines up, and do not, and appears twice in one term (contraction) and once in another term, which is inconsistent.
Brackets and punctuation used once where implied
When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.
If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets.
Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices.
Symmetric and antisymmetric parts
Symmetric part of tensor
Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing indices using to range over permutations of the numbers 1 to , one takes a sum over the permutations of those indices for , and then divides by the number of permutations:
For example, two symmetrizing indices mean there are two indices to permute and sum over:
while for three symmetrizing indices, there are three indices to sum over and permute:
The symmetrization is distributive over addition;
Indices are not part of the symmetrization when they are:
not on the same level, for example;
within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
Here the and indices are symmetrized, is not.
Antisymmetric or alternating part of tensor
Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For antisymmetrizing indices – the sum over the permutations of those indices multiplied by the signature of the permutation is taken, then divided by the number of permutations:
where is the generalized Kronecker delta of degree , with scaling as defined below.
For example, two antisymmetrizing indices imply:
while three antisymmetrizing indices imply:
as for a more specific example, if represents the electromagnetic tensor, then the equation
represents Gauss's law for magnetism and Faraday's law of induction.
As before, the antisymmetrization is distributive over addition;
As with symmetrization, indices are not antisymmetrized when they are:
not on the same level, for example;
within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
Here the and indices are antisymmetrized, is not.
Sum of symmetric and antisymmetric parts
Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:
as can be seen by adding the above expressions for and . This does not hold for other than two indices.
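For two indices, the explicit formulas (written here in LaTeX notation as an illustration) are:
T_{(ab)} = \tfrac{1}{2}\left(T_{ab} + T_{ba}\right), \qquad T_{[ab]} = \tfrac{1}{2}\left(T_{ab} - T_{ba}\right), \qquad T_{ab} = T_{(ab)} + T_{[ab]}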
Differentiation
For compactness, derivatives may be indicated by adding indices after a comma or semicolon.
Partial derivative
While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by , but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, , can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.
To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable , a comma is placed before an appended lower index of the coordinate variable.
This may be repeated (without adding further commas):
These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates
where is the Kronecker delta.
Covariant derivative
The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon () placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash () or in three-dimensional curved space a single vertical bar ().
The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:
where are the connection coefficients.
For an arbitrary tensor, each contravariant index contributes a connection term with a positive sign and each covariant index one with a negative sign.
An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol $\nabla_{\beta}$; for a vector field one writes, for example, $\nabla_{\beta} A^{\alpha} = A^{\alpha}{}_{;\beta}$.
The covariant formulation of the directional derivative of any tensor field along a vector $v^{\gamma}$ may be expressed as its contraction with the covariant derivative, e.g. $v^{\gamma} A^{\alpha}{}_{;\gamma}$.
The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.
This derivative is characterized by the product rule.
Connection types
A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection.
A connection is a metric connection when the covariant derivative of the metric tensor vanishes, $g_{\alpha\beta;\gamma} = 0$.
An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes) is a Levi-Civita connection.
The connection coefficients of a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.
Exterior derivative
The exterior derivative of a totally antisymmetric type (0, s) tensor field (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components.
This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.
Lie derivative
The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a tensor field of any type along (the flow of) a contravariant vector field may be expressed in a coordinate basis in terms of the partial derivatives of the tensor components and of the vector field components.
This derivative is characterized by the product rule and by the fact that the Lie derivative of a contravariant vector field along itself is zero, $\mathcal{L}_{X}X^{\alpha} = 0$.
Notable tensors
Kronecker delta
The Kronecker delta is like the identity matrix when multiplied and contracted: $\delta^{\alpha}_{\beta}\,A^{\beta} = A^{\alpha}$ and $\delta^{\alpha}_{\beta}\,\omega_{\alpha} = \omega_{\beta}$.
The components are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.
Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, $\delta^{\alpha}_{\alpha} = 4$.
The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by $\delta^{\beta_1 \cdots \beta_p}_{\alpha_1 \cdots \alpha_p} = \delta^{\beta_1}_{[\alpha_1} \cdots \delta^{\beta_p}_{\alpha_p]}$ (a common definition includes an additional multiplier of p! on the right),
and acts as an antisymmetrizer on p indices.
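The antisymmetrizing action can be illustrated numerically. A minimal sketch, assuming NumPy, which builds the two-upper, two-lower index case in four dimensions and checks that contracting it with an arbitrary two-index array gives the antisymmetric part; the 1/2! factor is folded into the array here, so the normalisation is an assumption of this sketch rather than the article's convention:
import numpy as np

n = 4  # dimension of the space

# Generalized Kronecker delta with two upper and two lower indices,
# scaled here so that contraction yields the antisymmetrized part directly.
delta2 = np.zeros((n, n, n, n))
for a in range(n):
    for b in range(n):
        delta2[a, b, a, b] += 0.5   # identity permutation, sign +1, divided by 2!
        delta2[a, b, b, a] -= 0.5   # swapped permutation, sign -1, divided by 2!

T = np.random.rand(n, n)
antisym = np.einsum('abcd,ab->cd', delta2, T)   # contract the upper indices with T

assert np.allclose(antisym, (T - T.T) / 2)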
Torsion tensor
An affine connection has a torsion tensor $T^{\alpha}{}_{\beta\gamma} = \Gamma^{\alpha}{}_{\beta\gamma} - \Gamma^{\alpha}{}_{\gamma\beta} - \gamma^{\alpha}{}_{\beta\gamma}$,
where $\gamma^{\alpha}{}_{\beta\gamma}$ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations $\Gamma^{\alpha}{}_{\beta\gamma} = \Gamma^{\alpha}{}_{\gamma\beta}$.
Riemann curvature tensor
If this tensor is defined from the connection coefficients and their first partial derivatives in the standard way,
then it is the commutator of the covariant derivative with itself acting on a vector field,
since the connection is torsionless, which means that the torsion tensor vanishes.
This can be generalized to give the commutator of two covariant derivatives acting on an arbitrary tensor, with one curvature term appearing for each index of the tensor;
the resulting relations are often referred to as the Ricci identities.
Metric tensor
The metric tensor is used for lowering indices and gives the length of any space-like curve, obtained by integrating along the curve the square root of the metric contracted on both indices with the derivatives of the coordinates with respect to the curve parameter,
where the parameter is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve through the analogous integral,
where the parameter is any smooth strictly monotone parameterization of the trajectory. | Mathematics | Multivariable and vector calculus | null
35458904 | https://en.wikipedia.org/wiki/Data%20science | Data science | Data science is an interdisciplinary academic field that uses statistics, scientific computing, scientific methods, processing, scientific visualization, algorithms and systems to extract or extrapolate knowledge and insights from potentially noisy, structured, or unstructured data.
Data science also integrates domain knowledge from the underlying application domain (e.g., natural sciences, information technology, and medicine). Data science is multifaceted and can be described as a science, a research paradigm, a research method, a discipline, a workflow, and a profession.
Data science is "a concept to unify statistics, data analysis, informatics, and their related methods" to "understand and analyze actual phenomena" with data. It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, information science, and domain knowledge. However, data science is different from computer science and information science. Turing Award winner Jim Gray imagined data science as a "fourth paradigm" of science (empirical, theoretical, computational, and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge.
A data scientist is a professional who creates programming code and combines it with statistical knowledge to create insights from data.
Foundations
Data science is an interdisciplinary field focused on extracting knowledge from typically large data sets and applying the knowledge and insights from that data to solve problems in a wide range of application domains. The field encompasses preparing data for analysis, formulating data science problems, analyzing data, developing data-driven solutions, and presenting findings to inform high-level decisions in a broad range of application domains. As such, it incorporates skills from computer science, statistics, information science, mathematics, data visualization, information visualization, data sonification, data integration, graphic design, complex systems, communication and business. Statistician Nathan Yau, drawing on Ben Fry, also links data science to human–computer interaction: users should be able to intuitively control and explore data. In 2015, the American Statistical Association identified database management, statistics and machine learning, and distributed and parallel systems as the three emerging foundational professional communities.
Relationship to statistics
Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g., from images, text, sensors, transactions, customer information, etc.) and emphasizes prediction and action. Andrew Gelman of Columbia University has described statistics as a non-essential part of data science.
Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data-science program. He describes data science as an applied field growing out of traditional statistics.
Etymology
Early usage
In 1962, John Tukey described a field he called "data analysis", which resembles modern data science. In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing, C. F. Jeff Wu used the term "data science" for the first time as an alternative name for statistics. Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing.
The term "data science" has been traced back to 1974, when Peter Naur proposed it as an alternative name to computer science. In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic. However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, in 1997 C. F. Jeff Wu again suggested that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting or limited to describing data. In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis.
During the 1990s, popular terms for the process of finding patterns in datasets (which were increasingly large) included "knowledge discovery" and "data mining".
Modern usage
In 2012, technologists Thomas H. Davenport and DJ Patil declared "Data Scientist: The Sexiest Job of the 21st Century", a catchphrase that was picked up even by major-city newspapers like the New York Times and the Boston Globe. A decade later, they reaffirmed it, stating that "the job is more in demand than ever with employers".
The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. In a 2001 paper, he advocated an expansion of statistics beyond theory into technical areas; because this would significantly change the field, it warranted a new name. "Data science" became more widely used in the next few years: in 2002, the Committee on Data for Science and Technology launched the Data Science Journal. In 2003, Columbia University launched The Journal of Data Science. In 2014, the American Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science.
The professional title of "data scientist" has been attributed to DJ Patil and Jeff Hammerbacher in 2008. Though it was used by the National Science Board in their 2005 report "Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century", it referred broadly to any key role in managing a digital data collection.
There is still no consensus on the definition of data science, and it is considered by some to be a buzzword. Big data is a related marketing term. Data scientists are responsible for breaking down big data into usable information and creating software and algorithms that help companies and organizations determine optimal operations.
Data science and data analysis
Data science and data analysis are both important disciplines in the field of data management and analysis, but they differ in several key ways. While both fields involve working with data, data science is more of an interdisciplinary field that involves the application of statistical, computational, and machine learning methods to extract insights from data and make predictions, while data analysis is more focused on the examination and interpretation of data to identify patterns and trends.
Data analysis typically involves working with smaller, structured datasets to answer specific questions or solve specific problems. This can involve tasks such as data cleaning, data visualization, and exploratory data analysis to gain insights into the data and develop hypotheses about relationships between variables. Data analysts typically use statistical methods to test these hypotheses and draw conclusions from the data. For example, a data analyst might analyze sales data to identify trends in customer behavior and make recommendations for marketing strategies.
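As a concrete illustration of this workflow, the sketch below assumes pandas is available and uses an entirely made-up sales table (the column names and values are invented for the example); it cleans the data and then summarises a simple trend:
import pandas as pd

# Made-up sales records standing in for a real dataset.
sales = pd.DataFrame({
    "order_date": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-02-03", "2023-02-28"]),
    "segment": ["retail", "retail", "wholesale", "retail"],
    "amount": [120.0, None, 340.0, 95.0],
})

# Basic cleaning: drop rows with missing amounts and remove duplicates.
sales = sales.dropna(subset=["amount"]).drop_duplicates()

# Exploratory summary: monthly revenue per customer segment.
monthly = (
    sales.assign(month=sales["order_date"].dt.to_period("M"))
         .groupby(["month", "segment"])["amount"]
         .sum()
         .reset_index()
)
print(monthly)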
Data science, on the other hand, is a more complex and iterative process that involves working with larger, more complex datasets that often require advanced computational and statistical methods to analyze. Data scientists often work with unstructured data such as text or images and use machine learning algorithms to build predictive models and make data-driven decisions. In addition to statistical analysis, data science often involves tasks such as data preprocessing, feature engineering, and model selection. For instance, a data scientist might develop a recommendation system for an e-commerce platform by analyzing user behavior patterns and using machine learning algorithms to predict user preferences.
While data analysis focuses on extracting insights from existing data, data science goes beyond that by incorporating the development and implementation of predictive models to make informed decisions. Data scientists are often responsible for collecting and cleaning data, selecting appropriate analytical techniques, and deploying models in real-world scenarios. They work at the intersection of mathematics, computer science and domain expertise to solve complex problems and uncover hidden patterns in large datasets.
Despite these differences, data science and data analysis are closely related fields and often require similar skill sets. Both fields require a solid foundation in statistics, programming, and data visualization, as well as the ability to communicate findings effectively to both technical and non-technical audiences. Both fields benefit from critical thinking and domain knowledge, as understanding the context and nuances of the data is essential for accurate analysis and modeling.
In summary, data analysis and data science are distinct yet interconnected disciplines within the broader field of data management and analysis. Data analysis focuses on extracting insights and drawing conclusions from structured data, while data science involves a more comprehensive approach that combines statistical analysis, computational methods, and machine learning to extract insights, build predictive models, and drive data-driven decision-making. Both fields use data to understand patterns, make informed decisions, and solve complex problems across various domains.
Data science as an academic discipline
As illustrated in the previous sections, there are considerable differences between data science, data analysis and statistics. Consequently, just as statistics grew into an independent field out of applied mathematics, data science has emerged as an independent field and has gained traction in recent years. Demand for professionals skilled in computerized data analysis has grown rapidly with the increasing amounts of data emanating from a variety of independent sources. While some of these highly sought skills can be provided by statisticians, a relative lack of algorithm-writing skills makes them less preferred than trained data scientists, who offer expertise in areas such as NoSQL, Apache Hadoop, cloud computing platforms and complex networks. This shift has led various institutions to create academic programmes to prepare skilled labor for the market. Some of the institutions offering degree programmes in data science include Stanford University, Harvard University, University of Oxford, ETH Zurich and Meru University, among many others.
Cloud computing for data science
Cloud computing can offer access to large amounts of computational power and storage. In big data, where volumes of information are continually generated and processed, these platforms can be used to handle complex and resource-intensive analytical tasks.
Some distributed computing frameworks are designed to handle big data workloads. These frameworks can enable data scientists to process and analyze large datasets in parallel, which can reduce processing times.
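The underlying idea of splitting work across workers does not depend on any particular framework; a minimal sketch using only Python's standard library, with an artificial dataset and a toy per-chunk statistic chosen purely for illustration:
from concurrent.futures import ProcessPoolExecutor

def summarise(chunk):
    # Toy per-chunk computation standing in for a heavier analytical task.
    return sum(chunk) / len(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # Process the chunks in parallel across worker processes.
    with ProcessPoolExecutor() as pool:
        partial_means = list(pool.map(summarise, chunks))

    # Combine the per-chunk results (chunks are equal-sized, so the simple mean is exact).
    overall_mean = sum(partial_means) / len(partial_means)
    print(overall_mean)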
Ethical considerations in data science
Data science involves collecting, processing, and analyzing data that often include personal and sensitive information. Ethical concerns include potential privacy violations, perpetuation of bias, and negative societal impacts.
Machine learning models can amplify existing biases present in training data, leading to discriminatory or unfair outcomes.
| Physical sciences | Science basics | Basics and measurement |
52134207 | https://en.wikipedia.org/wiki/Stone%20slab | Stone slab | A stone slab is a big stone, flat and relatively thin, often of rectangular or almost rectangular form. They are generally used for paving floors, for covering walls or as headstones.
In dolmens
Most dolmen constructions were built using stone slabs of big dimensions. Their architecture often includes a corridor of access that can be constructed using stone slabs or dry stones. The burial chamber, with variable shapes (e.g. rectangular, polygonal, oval, circular) can also be preceded by an anteroom. In some dolmens, the entrance has a door cut into one or more vertical stone slabs.
In construction
The main applications of stone slabs as a construction material are paving and roofing. They can also be employed for other uses, among them:
Balconies formed from a single slab
Dry stone constructions such as walls, caves and rooms.
The bases of some fireplaces, built with stone slabs (one large slab or several smaller ones together).
In religious altars, the altar stone can be a stone slab, either worked to a greater or lesser degree or left in its natural state.
Rustic tables
In gastronomy
One method of cooking is cooking "on the slab". Similar to cooking "on the iron" or grilling, food (e.g. meat, fish, vegetables) is placed on a slab heated over a fire, together with oil, butter or lard and other garnishes.
This method was traditionally popular in areas of the Pyrenees and was often practised by farmers and shepherds. It has since been incorporated into cooking at every level.
Grave slabs
From prehistoric times there have been examples of graves covered with a stone slab, in its natural state or carved. This use of slabs as tombstones has extended the concept of the natural slab to the tombstone variant: flat, thin and polished. An instance is the slab in the tomb of King Pere el Gran of Aragon, which weighs 900 kg.
Such tombstones usually have inscriptions. These traditionally include the name of the deceased and the date of birth and/or death. The inscriptions are generally on the front face, but in some cases they also appear on the reverse and around the edges. Some families commission or make an inscription on the underside. Some also have epitaphs: words of praise (eulogies); citations of religious texts, such as "Requiescat in pace"; or sentiments and quotations.
A pyramidal or "hipped" stone slab, sometimes surmounting a base or a fuller sarcophagus, is a design seen across all continents, as most organic debris falls off it and it resists overgrowth by moss, grass and similar low-growing plants. Examples are the graves of Sir John Whittaker Ellis and of the 1st Baron Cozens-Hardy.
Washboards
Washing clothes is a basic need in societies throughout the world. In earlier times, before running water, washing machines and detergents, it was necessary to wash clothes at the river bank or in a communal wash house.
Clothes were washed manually, by rubbing them with soap and sometimes striking them against a hard surface. The aim was to work the mixture of water and soap between the fibres of the fabric to pull out the dirt. The slabs used to wash clothes were slabs of natural stone chosen to present a fine and relatively flat surface. Small rounded irregularities could aid friction in the washing process.
In some cases "artificial slabs" were purpose-made, in which the friction surface was wood, although the apparatus was still called a "washing slab".
There were also "artificial slabs" made with an undulated steel sheet. (This type of washboard has been used as a percussion instrument in jazz and blues bands.)
The stone washing of denim trousers and similar clothes is a process that uses the friction of parts of the garment against a coarse stone (or similar material). The aim is to change the appearance of the clothes, imitating natural wear.
As hunting traps
Hunting with slabs is a method of hunting by means of a slab trap. The fundamental part of the device is a slab, and preparing the trap was a delicate task.
Preparation of the trap: a slab of suitable dimensions is held in a raised position, forming an appropriate angle with the horizontal. The slab, in this unstable position, is held in place by a few twigs or branches arranged so that the trap is "ready to be triggered" (at the trigger point). Once the slab is set, a suitable bait is placed to attract the animal to be captured.
When the animal (e.g. a bird or rabbit) tries to eat the bait, the slab falls on top of it and the animal is trapped (or crushed).
The term "slab" in toponyms
The term slab and its derivatives appear in many toponyms.
| Technology | Building materials | null |
25154546 | https://en.wikipedia.org/wiki/Mechanical%20filter | Mechanical filter | A mechanical filter is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations.
The components of a mechanical filter are all directly analogous to the various elements found in electrical circuits. The mechanical elements obey mathematical functions which are identical to their corresponding electrical elements. This makes it possible to apply electrical network analysis and filter design methods to mechanical filters. Electrical theory has developed a large library of mathematical forms that produce useful filter frequency responses and the mechanical filter designer is able to make direct use of these. It is only necessary to set the mechanical components to appropriate values to produce a filter with an identical response to the electrical counterpart.
Steel alloys and iron–nickel alloys are common materials for mechanical filter components; nickel is sometimes used for the input and output couplings. Resonators in the filter made from these materials need to be machined to precisely adjust their resonance frequency before final assembly.
While the meaning of mechanical filter in this article is one that is used in an electromechanical role, it is possible to use a mechanical design to filter mechanical vibrations or sound waves (which are also essentially mechanical) directly. For example, filtering of audio frequency response in the design of loudspeaker cabinets can be achieved with mechanical components. In the electrical application, in addition to mechanical components which correspond to their electrical counterparts, transducers are needed to convert between the mechanical and electrical domains. A representative selection of the wide variety of component forms and topologies for mechanical filters are presented in this article.
The theory of mechanical filters was first applied to improving the mechanical parts of phonographs in the 1920s. By the 1950s mechanical filters were being manufactured as self-contained components for applications in radio transmitters and high-end receivers. The high "quality factor", Q, that mechanical resonators can attain, far higher than that of an all-electrical LC circuit, made possible the construction of mechanical filters with excellent selectivity. Good selectivity, being important in radio receivers, made such filters highly attractive. Contemporary researchers are working on microelectromechanical filters, the mechanical devices corresponding to electronic integrated circuits.
Elements
The elements of a passive linear electrical network consist of inductors, capacitors and resistors which have the properties of inductance, elastance (inverse capacitance) and resistance, respectively. The mechanical counterparts of these properties are, respectively, mass, stiffness and damping. In most electronic filter designs, only inductor and capacitor elements are used in the body of the filter (although the filter may be terminated with resistors at the input and output). Resistances are not present in a theoretical filter composed of ideal components and only arise in practical designs as unwanted parasitic elements. Likewise, a mechanical filter would ideally consist only of components with the properties of mass and stiffness, but in reality some damping is present as well.
The mechanical counterparts of voltage and electric current in this type of analysis are, respectively, force (F) and velocity (v) and represent the signal waveforms. From this, a mechanical impedance can be defined in terms of the imaginary angular frequency, jω, which entirely follows the electrical analogy.
{| class="wikitable" width=80%
! Mechanical element !! Formula !! Mechanical impedance !! Electrical counterpart
|-
| Stiffness, S || F = S x || Z = S/(jω) || Elastance, 1/C, the inverse of capacitance
|-
| Mass, M || F = M dv/dt || Z = jωM || Inductance, L
|-
| Damping, D || F = D v || Z = D || Resistance, R
|}
The scheme presented in the table is known as the impedance analogy. Circuit diagrams produced using this analogy match the electrical impedance of the mechanical system seen by the electrical circuit, making it intuitive from an electrical engineering standpoint. There is also the mobility analogy,
in which force corresponds to current and velocity corresponds to voltage. This has equally valid results but requires using the reciprocals of the electrical counterparts listed above. Hence mass corresponds to capacitance, stiffness to the reciprocal of inductance, and damping to electrical conductance G (the reciprocal of resistance, if there is no reactance). Equivalent circuits produced by this scheme are similar, but are the dual impedance forms whereby series elements become parallel, capacitors become inductors, and so on. Circuit diagrams using the mobility analogy more closely match the mechanical arrangement of the circuit, making it more intuitive from a mechanical engineering standpoint. In addition to their application to electromechanical systems, these analogies are widely used to aid analysis in acoustics.
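The analogy can be made concrete with a small numerical sketch. Assuming NumPy, and using invented values for mass, stiffness and damping, the mechanical impedance of a mass-spring-damper is computed from the formulas in the table above and compared with the analogous series RLC electrical impedance (L = M, C = 1/S, R = D):
import numpy as np

M, S, D = 0.01, 4.0e4, 0.5   # mass (kg), stiffness (N/m), damping (N·s/m); illustrative values
omega = 2 * np.pi * np.linspace(10.0, 2000.0, 500)   # angular frequencies (rad/s)

# Mechanical impedance Z = F/v of the mass, spring and damper acting together.
Z_mech = 1j * omega * M + S / (1j * omega) + D

# Electrical analogue under the impedance analogy: series RLC with L = M, C = 1/S, R = D.
L, C, R = M, 1.0 / S, D
Z_elec = 1j * omega * L + 1.0 / (1j * omega * C) + R

assert np.allclose(Z_mech, Z_elec)   # identical by construction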
Any mechanical component will unavoidably possess both mass and stiffness. This translates in electrical terms to an LC circuit, that is, a circuit consisting of an inductor and a capacitor, hence mechanical components are resonators and are often used as such. It is still possible to represent inductors and capacitors as individual lumped elements in a mechanical implementation by minimising (but never quite eliminating) the unwanted property. Capacitors may be made of thin, long rods, that is, the mass is minimised and the compliance is maximised. Inductors, on the other hand, may be made of short, wide pieces which maximise the mass in comparison to the compliance of the piece.
Mechanical parts act as a transmission line for mechanical vibrations. If the wavelength is short in comparison to the part then a lumped-element model as described above is no longer adequate and a distributed-element model must be used instead. The mechanical distributed elements are entirely analogous to electrical distributed elements and the mechanical filter designer can use the methods of electrical distributed-element filter design.
History
Harmonic telegraph
Mechanical filter design was developed by applying the discoveries made in electrical filter theory to mechanics. However, a very early example (1870s) of acoustic filtering was the "harmonic telegraph", which arose precisely because electrical resonance was poorly understood but mechanical resonance (in particular, acoustic resonance) was very familiar to engineers. This situation was not to last for long; electrical resonance had been known to science for some time before this, and it was not long before engineers started to produce all-electric designs for filters. In its time, though, the harmonic telegraph was of some importance. The idea was to combine several telegraph signals on one telegraph line by what would now be called frequency division multiplexing thus saving enormously on line installation costs. The key of each operator activated a vibrating electromechanical reed which converted this vibration into an electrical signal. Filtering at the receiving operator was achieved by a similar reed tuned to precisely the same frequency, which would only vibrate and produce a sound from transmissions by the operator with the identical tuning.
Versions of the harmonic telegraph were developed by Elisha Gray, Alexander Graham Bell, Ernest Mercadier and others. Its ability to act as a sound transducer to and from the electrical domain was to inspire the invention of the telephone.
Mechanical equivalent circuits
Once the basics of electrical network analysis began to be established, it was not long before the ideas of complex impedance and filter design theories were carried over into mechanics by analogy. Kennelly, who was also responsible for introducing complex impedance, and Webster were the first to extend the concept of impedance into mechanical systems in 1920. Mechanical admittance and the associated mobility analogy came much later and are due to Firestone in 1932.
It was not enough to just develop a mechanical analogy. This could be applied to problems that were entirely in the mechanical domain, but for mechanical filters with an electrical application it is necessary to include the transducer in the analogy as well. Poincaré (1907) was the first to describe a transducer as a pair of linear algebraic equations relating electrical variables (voltage and current) to mechanical variables (force and velocity). These equations can be expressed as a matrix relationship in much the same way as the z-parameters of a two-port network in electrical theory, to which this is entirely analogous:
where V and I represent the voltage and current respectively on the electrical side of the transducer.
Wegel, in 1921, was the first to express these equations in terms of mechanical impedance as well as electrical impedance. The element z22 is the open circuit mechanical impedance, that is, the impedance presented by the mechanical side of the transducer when no current is entering the electrical side. The element z11, conversely, is the clamped electrical impedance, that is, the impedance presented to the electrical side when the mechanical side is clamped and prevented from moving (velocity is zero). The remaining two elements, z21 and z12, describe the transducer forward and reverse transfer functions respectively. Once these ideas were in place, engineers were able to extend electrical theory into the mechanical domain and analyse an electromechanical system as a unified whole.
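The two-port relationship can be sketched numerically. The element values below are invented purely for illustration (a reciprocal transducer is assumed, so the two transfer impedances are taken equal); the point is only the matrix form relating voltage and force to current and velocity:
import numpy as np

# Hypothetical z-parameters of an electromechanical transducer (values are illustrative only).
z11 = 50.0 + 0.0j    # clamped electrical impedance (ohms)
z22 = 2.0 + 0.5j     # open-circuit mechanical impedance (N·s/m)
z12 = 3.0j           # reverse transfer impedance
z21 = 3.0j           # forward transfer impedance

Z = np.array([[z11, z12],
              [z21, z22]])

current, velocity = 0.1, 0.02                       # chosen operating point (A, m/s)
voltage, force = Z @ np.array([current, velocity])  # electrical and mechanical responses
print(voltage, force)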
Sound reproduction
An early application of these new theoretical tools was in phonographic sound reproduction. A recurring problem with early phonograph designs was that mechanical resonances in the pickup and sound transmission mechanism caused excessively large peaks and troughs in the frequency response, resulting in poor sound quality. In 1923, Harrison of the Western Electric Company filed a patent for a phonograph in which the mechanical design was entirely represented as an electrical circuit. The horn of the phonograph is represented as a transmission line, and is a resistive load for the rest of the circuit, while all the mechanical and acoustic parts—from the pickup needle through to the horn—are translated into lumped components according to the impedance analogy. The circuit arrived at is a ladder topology of series resonant circuits coupled by shunt capacitors. This can be viewed as a bandpass filter circuit. Harrison designed the component values of this filter to have a specific passband corresponding to the desired audio passband (in this case 100 Hz to 6 kHz) and a flat response. Translating these electrical element values back into mechanical quantities provided specifications for the mechanical components in terms of mass and stiffness, which in turn could be translated into physical dimensions for their manufacture. The resulting phonograph has a flat frequency response in its passband and is free of the resonances previously experienced. Shortly after this, Harrison filed another patent using the same methodology on telephone transmit and receive transducers.
Harrison used Campbell's image filter theory, which was the most advanced filter theory available at the time. In this theory, filter design is viewed essentially as an impedance matching problem. More advanced filter theory was brought to bear on this problem by Norton in 1929 at Bell Labs. Norton followed the same general approach though he later described to Darlington the filter he designed as being "maximally flat". Norton's mechanical design predates the paper by Butterworth who is usually credited as the first to describe the electronic maximally flat filter. The equations Norton gives for his filter correspond to a singly terminated Butterworth filter, that is, one driven by an ideal voltage source with no impedance, whereas the form more usually given in texts is for the doubly terminated filter with resistors at both ends, making it hard to recognise the design for what it is. Another unusual feature of Norton's filter design arises from the series capacitor, which represents the stiffness of the diaphragm. This is the only series capacitor in Norton's representation, and without it, the filter could be analysed as a low-pass prototype. Norton moves the capacitor out of the body of the filter to the input at the expense of introducing a transformer into the equivalent circuit (Norton's figure 4). Norton has used here the "turning round the L" impedance transform to achieve this.
The definitive description of the subject from this period is Maxfield and Harrison's 1926 paper. There, they describe not only how mechanical bandpass filters can be applied to sound reproduction systems, but also apply the same principles to recording systems and describe a much improved disc cutting head.
Volume production
Modern mechanical filters for intermediate frequency (IF) applications were first investigated by Robert Adler of Zenith Electronics who built a 455 kHz filter in 1946. The idea was taken up by Collins Radio Company who started the first volume production of mechanical filters from the 1950s onwards. These were originally designed for telephone frequency-division multiplex applications where there is commercial advantage in using high quality filters. Precision and steepness of the transition band leads to a reduced width of guard band, which in turn leads to the ability to squeeze more telephone channels into the same cable. This same feature is useful in radio transmitters for much the same reason. Mechanical filters quickly also found popularity in VHF/UHF radio IF stages of the high end radio sets (military, marine, amateur radio and the like) manufactured by Collins. They were favoured in the radio application because they could achieve much higher Q-factors than the equivalent LC filter. High Q allows filters to be designed which have high selectivity, important for distinguishing adjacent radio channels in receivers. They also had an advantage in stability over both LC filters and monolithic crystal filters. The most popular design for radio applications was torsional resonators because radio IF typically lies in the 100 to 500 kHz band.
Transducers
Both magnetostrictive and piezoelectric transducers are used in mechanical filters. Piezoelectric transducers are favoured in recent designs since the piezoelectric material can also be used as one of the resonators of the filter, thus reducing the number of components and thereby saving space. They also avoid the susceptibility to extraneous magnetic fields of the magnetostrictive type of transducer.
Magnetostrictive
A magnetostrictive material is one which changes shape when a magnetic field is applied. In reverse, it produces a magnetic field when distorted. The magnetostrictive transducer requires a coil of conducting wire around the magnetostrictive material. The coil either induces a magnetic field in the transducer and sets it in motion or else picks up an induced current from the motion of the transducer at the filter output. It is also usually necessary to have a small magnet to bias the magnetostrictive material into its operating range. It is possible to dispense with the magnets if the biasing is taken care of on the electronic side by providing a d.c. current superimposed on the signal, but this approach would detract from the generality of the filter design.
The usual magnetostrictive materials used for the transducer are either ferrite or compressed powdered iron. Mechanical filter designs often have the resonators coupled with steel or nickel-iron wires, but on some designs, especially older ones, nickel wire may be used for the input and output rods. This is because it is possible to wind the transducer coil directly on to a nickel coupling wire since nickel is slightly magnetostrictive. However, it is not strongly so and coupling to the electrical circuit is weak. This scheme also has the disadvantage of eddy currents, a problem that is avoided if ferrites are used instead of nickel.
The coil of the transducer adds some inductance on the electrical side of the filter. It is common practice to add a capacitor in parallel with the coil so that an additional resonator is formed which can be incorporated into the filter design. While this will not improve performance to the extent that an additional mechanical resonator would, there is some benefit and the coil has to be there in any case.
Piezoelectric
A piezoelectric material is one which changes shape when an electric field is applied. In reverse, it produces an electric field when it is distorted. A piezoelectric transducer, in essence, is made simply by plating electrodes on to the piezoelectric material. Early piezoelectric materials used in transducers such as barium titanate had poor temperature stability. This precluded the transducer from functioning as one of the resonators; it had to be a separate component. This problem was solved with the introduction of lead zirconate titanate (abbreviated PZT) which is stable enough to be used as a resonator. Another common piezoelectric material is quartz, which has also been used in mechanical filters. However, ceramic materials such as PZT are preferred for their greater electromechanical coupling coefficient.
One type of piezoelectric transducer is the Langevin type, named after a transducer used by Paul Langevin in early sonar research. This is good for longitudinal modes of vibration. It can also be used on resonators with other modes of vibration if the motion can be mechanically converted into a longitudinal motion. The transducer consists of a layer of piezoelectric material sandwiched transversally into a coupling rod or resonator.
Another kind of piezoelectric transducer has the piezoelectric material sandwiched in longitudinally, usually into the resonator itself. This kind is good for torsional vibration modes and is called a torsional transducer.
When miniaturized using thin-film manufacturing methods, piezoelectric resonators are called thin-film bulk acoustic resonators (FBARs).
Resonators
It is possible to achieve an extremely high Q with mechanical resonators. Mechanical resonators typically have a Q of 10,000 or so, and 25,000 can be achieved in torsional resonators using a particular nickel-iron alloy. This is an unreasonably high figure to achieve with LC circuits, whose Q is limited by the resistance of the inductor coils.
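The practical meaning of such figures follows from the relation bandwidth ≈ centre frequency / Q for a single resonator. A short sketch, using the 455 kHz IF mentioned earlier and Q values of the order quoted above (the value of 500 for a good LC circuit is an illustrative assumption):
centre_frequency = 455e3            # Hz, a typical mechanical-filter IF
for q in (500, 10_000, 25_000):     # roughly: a good LC circuit vs. mechanical resonators
    bandwidth = centre_frequency / q
    print(f"Q = {q:>6}: single-resonator -3 dB bandwidth of about {bandwidth:.1f} Hz")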
Early designs in the 1940s and 1950s started by using steel as a resonator material. This has given way to nickel-iron alloys, primarily to maximise the Q since this is often the primary appeal of mechanical filters rather than price. Some of the metals that have been used for mechanical filter resonators and their Q values are shown in the table.
Piezoelectric crystals are also sometimes used in mechanical filter designs. This is especially true for resonators that are also acting as transducers for inputs and outputs.
One advantage that mechanical filters have over LC electrical filters is that they can be made very stable. The resonance frequency can be made so stable that it varies only 1.5 parts per billion (ppb) from the specified value over the operating temperature range, and its average drift with time can be as low as 4 ppb per day. This stability with temperature is another reason for using nickel-iron as the resonator material. Variations with temperature in the resonance frequency (and other features of the frequency function) are directly related to variations in the Young's modulus, which is a measure of stiffness of the material. Materials are therefore sought that have a small temperature coefficient of Young's modulus. In general, Young's modulus has a negative temperature coefficient (materials become less stiff with increasing temperature) but additions of small amounts of certain other elements in the alloy
can produce a material with a temperature coefficient that changes sign from negative through zero to positive with temperature. Such a material will have a zero coefficient of temperature with resonance frequency around a particular temperature. It is possible to adjust the point of zero temperature coefficient to a desired position by heat treatment of the alloy.
Resonator modes
It is usually possible for a mechanical part to vibrate in a number of different modes, however the design will be based on a particular vibrational mode and the designer will take steps to try to restrict the resonance to this mode. As well as the straightforward longitudinal mode some others which are used include flexural mode, torsional mode, radial mode and drumhead mode.
Modes are numbered according to the number of half-wavelengths in the vibration. Some modes exhibit vibrations in more than one direction (such as drumhead mode which has two) and consequently the mode number consists of more than one number. When the vibration is in one of the higher modes, there will be multiple nodes on the resonator where there is no motion. For some types of resonator, this can provide a convenient place to make a mechanical attachment for structural support. Wires attached at nodes will have no effect on the vibration of the resonator or the overall filter response. In figure 5, some possible anchor points are shown as wires attached at the nodes.
Circuit designs
There are a great many combinations of resonators and transducers that can be used to construct a mechanical filter. A selection of some of these is shown in the diagrams. Figure 6 shows a filter using disc flexural resonators and magnetostrictive transducers. The transducer drives the centre of the first resonator, causing it to vibrate. The edges of the disc move in antiphase to the centre when the driving signal is at, or close to, resonance, and the signal is transmitted through the connecting rods to the next resonator. When the driving signal is not close to resonance, there is little movement at the edges, and the filter rejects (does not pass) the signal. Figure 7 shows a similar idea involving longitudinal resonators connected together in a chain by connecting rods. In this diagram, the filter is driven by piezoelectric transducers. It could equally well have used magnetostrictive transducers. Figure 8 shows a filter using torsional resonators. In this diagram, the input has a torsional piezoelectric transducer and the output has a magnetostrictive transducer. This would be quite unusual in a real design, as both input and output usually have the same type of transducer. The magnetostrictive transducer is only shown here to demonstrate how longitudinal vibrations may be converted to torsional vibrations and vice versa. Figure 9 shows a filter using drumhead mode resonators. The edges of the discs are fixed to the casing of the filter (not shown in the diagram) so the vibration of the disc is in the same modes as the membrane of a drum. Collins calls this type of filter a disc wire filter.
The various types of resonator are all particularly suited to different frequency bands. Overall, mechanical filters with lumped elements of all kinds can cover frequencies from about 5 to 700 kHz although mechanical filters down as low as a few kilohertz (kHz) are rare. The lower part of this range, below 100 kHz, is best covered with bar flexural resonators. The upper part is better done with torsional resonators. Drumhead disc resonators are in the middle, covering the range from around 100 to 300 kHz.
The frequency response behaviour of all mechanical filters can be expressed as an equivalent electrical circuit using the impedance analogy described above. An example of this is shown in figure 8b which is the equivalent circuit of the mechanical filter of figure 8a. Elements on the electrical side, such as the inductance of the magnetostrictive transducer, are omitted but would be taken into account in a complete design. The series resonant circuits on the circuit diagram represent the torsional resonators, and the shunt capacitors represent the coupling wires. The component values of the electrical equivalent circuit can be adjusted, more or less at will, by modifying the dimensions of the mechanical components. In this way, all the theoretical tools of electrical analysis and filter design can be brought to bear on the mechanical design. Any filter realisable in electrical theory can, in principle, also be realised as a mechanical filter. In particular, the popular Butterworth and Chebyshev filter responses, both finite-order approximations to an ideal filter response, can readily be realised. As with the electrical counterpart, the more elements that are used, the closer the approximation approaches the ideal, however, for practical reasons the number of resonators does not normally exceed eight.
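As an illustration of the kind of electrical prototype such a design might target, the sketch below assumes SciPy and designs a conventional Butterworth band-pass response around 455 kHz; the order, bandwidth and sampling rate are arbitrary choices for the example, and a real mechanical design would translate the resulting element values into masses and stiffnesses:
import numpy as np
from scipy import signal

fs = 4e6                        # sampling rate used only to express the digital prototype (Hz)
low, high = 452.5e3, 457.5e3    # arbitrary 5 kHz passband centred near 455 kHz

# Sixth-order Butterworth band-pass prototype in second-order sections.
sos = signal.butter(6, [low, high], btype="bandpass", fs=fs, output="sos")

# Frequency response of the prototype.
freqs, response = signal.sosfreqz(sos, worN=2048, fs=fs)
gain_db = 20 * np.log10(np.maximum(np.abs(response), 1e-12))
print(gain_db.max())            # close to 0 dB at the passband centre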
Semi-lumped designs
Frequencies of the order of megahertz (MHz) are above the usual range for mechanical filters. The components start to become very small, or alternatively the components are large compared to the signal wavelength. The lumped-element model described above starts to break down and the components must be considered as distributed elements. The frequency at which the transition from lumped to distributed modeling takes place is much lower for mechanical filters than it is for their electrical counterparts. This is because mechanical vibrations travel at the speed of sound for the material the component is composed of. For solid components, this is many times (x15 for nickel-iron) the speed of sound in air (about 343 m/s) but still considerably less than the speed of electromagnetic waves (approximately 3×10^8 m/s in vacuum). Consequently, mechanical wavelengths are much shorter than electrical wavelengths for the same frequency. Advantage can be taken of these effects by deliberately designing components to be distributed elements, and the components and methods used in electrical distributed-element filters can be brought to bear. The equivalents of stubs and impedance transformers are both achievable. Designs which use a mixture of lumped and distributed elements are referred to as semi-lumped.
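The wavelength comparison can be made explicit with a few lines of arithmetic. Assuming a speed of sound in air of roughly 343 m/s (so roughly fifteen times that, about 5,100 m/s, in nickel-iron) and the vacuum speed of electromagnetic waves, at a representative 455 kHz operating frequency:
frequency = 455e3              # Hz, a representative mechanical-filter operating frequency
v_air = 343.0                  # m/s, approximate speed of sound in air
v_nickel_iron = 15 * v_air     # m/s, using the x15 figure quoted above
c = 3.0e8                      # m/s, approximate speed of electromagnetic waves in vacuum

print("acoustic wavelength in nickel-iron:", v_nickel_iron / frequency, "m")   # about 0.011 m
print("electromagnetic wavelength:", c / frequency, "m")                       # about 660 m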
An example of such a design is shown in figure 10a. The resonators are disc flexural resonators similar to those shown in figure 6, except that these are energised from an edge, leading to vibration in the fundamental flexural mode with a node in the centre, whereas the figure 6 design is energised in the centre leading to vibration in the second flexural mode at resonance. The resonators are mechanically attached to the housing by pivots at right angles to the coupling wires. The pivots are to ensure free turning of the resonator and minimise losses. The resonators are treated as lumped elements; however, the coupling wires are made exactly one half-wavelength long and are equivalent to an open-circuit stub in the electrical equivalent circuit. For a narrow-band filter, a stub of this sort has the approximate equivalent circuit of a parallel shunt tuned circuit as shown in figure 10b. Consequently, the connecting wires are being used in this design to add additional resonators into the circuit and will have a better response than one with just the lumped resonators and short couplings. For even higher frequencies, microelectromechanical methods can be used as described below.
Bridging wires
Bridging wires are rods that couple together resonators that are not adjacent. They can be used to produce poles of attenuation in the stopband. This has the benefit of increasing the stopband rejection. When the pole is placed near the passband edge, it also has the benefit of increasing roll-off and narrowing the transition band. The typical effects of some of these on filter frequency response are shown in figure 11. Bridging across a single resonator (figure 11b) can produce a pole of attenuation in the high stopband. Bridging across two resonators (figure 11c) can produce a pole of attenuation in both the high and the low stopband. Using multiple bridges (figure 11d) will result in multiple poles of attenuation. In this way, the attenuation of the stopbands can be deepened over a broad frequency range.
The method of coupling between non-adjacent resonators is not limited to mechanical filters. It can be applied to other filter formats and the general term for this class is cross-coupled filter. For instance, channels can be cut between cavity resonators, mutual inductance can be used with discrete component filters, and feedback paths can be used with active analogue or digital filters. Nor was the method first discovered in the field of mechanical filters; the earliest description is in a 1948 patent for filters using microwave cavity resonators. However, mechanical filter designers were the first (1960s) to develop practical filters of this kind and the method became a particular feature of mechanical filters.
Microelectromechanical filters
A new technology emerging in mechanical filtering is microelectromechanical systems (MEMS). MEMS are very small micromachines with component sizes measured in micrometres (μm), but not as small as nanomachines. These filters can be designed to operate at much higher frequencies than can be achieved with traditional mechanical filters. These systems are mostly fabricated from silicon (Si), silicon nitride (Si3N4), or polymers. A common component used for radio frequency filtering (and MEMS applications generally), is the cantilever resonator. Cantilevers are simple mechanical components to manufacture by much the same methods used by the semiconductor industry; masking, photolithography and etching, with a final undercutting etch to separate the cantilever from the substrate. The technology has great promise since cantilevers can be produced in large numbers on a single substrate—much as large numbers of transistors are currently contained on a single silicon chip.
The resonator shown in figure 12 is around 120 μm in length. Experimental complete filters with an operating frequency of 30 GHz have been produced using cantilever varactors as the resonator elements. The size of this filter is around 4×3.5 mm. Cantilever resonators are typically applied at frequencies below 200 MHz, but other structures, such as micro-machined cavities, can be used in the microwave bands. Extremely high-Q resonators can be made with this technology; flexural mode resonators with a Q in excess of 80,000 at 8 MHz are reported.
Adjustment
The precision applications in which mechanical filters are used require that the resonators are accurately adjusted to the specified resonance frequency. This is known as trimming and usually involves a mechanical machining process. In most filter designs, this can be difficult to do once the resonators have been assembled into the complete filter so the resonators are trimmed before assembly. Trimming is done in at least two stages; coarse and fine, with each stage bringing the resonance frequency closer to the specified value. Most trimming methods involve removing material from the resonator which will increase the resonance frequency. The target frequency for a coarse trimming stage consequently needs to be set below the final frequency since the tolerances of the process could otherwise result in a frequency higher than the following fine trimming stage could adjust for.
The coarsest method of trimming is grinding of the main resonating surface of the resonator; this process has an accuracy of around . Better control can be achieved by grinding the edge of the resonator instead of the main surface. This has a less dramatic effect and consequently better accuracy. Processes that can be used for fine trimming, in order of increasing accuracy, are sandblasting, drilling, and laser ablation. Laser trimming is capable of achieving an accuracy of .
Trimming by hand, rather than machine, was used on some early production components but would now normally only be encountered during product development. Methods available include sanding and filing. It is also possible to add material to the resonator by hand, thus reducing the resonance frequency. One such method is to add solder, but this is not suitable for production use since the solder will tend to reduce the high of the resonator.
In the case of MEMS filters, it is not possible to trim the resonators outside of the filter because of the integrated nature of the device construction. However, trimming is still a requirement in many MEMS applications. Laser ablation can be used for this but material deposition methods are available as well as material removal. These methods include laser or ion-beam induced deposition.
| Technology | Signal processing | null |
25156868 | https://en.wikipedia.org/wiki/Stomatosuchidae | Stomatosuchidae | Stomatosuchidae is an extinct family of neosuchian crocodylomorphs. It is defined as the most inclusive clade containing Stomatosuchus inermis but not Notosuchus terrestris, Simosuchus clarki, Araripesuchus gomesii, Baurusuchus pachecoi, Peirosaurus torminni, or Crocodylus niloticus. Two genera are known to belong to Stomatosuchidae: Stomatosuchus, the type genus, and Laganosuchus. Fossils have been found from Egypt, Morocco, and Niger. Both lived during the Cenomanian stage of the Late Cretaceous. The skulls of stomatosuchids are said to be platyrostral because they have unusually flattened, elongate, duck-shaped craniums with U-shaped jaws. This platyrostral condition is similar to what is seen in the "nettosuchid" Mourasuchus, which is not closely related to stomatosuchids as it is a more derived alligatoroid that existed during the Miocene.
Unlike Mourasuchus, stomatosuchids have jaws that are less strongly bowed. Additionally, the glenoid is rounded rather than cupped at the posterior end of the jaw, and the retroarticular process is straight rather than dorsally curving like in Mourasuchus and other extant crocodylians.
The only existing specimens of stomatosuchids belong to the recently described genus Laganosuchus, which is known from two species, L. thaumastos and L. maghrebensis from the Echkar Formation in Niger and the Kem Kem Beds in Morocco, respectively. The genus Stomatosuchus is known only from a holotype skull collected from the Bahariya Formation in Egypt, which was destroyed in World War II with the bombing of the Munich Museum. Because Stomatosuchus is known only from brief accounts by Ernst Stromer and Franz Nopcsa (1926) and no additional material has ever been found, the genus remains enigmatic.
The genus Aegyptosuchus was once considered to be a member of Stomatosuchidae, but it is now placed within its own family, Aegyptosuchidae.
| Biology and health sciences | Prehistoric crocodiles | Animals |
32186638 | https://en.wikipedia.org/wiki/Flip-flop%20%28electronics%29 | Flip-flop (electronics) | In electronics, flip-flops and latches are circuits that have two stable states that can store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements to store a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data storage can be used for storage of state, and such a circuit is described as sequential logic in electronics. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
The term flip-flop has historically referred generically to both level-triggered (asynchronous, transparent, or opaque) and edge-triggered (synchronous, or clocked) circuits that store a single bit of data using gates. Modern authors reserve the term flip-flop exclusively for edge-triggered storage elements and latches for level-triggered ones. The terms "edge-triggered" and "level-triggered" may be used to avoid ambiguity.
When a level-triggered latch is enabled it becomes transparent, but an edge-triggered flip-flop's output only changes on a clock edge (either positive going or negative going).
Different types of flip-flops and latches are available as integrated circuits, usually with multiple elements per chip. For example, 74HC75 is a quadruple transparent latch in the 7400 series.
History
The first electronic latch was invented in 1918 by the British physicists William Eccles and F. W. Jordan. It was initially called the Eccles–Jordan trigger circuit and consisted of two active elements (vacuum tubes). The design was used in the 1943 British Colossus codebreaking computer and such circuits and their transistorized versions were common in computers even after the introduction of integrated circuits, though latches and flip-flops made from logic gates are also common now. Early latches were known variously as trigger circuits or multivibrators.
According to P. L. Lindley, an engineer at the US Jet Propulsion Laboratory, the flip-flop types detailed below (SR, D, T, JK) were first discussed in a 1954 UCLA course on computer design by Montgomery Phister, and then appeared in his book Logical Design of Digital Computers. Lindley was at the time working at Hughes Aircraft under Eldred Nelson, who had coined the term JK for a flip-flop which changed states when both inputs were on (a logical "one"). The other names were coined by Phister. They differ slightly from some of the definitions given below. Lindley explains that he heard the story of the JK flip-flop from Eldred Nelson, who is responsible for coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at the time were all of the type that came to be known as J-K. In designing a logical system, Nelson assigned letters to flip-flop inputs as follows: #1: A & B, #2: C & D, #3: E & F, #4: G & H, #5: J & K. Nelson used the notations "j-input" and "k-input" in a patent application filed in 1953.
Implementation
Transparent or asynchronous latches can be built around a single pair of cross-coupled inverting elements: vacuum tubes, bipolar transistors, field-effect transistors, inverters, and inverting logic gates have all been used in practical circuits.
Clocked flip-flops are specially designed for synchronous systems; such devices ignore their inputs except at the transition of a dedicated clock signal (known as clocking, pulsing, or strobing). Clocking causes the flip-flop either to change or to retain its output signal based upon the values of the input signals at the transition. Some flip-flops change output on the rising edge of the clock, others on the falling edge.
Since the elementary amplifying stages are inverting, two stages can be connected in succession (as a cascade) to form the needed non-inverting amplifier. In this configuration, each amplifier may be considered as an active inverting feedback network for the other inverting amplifier. Thus the two stages are connected in a non-inverting loop although the circuit diagram is usually drawn as a symmetric cross-coupled pair (both the drawings are initially introduced in the Eccles–Jordan patent).
Types
Flip-flops and latches can be divided into common types: SR ("set-reset"), D ("data"), T ("toggle"), and JK (see History section above). The behavior of a particular type can be described by the characteristic equation that derives the "next" output, Qnext, in terms of the input signal(s) and/or the current output, Q.
Asynchronous set-reset latches
When using static gates as building blocks, the most fundamental latch is the asynchronous Set-Reset (SR) latch.
Its two inputs S and R can set the internal state to 1 using the combination S=1 and R=0, and can reset the internal state to 0 using the combination S=0 and R=1.
The SR latch can be constructed from a pair of cross-coupled NOR or NAND logic gates. The stored bit is present on the output marked Q.
It is convenient to think of NAND, NOR, AND and OR as controlled operations, where one input is chosen as the control and the other as the data to be processed depending on the state of the control. All of these gates then have one control value for which they ignore the data input and output a constant value, while the other control value lets the data input pass (possibly complemented):
Essentially, they can all be used as switches that either set a specific value or let an input value pass.
SR NOR latch
The SR NOR latch consists of two parallel NOR gates where the output of each NOR is also fanned out into one input of the other NOR, as shown in the figure.
We call these output-to-input connections feedback inputs, or simply feedbacks.
The remaining inputs we will use as control inputs as explained above.
Notice that at this point, because everything is symmetric, it does not matter to which inputs the outputs are connected.
We now break the symmetry by choosing which of the remaining control inputs will be our set and which our reset; we call the NOR gate with the set control the "set NOR" and the NOR gate with the reset control the "reset NOR". In the figures the set NOR is the bottom one and the reset NOR is the top one.
The output of the reset NOR will be our stored bit Q, while the output of the set NOR stores its complement, Q̄.
To derive the behavior of the SR NOR latch, consider S and R as control inputs and remember that, from the equations above, a NOR with control 1 fixes its output to 0, while a NOR with control 0 acts as a NOT gate on its other input.
With this it is now possible to derive the behavior of the SR latch as simple conditions (instead of, for example, assigning values to each line and following how they propagate):
While R and S are both zero, both the reset NOR and the set NOR simply impose that their feedback input is the complement of their output; this is satisfied as long as the two outputs are the complement of each other. Thus the outputs Q and Q̄ are maintained in a constant state, whether Q=0 or Q=1.
If S=1 while R=0, then the set NOR will fix Q̄=0, while the reset NOR will adapt and set Q=1. Once S is set back to zero the values are maintained as explained above.
Similarly, if R=1 while S=0, then the reset NOR fixes Q=0 while the set NOR will adapt and set Q̄=1. Again the state is maintained if R is set back to 0.
If R=S=1, the NORs will fix both outputs to 0, which is not a valid state storing complementary values.
{| class="wikitable" style="text-align:center;"
|+ SR latch operation
|-
! colspan="4" | Characteristic table
! colspan="4" | Excitation table
|-
! S !! R !! Qnext !! Action
! Q !! Qnext !! S !! R
|-
| 0 || 0 || Q || Hold state
| 0 || 0 || 0 || X
|-
| 0 || 1 || 0 || Reset
| 0 || 1 || 1 || 0
|-
| 1 || 0 || 1 || Set
| 1 || 0 || 0 || 1
|-
| 1 || 1 || X || Not allowed
| 1 || 1 || X || 0
|}
Note: X means don't care, that is, either 0 or 1 is a valid value.
The R = S = 1 combination is called a restricted combination or a forbidden state because, as both NOR gates then output zeros, it breaks the logical equation Q = NOT Q̄. The combination is also inappropriate in circuits where both inputs may go low simultaneously (i.e. a transition from restricted to hold). The output could remain in a metastable state and may eventually lock at either 1 or 0 depending on the propagation time relations between the gates (a race condition).
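The behaviour just derived (hold, set, reset, and the forbidden combination) can be illustrated with a small gate-level simulation. The following Python sketch is purely illustrative; the function and variable names are invented here and do not come from any library.

```python
# Gate-level model of an SR NOR latch, iterated until the outputs settle.
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def sr_nor_latch(s: int, r: int, q: int, q_bar: int):
    """Apply inputs S and R to a cross-coupled NOR pair until it stabilises."""
    for _ in range(4):                      # a few passes are enough to settle
        q_new = nor(r, q_bar)               # "reset NOR" drives Q
        q_bar_new = nor(s, q)               # "set NOR" drives Q-bar
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, q_bar = 0, 1                             # start in the reset state
q, q_bar = sr_nor_latch(1, 0, q, q_bar)     # S=1, R=0 -> set:   Q=1, Q-bar=0
q, q_bar = sr_nor_latch(0, 0, q, q_bar)     # S=R=0   -> hold:   Q=1, Q-bar=0
q, q_bar = sr_nor_latch(0, 1, q, q_bar)     # S=0, R=1 -> reset: Q=0, Q-bar=1
print(q, q_bar)                             # -> 0 1
```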
To overcome the restricted combination, one can add gates to the inputs that would convert (S, R) = (1, 1) to one of the non-restricted combinations. That can be:
Q = 1 (1, 0) – referred to as an S (dominated)-latch
Q = 0 (0, 1) – referred to as an R (dominated)-latch; this is done in nearly every programmable logic controller.
Hold state (0, 0) – referred to as an E-latch
Alternatively, the restricted combination can be made to toggle the output. The result is the JK latch.
The characteristic equation for the SR latch is:
Qnext = S + R̄Q
where A + B means (A or B), AB means (A and B), and Ā means (not A).
Another, equivalent expression is:
Qnext = (S + Q)R̄
with the constraint SR = 0, i.e. S and R are never asserted together.
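As a quick illustration (with invented names, not a standard routine), the characteristic equation can be enumerated for the allowed input combinations; the printed rows match the characteristic table above.

```python
# Enumerate Qnext = S + (not R)*Q for the allowed inputs (forbidden S=R=1 skipped).
for s in (0, 1):
    for r in (0, 1):
        if s and r:
            continue                      # forbidden combination, skipped
        for q in (0, 1):
            q_next = int(s or ((not r) and q))
            print(f"S={s} R={r} Q={q} -> Qnext={q_next}")
```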
NAND latch
The circuit shown below is a basic NAND latch. The inputs are generally designated S̄ and R̄ for Set and Reset respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action, the inputs are considered to be inverted in this circuit (or active low).
The circuit uses the same feedback as SR NOR, just replacing NOR gates with NAND gates, to "remember" and retain its logical state even after the controlling input signals have changed.
Again, recall that a 0-controlled NAND always outputs 1, while a 1-controlled NAND acts as a NOT gate on its other input.
When the S̄ and R̄ inputs are both high, feedback maintains the Q outputs in the previous state.
When either input is pulled low, the corresponding NAND gate forces its output to 1 while the other gate adapts and outputs the complement.
S̄ = R̄ = 0 produces the invalid state in which both outputs are 1.
SR AND-OR latch
From a teaching point of view, SR latches drawn as a pair of cross-coupled components (transistors, gates, tubes, etc.) are often hard for beginners to understand. A didactically easier explanation is to draw the latch as a single feedback loop instead of the cross-coupling. The following is an SR latch built with an AND gate with one inverted input and an OR gate. Note that the inverter is not needed for the latch functionality itself, but rather to make both inputs active-high.
{| class="wikitable"
|+ SR AND-OR latch operation
|-
! S !! R !! Action
|-
| 0 || 0 || No change; random initial state
|-
| 1 || 0 || Q = 1
|-
| X || 1 || Q = 0
|}
Note that the SR AND-OR latch has the benefit that S = 1, R = 1 is well defined. In the above version of the SR AND-OR latch it gives priority to the R signal over the S signal. If priority of S over R is needed, this can be achieved by connecting output Q to the output of the OR gate instead of the output of the AND gate.
The SR AND-OR latch is easier to understand, because both gates can be explained in isolation, again with the control view of AND and OR from above. When neither S or R is set, then both the OR gate and the AND gate are in "hold mode", i.e., they let the input through, their output is the input from the feedback loop. When input S = 1, then the OR gate outputs 1, regardless of the other input from the feedback loop ("set mode"). When input R = 1 then the AND gate outputs 0, regardless of the other input from the feedback loop ("reset mode"). And since the AND gate takes the output of the OR gate as input, R has priority over S. Latches drawn as cross-coupled gates may look less intuitive, as the behavior of one gate appears to be intertwined with the other gate. The standard NOR or NAND latches could also be re-drawn with the feedback loop, but in their case the feedback loop does not show the same signal value throughout the whole feedback loop. However, the SR AND-OR latch has the drawback that it would need an extra inverter, if an inverted Q output is needed.
Note that the SR AND-OR latch can be transformed into the SR NOR latch using logic transformations: inverting the output of the OR gate and also the 2nd input of the AND gate and connecting the inverted Q output between these two added inverters; with the AND gate with both inputs inverted being equivalent to a NOR gate according to De Morgan's laws.
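The single-feedback-loop structure described above is straightforward to model behaviourally. The sketch below is illustrative only, with invented function names; it shows the R-over-S priority of this version of the latch.

```python
# Behavioural sketch of the SR AND-OR latch: an OR gate (set) feeding an
# AND gate with an inverted R input (reset), closed in one feedback loop.
def and_or_latch(s: int, r: int, q: int) -> int:
    """One evaluation of the feedback loop; returns the new stored bit Q."""
    or_out = 1 if (s or q) else 0      # OR gate: set, or pass the fed-back Q
    q_new = or_out if not r else 0     # AND with inverted R: R=1 forces Q=0
    return q_new

q = 0
q = and_or_latch(1, 0, q)   # set:   Q becomes 1
q = and_or_latch(0, 0, q)   # hold:  Q stays 1
q = and_or_latch(1, 1, q)   # S=R=1: R wins, Q becomes 0 (well defined)
print(q)                    # -> 0
```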
JK latch
The JK latch is much less frequently used than the JK flip-flop. Its behavior is given by the following state table:
{| class=wikitable
|+ JK latch truth table
|-
! J !! K !! Qnext !! Comment
|-
| 0 || 0 || Q || No change
|-
| 0 || 1 || 0 || Reset
|-
| 1 || 0 || 1 || Set
|-
| 1 || 1 || Q̄ || Toggle
|}
Hence, the JK latch is an SR latch that is made to toggle its output (oscillate between 0 and 1) when passed the input combination of 11. Unlike the JK flip-flop, the 11 input combination for the JK latch is not very useful because there is no clock that directs toggling.
Gated latches and conditional transparency
Latches are designed to be transparent. That is, input signal changes cause immediate changes in output. Additional logic can be added to a transparent latch to make it non-transparent or opaque when another input (an "enable" input) is not asserted. When several transparent latches follow each other, if they are all transparent at the same time, signals will propagate through them all. However, following a transparent-high latch by a transparent-low latch (or vice-versa) causes the state and output to only change on clock edges, forming what is called a master–slave flip-flop.
Gated SR latch
A gated SR latch can be made by adding a second level of NAND gates to an inverted (S̄R̄) NAND latch. The extra NAND gates invert the inputs once more, so the inverted latch becomes a gated SR latch (a plain SR latch treated this way would instead become a gated latch with an inverted enable).
Alternatively, a gated SR latch (with non-inverting enable) can be made by adding a second level of AND gates to a SR latch.
With E high (enable true), the signals can pass through the input gates to the encapsulated latch; all signal combinations except for (0, 0) = hold then immediately reproduce on the (Q, Q̄) output, i.e. the latch is transparent.
With E low (enable false) the latch is closed (opaque) and remains in the state it was left the last time E was high.
A periodic enable input signal may be called a write strobe. When the enable input is a clock signal, the latch is said to be level-sensitive (to the level of the clock signal), as opposed to edge-sensitive like flip-flops below.
Gated D latch
This latch exploits the fact that, in the two active input combinations (01 and 10) of a gated SR latch, R is the complement of S. The input NAND stage converts the two D input states (0 and 1) to these two input combinations for the next latch by inverting the data input signal. The low state of the enable signal produces the inactive "11" combination. Thus a gated D-latch may be considered as a one-input synchronous SR latch. This configuration prevents application of the restricted input combination. It is also known as transparent latch, data latch, or simply gated latch. It has a data input and an enable signal (sometimes named clock, or control). The word transparent comes from the fact that, when the enable input is on, the signal propagates directly through the circuit, from the input D to the output Q. Gated D-latches are also level-sensitive with respect to the level of the clock or enable signal.
Transparent latches are typically used as I/O ports or in asynchronous systems, or in synchronous two-phase systems (synchronous systems that use a two-phase clock), where two latches operating on different clock phases prevent data transparency as in a master–slave flip-flop.
The truth table below shows that when the enable/clock input is 0, the D input has no effect on the output. When E/C is high, the output equals D.
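A behavioural sketch of the gated D latch follows; it is illustrative only (the names are invented) and models only the level-sensitive transparency described above, not gate delays.

```python
# Transparent (gated) D latch: while E is high the output follows D,
# while E is low the stored value is held.
def gated_d_latch(d: int, e: int, q: int) -> int:
    return d if e else q

q = 0
for d, e in [(1, 1), (0, 1), (1, 0), (0, 0)]:
    q = gated_d_latch(d, e, q)
    print(f"D={d} E={e} -> Q={q}")
# While E=1 the latch is transparent (Q tracks D); while E=0 it holds the
# last value captured, here 0 from the final transparent step.
```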
Earle latch
The classic gated latch designs have some undesirable characteristics. They require dual-rail logic or an inverter. The input-to-output propagation may take up to three gate delays. The input-to-output propagation is not constant – some outputs take two gate delays while others take three.
Designers looked for alternatives. A successful alternative is the Earle latch. It requires only a single data input, and its output takes a constant two gate delays. In addition, the two gate levels of the Earle latch can, in some cases, be merged with the last two gate levels of the circuits driving the latch because many common computational circuits have an OR layer followed by an AND layer as their last two levels. Merging the latch function can implement the latch with no additional gate delays. The merge is commonly exploited in the design of pipelined computers, and, in fact, was originally developed by John G. Earle to be used in the IBM System/360 Model 91 for that purpose.
The Earle latch is hazard free. If the middle NAND gate is omitted, then one gets the polarity hold latch, which is commonly used because it demands less logic. However, it is susceptible to logic hazard. Intentionally skewing the clock signal can avoid the hazard.
D flip-flop
The D flip-flop is widely used, and known as a "data" flip-flop. The D flip-flop captures the value of the D-input at a definite portion of the clock cycle (such as the rising edge of the clock). That captured value becomes the Q output. At other times, the output Q does not change. The D flip-flop can be viewed as a memory cell, a zero-order hold, or a delay line.
Truth table:
{| class="wikitable" style="text-align:center;"
|-
! Clock !! D !! Qnext
|-
| Rising edge || 0 || 0
|-
| Rising edge || 1 || 1
|-
| Non-rising || X || Q
|}
(X denotes a don't care condition, meaning the signal is irrelevant)
Most D-type flip-flops in ICs have the capability to be forced to the set or reset state (which ignores the D and clock inputs), much like an SR flip-flop. Usually, the illegal S = R = 1 condition is resolved in D-type flip-flops. Setting S = R = 0 makes the flip-flop behave as described above. Here is the truth table for the other possible S and R configurations:
{| class="wikitable" style="text-align:center;" width=150
! colspan=4 | Inputs !! colspan=2 | Outputs
|-
! S !! R !! D !! > !! Q !! Q̄
|-
| 0 || 1 || X || X || 0 || 1
|-
| 1 || 0 || X || X || 1 || 0
|-
| 1 || 1 || X || X || 1 || 1
|}
These flip-flops are very useful, as they form the basis for shift registers, which are an essential part of many electronic devices. The advantage of the D flip-flop over the D-type "transparent latch" is that the signal on the D input pin is captured the moment the flip-flop is clocked, and subsequent changes on the D input will be ignored until the next clock event. An exception is that some flip-flops have a "reset" signal input, which will reset Q (to zero), and may be either asynchronous or synchronous with the clock.
The above circuit shifts the contents of the register to the right, one bit position on each active transition of the clock. The input X is shifted into the leftmost bit position.
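As an illustration of the shift-register behaviour just described, the following Python sketch (with invented names, not a model of any particular device) shifts a serial input into a four-bit register, one bit per clock edge.

```python
# Chain of D flip-flops sharing one clock: on each active edge every stage
# captures the previous stage's output, and the serial input x enters at the
# leftmost bit position.
def shift_right(register: list[int], x: int) -> list[int]:
    """One clock edge applied to the whole chain simultaneously."""
    return [x] + register[:-1]

reg = [0, 0, 0, 0]
for bit in (1, 0, 1, 1):        # serial input stream x
    reg = shift_right(reg, bit)
    print(reg)
# Final state: [1, 1, 0, 1] -- the stream read from newest (left) to oldest.
```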
Classical positive-edge-triggered D flip-flop
This circuit consists of two stages implemented by NAND latches. The input stage (the two latches on the left) processes the clock and data signals to ensure correct input signals for the output stage (the single latch on the right). If the clock is low, both the output signals of the input stage are high regardless of the data input; the output latch is unaffected and it stores the previous state. When the clock signal changes from low to high, only one of the output voltages (depending on the data signal) goes low and sets/resets the output latch: if D = 0, the lower output becomes low; if D = 1, the upper output becomes low. If the clock signal continues staying high, the outputs keep their states regardless of the data input and force the output latch to stay in the corresponding state as the input logical zero (of the output stage) remains active while the clock is high. Hence the role of the output latch is to store the data only while the clock is low.
The circuit is closely related to the gated D latch as both the circuits convert the two D input states (0 and 1) to two input combinations (01 and 10) for the output latch by inverting the data input signal (both the circuits split the single D signal into two complementary signals). The difference is that NAND logical gates are used in the gated D latch, while NAND latches are used in the positive-edge-triggered D flip-flop. The role of these latches is to "lock" the active output producing low voltage (a logical zero); thus the positive-edge-triggered D flip-flop can also be thought of as a gated D latch with latched input gates.
Master–slave edge-triggered D flip-flop
A master–slave D flip-flop is created by connecting two gated D latches in series, and inverting the enable input to one of them. It is called master–slave because the master latch controls the slave latch's output value Q and forces the slave latch to hold its value whenever the slave latch is enabled, as the slave latch always copies its new value from the master latch and changes its value only in response to a change in the value of the master latch and clock signal.
For a positive-edge triggered master–slave D flip-flop, when the clock signal is low (logical 0) the "enable" seen by the first or "master" D latch (the inverted clock signal) is high (logical 1). This allows the "master" latch to store the input value when the clock signal transitions from low to high. As the clock signal goes high (0 to 1) the inverted "enable" of the first latch goes low (1 to 0) and the value seen at the input to the master latch is "locked". Nearly simultaneously, the twice inverted "enable" of the second or "slave" D latch transitions from low to high (0 to 1) with the clock signal. This allows the signal captured at the rising edge of the clock by the now "locked" master latch to pass through the "slave" latch. When the clock signal returns to low (1 to 0), the output of the "slave" latch is "locked", and the value seen at the last rising edge of the clock is held while the "master" latch begins to accept new values in preparation for the next rising clock edge.
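The master–slave construction can be modelled behaviourally as two gated D latches with complementary enables. The sketch below is illustrative only; its class and method names are invented, and it does not model propagation delays or metastability.

```python
class MasterSlaveDFF:
    """Two gated D latches with complementary enables (illustrative model)."""
    def __init__(self):
        self.master = 0      # output of the master latch
        self.q = 0           # output of the slave latch (flip-flop output Q)
        self.prev_clk = 0

    def step(self, d: int, clk: int) -> int:
        if clk == 0:                               # clock low: master transparent
            self.master = d
        if self.prev_clk == 0 and clk == 1:        # rising edge: slave copies master
            self.q = self.master
        self.prev_clk = clk
        return self.q

ff = MasterSlaveDFF()
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0)]:
    q = ff.step(d, clk)
print(q)   # -> 0: the value D held when the clock last rose
```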
Removing the leftmost inverter in the circuit creates a D-type flip-flop that strobes on the falling edge of a clock signal. This has a truth table like this:
{|class="wikitable" style="text-align:center;"
|-
! D !! Q !! > !! Qnext
|-
| 0 || X || Falling || 0
|-
| 1 || X || Falling || 1
|}
Dual-edge-triggered D flip-flop
Flip-Flops that read in a new value on the rising and the falling edge of the clock are called dual-edge-triggered flip-flops. Such a flip-flop may be built using two single-edge-triggered D-type flip-flops and a multiplexer, or by using two single-edge triggered D-type flip-flops and three XOR gates.
Edge-triggered dynamic D storage element
An efficient functional alternative to a D flip-flop can be made with dynamic circuits (where information is stored in a capacitance) as long as it is clocked often enough; while not a true flip-flop, it is still called a flip-flop for its functional role. While the master–slave D element is triggered on the edge of a clock, its components are each triggered by clock levels. The "edge-triggered D flip-flop", as it is called even though it is not a true flip-flop, does not have the master–slave properties.
Edge-triggered D flip-flops are often implemented in integrated high-speed operations using dynamic logic. This means that the digital output is stored on parasitic device capacitance while the device is not transitioning. This design facilitates resetting by simply discharging one or more internal nodes. A common dynamic flip-flop variety is the true single-phase clock (TSPC) type which performs the flip-flop operation with little power and at high speeds. However, dynamic flip-flops will typically not work at static or low clock speeds: given enough time, leakage paths may discharge the parasitic capacitance enough to cause the flip-flop to enter invalid states.
T flip-flop
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is strobed. If the T input is low, the flip-flop holds the previous value. This behavior is described by the characteristic equation:
Qnext = T ⊕ Q = TQ̄ + T̄Q (expanding the XOR operator)
and can be described in a truth table:
{| class="wikitable" style="text-align:center;"
|+ T flip-flop operation
|-
! colspan="4" | Characteristic table
! colspan="5" | Excitation table
|-
! !! !! !! Comment
! !! !! !! Comment
|-
| 0 || 0 || 0 || Hold state (no clock)
| 0 || 0 || 0 || No change
|-
| 0 || 1 || 1 || Hold state (no clock)
| 1 || 1 || 0 || No change
|-
| 1 || 0 || 1 || Toggle
| 0 || 1 || 1 || Complement
|-
| 1 || 1 || 0 || Toggle
| 1 || 0 || 1 || Complement
|}
When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if clock frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This "divide by" feature has application in various types of digital counters. A T flip-flop can also be built using a JK flip-flop (J & K pins are connected together and act as T) or a D flip-flop (T input XOR Qprevious drives the D input).
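The divide-by-two behaviour can be seen in a short behavioural sketch (illustrative only; the function name is invented), which evaluates the characteristic equation Qnext = T ⊕ Q once per clock edge with T held high.

```python
# T flip-flop used as a divide-by-two stage: with T=1 the output toggles on
# every active clock edge, giving half the clock frequency.
def t_ff(t: int, q: int) -> int:
    """Characteristic equation Qnext = T xor Q, evaluated per clock edge."""
    return t ^ q

q = 0
outputs = []
for _edge in range(8):          # eight rising clock edges, T held at 1
    q = t_ff(1, q)
    outputs.append(q)
print(outputs)                  # [1, 0, 1, 0, 1, 0, 1, 0] -- half-rate square wave
```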
JK flip-flop
The JK flip-flop augments the behavior of the SR flip-flop (J: Set, K: Reset) by interpreting the J = K = 1 condition as a "flip" or toggle command. Specifically, the combination J = 1, K = 0 is a command to set the flip-flop; the combination J = 0, K = 1 is a command to reset the flip-flop; and the combination J = K = 1 is a command to toggle the flip-flop, i.e., change its output to the logical complement of its current value. Setting J = K = 0 maintains the current state. To synthesize a D flip-flop, simply set K equal to the complement of J (input J will act as input D). Similarly, to synthesize a T flip-flop, set K equal to J. The JK flip-flop is therefore a universal flip-flop, because it can be configured to work as an SR flip-flop, a D flip-flop, or a T flip-flop.
The characteristic equation of the JK flip-flop is:
Qnext = JQ̄ + K̄Q
and the corresponding truth table is:
{|class="wikitable" style="text-align:center;"
|+ JK flip-flop operation
|-
! colspan="4" | Characteristic table
! colspan="5" | Excitation table
|-
! J !! K !! Comment !! Qnext
! Q !! Qnext !! Comment !! J !! K
|-
| 0 || 0 || Hold state || Q
| 0 || 0 || No change || 0 || X
|-
| 0 || 1 || Reset || 0
| 0 || 1 || Set || 1 || X
|-
| 1 || 0 || Set || 1
| 1 || 0 || Reset || X || 1
|-
| 1 || 1 || Toggle || Q̄
| 1 || 1 || No change || X || 0
|}
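The table above, and the universal wirings mentioned earlier (K = NOT J for D behaviour, K = J for T behaviour), can be illustrated with a short behavioural sketch. It is illustrative only; the function names are invented and no timing is modelled.

```python
# Clocked JK flip-flop via the characteristic equation Qnext = J*(not Q) + (not K)*Q.
def jk_ff(j: int, k: int, q: int) -> int:
    return (j and (1 - q)) or ((1 - k) and q)

q = 0
q = jk_ff(1, 0, q)        # set    -> 1
q = jk_ff(1, 1, q)        # toggle -> 0
q = jk_ff(0, 0, q)        # hold   -> 0

def d_from_jk(d: int, q: int) -> int:      # D flip-flop: J = D, K = not D
    return jk_ff(d, 1 - d, q)

def t_from_jk(t: int, q: int) -> int:      # T flip-flop: J = K = T
    return jk_ff(t, t, q)

print(d_from_jk(1, 0), t_from_jk(1, 1))    # -> 1 0
```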
Timing considerations
Timing parameters
The input must be held steady in a period around the rising edge of the clock known as the aperture. Imagine taking a picture of a frog on a lily-pad. Suppose the frog then jumps into the water. If you take a picture of the frog as it jumps into the water, you will get a blurry picture of the frog jumping into the water—it's not clear which state the frog was in. But if you take a picture while the frog sits steadily on the pad (or is steadily in the water), you will get a clear picture. In the same way, the input to a flip-flop must be held steady during the aperture of the flip-flop.
Setup time is the minimum amount of time the data input should be held steady before the clock event, so that the data is reliably sampled by the clock.
Hold time is the minimum amount of time the data input should be held steady after the clock event, so that the data is reliably sampled by the clock.
Aperture is the sum of setup and hold time. The data input should be held steady throughout this time period.
Recovery time is the minimum amount of time the asynchronous set or reset input should be inactive before the clock event, so that the data is reliably sampled by the clock. The recovery time for the asynchronous set or reset input is thereby similar to the setup time for the data input.
Removal time is the minimum amount of time the asynchronous set or reset input should be inactive after the clock event, so that the data is reliably sampled by the clock. The removal time for the asynchronous set or reset input is thereby similar to the hold time for the data input.
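These definitions can be illustrated with a simple check of whether a data transition falls inside the aperture of a clock edge. The sketch below is illustrative only; the times are hypothetical values, not figures from any real device.

```python
# Illustrative aperture check: a data edge must not fall inside the window
# [clock_edge - setup, clock_edge + hold].  All times in nanoseconds.
T_SETUP = 2.0   # ns before the clock edge (hypothetical)
T_HOLD  = 1.0   # ns after the clock edge (hypothetical)

def violates_aperture(data_edge: float, clock_edge: float) -> bool:
    """True if the data transition falls inside the setup/hold window."""
    return (clock_edge - T_SETUP) < data_edge < (clock_edge + T_HOLD)

print(violates_aperture(data_edge=7.5, clock_edge=10.0))   # False: 2.5 ns early
print(violates_aperture(data_edge=9.5, clock_edge=10.0))   # True: inside setup window
print(violates_aperture(data_edge=10.5, clock_edge=10.0))  # True: inside hold window
```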
Short impulses applied to asynchronous inputs (set, reset) should not be applied completely within the recovery-removal period, or else it becomes entirely indeterminable whether the flip-flop will transition to the appropriate state. In another case, where an asynchronous signal simply makes one transition that happens to fall between the recovery/removal time, eventually the flip-flop will transition to the appropriate state, but a very short glitch may or may not appear on the output, dependent on the synchronous input signal. This second situation may or may not have significance to a circuit design.
Set and Reset (and other) signals may be either synchronous or asynchronous and therefore may be characterized with either Setup/Hold or Recovery/Removal times, and synchronicity is very dependent on the design of the flip-flop.
Differentiation between Setup/Hold and Recovery/Removal times is often necessary when verifying the timing of larger circuits because asynchronous signals may be found to be less critical than synchronous signals. The differentiation offers circuit designers the ability to define the verification conditions for these types of signals independently.
Metastability
Flip-flops are subject to a problem called metastability, which can happen when two inputs, such as data and clock or clock and reset, are changing at about the same time. When the order is not clear, within appropriate timing constraints, the result is that the output may behave unpredictably, taking many times longer than normal to settle to one state or the other, or even oscillating several times before settling. Theoretically, the time to settle down is not bounded. In a computer system, this metastability can cause corruption of data or a program crash if the state is not stable before another circuit uses its value; in particular, if two different logical paths use the output of a flip-flop, one path can interpret it as a 0 and the other as a 1 when it has not resolved to stable state, putting the machine into an inconsistent state.
The metastability in flip-flops can be avoided by ensuring that the data and control inputs are held valid and constant for specified periods before and after the clock pulse, called the setup time (tsu) and the hold time (th) respectively. These times are specified in the data sheet for the device, and are typically between a few nanoseconds and a few hundred picoseconds for modern devices. Depending upon the flip-flop's internal organization, it is possible to build a device with a zero (or even negative) setup or hold time requirement but not both simultaneously.
Unfortunately, it is not always possible to meet the setup and hold criteria, because the flip-flop may be connected to a real-time signal that could change at any time, outside the control of the designer. In this case, the best the designer can do is to reduce the probability of error to a certain level, depending on the required reliability of the circuit. One technique for suppressing metastability is to connect two or more flip-flops in a chain, so that the output of each one feeds the data input of the next, and all devices share a common clock. With this method, the probability of a metastable event can be reduced to a negligible value, but never to zero. The probability of metastability gets closer and closer to zero as the number of flip-flops connected in series is increased. The number of flip-flops being cascaded is referred to as the "ranking"; "dual-ranked" flip-flops (two flip-flops in series) are a common arrangement.
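The dual-ranked synchronizer described above can be sketched behaviourally as two D flip-flops in series sharing one clock. The model below is illustrative only: the names are invented, and it cannot reproduce metastability itself; it only shows the two-cycle latency such a chain introduces.

```python
# Two-flop synchronizer: each clock edge shifts the asynchronous sample
# through two D flip-flop stages before it reaches the synchronous domain.
def synchronizer(async_samples):
    """Yield the output of the second flip-flop on each clock edge."""
    stage1 = stage2 = 0
    for sample in async_samples:
        stage2, stage1 = stage1, sample     # both stages clock simultaneously
        yield stage2

print(list(synchronizer([0, 1, 1, 1, 0, 0])))   # [0, 0, 1, 1, 1, 0]
```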
So-called metastable-hardened flip-flops are available, which work by reducing the setup and hold times as much as possible, but even these cannot eliminate the problem entirely. This is because metastability is more than simply a matter of circuit design. When the transitions in the clock and the data are close together in time, the flip-flop is forced to decide which event happened first. However fast the device is made, there is always the possibility that the input events will be so close together that it cannot detect which one happened first. It is therefore logically impossible to build a perfectly metastable-proof flip-flop. Flip-flops are sometimes characterized for a maximum settling time (the maximum time they will remain metastable under specified conditions). In this case, dual-ranked flip-flops that are clocked slower than the maximum allowed metastability time will provide proper conditioning for asynchronous (e.g., external) signals.
Propagation delay
Another important timing value for a flip-flop is the clock-to-output delay (common symbol in data sheets: tCO) or propagation delay (tP), which is the time a flip-flop takes to change its output after the clock edge. The time for a high-to-low transition (tPHL) is sometimes different from the time for a low-to-high transition (tPLH).
When cascading flip-flops which share the same clock (as in a shift register), it is important to ensure that the tCO of a preceding flip-flop is longer than the hold time (th) of the following flip-flop, so data present at the input of the succeeding flip-flop is properly "shifted in" following the active edge of the clock. This relationship between tCO and th is normally guaranteed if the flip-flops are physically identical. Furthermore, for correct operation, it is easy to verify that the clock period has to be greater than the sum tsu + th.
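As a small numeric illustration of the two conditions stated in this paragraph, using hypothetical data-sheet values rather than figures for any real part:

```python
# Hypothetical timing figures, in nanoseconds, for two identical cascaded flip-flops.
t_co   = 4.0    # clock-to-output delay of the preceding flip-flop
t_su   = 2.0    # setup time of the following flip-flop
t_hold = 1.0    # hold time of the following flip-flop
period = 10.0   # clock period

print(t_co > t_hold)            # hold condition for shifting data correctly
print(period > t_su + t_hold)   # minimum clock period condition from the text
```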
Generalizations
Flip-flops can be generalized in at least two ways: by making them 1-of-N instead of 1-of-2, and by adapting them to logic with more than two states. In the special cases of 1-of-3 encoding, or multi-valued ternary logic, such an element may be referred to as a flip-flap-flop.
In a conventional flip-flop, exactly one of the two complementary outputs is high. This can be generalized to a memory element with N outputs, exactly one of which is high (alternatively, where exactly one of N is low). The output is therefore always a one-hot (respectively one-cold) representation. The construction is similar to a conventional cross-coupled flip-flop; each output, when high, inhibits all the other outputs. Alternatively, more or less conventional flip-flops can be used, one per output, with additional circuitry to make sure only one at a time can be true.
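A 1-of-N element can be sketched at the behavioural level as a cell that stores the index of the single asserted output. The sketch below is illustrative only (invented names, no gate-level detail).

```python
# Behavioural 1-of-N memory element: exactly one of N outputs is high.
class OneHotElement:
    def __init__(self, n: int):
        self.n = n
        self.state = 0                      # index of the output that is high

    def set_output(self, i: int):
        self.state = i                      # asserting output i inhibits the others

    def outputs(self):
        return [1 if i == self.state else 0 for i in range(self.n)]

cell = OneHotElement(4)
cell.set_output(2)
print(cell.outputs())   # [0, 0, 1, 0] -- a one-hot representation
```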
Another generalization of the conventional flip-flop is a memory element for multi-valued logic. In this case the memory element retains exactly one of the logic states until the control inputs induce a change. In addition, a multiple-valued clock can also be used, leading to new possible clock transitions.
| Technology | Digital logic | null |
42029762 | https://en.wikipedia.org/wiki/Deafness | Deafness | Deafness has varying definitions in cultural and medical contexts. In medical contexts, the meaning of deafness is hearing loss that precludes a person from understanding spoken language, an audiological condition. In this context it is written with a lower case d. It later came to be used in a cultural context to refer to those who primarily communicate through sign language regardless of hearing ability, often capitalized as Deaf and referred to as "big D Deaf" in speech and sign. The two definitions overlap but are not identical, as hearing loss includes cases that are not severe enough to impact spoken language comprehension, while cultural Deafness includes hearing people who use sign language, such as children of deaf adults.
Medical context
In a medical context, deafness is defined as a degree of hearing difference such that a person is unable to understand speech, even in the presence of amplification. In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard.
Neurologically, language is processed in the same areas of the brain whether one is deaf or hearing. The left hemisphere of the brain processes linguistic patterns whether by signed languages or by spoken languages.
Deafness can be broken down into four different types of hearing loss: conductive hearing loss, sensorineural hearing loss, mixed hearing loss, and auditory neuropathy spectrum disorder. All of these forms of hearing loss impair a person's ability to hear sounds correctly. These different types of hearing loss occur in different parts of the ear, which makes it difficult for the information being heard to be sent to the brain properly. To break it down even further, there are four different levels of hearing loss. According to the CDC, the first level is mild hearing loss, in which someone is still able to hear noises but has more difficulty hearing softer sounds. The second level is moderate hearing loss, in which someone can hear almost nothing when spoken to at a normal volume. The next level is severe hearing loss, in which someone cannot hear sounds produced at a normal level and can only hear some sounds produced at a loud level. The final level is profound hearing loss, in which someone is not able to hear any sounds except for very loud ones.
There are millions of people in the world living with deafness or hearing impairments. Data from the Survey of Income and Program Participation (SIPP) indicate that fewer than 1 in 20 Americans are currently deaf or hard of hearing. Many solutions are available for people with hearing impairments, such as blinking lights on phones, alarms, and other devices used to alert them. Cochlear implants are another option. Cochlear implants are surgically placed devices that stimulate the cochlear nerve in order to help the person hear. A cochlear implant is used instead of hearing aids when someone has difficulties understanding speech. A study by Anna Agostinelli et al. was done on four subjects with single-sided deafness who use cochlear implants. This study recorded their age, the cause of their hearing loss, which ear was affected, and how long it had been since their cochlear implant was activated; the children showed much improvement in their auditory use. Another study, by Shannon R. Culbertson et al., showed that children who had their activation at a younger age had better auditory skill and perception. Children whose implants were activated earlier had a higher FLI (Functional Listening Index) score than those whose were activated later. The Functional Listening Index was developed by The Shepherd Centre. It is a 60-item scale that tracks the development of auditory skills from birth through 5 years of age for six categories: sound awareness, associating sound with meaning, comprehending simple spoken language, comprehending language in different listening conditions, listening through discourse and narratives, and advanced open listening set (Davis et al., 2015). Merv Hyde, Renee Punch, and Linda Komesaroff completed a study finding that parents have difficulties making the decision to use cochlear implants for their child. A survey asked parents how they felt when making this decision; many made it only out of a feeling of urgency about implanting their child. This can be a serious procedure, which comes with the risk of negative results. In the end, most of the parents felt that the implant was beneficial for their child.
Cultural context
In a cultural context, Deaf culture refers to a tight-knit cultural group of people whose primary language is signed, and who practice social and cultural norms which are distinct from those of the surrounding hearing community. This community does not automatically include all those who are clinically or legally deaf, nor does it exclude every hearing person. According to Baker and Padden, it includes any person who "identifies him/herself as a member of the Deaf community, and other members accept that person as a part of the community", an example being children of deaf adults with normal hearing ability. It includes the set of social beliefs, behaviors, art, literary traditions, history, values, and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication. While deafness is often included within the umbrella of disability, members of the Deaf community tend to view deafness as a difference in human experience or itself as a language minority.
Many non-disabled people continue to assume that deaf people have no autonomy and fail to provide them with support beyond hearing aids, which is something that must be addressed. Different non-governmental organizations around the world have created programs towards closing the gap between deaf and non-disabled people in developing countries. As children, deaf people learn literacy differently than hearing children: they learn to speak and write together, whereas hearing children naturally learn to speak and eventually learn to write later on. The Quota International organization, with headquarters in the United States, provided immense educational support in the Philippines, where it started providing free education to deaf children at the Leganes Resource Center for the Deaf. The Sounds Seekers British organization also provided support by offering audiology maintenance technology to better assist those who are deaf in hard-to-reach places. The Nippon Foundation also supports deaf students at Gallaudet University and the National Technical Institute for the Deaf, through sponsoring international scholarship programs to encourage students to become future leaders in the deaf community. The more aid these organizations give to deaf people, the more opportunities and resources deaf people have to speak up about their struggles and the goals they aim to achieve. When more people understand how to leverage their privilege for marginalized groups in the community, a more inclusive and tolerant environment can be built for the generations yet to come.
History
The first known record of sign language in history comes from Plato's Cratylus, written in the fifth century BCE. In a dialogue on the "correctness of names", Socrates says, "Suppose that we had no voice or tongue, and wanted to communicate with one another, should we not, like the deaf and dumb, make signs with the hands and head and the rest of the body?" His belief that deaf people possessed an innate intelligence for language put him at odds with his student Aristotle, who said, "Those who are born deaf all become senseless and incapable of reason", and that "it is impossible to reason without the ability to hear".
This pronouncement would reverberate through the ages and it was not until the 17th century when manual alphabets began to emerge, as did various treatises on deaf education, such as Reducción de las letras y arte para enseñar a hablar a los mudos ('Reduction of letters and art for teaching mute people to speak'), written by Juan Pablo Bonet in Madrid in 1620, and Didascalocophus, or, The deaf and dumb mans tutor, written by George Dalgarno in 1680.
In 1760, French philanthropic educator Charles-Michel de l'Épée opened the world's first free school for the deaf. The school won approval for government funding in 1791 and became known as the "Institution Nationale des Sourds-Muets à Paris". The school inspired the opening of what is today known as the American School for the Deaf, the oldest permanent school for the deaf in the United States, and indirectly, Gallaudet University, the world's first school for the advanced education of the deaf and hard of hearing, and to date, the only higher education institution in which all programs and services are specifically designed to accommodate deaf and hard of hearing students.
Schooling
Nicole M. Stephens and Jill Duncan say that parents often encounter difficulties when it comes time for them to choose an educational setting for their child. There are many things they consider when choosing that setting for them. Three things to consider would be the needs and abilities of the child, how the school can make accommodations for the child, and the environment itself. There are four themes that connect to eight sub-themes that the author refers to. Child-Centered connects to Inclusion and Additional Needs and Well-Being. Familial connects to Complex Processes, Information Input and Flow, and Caregiver perceptions of Education. School connects to School Systems and Personnel, and School Character. And finally On Reflection connects to No Regrets. It can be profitable for both the child and the parent to do trial and error with different schools. This can lead to the child being in the proper environment for them and their needs.
| Biology and health sciences | Disabilities | Health |
39304988 | https://en.wikipedia.org/wiki/Time-domain%20astronomy | Time-domain astronomy | Time-domain astronomy is the study of how astronomical objects change with time. Said to have begun with Galileo's Letters on Sunspots, the field has now naturally expanded to encompass variable objects beyond the Solar System. Temporal variation may originate from movement of the source, or changes in the object itself. Common targets include novae, supernovae, pulsating stars, flare stars, blazars and active galactic nuclei. Optical time domain surveys include OGLE, HAT-South, PanSTARRS, SkyMapper, ASAS, WASP, CRTS, GOTO, and the forthcoming LSST at the Vera C. Rubin Observatory.
Time-domain astronomy studies transient astronomical events ("transients"), which include various types of variable stars, including periodic, quasi-periodic, high proper motion stars, and lifecycle events (supernovae, kilonovae) or other changes in behavior or type. Non-stellar transients include asteroids, planetary transits and comets.
Transients characterize astronomical objects or phenomena whose duration of presentation may be from milliseconds to days, weeks, or even several years. This is in contrast to the timescale of the millions or billions of years during which the galaxies and their component stars in the universe have evolved. The term is used for violent deep-sky events, such as supernovae, novae, dwarf nova outbursts, gamma-ray bursts, and tidal disruption events, as well as gravitational microlensing.
Time-domain astronomy also involves long-term studies of variable stars and their changes on the timescale of minutes to decades. Variability studied can be intrinsic, including periodic or semi-regular pulsating stars, young stellar objects, stars with outbursts, asteroseismology studies; or extrinsic, which results from eclipses (in binary stars, planetary transits), stellar rotation (in pulsars, spotted stars), or gravitational microlensing events.
Modern time-domain astronomy surveys often use robotic telescopes, automatic classification of transient events, and rapid notification of interested people. Blink comparators have long been used to detect differences between two photographic plates, and image subtraction became more widely used once digital photography eased the normalization of pairs of images. Because of the large fields of view required, time-domain work involves storing and transferring a huge amount of data, which calls for data mining techniques, classification, and the handling of heterogeneous data.
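The image-subtraction approach mentioned above can be sketched in a few lines of NumPy. The example below is purely illustrative: the images are synthetic, and the normalisation and threshold are arbitrary choices, not the method of any particular survey.

```python
# Difference imaging: normalise two exposures of the same field, subtract them,
# and flag pixels where the residual stands out (candidate transients).
import numpy as np

reference = np.random.default_rng(0).normal(100.0, 5.0, size=(64, 64))
new_image = reference.copy()
new_image[30, 40] += 80.0                     # inject a synthetic transient

# crude normalisation to a common background level before differencing
new_norm = new_image * (reference.mean() / new_image.mean())
difference = new_norm - reference

candidates = np.argwhere(difference > 10 * difference.std())
print(candidates)                             # -> [[30 40]]
```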
The importance of time-domain astronomy was recognized in 2018 by German Astronomical Society by awarding a Karl Schwarzschild Medal to Andrzej Udalski for "pioneering contribution to the growth of a new field of astrophysics research, time-domain astronomy, which studies the variability of brightness and other parameters of objects in the universe in different time scales."
Also the 2017 Dan David Prize was awarded to the three leading researchers in the field of time-domain astronomy: Neil Gehrels (Swift Gamma-Ray Burst Mission), Shrinivas Kulkarni (Palomar Transient Factory), Andrzej Udalski (Optical Gravitational Lensing Experiment).
History
Before the invention of telescopes, transient events that were visible to the naked eye, from within or near the Milky Way Galaxy, were very rare, and sometimes hundreds of years apart. However, such events were recorded in antiquity, such as the supernova in 1054 observed by Chinese, Japanese and Arab astronomers, and the event in 1572 known as "Tycho's Supernova" after Tycho Brahe, who studied it until it faded after two years. Even though telescopes made it possible to see more distant events, their small fields of view – typically less than 1 square degree – meant that the chances of looking in the right place at the right time were low. Schmidt cameras and other astrographs with wide field were invented in the 20th century, but mostly used to survey the unchanging heavens.
Historically time domain astronomy has come to include appearance of comets and variable brightness of Cepheid-type variable stars. Old astronomical plates exposed from the 1880s through the early 1990s held by the Harvard College Observatory are being digitized by the DASCH project.
The interest in transients intensified when large CCD detectors became available to the astronomical community. As telescopes with larger fields of view and larger detectors came into use in the 1990s, the first massive and regular survey observations were initiated, pioneered by gravitational microlensing surveys such as the Optical Gravitational Lensing Experiment and the MACHO Project. Besides discovering microlensing events themselves, these efforts increased the number of known variable stars by orders of magnitude.
Subsequent dedicated sky surveys such as the Palomar Transient Factory, the Gaia spacecraft and the LSST focused on expanding sky monitoring coverage to fainter objects, more optical filters, and better positional and proper motion measurement capabilities. In 2022, the Gravitational-wave Optical Transient Observer (GOTO) began looking for collisions between neutron stars.
The ability of modern instruments to observe in wavelengths invisible to the human eye (radio waves, infrared, ultraviolet, X-ray) increases the amount of information that may be obtained when a transient is studied.
In radio astronomy the LOFAR is looking for radio transients. Radio time domain studies have long included pulsars and scintillation. Projects to look for transients in X-ray and gamma rays include Cherenkov Telescope Array, eROSITA, AGILE, Fermi, HAWC, INTEGRAL, MAXI, Swift Gamma-Ray Burst Mission and Space Variable Objects Monitor. Gamma ray bursts are a well known high energy electromagnetic transient. The proposed ULTRASAT satellite will observe a field of more than 200 square degrees continuously in an ultraviolet wavelength that is particularly important for detecting supernovae within minutes of their occurrence.
| Physical sciences | Astronomy basics | Astronomy |
49557688 | https://en.wikipedia.org/wiki/Effects%20of%20climate%20change%20on%20small%20island%20countries | Effects of climate change on small island countries | The effects of climate change on small island countries are affecting people in coastal areas through sea level rise, increasing heavy rain events, tropical cyclones and storm surges. These effects of climate change threaten the existence of many island countries, their peoples and cultures. They also alter ecosystems and natural environments in those countries. Small island developing states (SIDS) are a heterogenous group of countries but many of them are particularly at risk to climate change. Those countries have been quite vocal in calling attention to the challenges they face from climate change. For example, the Maldives and nations of the Caribbean and Pacific Islands are already experiencing considerable impacts of climate change. It is critical for them to implement climate change adaptation measures fast.
Some small and low population islands do not have the resources to protect their islands and natural resources. They experience climate hazards which impact on human health, livelihoods, and inhabitable space. This can lead to pressure to leave these islands but resources to do so are often lacking as well.
Efforts to combat these challenges are ongoing and multinational. Many of the small island developing countries have a high vulnerability to climate change, whilst having contributed very little to global greenhouse gas emissions. Therefore, some small island countries have made advocacy for global cooperation on climate change mitigation a key aspect of their foreign policy.
History
Sea level was much lower during the last ice age when glaciers covered a third of Earth's land mass.
Common features
Small island developing states (SIDS) are identified as a group of 38 United Nations (UN) Member States and 20 Non-UN Member/Associate Members that are located in three regions: the Caribbean; the Pacific; and the Atlantic, Indian Ocean, Mediterranean and South China Seas (AIMS) and are home to approximately 65 million people. These nations are far from homogeneous but they do share numerous features, including narrow resource bases, dominance of economic sectors that are reliant on the natural environment, limited industrial activity, physical remoteness, and limited economies of scale.
Due to close connections between human communities and coastal environments, SIDS are particularly exposed to hazards associated with the ocean and cryosphere, including sea level rise, extreme sea levels, tropical cyclones, marine heatwaves, and ocean acidification. A common feature of SIDS is a high ratio of coastline-to-land area, with large portions of populations, infrastructure, and assets being located along the coast.
Patterns of increasing hazards, high levels of exposure, and acute vulnerability interact to result in high risk of small island developing states (SIDS) to climate change.
Small island developing states (SIDS) have long been recognized as being particularly at risk to climate change. These nations are often described as being on the “frontlines of climate change”, as “hot spots of climate change”, or as being “canaries in the coalmine”. The Intergovernmental Panel on Climate Change warned already in 2001 that small island countries will experience considerable economic and social consequences due to climate change.
Small island developing states make minimal contribution to global greenhouse gas emissions, with a combined total of less than 1%.
However, that does not indicate that greenhouse gas emissions are not produced at all; it is recorded that annual total greenhouse gas emissions from islands could range from 292.1 to 29,096.2 metric tonnes of CO2-equivalent.
Impacts
Sea level rise
Sea level rise is especially threatening to low-lying island nations because seas are encroaching upon limited habitable land and threatening existing cultures. Stefan Rahmstorf, a professor of Ocean Physics at Potsdam University in Germany, notes "even limiting warming to 2 degrees, in my view, will still commit some island nations and coastal cities to drown."
Changes in temperatures and rain
Atmospheric temperature extremes have already increased in frequency and intensity in SIDS and are projected to continue along this trend. Heavy precipitation events in SIDS have also increased in frequency and intensity and are expected to further increase.
Agriculture and fisheries
Climate change poses a risk to food security in many Pacific Islands, impacting fisheries and agriculture. As sea level rises, island nations are at increased risk of losing coastal arable land to degradation as well as salination. Once the limited available soil on these islands becomes salinated, it becomes very difficult to produce subsistence crops such as breadfruit. This would severely impact the agricultural and commercial sector in nations such as the Marshall Islands and Kiribati.
In addition, local fisheries would also be affected by higher ocean temperatures and increased ocean acidification. As ocean temperatures rise and the pH of oceans decreases, many fish and other marine species would die out or change their habits and range. As well as this, water supplies and local ecosystems such as mangroves, are threatened by global warming.
Economic impacts
SIDS may also have reduced financial and human capital to mitigate climate change risk, as many rely on international aid to cope with disasters like severe storms. Worldwide, climate change is projected to have an average annual loss of 0.5% GDP by 2030; in Pacific SIDS, it will be 0.75–6.5% GDP by 2030. Caribbean SIDS will have average annual losses of 5% by 2025, escalating to 20% by 2100 in projections without regional mitigation strategies. The tourism sector of many island countries is particularly threatened by increased occurrences of extreme weather events such as hurricanes and droughts.
Public health
Climate change impacts small island ecosystems in ways that have a detrimental effect on public health. In island nations, changes in sea levels, temperature, and humidity may increase the prevalence of mosquitoes and diseases carried by them such as malaria and Zika virus. Rising sea levels and severe weather such as flooding and droughts may render agricultural land unusable and contaminate freshwater drinking supplies. Flooding and rising sea levels also directly threaten populations, and in some cases may be a threat to the entire existence of the island.
Others
Other impacts on small islands include:
deterioration in coastal conditions, such as beach erosion and coral bleaching, which will likely affect local resources such as fisheries, as well as the value of tourism destinations.
reduction of already limited water resources to the point that they become insufficient to meet demand during low-rainfall periods by mid-century, especially on small islands (such as in the Caribbean and the Pacific Ocean)
invasion by non-native species increasing with higher temperatures, particularly in mid- and high-latitude islands.
Adaptation
Governments face a complex task when combining grey infrastructure with green infrastructure and nature-based solutions to help with disaster risk management in areas such as flood control, early warning systems, and integrated water resource management.
Relocation and migration
Climate migration has been discussed in popular media as a potential adaptation approach for the populations of islands threatened by sea level rise. These depictions are often sensationalist or problematic, although migration is likely to form part of adaptation. Mobility has long been a part of life on islands, and could be used in combination with local adaptation measures.
A study that engaged the experiences of residents in atoll communities found that the cultural identities of these populations are strongly tied to these lands. Human rights activists argue that the potential loss of entire atoll countries, and consequently the loss of national sovereignty, self-determination, cultures, and indigenous lifestyles cannot be compensated for financially. Some researchers suggest that the focus of international dialogues on these issues should shift from ways to relocate entire communities to strategies that instead allow for these communities to remain on their lands.
Climate resilient economies
Many SIDS now understand the need to move towards low-carbon, climate resilient economies, as set out in the Caribbean Community (CARICOM) implementation plan for climate change-resilient development. SIDS often rely heavily on imported fossil fuels, spending an ever-larger proportion of their GDP on energy imports. Renewable technologies have the advantage of providing energy at a lower cost than fossil fuels and making SIDS more sustainable. Barbados has been successful in adopting the use of solar water heaters (SWHs). A 2012 report published by the Climate & Development Knowledge Network showed that its SWH industry now boasts over 50,000 installations. These have saved consumers as much as US$137 million since the early 1970s. The report suggested that Barbados' experience could be easily replicated in other SIDS with high fossil fuel imports and abundant sunshine.
International cooperation
The governments of several island nations have made political advocacy for greater international ambition on climate change mitigation and climate change adaptation a component of their foreign policy and international alliances.
The Alliance of Small Island States (AOSIS) has been a strong negotiating group in the UNFCCC, highlighting that although its members are negligible contributors to anthropogenic climate change, they are among the most vulnerable to its impacts. The 43 members of the alliance have held the position of limiting global warming to 1.5°C, and advocated for this at the 2015 United Nations Climate Change Conference, influencing the goals of the Paris Agreement. Marshall Islands foreign minister Tony de Brum was central in forming the High Ambition Coalition at the conference. Meetings of the Pacific Islands Forum have also discussed the issue.
The Maldives and Tuvalu particularly have played a prominent role on the international stage. In 2002, Tuvalu threatened to sue the United States and Australia in the International Court of Justice for their contribution to climate change and for not ratifying the Kyoto Protocol. The governments of both of these countries have cooperated with environmental advocacy networks, non-governmental organisations and the media to draw attention to the threat of climate change to their countries. At the 2009 United Nations Climate Change Conference, Tuvalu delegate Ian Fry spearheaded an effort to halt negotiations and demand a comprehensive, legally binding agreement.
As of March 2022, the Asian Development Bank has committed $3.62 billion to help small island developing states with climate change, transport, energy, and health projects.
By country and region
Caribbean
East Timor
East Timor, or Timor-Leste, faces numerous challenges as a result of climate change and increased global temperatures. As an island country, rising sea levels threaten its coastal areas, including the capital city Dili. The country is considered highly vulnerable and is expected to experience worsening cyclones, flooding, heatwaves, and drought. As a large percentage of the population is dependent on local agriculture, these changes are expected to impact industry in the country as well.
Maldives
Pacific islands
Fiji
Kiribati
The existence of the nation of Kiribati is imperilled by rising sea levels, with the country losing land every year. Many of its islands are already, or are becoming, uninhabitable because of their shrinking size. As a result, the majority of the country's population resides on only a handful of islands, with more than half of its residents living on the single island of Tarawa. This leads to other issues, such as severe overcrowding in a small area. In 1999, the uninhabited islands of Tebua Tarawa and Abanuea both disappeared underwater. The government's Kiribati Adaptation Program was launched in 2003 to mitigate the country's vulnerability to the issue. In 2008, fresh water supplies began to be encroached upon by seawater, prompting President Anote Tong to request international assistance to begin relocating the country's population elsewhere.
Marshall Islands
Palau
Solomon Islands
Between 1947 and 2014, six islands of the Solomon Islands disappeared due to sea level rise, while another six shrank by between 20 and 62 per cent. Nuatambu Island was the most populated of these, with 25 families living on it; 11 of its houses had washed into the sea by 2011.
The Human Rights Measurement Initiative finds that the climate crisis has greatly worsened human rights conditions in the Solomon Islands (a score of 5.0 out of 6). Human rights experts reported that the climate crisis has contributed to conflict within communities, a worsening socio-economic outlook, and food insecurity.
Tuvalu
Tuvalu is a small Polynesian island nation located in the Pacific Ocean, about halfway between Hawaii and Australia. It is made up of nine tiny islands, five of which are coral atolls while the other four consist of land rising from the sea bed. All are low-lying, with no point on Tuvalu higher than 4.5 m above sea level. Analysis of years of sea level data from Funafuti identified a sea level rise rate of 5.9 mm per year (in the years to September 2008); the sea level in the Funafuti area rose approximately 9.14 cm over that period. In addition, peak high tides in Tuvalu are becoming higher, increasing the risk of flooding. In response to sea level rise, Tuvalu is considering resettlement plans in addition to pushing for increased action to confront climate change at the UN. On 10 November 2023, Tuvalu signed the Falepili Union, a bilateral diplomatic arrangement with Australia, under which Australia will provide a pathway for citizens of Tuvalu to migrate to Australia, enabling climate-related mobility for Tuvaluans.
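The reported trend and the reported cumulative rise are mutually consistent, as the short check below shows; note that the implied length of the observation record is an inference from these two figures, not a value stated above.

```python
# Consistency check between the reported sea-level trend at Funafuti
# (5.9 mm per year, in the years to September 2008) and the reported
# cumulative rise of approximately 9.14 cm over that record.

rate_mm_per_year = 5.9
cumulative_rise_cm = 9.14

implied_record_length_years = (cumulative_rise_cm * 10) / rate_mm_per_year
print(f"Implied record length: {implied_record_length_years:.1f} years")
# ~15.5 years, i.e. a record starting in the early 1990s (an inference,
# not a date given in the text).
```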
São Tomé and Príncipe
Seychelles
In the Seychelles, the impacts of climate change were observable in precipitation, air temperature and sea surface temperature by the early 2000s. Climate change poses a threat to its coral reef ecosystems, with drought conditions in 1999 and a mass bleaching event in 1998. Water management will be critically impacted.
Singapore
| Physical sciences | Climate change | Earth science |
25160767 | https://en.wikipedia.org/wiki/Livestock | Livestock | Livestock are the domesticated animals raised in an agricultural setting in order to provide labour and produce diversified products for consumption such as meat, eggs, milk, fur, leather, and wool. The term is sometimes used to refer solely to animals who are raised for consumption, and sometimes used to refer solely to farmed ruminants, such as cattle, sheep, and goats. Horses are considered livestock in the United States. The USDA classifies pork, veal, beef, and lamb (mutton) as livestock, and all livestock as red meat. Poultry and fish are not included in the category. The latter is likely due to the fact that fish products are not governed by the USDA, but by the FDA.
The breeding, maintenance, slaughter and general subjugation of livestock, called animal husbandry, is a part of modern agriculture and has been practiced in many cultures since humanity's transition from hunter-gatherer lifestyles to farming. Animal husbandry practices have varied widely across cultures and time periods. It continues to play a major economic and cultural role in numerous communities.
Livestock farming practices have largely shifted to intensive animal farming. Intensive animal farming increases the yield of the various commercial outputs, but also negatively impacts animal welfare, the environment, and public health. In particular, beef, dairy and sheep are an outsized source of greenhouse gas emissions from agriculture.
Etymology
The word livestock was first used between 1650 and 1660, as a compound word combining the words "live" and "stock". In some periods, "cattle" and "livestock" have been used interchangeably. Today, the modern meaning of cattle is domesticated bovines, while livestock has a wider sense.
United States federal legislation defines the term to make specified agricultural commodities eligible or ineligible for a program or activity. For example, the Livestock Mandatory Reporting Act of 1999 (P.L. 106–78, Title IX) defines livestock only as cattle, swine, and sheep, while the 1988 disaster assistance legislation defined the term as "cattle, sheep, goats, swine, poultry (including egg-producing poultry), equine animals used for food or in the production of food, fish used for food, and other animals designated by the Secretary".
Deadstock is defined in contradistinction to livestock as "animals that have died before slaughter, sometimes from illness or disease". It is illegal in many countries, such as Canada, to sell or process meat from dead animals for human consumption.
History
Animal-rearing originated during the cultural transition to settled farming communities from hunter-gatherer lifestyles. Animals are domesticated when their breeding and living conditions are controlled by humans. Over time, the collective behaviour, lifecycle and physiology of livestock have changed radically. Many modern farmed animals are unsuited to life in the natural world.
Dogs were domesticated early; dogs appear in Europe and the Far East from about 15,000 years ago. Goats and sheep were domesticated in multiple events sometime between 11,000 and 5,000 years ago in Southwest Asia. Pigs were domesticated by 8,500 BC in the Near East and 6,000 BC in China. Domestication of horses dates to around 4,000 BC. Cattle have been domesticated since approximately 10,500 years ago. Chickens and other poultry may have been domesticated around 7,000 BC.
Types
The term "livestock" is indistinct and may be defined narrowly or broadly. Broadly, livestock refers to any population of animals kept by humans for a useful, commercial purpose.
Micro-livestock
Micro-livestock is the term used for much smaller animals, usually mammals. The two predominant categories are rodents and lagomorphs (rabbits). Even smaller animals are kept and raised, such as crickets and honey bees. Micro-livestock does not generally include fish (aquaculture) or chickens (poultry farming).
Farming practices
Traditionally, animal husbandry was part of the subsistence farmer's way of life, producing not only the food needed by the family but also the fuel, fertiliser, clothing, transport and draught power. Killing the animal for food was a secondary consideration, and wherever possible their products, such as wool, eggs, milk and blood (by the Maasai) were harvested while the animal was still alive.
In the traditional system of transhumance, humans and livestock moved seasonally between fixed summer and winter pastures; in montane regions the summer pasture was up in the mountains, the winter pasture in the valleys.
Animals can be kept extensively or intensively. Extensive systems involve animals roaming at will, or under the supervision of a herdsman, often for their protection from predators. Ranching in the Western United States involves large herds of cattle grazing widely over public and private lands. Similar cattle stations are found in South America, Australia and other places with large areas of land and low rainfall. Ranching systems have been used for sheep, deer, ostrich, emu, llama and alpaca. In the uplands of the United Kingdom, sheep are turned out on the fells in spring and graze the abundant mountain grasses untended, being brought to lower altitudes late in the year, with supplementary feeding being provided in winter.
In rural locations, pigs and poultry can obtain much of their nutrition from scavenging, and in African communities, hens may live for months without being fed, and still produce one or two eggs a week. At the other extreme, in the more Western parts of the world, animals are often intensively managed; dairy cows may be kept in zero-grazing conditions with all their forage brought to them; beef cattle may be kept in high density feedlots; pigs may be housed in climate-controlled buildings and never go outdoors; poultry may be reared in barns and kept in cages as laying birds under lighting-controlled conditions. In between these two extremes are semi-intensive, often family-run farms where livestock graze outside for much of the year, silage or hay is made to cover the times of year when the grass stops growing, and fertiliser, feed and other inputs are bought onto the farm from outside.
Predation
Livestock farmers have often dealt with natural world animals' predation and theft by rustlers. In North America, animals such as gray wolves, grizzly bears, cougars, and coyotes are sometimes considered a threat to livestock. In Eurasia and Africa, predators include wolves, leopards, tigers, lions, dholes, Asiatic black bears, crocodiles, spotted hyenas, and other carnivores. In South America, feral dogs, jaguars, anacondas, and spectacled bears are threats to livestock. In Australia, dingoes, foxes, and wedge-tailed eagles are common predators, with an additional threat from domestic dogs who may kill in response to a hunting instinct, leaving the carcass uneaten.
Disease
Good husbandry, proper feeding, and hygiene are the main contributors to animal health on farms, bringing economic benefits through maximised production. When, despite these precautions, animals still become sick, they are treated with veterinary medicines, by the farmer and the veterinarian. In the European Union, when farmers treat the animals, they are required to follow the guidelines for treatment and to record the treatments given.
Animals are susceptible to a number of diseases and conditions that may affect their health. Some, like classical swine fever and scrapie are specific to one population of animals, while others, like foot-and-mouth disease affect all cloven-hoofed animals. Where the condition is serious, governments impose regulations on import and export, on the movement of livestock, quarantine restrictions and the reporting of suspected cases. Vaccines are available against certain diseases, and antibiotics are widely used where appropriate.
At one time, antibiotics were routinely added to certain compound foodstuffs to promote growth, but this is now considered poor practice in many countries because of the risk that it may lead to antibiotic resistance. Animals living under intensive conditions are particularly prone to internal and external parasites; increasing numbers of sea lice are affecting farmed salmon in Scotland. Reducing the parasite burdens of livestock results in increased productivity and profitability.
According to the Special Report on Climate Change and Land, livestock diseases are expected to get worse as climate change increases temperature and precipitation variability.
Transportation and marketing
Since many livestock are herd animals, they were historically driven to market "on the hoof" to a town or other central location. The method is still used in some parts of the world.
Truck transport is now common in developed countries.
Local and regional livestock auctions and specialized agricultural markets facilitate trade in livestock. In Canada at the Cargill slaughterhouse in High River, Alberta, 2,000 workers process 4,500 cattle per day, or more than one-third of Canada's capacity. It closed when some of its workers became infected with coronavirus disease 2019. The Cargill plant together with the JBS plant in Brooks, Alberta and the Harmony Beef plant in Balzac, Alberta represent fully three-quarters of the Canadian beef supply. In other areas, livestock may be bought and sold in a bazaar or wet market, such as may be found in many parts of Central Asia.
In non-Western countries, providing access to markets has encouraged farmers to invest in livestock, with the result being improved livelihoods. For example, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) has worked in Zimbabwe to help farmers make the most of their livestock herds.
In stock shows, farmers bring their best livestock to compete with one another.
Biomass
Humans and livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined.
Economic and social benefits
The value of global livestock production in 2013 has been estimated at 883 billion dollars (constant 2005–2006 dollars). However, economic implications of livestock production extend further: to downstream industry (saleyards, abattoirs, butchers, milk processors, refrigerated transport, wholesalers, retailers, food services, tanneries, etc.), upstream industry (feed producers, feed transport, farm and ranch supply companies, equipment manufacturers, seed companies, vaccine manufacturers, etc.) and associated services (veterinarians, nutrition consultants, shearers, etc.).
Livestock provide a variety of food and non-food products; the latter include leather, wool, pharmaceuticals, bone products, industrial protein, and fats. For many abattoirs, very little animal biomass may be wasted at slaughter. Even intestinal contents removed at slaughter may be recovered for use as fertilizer. Livestock manure helps maintain the fertility of grazing lands. Manure is commonly collected from barns and feeding areas to fertilize cropland. In some places, animal manure is used as fuel, either directly (as in some non-Western countries), or indirectly (as a source of methane for heating or for generating electricity). In regions where machine power is limited, some classes of livestock are used as draft stock, not only for tillage and other on-farm use, but also for transport of people and goods. In 1997, livestock provided energy for an estimated 25 to 64% of cultivation energy in the world's irrigated systems, and an estimated 300 million draft animals were used globally in small-scale agriculture.
Although livestock production serves as a source of income, it can provide additional economic values for rural families, often serving as a major contributor to food security and economic security. Livestock can serve as insurance against risk and is an economic buffer (of income and food supply) in some regions and some economies (e.g., during some African droughts). However, its use as a buffer may sometimes be limited where alternatives are present, which may reflect strategic maintenance of insurance in addition to a desire to retain productive assets. Even for some farmers in Western nations, livestock can serve as a kind of insurance. Some crop growers may produce livestock as a strategy for diversification of their income sources, to reduce risks related to weather, markets and other factors.
Many studies have found evidence of the social, as well as economic, importance of livestock in non-Western countries and in regions of rural poverty, and such evidence is not confined to pastoral and nomadic societies.
Social values in developed countries can also be considerable. For example, in a study of livestock ranching permitted on national forest land in New Mexico, US, it was concluded that "ranching maintains traditional values and connects families to ancestral lands and cultural heritage", and that a "sense of place, attachment to land, and the value of preserving open space were common themes". "The importance of land and animals as means of maintaining culture and way of life figured repeatedly in permittee responses, as did the subjects of responsibility and respect for land, animals, family, and community."
In the US, profit tends to rank low among motivations for involvement in livestock ranching. Instead, family, tradition and a desired way of life tend to be major motivators for ranch purchase, and ranchers "historically have been willing to accept low returns from livestock production".
Environmental impact
Animal husbandry has a significant impact on the world environment. It is responsible for somewhere between 20 and 33% of the fresh water usage in the world, and livestock, and the production of feed for them, occupy about a third of Earth's ice-free land. Livestock production is a contributing factor in species extinction, desertification, and habitat destruction. Meat is considered one of the prime factors contributing to the current sixth mass extinction. Animal agriculture contributes to species extinction in various ways. Habitat is destroyed by clearing forests and converting land to grow feed crops and for animal grazing (for example, animal husbandry is responsible for up to 91% of the deforestation in the Amazon region), while predators and herbivores are frequently targeted and hunted because of a perceived threat to livestock profits. The newest report released by the Intergovernmental Panel on Climate Change (IPCC) states that between the 1970s and 2000s, increases in agricultural emissions were directly linked to growth in livestock. Livestock populations (including cattle, buffalo, sheep, and goats) are expanded with the intention of increasing animal production, but this in turn increases emissions. In addition, livestock produce greenhouse gases. The IPCC has estimated that agriculture (including not only livestock, but also food crop, biofuel and other production) accounted for about 10 to 12 percent of global anthropogenic greenhouse gas emissions (expressed as 100-year carbon dioxide equivalents) in 2005 and in 2010. Cattle produce some 79 million tons of methane per year. Livestock enteric methane accounts for about 30% of the overall methane emissions of the planet. Livestock are responsible for 34% of all human-related emissions of nitrous oxide, through feed production and manure. Best production practices are estimated to be able to reduce livestock emissions by 30%.
Impacts of climate change
Animal ethics
Animal ethics is a branch of ethics that examines human-animal relationships and the moral consideration of non-human animals. Debates within the field address the moral implications of using animals for human consumption and the responsibilities humans have toward livestock.
It is estimated that worldwide, 74% of livestock are raised in factory farms, characterized by densely confined animals. Consumers are typically against intensive livestock farming when surveyed. A majority are unaware of routine controversial practices such as beak trimming, separation of calves from their mothers, and gas chamber slaughter. Three quarters of US adults surveyed believed the animal products they consumed came from animals that were treated "humanely".
Believing that livestock farming is cruel was cited as a reason for becoming vegan or vegetarian throughout the 2010s.
| Technology | Animal husbandry | null |
25164088 | https://en.wikipedia.org/wiki/Kleptothermy | Kleptothermy | In biology, kleptothermy is any form of thermoregulation by which an animal shares in the metabolic thermogenesis of another animal. It may or may not be reciprocal, and occurs in both endotherms and ectotherms. One of its forms is huddling. However, kleptothermy can happen between different species that share the same habitat, and can also happen in pre-hatching life where embryos are able to detect thermal changes in the environment.
This process requires two major conditions: thermal heterogeneity, created by the presence of a warm organism in a cool environment, and the use of that heterogeneity by another animal to maintain body temperatures at higher (and more stable) levels than would be possible elsewhere in the local area. The purpose of this behaviour is to enable these groups to increase their thermal inertia, retard heat loss, and/or reduce the per capita metabolic expenditure needed to maintain stable body temperatures.
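A simple geometric argument illustrates the per-capita saving described above. In the toy model below, each animal is idealised as a sphere, a huddle of N animals as a single sphere of N times the volume, and heat loss as proportional to exposed surface area; these are illustrative assumptions, not measurements from any particular species.

```python
# Toy model: per-capita exposed surface area of a huddle of N animals,
# each idealised as a sphere, with the huddle treated as one larger sphere.
# Heat loss is assumed proportional to exposed surface area.

import math

def sphere_surface_area(volume):
    """Surface area of a sphere with the given volume."""
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 4.0 * math.pi * radius ** 2

individual_volume = 1.0  # arbitrary units
solitary_area = sphere_surface_area(individual_volume)

for n in (1, 2, 10, 100):
    per_capita_area = sphere_surface_area(n * individual_volume) / n
    ratio = per_capita_area / solitary_area
    print(f"N = {n:3d}: per-capita exposed area = {ratio:.2f} x solitary")
# The ratio falls as N**(-1/3): larger huddles lose proportionally less
# heat per individual, consistent with the energy savings described.
```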
Kleptothermy is seen in cases where ectotherms regulate their own temperatures and exploit the high and constant body temperatures exhibited by endothermic species. In this case, the endotherms involved are not only mammals and birds; they can be termites that maintain high and constant temperatures within their mounds where they provide thermal regimes that are exploited by a wide array of lizards, snakes and crocodilians. However, many cases of kleptothermy involve ectotherms sheltering inside the burrows used by endotherms to help maintain a high constant body temperature.
Huddling
Huddling confers higher and more constant body temperatures than solitary resting. Some species of ectotherms including lizards and snakes, such as boa constrictors and tiger snakes, increase their effective mass by clustering tightly together. It is also widespread amongst gregarious endotherms such as bats and birds (such as the mousebird and emperor penguin) where it allows the sharing of body heat, particularly among juveniles.
In white-backed mousebirds (Colius colius), individuals maintain rest-phase body temperature above 32 °C despite air temperatures as low as -3.4 °C. This rest-phase body temperature was synchronized among individuals that cluster. Sometimes, kleptothermy is not reciprocal and might be accurately described as heat-stealing. For example, some male Canadian red sided garter snakes engage in female mimicry in which they produce fake pheromones after emerging from hibernation. This causes rival males to cover them in a mistaken attempt to mate, and so transfer heat to them. In turn, those males that mimic females become rapidly revitalized after hibernation (which depends upon raising their body temperature), giving them an advantage in their own attempts to mate.
On the other hand, huddling allows emperor penguins (Aptenodytes forsteri) to save energy, maintain a high body temperature and sustain their breeding fast during the Antarctic winter. This huddling behaviour raises the ambient temperature that these penguins are exposed to above 0 °C (at average external temperatures of -17 °C). As a consequence of tight huddles, ambient temperatures can be above 20 °C and can increase up to 37.5 °C, close to birds' body temperature. Therefore, this complex social behaviour is what enables all breeders to get an equal and normal access to an environment which allows them to save energy and successfully incubate their eggs during the Antarctic winter.
Habitat sharing
Many ectotherms exploit the heat produced by endotherms by sharing their nests and burrows. For example, mammal burrows are used by geckos and seabird burrows by Australian tiger snakes and New Zealand tuatara. Termites create high and regulated temperatures in their mounds, and this is exploited by some species of lizards, snakes and crocodiles.
Research has shown such kleptothermy can be advantageous in cases such as the blue-lipped sea krait (Laticauda laticaudata), where these reptiles occupy the burrow of a pair of wedge-tailed shearwaters incubating their chick. This raises the krait's body temperature compared with other habitats, and its body temperature is also observed to be more stable. Burrows without birds, by contrast, did not provide this warmth.
Another example is the close association between the fairy prion (Pachyptila turtur) and a medium-sized reptile, the tuatara (Sphenodon punctatus). These reptiles share the burrows made by the birds, and often stay when the birds are present, which helps them maintain a higher body temperature. Research has shown that fairy prions enable tuatara to maintain a higher body temperature through the night for several months of the year, from October to January (austral spring to summer). Tuatara sharing a burrow with a bird at night gained the greatest thermal benefit, helping them maintain a higher body temperature for up to 15 hours into the next day.
Pre-hatching life
Research on embryos of Chinese softshell turtles (Pelodiscus sinensis) falsifies the assumption that behavioural thermoregulation is possible only for post-hatching stages of the reptile life history. Remarkably, even undeveloped and tiny embryos were able to detect thermal differentials within the egg and move to exploit that small-scale heterogeneity. Research has shown that this behaviour exhibited by reptile embryos may well enhance offspring fitness, as movements of the embryos enabled them to maximize heat gain from their surroundings and thus increase their body temperatures. This in turn leads to variation in the embryonic development rate and the incubation period. Warmer incubation increases the developmental rate, thereby accelerating the hatching process, which can benefit the embryos.
On the other hand, decreased incubation periods may also minimize the embryo's exposure to risks of nest predation or lethally extreme thermal conditions, as embryos can move to cooler regions of the egg during periods of dangerously high temperatures.
In addition, embryonic thermoregulation could enhance hatching fitness via modifications to a range of phenotypic traits where embryos with minimal temperature differences hatch at the same time decreasing the individuals' risk of predation. Therefore, the developmental rates of embryos of reptiles are not passive consequences of maternally enforced decisions about the temperatures that the embryo will experience before hatching. Instead, the embryo's behaviour and physiology combine, allowing the smallest embryos to control aspects of their own pre-hatching environment showing that the embryo is not simply a work in progress, but is a functioning organism with surprisingly sophisticated and effective behaviours.
Evolution
Ectotherms and endotherms have followed different evolutionary trajectories: mammals and birds thermoregulate far more precisely than ectotherms. A major benefit of precise thermoregulation is the ability to enhance performance through thermal specialization. Mammals and birds are therefore assumed to have evolved relatively narrow performance breadths. Thus, heterothermy in these endotherms would lead to losses of performance during certain periods, and genetic variation in thermosensitivity would enable the evolution of thermal generalists in more heterothermic species. The physiologies of endotherms allow them to adapt within the constraints imposed by genetics, development, and physics.
On the other hand, the mechanisms for thermoregulation did not evolve separately, but rather in connection with other functions. These mechanisms were more likely quantitative rather than qualitative, involving selection of appropriate habitats, changes in levels of locomotor activity, optimal energy liberation, and conservation of metabolic substrates. The evolution of endothermy is directly linked to selection for high levels of activity sustained by aerobic metabolism. The evolution of complex behaviour patterns among birds and mammals requires the prior evolution of metabolic systems capable of supporting that activity.
Endothermy in vertebrates evolved along separate, but parallel lines from different groups of reptilian ancestors. The advantages of endothermy are manifested in the ability to occupy thermal areas that exclude many ectothermic vertebrates, a high degree of thermal independence from environmental temperature, high muscular power output and sustained levels of activity. Endothermy, however, is energetically very expensive and requires a great deal of food, compared with ectotherms in order to support high metabolic rates.
| Biology and health sciences | Basics | Biology |
25164668 | https://en.wikipedia.org/wiki/White%20blood%20cell | White blood cell | White blood cells (scientific name leukocytes), also called immune cells or immunocytes, are cells of the immune system that are involved in protecting the body against both infectious disease and foreign invaders. White blood cells are generally larger than red blood cells. They include three main subtypes: granulocytes, lymphocytes and monocytes.
All white blood cells are produced and derived from multipotent cells in the bone marrow known as hematopoietic stem cells. Leukocytes are found throughout the body, including the blood and lymphatic system. All white blood cells have nuclei, which distinguishes them from the other blood cells, the anucleated red blood cells (RBCs) and platelets. The different white blood cells are usually classified by cell lineage (myeloid cells or lymphoid cells). White blood cells are part of the body's immune system. They help the body fight infection and other diseases. Types of white blood cells are granulocytes (neutrophils, eosinophils, and basophils), and agranulocytes (monocytes, and lymphocytes (T cells and B cells)). Myeloid cells (myelocytes) include neutrophils, eosinophils, mast cells, basophils, and monocytes. Monocytes are further subdivided into dendritic cells and macrophages. Monocytes, macrophages, and neutrophils are phagocytic. Lymphoid cells (lymphocytes) include T cells (subdivided into helper T cells, memory T cells, cytotoxic T cells), B cells (subdivided into plasma cells and memory B cells), and natural killer cells. Historically, white blood cells were classified by their physical characteristics (granulocytes and agranulocytes), but this classification system is less frequently used now. Produced in the bone marrow, white blood cells defend the body against infections and disease. An excess of white blood cells is usually due to infection or inflammation. Less commonly, a high white blood cell count could indicate certain blood cancers or bone marrow disorders.
The number of leukocytes in the blood is often an indicator of disease, and thus the white blood cell count is an important subset of the complete blood count. The normal white cell count is usually between 4 × 10⁹/L and 1.1 × 10¹⁰/L. In the US, this is usually expressed as 4,000 to 11,000 white blood cells per microliter of blood. White blood cells make up approximately 1% of the total blood volume in a healthy adult, making them substantially less numerous than the red blood cells at 40% to 45%. However, this 1% of the blood makes a large difference to health, because immunity depends on it. An increase in the number of leukocytes over the upper limits is called leukocytosis. It is normal when it is part of healthy immune responses, which happen frequently. It is occasionally abnormal, when it is neoplastic or autoimmune in origin. A decrease below the lower limit is called leukopenia. This indicates a weakened immune system.
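The SI-style and US-style figures quoted above describe the same range; the conversion is simply between cells per litre and cells per microlitre, as the sketch below illustrates.

```python
# Converting a white blood cell count from cells per litre (SI style)
# to cells per microlitre (US style). 1 litre = 1,000,000 microlitres.

MICROLITRES_PER_LITRE = 1_000_000

def per_litre_to_per_microlitre(count_per_litre):
    return count_per_litre / MICROLITRES_PER_LITRE

normal_range_per_litre = (4e9, 1.1e10)
low, high = (per_litre_to_per_microlitre(c) for c in normal_range_per_litre)
print(f"Normal range: {low:,.0f} to {high:,.0f} cells per microlitre")
# -> 4,000 to 11,000 cells per microlitre, matching the US-style figures.
```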
Etymology
The name "white blood cell" derives from the physical appearance of a blood sample after centrifugation. White cells are found in the buffy coat, a thin, typically white layer of nucleated cells between the sedimented red blood cells and the blood plasma. The scientific term leukocyte directly reflects its description. It is derived from the Greek roots leuk- meaning "white" and cyt- meaning "cell". The buffy coat may sometimes be green if there are large amounts of neutrophils in the sample, due to the heme-containing enzyme myeloperoxidase that they produce.
Types
Overview
All white blood cells are nucleated, which distinguishes them from the anucleated red blood cells and platelets. Types of leukocytes can be classified in standard ways. Two pairs of broadest categories classify them either by structure (granulocytes or agranulocytes) or by cell lineage (myeloid cells or lymphoid cells). These broadest categories can be further divided into the five main types: neutrophils, eosinophils, basophils, lymphocytes, and monocytes. A good way to remember the relative proportions of WBCs is "Never Let Monkeys Eat Bananas". These types are distinguished by their physical and functional characteristics. Monocytes and neutrophils are phagocytic. Further subtypes can be classified.
Granulocytes are distinguished from agranulocytes by their nucleus shape (lobed versus round, that is, polymorphonuclear versus mononuclear) and by their cytoplasm granules (present or absent, or more precisely, visible on light microscopy or not thus visible). The other dichotomy is by lineage: Myeloid cells (neutrophils, monocytes, eosinophils and basophils) are distinguished from lymphoid cells (lymphocytes) by hematopoietic lineage (cellular differentiation lineage). Lymphocytes can be further classified as T cells, B cells, and natural killer cells.
Neutrophil
Neutrophils are the most abundant white blood cell, constituting 60–70% of the circulating leukocytes. They defend against bacterial or fungal infection. They are usually first responders to microbial infection; their activity and death in large numbers form pus. They are commonly referred to as polymorphonuclear (PMN) leukocytes, although, in the technical sense, PMN refers to all granulocytes. They have a multi-lobed nucleus, which consists of three to five lobes connected by slender strands. This gives the neutrophils the appearance of having multiple nuclei, hence the name polymorphonuclear leukocyte. The cytoplasm may look transparent because of fine granules that are pale lilac when stained. Neutrophils are active in phagocytosing bacteria and are present in large amounts in the pus of wounds. These cells are not able to renew their lysosomes (used in digesting microbes) and die after having phagocytosed a few pathogens. Neutrophils are the most common cell type seen in the early stages of acute inflammation. The average lifespan of inactivated human neutrophils in the circulation has been reported by different approaches to be between 5 and 135 hours.
Eosinophil
Eosinophils compose about 2–4% of white blood cells in circulating blood. This count fluctuates throughout the day, seasonally, and during menstruation. It rises in response to allergies, parasitic infections, collagen diseases, and disease of the spleen and central nervous system. They are rare in the blood, but numerous in the mucous membranes of the respiratory, digestive, and lower urinary tracts.
They primarily deal with parasitic infections. Eosinophils are also the predominant inflammatory cells in allergic reactions. The most important causes of eosinophilia include allergies such as asthma, hay fever, and hives; and parasitic infections. They secrete chemicals that destroy large parasites, such as hookworms and tapeworms, that are too big for any one white blood cell to phagocytize. In general, their nuclei are bi-lobed. The lobes are connected by a thin strand. The cytoplasm is full of granules that assume a characteristic pink-orange color with eosin staining.
Basophil
Basophils are chiefly responsible for allergic and antigen response by releasing the chemical histamine causing the dilation of blood vessels. Because they are the rarest of the white blood cells (less than 0.5% of the total count) and share physicochemical properties with other blood cells, they are difficult to study. They can be recognized by several coarse, dark violet granules, giving them a blue hue. The nucleus is bi- or tri-lobed, but it is hard to see because of the number of coarse granules that hide it.
They secrete two chemicals that aid in the body's defenses: histamine and heparin. Histamine is responsible for widening blood vessels and increasing the flow of blood to injured tissue. It also makes blood vessels more permeable so neutrophils and clotting proteins can get into connective tissue more easily. Heparin is an anticoagulant that inhibits blood clotting and promotes the movement of white blood cells into an area. Basophils can also release chemical signals that attract eosinophils and neutrophils to an infection site.
Lymphocyte
Lymphocytes are much more common in the lymphatic system than in blood. Lymphocytes are distinguished by having a deeply staining nucleus that may be eccentric in location, and a relatively small amount of cytoplasm. Lymphocytes include:
B cells make antibodies that can bind to pathogens, block pathogen invasion, activate the complement system, and enhance pathogen destruction.
T cells:
CD4+ T helper cells: T cells displaying co-receptor CD4 are known as CD4+ T cells. These cells have T-cell receptors and CD4 molecules that, in combination, bind antigenic peptides presented on major histocompatibility complex (MHC) class II molecules on antigen-presenting cells. Helper T cells make cytokines and perform other functions that help coordinate the immune response. In HIV infection, these T cells are the main index to identify the individual's immune system integrity.
CD8+ cytotoxic T cells: T cells displaying co-receptor CD8 are known as CD8+ T cells. These cells bind antigens presented on MHC I complex of virus-infected or tumour cells and kill them. Nearly all nucleated cells display MHC I.
γδ T cells possess an alternative T cell receptor (different from the αβ TCR found on conventional CD4+ and CD8+ T cells). Found in tissue more commonly than in blood, γδ T cells share characteristics of helper T cells, cytotoxic T cells, and natural killer cells.
Natural killer cells are able to kill cells of the body that do not display MHC class I molecules, or display stress markers such as MHC class I polypeptide–related sequence A (MIC-A). Decreased expression of MHC class I and up-regulation of MIC-A can happen when cells are infected by a virus or become cancerous.
Monocyte
Monocytes, the largest type of white blood cell, share the "vacuum cleaner" (phagocytosis) function of neutrophils, but are much longer lived as they have an extra role: they present pieces of pathogens to T cells so that the pathogens may be recognized again and killed. This causes an antibody response to be mounted. Monocytes eventually leave the bloodstream and become tissue macrophages, which remove dead cell debris as well as attack microorganisms. Neither dead cell debris nor attacking microorganisms can be dealt with effectively by the neutrophils. Unlike neutrophils, monocytes are able to replace their lysosomal contents and are thought to have a much longer active life. They have a kidney-shaped nucleus and are typically not granulated. They also possess abundant cytoplasm.
Fixed leucocytes
Some leucocytes migrate into the tissues of the body to take up a permanent residence at that location rather than remaining in the blood. Often these cells have specific names depending upon which tissue they settle in, such as fixed macrophages in the liver, which become known as Kupffer cells. These cells still serve a role in the immune system.
Histiocytes
Dendritic cells (Although these will often migrate to local lymph nodes upon ingesting antigens)
Mast cells
Microglia
Disorders
The two commonly used categories of white blood cell disorders divide them quantitatively into those causing excessive numbers (proliferative disorders) and those causing insufficient numbers (leukopenias). Leukocytosis is usually healthy (e.g., fighting an infection), but it also may be dysfunctionally proliferative. Proliferative disorders of white blood cells can be classed as myeloproliferative and lymphoproliferative. Some are autoimmune, but many are neoplastic.
Another way to categorize disorders of white blood cells is qualitatively. There are various disorders in which the number of white blood cells is normal but the cells do not function normally.
Neoplasia of white blood cells can be benign but is often malignant. Of the various tumors of the blood and lymph, cancers of white blood cells can be broadly classified as leukemias and lymphomas, although those categories overlap and are often grouped together.
Leukopenias
A range of disorders can cause decreases in white blood cells. The type of white blood cell decreased is usually the neutrophil, in which case the decrease may be called neutropenia or granulocytopenia. Less commonly, a decrease in lymphocytes (called lymphocytopenia or lymphopenia) may be seen.
Neutropenia
Neutropenia can be acquired or intrinsic. A decrease in levels of neutrophils on lab tests is due to either decreased production of neutrophils or increased removal from the blood. The following list of causes is not complete.
Medications – chemotherapy, sulfas or other antibiotics, phenothiazines, benzodiazepines, antithyroid medications, anticonvulsants, quinine, quinidine, indometacin, procainamide, thiazides
Radiation
Toxins – alcohol, benzenes
Intrinsic disorders – Fanconi's, Kostmann's, cyclic neutropenia, Chédiak–Higashi
Immune dysfunction – connective tissue diseases, AIDS, rheumatoid arthritis
Blood cell dysfunction – megaloblastic anemia, myelodysplasia, marrow failure, marrow replacement, acute leukemia
Any major infection
Miscellaneous – starvation, hypersplenism
Symptoms of neutropenia are associated with the underlying cause of the decrease in neutrophils. For example, the most common cause of acquired neutropenia is drug-induced, so an individual may have symptoms of medication overdose or toxicity.
Treatment is also aimed at the underlying cause of the neutropenia. One severe consequence of neutropenia is that it can increase the risk of infection.
Lymphocytopenia
Defined as a total lymphocyte count below 1.0 × 10⁹/L, the cells most commonly affected are CD4+ T cells. Like neutropenia, lymphocytopenia may be acquired or intrinsic and there are many causes. This is not a complete list.
Inherited immune deficiency – severe combined immunodeficiency, common variable immunodeficiency, ataxia–telangiectasia, Wiskott–Aldrich syndrome, immunodeficiency with short-limbed dwarfism, immunodeficiency with thymoma, purine nucleoside phosphorylase deficiency, genetic polymorphism
Blood cell dysfunction – aplastic anemia
Infectious diseases – viral (AIDS, SARS, West Nile encephalitis, hepatitis, herpes, measles, others), bacterial (TB, typhoid, pneumonia, rickettsiosis, ehrlichiosis, sepsis), parasitic (acute phase of malaria)
Medications – chemotherapy (antilymphocyte globulin therapy, alemtuzumab, glucocorticoids)
Radiation
Major surgery
Miscellaneous – ECMO, kidney or bone marrow transplant, hemodialysis, kidney failure, severe burns, celiac disease, severe acute pancreatitis, sarcoidosis, protein-losing enteropathy, strenuous exercise, carcinoma
Immune dysfunction – arthritis, systemic lupus erythematosus, Sjögren syndrome, myasthenia gravis, systemic vasculitis, Behçet's-like syndrome, dermatomyositis, granulomatosis with polyangiitis
Nutritional/Dietary – alcohol use disorder, zinc deficiency
Like neutropenia, symptoms and treatment of lymphocytopenia are directed at the underlying cause of the change in cell counts.
Proliferative disorders
An increase in the number of white blood cells in circulation is called leukocytosis. This increase is most commonly caused by inflammation. There are four major mechanisms: increased production in the bone marrow, increased release from storage in the bone marrow, decreased attachment to veins and arteries, and decreased uptake by tissues. Leukocytosis may affect one or more cell lines and can take the form of neutrophilia, eosinophilia, basophilia, monocytosis, or lymphocytosis.
Neutrophilia
Neutrophilia is an increase in the absolute neutrophil count in the peripheral circulation. Normal blood values vary by age. Neutrophilia can be caused by a direct problem with blood cells (primary disease). It can also occur as a consequence of an underlying disease (secondary). Most cases of neutrophilia are secondary to inflammation.
Primary causes
Conditions with normally functioning neutrophils – hereditary neutrophilia, chronic idiopathic neutrophilia
Pelger–Huët anomaly
Down syndrome
Leukocyte adhesion deficiency
Familial cold urticaria
Leukemia (chronic myelogenous (CML)) and other myeloproliferative disorders
Surgical removal of spleen
Secondary causes
Infection
Chronic inflammation – especially juvenile idiopathic arthritis, rheumatoid arthritis, Still's disease, Crohn's disease, ulcerative colitis, granulomatous infections (for example, tuberculosis), and chronic hepatitis
Cigarette smoking – occurs in 25–50% of chronic smokers and can last up to 5 years after quitting
Stress – exercise, surgery, general stress
Medication induced – corticosteroids (for example, prednisone), β-agonists, lithium
Cancer – either by growth factors secreted by the tumor or invasion of bone marrow by the cancer
Increased destruction of cells in peripheral circulation can stimulate bone marrow. This can occur in hemolytic anemia and idiopathic thrombocytopenic purpura
Eosinophilia
A normal eosinophil count is considered to be less than 0.65 × 10⁹/L. Eosinophil counts are higher in newborns and vary with age, time (lower in the morning and higher at night), exercise, environment, and exposure to allergens. Eosinophilia is never a normal lab finding. Efforts should always be made to discover the underlying cause, though the cause may not always be found.
Counting and reference ranges
The complete blood cell count is a blood panel that includes the overall white blood cell count and differential count, a count of each type of white blood cell. Reference ranges for blood tests specify the typical counts in healthy people.
The normal total leucocyte count in an adult is 4,000 to 11,000 per mm³ of blood.
Differential leucocyte count: the number and percentage of each type of leucocyte per cubic millimetre of blood; reference ranges are defined for each type.
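A differential count reported as percentages can be converted into absolute counts by multiplying each percentage by the total leucocyte count. The sketch below uses a hypothetical total count and an illustrative differential; only the neutrophil, eosinophil, and basophil proportions echo the typical figures quoted earlier in the article, and the remaining percentages are assumptions for the example.

```python
# Converting a differential leucocyte count (percentages) into absolute
# counts per cubic millimetre of blood. The total count and the example
# differential are hypothetical illustrative values, not reference ranges.

total_leucocytes_per_mm3 = 7000  # hypothetical total, within 4,000-11,000

example_differential_pct = {
    "neutrophils": 65.0,   # echoes the 60-70% proportion quoted earlier
    "lymphocytes": 25.0,   # assumed for the example
    "monocytes": 6.5,      # assumed for the example
    "eosinophils": 3.0,    # echoes the 2-4% proportion quoted earlier
    "basophils": 0.5,      # echoes the "less than 0.5%" figure quoted earlier
}

for cell_type, pct in example_differential_pct.items():
    absolute = total_leucocytes_per_mm3 * pct / 100
    print(f"{cell_type:>12}: {absolute:7,.0f} per mm^3")
```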
| Biology and health sciences | Circulatory system | null |
25164793 | https://en.wikipedia.org/wiki/Seed%20plant | Seed plant | A seed plant or spermatophyte (; ), also known as a phanerogam (taxon Phanerogamae) or a phaenogam (taxon Phaenogamae), is any plant that produces seeds. It is a category of embryophyte (i.e. land plant) that includes most of the familiar land plants, including the flowering plants and the gymnosperms, but not ferns, mosses, or algae.
The term phanerogam or phanerogamae is derived from the Greek word meaning "visible", in contrast to the term "cryptogam" or "cryptogamae", from the Greek words meaning "hidden" and "to marry". These terms distinguish those plants with hidden sexual organs (cryptogamae) from those with visible ones (phanerogamae).
Description
The extant spermatophytes form five divisions, the first four of which are classified as gymnosperms, plants that have unenclosed, "naked seeds":
Cycadophyta, the cycads, a subtropical and tropical group of plants,
Ginkgophyta, which includes a single living species of tree in the genus Ginkgo,
Pinophyta, the conifers, which are cone-bearing trees and shrubs, and
Gnetophyta, the gnetophytes, various woody plants in the relict genera Ephedra, Gnetum, and Welwitschia.
The fifth extant division is the flowering plants, also known as angiosperms or magnoliophytes, the largest and most diverse group of spermatophytes:
Angiosperms, the flowering plants, possess seeds enclosed in a fruit, unlike gymnosperms.
In addition to the five living taxa listed above, the fossil record contains evidence of many extinct taxa of seed plants, among those:
Pteridospermae, the so-called "seed ferns", were one of the earliest successful groups of land plants, and forests dominated by seed ferns were prevalent in the late Paleozoic.
Glossopteris was the most prominent tree genus in the ancient southern supercontinent of Gondwana during the Permian period.
By the Triassic period, seed ferns had declined in ecological importance, and representatives of modern gymnosperm groups were abundant and dominant through the end of the Cretaceous, when the angiosperms radiated.
Evolutionary history
A whole genome duplication event occurred in the ancestor of seed plants; it gave rise to a series of evolutionary changes that resulted in the origin of modern seed plants.
A middle Devonian (385-million-year-old) precursor to seed plants from Belgium has been identified, predating the earliest seed plants by about 20 million years. Runcaria, small and radially symmetrical, is an integumented megasporangium surrounded by a cupule. The megasporangium bears an unopened distal extension protruding above the multilobed integument. It is suspected that the extension was involved in anemophilous (wind) pollination. Runcaria sheds new light on the sequence of character acquisition leading to the seed. Runcaria has all of the qualities of seed plants except for a solid seed coat and a system to guide the pollen to the seed.
Runcaria was followed shortly after by plants with a more condensed cupule, such as Spermasporites and Moresnetia. Seed-bearing plants had diversified substantially by the Famennian, the last stage of the Devonian. Examples include Elkinsia, Xenotheca, Archaeosperma, "Hydrasperma", Aglosperma, and Warsteinia. Some of these Devonian seeds are now classified within the order Lyginopteridales.
Phylogeny
Seed-bearing plants are a clade within the vascular plants (tracheophytes).
Internal phylogeny
The spermatophytes were traditionally divided into angiosperms, or flowering plants, and gymnosperms, which include the gnetophytes, cycads, ginkgo, and conifers. Older morphological studies posited a close relationship between the gnetophytes and the angiosperms, in particular based on their shared vessel elements. However, molecular studies (and some more recent morphological and fossil papers) have generally recovered a clade of gymnosperms, with the gnetophytes in or near the conifers. For example, one commonly proposed set of relationships, known as the gne-pine hypothesis, places the gnetophytes as the sister group of the pines within the conifers.
However, the relationships between these groups should not be considered settled.
Other classifications
Other classifications group all the seed plants in a single division, with classes for the five groups:
Division Spermatophyta
Cycadopsida, the cycads
Ginkgoopsida, the ginkgo
Pinopsida, the conifers, ("Coniferopsida")
Gnetopsida, the gnetophytes
Magnoliopsida, the flowering plants, or Angiospermopsida
A more modern classification ranks these groups as separate divisions (sometimes under the Superdivision Spermatophyta):
Cycadophyta, the cycads
Ginkgophyta, the ginkgo
Pinophyta, the conifers
Gnetophyta, the gnetophytes
Magnoliophyta, the flowering plants
Unassigned extinct spermatophyte orders, some of which qualify as "seed ferns":
†Cordaitales
†Calamopityales
†Callistophytales
†Caytoniales
†Gigantopteridales
†Glossopteridales
†Lyginopteridales
†Medullosales
†Peltaspermales
†Umkomasiales (corystosperms)
†Czekanowskiales
†Bennettitales
†Erdtmanithecales
†Pentoxylales
†Petriellales Taylor et al. 1994
†Avatiaceae Anderson & Anderson 2003
†Axelrodiopsida Anderson & Anderson
†Alexiales Anderson & Anderson 2003
†Hamshawviales Anderson & Anderson 2003
†Hexapterospermales Doweld 2001
†Hlatimbiales Anderson & Anderson 2003
†Matatiellales Anderson & Anderson 2003
†Arberiopsida Doweld 2001
†Iraniales E. Taylor et al. 2008
†Vojnovskyales E. Taylor et al. 2008
†Hermanophytales E. Taylor et al. 2008
†Dirhopalostachyaceae E. Taylor et al. 2008
| Biology and health sciences | Seed plants (except flowering plants) | Plants |
33816021 | https://en.wikipedia.org/wiki/Danionin | Danionin | The danionins are a group of small, minnow-type fish belonging to the family Cyprinidae. Species of this group are in the genera clades Danio and Devario (which also includes Chela, Laubuka, Microdevario, and Microrasbora genera), based on the latest phylo-genetic research by Fang et al in 2009. They are primarily native to the fresh waters of South and Southeast Asia, with fewer species in Africa. Many species are brightly coloured and are available as aquarium fish worldwide. Fishes of the danio clade tend to have horizontal stripes, rows of spots, or vertical bars, and often have long barbels. Species within the devario clade tend to have vertical or horizontal bars, and short, rudimentary barbels, if present at all. All danionins are egg scatterers, and breed in the rainy season in the wild. They are carnivores, living on insects and small crustaceans.
Fossil record
Two fossil danionins, tentatively assigned to Rasbora ('Rasbora' antiqua and 'Rasbora' mohri), are known from the Eocene of Sumatra, Indonesia, representing the earliest record of the group.
Danionin species
Common names
Since 2004, many new danionins have been discovered, which do not yet have scientific names and many other species, previously known only to the scientific fraternity, have become available in aquarist shops. This has predictably led to total confusion as to the naming of some fish, with some species having up to five different common names in use and some common names being used for up to four different species.
Scientific names
Individual danionin species are listed within the relevant pages for each genus, but many danionin species have been changed into different genera over the last decades, in some cases repeatedly; similarly, some species have been synonymised with other species and in some cases later unsynonymised, all of which has caused confusion.
List of genera
In the aquarium
They are generally active swimmers, occupying the top half of a tank and eat just about any type of aquarium food. They will not, however, generally eat plants or algae. Although boisterous and liable to chase each other and other fish, they are good community fish and do not generally attack each other or other fish, although they occasionally nip fins, and like most fish, eat eggs and any fish small enough to fit into their mouths.
These fish are easily stressed by flowing water and bright light. They occur in stagnant water with pH values between 3 and 5 caused by peat, which accumulates from a dense canopy. Generally, this also means they are subtropical fish, with moderate temperatures often being fine; they are good jumpers, so a tight-fitting lid is recommended.
Taxonomy
The grouping of fish now deemed danionins was the subject of constant research and speculation throughout the 20th century. Nearly all the fish classed within the genera Danio and Devario were originally placed in the genus Danio upon discovery. However, in the first part of the 20th century, George S. Myers split them into three genera, Danio, Brachydanio, and Daniops. The sole species within Myers' Daniops, D. myersi, was long ago found to be a synonym of Devario laoensis, but his genus Brachydanio lasted much longer, as it included most of the fish now classed as Danio, whereas Danio included most of the fish now classed as Devario.
However, Danio dangila and Danio feegradei, both of which had most of the characteristics of Brachydanio (with the exception that they were much larger than Brachydanio species), were placed within Danio. (Due to this and other misplacements, both Danio and Brachydanio were found to be paraphyletic by Fang Fang in 2003.) In 1941, H.M. Smith attempted to unite all the Brachydanio and Danio species into one genus on the basis of a fish from Thailand, which was supposed to bridge the gap. He downgraded both Danio and Brachydanio to subgenera and erected a new subgenus, Allodanio, with one member, Allodanio ponticulus, but Myers later pointed out that A. ponticulus was actually a member of the genus Barilius.
The danionin group was thought to include Parabarilius, Danio, Brachydanio, and Danionella. In this scheme, danionins were distinguished from other cyprinids by the uniquely shared character of the "danionin notch", a large and peculiarly shaped indentation in the medial margin of the mandibles; this feature is not noted in rasborins, esomins, bariliins, or chelins. However, all of these categories at that time were informal. Microrasbora was not considered to be a part of the danionins, nor even closely related to Danionella, which was part of the danionins as understood at that time.
In the late 1980s and 1990s, doubts grew about the validity of Brachydanio, with species being referred to their original naming of Danio, and Fang Fang determined that the genus Danio, recognized up to that point, was paraphyletic. Fang Fang restricted Danio to the species in the "D. dangila species group", which at the time comprised nine species, including D. dangila, D. rerio, D. nigrofasciatus, and D. albolineatus; the remaining Danio species were moved to Devario, which at this time included D. malabaricus, D. kakhienensis, D. devario, D. chrysotaeniatus, D. maetaengensis, D. interruptus, and D. apogon.
The only Danio species to have been consistently called Danio were D. dangila and D. feegradei. As D. dangila was the first-described Danio (the type species), the name Danio had to remain with D. dangila, which is why the vast majority of species were moved to Devario.
Also, the sister group to Devario was deemed to be a clade formed by Inlecypris and Chela, and more controversially, Esomus was found to be the sister group of Danio. The relationships of Sundadanio, Danionella, and Microrasbora remained unresolved. The danionin notch was found not to be supported as a danionin synapomorphy.
In another paper, Celestichthys margaritatus was described as a new species of the Danioninae. Apparently, it is most closely related to Microrasbora erythromicron; the other Microrasbora species differ significantly from Celestichthys. The genus is identified as a danionin due to specializations of its lower jaw and its numerous anal fin rays. Though it lacks a danionin notch, Celestichthys exhibits the "danionin mandibular knob", a bony process on the side of the mandible behind the danionin notch or where the notch should be; it is perhaps diagnostic of danionins. This knob is better developed in males than females. The fish of Rasborinae almost invariably have anal fins with three spines and five rays. Celestichthys has three anal fin spines and 8–10 anal fin rays. Also, rasborins have the generalized cyprinid principal caudal fin ray count of 10/9, while Asian cyprinids with fewer than 10/9 principal caudal fin rays are all diminutive species of Danioninae, including Celestichthys, M. erythromicron, Danionella, and Paedocypris.
In 2007, an analysis of the phylogenetic relationships of the recently described genus Paedocypris was published, placing it as the sister taxon to Sundadanio. The clade formed by these two genera was found to be sister to a clade including many danionin genera, as well as some rasborin genera such as Rasbora, Trigonostigma, and Boraras, making the danionin group paraphyletic without these rasborin genera based on these results. This paper considered the danionin genera to be within a larger Rasborinae.
Also in 2007, another study analyzed the relationships of Danio. These authors considered Rasborinae to have priority over Danioninae, suggesting that they have the same meaning. Also, Danio was found to be the sister group of a clade including Chela, Microrasbora, Devario, and Inlecypris, rather than in a clade exclusively with either Devario or Esomus as in previous studies. This paper supported the close relationship of "Microrasbora" erythromicron to Danio species; however, this study did not include Celestichthys, which was noted by Roberts as being likely to include Erythromicron, but with further research needed.
In 2009 and 2010, detailed mitochondrial and nuclear DNA analyses of the phylogenetic interrelationships of the danionins were published by Fang et al. (Zoologica Scripta, 38, 3, 2009) and Tang et al. (Molecular Phylogenetics and Evolution, 57, 2010). These two significant studies confirmed or established several taxonomic standings: the danionin tribe (of the cyprinid subfamily Danioninae) was reduced to three clades - danio (the genera Danio and Danionella), devario (the genera Devario, Chela, Laubuka, and Microdevario), and a combined Esomus/Paedocypris/Sundadanio clade. Rasbora and related genera were excluded. The celestial pearl danio / galaxy rasbora, popular in the aquarium trade, was confirmed as Danio margaritatus, most closely related to D. erythromicron and next to D. choprae. Within the devario clade, Microdevario was erected for all but one of the former Microrasbora species. There is still uncertainty over the exact relationships of the three miniature taxa Danionella, Paedocypris, and Sundadanio, as all three have shown variability in phylogenetic position among different studies, both molecular and morphological. Tanichthys is often regarded as a danionin by aquarists and grouped as such in some older aquatic publications, but no scientific basis exists for this, a fact stated on numerous occasions by Brittan and others; it is more closely related to the Rasbora species. The danionins can be classed as a subfamily, Danioninae, which is increasingly gaining credibility as a subfamily distinct from the Rasborinae within the family Cyprinidae. However, in Nelson (2006), Danioninae was listed as a synonym of Rasborinae, and neither the inter- nor the intrarelationships among the "rasborins" have as yet been thoroughly analyzed.
A number of the species have only been recently discovered, in remote inland areas of Laos and Myanmar, and do not yet have scientific names. They are listed as Danio or Devario sp. "xxxx" within the relevant genera and disambiguation pages.
| Biology and health sciences | Cypriniformes | Animals |
43507260 | https://en.wikipedia.org/wiki/Quantifier%20%28logic%29 | Quantifier (logic) | In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable.
The most commonly used quantifiers are ∀ and ∃. These quantifiers are standardly defined as duals; in classical logic, they are interdefinable using negation. They can also be used to define more complex quantifiers, as in a formula such as ¬∃x P(x), which expresses that nothing has the property P. Other quantifiers are only definable within second-order logic or higher-order logics. Quantifiers have been generalized beginning with the work of Mostowski and Lindström.
In a first-order logic statement, quantifications in the same type (either universal quantifications or existential quantifications) can be exchanged without changing the meaning of the statement, while the exchange of quantifications in different types changes the meaning. As an example, the only difference in the definition of uniform continuity and (ordinary) continuity is the order of quantifications.
First order quantifiers approximate the meanings of some natural language quantifiers such as "some" and "all". However, many natural language quantifiers can only be analyzed in terms of generalized quantifiers.
Relations to logical conjunction and disjunction
For a finite domain of discourse D = {a1, ..., an}, the universally quantified formula ∀x∈D P(x) is equivalent to the logical conjunction P(a1) ∧ ... ∧ P(an).
Dually, the existentially quantified formula ∃x∈D P(x) is equivalent to the logical disjunction P(a1) ∨ ... ∨ P(an).
For example, if B = {0, 1} is the set of binary digits, a formula such as ∀x∈B (x·x = x) abbreviates (0·0 = 0) ∧ (1·1 = 1), which evaluates to true.
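As an illustration (not part of the source article), the finite-domain equivalences can be checked directly in Python; the domain and the predicate P used here are arbitrary choices for the example.

B = {0, 1}                       # the set of binary digits

def P(x):
    return x * x == x            # an example property that happens to hold for 0 and 1

# Universal quantification over a finite domain is a conjunction ...
assert all(P(x) for x in B) == (P(0) and P(1))
# ... and existential quantification is a disjunction.
assert any(P(x) for x in B) == (P(0) or P(1))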
Infinite domain of discourse
Consider the following statement (using dot notation for multiplication):
1 · 2 = 1 + 1, and 2 · 2 = 2 + 2, and 3 · 2 = 3 + 3, ..., and 100 · 2 = 100 + 100, and ..., etc.
This has the appearance of an infinite conjunction of propositions. From the point of view of formal languages, this is immediately a problem, since syntax rules are expected to generate finite words.
The example above is fortunate in that there is a procedure to generate all the conjuncts. However, if an assertion were to be made about every irrational number, there would be no way to enumerate all the conjuncts, since irrationals cannot be enumerated. A succinct, equivalent formulation which avoids these problems uses universal quantification:
For each natural number n, n · 2 = n + n.
A similar analysis applies to the disjunction,
1 is equal to 5 + 5, or 2 is equal to 5 + 5, or 3 is equal to 5 + 5, ... , or 100 is equal to 5 + 5, or ..., etc.
which can be rephrased using existential quantification:
For some natural number n, n is equal to 5+5.
Algebraic approaches to quantification
It is possible to devise abstract algebras whose models include formal languages with quantification, but progress has been slow and interest in such algebra has been limited. Three approaches have been devised to date:
Relation algebra, invented by Augustus De Morgan, and developed by Charles Sanders Peirce, Ernst Schröder, Alfred Tarski, and Tarski's students. Relation algebra cannot represent any formula with quantifiers nested more than three deep. Surprisingly, the models of relation algebra include the axiomatic set theory ZFC and Peano arithmetic;
Cylindric algebra, devised by Alfred Tarski, Leon Henkin, and others;
The polyadic algebra of Paul Halmos.
Notation
The two most common quantifiers are the universal quantifier and the existential quantifier. The traditional symbol for the universal quantifier is "∀", a rotated letter "A", which stands for "for all" or "all". The corresponding symbol for the existential quantifier is "∃", a rotated letter "E", which stands for "there exists" or "exists".
An example of translating a quantified statement in a natural language such as English would be as follows. Given the statement, "Each of Peter's friends either likes to dance or likes to go to the beach (or both)", key aspects can be identified and rewritten using symbols including quantifiers. So, let X be the set of all Peter's friends, P(x) the predicate "x likes to dance", and Q(x) the predicate "x likes to go to the beach". Then the above sentence can be written in formal notation as ∀x∈X (P(x) ∨ Q(x)), which is read, "for every x that is a member of X, P applies to x or Q applies to x".
Some other quantified expressions are constructed as follows: ∃x∈X P(x) and ∀x∈X P(x),
for a formula P. These two expressions (using the definitions above) are read as "there exists a friend of Peter who likes to dance" and "all friends of Peter like to dance", respectively.
Variant notations include, for set X and set members x:
All of these variations also apply to universal quantification.
Other variations for the universal quantifier are
Some versions of the notation explicitly mention the range of quantification. The range of quantification must always be specified; for a given mathematical theory, this can be done in several ways:
Assume a fixed domain of discourse for every quantification, as is done in Zermelo–Fraenkel set theory,
Fix several domains of discourse in advance and require that each variable have a declared domain, which is the type of that variable. This is analogous to the situation in statically typed computer programming languages, where variables have declared types.
Mention explicitly the range of quantification, perhaps using a symbol for the set of all objects in that domain (or the type of the objects in that domain).
One can use any variable as a quantified variable in place of any other, under certain restrictions in which variable capture does not occur. Even if the notation uses typed variables, variables of that type may be used.
Informally or in natural language, the "∀x" or "∃x" might appear after or in the middle of P(x). Formally, however, the phrase that introduces the dummy variable is placed in front.
Mathematical formulas mix symbolic expressions for quantifiers with natural language quantifiers such as,
For every natural number x, ...
There exists an x such that ...
For at least one x, ....
Keywords for uniqueness quantification include:
For exactly one natural number x, ...
There is one and only one x such that ....
Further, x may be replaced by a pronoun. For example,
For every natural number, its product with 2 equals to its sum with itself.
Some natural number is prime.
Order of quantifiers (nesting)
The order of quantifiers is critical to meaning, as is illustrated by the following two propositions:
For every natural number n, there exists a natural number s such that s = n².
This is clearly true; it just asserts that every natural number has a square. The meaning of the assertion in which the order of quantifiers is reversed is different:
There exists a natural number s such that for every natural number n, s = n².
This is clearly false; it asserts that there is a single natural number s that is the square of every natural number. This is because the syntax directs that any variable cannot be a function of subsequently introduced variables.
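The difference can be demonstrated mechanically; the following Python sketch (not from the source) restricts both statements to small finite ranges chosen purely for illustration.

N = range(10)                     # a finite stand-in for the natural numbers n
S = range(100)                    # candidate values of s, large enough to hold every n**2

# "For every n there exists an s such that s = n^2" -- true on these ranges.
forall_exists = all(any(s == n ** 2 for s in S) for n in N)

# "There exists an s such that for every n, s = n^2" -- false.
exists_forall = any(all(s == n ** 2 for n in N) for s in S)

print(forall_exists, exists_forall)   # True False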
A less trivial example from mathematical analysis regards the concepts of uniform and pointwise continuity, whose definitions differ only by an exchange in the positions of two quantifiers. A function f from R to R is called
Pointwise continuous if ∀ε > 0 ∀x ∈ R ∃δ > 0 ∀h ∈ R (|h| < δ → |f(x + h) − f(x)| < ε)
Uniformly continuous if ∀ε > 0 ∃δ > 0 ∀x ∈ R ∀h ∈ R (|h| < δ → |f(x + h) − f(x)| < ε)
In the former case, the particular value chosen for δ can be a function of both ε and x, the variables that precede it.
In the latter case, δ can be a function only of ε (i.e., it has to be chosen independently of x). For example, f(x) = x² satisfies pointwise, but not uniform, continuity (its slope is unbounded).
In contrast, interchanging the two initial universal quantifiers in the definition of pointwise continuity does not change the meaning.
As a general rule, swapping two adjacent universal quantifiers with the same scope (or swapping two adjacent existential quantifiers with the same scope) doesn't change the meaning of the formula (see Example here), but swapping an existential quantifier and an adjacent universal quantifier may change its meaning.
The maximum depth of nesting of quantifiers in a formula is called its "quantifier rank".
Equivalent expressions
If D is a domain of x and P(x) is a predicate dependent on object variable x, then the universal proposition can be expressed as ∀x∈D P(x).
This notation is known as restricted or relativized or bounded quantification. Equivalently one can write ∀x (x ∈ D → P(x)).
The existential proposition can be expressed with bounded quantification as ∃x∈D P(x),
or equivalently ∃x (x ∈ D ∧ P(x)).
Together with negation, only one of either the universal or existential quantifier is needed to perform both tasks: ¬∀x P(x) is equivalent to ∃x ¬P(x),
which shows that to disprove a "for all x" proposition, one needs no more than to find an x for which the predicate is false. Similarly, ¬∃x P(x) is equivalent to ∀x ¬P(x):
to disprove a "there exists an x" proposition, one needs to show that the predicate is false for all x.
In classical logic, every formula is logically equivalent to a formula in prenex normal form, that is, a string of quantifiers and bound variables followed by a quantifier-free formula.
Quantifier elimination
Range of quantification
Every quantification involves one specific variable and a domain of discourse or range of quantification of that variable. The range of quantification specifies the set of values that the variable takes. In the examples above, the range of quantification is the set of natural numbers. Specification of the range of quantification allows us to express the difference between, say, asserting that a predicate holds for some natural number or for some real number. Expository conventions often reserve some variable names such as "n" for natural numbers, and "x" for real numbers, although relying exclusively on naming conventions cannot work in general, since ranges of variables can change in the course of a mathematical argument.
A universally quantified formula over an empty range (like ) is always vacuously true. Conversely, an existentially quantified formula over an empty range (like ) is always false.
A more natural way to restrict the domain of discourse uses guarded quantification. For example, the guarded quantification
For some natural number n, n is even and n is prime
means
For some even number n, n is prime.
In some mathematical theories, a single domain of discourse fixed in advance is assumed. For example, in Zermelo–Fraenkel set theory, variables range over all sets. In this case, guarded quantifiers can be used to mimic a smaller range of quantification. Thus in the example above, to express
For every natural number n, n·2 = n + n
in Zermelo–Fraenkel set theory, one would write
For every n, if n belongs to N, then n·2 = n + n,
where N is the set of all natural numbers.
Formal semantics
Mathematical semantics is the application of mathematics to study the meaning of expressions in a formal language. It has three elements: a mathematical specification of a class of objects via syntax, a mathematical specification of various semantic domains and the relation between the two, which is usually expressed as a function from syntactic objects to semantic ones. This article only addresses the issue of how quantifier elements are interpreted.
The syntax of a formula can be given by a syntax tree. A quantifier has a scope, and an occurrence of a variable x is free if it is not within the scope of a quantification for that variable. Thus in
the occurrence of both x and y in C(y, x) is free, while the occurrence of x and y in B(y, x) is bound (i.e. non-free).
An interpretation for first-order predicate calculus assumes as given a domain of individuals X. A formula A whose free variables are x1, ..., xn is interpreted as a Boolean-valued function F(v1, ..., vn) of n arguments, where each argument ranges over the domain X. Boolean-valued means that the function assumes one of the values T (interpreted as truth) or F (interpreted as falsehood). The interpretation of the formula
∀xn A is the function G of n-1 arguments such that G(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for every w in X. If F(v1, ..., vn-1, w) = F for at least one value of w, then G(v1, ..., vn-1) = F. Similarly the interpretation of the formula
∃xn A is the function H of n-1 arguments such that H(v1, ..., vn-1) = T if and only if F(v1, ..., vn-1, w) = T for at least one w and H(v1, ..., vn-1) = F otherwise.
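For a finite domain, the interpretation just described can be phrased as a small Python program; this is an illustrative sketch, and the helper names forall_last and exists_last are invented here rather than taken from the source.

def forall_last(F, X):
    """G(v1, ..., vn-1) = T iff F(v1, ..., vn-1, w) = T for every w in X."""
    return lambda *vs: all(F(*vs, w) for w in X)

def exists_last(F, X):
    """H(v1, ..., vn-1) = T iff F(v1, ..., vn-1, w) = T for at least one w in X."""
    return lambda *vs: any(F(*vs, w) for w in X)

# Example: F(x, y) = "x divides y" interpreted over the domain X = {1, ..., 6}.
X = range(1, 7)
F = lambda x, y: y % x == 0
print(forall_last(F, X)(1))       # True: 1 divides every element of X
print(forall_last(F, X)(2))       # False: 2 does not divide 3
print(exists_last(F, X)(4))       # True: 4 divides 4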
The semantics for uniqueness quantification requires first-order predicate calculus with equality. This means there is given a distinguished two-placed predicate "="; the semantics is also modified accordingly so that "=" is always interpreted as the two-place equality relation on X. The interpretation of
then is the function of n-1 arguments, which is the logical and of the interpretations of
Each kind of quantification defines a corresponding closure operator on the set of formulas, by adding, for each free variable x, a quantifier to bind x. For example, the existential closure of the open formula n > 2 ∧ xⁿ + yⁿ = zⁿ is the closed formula ∃n ∃x ∃y ∃z (n > 2 ∧ xⁿ + yⁿ = zⁿ); the latter formula, when interpreted over the positive integers, is known to be false by Fermat's Last Theorem. As another example, equational axioms, like x + y = y + x, are usually meant to denote their universal closure, like ∀x ∀y (x + y = y + x) to express commutativity.
Paucal, multal and other degree quantifiers
None of the quantifiers previously discussed apply to a quantification such as
There are many integers n < 100, such that n is divisible by 2 or 3 or 5.
One possible interpretation mechanism can be obtained as follows: Suppose that in addition to a semantic domain X, we have given a probability measure P defined on X and cutoff numbers 0 < a ≤ b ≤ 1. If A is a formula with free variables x1,...,xn whose interpretation is
the function F of variables v1,...,vn
then the interpretation of
is the function of v1,...,vn-1 which is T if and only if
and F otherwise. Similarly, the interpretation of
is the function of v1,...,vn-1 which is F if and only if
and T otherwise.
Other quantifiers
A few other quantifiers have been proposed over time. In particular, the solution quantifier, noted § (section sign) and read "those". For example,
is read "those n in N such that n2 ≤ 4 are in {0,1,2}." The same construct is expressible in set-builder notation as
Contrary to the other quantifiers, § yields a set rather than a formula.
Some other quantifiers sometimes used in mathematics include:
There are infinitely many elements such that...
For all but finitely many elements... (sometimes expressed as "for almost all elements...").
There are uncountably many elements such that...
For all but countably many elements...
For all elements in a set of positive measure...
For all elements except those in a set of measure zero...
History
Term logic, also called Aristotelian logic, treats quantification in a manner that is closer to natural language, and also less suited to formal analysis. It treated All, Some and No in the 4th century BC, in an account also touching on the alethic modalities.
In 1827, George Bentham published his Outline of a New System of Logic: With a Critical Examination of Dr. Whately's Elements of Logic, describing the principle of the quantifier, but the book was not widely circulated.
William Hamilton claimed to have coined the terms "quantify" and "quantification", most likely in his Edinburgh lectures c. 1840. Augustus De Morgan confirmed this in 1847, but modern usage began with De Morgan in 1862 where he makes statements such as "We are to take in both all and some-not-all as quantifiers".
Gottlob Frege, in his 1879 Begriffsschrift, was the first to employ a quantifier to bind a variable ranging over a domain of discourse and appearing in predicates. He would universally quantify a variable (or relation) by writing the variable over a dimple in an otherwise straight line appearing in his diagrammatic formulas. Frege did not devise an explicit notation for existential quantification, instead employing his equivalent of ~∀x~, or contraposition. Frege's treatment of quantification went largely unremarked until Bertrand Russell's 1903 Principles of Mathematics.
In work that culminated in Peirce (1885), Charles Sanders Peirce and his student Oscar Howard Mitchell independently invented universal and existential quantifiers, and bound variables. Peirce and Mitchell wrote Πx and Σx where we now write ∀x and ∃x. Peirce's notation can be found in the writings of Ernst Schröder, Leopold Loewenheim, Thoralf Skolem, and Polish logicians into the 1950s. Most notably, it is the notation of Kurt Gödel's landmark 1930 paper on the completeness of first-order logic, and 1931 paper on the incompleteness of Peano arithmetic.
Peirce's approach to quantification also influenced William Ernest Johnson and Giuseppe Peano, who invented yet another notation, namely (x) for the universal quantification of x and (in 1897) ∃x for the existential quantification of x. Hence for decades, the canonical notation in philosophy and mathematical logic was (x)P to express "all individuals in the domain of discourse have the property P," and "(∃x)P" for "there exists at least one individual in the domain of discourse having the property P." Peano, who was much better known than Peirce, in effect diffused the latter's thinking throughout Europe. Peano's notation was adopted by the Principia Mathematica of Whitehead and Russell, Quine, and Alonzo Church. In 1935, Gentzen introduced the ∀ symbol, by analogy with Peano's ∃ symbol. ∀ did not become canonical until the 1960s.
Around 1895, Peirce began developing his existential graphs, whose variables can be seen as tacitly quantified. Whether the shallowest instance of a variable is even or odd determines whether that variable's quantification is universal or existential. (Shallowness is the contrary of depth, which is determined by the nesting of negations.) Peirce's graphical logic has attracted some attention in recent years by those researching heterogeneous reasoning and diagrammatic inference.
| Mathematics | Mathematical logic | null |
46659847 | https://en.wikipedia.org/wiki/Heavy%20metals | Heavy metals | Heavy metals is a controversial and ambiguous term for metallic elements with relatively high densities, atomic weights, or atomic numbers. The criteria used, and whether metalloids are included, vary depending on the author and context, and it has been argued that the term should not be used at all. A heavy metal may be defined on the basis of density, atomic number or chemical behaviour. More specific definitions have been published, none of which have been widely accepted. The definitions surveyed in this article encompass up to 96 out of the 118 known chemical elements; only mercury, lead and bismuth meet all of them. Despite this lack of agreement, the term (plural or singular) is widely used in science. A density of more than 5 g/cm3 is sometimes quoted as a commonly used criterion and is used in the body of this article.
The earliest-known metals—common metals such as iron, copper, and tin, and precious metals such as silver, gold, and platinum—are heavy metals. From 1809 onward, light metals, such as magnesium, aluminium, and titanium, were discovered, as well as less well-known heavy metals including gallium, thallium, and hafnium.
Some heavy metals are either essential nutrients (typically iron, cobalt, copper and zinc), or relatively harmless (such as ruthenium, silver and indium), but can be toxic in larger amounts or certain forms. Other heavy metals, such as arsenic, cadmium, mercury, and lead, are highly poisonous. Potential sources of heavy metal poisoning include mining, tailings, smelting, industrial waste, agricultural runoff, occupational exposure, paints and treated timber.
Physical and chemical characterisations of heavy metals need to be treated with caution, as the metals involved are not always consistently defined. As well as being relatively dense, heavy metals tend to be less reactive than lighter metals and have far fewer soluble sulfides and hydroxides. While it is relatively easy to distinguish a heavy metal such as tungsten from a lighter metal such as sodium, a few heavy metals, such as zinc, mercury, and lead, have some of the characteristics of lighter metals; and lighter metals such as beryllium, scandium, and titanium, have some of the characteristics of heavier metals.
Heavy metals are relatively rare in the Earth's crust but are present in many aspects of modern life. They are used in, for example, golf clubs, cars, antiseptics, self-cleaning ovens, plastics, solar panels, mobile phones, and particle accelerators.
Definitions
Controversial terminology
The International Union of Pure and Applied Chemistry (IUPAC), which standardizes nomenclature, says "the term heavy metals is both meaningless and misleading". The IUPAC report focuses on the legal and toxicological implications of describing "heavy metals" as toxins when there is no scientific evidence to support a connection. The density implied by the adjective "heavy" has almost no biological consequences and pure metals are rarely the biologically active substance.
This characterization has been echoed by numerous reviews. The most widely used toxicology textbook, Casarett and Doull's Toxicology, uses "toxic metal" rather than "heavy metals". Nevertheless, there are scientific and science-related articles which continue to use "heavy metal" as a term for toxic substances. To be an acceptable term in scientific papers, a strict definition has been encouraged.
Use outside toxicology
Even in applications outside toxicology, there is no widely agreed criterion-based definition of a heavy metal, and reviews have recommended that the term not be used. Different meanings may be attached to it depending on the context: a heavy metal may be defined on the basis of density, atomic number, or chemical behaviour.
Density criteria range from above 3.5 g/cm3 to above 7 g/cm3. Atomic weight definitions can range from greater than sodium (atomic weight 22.98); greater than 40 (excluding s- and f-block metals, hence starting with scandium); or more than 200, i.e. from mercury onwards. Atomic numbers are sometimes capped at 92 (uranium). Definitions based on atomic number have been criticised for including metals with low densities. For example, rubidium in group (column) 1 of the periodic table has an atomic number of 37 but a density of only 1.532 g/cm3, which is below the threshold figure used by other authors. The same problem may occur with definitions which are based on atomic weight.
The United States Pharmacopeia includes a test for heavy metals that involves precipitating metallic impurities as their coloured sulfides. On the basis of this type of chemical test, the group would include the transition metals and post-transition metals.
A different chemistry-based approach advocates replacing the term "heavy metal" with two groups of metals and a gray area. Class A metal ions prefer oxygen donors; class B ions prefer nitrogen or sulfur donors; and borderline or ambivalent ions show either class A or B characteristics, depending on the circumstances. The distinction between the class A metals and the other two categories is sharp. The class A and class B terminology is analogous to the "hard acid" and "soft base" terminology sometimes used to refer to the behaviour of metal ions in inorganic systems. The system groups the elements by an index combining the metal ion's electronegativity and its ionic radius. This index gauges the importance of covalent versus ionic interactions for a given metal ion. This scheme has been applied to analyze biologically active metals in sea water, for example, but it has not been widely adopted.
Origins and use of the term
The heaviness of naturally occurring metals such as gold, copper, and iron may have been noticed in prehistory and, in light of their malleability, led to the first attempts to craft metal ornaments, tools, and weapons.
In 1817 the German chemist Leopold Gmelin divided the elements into nonmetals, light metals, and heavy metals. Light metals had densities of 0.860–5.0 g/cm3; heavy metals 5.308–22.000. The term heavy metal is sometimes used interchangeably with the term heavy element. For example, in discussing the history of nuclear chemistry, Magee notes that the actinides were once thought to represent a new heavy element transition group whereas Seaborg and co-workers "favoured ... a heavy metal rare-earth like series ...".
The counterparts to the heavy metals, the light metals, are defined by The Minerals, Metals and Materials Society as including "the traditional (aluminium, magnesium, beryllium, titanium, lithium, and other reactive metals) and emerging light metals (composites, laminates, etc.)"
Biological role
Trace amounts of some heavy metals, mostly in period 4, are required for certain biological processes. These are iron and copper (oxygen and electron transport); cobalt (complex syntheses and cell metabolism); vanadium and manganese (enzyme regulation or functioning); chromium (glucose utilisation); nickel (cell growth); arsenic (metabolic growth in some animals and possibly in humans) and selenium (antioxidant functioning and hormone production). Periods 5 and 6 contain fewer essential heavy metals, consistent with the general pattern that heavier elements tend to be less abundant and that scarcer elements are less likely to be nutritionally essential. In period 5, molybdenum is required for the catalysis of redox reactions; cadmium is used by some marine diatoms for the same purpose; and tin may be required for growth in a few species. In period 6, tungsten is required by some archaea and bacteria for metabolic processes. A deficiency of any of these period 4–6 essential heavy metals may increase susceptibility to heavy metal poisoning (conversely, an excess may also have adverse biological effects). An average 70 kg human body is about 0.01% heavy metals (~7 g, equivalent to the weight of two dried peas, with iron at 4 g, zinc at 2.5 g, and lead at 0.12 g comprising the three main constituents), 2% light metals (~1.4 kg, the weight of a bottle of wine) and nearly 98% nonmetals (mostly water).
A few non-essential heavy metals have been observed to have biological effects. Gallium, germanium (a metalloid), indium, and most lanthanides can stimulate metabolism, and titanium promotes growth in plants (though it is not always considered a heavy metal).
Toxicity
Heavy metals are often assumed to be highly toxic or damaging to the environment. Some are, while certain others are toxic only if taken in excess or encountered in certain forms. Inhalation of certain metals, either as fine dust or most commonly as fumes, can also result in a condition called metal fume fever.
Environmental heavy metals
Chromium, arsenic, cadmium, mercury, and lead have the greatest potential to cause harm on account of their extensive use, the toxicity of some of their combined or elemental forms, and their widespread distribution in the environment. Hexavalent chromium, for example, is highly toxic as are mercury vapour and many mercury compounds. These five elements have a strong affinity for sulfur; in the human body they usually bind, via thiol groups (–SH), to enzymes responsible for controlling the speed of metabolic reactions. The resulting sulfur-metal bonds inhibit the proper functioning of the enzymes involved; human health deteriorates, sometimes fatally. Chromium (in its hexavalent form) and arsenic are carcinogens; cadmium causes a degenerative bone disease; and mercury and lead damage the central nervous system.
Lead is the most prevalent heavy metal contaminant. Levels in the aquatic environments of industrialised societies have been estimated to be two to three times those of pre-industrial levels. As a component of tetraethyl lead, Pb(C2H5)4, it was used extensively in gasoline from the 1930s until the 1970s. Although the use of leaded gasoline was largely phased out in North America by 1996, soils next to roads built before this time retain high lead concentrations. Later research demonstrated a statistically significant correlation between the usage rate of leaded gasoline and violent crime in the United States; taking into account a 22-year time lag (for the average age of violent criminals), the violent crime curve virtually tracked the lead exposure curve.
Other heavy metals noted for their potentially hazardous nature, usually as toxic environmental pollutants, include manganese (central nervous system damage); cobalt and nickel (carcinogens); copper, zinc, selenium and silver (endocrine disruption, congenital disorders, or general toxic effects in fish, plants, birds, or other aquatic organisms); tin, as organotin (central nervous system damage); antimony (a suspected carcinogen); and thallium (central nervous system damage).
Other heavy metals
A few other non-essential heavy metals have one or more toxic forms. Kidney failure and fatalities have been recorded arising from the ingestion of germanium dietary supplements (~15 to 300 g in total consumed over a period of two months to three years). Exposure to osmium tetroxide (OsO4) may cause permanent eye damage and can lead to respiratory failure and death. Indium salts are toxic if more than a few milligrams are ingested and will affect the kidneys, liver, and heart. Cisplatin (PtCl2(NH3)2), an important drug used to kill cancer cells, is also a kidney and nerve poison. Bismuth compounds can cause liver damage if taken in excess; insoluble uranium compounds, as well as the dangerous radiation they emit, can cause permanent kidney damage.
Exposure sources
Heavy metals can degrade air, water, and soil quality, and subsequently cause health issues in plants, animals, and people, when they become concentrated as a result of industrial activities. Common sources of heavy metals in this context include vehicle emissions; motor oil; fertilisers; glassworking; incinerators; treated timber; aging water supply infrastructure; and microplastics floating in the world's oceans. Recent examples of heavy metal contamination and health risks include the occurrence of Minamata disease in Japan (1932–1968; lawsuits ongoing as of 2016); the Bento Rodrigues dam disaster in Brazil; high levels of lead in drinking water supplied to the residents of Flint, Michigan, in the Midwestern United States; and the 2015 Hong Kong incidents of heavy metals in drinking water.
Formation, abundance, occurrence, and extraction
Heavy metals up to the vicinity of iron (in the periodic table) are largely made via stellar nucleosynthesis. In this process, lighter elements from hydrogen to silicon undergo successive fusion reactions inside stars, releasing light and heat and forming heavier elements with higher atomic numbers.
Heavier heavy metals are not usually formed this way since fusion reactions involving such nuclei would consume rather than release energy. Rather, they are largely synthesised (from elements with a lower atomic number) by neutron capture, with the two main modes of this repetitive capture being the s-process and the r-process. In the s-process ("s" stands for "slow"), singular captures are separated by years or decades, allowing the less stable nuclei to beta decay, while in the r-process ("rapid"), captures happen faster than nuclei can decay. Therefore, the s-process takes a more or less clear path: for example, stable cadmium-110 nuclei are successively bombarded by free neutrons inside a star until they form cadmium-115 nuclei which are unstable and decay to form indium-115 (which is nearly stable, with a half-life vastly exceeding the age of the universe). These nuclei capture neutrons and form indium-116, which is unstable, and decays to form tin-116, and so on. In contrast, there is no such path in the r-process. The s-process stops at bismuth due to the short half-lives of the next two elements, polonium and astatine, which decay to bismuth or lead. The r-process is so fast it can skip this zone of instability and go on to create heavier elements such as thorium and uranium.
Heavy metals condense in planets as a result of stellar evolution and destruction processes. Stars lose much of their mass when it is ejected late in their lifetimes, and sometimes thereafter as a result of a neutron star merger, thereby increasing the abundance of elements heavier than helium in the interstellar medium. When gravitational attraction causes this matter to coalesce and collapse, new stars and planets are formed.
The Earth's crust is made of approximately 5% of heavy metals by weight, with iron comprising 95% of this quantity. Light metals (~20%) and nonmetals (~75%) make up the other 95% of the crust. Despite their overall scarcity, heavy metals can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes.
Heavy metals are found primarily as lithophiles (rock-loving) or chalcophiles (ore-loving). Lithophile heavy metals are mainly f-block elements and the more reactive of the d-block elements. They have a strong affinity for oxygen and mostly exist as relatively low density silicate minerals. Chalcophile heavy metals are mainly the less reactive d-block elements, and period 4–6 p-block metals and metalloids. They are usually found in (insoluble) sulfide minerals. Being denser than the lithophiles, hence sinking lower into the crust at the time of its solidification, the chalcophiles tend to be less abundant than the lithophiles.
In contrast, gold is a siderophile, or iron-loving element. It does not readily form compounds with either oxygen or sulfur. At the time of the Earth's formation, and as the most noble (inert) of metals, gold sank into the core due to its tendency to form high-density metallic alloys. Consequently, it is a relatively rare metal. Some other (less) noble heavy metals—molybdenum, rhenium, the platinum group metals (ruthenium, rhodium, palladium, osmium, iridium, and platinum), germanium, and tin—can be counted as siderophiles but only in terms of their primary occurrence in the Earth (core, mantle and crust), rather than the crust. These metals otherwise occur in the crust, in small quantities, chiefly as chalcophiles (less so in their native form).
Concentrations of heavy metals below the crust are generally higher, with most being found in the largely iron-silicon-nickel core. Platinum, for example, comprises approximately 1 part per billion of the crust whereas its concentration in the core is thought to be nearly 6,000 times higher. Recent speculation suggests that uranium (and thorium) in the core may generate a substantial amount of the heat that drives plate tectonics and (ultimately) sustains the Earth's magnetic field.
Broadly speaking, and with some exceptions, lithophile heavy metals can be extracted from their ores by electrical or chemical treatments, while chalcophile heavy metals are obtained by roasting their sulphide ores to yield the corresponding oxides, and then heating these to obtain the raw metals. Radium occurs in quantities too small to be economically mined and is instead obtained from spent nuclear fuels. The chalcophile platinum group metals (PGM) mainly occur in small (mixed) quantities with other chalcophile ores. The ores involved need to be smelted, roasted, and then leached with sulfuric acid to produce a residue of PGM. This is chemically refined to obtain the individual metals in their pure forms. Compared to other metals, PGM are expensive due to their scarcity and high production costs.
Gold, a siderophile, is most commonly recovered by dissolving the ores in which it is found in a cyanide solution. The gold forms a dicyanoaurate(I), for example: 2 Au + H2O +½ O2 + 4 KCN → 2 K[Au(CN)2] + 2 KOH. Zinc is added to the mix and, being more reactive than gold, displaces the gold: 2 K[Au(CN)2] + Zn → K2[Zn(CN)4] + 2 Au. The gold precipitates out of solution as a sludge, and is filtered off and melted.
Uses
Some common uses of heavy metals depend on the general characteristics of metals such as electrical conductivity and reflectivity or the general characteristics of heavy metals such as density, strength, and durability. Other uses depend on the characteristics of the specific element, such as their biological role as nutrients or poisons or some other specific atomic properties. Examples of such atomic properties include: partly filled d- or f-orbitals (in many of the transition, lanthanide, and actinide heavy metals) that enable the formation of coloured compounds; the capacity of heavy metal ions (such as platinum, cerium or bismuth) to exist in different oxidation states, which is exploited in catalysts; strong exchange interactions in 3d or 4f orbitals (in iron, cobalt, and nickel, or the lanthanide heavy metals) that give rise to magnetic effects; and high atomic numbers and electron densities that underpin their nuclear science applications. Typical uses of heavy metals can be broadly grouped into the following categories.
Weight- or density-based
Some uses of heavy metals, including in sport, mechanical engineering, military ordnance, and nuclear science, take advantage of their relatively high densities. In underwater diving, lead is used as a ballast; in handicap horse racing each horse must carry a specified lead weight, based on factors including past performance, so as to equalize the chances of the various competitors. In golf, tungsten, brass, or copper inserts in fairway clubs and irons lower the centre of gravity of the club making it easier to get the ball into the air; and golf balls with tungsten cores are claimed to have better flight characteristics. In fly fishing, sinking fly lines have a PVC coating embedded with tungsten powder, so that they sink at the required rate. In track and field sport, steel balls used in the hammer throw and shot put events are filled with lead in order to attain the minimum weight required under international rules. Tungsten was used in hammer throw balls at least up to 1980; the minimum size of the ball was increased in 1981 to eliminate the need for what was, at that time, an expensive metal (triple the cost of other hammers) not generally available in all countries. Tungsten hammers were so dense that they penetrated too deeply into the turf.
Heavy metals are used for ballast in boats, aeroplanes, and motor vehicles; or in balance weights on wheels and crankshafts, gyroscopes, and propellers, and centrifugal clutches, in situations requiring maximum weight in minimum space (for example in watch movements).
In military ordnance, tungsten or uranium is used in armour plating and armour piercing projectiles, as well as in nuclear weapons to increase efficiency (by reflecting neutrons and momentarily delaying the expansion of reacting materials). In the 1970s, tantalum was found to be more effective than copper in shaped charge and explosively formed anti-armour weapons on account of its higher density, allowing greater force concentration, and better deformability. Less-toxic heavy metals, such as copper, tin, tungsten, and bismuth, and probably manganese (as well as boron, a metalloid), have replaced lead and antimony in the green bullets used by some armies and in some recreational shooting munitions. Doubts have been raised about the safety (or green credentials) of tungsten.
Biological and chemical
The biocidal effects of some heavy metals have been known since antiquity. Platinum, osmium, copper, ruthenium, and other heavy metals, including arsenic, are used in anti-cancer treatments, or have shown potential. Antimony (anti-protozoal), bismuth (anti-ulcer), gold (anti-arthritic), and iron (anti-malarial) are also important in medicine. Copper, zinc, silver, gold, or mercury are used in antiseptic formulations; small amounts of some heavy metals are used to control algal growth in, for example, cooling towers. Depending on their intended use as fertilisers or biocides, agrochemicals may contain heavy metals such as chromium, cobalt, nickel, copper, zinc, arsenic, cadmium, mercury, or lead.
Selected heavy metals are used as catalysts in fuel processing (rhenium, for example), synthetic rubber and fibre production (bismuth), emission control devices (palladium and platinum), and in self-cleaning ovens (where cerium(IV) oxide in the walls of such ovens helps oxidise carbon-based cooking residues). In soap chemistry, heavy metals form insoluble soaps that are used in lubricating greases, paint dryers, and fungicides (apart from lithium, the alkali metals and the ammonium ion form soluble soaps).
Colouring and optics
The colours of glass, ceramic glazes, paints, pigments, and plastics are commonly produced by the inclusion of heavy metals (or their compounds) such as chromium, manganese, cobalt, copper, zinc, zirconium, molybdenum, silver, tin, praseodymium, neodymium, erbium, tungsten, iridium, gold, lead, or uranium. Tattoo inks may contain heavy metals, such as chromium, cobalt, nickel, and copper. The high reflectivity of some heavy metals is important in the construction of mirrors, including precision astronomical instruments. Headlight reflectors rely on the excellent reflectivity of a thin film of rhodium.
Electronics, magnets, and lighting
Heavy metals or their compounds can be found in electronic components, electrodes, and wiring and solar panels. Molybdenum powder is used in circuit board inks. Home electrical systems, for the most part, are wired with copper wire for its good conducting properties. Silver and gold are used in electrical and electronic devices, particularly in contact switches, as a result of their high electrical conductivity and capacity to resist or minimise the formation of impurities on their surfaces. Heavy metals have been used in batteries for over 200 years, at least since Volta invented his copper and silver voltaic pile in 1800.
Magnets are often made of heavy metals such as manganese, iron, cobalt, nickel, niobium, bismuth, praseodymium, neodymium, gadolinium, and dysprosium. Neodymium magnets are the strongest type of permanent magnet commercially available. They are key components of, for example, car door locks, starter motors, fuel pumps, and power windows.
Heavy metals are used in lighting, lasers, and light-emitting diodes (LEDs). Fluorescent lighting relies on mercury vapour for its operation. Ruby lasers generate deep red beams by exciting chromium atoms in aluminum oxide; the lanthanides are also extensively employed in lasers. Copper, iridium, and platinum are used in organic LEDs.
Nuclear
Because denser materials absorb more of certain types of radioactive emissions such as gamma rays than lighter ones, heavy metals are useful for radiation shielding and to focus radiation beams in linear accelerators and radiotherapy applications.
Niche uses of heavy metals with high atomic numbers occur in diagnostic imaging, electron microscopy, and nuclear science. In diagnostic imaging, heavy metals such as cobalt or tungsten make up the anode materials found in x-ray tubes. In electron microscopy, heavy metals such as lead, gold, palladium, platinum, or uranium have been used in the past to make conductive coatings and to introduce electron density into biological specimens by staining, negative staining, or vacuum deposition. In nuclear science, nuclei of heavy metals such as chromium, iron, or zinc are sometimes fired at other heavy metal targets to produce superheavy elements; heavy metals are also employed as spallation targets for the production of neutrons or isotopes of non-primordial elements such as astatine (using lead, bismuth, thorium, or uranium in the latter case).
| Physical sciences | Periodic table | Chemistry |
26594247 | https://en.wikipedia.org/wiki/Internal%20pressure | Internal pressure | Internal pressure is a measure of how the internal energy of a system changes when it expands or contracts at constant temperature. It has the same dimensions as pressure, the SI unit of which is the pascal.
Internal pressure is usually given the symbol π_T. It is defined as the partial derivative of internal energy with respect to volume at constant temperature: π_T = (∂U/∂V)_T.
Thermodynamic equation of state
Internal pressure can be expressed in terms of temperature, pressure and their mutual dependence: π_T = T (∂p/∂T)_V − p.
This equation is one of the simplest thermodynamic equations. More precisely, it is a thermodynamic property relation, since it holds true for any system and connects the equation of state to one or more thermodynamic energy properties. Here we refer to it as a "thermodynamic equation of state."
Derivation of the thermodynamic equation of state
The fundamental thermodynamic equation gives, for the exact differential of the internal energy, dU = T dS − p dV.
Dividing this equation by dV at constant temperature gives (∂U/∂V)_T = T (∂S/∂V)_T − p.
Using one of the Maxwell relations, (∂S/∂V)_T = (∂p/∂T)_V, this gives
π_T = T (∂p/∂T)_V − p.
Perfect gas
In a perfect gas, there are no potential energy interactions between the particles, so any change in the internal energy of the gas is directly proportional to the change in the kinetic energy of its constituent species and therefore also to the change in temperature:
dU = C_V dT.
The internal pressure is evaluated at constant temperature, so dT = 0 for the variation considered,
which implies (∂U/∂V)_T = 0 and finally π_T = 0,
i.e. the internal energy of a perfect gas is independent of the volume it occupies. The above relation can be used as a definition of a perfect gas.
The relation can be proved without the need to invoke any molecular arguments. It follows directly from the thermodynamic equation of state if we use the ideal gas law pV = nRT, as in the short calculation below.
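The omitted calculation is standard; the following is a reconstruction (written in LaTeX notation) rather than a quotation of the source:

\pi_T = T\left(\frac{\partial p}{\partial T}\right)_V - p
      = T\,\frac{nR}{V} - p
      = p - p
      = 0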
Real gases
Real gases have non-zero internal pressures because their internal energy changes as the gases expand isothermally - it can increase on expansion (π_T > 0, signifying the presence of dominant attractive forces between the particles of the gas) or decrease (π_T < 0, dominant repulsion).
In the limit of infinite volume these internal pressures reach the value of zero: π_T → 0 as V → ∞,
corresponding to the fact that all real gases can be approximated to be perfect in the limit of a suitably large volume. The above considerations are summarized on the graph on the right.
If a real gas can be described by the van der Waals equation, p = nRT/(V − nb) − a n²/V²,
it follows from the thermodynamic equation of state that its internal pressure is π_T = a n²/V².
Since the parameter a is always positive, so is the internal pressure: the internal energy of a van der Waals gas always increases when it expands isothermally (a short derivation is sketched below).
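The derivation behind this statement, reconstructed here (in LaTeX notation) from the standard van der Waals form rather than quoted from the source:

% van der Waals equation: (p + a n^2 / V^2)(V - n b) = n R T,
% i.e. p = n R T / (V - n b) - a n^2 / V^2
\left(\frac{\partial p}{\partial T}\right)_V = \frac{nR}{V - nb}
\quad\Longrightarrow\quad
\pi_T = T\left(\frac{\partial p}{\partial T}\right)_V - p
      = \frac{nRT}{V - nb} - \left(\frac{nRT}{V - nb} - \frac{a n^2}{V^2}\right)
      = \frac{a n^2}{V^2}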
The parameter a models the effect of attractive forces between molecules in the gas. However, real non-ideal gases may be expected to exhibit a sign change between positive and negative internal pressures under the right environmental conditions if repulsive interactions become important, depending on the system of interest. Loosely speaking, this would tend to happen under conditions of temperature and pressure such that the compression factor of the gas, Z = pV/(nRT), is greater than 1.
In addition, through the use of the Euler chain relation it can be shown that π_T = −(∂T/∂V)_U (∂U/∂T)_V.
Defining μ_J = (∂T/∂V)_U as the "Joule coefficient" and recognizing (∂U/∂T)_V as the heat capacity at constant volume C_V, we have π_T = −μ_J C_V.
The coefficient μ_J can be obtained by measuring the temperature change for a constant-U experiment, i.e., an adiabatic free expansion (see below). This coefficient is often small, and usually negative at modest pressures (as predicted by the van der Waals equation).
Experiment
James Joule tried to measure the internal pressure of air in his expansion experiment by adiabatically pumping high pressure air from one metal vessel into another evacuated one. The water bath in which the system was immersed did not change its temperature, signifying that no change in the internal energy occurred. Thus, the internal pressure of the air was apparently equal to zero and the air acted as a perfect gas. The actual deviations from the perfect behaviour were not observed since they are very small and the specific heat capacity of water is relatively high.
Much later, in 1925, Frederick Keyes and Francis Sears published measurements of the Joule effect for carbon dioxide at T = 30 °C and p = 13.3-16.5 atm, using improved measurement techniques and better controls. Under these conditions the temperature dropped when the pressure was adiabatically lowered, which indicates that $\mu_J$ is negative. This is consistent with the van der Waals prediction that the internal pressure $\pi_T$ is positive.
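For a sense of scale, the van der Waals picture gives a rough, order-of-magnitude estimate of the internal pressure and Joule coefficient of carbon dioxide under roughly these conditions. The sketch below assumes commonly tabulated values for the van der Waals constant a and the molar heat capacity of CO2, and an ideal-gas estimate of the molar volume; it is an illustration, not the published Keyes and Sears result.

```python
# Rough estimate of pi_T ~ a/Vm**2 and mu_J = -pi_T/C_V for CO2 near 30 °C, 15 atm.
# The constants below are typical tabulated values, assumed for illustration.
R = 8.314            # J/(mol*K)
T = 303.15           # K (30 °C)
p = 15 * 101325.0    # Pa (about 15 atm)
a = 0.364            # Pa*m^6/mol^2, van der Waals 'a' for CO2
Cv = 28.5            # J/(mol*K), approximate molar heat capacity at constant volume

Vm = R * T / p                  # ideal-gas estimate of the molar volume, m^3/mol
pi_T = a / Vm**2                # internal pressure, Pa
mu_J = -pi_T / Cv               # Joule coefficient, K per (m^3/mol)

print(f"Vm ~ {Vm:.2e} m^3/mol, pi_T ~ {pi_T/1e3:.0f} kPa, mu_J ~ {mu_J:.0f} K*mol/m^3")
# pi_T comes out at roughly 130 kPa (a bit over 1 atm) and mu_J is negative,
# consistent in sign with the cooling that Keyes and Sears observed.
```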
| Physical sciences | Thermodynamics | Physics |
26595097 | https://en.wikipedia.org/wiki/Main%20battle%20tank | Main battle tank | A main battle tank (MBT), also known as a battle tank or universal tank or simply tank, is a tank that fills the role of armour-protected direct fire and maneuver in many modern armies. Cold War-era development of more powerful engines, better suspension systems and lighter composite armour allowed for the design of a tank that had the firepower of a super-heavy tank, the armour protection of a heavy tank, and the mobility of a light tank, in a package with the weight of a medium tank. The first designated MBT was the British Chieftain tank, which during its development in the 1950s was re-designed as an MBT. Throughout the 1960s and 1970s, the MBT replaced almost all other types of tanks, leaving only some specialist roles to be filled by lighter designs or other types of armoured fighting vehicles.
Main battle tanks are a key component of modern armies. Modern MBTs seldom operate alone, as they are organized into armoured units that include the support of infantry, who may accompany the tanks in infantry fighting vehicles. They are also often supported by surveillance or ground-attack aircraft. The average weight of MBTs varies from country to country. The average weight of Western MBTs is usually greater than that of Russian or Chinese MBTs.
History
Initial limited-role tank classes
During World War I, combining tracks, armour, and guns into a functional vehicle pushed the limits of mechanical technology. This limited the specific battlefield capabilities any one tank design could be expected to fulfill. A design might have good speed, armour, or firepower, but not all three together.
Facing the deadlock of trench warfare, the first tank designs focused on crossing wide trenches, requiring very long and large vehicles, such as the British Mark I tank and successors; these became known as heavy tanks. Tanks that focused on other combat roles were smaller, like the French Renault FT; these were light tanks or tankettes. Many late-war and inter-war tank designs diverged from these according to new, and mostly untried, concepts for future tank roles and tactics. Each nation tended to create its own list of tank classes with different intended roles, such as "cavalry tanks", "breakthrough tanks", "fast tanks", and "assault tanks". The British maintained cruiser tanks, which carried less armour in order to achieve high speed and hence manoeuvrability in the attack, and infantry tanks, which operated at an infantryman's pace and could therefore carry more armour.
Evolution of the general-purpose medium tank
After years of isolated and divergent development, the various interwar tank concepts were finally tested with the start of World War II. In the chaos of blitzkrieg, tanks designed for a single role often found themselves forced into battlefield situations they were ill-suited for. During the war, limited-role tank designs tended to be replaced by more general-purpose designs, enabled by improving tank technology. Tank classes became mostly based on weight (and the corresponding transport and logistical needs). This led to new definitions of heavy and light tank classes, with medium tanks covering the balance of those between. The German Panzer IV tank, designed before the war as a "heavy" tank for assaulting fixed positions, was redesigned during the war with armour and gun upgrades to allow it to take on anti-tank roles as well, and was reclassified as a medium tank.
The second half of World War II saw an increased reliance on general-purpose medium tanks, which became the bulk of the tank combat forces. Generally, these designs massed about 30 tonnes, were armed with cannons of roughly 75 mm calibre, and were powered by engines of several hundred horsepower. Notable examples include the Soviet T-34 (the most-produced tank at that time) and the US M4 Sherman.
Late war tank development placed increased emphasis on armour, armament, and anti-tank capabilities for medium tanks:
The German Panther tank, designed to counter the Soviet T-34, had both armament and armour increased over previous medium tanks. Unlike previous Panzer designs, its frontal armour was sloped for increased effectiveness. It was also equipped with the high-velocity, long-barreled 75 mm KwK 42 L/70 gun, which was able to defeat the armour of all but the heaviest Allied tanks at long range. The powerful Maybach HL230 P30 engine and robust running gear meant that even though the Panther weighed roughly 45 tonnes (sizeable for its day) it was quite manoeuvrable, offering better off-road speed than the Panzer IV. However, its rushed development led to reliability and maintenance issues.
The Soviet T-44 incorporated many of the lessons learned from the extensive use of the T-34, and some of those modifications were carried over to the first MBTs, such as a modern torsion-bar suspension in place of the T-34's Christie suspension and a transversely mounted engine that simplified the gearbox. It is also seen as a direct predecessor of the T-54. Unlike the T-34, the T-44 had a suspension sturdy enough to mount a more powerful cannon.
The American M26 Pershing, a medium tank developed to replace the M4 Sherman, introduced to US tanks many features that became common on post-war MBTs. These features include an automatic transmission mounted in the rear, torsion bar suspension, and an early form of powerpack, combining the engine and transmission into a compact package. The M26, however, suffered from a relatively weak engine for its weight (effectively the same engine as the lighter M4A3 Sherman), and as a result was somewhat underpowered. The design of the M26 had a profound influence on American postwar medium and main battle tanks: "The M26 formed the basis for the postwar generation of US battle tanks from the M46 through the M47, M48, and M60 series."
British universal tank
Britain had continued on the path of parallel development of cruiser tanks and infantry tanks. Development of the Rolls-Royce Meteor engine for the Cromwell tank, combined with efficiency savings elsewhere in the design, almost doubled the horsepower for cruiser tanks. This led to speculation of a "Universal Tank", able to take on the roles of both a cruiser and an infantry tank by combining heavy armour and manoeuvrability.
Field Marshal Bernard Montgomery is acknowledged as the main advocate of the British universal tank concept as early as 1943, according to the writings of Giffard Le Quesne Martel, but little progress was made beyond development of the basic Cromwell cruiser tank that eventually led to the Centurion. The Centurion, at the time designated "heavy cruiser" and later "medium gun tank" was designed for mobility and firepower at the expense of armour, but more engine power permitted more armour protection, so the Centurion could also operate as an infantry tank, doing so well that development of a new universal tank was rendered unnecessary.
The Centurion, entering service just as World War II ended, was a multi-role tank that formed the main armoured element of the British Army of the Rhine and of the armed forces of the British Empire and Commonwealth, and subsequently served many other nations through exports, the cost of which was met largely by the US. The introduction of the 20-pounder gun in 1948 gave the tank a significant advantage over other tanks of the era, paving the way for a new tank classification, the main battle tank, which gradually superseded previous weight and armament classes.
Cold War
A surplus of effective WWII-era designs in other forces, notably the US and the Soviet Union, led to slower introductions of similar designs on their part. By the early 1950s, these designs were clearly no longer competitive, especially in a world of shaped charge weapons, and new designs rapidly emerged from most armed forces.
The Quebec conference in 1957 between the US, UK and Canada identified the MBT as the route for development rather than separate medium and heavy tanks.
The concept of the medium tank gradually evolved into the MBT in the 1960s, as it was realized that medium tanks could carry guns (such as contemporary American and Soviet designs, and especially the British 105 mm L7) that could penetrate any practical level of armour then existing at long range. Also, the heaviest tanks were unable to use most existing bridges. The World War II concept of heavy tanks, armed with the most powerful guns and heaviest armour, became obsolete because the large tanks were too expensive and just as vulnerable to damage by mines, bombs, rockets, and artillery. Likewise, World War II had shown that lightly armed and armoured tanks were of limited value in most roles. Even reconnaissance vehicles had shown a trend towards heavier weight and greater firepower during World War II; speed was not a substitute for armour and firepower.
An increasing variety of anti-tank weapons and the perceived threat of a nuclear war prioritized the need for additional armour. The additional armour prompted the design of even more powerful guns. The main battle tank thus took on the role the British had once called the "universal tank", exemplified by the Centurion, filling almost all battlefield roles. Typical main battle tanks were as well armed as any other vehicle on the battlefield, highly mobile, and well armoured. Yet they were cheap enough to be built in large numbers. The first Soviet main battle tank was the T-64A (the T-54/55 and T-62 were considered "medium" tanks) and the first American nomenclature-designated MBT was the M60 tank.
Anti-tank weapons rapidly outpaced armour developments. By the 1960s, anti-tank rounds could penetrate a metre of steel, making traditional rolled homogeneous armour impractical. The first solution to this problem was the composite armour of the Soviet T-64 tank, which included a sandwich of steel and glass-reinforced textolite in the heavily sloped glacis plate, and a steel turret with aluminum inserts, which helped to resist both high-explosive anti-tank (HEAT) and APDS shells of the era. Later came British Chobham armour. This composite armour used layers of ceramics and other materials to help attenuate the effects of HEAT munitions. Another threat came by way of the widespread use of helicopters in battle. Before the advent of helicopters, armour was heavily concentrated towards the front of the tank. This new threat caused designs to distribute armour on all sides of the tank (also having the effect of protecting the vehicle's occupants from nuclear explosion radiation).
By the late 1970s, MBTs were manufactured by China, France, West Germany, Britain, India, Italy, Japan, the Soviet Union, Sweden, Switzerland, and the United States.
The Soviet Union's war doctrine depended heavily on the main battle tank. Any weapon advancement making the MBT obsolete could have devastated the Soviet Union's fighting capability. The Soviet Union made novel advancements to the weapon systems including mechanical autoloaders and anti-tank guided missiles. Autoloaders were introduced to replace the human loader, permitting the turret to be reduced in size, making the tank smaller and less visible as a target, while missile systems were added to extend the range at which a vehicle could engage a target and thereby enhance the first-round hit probability.
The United States's experience in the Vietnam War contributed to the idea among army leadership that the role of the main battle tank could be fulfilled by attack helicopters. During the Vietnam War, helicopters and missiles competed with MBTs for research money.
Though the Persian Gulf War reaffirmed the role of main battle tanks, some analysts argued that MBTs had been outperformed by the attack helicopter. Other strategists considered that the MBT was entirely obsolete in light of the efficacy and speed with which coalition forces neutralized Iraqi armour.
Asymmetrical warfare
In asymmetric warfare, threats such as improvised explosive devices and mines have proven effective against MBTs. In response, nations that face asymmetric warfare, such as Israel, are reducing the size of their tank fleet and procuring more advanced models. Conversely, some insurgent groups like Hezbollah themselves operate main battle tanks, such as the T-72.
The United States Army used 1,100 M1 Abrams in the course of the Iraq War. They proved to have an unexpectedly high vulnerability to improvised explosive devices. A relatively new type of remotely detonated mine, the explosively formed penetrator, was used with some success against American armoured vehicles. However, with upgrades to their rear armour, M1s proved to be valuable in urban combat; at the Second Battle of Fallujah the United States Marines brought in two extra companies of M1s. Britain deployed its Challenger 2 tanks to support its operations in southern Iraq.
Advanced armour has reduced crew fatalities but has not improved vehicle survivability. Small unmanned turrets on top of the cupolas called remote controlled weapon stations armed with machine guns or mortars provide improved defence and enhance crew survivability. Experimental tanks with unmanned turrets locate crew members in the heavily armoured hull, improving survivability and reducing the vehicle's profile.
Technology is reducing the weight and size of the modern MBT. A British military document from 2001 indicated that the British Army would not procure a replacement for the Challenger 2 because of a lack of conventional warfare threats in the foreseeable future. The obsolescence of the tank has been asserted, but the history of the late 20th and early 21st century suggested that MBTs were still necessary. During the Russian invasion of Ukraine, Western and Russian MBTs saw large-scale combat in large numbers.
Design
The Organization for Security and Co-operation in Europe defines a main battle tank as "a self-propelled armoured fighting vehicle, capable of heavy firepower, primarily of a high muzzle velocity direct fire main gun necessary to engage armoured and other targets, with high cross-country mobility, with a high level of self-protection, and which is not designed and equipped primarily to transport combat troops."
Overview
Countermeasures
Originally, most MBTs relied on steel armour to defend against various threats. As newer threats emerged, however, the defensive systems used by MBTs had to evolve to counter them. One of the first new developments was the use of explosive reactive armour (ERA), developed by Israel in the early 1980s to defend against the shaped-charge warheads of modern anti-tank guided missiles and other such high-explosive anti-tank (HEAT) projectiles. This technology was subsequently adopted and expanded upon by the United States and the Soviet Union.
MBT armour is concentrated at the front of the tank, where the layered protection is thickest.
Missiles are cheap and cost-effective anti-tank weapons. ERA can be quickly added to vehicles to increase their survivability. However, the detonation of ERA blocks creates a hazard to any supporting infantry near the tank. Despite this drawback, it is still employed on many Russian MBTs, the latest generation Kontakt-5 being capable of defeating both high-explosive anti-tank (HEAT) and kinetic energy penetrator threats. The Soviets also developed active protection systems (APS) designed to neutralize hostile projectiles before they strike the tank, namely the Shtora and Arena systems. The United States has adopted similar technologies in the form of the Missile Countermeasure Device and as part of the Tank Urban Survival Kit used on M1 Abrams tanks serving in Iraq. The latest Russian MBT, the T-14 Armata, incorporates an AESA radar as part of its Afghanit APS and, in conjunction with the rest of its armament, can also intercept aircraft and missiles.
MBTs can also be protected from radar detection by incorporating stealth technology. The T-14 Armata has a turret designed to be harder to detect with radars and thermal sights. Advanced camouflage, like the Russian Nakidka, will also reduce the radar and thermal signatures of a MBT.
Other defensive developments focused on improving the strength of the armour itself, one of the notable advancements being the British development of Chobham armour in the 1970s. It was first employed on the American M1 Abrams and later the British Challenger 1. Chobham armour uses a lattice of composite and ceramic materials along with metal alloys to defeat incoming threats, and proved highly effective in the conflicts in Iraq in the early 1990s and 2000s, surviving numerous impacts from 1950s, 1960s, and 1970s era rocket-propelled grenades with negligible damage. It is much less effective against later models of RPGs. For example, the RPG-29 from the 1980s is able to penetrate the frontal hull armour of the Challenger 2.
Weaponry
Main battle tanks are equipped with a main gun and at least one machine gun.
MBT main guns are generally between 105 mm and 125 mm in caliber, and can fire both anti-armour and, more recently, anti-personnel rounds. The cannon serves a dual role, able to engage other armoured targets such as tanks and fortifications, and soft targets such as light vehicles and infantry. It is mounted in the turret, along with the loading and firing mechanisms. Modern tanks use a sophisticated fire-control system, including rangefinders, computerized fire control, and stabilizers, which are designed to keep the cannon stable and aimed even if the hull is turning or shaking, making it easier for the operators to fire on the move and/or against moving targets. Gun-missile systems are complicated and proved particularly unsatisfactory to the United States, which abandoned gun-missile projects such as the M60A2 and MBT-70, but they were diligently developed by the Soviet Union, which even retrofitted them to T-55 tanks, in an effort to double the effective range of the vehicle's fire. The MBT's role could be compromised by the increasing distances involved and the increased reliance on indirect fire. The tank gun is still useful in urban combat for precisely delivering powerful fire while minimizing collateral damage.
High-explosive anti-tank (HEAT) rounds and some form of high-velocity kinetic energy penetrator, such as armour-piercing fin-stabilized discarding sabot (APFSDS) rounds, are carried for anti-armour purposes. Anti-personnel rounds such as high explosive or high explosive fragmentation serve a dual purpose. Less common rounds are Beehive anti-personnel rounds and high-explosive squash head (HESH) rounds, used for both anti-armour work and bunker busting. Usually, an MBT carries 30–50 rounds of ammunition for its main gun, typically split between HE, HEAT, and KEP rounds. Some MBTs may also carry smoke or white phosphorus rounds. Some MBTs are equipped with an autoloader, such as the French Leclerc or the Russian/Ukrainian T-64, T-72, T-80, T-84, T-90, and T-14, and for this reason the crew can be reduced to three members. An autoloader requires less space than its human counterpart, allowing for a reduction in turret size; this reduces the vehicle's silhouette, making it a smaller target. Further, an autoloader can be designed to handle rounds which would be too difficult for a human to load. However, with a manual loader, the rounds can be isolated within a blowout chamber, rather than a magazine within the turret, which can improve crew survivability. The kinetic energy of a modern depleted-uranium APFSDS round at the muzzle can approach 6 MJ (a rough estimate for a uranium rod 60 cm long and 2 cm in diameter, density 19 g/cm³, at 1,750 m/s). Composite and reactive armour can withstand this kind of impact through deflection and deformation, but with a second hit in the same area, an armour breach becomes likely. As such, the speed of follow-up shots is crucial in tank-to-tank combat.
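The rough energy figure quoted above can be reproduced with a short back-of-the-envelope calculation. The sketch below simply evaluates the kinetic energy of a rod with the stated dimensions; it is an illustration, not a figure taken from any ballistics reference.

```python
# Back-of-the-envelope check of the muzzle-energy estimate for a long-rod penetrator:
# a 60 cm long, 2 cm diameter rod of density 19 g/cm^3 travelling at 1,750 m/s.
import math

length_m = 0.60        # m
diameter_m = 0.02      # m
density = 19_000       # kg/m^3 (19 g/cm^3)
velocity = 1_750       # m/s

volume = math.pi * (diameter_m / 2) ** 2 * length_m   # m^3
mass = density * volume                               # kg, roughly 3.6 kg
kinetic_energy = 0.5 * mass * velocity ** 2           # J, roughly 5.5 MJ

print(f"mass ~ {mass:.2f} kg, muzzle energy ~ {kinetic_energy/1e6:.1f} MJ")
```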
As secondary weapons, an MBT usually uses between two and four machine guns to engage infantry and light vehicles. Many MBTs mount one heavy caliber anti-aircraft machine gun (AAMG), usually of .50 caliber (like the M2 Browning or DShK), which can be used against helicopters and low flying aircraft. However, their effectiveness is limited in comparison to dedicated anti-aircraft artillery. The tank's machine guns are usually equipped with between 500 and 3,000 rounds each.
Situational awareness
Maintaining situational awareness and communicating is one of the four primary MBT functions. For situational awareness, the crew can use a circular review system combining augmented reality and artificial intelligence technologies. These systems use several externally mounted video sensors to transfer a 360° view of the tank's surroundings onto crew helmet-mounted displays or other display systems.
Mobility
MBTs, like previous generations of tanks, move on continuous tracks, which allow a decent level of mobility over most terrain, including sand and mud, and let tanks climb over most obstacles. MBTs can be made watertight, so they can ford shallow water (and deeper water with a snorkel). However, tracks are not as fast as wheels: the maximum road speed of a typical MBT is around 70 km/h, and the extreme weight of vehicles of this type further limits their speed. They are usually equipped with a powerful diesel or gas-turbine engine (often well over 1,000 hp) and have an operational range of roughly 500 km.
The German Army has prioritized mobility in its Leopard 2 which is considered one of the fastest MBTs in existence.
The MBT is often cumbersome in traffic and frequently obstructs its normal flow. The tracks can damage some roads after repeated use. Many structures like bridges do not have the load capacity to support an MBT, and in the fast pace of combat it is often impossible to test the sturdiness of these structures. Though appreciated for its excellent off-road characteristics, the MBT can become immobilized in muddy conditions.
The high cost of MBTs can be attributed in part to the high-performance engine-transmission system and to the fire control system. Also, propulsion systems are not produced in high enough quantities to take advantage of economies of scale.
Crew fatigue limits the operational range of MBTs in combat. Reducing the crew to three and relocating all crewmembers from the turret to the hull could provide time to sleep for one off-shift crewmember located in the rear of the hull. In this scenario, crewmembers would rotate shifts regularly and all would require cross-training on all vehicle job functions.
Cargo aircraft are instrumental to the timely deployment of MBTs. The absence of sufficient numbers of strategic airlift assets can limit the rate of MBT deployments to the number of aircraft available.
Military planners anticipate that the airlift capability for MBTs will not improve in the future. To date, no helicopter has the capability to lift MBTs. Rail and road are heavily used to move MBTs nearer to the battle, ready to fight in prime condition. Where well maintained roads allow it, wheeled tank transporters can be used.
The task of resupply is usually accomplished with large trucks.
Storage
Main battle tanks have internal and external storage space. Internal space is reserved for ammunition. External space enhances independence of logistics and can accommodate extra fuel and some personal equipment of the crew.
The Israeli Merkava can accommodate crew members displaced from a destroyed vehicle in its ammunition compartment.
Crew
Emphasis is placed on selecting and training main battle tank crew members. The crew must perform their tasks faultlessly and harmoniously so commanders select teams taking into consideration personalities and talents.
Role
The main battle tank fulfills the role the British had once called the "universal tank", filling almost all battlefield roles. They were originally designed in the Cold War to combat other MBTs. The modern light tank supplements the MBT in expeditionary roles and situations where all major threats have been neutralized and excess weight in armour and armament would only hinder mobility and cost more money to operate.
Reconnaissance by MBTs is performed in high-intensity conflicts where reconnaissance by light vehicles would be insufficient due to the necessity to "fight" for information.
In asymmetric warfare, main battle tanks are deployed in small, highly concentrated units. MBTs fire only at targets at close range and instead rely on external support such as unmanned aircraft for long range combat.
Main battle tanks have significantly varied characteristics. Procuring too many varieties can place a burden on tactics, training, support and maintenance.
The MBT has a positive morale effect on the infantry it accompanies. It also instills fear in the opposing force who can often hear and even feel their arrival.
Procurement
Manufacture
MBT production is increasingly being outsourced to wealthy nations. Countries that are just beginning to produce tanks are having difficulties remaining profitable in an industry that is increasingly becoming more expensive through the sophistication of technology. Even some large-scale producers are seeing declines in production. Even China is divesting many of its MBTs.
The production of main battle tanks is limited to manufacturers that specialize in combat vehicles. Commercial manufacturers of civilian vehicles cannot easily be repurposed as MBT production facilities.
Prices for MBTs have more than tripled from 1943 to 2011, although this pales in comparison with the price increase in fighter aircraft from 1943 to 1975.
Marketing
Several MBT models, such as the AMX-40 and OF-40, were marketed almost solely as export vehicles. Several tank producers, such as Japan and Israel, choose not to market their creations for export. Others have export control laws in place.
| Technology | Maneuver | null |
39308394 | https://en.wikipedia.org/wiki/Galactic%20habitable%20zone | Galactic habitable zone | In astrobiology and planetary astrophysics, the galactic habitable zone is the region of a galaxy in which life is most likely to develop. The concept of a galactic habitable zone analyzes various factors, such as metallicity (the presence of elements heavier than hydrogen and helium) and the rate and density of major catastrophes such as supernovae, and uses these to calculate which regions of a galaxy are more likely to form terrestrial planets, initially develop simple life, and provide a suitable environment for this life to evolve and advance. According to research published in August 2015, very large galaxies may favor the birth and development of habitable planets more than smaller galaxies such as the Milky Way. In the case of the Milky Way, its galactic habitable zone is commonly believed to be an annulus with an outer radius of about 10 kiloparsecs and an inner radius close to the Galactic Center (with both radii lacking hard boundaries).
Galactic habitable-zone theory has been criticized due to an inability to accurately quantify the factors making a region of a galaxy favorable for the emergence of life. In addition, computer simulations suggest that stars may change their orbits around the galactic center significantly, therefore challenging at least part of the view that some galactic areas are necessarily more life-supporting than others.
History
Background
The idea of the circumstellar habitable zone was introduced in 1953 by Hubertus Strughold and Harlow Shapley and in 1959 by Su-Shu Huang as the region around a star in which an orbiting planet could retain water at its surface. From the 1970s, planetary scientists and astrobiologists began to consider various other factors required for the creation and sustenance of life, including the impact that a nearby supernova may have on the development of life. In 1981, computer scientist Jim Clarke proposed that the apparent lack of extraterrestrial civilizations in the Milky Way could be explained by Seyfert-type outbursts from an active galactic nucleus, with Earth alone being spared from this radiation by virtue of its location in the galaxy. In the same year, Wallace Hampton Tucker analyzed galactic habitability in a more general context, but later work superseded his proposals.
Modern galactic habitable-zone theory was introduced in 1986 by L.S. Marochnik and L.M. Mukhin of the Russian Space Research Institute, who defined the zone as the region in which intelligent life could flourish. Donald Brownlee and palaeontologist Peter Ward expanded upon the concept of a galactic habitable zone, as well as the other factors required for the emergence of complex life, in their 2000 book Rare Earth: Why Complex Life is Uncommon in the Universe. In that book, the authors used the galactic habitable zone, among other factors, to argue that intelligent life is not a common occurrence in the Universe.
The idea of a galactic habitable zone was further developed in 2001 in a paper by Ward and Brownlee, in collaboration with Guillermo Gonzalez of the University of Washington. In that paper, Gonzalez, Brownlee, and Ward stated that regions near the galactic halo would lack the heavier elements required to produce habitable terrestrial planets, thus creating an outward limit to the size of the galactic habitable zone. Being too close to the galactic center, however, would expose an otherwise habitable planet to numerous supernovae and other energetic cosmic events, as well as excessive cometary impacts caused by perturbations of the host star's Oort cloud. Therefore, the authors established an inner boundary for the galactic habitable zone, located just outside the galactic bulge.
Considerations
In order to identify a location in the galaxy as being a part of the galactic habitable zone, a variety of factors must be accounted for. These include the distribution of stars and spiral arms, the presence or absence of an active galactic nucleus, the frequency of nearby supernovae that can threaten the existence of life, the metallicity of that location, and other factors. Without fulfilling these factors, a region of the galaxy cannot create or sustain life with efficiency.
Chemical evolution
One of the most basic requirements for the existence of life around a star is the ability of that star to produce a terrestrial planet of sufficient mass to sustain it. Various elements, such as iron, magnesium, titanium, carbon, oxygen, silicon, and others, are required to produce habitable planets, and the concentration and ratios of these vary throughout the galaxy.
The most common benchmark elemental ratio is [Fe/H], one of the factors determining the propensity of a region of the galaxy to produce terrestrial planets. The galactic bulge, the region of the galaxy closest to the Galactic Center, has an [Fe/H] distribution peaking at −0.2 decimal exponent units (dex) relative to the Sun's ratio (a value of −1 would correspond to one-tenth of the solar abundance ratio); the thin disk, which contains the local spiral arm, has an average metallicity of −0.02 dex at the orbital distance of the Sun around the galactic center, decreasing by 0.07 dex for every additional kiloparsec of orbital distance. The extended thick disk has an average [Fe/H] of −0.6 dex, while the halo, the region farthest from the galactic center, has the lowest [Fe/H] distribution peak, at around −1.5 dex. In addition, ratios such as [C/O], [Mg/Fe], [Si/Fe], and [S/Fe] may be relevant to the ability of a region of a galaxy to form habitable terrestrial planets; of these, [Mg/Fe] and [Si/Fe] are slowly decreasing over time, meaning that future terrestrial planets are more likely to possess larger iron cores.
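The radial [Fe/H] gradient described for the thin disk can be written as a simple linear relation. The sketch below is illustrative only: the linear extrapolation and the assumed solar galactocentric radius of about 8 kpc are simplifications, not values taken from the cited studies.

```python
# Illustrative thin-disk metallicity gradient: [Fe/H] ~ -0.02 dex at the Sun's
# orbital radius, falling by 0.07 dex per additional kiloparsec outward.
R_SUN_KPC = 8.0  # assumed galactocentric radius of the Sun, kpc

def thin_disk_fe_h(radius_kpc: float) -> float:
    """Approximate thin-disk [Fe/H] (dex) at a given galactocentric radius."""
    return -0.02 - 0.07 * (radius_kpc - R_SUN_KPC)

for r in (8.0, 10.0, 12.0):
    print(f"R = {r:4.1f} kpc  ->  [Fe/H] ~ {thin_disk_fe_h(r):+.2f} dex")
```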
In addition to specific amounts of the various stable elements that comprise a terrestrial planet's mass, an abundance of radionuclides such as 40K, 235U, 238U, and 232Th is required in order to heat the planet's interior and power life-sustaining processes such as plate tectonics, volcanism, and a geomagnetic dynamo. The [U/H] and [Th/H] ratios are dependent on the [Fe/H] ratio; however, a general function for the abundance of 40K cannot be created with existing data.
Even on a habitable planet with enough radioisotopes to heat its interior, various prebiotic molecules are required in order to produce life; therefore, the distribution of these molecules in the galaxy is important in determining the galactic habitable zone. A 2008 study by Samantha Blair and colleagues attempted to determine the outer edge of the galactic habitable zone by means of analyzing formaldehyde and carbon monoxide emissions from various giant molecular clouds scattered throughout the Milky Way; however, the data is neither conclusive nor complete.
While high metallicity is beneficial for the creation of terrestrial extrasolar planets, an excess amount can be harmful for life. Excess metallicity may lead to the formation of a large number of gas giants in a given system, which may subsequently migrate from beyond the system's frost line and become hot Jupiters, disturbing planets that would otherwise have been located in the system's circumstellar habitable zone. Thus, it was found that the Goldilocks principle applies to metallicity as well; low-metallicity systems have low probabilities of forming terrestrial-mass planets at all, while excessive metallicities cause a large number of gas giants to develop, disrupting the orbital dynamics of the system and altering the habitability of terrestrial planets in the system.
Catastrophic events
As well as being in a region of the galaxy that is chemically advantageous for the development of life, a star must also avoid an excessive number of catastrophic cosmic events with the potential to damage life on its otherwise habitable planets. Nearby supernovae, for example, have the potential to severely harm life on a planet; with excessive frequency, such catastrophic outbursts have the potential to sterilize an entire region of a galaxy for billions of years. The galactic bulge, for example, experienced an initial wave of extremely rapid star formation, triggering a cascade of supernovae that for five billion years left that area almost completely unable to develop life.
In addition to supernovae, gamma-ray bursts, excessive amounts of radiation, gravitational perturbations and various other events have been proposed to affect the distribution of life within the galaxy. These include, controversially, such proposals as "galactic tides" with the potential to induce cometary impacts or even cold bodies of dark matter that pass through organisms and induce genetic mutations. However, the impact of many of these events may be difficult to quantify.
Galactic morphology
Various morphological features of galaxies can affect their potential for habitability. Spiral arms, for example, are the location of star formation, but they contain numerous giant molecular clouds and a high density of stars that can perturb a star's Oort cloud, sending avalanches of comets and asteroids toward any planets further in. In addition, the high density of stars and rate of massive star formation can expose any stars orbiting within the spiral arms for too long to supernova explosions, reducing their prospects for the survival and development of life. Considering these factors, the Sun is advantageously placed within the galaxy because, in addition to being outside a spiral arm, it orbits near the corotation circle, maximizing the interval between spiral-arm crossings.
Spiral arms also have the ability to cause climatic changes on a planet. Passing through the dense molecular clouds of galactic spiral arms, stellar winds may be pushed back to the point that a reflective hydrogen layer accumulates in an orbiting planet's atmosphere, perhaps leading to a snowball Earth scenario.
A galactic bar also has the potential to affect the size of the galactic habitable zone. Galactic bars are thought to grow over time, eventually reaching the corotation radius of the galaxy and perturbing the orbits of the stars already there. High-metallicity stars like the Sun, for example, at an intermediate location between the low-metallicity galactic halo and the high-radiation galactic center, may be scattered throughout the galaxy, affecting the definition of the galactic habitable zone. It has been suggested that for this reason, it may be impossible to properly define a galactic habitable zone.
Boundaries
Early research on the galactic habitable zone, including the 2001 paper by Gonzalez, Brownlee, and Ward, did not demarcate any specific boundaries, merely stating that the zone was an annulus encompassing a region of the galaxy that was both enriched with metals and spared from excessive radiation, and that habitability would be more likely in the galaxy's thin disk. However, later research conducted in 2004 by Lineweaver and colleagues did create boundaries for this annulus, in the case of the Milky Way ranging from 7 kpc to 9 kpc from the galactic center.
The Lineweaver team also analyzed the evolution of the galactic habitable zone with respect to time, finding, for example, that stars close to the galactic bulge had to form within a time window of about two billion years in order to have habitable planets. Before that window, galactic-bulge stars would be prevented from having life-sustaining planets from frequent supernova events. After the supernova threat had subsided, though, the increasing metallicity of the galactic core would eventually mean that stars there would have a high number of giant planets, with the potential to destabilize star systems and radically alter the orbit of any planet located in a star's circumstellar habitable zone. Simulations conducted in 2005 at the University of Washington, however, show that even in the presence of hot Jupiters, terrestrial planets may remain stable over long timescales.
A 2006 study by Milan Ćirković and colleagues extended the notion of a time-dependent galactic habitable zone, analyzing various catastrophic events as well as the underlying secular evolution of galactic dynamics. The paper considers that the number of habitable planets may fluctuate wildly with time due to the unpredictable timing of catastrophic events, thereby creating a punctuated equilibrium in which habitable planets are more likely at some times than at others. Based on the results of Monte Carlo simulations on a toy model of the Milky Way, the team found that the number of habitable planets is likely to increase with time, though not in a perfectly linear pattern.
Subsequent studies saw more fundamental revision of the old concept of the galactic habitable zone as an annulus. In 2008, a study by Nikos Prantzos revealed that, while the probability of a planet escaping sterilization by supernova was highest at a distance of about 10 kpc from the galactic center, the sheer density of stars in the inner galaxy meant that the highest number of habitable planets could be found there. The research was corroborated in a 2011 paper by Michael Gowanlock, who calculated the frequency of supernova-surviving planets as a function of their distance from the galactic center, their height above the galactic plane, and their age, ultimately discovering that about 0.3% of stars in the galaxy could today support complex life, or 1.2% if one does not consider the tidal locking of red dwarf planets as precluding the development of complex life.
Criticism
The idea of the galactic habitable zone has been criticized by Nikos Prantzos, on the grounds that the parameters used to define it are impossible to quantify even approximately, and that thus the galactic habitable zone may merely be a useful conceptual tool to enable a better understanding of the distribution of life, rather than an end in itself. For these reasons, Prantzos has suggested that the entire galaxy may be habitable, rather than habitability being restricted to a specific region in space and time. In addition, stars "riding" the galaxy's spiral arms may move tens of thousands of light years from their original orbits, supporting the notion that there may not be one specific galactic habitable zone. A Monte Carlo simulation, improving on the mechanisms used by Ćirković in 2006, was conducted in 2010 by Duncan Forgan of Royal Observatory Edinburgh. The data collected from the experiments support Prantzos's notion that there is no solidly defined galactic habitable zone, indicating the possibility of hundreds of extraterrestrial civilizations in the Milky Way, though further data will be required for a definitive determination to be made.
| Physical sciences | Planetary science | Astronomy |