Capsaicin
Capsaicin (8-methyl-N-vanillyl-6-nonenamide) is an active component of chili peppers, which are plants belonging to the genus Capsicum. It is a potent irritant for mammals, including humans, and produces a sensation of burning in any tissue with which it comes into contact. Capsaicin and several related amides (capsaicinoids) are produced as secondary metabolites by chili peppers, likely as deterrents against certain mammals and fungi. Pure capsaicin is a hydrophobic, colorless, highly pungent (i.e., spicy) crystalline solid.
Natural function
Capsaicin is present in large quantities in the placental tissue (which holds the seeds), the internal membranes and, to a lesser extent, the other fleshy parts of the fruits of plants in the genus Capsicum. The seeds themselves do not produce any capsaicin, although the highest concentration of capsaicin can be found in the white pith of the inner wall, where the seeds are attached.
The seeds of Capsicum plants are dispersed predominantly by birds. In birds, the TRPV1 channel does not respond to capsaicin or related chemicals but mammalian TRPV1 is very sensitive to it. This is advantageous to the plant, as chili pepper seeds consumed by birds pass through the digestive tract and can germinate later, whereas mammals have molar teeth which destroy such seeds and prevent them from germinating. Thus, natural selection may have led to increasing capsaicin production because it makes the plant less likely to be eaten by animals that do not help it disperse. There is also evidence that capsaicin may have evolved as an anti-fungal agent. The fungal pathogen Fusarium, which is known to infect wild chilies and thereby reduce seed viability, is deterred by capsaicin, which thus limits this form of predispersal seed mortality.
The vanillotoxin-containing venom of a certain tarantula species (Psalmopoeus cambridgei) activates the same pathway of pain as is activated by capsaicin, an example of a shared pathway in both plant and animal anti-mammalian defense.
Uses
Food
Because of the burning sensation caused by capsaicin when it comes in contact with mucous membranes, it is commonly used in food products to provide added spiciness or "heat" (piquancy), usually in the form of spices such as chili powder and paprika. In high concentrations, capsaicin will also cause a burning effect on other sensitive areas, such as skin or eyes. The degree of heat found within a food is often measured on the Scoville scale.
There has long been a demand for capsaicin-spiced products such as chili peppers and hot sauces like Tabasco sauce and Mexican salsa. It is common for people to experience pleasurable and even euphoric effects from ingesting capsaicin. Folklore among self-described "chiliheads" attributes this to pain-stimulated release of endorphins, a different mechanism from the local receptor overload that makes capsaicin effective as a topical analgesic.
Research and pharmaceutical use
Capsaicin is used as an analgesic in topical ointments and dermal patches to relieve pain, typically in concentrations between 0.025% and 0.1%. It may be applied in cream form for the temporary relief of minor aches and pains of muscles and joints associated with arthritis, backache, strains and sprains, often in compounds with other rubefacients.
It is also used to reduce the symptoms of peripheral neuropathy, such as post-herpetic neuralgia caused by shingles. A capsaicin transdermal patch (Qutenza) for managing this particular indication (pain due to post-herpetic neuralgia) was approved as a therapeutic in 2009 by both the U.S. Food and Drug Administration (FDA) and the European Union. A subsequent application to the FDA for Qutenza to be used as an analgesic in HIV neuralgia was refused. One 2017 review of clinical studies of limited quality found that high-dose topical capsaicin (8%), compared with control (0.4% capsaicin), provided moderate to substantial pain relief from post-herpetic neuralgia, HIV-neuropathy, and diabetic neuropathy.
Although capsaicin creams have been used to treat psoriasis for reduction of itching, a review of six clinical trials involving topical capsaicin for treatment of pruritus concluded there was insufficient evidence of effect. Oral capsaicin decreases LDL cholesterol levels moderately.
There is insufficient clinical evidence to determine the role of ingested capsaicin on several human disorders, including obesity, diabetes, cancer and cardiovascular diseases.
Pepper spray and pests
Capsaicinoids are also an active ingredient in riot control and personal defense pepper spray agents. When the spray comes in contact with skin, especially eyes or mucous membranes, it produces pain and breathing difficulty in the affected individual.
Capsaicin is also used to deter pests, specifically mammalian pests. Targets of capsaicin repellents include voles, deer, rabbits, squirrels, bears, insects, and attacking dogs. Ground or crushed dried chili pods may be used in birdseed to deter rodents, taking advantage of the insensitivity of birds to capsaicin. The Elephant Pepper Development Trust claims that using chili peppers as a barrier crop can be a sustainable means for rural African farmers to deter elephants from eating their crops.
An article published in the Journal of Environmental Science and Health Part B in 2006 states that "Although hot chili pepper extract is commonly used as a component of household and garden insect-repellent formulas, it is not clear that the capsaicinoid elements of the extract are responsible for its repellency."
The first pesticide product using solely capsaicin as the active ingredient was registered with the U.S. Department of Agriculture in 1962.
Equestrian sports
Capsaicin is a banned substance in equestrian sports because of its hypersensitizing and pain-relieving properties. At the show jumping events of the 2008 Summer Olympics, four horses tested positive for capsaicin, which resulted in disqualification.
Irritant effects
Acute health effects
Capsaicin is a strong irritant; handling it requires protective goggles, respirators, and hazardous-material handling procedures. Capsaicin takes effect upon skin contact (irritant, sensitizer), eye contact (irritant), ingestion, and inhalation (lung irritant, lung sensitizer). The median lethal dose (LD50) in mice is 47.2 mg/kg.
Painful exposures to capsaicin-containing peppers are among the most common plant-related exposures presented to poison centers. They cause burning or stinging pain to the skin and, if ingested in large amounts by adults or small amounts by children, can produce nausea, vomiting, abdominal pain, and burning diarrhea. Eye exposure produces intense tearing, pain, conjunctivitis, and blepharospasm.
Treatment after exposure
The primary treatment is removal of the offending substance. Plain water is ineffective at removing capsaicin. Capsaicin is soluble in alcohol, which can be used to clean contaminated items.
When capsaicin is ingested, cold milk may be an effective way to relieve the burning sensation: the caseins in milk act as surfactants, allowing the capsaicin to form an emulsion and be washed away.
Weight loss and regain
As of 2007, there was no evidence showing that weight loss is directly correlated with ingesting capsaicin. Well-designed clinical research had not been performed because the pungency of capsaicin in prescribed doses prevented subjects from complying with the study protocol. A 2014 meta-analysis of further trials found weak evidence that consuming capsaicin before a meal might slightly reduce the amount of food consumed, and might drive food preference toward carbohydrates.
Peptic ulcer
One 2006 review concluded that capsaicin may relieve symptoms of a peptic ulcer rather than being a cause of it.
Death
Ingestion of high quantities of capsaicin can be deadly, particularly in people with heart problems. Even healthy young people can suffer adverse health effects like myocardial infarction after ingestion of capsaicin capsules.
Mechanism of action
The burning and painful sensations associated with capsaicin result from its direct activation of nociceptor nerve fibers; with repeated or high-dose exposure, these fibers undergo "defunctionalization", which underlies capsaicin's use as a topical analgesic. As a member of the vanilloid family, capsaicin binds to a receptor on nociceptor fibers called the vanilloid receptor subtype 1 (TRPV1). TRPV1, which can also be stimulated with heat, protons and physical abrasion, permits cations to pass through the cell membrane when activated. The resulting depolarization of the neuron stimulates it to send impulses to the brain. By binding to TRPV1 receptors, capsaicin produces sensations similar to those of excessive heat or abrasive damage, such as warming, tingling, itching, or stinging, explaining why capsaicin is described as an irritant on the skin and eyes or by ingestion.
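The concentration dependence of receptor activation like this is commonly summarized with a Hill-type dose-response curve. The sketch below is a minimal illustration of that relationship for TRPV1; the EC50 and Hill coefficient are plausible placeholder assumptions, not figures from this article.

```python
# Minimal sketch of a Hill-type dose-response curve for TRPV1 activation
# by capsaicin. The EC50 and Hill coefficient n are illustrative
# placeholders (assumptions, not values from this article).

def trpv1_open_fraction(conc_um: float, ec50_um: float = 0.7, n: float = 2.0) -> float:
    """Fraction of TRPV1 channels activated at a capsaicin concentration
    given in micromolar, per the Hill equation."""
    return conc_um ** n / (ec50_um ** n + conc_um ** n)

for c in (0.1, 0.7, 2.0, 10.0):
    print(f"{c:5.1f} uM -> {trpv1_open_fraction(c):.2f} of channels activated")
```

At the EC50 the curve passes through one half, and the steepness around that point is set by n; larger n gives the sharper, switch-like response typical of ligand-gated channels.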
Research clarifying how capsaicin acts on skin nociceptors contributed to the 2021 Nobel Prize in Physiology or Medicine, as it led to the discovery of skin sensors for temperature and touch and to the identification of the single gene responsible for sensitivity to capsaicin.
History
The compound was first extracted in impure form in 1816 by Christian Friedrich Bucholz (1770–1818).
In 1873 German pharmacologist Rudolf Buchheim (1820–1879) and in 1878 the Hungarian doctor Endre Hőgyes stated that "capsicol" (partially purified capsaicin) caused the burning feeling when in contact with mucous membranes and increased secretion of gastric acid.
Capsaicinoids
The most commonly occurring capsaicinoids are capsaicin (69%), dihydrocapsaicin (22%), nordihydrocapsaicin (7%), homocapsaicin (1%), and homodihydrocapsaicin (1%).
Capsaicin and dihydrocapsaicin (both 16.0 million SHU) are the most pungent capsaicinoids. Nordihydrocapsaicin (9.1 million SHU), homocapsaicin and homodihydrocapsaicin (both 8.6 million SHU) are about half as hot.
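These figures allow a back-of-the-envelope pungency estimate for a natural capsaicinoid blend. The sketch below assumes the Scoville rating of a mixture scales linearly with the mass-weighted ratings of its components, and uses the percentages and SHU values quoted above; the 0.3% capsaicinoid content of the example pepper is an arbitrary assumption.

```python
# Mass-weighted Scoville estimate for the typical capsaicinoid mixture
# described above. Assumes SHU combines linearly by mass fraction.

capsaicinoids = {
    # name: (mass fraction of total capsaicinoids, pungency in SHU)
    "capsaicin":            (0.69, 16_000_000),
    "dihydrocapsaicin":     (0.22, 16_000_000),
    "nordihydrocapsaicin":  (0.07,  9_100_000),
    "homocapsaicin":        (0.01,  8_600_000),
    "homodihydrocapsaicin": (0.01,  8_600_000),
}

blend_shu = sum(frac * shu for frac, shu in capsaicinoids.values())
print(f"pure capsaicinoid blend: {blend_shu:,.0f} SHU")  # ~15.4 million

# A pepper's rating then scales with its capsaicinoid content by mass,
# e.g. a hypothetical pepper that is 0.3% capsaicinoids by dry mass:
print(f"example pepper: {0.003 * blend_shu:,.0f} SHU")   # ~46,000
```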
There are six natural capsaicinoids (table below). Although vanillylamide of n-nonanoic acid (Nonivamide, VNA, also PAVA) is produced synthetically for most applications, it does occur naturally in Capsicum species.
Biosynthesis
History
The general biosynthetic pathway of capsaicin and other capsaicinoids was elucidated in the 1960s by Bennett and Kirby, and Leete and Louden. Radiolabeling studies identified phenylalanine and valine as the precursors to capsaicin. Enzymes of the phenylpropanoid pathway, phenylalanine ammonia lyase (PAL), cinnamate 4-hydroxylase (C4H) and caffeic acid O-methyltransferase (COMT), and their function in capsaicinoid biosynthesis were identified later by Fujiwake et al., and Sukrasno and Yeoman. Suzuki et al. identified leucine as another precursor to the branched-chain fatty acid pathway. It was discovered in 1999 that the pungency of chili peppers is related to higher transcription levels of the key phenylpropanoid-pathway enzymes PAL, C4H, and COMT. Similar studies showed that, in highly pungent chili peppers, genes of the branched-chain fatty acid pathway are transcribed at high levels in the placenta.
Biosynthetic pathway
Capsaicinoids, which are alkaloids, are produced exclusively by plants of the genus Capsicum. Capsaicin is believed to be synthesized in the interlocular septum of chili peppers and depends on the gene AT3, which resides at the pun1 locus and encodes a putative acyltransferase.
Biosynthesis of the capsaicinoids occurs in the glands of the pepper fruit where capsaicin synthase condenses vanillylamine from the phenylpropanoid pathway with an acyl-CoA moiety produced by the branched-chain fatty acid pathway.
Capsaicin is the most abundant capsaicinoid found in the genus Capsicum, but at least ten other capsaicinoid variants exist. Phenylalanine supplies the precursor to the phenylpropanoid pathway while leucine or valine provide the precursor for the branched-chain fatty acid pathway. To produce capsaicin, 8-methyl-6-nonenoyl-CoA is produced by the branched-chain fatty acid pathway and condensed with vanillylamine. Other capsaicinoids are produced by the condensation of vanillylamine with various acyl-CoA products from the branched-chain fatty acid pathway, which is capable of producing a variety of acyl-CoA moieties of different chain length and degrees of unsaturation. All condensation reactions between the products of the phenylpropanoid and branched-chain fatty acid pathway are mediated by capsaicin synthase to produce the final capsaicinoid product.
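The condensation scheme just described can be pictured as a simple lookup from acyl-CoA moiety to product. The sketch below is only indicative: the pairings follow the general literature on capsaicinoid biosynthesis and are assumptions, not data taken from this article.

```python
# Sketch of the condensation scheme described above: capsaicin synthase
# joins vanillylamine (from the phenylpropanoid pathway) to an acyl-CoA
# (from the branched-chain fatty acid pathway); the acyl chain decides
# which capsaicinoid results. Pairings follow the general literature
# and are indicative assumptions, not data from this article.

ACYL_TO_CAPSAICINOID = {
    "8-methyl-6-nonenoyl-CoA": "capsaicin",
    "8-methylnonanoyl-CoA":    "dihydrocapsaicin",
    "7-methyloctanoyl-CoA":    "nordihydrocapsaicin",
}

def condense(acyl_coa: str) -> str:
    """Name the capsaicinoid formed when capsaicin synthase condenses
    vanillylamine with the given acyl-CoA."""
    product = ACYL_TO_CAPSAICINOID.get(acyl_coa, "another capsaicinoid")
    return f"vanillylamine + {acyl_coa} -> {product}"

for acyl in ACYL_TO_CAPSAICINOID:
    print(condense(acyl))
```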
Evolution
The genus Capsicum diverged from the rest of the Solanaceae 19.6 million years ago, 5.4 million years after the appearance of the family, and is native only to the Americas. Only within the past 2 million years did chilies begin to evolve rapidly into markedly different species. This evolution can be partially attributed to a key compound found in peppers, 8-methyl-N-vanillyl-6-nonenamide, otherwise known as capsaicin. Capsaicin evolved similarly across the capsaicin-producing species of chilies, shaped by genetic drift and natural selection across the genus Capsicum. Although chilies within the genus are found in diverse environments, the capsaicin within them all exhibits similar properties that serve as defensive and adaptive features. Capsaicin evolved to preserve the fitness of peppers against fungal infections, insects, and granivorous mammals.
Antifungal properties
Capsaicin acts as an antifungal agent in four primary ways. First, it inhibits the metabolic rate of the cells that make up the fungal biofilm, limiting both the area over which the fungus can grow and its growth rate, since the biofilm is what lets a fungus adhere to and spread across the chili. Second, capsaicin inhibits fungal hyphae formation, reducing the amount of nutrients that the rest of the fungal body can receive. Third, capsaicin disrupts the structure of fungal cells and their membranes, harming the integrity of the cells and their ability to survive and proliferate. Fourth, the ergosterol synthesis of growing fungi decreases in proportion to the amount of capsaicin present in the growth area, which impairs the fungal cell membrane and the fungus's ability to reproduce and adapt to stressors in its environment.
Insecticidal properties
Capsaicin deters insects in multiple ways. The first is deterring insects from laying their eggs on the pepper. In addition, capsaicin can cause intestinal dysplasia upon ingestion, disrupting insect metabolism and damaging cell membranes within the insect, which in turn disrupts the insect's normal feeding response.
Seed dispersion and deterrents against granivorous mammals
Granivorous mammals pose a risk to the propagation of chilies because their molars grind the seeds of chilies, rendering them unable to grow into new chili plants. As a result, modern chilies evolved defense mechanisms to mitigate the risk of granivorous mammals. While capsaicin is present at some level in every part of the pepper, the chemical has its highest concentration in the tissue near the seeds within chilies. Birds are able to eat chilies, then disperse the seeds in their excrement, enabling propagation.
Adaptation to varying moisture levels
Capsaicin is a potent defense mechanism for chilies, but it comes at a cost. Varying levels of capsaicin in chilies appear to reflect an evolutionary trade-off between surviving in dry environments and maintaining defenses against fungal growth, insects, and granivorous mammals. Capsaicin synthesis places a strain on the plant's water resources, which directly affects fitness: peppers producing standard concentrations of capsaicin in their seeds and pericarps have been observed to suffer roughly a 50% reduction in seed production when water is scarce.
Antler
Antlers are extensions of an animal's skull found in members of the Cervidae (deer) family. Antlers are a single structure composed of bone, cartilage, fibrous tissue, skin, nerves, and blood vessels. They are generally found only on males, with the exception of reindeer/caribou. Antlers are shed and regrown each year and function primarily as objects of sexual attraction and as weapons.
Etymology
Antler comes from the Old French antoillier (modern French andouiller), from ant-, meaning "before", oeil, meaning "eye", and -ier, a suffix indicating an action or state of being, possibly from an unattested Latin word *anteocularis, "before the eye" (applied to the word for "branch" or "horn").
Structure and development
Antlers are unique to cervids. The ancestors of deer had tusks (long upper canine teeth). In most species, antlers appear to replace tusks. However, one modern species (the water deer) has tusks and no antlers and the muntjacs have small antlers and tusks. The musk deer, which are not true cervids, also bear tusks in place of antlers.
In contrast to antlers, horns—found on pronghorns and bovids, such as sheep, goats, bison and cattle—are two-part structures that usually do not shed. A horn's interior of bone is covered by an exterior sheath made of keratin (the same material as human fingernails and toenails).
Antlers are usually found only on males. Only reindeer (known as caribou in North America) have antlers on the females, and these are normally smaller than those of the males. Nevertheless, fertile does from other species of deer have the capacity to produce antlers on occasion, usually due to increased testosterone levels. The "horns" of a pronghorn (which is not a cervid but an antilocaprid) meet some of the criteria of antlers, but are not considered true antlers because they contain keratin.
Each antler grows from an attachment point on the skull called a pedicle. While an antler is growing, it is covered with highly vascular skin called velvet, which supplies oxygen and nutrients to the growing bone. Antlers are considered one of the most exaggerated cases of male secondary sexual traits in the animal kingdom, and grow faster than any other mammal bone. Growth occurs at the tip, and is initially cartilage, which is later replaced by bone tissue. Once the antler has achieved its full size, the velvet is lost and the antler's bone dies. This dead bone structure is the mature antler. In most cases, the bone at the base is destroyed by osteoclasts and the antlers fall off at some point. Because of their fast growth rate, antlers impose an immense nutritional demand on deer, which must regrow them annually; this handicap means antlers can serve as honest signals of metabolic efficiency and food-gathering capability.
In most Arctic and temperate-zone species, antler growth and shedding is annual, and is controlled by the length of daylight. Although the antlers are regrown each year, their size varies with the age of the animal in many species, increasing annually over several years before reaching maximum size. In tropical species, antlers may be shed at any time of year, and in some species such as the sambar, antlers are shed at different times in the year depending on multiple factors. Some equatorial deer never shed their antlers.
A 2019 study published in Science identified eight genes active in antler formation that are normally associated with bone cancer, particularly osteosarcoma. Additional tumor-suppressing and tumor-growth-inhibiting genes were determined to be responsible for regulating antler growth. This was taken to indicate that antler formation is more similar to a highly controlled form of cancer growth than to normal bone development.
Antlers function as both weapons in male-male competition and as displays of sexual ornaments for females. Because mature antlers are no longer living tissue, antler fractures cannot be repaired after combat. A study in 2019 hypothesized that the periodic casting and regrowth of antlers might have evolved as a way to ensure the availability of complete antler sets to display each year. Antler regeneration in male deer ensures that every mating season begins on a clean slate, as an increase in branching size and complexity happens each regeneration cycle in an individual.
Mechanical properties
Bones typically serve a structural purpose, with load-bearing abilities greater than those of any other part of an animal's body. Bones differ in shape and properties to fit their overall function. Antlers, however, are not structural and typically have different properties from structural bones like femurs.
While antlers are classified as bone, they differ in some ways from human and bovine bones. Bone is characterized as being made up primarily of collagen and a mineral phase. In antlers, the mineral content is considerably lower than in other examples of bone tissue, while the volume of collagen is high. This gives antlers lower yield strength and stiffness, but higher fracture toughness, than human cortical bone. Mineral content differs among species and also depends on food availability. In recent studies, increased mineral content has been linked to increased stiffness and decreased fracture toughness.
Further, bones are highly anisotropic due to their hierarchical structure, so mechanical properties depend strongly on testing conditions and directions. Due to their cylindrical shape, antlers can be tested in bending along three different orientations, which yield different mechanical properties. In samples of antler bone taken in the transverse direction, an elastic modulus of 8.92-10.02 GPa was reported; for the longitudinal and radial orientations, the elastic modulus was 7.19-8.23 and 4.01-4.27 GPa, respectively. The transverse direction was overall the stronger orientation, with higher mechanical properties: its ultimate tensile strength of 262.96-274.38 MPa was significantly higher than the longitudinal and radial directions' values of 46.91-48.55 and 41.75-43.67 MPa.
Tensile testing of antler bone has also been conducted for comparison with bovine femur. The antler samples were tested in dry and wet conditions, as in other studies. Wetness changed the mean maximum strain: 1.46% dry versus 2.2% wet. The ultimate tensile strength also differed: 188 MPa for dry antler, 108 MPa for wet antler, and 99.2 MPa for bovine femur. Similarly, the elastic modulus was 17.1 GPa for dry samples, 7.5 GPa for wet samples, and 17.7 GPa for bovine femur. This difference in elastic modulus reflects the difference in function: a bovine femur must withstand the greater stresses of holding up the animal's body, whereas an antler is used for sexual selection and competition.
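For readers unfamiliar with how these two numbers are obtained, the sketch below shows the standard reduction of a tensile test to an elastic modulus (slope of the initial linear region) and an ultimate tensile strength (peak stress). The stress-strain points are fabricated for illustration only, chosen to land near the wet-antler values quoted above.

```python
# How elastic modulus and ultimate tensile strength are read off a
# tensile test, as in the antler-vs-femur comparison above. The
# stress-strain data below are fabricated for illustration only.

import numpy as np

strain = np.array([0.000, 0.002, 0.004, 0.006, 0.010, 0.015, 0.022])  # dimensionless
stress = np.array([0.0,   15.0,  30.0,  44.0,  65.0,  85.0,  99.0])   # MPa

# Elastic modulus: slope of the initial, linear part of the curve.
linear = strain <= 0.006
modulus_mpa = np.polyfit(strain[linear], stress[linear], 1)[0]

# Ultimate tensile strength: the peak stress reached before failure.
uts_mpa = stress.max()

print(f"E   = {modulus_mpa / 1000:.1f} GPa")  # about 7 GPa, wet-antler territory
print(f"UTS = {uts_mpa:.1f} MPa")
```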
Function
Sexual selection
The principal means of evolution of antlers is sexual selection, which operates via two mechanisms: male-to-male competition (behaviorally, physiologically) and female mate choice. Male-male competition can take place in two forms. First, they can compete behaviorally where males use their antlers as weapons to compete for access to mates; second, they can compete physiologically where males present their antlers to display their strength and fertility competitiveness to compete for access to mates. Males with the largest antlers are more likely to obtain mates and achieve the highest fertilization success due to their competitiveness, dominance and high phenotypic quality. Whether this is a result of male-male fighting or display, or of female choosiness differs depending on the species as the shape, size, and function of antlers vary between species.
Heritability and reproductive advantage
There is evidence that antler size influences mate selection in red deer and has a heritable component. Despite this, a 30-year study showed no shift in the median size of antlers in a red deer population. The lack of response could be explained by environmental covariance: lifetime breeding success may be determined by an unmeasured trait that is phenotypically correlated with antler size but not genetically correlated with antler growth. Alternatively, the lack of response could be explained by the relationship between heterozygosity and antler size, whereby males heterozygous at multiple loci, including MHC loci, have larger antlers. The evolutionary response of traits that depend on heterozygosity is slower than that of traits dependent on additive genetic components, so evolutionary change is slower than expected. A third possibility is that the costs of having larger antlers (resource use and mobility detriments, for instance) exert enough selective pressure to offset the benefit of attracting mates, thereby stabilizing antler size in the population.
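The expectation such a study tests is the standard breeder's equation of quantitative genetics, a general textbook relation rather than a formula stated in the cited study:

```latex
% Breeder's equation: expected per-generation response to selection.
% R:   change in mean antler size per generation
% h^2: narrow-sense heritability of antler size
% S:   selection differential (antler-size advantage of successful breeders)
R = h^{2} S
```

Under environmental covariance, the observed S overstates the genetic selection actually acting on antler size, so R can be near zero even when h² is substantial, which is one reading of the 30-year result above.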
Protection against predation
If antlers functioned only in male–male competition for mates, the best evolutionary strategy would be to shed them immediately after the rutting season, both to free the male from a heavy encumbrance and to give him more time to regrow a larger new pair. Yet antlers are commonly retained through the winter and into the spring, suggesting that they have another use. Wolves in Yellowstone National Park are 3.6 times more likely to attack individual male elk without antlers, or groups of elk in which at least one male is without antlers. Half of all male elk killed by wolves lack antlers, even at times when only a quarter of all males have shed their antlers. These findings suggest that antlers have a secondary function in deterring predation.
Female antlers in reindeer
Reindeer (Rangifer tarandus) are the only cervid species that inhabit the Arctic and subarctic regions of the globe, yet their most striking distinction is the presence of pedicles after birth and antlers in both males and females. One possible reason that females of this species evolved antlers is to clear away snow so they can eat the vegetation underneath. Another possible reason is for female competition during winter foraging. Espmark (1964) observed that the presence of antlers on females is related to the hierarchy rank and is a result of the harsh winter conditions and the female dominated parental investment. Males shed their antlers prior to winter, while female antlers are retained throughout winter. Also, female antler size plateaus at the onset of puberty, around age three, while males' antler size increases during their lifetime. This likely reflects the differing life history strategies of the two sexes, where females are resource limited in their reproduction and cannot afford costly antlers, while male reproductive success depends on the size of their antlers because they are under directional sexual selection. In other species of deer, the presence of antlers in females indicates some degree of intersex condition, the frequency of which has been seen to vary from 1.5% to 0.02%.
Antenna for hearing
In moose, antlers may act as large hearing aids. Equipped with large, highly adjustable external ears, moose have highly sensitive hearing. Moose with antlers have more sensitive hearing than moose without, and a study of trophy antlers with an artificial ear confirmed that the large flattened (palmate) antler behaves like a parabolic reflector.
Diversification
The diversification of antlers, body size and tusks has been strongly influenced by changes in habitat and behavior (fighting and mating).
Homology and evolution of tines
Antlers originated once in the cervid lineage. The earliest fossil remains of antlers that have been found are dated to the early Miocene, about 17 million years ago. These early antlers were small and had just two forks. As antlers evolved, they lengthened and gained many branches, or tines, becoming more complex. The homology of tines has been discussed since the 1900s and has provided great insight into the evolutionary history of the Cervidae family.
Recently, a new method of describing the branching structure of antlers was developed. It uses antler grooves, which are formed on the surface of antlers during growth, to project the branching structure onto the burr circumference as a diagram. When the positional order on the diagram is compared among species, tines at the same position are homologous. The study revealed that the three-pointed structures of Capreolinae and Cervini are homoplasious, and that their subclades gained synapomorphous tines.
Exploitation by other species
Ecological role
Discarded antlers represent a source of calcium, phosphorus and other minerals and are often gnawed upon by small animals, including squirrels, porcupines, rabbits and mice. This is more common among animals inhabiting regions where the soil is deficient in these minerals. Antlers shed in oak forest inhabited by squirrels are rapidly chewed to pieces by them.
Trophy hunting
Antlered heads are prized as trophies with larger sets being more highly prized. The first organization to keep records of sizes was Rowland Ward Ltd., a London taxidermy firm, in the early 20th century. For a time only total length or spread was recorded. In the middle of the century, the Boone and Crockett Club and the Safari Club International developed complex scoring systems based on various dimensions and the number of tines or points, and they keep extensive records of high-scoring antlers. Deer bred for hunting on farms are selected based on the size of the antlers.
Hunters have developed terms for antler parts: beam, palm, brow, bez or bay, trez or tray, royal, and surroyal. These are the main shaft, flattened center, first tine, second tine, third tine, fourth tine, and fifth or higher tines, respectively. The second branch is also called an advancer.
In Yorkshire in the United Kingdom, roe deer hunting is especially popular due to the large antlers produced there. This is attributed to Yorkshire's chalk, which is high in calcium; the deer ingest the calcium, and it supports antler growth.
Shed antler hunting
Gathering shed antlers or "sheds" attracts dedicated practitioners who refer to it colloquially as shed hunting, or bone picking. In the United States, the middle of December to the middle of February is considered shed hunting season, when deer, elk, and moose begin to shed. The North American Shed Hunting Club, founded in 1991, is an organization for those who take part in this activity.
In the United States in 2017, sheds fetched around US$10 per pound, with larger specimens in good condition attracting higher prices. The most desirable antlers are found soon after being shed. The value is reduced if they have been damaged by weathering or gnawed by small animals. A matched pair from the same animal is a very desirable find, but antlers are often shed separately and may be separated by several miles. Some shed-hunting enthusiasts use trained dogs to assist them. Most hunters follow 'game trails' (trails where deer frequently run) to find sheds, or build a shed trap to collect the loose antlers in late winter or early spring.
In most US states, the possession of or trade in parts of game animals is subject to some degree of regulation, but the trade in antlers is widely permitted. In the national parks of Canada, the removal of shed antlers is an offense punishable by a maximum fine of C$25,000, as the Canadian government considers antlers to belong to the people of Canada and part of the ecosystems in which they are discarded.
Carving for decorative and tool uses
Antler has been used through history as a material to make tools, weapons, ornaments, and toys. It was an especially important material in the European Late Paleolithic, used by the Magdalenian culture to make carvings and engraved designs on objects such as the so-called Bâton de commandements and the Bison Licking Insect Bite. In the Viking Age and medieval period, it formed an important raw material in the craft of comb-making. In later periods, antler—used as a cheap substitute for ivory—was a material especially associated with equipment for hunting, such as saddles and horse harness, guns and daggers, powder flasks, as well as buttons and the like. The decorative display of wall-mounted pairs of antlers has been popular since medieval times at least.
The Netsilik, an Inuit group, made bows and arrows using antler, reinforced with strands of animal tendons braided to form a cable-backed bow. Several Indigenous American tribes also used antler to make bows, gluing tendons to the bow instead of tying them as cables. An antler bow, made in the early 19th century, is on display at Brooklyn Museum. Its manufacture is attributed to the Yankton Sioux.
Throughout history, large deer antlers from a suitable species (e.g. red deer) were often cut down to the shaft and lowest tine and used as one-pointed pickaxes.
Ceremonial roles
Antler headdresses were worn by shamans and other spiritual figures in various cultures, and for dances; 21 antler "frontlets" apparently for wearing on the head, and over 10,000 years old, have been excavated at the English Mesolithic site of Star Carr. Antlers are still worn in traditional dances such as Yaqui deer dances and carried in the Abbots Bromley Horn Dance.
Dietary usage
In the velvet antler stage, antlers of elk and deer have been used in Asia as a dietary supplement or alternative medicinal substance for more than 2,000 years. Recently, deer antler extract has become popular among Western athletes and body builders because the extract, with its trace amounts of IGF-1, is believed to help build and repair muscle tissue; however, one double-blind study did not find evidence of intended effects.
Elk, deer, and moose antlers have also become popular forms of dog chews that owners purchase for their pet canines.
Shed hunting with dogs
Dogs are sometimes used to find shed antlers. The North American Shed Hunting Dog Association (NASHDA) has resources for people who want to train their dogs to find shed antlers and hold shed dog hunting events.
Einkorn
Einkorn wheat (from German Einkorn, literally "single grain") is either a wild species of wheat (Triticum) or its domesticated form. The wild form is T. boeoticum (syn. T. m. subsp. boeoticum), and the domesticated form is T. monococcum (syn. T. m. subsp. monococcum). Einkorn is a diploid species (2n = 14 chromosomes) of hulled wheat, with tough glumes (husks) that tightly enclose the grains. The cultivated form is similar to the wild, except that the ear stays intact when ripe and the seeds are larger. The domestic form is known as "einkorn" or "littlespelt" in English, with cognate names in French, German, Italian, and Spanish. The name refers to the fact that each spikelet contains only one grain.
Einkorn wheat was one of the first plants to be domesticated and cultivated. The earliest clear evidence of the domestication of einkorn dates from 10,600 to 9,900 years before present (8650 BCE to 7950 BCE) from Çayönü and Cafer Höyük, two Early Pre-Pottery Neolithic B archaeological sites in southern Turkey. Remnants of einkorn were found with the iceman mummy Ötzi, dated to the late 4th millennium BCE.
History
Einkorn wheat commonly grows wild in the hill country in the northern part of the Fertile Crescent and Anatolia, although it has a wider distribution reaching into the Balkans and south to Jordan near the Dead Sea. It is a short variety of wild wheat and is not very productive of edible seeds.
The principal difference between wild einkorn and cultivated einkorn is the method of seed dispersal. In the wild variety the seed head usually shatters and drops the kernels (seeds) of wheat onto the ground. This facilitates a new crop of wheat. In the domestic variety, the seed head remains intact. While such a mutation may occasionally occur in the wild, it is not viable there in the long term: the intact seed head will only drop to the ground when the stalk rots, and the kernels will not scatter but form a tight clump which inhibits germination and makes the mutant seedlings susceptible to disease. But harvesting einkorn with intact seed heads was easier for early human harvesters, who could then manually break apart the seed heads and scatter any kernels not eaten. Over time and through selection, conscious or unconscious, the human preference for intact seed heads created the domestic variety, which has slightly larger kernels than wild einkorn. Domesticated einkorn thus requires human planting and harvesting for its continuing existence. This process of domestication may have taken only 20 to 200 years, resulting in a wheat that was easier to harvest.
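For a rough sense of why domestication could proceed that quickly, the toy model below tracks a non-shattering variant under harvest selection. The starting frequency, the selective advantages, and the single-locus deterministic treatment are all illustrative assumptions, not measurements.

```python
# Toy model of how quickly harvest selection could fix a non-shattering
# (intact seed head) variant. Single locus, deterministic selection;
# the starting frequency p0 and selective advantage s are illustrative
# assumptions, not measured values.

def generations_to_fix(p0: float = 0.01, s: float = 0.3, target: float = 0.99) -> int:
    """Generations for a variant with relative fitness 1+s to rise
    from frequency p0 to target under simple deterministic selection."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        gens += 1
    return gens

for s in (0.1, 0.3, 0.6):
    print(f"s = {s:.1f}: ~{generations_to_fix(s=s)} generations")
# With one generation per year, even modest advantages land inside the
# 20-to-200-year window suggested above.
```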
Einkorn wheat is one of the earliest cultivated forms of wheat, alongside emmer wheat (T. dicoccum). Hunter-gatherers in the Fertile Crescent may have started harvesting einkorn as early as 30,000 years ago, according to archaeological evidence from Syria. Although gathered from the wild for thousands of years, einkorn wheat was first domesticated approximately 10,000 years BP in the Pre-Pottery Neolithic A (PPNA) or B (PPNB) periods. Evidence from DNA fingerprinting suggests einkorn was first domesticated near Karaca Dağ in southeast Turkey, an area in which a number of PPNB farming villages have been found. One theory by Yuval Noah Harari suggests that the domestication of einkorn was linked to intensive agriculture to support the nearby Göbekli Tepe site.
An important characteristic facilitating the domestication of einkorn and other annual grains is that the plants are largely self-pollinating. Thus, the traits desirable for human management could be perpetuated in einkorn at less risk of cross-fertilization with wild plants, which might carry traits less desirable for human management (e.g., smaller seeds or shattering seed heads).
From the northern part of the Fertile Crescent, the cultivation of einkorn wheat spread to the Caucasus, the Balkans, and central Europe. Einkorn wheat was more commonly grown in cooler climates than emmer wheat, the other domesticated wheat. Cultivation of einkorn in the Middle East began to decline in favor of emmer wheat around 2000 BC. Cultivation of einkorn was never extensive in Italy, southern France, and Spain. Einkorn continued to be cultivated in some areas of northern Europe throughout the Middle Ages and until the early part of the 20th century.
Taxonomy and phylogeny
Wild and domesticated einkorns are diploid wheats. Unlike emmer and bread wheat, which were formed by hybridisation with Aegilops goatgrasses, einkorn is not a hybrid.
Description
Einkorn wheat is low-yielding but can survive on poor, dry, marginal soils where other varieties of wheat will not. It is primarily eaten boiled as whole grains or in porridge. As with other ancient varieties of wheat such as emmer, einkorn is a "covered wheat": its kernels do not break free from the seed coat (glume) with threshing, which makes it difficult to separate the husk from the seed.
Einkorn is a common food in northern Provence (France). It is used for bulgur or as animal feed in mountainous areas of countries including France, India, Italy, Morocco, the former Yugoslavia, and Turkey. It contains gluten (so is not suitable for people with gluten-related disorders) and has a higher percentage of protein than modern red wheats. It is considered more nutritious because it has higher levels of fat, phosphorus, potassium, pyridoxine, and beta-carotene.
Genetics
Einkorn is the source of many potential introgressions for immunity; Nikolai Vavilov called it an "accumulator of complex immunities". T. monococcum is the source of Sr21, a stem rust resistance gene which has been introgressed into hexaploid wheat worldwide. It is also the source of Yr34, a resistance gene for yellow rust.
The salt-tolerance feature of T. monococcum has been bred into durum wheat.
Caenorhabditis elegans
Caenorhabditis elegans is a free-living transparent nematode about 1 mm in length that lives in temperate soil environments. It is the type species of its genus. The name is a blend of the Greek caeno- (recent), rhabditis (rod-like) and Latin elegans (elegant). In 1900, Maupas initially named it Rhabditides elegans. Osche placed it in the subgenus Caenorhabditis in 1952, and in 1955, Dougherty raised Caenorhabditis to the status of genus.
C. elegans is an unsegmented pseudocoelomate and lacks respiratory or circulatory systems. Most of these nematodes are hermaphrodites and a few are males. Males have specialised tails for mating that include spicules.
In 1963, Sydney Brenner proposed research into C. elegans, primarily in the area of neuronal development. In 1974, he began research into the molecular and developmental biology of C. elegans, which has since been extensively used as a model organism. It was the first multicellular organism to have its whole genome sequenced, and in 2019 it was the first organism to have its connectome (neuronal "wiring diagram") completed.
Four Nobel Prizes have been won for work done on C. elegans.
Anatomy
C. elegans is unsegmented, vermiform, and bilaterally symmetrical. It has a cuticle (a strong outer covering that acts as an exoskeleton), four main epidermal cords, and a fluid-filled pseudocoelom (body cavity). It also has some of the same organ systems as larger animals. About one in a thousand individuals is male and the rest are hermaphrodites. The basic anatomy of C. elegans includes a mouth, pharynx, intestine, gonad, and collagenous cuticle. Like all nematodes, they have neither a circulatory nor a respiratory system. The four bands of muscles that run the length of the body are connected to a neural system that allows the muscles to move the animal's body only as dorsal bending or ventral bending, but not left or right, except for the head, where the four muscle quadrants are wired independently from one another. When a wave of dorsal/ventral muscle contractions proceeds from the back to the front of the animal, the animal is propelled backwards. When a wave of contractions is initiated at the front and proceeds posteriorly along the body, the animal is propelled forwards. Because of this dorsal/ventral bias in body bends, any normal living, moving individual tends to lie on either its left side or its right side when observed crossing a horizontal surface. A set of ridges on the lateral sides of the body cuticle, the alae, is believed to give the animal added traction during these bending motions.
In relation to lipid metabolism, C. elegans, unlike mammals, has no specialized adipose tissue, pancreas, or liver, nor blood to deliver nutrients. Neutral lipids are instead stored in the intestine, epidermis, and embryos. The epidermis corresponds to the mammalian adipocytes by being the main triglyceride depot.
The pharynx, a muscular food pump in the head of C. elegans that is triangular in cross-section, grinds food and transports it directly to the intestine. A set of "valve cells" connects the pharynx to the intestine, but how this valve operates is not understood. After digestion, the contents of the intestine are released via the rectum, as is the case with all other nematodes. No direct connection exists between the pharynx and the excretory canal, which functions in the release of liquid urine.
Males have a single-lobed gonad, a vas deferens, and a tail specialized for mating, which incorporates spicules. Hermaphrodites have two ovaries, oviducts, and spermatheca, and a single uterus.
There are 302 neurons in C. elegans, approximately one-third of all the somatic cells in the whole body. Many neurons have dendrites that extend from the cell body to receive neurotransmitters or other signals, and a process that extends to the nerve ring (the "brain") for synaptic connections with other neurons. C. elegans has excitatory cholinergic and inhibitory GABAergic motor neurons that connect with body wall muscles to regulate movement. In addition, these and other neurons, such as interneurons, use a variety of neurotransmitters to control behaviors.
Gut granules
Numerous gut granules are present in the intestine of C. elegans; their functions, like many other aspects of this nematode, are still not fully known despite the many years it has been studied. The gut granules are found throughout the order Rhabditida. They are very similar to lysosomes in that they feature an acidic interior and the capacity for endocytosis, but they are considerably larger, reinforcing the view that they are storage organelles.
A particular feature of the granules is that when they are observed under ultraviolet light, they react by emitting an intense blue fluorescence. Another phenomenon seen is termed 'death fluorescence'. As the worms die, a dramatic burst of blue fluorescence is emitted. This death fluorescence typically takes place in an anterior to posterior wave that moves along the intestine, and is seen in both young and old worms, whether subjected to lethal injury or peacefully dying of old age.
Many theories have been posited on the functions of the gut granules, with earlier ones being eliminated by later findings. They are thought to store zinc as one of their functions. Recent chemical analysis has identified the blue fluorescent material they contain as a glycosylated form of anthranilic acid (AA). The need for the large amounts of AA the many gut granules contain is questioned. One possibility is that the AA is antibacterial and used in defense against invading pathogens. Another possibility is that the granules provide photoprotection; the bursts of AA fluorescence entail the conversion of damaging UV light to relatively harmless visible light. This is seen as a possible link to the melanin-containing melanosomes.
Reproduction
The hermaphroditic worm is considered to be a specialized form of self-fertile female, as its soma is female. The hermaphrodite produces male gametes first and lays eggs through its uterus after internal fertilization. Hermaphrodites produce all their sperm in the L4 stage (150 sperm cells per gonadal arm) and then produce only oocytes. The hermaphroditic gonad acts as an ovotestis, with sperm cells stored in the same area of the gonad as the oocytes until the first oocyte pushes the sperm into the spermatheca (a chamber wherein the oocytes become fertilized by the sperm).
The male can inseminate the hermaphrodite, which will preferentially use male sperm (both types of sperm are stored in the spermatheca).
The sperm of C. elegans is amoeboid, lacking flagella and acrosomes. When self-inseminated, the wild-type worm lays about 300 eggs. When inseminated by a male, the number of progeny can exceed 1,000. Hermaphrodites do not typically mate with other hermaphrodites. At 20 °C, the laboratory strain of C. elegans (N2) has an average lifespan around 2–3 weeks and a generation time of 3 to 4 days.
C. elegans has five pairs of autosomes and one pair of sex chromosomes. Sex in C. elegans is based on an X0 sex-determination system. Hermaphrodites of C. elegans have a matched pair of sex chromosomes (XX); the rare males have only one sex chromosome (X0).
Sex determination
C. elegans is mostly hermaphroditic, producing both sperm and oocytes. Males occur in the population at a rate of approximately 1 in 200 hermaphrodites, and the two sexes are highly differentiated. Males differ from their hermaphroditic counterparts in that they are smaller and can be identified by the shape of their tail. C. elegans exhibits a mating system called androdioecy, meaning reproduction can occur in two ways: through self-fertilization in hermaphrodites or through hermaphrodites breeding with males. Males are produced through non-disjunction of the X chromosomes during meiosis. Worms that reproduce through self-fertilization are at risk for high linkage disequilibrium, which leads to lower genetic diversity in populations and an increase in the accumulation of deleterious alleles. In C. elegans, somatic sex determination is attributed to the tra-1 gene, which encodes the TRA-1 transcription factor; the pathway is regulated post-transcriptionally and works by promoting female development. In hermaphrodites (XX), there are high levels of tra-1 activity, which produces the female reproductive system and inhibits male development. One day before adulthood, hermaphrodites can be identified by the appearance of a vulva. In males (XO), there are low levels of tra-1 activity, resulting in a male reproductive system. Recent research has shown that three other genes, fem-1, fem-2, and fem-3, negatively regulate the TRA-1 pathway and act as the final determiner of sex in C. elegans.
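A toy model can show why X0 males stay rare under selfing even though outcrossed broods are half male. The nondisjunction rate and the encounter factor below are illustrative assumptions, not measured values.

```python
# Toy model of male frequency in a largely selfing C. elegans population.
# Self progeny are male (X0) only via rare X nondisjunction (rate MU);
# broods sired by a male are ~50% male. The chance a brood is outcrossed
# is taken as proportional to male frequency (encounter factor K).
# MU and K are illustrative assumptions, not measured values.

MU = 0.002  # fraction of self progeny that are X0 males
K = 1.0     # P(brood outcrossed) = min(1, K * male_frequency)

def next_male_freq(m: float) -> float:
    """Expected male fraction in the next generation."""
    outcross = min(1.0, K * m)
    return 0.5 * outcross + MU * (1.0 - outcross)

m = MU
for _ in range(50):
    m = next_male_freq(m)
print(f"equilibrium male frequency ~ {m:.4f}")  # ~0.004 for these values

# Note: if K * 0.5 > 1 (males mate often enough), males would instead
# spread; the observed rarity of males implies low outcrossing success.
```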
Evolution
The sex determination system in C. elegans has interested scientists for years. Since C. elegans is used as a model organism, any information discovered about how its sex determination system evolved could further evolutionary biology research in other organisms. After almost 30 years of research, scientists have begun to put together the pieces of the evolution of such a system. What they have discovered is a complex pathway with several layers of regulation. The closely related organism Caenorhabditis briggsae has been studied extensively, and its whole-genome sequence has helped fill in the missing pieces in the evolution of C. elegans sex determination. Two genes have been found to have assimilated, leading to the proteins XOL-1 and MIX-1 having an effect on sex determination in C. elegans as well. Mutations in the XOL-1 pathway lead to feminization in C. elegans. The mix-1 gene is known to hypoactivate the X chromosome and regulates the morphology of the male tail in C. elegans. Looking at nematodes as a whole, the male and hermaphrodite sexes likely arose through parallel evolution, in which similar traits evolve from an ancestor under similar conditions; simply put, two species evolve in similar ways over time. An example of this would be marsupial and placental mammals. Scientists have also hypothesized, by studying species similar to C. elegans, that hermaphrodite self-fertilization, or "selfing", could have evolved convergently. Other studies on the evolution of sex determination suggest that genes involved in sperm evolve at a faster rate than female genes, although sperm genes on the X chromosome have reduced evolution rates. Sperm genes have short coding sequences, high codon bias, and disproportionate representation among orphan genes; these characteristics may explain their high rates of evolution and may also suggest how sperm genes evolved in hermaphroditic worms. Overall, scientists have a general idea of the sex determination pathway in C. elegans; however, how this pathway evolved is not yet well defined.
Development
Embryonic development
The fertilized zygote undergoes rotational holoblastic cleavage.
Sperm entry into the oocyte commences formation of an anterior-posterior axis. The sperm microtubule organizing center directs the movement of the sperm pronucleus to the future posterior pole of the embryo, while also inciting the movement of PAR proteins, a group of cytoplasmic determination factors, to their proper respective locations. As a result of the difference in PAR protein distribution, the first cell division is highly asymmetric. C. elegans embryogenesis is among the best understood examples of asymmetric cell division.
All cells of the germline arise from a single primordial germ cell, called the P4 cell, established early in embryogenesis. This primordial cell divides to generate two germline precursors that do not divide further until after hatching.
Axis formation
The resulting daughter cells of the first cell division are called the AB cell (containing PAR-6 and PAR-3) and the P1 cell (containing PAR-1 and PAR-2). A second cell division produces the ABp and ABa cells from the AB cell, and the EMS and P2 cells from the P1 cell. This division establishes the dorsal-ventral axis, with the ABp cell forming the dorsal side and the EMS cell marking the ventral side. Through Wnt signaling, the P2 cell instructs the EMS cell to divide along the anterior-posterior axis. Through Notch signaling, the P2 cell differentially specifies the ABp and ABa cells, which further defines the dorsal-ventral axis. The left-right axis also becomes apparent early in embryogenesis, although exactly when it is determined remains unclear; most theories of left-right axis development, however, involve some kind of difference in cells derived from the AB cell.
Gastrulation
Gastrulation occurs after the embryo reaches the 24-cell stage. C. elegans is a protostome, so the blastopore eventually forms the mouth. Involution into the blastopore begins with movement of the endoderm cells and subsequent formation of the gut, followed by the P4 germline precursor, and finally the mesoderm cells, including the cells that eventually form the pharynx. Gastrulation ends when epiboly of the hypoblasts closes the blastopore.
Post-embryonic development
Under environmental conditions favourable for reproduction, hatched larvae develop through four larval stages - L1, L2, L3, and L4 - in just 3 days at 20 °C. Under stressful conditions, such as food insufficiency, excessive population density or high temperature, C. elegans can enter an alternative third larval stage, L2d, called the dauer stage (Dauer is German for "permanent"). A specific dauer pheromone regulates entry into the dauer state. This pheromone is composed of related derivatives of the 3,6-dideoxy sugar ascarylose. Ascarosides, named after the ascarylose base, are involved in many sex-specific and social behaviors; in this way, they constitute a chemical language that C. elegans uses to modulate various phenotypes. Dauer larvae are stress-resistant; they are thin, their mouths are sealed with a characteristic dauer cuticle, and they cannot take in food. They can remain in this stage for a few months. The stage ends when improved conditions favour further growth, and the larva moults directly into the L4 stage, even though gonad development was arrested at the L2 stage.
Each stage transition is punctuated by a molt of the worm's transparent cuticle. Transitions through these stages are controlled by genes of the heterochronic pathway, an evolutionarily conserved set of regulatory factors. Many heterochronic genes code for microRNAs, which repress the expression of heterochronic transcription factors and other heterochronic miRNAs; miRNAs were originally discovered in C. elegans. Important developmental events controlled by heterochronic genes include the division and eventual syncytial fusion of the hypodermal seam cells, and their subsequent secretion of the alae in young adults. It is believed that the heterochronic pathway represents an evolutionarily conserved predecessor to circadian clocks.
Some nematodes have a fixed, genetically determined number of cells, a phenomenon known as eutely. The adult C. elegans hermaphrodite has 959 somatic cells and the male has 1033 cells, although it has been suggested that the number of their intestinal cells can increase by one to three in response to gut microbes experienced by mothers. Much of the literature describes the cell number in males as 1031, but the discovery of a pair of left and right MCM neurons increased the number by two in 2015. The number of cells does not change after cell division ceases at the end of the larval period, and subsequent growth is due solely to an increase in the size of individual cells.
Ecology
The different Caenorhabditis species occupy various nutrient- and bacteria-rich environments. They feed on the bacteria that develop in decaying organic matter (microbivory). They possess chemosensory receptors which enable the detection of bacteria and bacterial-secreted metabolites (such as iron siderophores), so that they can migrate towards their bacterial prey. Soil lacks enough organic matter to support self-sustaining populations. C. elegans can survive on a diet of a variety of bacteria, but its wild ecology is largely unknown. Most laboratory strains were taken from artificial environments such as gardens and compost piles. More recently, C. elegans has been found to thrive in other kinds of organic matter, particularly rotting fruit.
C. elegans can also ingest pollutants, especially tiny nanoplastics, which may allow it to associate with antibiotic-resistant bacteria and thereby disseminate both nanoplastics and antibiotic-resistant bacteria across the soil. C. elegans can also use different species of yeast, including Cryptococcus laurentii and C. kuetzingii, as sole sources of food. Although a bacterivore, C. elegans can be killed by a number of pathogenic bacteria, including human pathogens such as Staphylococcus aureus, Pseudomonas aeruginosa, Salmonella enterica or Enterococcus faecalis. Pathogenic bacteria can also form biofilms, whose sticky exopolymer matrix can impede C. elegans motility and cloak bacterial quorum-sensing chemoattractants from predator detection.
Invertebrates such as millipedes, insects, isopods, and gastropods can transport dauer larvae to various suitable locations. The larvae have also been seen to feed on these hosts when the hosts die.
Nematodes can survive desiccation, and in C. elegans, the mechanism for this capability has been demonstrated to be late embryogenesis abundant proteins.
Like other nematodes, C. elegans can be eaten by predatory nematodes and other omnivores, including some insects.
Viruses that affect C. elegans include the Orsay virus, the Caenorhabditis elegans Cer1 virus, and the Caenorhabditis elegans Cer13 virus.
Interactions with fungi
Wild isolates of Caenorhabditis elegans are regularly found with infections by Microsporidia fungi. One such species, Nematocida parisii, replicates in the intestines of C. elegans.
Arthrobotrys oligospora is the model organism for interactions between fungi and nematodes. It is the most common and widespread nematode-capturing fungus.
Use as a model organism
In 1963, Sydney Brenner proposed using C. elegans as a model organism, primarily to investigate neural development in animals. It is one of the simplest organisms with a nervous system. Its neurons do not fire action potentials and do not express any voltage-gated sodium channels. In the hermaphrodite, the nervous system comprises 302 neurons, whose wiring pattern has been comprehensively mapped in what is known as a connectome, and shown to be a small-world network.
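The small-world claim is a quantitative one: a network is called small-world when its clustering is much higher than, and its average path length about as short as, in a random graph of the same size and density. The snippet below is a minimal sketch of that check using networkx; the edge list is a made-up stand-in, not actual connectome data.

```python
# Minimal sketch (not from the article): testing the "small-world" property
# of a neural wiring diagram with networkx. The edge list is a hypothetical
# stand-in; real C. elegans connectome data come from published datasets.
import networkx as nx

edges = [("AVAL", "AVAR"), ("AVAL", "PVCL"), ("AVAR", "PVCR"),
         ("PVCL", "PVCR"), ("PVCL", "DVA"), ("DVA", "AVAR")]
G = nx.Graph(edges)

# A small-world network pairs high clustering with short path lengths,
# compared with a random graph of equal size and density.
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
if nx.is_connected(R):
    print(f"clustering {C:.2f} vs random {nx.average_clustering(R):.2f}")
    print(f"path length {L:.2f} vs random {nx.average_shortest_path_length(R):.2f}")
```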
Research has explored the neural and molecular mechanisms that control several behaviors of C. elegans, including chemotaxis, thermotaxis, mechanotransduction, learning, memory, and mating behaviour. In 2019 the connectome of the male was published using a technique distinct from that used for the hermaphrodite. The same paper used the new technique to redo the hermaphrodite connectome, finding 1,500 new synapses.
It has been used as a model organism to study molecular mechanisms in metabolic diseases.
Brenner also chose it because it is easy to grow in bulk populations and convenient for genetic analysis. It is a multicellular eukaryotic organism, yet simple enough to be studied in great detail. The transparency of C. elegans facilitates the study of cellular differentiation and other developmental processes in the intact organism. The spicules in the male clearly distinguish males from hermaphrodites. Strains are cheap to breed and can be frozen; when subsequently thawed, they remain viable, allowing long-term storage. Maintenance is easy compared to other multicellular model organisms. A few hundred nematodes can be kept on a single agar plate with suitable growth medium. Brenner described the use of a mutant strain of E. coli, OP50. OP50 is a uracil-requiring organism, and the deficiency of uracil in the plate prevents the overgrowth of bacteria that would otherwise obscure the worms. The use of OP50 does not demand any major laboratory safety measures, since it is non-pathogenic and easily grown in Luria-Bertani (LB) media overnight.
Cell lineage mapping
The developmental fate of every single somatic cell (959 in the adult hermaphrodite; 1031 in the adult male, before the two MCM neurons reported in 2015) has been mapped. These patterns of cell lineage are largely invariant between individuals, whereas in mammals, cell development is more dependent on cellular cues from the embryo.
As mentioned previously, the first cell divisions of early embryogenesis in C. elegans are among the best understood examples of asymmetric cell divisions, and the worm is a very popular model system for studying developmental biology.
Programmed cell death
Programmed cell death (apoptosis) eliminates many additional cells (131 in the hermaphrodite, most of which would otherwise become neurons); this "apoptotic predictability" has contributed to the elucidation of some apoptotic genes. Cell death-promoting genes and a single cell-death inhibitor have been identified.
RNA interference and gene silencing
RNA interference (RNAi) is a relatively straightforward method of disrupting the function of specific genes. Silencing the function of a gene can sometimes allow a researcher to infer its possible function. The nematode can be soaked in or injected with double-stranded RNA, or fed genetically transformed bacteria that express the double-stranded RNA of interest, whose sequence complements that of the gene the researcher wishes to disable.
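To make the complementarity concrete, here is a minimal pure-Python sketch of how one strand of an RNAi trigger relates to the other; the target sequence is a hypothetical placeholder, not a real C. elegans gene.

```python
# Minimal sketch (hypothetical sequence): a dsRNA trigger pairs the target
# mRNA sequence with its reverse complement, which is the "complementary
# sequence" described above.

COMPLEMENT = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return rna.translate(COMPLEMENT)[::-1]

target_mrna = "AUGGCUUCAGAAGUUCGA"   # placeholder fragment of a gene of interest
sense = target_mrna                  # one strand of the double-stranded RNA
antisense = reverse_complement(sense)

print(sense)
print(antisense)
```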
RNAi has emerged as a powerful tool in the study of functional genomics. C. elegans has been used to analyse gene function systematically, and such screens hold promise for mapping genetic interactions at scale.
Environmental RNAi uptake is much less efficient in other species of worms in the genus Caenorhabditis. Although injecting RNA into the body cavity induces gene silencing in most species, only C. elegans and a few other distantly related nematodes can take up RNA from the bacteria they eat for RNAi. This ability has been mapped to a single gene, sid-2, which, when inserted as a transgene into other species, allows them to take up RNA for RNAi as C. elegans does.
Cell division and cell cycle
Research into meiosis has been considerably simplified by the fact that germ cell nuclei progress through meiosis in lockstep as they move down the gonad, so a nucleus's position indicates its meiotic stage. In an early phase of meiosis, the oocytes become extremely resistant to radiation, and this resistance depends on expression of the genes rad51 and atm, which have key roles in recombinational repair. The gene mre-11 also plays a crucial role in recombinational repair of DNA damage during meiosis. Furthermore, during meiosis in C. elegans, the tumor suppressor BRCA1/BRC-1 and the structural maintenance of chromosomes SMC5/SMC6 protein complex interact to promote high-fidelity repair of DNA double-strand breaks.
A study of the frequency of outcrossing in natural populations showed that selfing is the predominant mode of reproduction in C. elegans, but that infrequent outcrossing events occur at a rate around 1%. Meioses that result in selfing are unlikely to contribute significantly to beneficial genetic variability, but these meioses may provide the adaptive benefit of recombinational repair of DNA damages that arise, especially under stressful conditions.
Drug abuse and addiction
Nicotine dependence can also be studied using C. elegans because it exhibits behavioral responses to nicotine that parallel those of mammals. These responses include acute response, tolerance, withdrawal, and sensitization.
Biological databases
As for most model organisms, scientists in the field curate a dedicated online database; for C. elegans, this is WormBase, which attempts to collate all published information on C. elegans and other related nematodes. Information on C. elegans is also included with data on other model organisms in the Alliance of Genome Resources.
Ageing
C. elegans has been a model organism for research into ageing; for example, inhibition of an insulin-like growth factor signaling pathway has been shown to increase adult lifespan threefold, whereas glucose feeding promotes oxidative stress and halves adult lifespan. Similarly, induced degradation of an insulin/IGF-1 receptor late in life dramatically extended the life expectancy of worms.
Long-lived mutants of C. elegans were demonstrated to be resistant to oxidative stress and UV light. These long-lived mutants had a higher DNA repair capability than wild-type C. elegans, and knockdown of the nucleotide excision repair gene Xpa-1 increased their sensitivity to UV and reduced their lifespan. These findings indicate that DNA repair capability underlies longevity. Consistent with the idea that oxidative DNA damage causes aging, it was found that in C. elegans, exosome-mediated delivery of superoxide dismutase (SOD) reduces the level of reactive oxygen species (ROS) and significantly extends lifespan, delaying aging under normal as well as hostile conditions.
The capacity to repair DNA damage by the process of nucleotide excision repair declines with age.
C. elegans exposed to 5 mM lithium chloride (LiCl) showed lengthened lifespans. Exposure to 10 μM LiCl reduced mortality, but 1 μM did not.
C. elegans has been instrumental in the identification of the functions of genes implicated in Alzheimer's disease, such as presenilin. Moreover, extensive research on C. elegans has identified RNA-binding proteins as essential factors during germline and early embryonic development.
Telomere length has been shown to correlate with increased lifespan and delayed onset of senescence in a multitude of organisms, from C. elegans to humans, and telomeres show an interesting behaviour in C. elegans. The worm maintains its telomeres in the canonical way common to most eukaryotes (in contrast, Drosophila melanogaster is noteworthy in using retrotransposons to maintain its telomeres), but upon knock-out of the catalytic subunit of telomerase (trt-1), C. elegans can gain the ability of alternative lengthening of telomeres (ALT). It was the first eukaryote shown to gain ALT functionality after knock-out of the canonical telomerase pathway. ALT is also observed in about 10-15% of all clinical cancers, making C. elegans a prime candidate for ALT research. Bayat et al. showed a paradoxical shortening of telomeres during trt-1 over-expression, which led to near sterility, even though the worms exhibited a slight increase in lifespan despite their shortened telomeres.
Sleep
C. elegans is notable in animal sleep studies as the most primitive organism to display sleep-like states. In C. elegans, a lethargus phase occurs shortly before each moult. C. elegans has also been demonstrated to sleep after exposure to physical stress, including heat shock, UV radiation, and bacterial toxins.
Sensory biology
While the worm has no eyes, it has been found to be sensitive to light due to a third type of light-sensitive animal photoreceptor protein, LITE-1, which is 10 to 100 times more efficient at absorbing light than the other two types of photopigments (opsins and cryptochromes) found in the animal kingdom.
C. elegans is remarkably adept at tolerating acceleration: according to geneticists at the University of São Paulo in Brazil, it can withstand 400,000 g. In one experiment, 96% of the worms were still alive, with no adverse effects, after an hour in an ultracentrifuge.
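For a sense of scale, the rotor speed corresponding to a given g-force can be estimated from the standard relative centrifugal force relation; the 8 cm rotor radius below is an assumed value, not one reported for this experiment.

```python
# Back-of-envelope sketch: rotor speed needed to reach a given g-force.
# Uses the standard relation RCF = 1.118e-5 * r_cm * rpm**2, where r_cm is
# the rotor radius in centimetres. The 8 cm radius is an assumption for
# illustration, not a figure from the experiment described above.
import math

def rpm_for_rcf(rcf_g: float, radius_cm: float) -> float:
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

print(f"{rpm_for_rcf(400_000, 8.0):,.0f} rpm")  # roughly 67,000 rpm
```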
Drug library screening
Owing to its small size and short life cycle, C. elegans is one of the few organisms that can support in vivo high-throughput screening (HTS) platforms for evaluating chemical libraries of drugs and toxins in a multicellular organism. Orthologous phenotypes observable in C. elegans for human diseases can enable drug library profiling that informs the potential repurposing of existing approved drugs for therapeutic indications in humans.
Spaceflight research
C. elegans made news when specimens were discovered to have survived the Space Shuttle Columbia disaster in February 2003. Later, in January 2009, live samples of C. elegans from the University of Nottingham were announced to be spending two weeks on the International Space Station that October, in a space research project to explore the effects of zero gravity on muscle development and physiology. The research was primarily about the genetic basis of muscle atrophy, which is relevant to spaceflight and to being bed-ridden, geriatric, or diabetic. Descendants of the worms aboard Columbia in 2003 were launched into space on Endeavour for the STS-134 mission. Additional experiments on muscular dystrophy during spaceflight were carried out on board the ISS starting in 2018. These showed that genes affecting muscle attachment were expressed less in space; however, it has yet to be seen whether this affects muscle strength.
Genetics
Genome
C. elegans was the first multicellular organism to have its whole genome sequenced. The sequence was published in 1998, although some small gaps remained; the last gap was closed by October 2002. In the run-up to the whole-genome sequence, the C. elegans Sequencing Consortium (the C. elegans Genome Project) released several partial scans, including Wilson et al. 1994.
Size and gene content
The C. elegans genome is about 100 million base pairs long and consists of six pairs of chromosomes in hermaphrodites, or five pairs of autosomes plus a single X chromosome (XO) in males, together with a mitochondrial genome. Its gene density is about one gene per five kilobase pairs. Introns make up 26% of the genome and intergenic regions 47%. Many genes are arranged in clusters, and how many of these are operons is unclear. C. elegans and other nematodes are among the few eukaryotes currently known to have operons; others include trypanosomes, flatworms (notably the trematode Schistosoma mansoni), and the tunicate Oikopleura dioica, a primitive chordate. Many more organisms may yet be shown to have operons.
The genome contains an estimated 20,470 protein-coding genes. About 35% of C. elegans genes have human homologs. Remarkably, human genes have repeatedly been shown to functionally replace their C. elegans homologs when introduced into C. elegans. Conversely, many C. elegans genes can function similarly to mammalian genes.
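As a quick consistency check on the figures above (a sketch, not a calculation from the article): dividing the ~100 Mb genome by ~20,470 protein-coding genes gives roughly 4.9 kb per gene, matching the quoted density of about one gene per five kilobase pairs.

```python
# Consistency check of the quoted figures: a ~100 Mb genome with ~20,470
# protein-coding genes works out to roughly one gene per 5 kb.
genome_bp = 100_000_000
protein_coding_genes = 20_470

bp_per_gene = genome_bp / protein_coding_genes
print(f"about one gene per {bp_per_gene / 1000:.1f} kb")  # ~4.9 kb
```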
The number of known RNA genes in the genome has increased greatly due to the 2006 discovery of a new class called 21U-RNA genes, and the genome is now believed to contain more than 16,000 RNA genes, up from as few as 1,300 in 2005.
Scientific curators continue to appraise the set of known genes; new gene models continue to be added and incorrect ones modified or removed.
The reference C. elegans genome sequence continues to change as new evidence reveals errors in the original sequencing. Most changes are minor, adding or removing only a few base pairs of DNA. For example, the WS202 release of WormBase (April 2009) added two base pairs to the genome sequence. Sometimes, more extensive changes are made as noted in the WS197 release of December 2008, which added a region of over 4,300 bp to the sequence.
Wilson et al. 1994, from the C. elegans Genome Project, identified CelVav and a von Willebrand factor A domain, and together with Wilson et al. 1998 provided the first credible evidence for an aryl hydrocarbon receptor (AHR) homolog outside of vertebrates.
Related genomes
In 2003, the genome sequence of the related nematode C. briggsae was also determined, allowing researchers to study the comparative genomics of these two organisms. The genome sequences of more nematodes from the same genus, e.g. C. remanei, C. japonica and C. brenneri (named after Brenner), have also been studied using the shotgun sequencing technique, and these sequences have now been completed.
Other genetic studies
As of 2014, C. elegans is the most basal species in the 'Elegans' group (10 species) of the 'Elegans' supergroup (17 species) in phylogenetic studies. It forms a branch of its own, distinct from all other species of the group.
The Tc1 transposon is a DNA transposon active in C. elegans.
Scientific community
Several scientists have won the Nobel Prize in Physiology or Medicine for scientific discoveries made working with C. elegans. It was awarded in 2002 to Sydney Brenner, H. Robert Horvitz, and John Sulston for their work on the genetics of organ development and programmed cell death, in 2006 to Andrew Fire and Craig C. Mello for their discovery of RNA interference, and in 2024 to Victor Ambros and Gary Ruvkun for their discovery of microRNA and its role in gene regulation.
In 2008, Martin Chalfie shared a Nobel Prize in Chemistry for his work on green fluorescent protein; some of the research involved the use of C. elegans.
Many scientists who research C. elegans trace a close connection to Sydney Brenner, with whom almost all research in this field began in the 1970s: they worked as postdoctoral or postgraduate researchers in Brenner's lab or in the lab of someone who previously worked with Brenner. Most who worked in his lab later established their own worm research labs, creating a fairly well-documented "lineage" of C. elegans scientists, which was recorded in some detail in the WormBase database at the 2003 International Worm Meeting.
| Biology and health sciences | Ecdysozoa | null |
57547 | https://en.wikipedia.org/wiki/Xenopus | Xenopus | Xenopus () (Gk., ξενος, xenos = strange, πους, pous = foot, commonly known as the clawed frog) is a genus of highly aquatic frogs native to sub-Saharan Africa. Twenty species are currently described within it. The two best-known species of this genus are Xenopus laevis and Xenopus tropicalis, which are commonly studied as model organisms for developmental biology, cell biology, toxicology, neuroscience and for modelling human disease and birth defects.
The genus is also known for its polyploidy, with some species having up to 12 sets of chromosomes.
Characteristics
Xenopus laevis is a rather inactive creature. It is incredibly hardy and can live up to 15 years. At times the ponds in which Xenopus laevis is found dry up, compelling it, in the dry season, to burrow into the mud, leaving a tunnel for air; it may lie dormant for up to a year. If the pond dries up in the rainy season, Xenopus laevis may migrate long distances to another pond, kept hydrated by the rains. It is an adept swimmer, swimming in all directions with ease. It is barely able to hop, but it is able to crawl. It spends most of its time underwater and comes to the surface to breathe. Respiration is predominantly through its well-developed lungs; there is little cutaneous respiration.
Description
All species of Xenopus have flattened, somewhat egg-shaped and streamlined bodies, and very slippery skin (because of a protective mucus covering). The frog's skin is smooth, but with a lateral line sensory organ that has a stitch-like appearance. The frogs are all excellent swimmers and have powerful, fully webbed toes, though the fingers lack webbing. Three of the toes on each foot have conspicuous black claws.
The frog's eyes are on top of the head, looking upwards, and the pupils are circular. Xenopus has no moveable eyelids, no eardrums, and no free tongue (it is completely attached to the floor of the mouth), similar to Pipa pipa, the common Suriname toad.
Unlike most amphibians, they have no haptoglobin in their blood.
Behaviour
Xenopus species are entirely aquatic, though they have been observed migrating on land to nearby bodies of water during times of drought or in heavy rain. They are usually found in lakes, rivers, swamps, potholes in streams, and man-made reservoirs.
Adult frogs are usually both predators and scavengers, and since their tongues are unusable for feeding, the frogs use their small forelimbs to aid in the feeding process. Since they also lack vocal sacs, they make clicks (brief pulses of sound) underwater (again similar to Pipa pipa). Males establish a hierarchy of social dominance in which primarily one male has the right to make the advertisement call. The females of many species produce a release call, and Xenopus laevis females produce an additional call when sexually receptive and soon to lay eggs. Xenopus species are also active during the twilight (crepuscular) hours.
During breeding season, the males develop ridge-like nuptial pads (black in color) on their fingers to aid in grasping the female. The frogs' mating embrace is inguinal, meaning the male grasps the female around her waist.
Species
Extant species
Xenopus allofraseri
Xenopus amieti (volcano clawed frog)
Xenopus andrei (Andre's clawed frog)
Xenopus borealis (Marsabit clawed frog)
Xenopus boumbaensis (Mawa clawed frog)
Xenopus calcaratus
Xenopus clivii (Eritrea clawed frog)
Xenopus epitropicalis (Cameroon clawed frog)
Xenopus eysoole
Xenopus fischbergi
Xenopus fraseri (Fraser's platanna)
Xenopus gilli (Cape platanna)
Xenopus itombwensis
Xenopus kobeli
Xenopus laevis (African clawed frog or common platanna)
Xenopus largeni (Largen's clawed frog)
Xenopus lenduensis (Lendu Plateau clawed frog)
Xenopus longipes (Lake Oku clawed frog)
Xenopus mellotropicalis
Xenopus muelleri (Müller's platanna)
Xenopus parafraseri
Xenopus petersii (Peters' platanna)
Xenopus poweri
Xenopus pygmaeus (Bouchia clawed frog)
Xenopus ruwenzoriensis (Uganda clawed frog)
Xenopus tropicalis (western clawed frog)
Xenopus vestitus (Kivu clawed frog)
Xenopus victorianus (Lake Victoria clawed frog)
Xenopus wittei (De Witte's clawed frog)
Fossil species
The following fossil species have been described:
†Xenopus arabiensis - Oligocene Yemen Volcanic Group, Yemen
†Xenopus hasaunus
†Xenopus romeri - Itaboraian Itaboraí Formation, Brazil
†Xenopus stromeri
cf. Xenopus sp. - Campanian - Los Alamitos Formation, Argentina
Xenopus (Xenopus) sp. - Late Oligocene Nsungwe Formation, Tanzania
Xenopus sp. - Miocene Morocco
Xenopus sp. - Early Pleistocene Olduvai Formation, Tanzania
Model organism for biological research
Like many other frogs, they are often used in the laboratory as research subjects. Xenopus embryos and eggs are a popular model system for a wide variety of biological studies. The animal is used because of its powerful combination of experimental tractability and close evolutionary relationship with humans, at least compared to many other model organisms.
Xenopus has long been an important tool for in vivo studies in molecular, cell, and developmental biology of vertebrate animals. However, the wide breadth of Xenopus research stems from the additional fact that cell-free extracts made from Xenopus are a premier in vitro system for studies of fundamental aspects of cell and molecular biology. Thus, Xenopus is a vertebrate model system that allows for high-throughput in vivo analyses of gene function and high-throughput biochemistry. Furthermore, Xenopus oocytes are a leading system for studies of ion transport and channel physiology. Xenopus is also a unique system for analyses of genome evolution and whole genome duplication in vertebrates, as different Xenopus species form a ploidy series formed by interspecific hybridization.
In 1931, Lancelot Hogben noted that Xenopus laevis females ovulated when injected with the urine of pregnant women. This led to a pregnancy test that was later refined by South African researchers Hillel Abbe Shapiro and Harry Zwarenstein. A female Xenopus frog injected with a woman's urine was put in a jar with a little water. If eggs were in the water a day later it meant the woman was pregnant. Four years after the first Xenopus test, Zwarenstein's colleague, Dr Louis Bosman, reported that the test was accurate in more than 99% of cases. From the 1930s to the 1950s, thousands of frogs were exported across the world for use in these pregnancy tests.
The National Xenopus Resource of the Marine Biological Laboratory is an in vivo repository for transgenic and mutant strains and a training center.
Online Model Organism Database
Xenbase is the Model Organism Database (MOD) for both Xenopus laevis and Xenopus tropicalis.
Investigation of human disease genes
All modes of Xenopus research (embryos, cell-free extracts, and oocytes) are commonly used in direct studies of human disease genes and to study the basic science underlying the initiation and progression of cancer. Xenopus embryos for in vivo studies of human disease gene function: Xenopus embryos are large and easily manipulated, and moreover, thousands of embryos can be obtained in a single day. Indeed, Xenopus was the first vertebrate animal for which methods were developed to allow rapid analysis of gene function using misexpression (by mRNA injection); injection of mRNA into Xenopus oocytes is what led to the cloning of interferon. Moreover, the use of morpholino-antisense oligonucleotides for gene knockdowns in vertebrate embryos, now widely used, was first developed by Janet Heasman using Xenopus.
In recent years, these approaches have played an important role in studies of human disease genes. The mechanism of action of several genes mutated in human cystic kidney disorders (e.g. nephronophthisis) has been extensively studied in Xenopus embryos, shedding new light on the link between these disorders, ciliogenesis, and Wnt signaling. Xenopus embryos have also provided a rapid test bed for validating newly discovered disease genes. For example, studies in Xenopus confirmed and elucidated the role of PYCR1 in cutis laxa with progeroid features.
Transgenic Xenopus for studying transcriptional regulation of human disease genes: Xenopus embryos develop rapidly, so transgenesis in Xenopus is a rapid and effective method for analyzing genomic regulatory sequences. In a recent study, mutations in the SMAD7 locus were revealed to associate with human colorectal cancer. The mutations lay in conserved, but noncoding sequences, suggesting these mutations impacted the patterns of SMAD7 transcription. To test this hypothesis, the authors used Xenopus transgenesis, and revealed this genomic region drove expression of GFP in the hindgut. Moreover, transgenics made with the mutant version of this region displayed substantially less expression in the hindgut.
Xenopus cell-free extracts for biochemical studies of proteins encoded by human disease genes: A unique advantage of the Xenopus system is that cytosolic extracts contain both soluble cytoplasmic and nuclear proteins (including chromatin proteins). This is in contrast to cellular extracts prepared from somatic cells with already distinct cellular compartments. Xenopus egg extracts have provided numerous insights into the basic biology of cells with particular impact on cell division and the DNA transactions associated with it (see below).
Studies in Xenopus egg extracts have also yielded critical insights into the mechanism of action of human disease genes associated with genetic instability and elevated cancer risk, such as ataxia telangiectasia, BRCA1 inherited breast and ovarian cancer, Nbs1 Nijmegen breakage syndrome, RecQL4 Rothmund-Thomson syndrome, c-Myc oncogene and FANC proteins (Fanconi anemia).
Xenopus oocytes for studies of gene expression and channel activity related to human disease: Yet another strength of Xenopus is the ability to rapidly and easily assay the activity of channel and transporter proteins using expression in oocytes. This application has led to important insights into human disease, including studies related to trypanosome transmission, epilepsy with ataxia and sensorineural deafness, catastrophic cardiac arrhythmia (long-QT syndrome), and megalencephalic leukoencephalopathy.
Gene editing by the CRISPR/Cas system has recently been demonstrated in Xenopus tropicalis and Xenopus laevis. This technique is being used to screen the effects of human disease genes in Xenopus, and the system is efficient enough that the effects can be studied within the same embryos that were manipulated.
Investigation of fundamental biological processes
Signal transduction: Xenopus embryos and cell-free extracts are widely used for basic research in signal transduction. In just the last few years, Xenopus embryos have provided crucial insights into the mechanisms of TGF-beta and Wnt signal transduction. For example, Xenopus embryos were used to identify the enzymes that control ubiquitination of Smad4, and to demonstrate direct links between TGF-beta superfamily signaling pathways and other important networks, such as the MAP kinase pathway and the Wnt pathway. Moreover, new methods using egg extracts revealed novel, important targets of the Wnt/GSK3 destruction complex.
Cell division: Xenopus egg extracts have allowed the study of many complicated cellular events in vitro. Because egg cytosol can support successive cycling between mitosis and interphase in vitro, it has been critical to diverse studies of cell division. For example, the small GTPase Ran was first found to regulate interphase nuclear transport, but Xenopus egg extracts revealed the critical role of Ran GTPase in mitosis independent of its role in interphase nuclear transport. Similarly, the cell-free extracts were used to model nuclear envelope assembly from chromatin, revealing the function of RanGTPase in regulating nuclear envelope reassembly after mitosis. More recently, using Xenopus egg extracts, it was possible to demonstrate the mitosis-specific function of the nuclear lamin B in regulating spindle morphogenesis and to identify new proteins that mediate kinetochore attachment to microtubules. Cell-free systems have recently become practical investigatory tools, and Xenopus oocytes are often the source of the extracts used. This has produced significant results in understanding mitotic oscillation and microtubules.
Embryonic development: Xenopus embryos are widely used in developmental biology. A summary of advances made by Xenopus research in recent years includes:
Epigenetics of cell fate specification and epigenome reference maps
microRNA in germ layer patterning and eye development
Link between Wnt signaling and telomerase
Development of the vasculature
Gut morphogenesis
Contact inhibition and neural crest cell migration and the generation of neural crest from pluripotent blastula cells
Role of Notch: Dorsky et al. 1995 elucidated a pattern of expression followed by downregulation
DNA replication: Xenopus cell-free extracts also support the synchronous assembly and the activation of origins of DNA replication. They have been instrumental in characterizing the biochemical function of the prereplicative complex, including MCM proteins.
DNA damage response: Cell-free extracts have been instrumental to unravel the signaling pathways activated in response to DNA double-strand breaks (ATM), replication fork stalling (ATR) or DNA interstrand crosslinks (FA proteins and ATR). Notably, several mechanisms and components of these signal transduction pathways were first identified in Xenopus.
Apoptosis: Xenopus oocytes provide a tractable model for biochemical studies of apoptosis. Oocytes were recently used to study the biochemical mechanisms of caspase-2 activation; importantly, this mechanism turns out to be conserved in mammals.
Regenerative medicine: In recent years, tremendous interest in developmental biology has been stoked by the promise of regenerative medicine. Xenopus has played a role here as well. For example, expression of seven transcription factors in pluripotent Xenopus cells rendered those cells able to develop into functional eyes when implanted into Xenopus embryos, providing potential insights into the repair of retinal degeneration or damage. In a vastly different study, Xenopus embryos were used to study the effects of tissue tension on morphogenesis, an issue that will be critical for in vitro tissue engineering. Xenopus species are important model organisms for the study of spinal cord regeneration because, while capable of regeneration in their larval stages, they lose this capacity in early metamorphosis.
Physiology: The directional beating of multiciliated cells is essential to development and homeostasis in the central nervous system, the airway, and the oviduct. The multiciliated cells of the Xenopus epidermis have recently been developed as the first in vivo test-bed for live-cell studies of such ciliated tissues, and these studies have provided important insights into the biomechanical and molecular control of directional beating.
Actin: Another result from cell-free Xenopus oocyte extracts has been improved understanding of actin.
Small molecule screens to develop novel therapies
Because huge amounts of material are easily obtained, all modalities of Xenopus research are now being used for small-molecule-based screens.
Chemical genetics of vascular growth in Xenopus tadpoles: Given the important role of neovascularization in cancer progression, Xenopus embryos were recently used to identify new small-molecule inhibitors of blood vessel growth. Notably, compounds identified in Xenopus were effective in mice. Frog embryos also figured prominently in a study that used evolutionary principles to identify a novel vascular disrupting agent that may have chemotherapeutic potential; that work was featured in the New York Times' Science Times.
In vivo testing of potential endocrine disruptors in transgenic Xenopus embryos: A high-throughput assay for thyroid disruption has recently been developed using transgenic Xenopus embryos.
Small molecule screens in Xenopus egg extracts: Egg extracts provide ready analysis of molecular biological processes and can be rapidly screened. This approach was used to identify novel inhibitors of proteasome-mediated protein degradation and of DNA repair enzymes.
Genetic studies
While Xenopus laevis is the most commonly used species for developmental biology studies, genetic studies, especially forward genetic studies, can be complicated by their pseudotetraploid genome. Xenopus tropicalis provides a simpler model for genetic studies, having a diploid genome.
Gene expression knockdown techniques
The expression of genes can be reduced by a variety of means, for example by using antisense oligonucleotides targeting specific mRNA molecules. DNA oligonucleotides complementary to specific mRNA molecules are often chemically modified to improve their stability in vivo. The chemical modifications used for this purpose include phosphorothioate, 2'-O-methyl, morpholino, MEA phosphoramidate and DEED phosphoramidate.
Morpholino oligonucleotides
Morpholino oligos are used in both X. laevis and X. tropicalis to probe the function of a protein by observing the results of eliminating the protein's activity. For example, a set of X. tropicalis genes has been screened in this fashion.
Morpholino oligos (MOs) are short antisense oligos made of modified nucleotides. MOs can knock down gene expression by inhibiting mRNA translation, blocking RNA splicing, or inhibiting miRNA activity and maturation. MOs have proven to be effective knockdown tools in developmental biology experiments and RNA-blocking reagents for cells in culture. MOs do not degrade their RNA targets, but instead act in an RNase H-independent manner via a steric-blocking mechanism. They remain stable in cells and do not induce immune responses. Microinjection of MOs into early Xenopus embryos can suppress gene expression in a targeted manner.
Like all antisense approaches, different MOs can have different efficacy, and may cause off-target, non-specific effects. Often, several MOs need to be tested to find an effective target sequence. Rigorous controls are used to demonstrate specificity, including:
Phenocopy of genetic mutation
Verification of reduced protein by western blot or immunostaining
mRNA rescue by adding back an mRNA immune to the MO
Use of two different MOs (translation-blocking and splice-blocking)
Injection of control MOs
Xenbase provides a searchable catalog of over 2000 MOs that have been used in Xenopus research. The data are searchable by sequence, gene symbol, and various synonyms (as used in different publications). Xenbase maps the MOs to the latest Xenopus genomes in GBrowse, predicts 'off-target' hits, and lists all Xenopus literature in which each morpholino has been published.
| Biology and health sciences | Frogs and toads | Animals |
57559 | https://en.wikipedia.org/wiki/Predation | Predation | Predation is a biological interaction in which one organism, the predator, kills and eats another organism, its prey. It is one of a family of common feeding behaviours that includes parasitism and micropredation (which usually do not kill the host) and parasitoidism (which always does, eventually). It is distinct from scavenging on dead prey, though many predators also scavenge; it overlaps with herbivory, as seed predators and destructive frugivores are predators.
Predation behavior varies significantly depending on the organism. Many predators, especially carnivores, have evolved distinct hunting strategies. Pursuit predation involves the active search for and pursuit of prey, whilst ambush predators instead wait for prey to present an opportunity for capture, and often use stealth or aggressive mimicry. Other predators are opportunistic or omnivorous and only practice predation occasionally.
Most obligate carnivores are specialized for hunting. They may have acute senses such as vision, hearing, or smell for prey detection. Many predatory animals have sharp claws or jaws to grip, kill, and cut up their prey. Physical strength is usually necessary for large carnivores such as big cats to kill larger prey. Other adaptations include stealth, endurance, intelligence, social behaviour, and aggressive mimicry that improve hunting efficiency.
Predation has a powerful selective effect on prey, and the prey develops anti-predator adaptations such as warning colouration, alarm calls and other signals, camouflage, mimicry of well-defended species, and defensive spines and chemicals. Sometimes predator and prey find themselves in an evolutionary arms race, a cycle of adaptations and counter-adaptations. Predation has been a major driver of evolution since at least the Cambrian period.
Definition
At the most basic level, predators kill and eat other organisms. However, the concept of predation is broad, defined differently in different contexts, and includes a wide variety of feeding methods; moreover, some relationships that result in the prey's death are not necessarily called predation. A parasitoid, such as an ichneumon wasp, lays its eggs in or on its host; the eggs hatch into larvae, which eat the host, and it inevitably dies. Zoologists generally call this a form of parasitism, though conventionally parasites are thought not to kill their hosts. A predator can be defined to differ from a parasitoid in that it has many prey, captured over its lifetime, where a parasitoid's larva has just one, or at least has its food supply provisioned for it on just one occasion.
There are other difficult and borderline cases. Micropredators are small animals that, like predators, feed entirely on other organisms; they include fleas and mosquitoes that consume blood from living animals, and aphids that consume sap from living plants. However, since they typically do not kill their hosts, they are now often thought of as parasites. Animals that graze on phytoplankton or mats of microbes are predators, as they consume and kill their food organisms, while herbivores that browse leaves are not, as their food plants usually survive the assault. When animals eat seeds (seed predation or granivory) or eggs (egg predation), they are consuming entire living organisms, which by definition makes them predators.
Scavengers, organisms that only eat organisms found already dead, are not predators, but many predators such as the jackal and the hyena scavenge when the opportunity arises. Among invertebrates, social wasps such as yellowjackets are both hunters and scavengers of other insects.
Taxonomic range
While examples of predators among mammals and birds are well known, predators can be found in a broad range of taxa, including arthropods. They are common among insects, including mantids, dragonflies, lacewings and scorpionflies. In some species such as the alderfly, only the larvae are predatory (the adults do not eat). Spiders are predatory, as are other terrestrial invertebrates such as scorpions; centipedes; some mites, snails and slugs; nematodes; and planarian worms. In marine environments, most cnidarians (e.g., jellyfish, hydroids), ctenophora (comb jellies), echinoderms (e.g., sea stars, sea urchins, sand dollars, and sea cucumbers) and flatworms are predatory. Among crustaceans, lobsters, crabs, shrimps and barnacles are predators, and in turn crustaceans are preyed on by nearly all cephalopods (including octopuses, squid and cuttlefish).
Seed predation is restricted to mammals, birds, and insects but is found in almost all terrestrial ecosystems. Egg predation includes both specialist egg predators such as some colubrid snakes and generalists such as foxes and badgers that opportunistically take eggs when they find them.
Some plants, like the pitcher plant, the Venus flytrap and the sundew, are carnivorous and consume insects. Methods of predation by plants vary greatly but often involve a food trap, mechanical stimulation, and electrical impulses to catch and consume prey. Some carnivorous fungi catch nematodes using either active traps in the form of constricting rings, or passive traps with adhesive structures.
Many species of protozoa (eukaryotes) and bacteria (prokaryotes) prey on other microorganisms; the feeding mode is evidently ancient, and evolved many times in both groups. Among freshwater and marine zooplankton, whether single-celled or multi-cellular, predatory grazing on phytoplankton and smaller zooplankton is common, and found in many species of nanoflagellates, dinoflagellates, ciliates, rotifers, a diverse range of meroplankton animal larvae, and two groups of crustaceans, namely copepods and cladocerans.
Foraging
To feed, a predator must search for, pursue and kill its prey. These actions form a foraging cycle. The predator must decide where to look for prey based on its geographical distribution; and once it has located prey, it must assess whether to pursue it or to wait for a better choice. If it chooses pursuit, its physical capabilities determine the mode of pursuit (e.g., ambush or chase). Having captured the prey, it may also need to expend energy handling it (e.g., killing it, removing any shell or spines, and ingesting it).
Search
Predators have a choice of search modes ranging from sit-and-wait to active, widely foraging search. The sit-and-wait method is most suitable if the prey are dense and mobile and the predator has low energy requirements. Wide foraging expends more energy and is used when prey are sedentary or sparsely distributed. There is a continuum of search modes, with intervals between periods of movement ranging from seconds to months. Sharks, sunfish, insectivorous birds and shrews are almost always moving, while web-building spiders, aquatic invertebrates, praying mantises and kestrels rarely move. In between, plovers and other shorebirds, freshwater fish including crappies, and the larvae of coccinellid beetles (ladybirds) alternate between actively searching and scanning the environment.
Prey distributions are often clumped, and predators respond by looking for patches where prey is dense and then searching within patches. Where food is found in patches, such as rare shoals of fish in a nearly empty ocean, the search stage requires the predator to travel for a substantial time, and to expend a significant amount of energy, to locate each food patch. For example, the black-browed albatross regularly makes long foraging flights, and breeding birds gathering food for their young may travel to the species' maximum foraging range. With static prey, some predators can learn suitable patch locations and return to them at intervals to feed. The optimal foraging strategy for search has been modelled using the marginal value theorem.
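For readers who want the underlying model, the marginal value theorem can be stated briefly; this is the standard textbook formulation, not a detail given in this article. If a predator gains energy $g(t)$ after foraging for time $t$ in a patch, and patches are separated by an average travel time $\tau$, its long-term rate of energy gain is

$$R(t) = \frac{g(t)}{\tau + t}.$$

This rate is maximized at the leaving time $t^*$ satisfying

$$g'(t^*) = \frac{g(t^*)}{\tau + t^*},$$

that is, the predator should leave a patch when its instantaneous gain rate in that patch falls to the average gain rate over the whole environment.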
Search patterns often appear random. One such pattern is the Lévy walk, which tends to involve clusters of short steps with occasional long steps. It is a good fit to the behaviour of a wide variety of organisms, including bacteria, honeybees, sharks and human hunter-gatherers.
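A minimal simulation makes the step-length pattern easy to see; the exponent and scale below are illustrative assumptions, not fitted values from any study.

```python
# Minimal sketch of a 2-D Lévy walk (illustrative parameters): step lengths
# follow a heavy-tailed power law P(l) ~ l**(-mu) with 1 < mu <= 3, and
# headings are uniformly random.
import math
import random

def levy_walk(n_steps: int, mu: float = 2.0, l_min: float = 1.0, seed: int = 0):
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                    # u in (0, 1]
        step = l_min * u ** (-1.0 / (mu - 1.0))   # inverse-transform sampling
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path

# Mostly short steps, punctuated by rare long relocations.
print(levy_walk(1000)[-1])
```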
Assessment
Having found prey, a predator must decide whether to pursue it or keep searching. The decision depends on the costs and benefits involved. A bird foraging for insects spends a lot of time searching but capturing and eating them is quick and easy, so the efficient strategy for the bird is to eat every palatable insect it finds. By contrast, a predator such as a lion or falcon finds its prey easily but capturing it requires a lot of effort. In that case, the predator is more selective.
One of the factors to consider is size. Prey that is too small may not be worth the trouble for the amount of energy it provides; too large, and it may be too difficult to capture. For example, a mantid captures prey with its forelegs, which are optimized for grabbing prey of a certain size; mantids are reluctant to attack prey that is far from that size. There is a positive correlation between the size of a predator and the size of its prey.
A predator may assess a patch and decide whether to spend time searching for prey in it. This may involve some knowledge of the preferences of the prey; for example, ladybirds can choose a patch of vegetation suitable for their aphid prey.
Capture
To capture prey, predators have a spectrum of pursuit modes that range from overt chase (pursuit predation) to a sudden strike on nearby prey (ambush predation). Another strategy in between ambush and pursuit is ballistic interception, where a predator observes and predicts a prey's motion and then launches its attack accordingly.
Ambush
Ambush or sit-and-wait predators are carnivorous animals that capture prey by stealth or surprise. In animals, ambush predation is characterized by the predator scanning the environment from a concealed position until a prey is spotted, then rapidly executing a fixed surprise attack. Vertebrate ambush predators include frogs and fish such as the angel shark, the northern pike and the eastern frogfish. Among the many invertebrate ambush predators are trapdoor spiders and Australian crab spiders on land, and mantis shrimps in the sea. Ambush predators often construct a burrow in which to hide, improving concealment at the cost of reducing their field of vision. Some ambush predators also use lures to attract prey within striking range. The capturing movement has to be rapid to trap the prey, given that the attack is not modifiable once launched.
Ballistic interception
Ballistic interception is the strategy where a predator observes the movement of a prey, predicts its motion, works out an interception path, and then attacks the prey on that path. This differs from ambush predation in that the predator adjusts its attack according to how the prey is moving. Ballistic interception involves a brief period for planning, giving the prey an opportunity to escape. Some frogs wait until snakes have begun their strike before jumping, reducing the time available to the snake to recalibrate its attack, and maximising the angular adjustment that the snake would need to make to intercept the frog in real time. Ballistic predators include insects such as dragonflies, and vertebrates such as archerfish (attacking with a jet of water), chameleons (attacking with their tongues), and some colubrid snakes.
Pursuit
In pursuit predation, predators chase fleeing prey. If the prey flees in a straight line, capture depends only on the predator's being faster than the prey. If the prey manoeuvres by turning as it flees, the predator must react in real time to calculate and follow a new intercept path, such as by parallel navigation, as it closes on the prey. Many pursuit predators use camouflage to approach the prey as close as possible unobserved (stalking) before starting the pursuit. Pursuit predators include terrestrial mammals such as humans, African wild dogs, spotted hyenas and wolves; marine predators such as dolphins, orcas and many predatory fishes, such as tuna; predatory birds (raptors) such as falcons; and insects such as dragonflies.
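As a toy illustration of this real-time re-aiming, the sketch below simulates a simple "pure pursuit" rule in which the predator constantly heads toward the prey's current position; the speeds, the prey's curved escape path, and the rule itself are illustrative assumptions, not a model taken from the predation literature cited here.

```python
# Toy sketch of pursuit by continuous re-aiming ("pure pursuit"): at each
# time step the predator heads toward the prey's current position, so a
# turning prey forces the predator to recompute its course. All values are
# arbitrary illustrative choices.
import math

def pursue(pred, prey, pred_speed=1.2, prey_speed=1.0, dt=0.1, max_t=100.0):
    px, py = pred
    qx, qy = prey
    t = 0.0
    while t < max_t:
        # Prey flees on a gently curving path.
        heading = 0.5 * t
        qx += prey_speed * math.cos(heading) * dt
        qy += prey_speed * math.sin(heading) * dt
        # Predator re-aims at the prey's current position every step.
        dx, dy = qx - px, qy - py
        dist = math.hypot(dx, dy)
        if dist < pred_speed * dt:
            return t  # capture time
        px += pred_speed * dx / dist * dt
        py += pred_speed * dy / dist * dt
        t += dt
    return None  # prey escaped within the simulated window

print(pursue(pred=(0.0, 0.0), prey=(5.0, 0.0)))
```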
An extreme form of pursuit is endurance or persistence hunting, in which the predator tires out the prey by following it over a long distance, sometimes for hours at a time. The method is used by human hunter-gatherers and by canids such as African wild dogs and domestic hounds. The African wild dog is an extreme persistence predator, tiring out individual prey by following them for many miles at relatively low speed.
A specialised form of pursuit predation is the lunge feeding of baleen whales. These very large marine predators feed on plankton, especially krill, diving and actively swimming into concentrations of plankton, and then taking a huge gulp of water and filtering it through their feathery baleen plates.
Pursuit predators may be social, like the lion and wolf that hunt in groups, or solitary.
Handling
Once the predator has captured the prey, it has to handle it: very carefully if the prey is dangerous to eat, such as if it possesses sharp or poisonous spines, as in many prey fish. Some catfish such as the Ictaluridae have spines on the back (dorsal) and belly (pectoral) which lock in the erect position; as the catfish thrashes about when captured, these could pierce the predator's mouth, possibly fatally. Some fish-eating birds like the osprey avoid the danger of spines by tearing up their prey before eating it.
Solitary versus social predation
In social predation, a group of predators cooperates to kill prey. This makes it possible to kill creatures larger than those they could overpower singly; for example, hyenas and wolves collaborate to catch and kill herbivores as large as buffalo, and lions even hunt elephants. It can also make prey more readily available through strategies like flushing of prey and herding it into a smaller area. For example, when mixed flocks of birds forage, the birds in front flush out insects that are caught by the birds behind. Spinner dolphins form a circle around a school of fish and move inwards, concentrating the fish by a factor of 200. By hunting socially, chimpanzees can catch colobus monkeys that would readily escape an individual hunter, while cooperating Harris hawks can trap rabbits.
Predators of different species sometimes cooperate to catch prey. In coral reefs, when fish such as the grouper and coral trout spot prey that is inaccessible to them, they signal to giant moray eels, Napoleon wrasses or octopuses. These predators are able to access small crevices and flush out the prey. Killer whales have been known to help whalers hunt baleen whales.
Social hunting allows predators to tackle a wider range of prey, but at the risk of competition for the captured food. Solitary predators have more chance of eating what they catch, at the price of increased expenditure of energy to catch it, and increased risk that the prey will escape. Ambush predators are often solitary to reduce the risk of becoming prey themselves. Of 245 terrestrial members of the Carnivora (the group that includes the cats, dogs, and bears), 177 are solitary; and 35 of the 37 wild cats are solitary, including the cougar and cheetah. However, the solitary cougar does allow other cougars to share in a kill, and the coyote can be either solitary or social. Other solitary predators include the northern pike, wolf spiders and all the thousands of species of solitary wasps among arthropods, and many microorganisms and zooplankton.
Specialization
Physical adaptations
Under the pressure of natural selection, predators have evolved a variety of physical adaptations for detecting, catching, killing, and digesting prey. These include speed, agility, stealth, sharp senses, claws, teeth, filters, and suitable digestive systems.
For detecting prey, predators have well-developed vision, smell, or hearing. Predators as diverse as owls and jumping spiders have forward-facing eyes, providing accurate binocular vision over a relatively narrow field of view, whereas prey animals often have less acute all-round vision. Animals such as foxes can smell their prey even when it is concealed under snow or earth. Many predators have acute hearing, and some, such as echolocating bats, hunt exclusively by active or passive use of sound.
Predators including big cats, birds of prey, and ants share powerful jaws, sharp teeth, or claws which they use to seize and kill their prey. Some predators such as snakes and fish-eating birds like herons and cormorants swallow their prey whole; some snakes can unhinge their jaws to allow them to swallow large prey, while fish-eating birds have long spear-like beaks that they use to stab and grip fast-moving and slippery prey. Fish and other predators have developed the ability to crush or open the armoured shells of molluscs.
Many predators are powerfully built and can catch and kill animals larger than themselves; this applies as much to small predators such as ants and shrews as to big and visibly muscular carnivores like the cougar and lion.
Diet and behaviour
Predators are often highly specialized in their diet and hunting behaviour; for example, the Eurasian lynx only hunts small ungulates. Others such as leopards are more opportunistic generalists, preying on at least 100 species. The specialists may be highly adapted to capturing their preferred prey, whereas generalists may be better able to switch to other prey when a preferred target is scarce. When prey have a clumped (uneven) distribution, the optimal strategy for the predator is predicted to be more specialized as the prey are more conspicuous and can be found more quickly; this appears to be correct for predators of immobile prey, but is doubtful with mobile prey.
In size-selective predation, predators select prey of a certain size. Large prey may prove troublesome for a predator, while small prey might prove hard to find and in any case provide less of a reward. This has led to a correlation between the size of predators and their prey. Size may also act as a refuge for large prey. For example, adult elephants are relatively safe from predation by lions, but juveniles are vulnerable.
Camouflage and mimicry
Members of the cat family such as the snow leopard (treeless highlands), tiger (grassy plains, reed swamps), ocelot (forest), fishing cat (waterside thickets), and lion (open plains) are camouflaged with coloration and disruptive patterns suiting their habitats.
In aggressive mimicry, certain predators, including insects and fishes, make use of coloration and behaviour to attract prey. Female Photuris fireflies, for example, copy the light signals of other species, thereby attracting male fireflies, which they capture and eat. Flower mantises are ambush predators; camouflaged as flowers, such as orchids, they attract prey and seize it when it is close enough. Frogfishes are extremely well camouflaged, and actively lure their prey to approach using an esca, a bait on the end of a rod-like appendage on the head, which they wave gently to mimic a small animal, gulping the prey in an extremely rapid movement when it is within range.
Venom
Many smaller predators such as the box jellyfish use venom to subdue their prey, and venom can also aid in digestion (as is the case for rattlesnakes and some spiders). The marbled sea snake, which has adapted to egg predation, has atrophied venom glands, and the gene for its three-finger toxin contains a mutation (the deletion of two nucleotides) that inactivates it. These changes are explained by the fact that its prey does not need to be subdued.
Electric fields
Several groups of predatory fish have the ability to detect, track, and sometimes, as in the electric ray, to incapacitate their prey by sensing and generating electric fields. The electric organ is derived from modified nerve or muscle tissue.
Physiology
Physiological adaptations to predation include the ability of predatory bacteria to digest the complex peptidoglycan polymer from the cell walls of the bacteria that they prey upon. Carnivorous vertebrates of all five major classes (fishes, amphibians, reptiles, birds, and mammals) have lower relative rates of sugar to amino acid transport than either herbivores or omnivores, presumably because they acquire plenty of amino acids from the animal proteins in their diet.
Antipredator adaptations
To counter predation, prey have evolved defences for use at each stage of an attack. They can try to avoid detection, such as by using camouflage and mimicry. They can detect predators and warn others of their presence.
If detected, they can try to avoid being the target of an attack, for example, by signalling that they are toxic or unpalatable, by signalling that a chase would be unprofitable, or by forming groups. If they become a target, they can try to fend off the attack with defences such as armour, quills, unpalatability, or mobbing; and they can often escape an attack in progress by startling the predator, playing dead, shedding body parts such as tails, or simply fleeing.
Coevolution
Predators and prey are natural enemies, and many of their adaptations seem designed to counter each other. For example, bats have sophisticated echolocation systems to detect insects and other prey, and insects have developed a variety of defences including the ability to hear the echolocation calls. Many pursuit predators that run on land, such as wolves, have evolved long limbs in response to the increased speed of their prey. Their adaptations have been characterized as an evolutionary arms race, an example of the coevolution of two species. In a gene centered view of evolution, the genes of predator and prey can be thought of as competing for the prey's body. However, the "life-dinner" principle of Dawkins and Krebs predicts that this arms race is asymmetric: if a predator fails to catch its prey, it loses its dinner, while if it succeeds, the prey loses its life.
The metaphor of an arms race implies ever-escalating advances in attack and defence. However, these adaptations come with a cost; for instance, longer legs have an increased risk of breaking, while the specialized tongue of the chameleon, with its ability to act like a projectile, is useless for lapping water, so the chameleon must drink dew off vegetation.
The "life-dinner" principle has been criticized on multiple grounds. The extent of the asymmetry in natural selection depends in part on the heritability of the adaptive traits. Also, if a predator loses enough dinners, it too will lose its life. On the other hand, the fitness cost of a given lost dinner is unpredictable, as the predator may quickly find better prey. In addition, most predators are generalists, which reduces the impact of a given prey adaption on a predator. Since specialization is caused by predator-prey coevolution, the rarity of specialists may imply that predator-prey arms races are rare.
It is difficult to determine whether given adaptations are truly the result of coevolution, where a prey adaptation gives rise to a predator adaptation that is countered by further adaptation in the prey. An alternative explanation is escalation, where predators are adapting to competitors, their own predators or dangerous prey. Apparent adaptations to predation may also have arisen for other reasons and then been co-opted for attack or defence. In some of the insects preyed on by bats, hearing evolved before bats appeared and was used to hear signals used for territorial defence and mating. Their hearing was later refined in response to bat predation, but the only clear example of reciprocal adaptation in bats is stealth echolocation.
A more symmetric arms race may occur when the prey are dangerous, having spines, quills, toxins or venom that can harm the predator. The predator can respond with avoidance, which in turn drives the evolution of mimicry. Avoidance is not necessarily an evolutionary response as it is generally learned from bad experiences with prey. However, when the prey is capable of killing the predator (as can a coral snake with its venom), there is no opportunity for learning and avoidance must be inherited. Predators can also respond to dangerous prey with counter-adaptations. In western North America, the common garter snake has developed a resistance to the toxin in the skin of the rough-skinned newt.
Role in ecosystems
Predators affect their ecosystems not only directly by killing and eating their prey, but also by indirect means such as reducing predation by other species, or altering the foraging behaviour of a herbivore, as with the effect of wolves on riverside vegetation or sea otters on kelp forests. Such effects may explain population dynamics such as the cycles observed in lynx and snowshoe hares.
Trophic level
One way of classifying predators is by trophic level. Carnivores that feed on herbivores are secondary consumers; their predators are tertiary consumers, and so forth. At the top of this food chain are apex predators such as lions. Many predators, however, eat from multiple levels of the food chain; a carnivore may eat both secondary and tertiary consumers. This means that many predators must contend with intraguild predation, where other predators kill and eat them. For example, coyotes compete with and sometimes kill gray foxes and bobcats.
Trophic transfer
Trophic transfer within an ecosystem refers to the transport of energy and nutrients as a result of predation. Energy passes from one trophic level to the next as predators consume organic matter from another organism's body. At each transfer, some of the energy is used by the consumer and some is lost as waste and heat.
Marine trophic levels vary depending on locality and the size of the primary producers. There are generally up to six trophic levels in the open ocean, four over continental shelves, and around three in upwelling zones. For example, a marine habitat with five trophic levels could be represented as follows: herbivores (feeding primarily on phytoplankton); carnivores (feeding primarily on other zooplankton and animals); detritivores (feeding primarily on dead organic matter or detritus); omnivores (feeding on a mixed diet of phytoplankton, zooplankton, and detritus); and mixotrophs, which combine autotrophy (using light energy to grow without intake of any additional organic compounds or nutrients) with heterotrophy (feeding on other organisms—whether as herbivores, carnivores, omnivores, or detritivores—for energy and nutrients).
Trophic transfer efficiency measures how effectively energy is transferred or passed up through higher trophic levels of the marine food web. As energy moves up the trophic levels, it decreases due to heat, waste, and the natural metabolic processes that occur as predators consume their prey. The result is that only about 10% of the energy at any trophic level is transferred to the next level. This is often referred to as "the 10% rule", which limits the number of trophic levels that an ecosystem is capable of supporting.
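As an illustrative calculation (the starting figure is hypothetical, chosen only to make the arithmetic concrete), a 10% transfer efficiency compounds rapidly: if level n holds energy E_n, then

$$E_{n+1} \approx 0.1 \times E_n$$

so 10,000 kcal of primary production supports roughly 1,000 kcal of herbivores, 100 kcal of secondary consumers, 10 kcal of tertiary consumers, and only about 1 kcal after a fourth transfer, which is why food chains rarely extend beyond the five or six levels noted above.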
Biodiversity maintained by apex predation
Predators may increase the biodiversity of communities by preventing a single species from becoming dominant. Such predators are known as keystone species and may have a profound influence on the balance of organisms in a particular ecosystem. Introduction or removal of this predator, or changes in its population density, can have drastic cascading effects on the equilibrium of many other populations in the ecosystem. For example, grazers of a grassland may prevent a single dominant species from taking over.
The elimination of wolves from Yellowstone National Park had profound impacts on the trophic pyramid. In that area, wolves are both keystone species and apex predators. Without predation, herbivores began to over-graze many woody browse species, affecting the area's plant populations. In addition, wolves often kept animals from grazing near streams, protecting the beavers' food sources. The removal of wolves had a direct effect on the beaver population, as their habitat became territory for grazing. Increased browsing on willows and conifers along Blacktail Creek due to a lack of predation caused channel incision because the reduced beaver population was no longer able to slow the water down and keep the soil in place. The predators were thus demonstrated to be of vital importance in the ecosystem.
Population dynamics
In the absence of predators, the population of a species can grow exponentially until it approaches the carrying capacity of the environment. Predators limit the growth of prey both by consuming them and by changing their behavior. Increases or decreases in the prey population can also lead to increases or decreases in the number of predators, for example, through an increase in the number of young they bear.
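The resulting pattern of rapid growth that levels off near the carrying capacity is conventionally described by the logistic equation, given here as a standard textbook sketch rather than a formula from this passage (N is population size, r the intrinsic growth rate, and K the carrying capacity):

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

Growth is nearly exponential while N is small relative to K, and slows towards zero as N approaches K.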
Cyclical fluctuations have been seen in populations of predator and prey, often with offsets between the predator and prey cycles. A well-known example is that of the snowshoe hare and lynx. Over a broad span of boreal forests in Alaska and Canada, the hare populations fluctuate in near synchrony with a 10-year period, and the lynx populations fluctuate in response. This was first seen in historical records of animals caught by fur hunters for the Hudson's Bay Company over more than a century.
A simple model of a system with one species each of predator and prey, the Lotka–Volterra equations, predicts population cycles. However, attempts to reproduce the predictions of this model in the laboratory have often failed; for example, when the protozoan Didinium nasutum is added to a culture containing its prey, Paramecium caudatum, the latter is often driven to extinction.
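For reference, the model takes the following standard form (a conventional statement of the equations; the notation is assumed here, with x the prey density, y the predator density, and α, β, γ, δ positive rate constants):

$$\frac{dx}{dt} = \alpha x - \beta xy, \qquad \frac{dy}{dt} = \delta xy - \gamma y$$

Prey grow exponentially in the absence of predators and are removed in proportion to predator-prey encounters (the xy terms), while predators decline exponentially without prey; the coupling of the two equations is what generates the predicted cycles.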
The Lotka–Volterra equations rely on several simplifying assumptions, and they are structurally unstable, meaning that any change in the equations can stabilize or destabilize the dynamics. For example, one assumption is that predators have a linear functional response to prey: the rate of kills increases in proportion to the rate of encounters. If this rate is limited by time spent handling each catch, then prey populations can reach densities above which predators cannot control them. Another assumption is that all prey individuals are identical. In reality, predators tend to select young, weak, and ill individuals, leaving prey populations able to regrow.
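The handling-time argument is commonly formalised as a saturating ("type II") functional response, sketched here with assumed notation rather than drawn from the text (x is prey density, a the attack rate, and h the handling time per prey item):

$$f(x) = \frac{ax}{1 + ahx}$$

At low prey density the kill rate rises almost linearly, f(x) ≈ ax, but as x increases it saturates towards 1/h, so above some prey density the predator can no longer keep pace with prey population growth.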
Many factors can stabilize predator and prey populations. One example is the presence of multiple predators, particularly generalists that are attracted to a given prey species if it is abundant and look elsewhere if it is not. As a result, population cycles tend to be found in northern temperate and subarctic ecosystems because the food webs are simpler. The snowshoe hare-lynx system is subarctic, but even this involves other predators, including coyotes, goshawks and great horned owls, and the cycle is reinforced by variations in the food available to the hares.
A range of mathematical models have been developed by relaxing the assumptions made in the Lotka–Volterra model; these variously allow animals to have geographic distributions or to migrate; to have differences between individuals, such as sexes and an age structure, so that only some individuals reproduce; to live in a varying environment, such as with changing seasons; and to interact with more than just two species at once. Such models predict widely differing and often chaotic predator-prey population dynamics. The presence of refuge areas, where prey are safe from predators, may enable prey to maintain larger populations but may also destabilize the dynamics.
Evolutionary history
Predation dates from before the rise of commonly recognized carnivores by hundreds of millions (perhaps billions) of years. Predation has evolved repeatedly in different groups of organisms. The rise of eukaryotic cells at around 2.7 Gya, the rise of multicellular organisms at about 2 Gya, and the rise of mobile predators (around 600 Mya–2 Gya, probably around 1 Gya) have all been attributed to early predatory behavior, and many very early remains show evidence of boreholes or other markings attributed to small predator species. It likely triggered major evolutionary transitions including the arrival of cells, eukaryotes, sexual reproduction, multicellularity, increased size, mobility (including insect flight) and armoured shells and exoskeletons.
The earliest predators were microbial organisms, which engulfed or grazed on others. Because the fossil record is poor, these first predators could date back anywhere between 1 and over 2.7 Gya (billion years ago). Predation visibly became important shortly before the Cambrian period—around 550 Mya—as evidenced by the almost simultaneous development of calcification in animals and algae, and predation-avoiding burrowing. However, predators had been grazing on micro-organisms since at least 1 Gya, with evidence of selective (rather than random) predation from a similar time.
Auroralumina attenboroughii is an Ediacaran crown-group cnidarian (557–562 Mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predatory animals, catching small prey with its nematocysts as modern cnidarians do.
The fossil record demonstrates a long history of interactions between predators and their prey from the Cambrian period onwards, showing for example that some predators drilled through the shells of bivalve and gastropod molluscs, while others ate these organisms by breaking their shells.
Among the Cambrian predators were invertebrates like the anomalocaridids with appendages suitable for grabbing prey, large compound eyes and jaws made of a hard material like that in the exoskeleton of an insect.
Some of the first fish to have jaws were the armoured and mainly predatory placoderms of the Silurian to Devonian periods, one of which, Dunkleosteus, is considered the world's first vertebrate "superpredator", preying upon other predators.
Insects developed the ability to fly in the Early Carboniferous or Late Devonian, enabling them among other things to escape from predators.
Among the largest predators that have ever lived were the theropod dinosaurs such as Tyrannosaurus from the Cretaceous period. They preyed upon herbivorous dinosaurs such as hadrosaurs, ceratopsians and ankylosaurs.
In human society
Practical uses
Humans, as omnivores, are to some extent predatory, using weapons and tools to fish, hunt and trap animals. They also use other predatory species such as dogs, cormorants, and falcons to catch prey for food or for sport.
Two mid-sized predators, dogs and cats, are the animals most often kept as pets in western societies.
Human hunters, including the San of southern Africa, use persistence hunting, a form of pursuit predation where the pursuer may be slower than prey such as a kudu antelope over short distances, but follows it in the midday heat until it is exhausted, a pursuit that can take up to five hours.
In biological pest control, predators (and parasitoids) from a pest's natural range are introduced to control populations, at the risk of causing unforeseen problems. Natural predators, provided they do no harm to non-pest species, are an environmentally friendly and sustainable way of reducing damage to crops and an alternative to the use of chemical agents such as pesticides.
Symbolic uses
In film, the idea of the predator as a dangerous, if humanoid, enemy is used in the 1987 science fiction horror action film Predator and its three sequels. A terrifying predator, a gigantic man-eating great white shark, is central, too, to Steven Spielberg's 1975 thriller Jaws.
Among poetry on the theme of predation, a predator's consciousness might be explored, such as in Ted Hughes's Pike. The phrase "Nature, red in tooth and claw" from Alfred, Lord Tennyson's 1849 poem "In Memoriam A.H.H." has been interpreted as referring to the struggle between predators and prey.
In mythology and folk fable, predators such as the fox and wolf have mixed reputations. The fox was a symbol of fertility in ancient Greece, but a weather demon in northern Europe, and a creature of the devil in early Christianity; the fox is presented as sly, greedy, and cunning in fables from Aesop onwards. The big bad wolf is known to children in tales such as Little Red Riding Hood, but is a demonic figure in the Icelandic Edda sagas, where the wolf Fenrir appears in the apocalyptic ending of the world. In the Middle Ages, belief spread in werewolves, men transformed into wolves. In ancient Rome, and in ancient Egypt, the wolf was worshipped, the she-wolf appearing in the founding myth of Rome, suckling Romulus and Remus. More recently, in Rudyard Kipling's 1894 The Jungle Book, Mowgli is raised by the wolf pack. Attitudes to large predators in North America, such as the wolf, grizzly bear and cougar, shifted from hostility or ambivalence, accompanied by active persecution, towards positive and protective attitudes in the second half of the 20th century.
| Biology and health sciences | Ethology | null |
57621 | https://en.wikipedia.org/wiki/Dragonfly | Dragonfly | A dragonfly is a flying insect belonging to the infraorder Anisoptera within the order Odonata. About 3,000 extant species of dragonflies are known. Most are tropical, with fewer species in temperate regions. Loss of wetland habitat threatens dragonfly populations around the world. Adult dragonflies are characterised by a pair of large, multifaceted, compound eyes, two pairs of strong, transparent wings, sometimes with coloured patches, and an elongated body. Many dragonflies have brilliant iridescent or metallic colours produced by structural coloration, making them conspicuous in flight. An adult dragonfly's compound eyes have nearly 24,000 ommatidia each.
Dragonflies can be mistaken for the closely related damselflies, which make up the other odonatan infraorder (Zygoptera) and are similar in body plan, though usually lighter in build; however, the wings of most dragonflies are held flat and away from the body, while damselflies hold their wings folded at rest, along or above the abdomen. Dragonflies are agile fliers, while damselflies have a weaker, fluttery flight. Dragonflies make use of motion camouflage when attacking prey or rivals.
Dragonflies are predatory insects, both in their aquatic nymphal stage (also known as "naiads") and as adults. In some species, the nymphal stage lasts up to five years, and the adult stage may be as long as 10 weeks, but most species have an adult lifespan in the order of five weeks or less, and some survive for only a few days. They are fast, agile fliers capable of highly accurate aerial ambush, sometimes migrating across oceans, and often live near water. They have a uniquely complex mode of reproduction involving indirect insemination, delayed fertilisation, and sperm competition. During mating, the male grasps the female at the back of the head, and the female curls her abdomen under her body to pick up sperm from the male's secondary genitalia at the front of his abdomen, forming the "heart" or "wheel" posture.
Fossils of very large dragonfly-like insects, sometimes called griffinflies, are found from 325 million years ago (Mya) in Upper Carboniferous rocks; these had wingspans up to about 750 mm (30 in), though they were only distant relatives, not true dragonflies, which first appeared during the Early Jurassic.
Dragonflies are represented in human culture on artefacts such as pottery, rock paintings, statues, and Art Nouveau jewellery. They are used in traditional medicine in Japan and China, and caught for food in Indonesia. They are symbols of courage, strength, and happiness in Japan, but seen as sinister in European folklore. Their bright colours and agile flight are admired in the poetry of Lord Tennyson and the prose of H. E. Bates.
Etymology
The infraorder Anisoptera comes from Greek anisos "unequal" and pteron "wing" because dragonflies' hindwings are broader than their forewings.
Evolution
Dragonflies and their relatives are similar in structure to an ancient group, the Meganisoptera or griffinflies, from the 325 Mya Upper Carboniferous of Europe, a group that included one of the largest insects that ever lived, Meganeuropsis permiana from the Early Permian, with a wingspan around 710 mm (28 in). The Protanisoptera, another ancestral group that lacks certain wing-vein characters found in modern Odonata, lived in the Permian.
Anisoptera first appeared during the Toarcian age of the Early Jurassic, and the crown group developed in the Middle Jurassic. They retain some traits of their distant predecessors, and are in a group known as the Palaeoptera, meaning 'ancient-winged'. Like the gigantic griffinflies, dragonflies lack the ability to fold their wings up against their bodies in the way modern insects do, although some evolved their own different way to do so. The forerunners of modern Odonata are included in a clade called the Panodonata, which includes the basal Zygoptera (damselflies) and the Anisoptera (true dragonflies). Today, some 3,000 species are extant around the world.
The relationships of anisopteran families are not fully resolved as of 2021, but all the families are monophyletic except the Corduliidae; the Austropetaliidae are sister to the Aeshnoidea.
Distribution and diversity
About 3,012 species of dragonflies were known in 2010; these are classified into 348 genera in 11 families. Diversity varies among the biogeographical regions; worldwide totals are not simple sums of the regional figures, as many species occur in more than one region.
Dragonflies live on every continent except Antarctica. In contrast to the damselflies (Zygoptera), which tend to have restricted distributions, some genera and species are spread across continents. For example, the blue-eyed darner Rhionaeschna multicolor lives all across North America, and in Central America; emperors Anax live throughout the Americas from as far north as Newfoundland to as far south as Bahia Blanca in Argentina, across Europe to central Asia, North Africa, and the Middle East. The globe skimmer Pantala flavescens is probably the most widespread dragonfly species in the world; it is cosmopolitan, occurring on all continents in the warmer regions. Most Anisoptera species are tropical, with far fewer species in temperate regions.
Some dragonflies, including libellulids and aeshnids, live in desert pools, for example in the Mojave Desert, where they are active even at high shade temperatures; these insects were able to survive body temperatures above the thermal death point of insects of the same species in cooler places.
Dragonflies live from sea level up to the mountains, decreasing in species diversity with altitude. Their altitudinal limit is about 3700 m, represented by a species of Aeshna in the Pamirs.
Dragonflies become scarce at higher latitudes. They are not native to Iceland, but individuals are occasionally swept in by strong winds, including a Hemianax ephippiger native to North Africa, and an unidentified darter species. In Kamchatka, only a few species of dragonfly including the treeline emerald Somatochlora arctica and some aeshnids such as Aeshna subarctica are found, possibly because of the low temperature of the lakes there. The treeline emerald also lives in northern Alaska, within the Arctic Circle, making it the most northerly of all dragonflies.
General description
Dragonflies (suborder Anisoptera) are heavy-bodied, strong-flying insects that hold their wings horizontally both in flight and at rest. By contrast, damselflies (suborder Zygoptera) have slender bodies and fly more weakly; most species fold their wings over the abdomen when stationary, and the eyes are well separated on the sides of the head.
An adult dragonfly has three distinct segments, the head, thorax, and abdomen, as in all insects. It has a chitinous exoskeleton of hard plates held together with flexible membranes. The head is large with very short antennae. It is dominated by the two compound eyes, which cover most of its surface. The compound eyes are made up of ommatidia, the numbers being greater in the larger species. Aeshna interrupta has 22,650 ommatidia of two varying sizes, 4,500 being large. The facets facing downward tend to be smaller. Petalura gigantea has 23,890 ommatidia of just one size. These facets provide complete vision in the frontal hemisphere of the dragonfly. The compound eyes meet at the top of the head (except in the Petaluridae and Gomphidae, as also in the genus Epiophlebia). Also, they have three simple eyes or ocelli. The mouthparts are adapted for biting with a toothed jaw; the flap-like labrum, at the front of the mouth, can be shot rapidly forward to catch prey. The head has a system for locking it in place that consists of muscles and small hairs on the back of the head that grip structures on the front of the first thoracic segment. This arrester system is unique to the Odonata, and is activated when feeding and during tandem flight.
The thorax consists of three segments as in all insects. The prothorax is small and flattened dorsally into a shield-like disc, which has two transverse ridges. The mesothorax and metathorax are fused into a rigid, box-like structure with internal bracing, and provide a robust attachment for the powerful wing muscles inside. The thorax bears two pairs of wings and three pairs of legs. The wings are long, veined, and membranous, narrower at the tip and wider at the base. The hindwings are broader than the forewings and the venation is different at the base. The veins carry haemolymph, which is analogous to blood in vertebrates, and carries out many similar functions, but which also serves a hydraulic function to expand the body between nymphal stages (instars) and to expand and stiffen the wings after the adult emerges from the final nymphal stage. The leading edge of each wing has a node where other veins join the marginal vein, and the wing is able to flex at this point. In most large species of dragonflies, the wings of females are shorter and broader than those of males. The legs are rarely used for walking, but are used to catch and hold prey, for perching, and for climbing on plants. Each has two short basal joints, two long joints, and a three-jointed foot, armed with a pair of claws. The long leg joints bear rows of spines, and in males, one row of spines on each front leg is modified to form an "eyebrush", for cleaning the surface of the compound eye.
The abdomen is long and slender and consists of 10 segments. Three terminal appendages are on segment 10; a pair of superiors (claspers) and an inferior. The second and third segments are enlarged, and in males, the underside of the second segment has a cleft, forming the secondary genitalia consisting of the lamina, hamule, genital lobe, and penis. There are remarkable variations in the presence and the form of the penis and the related structures, the flagellum, cornua, and genital lobes. Sperm is produced at the 9th segment, and is transferred to the secondary genitalia prior to mating. The male holds the female behind the head using a pair of claspers on the terminal segment. In females, the genital opening is on the underside of the eighth segment, and is covered by a simple flap (vulvar lamina) or an ovipositor, depending on species and the method of egg-laying. Dragonflies having simple flaps shed the eggs in water, mostly in flight. Dragonflies having ovipositors use them to puncture soft tissues of plants and place the eggs singly in each puncture they make.
Dragonfly nymphs vary in form with species, and are loosely classed into claspers, sprawlers, hiders, and burrowers. The first instar is known as a prolarva, a relatively inactive stage from which it quickly moults into the more active nymphal form. The general body plan is similar to that of an adult, but the nymph lacks wings and reproductive organs. The lower jaw has a huge, extensible labium, armed with hooks and spines, which is used for catching prey. This labium is folded under the body at rest and struck out at great speed by hydraulic pressure created by the abdominal muscles. Both damselfly and dragonfly nymphs ventilate the rectum, but just some damselfly nymphs have a rectal epithelium that is rich in trachea, relying mostly on three feathery external gills as their major source of respiration. Only dragonfly nymphs have internal gills, called a branchial chamber, located around the fourth and fifth abdominal segments. These internal gills consist originally of six longitudinal folds, each side supported by cross-folds, but this system has been modified in several families. Water is pumped in and out of the abdomen through an opening at the tip. The naiads of some clubtails (Gomphidae) that burrow into the sediment have a snorkel-like tube at the end of the abdomen enabling them to draw in clean water while they are buried in mud. Naiads can forcefully expel a jet of water to propel themselves with great rapidity.
Coloration
Many adult dragonflies have brilliant iridescent or metallic colours produced by structural colouration, making them conspicuous in flight. Their overall coloration is often a combination of yellow, red, brown, and black pigments, with structural colours. Blues are typically created by microstructures in the cuticle that reflect blue light. Greens often combine a structural blue with a yellow pigment. Freshly emerged adults, known as tenerals, are often pale, and obtain their typical colours after a few days. Some have their bodies covered with a pale blue, waxy powderiness called pruinosity; it wears off when scraped during mating, leaving darker areas.
Some dragonflies, such as the green darner, Anax junius, have a noniridescent blue that is produced structurally by scatter from arrays of tiny spheres in the endoplasmic reticulum of epidermal cells underneath the cuticle.
The wings of dragonflies are generally clear, apart from the dark veins and pterostigmata. In the chasers (Libellulidae), however, many genera have areas of colour on the wings: for example, groundlings (Brachythemis) have brown bands on all four wings, while some scarlets (Crocothemis) and dropwings (Trithemis) have bright orange patches at the wing bases. Some aeshnids such as the brown hawker (Aeshna grandis) have translucent, pale yellow wings.
Dragonfly nymphs are usually a well-camouflaged blend of dull brown, green, and grey.
Biology
Ecology
Dragonflies and damselflies are predatory both in the aquatic nymphal and adult stages. Nymphs feed on a range of freshwater invertebrates and larger ones can prey on tadpoles and small fish. Naiads of one species, Phanogomphus militaris, may even act as parasites, feeding on the gills of gravid freshwater mussels. Adults capture insect prey in the air, making use of their acute vision and highly controlled flight.
The mating system of dragonflies is complex, and they are among the few insect groups that have a system of indirect sperm transfer along with sperm storage, delayed fertilisation, and sperm competition.
Adult males vigorously defend territories near water; these areas provide suitable habitat for the nymphs to develop, and for females to lay their eggs. Swarms of feeding adults aggregate to prey on swarming prey such as emerging flying ants or termites.
Dragonflies as a group occupy a considerable variety of habitats, but many species, and some families, have their own specific environmental requirements. Some species prefer flowing waters, while others prefer standing water. For example, the Gomphidae (clubtails) live in running water, and the Libellulidae (skimmers) live in still water. Some species live in temporary water pools and are capable of tolerating changes in water level, desiccation, and the resulting variations in temperature, but some genera such as Sympetrum (darters) have eggs and nymphs that can resist drought and are stimulated to grow rapidly in warm, shallow pools, also often benefiting from the absence of predators there. Vegetation and its characteristics including submerged, floating, emergent, or waterside are also important. Adults may require emergent or waterside plants to use as perches; others may need specific submerged or floating plants on which to lay eggs. Requirements may be highly specific, as in Aeshna viridis (green hawker), which lives in swamps with the water-soldier, Stratiotes aloides. The chemistry of the water, including its trophic status (degree of enrichment with nutrients) and pH can also affect its use by dragonflies. Most species need moderate conditions, not too eutrophic, not too acidic; a few species such as Sympetrum danae (black darter) and Libellula quadrimaculata (four-spotted chaser) prefer acidic waters such as peat bogs, while others such as Libellula fulva (scarce chaser) need slow-moving, eutrophic waters with reeds or similar waterside plants.
Behaviour
Many dragonflies, particularly males, are territorial. Some defend a territory against others of their own species, some against other species of dragonfly and a few against insects in unrelated groups. A particular perch may give a dragonfly a good view over an insect-rich feeding ground; males of many species such as Pachydiplax longipennis (the blue dasher) jostle other dragonflies to maintain the right to alight there. Defending a breeding territory is common among male dragonflies, especially in species that congregate around ponds. The territory contains desirable features such as a sunlit stretch of shallow water, a special plant species, or the preferred substrate for egg-laying. The territory may be small or large, depending on its quality, the time of day, and the number of competitors, and may be held for a few minutes or several hours. Dragonflies including Tramea lacerata (the black saddlebags) may notice landmarks that assist in defining the boundaries of the territory. Landmarks may reduce the costs of territory establishment, or might serve as a spatial reference. Some dragonflies signal ownership with striking colours on the face, abdomen, legs, or wings. Plathemis lydia (the common whitetail) dashes towards an intruder holding its white abdomen aloft like a flag. Other dragonflies engage in aerial dogfights or high-speed chases. A female must mate with the territory holder before laying her eggs. There is also conflict between the males and females. Females may sometimes be harassed by males to the extent that it affects their normal activities, including foraging, and in some dimorphic species females have evolved multiple forms, with some forms appearing deceptively like males. In some species, females have evolved behavioural responses such as feigning death to escape the attention of males. Similarly, selection of habitat by adult dragonflies is not random, and terrestrial habitat patches may be held for up to three months. A species tightly linked to its birth site uses a foraging area that is several orders of magnitude larger than the birth site.
Reproduction
Mating in dragonflies is a complex, precisely choreographed process. First, the male has to attract a female to his territory, continually driving off rival males. When he is ready to mate, he transfers a packet of sperm from his primary genital opening on segment 9, near the end of his abdomen, to his secondary genitalia on segments 2–3, near the base of his abdomen. The male then grasps the female by the head with the claspers at the end of his abdomen; the structure of the claspers varies between species, and may help to prevent interspecific mating. The pair flies in tandem with the male in front, typically perching on a twig or plant stem. The female then curls her abdomen downwards and forwards under her body to pick up the sperm from the male's secondary genitalia, while the male uses his "tail" claspers to grip the female behind the head: this distinctive posture is called the "heart" or "wheel"; the pair may also be described as being "in cop".
Egg-laying (ovipositing) involves not only the female darting over floating or waterside vegetation to deposit eggs on a suitable substrate, but also the male hovering above her or continuing to clasp her and flying in tandem. This behaviour following the transfer of sperm is termed mate guarding; the guarding male attempts to increase the probability that his sperm will fertilise the eggs. Sexual selection with sperm competition occurs within the spermatheca of the female, and sperm can remain viable for at least 12 days in some species. Females can fertilise their eggs using sperm from the spermatheca at any time. Males use their penis and associated genital structures to compress or scrape out sperm from previous matings; this activity takes up much of the time that a copulating pair remains in the heart posture. Flying in tandem has the advantage that less effort is needed by the female for flight and more can be expended on egg-laying, and when the female submerges to deposit eggs, the male may help to pull her out of the water.
Egg-laying takes two different forms depending on the species. The female in some families (Aeshnidae, Petaluridae) has a sharp-edged ovipositor with which she slits open a stem or leaf of a plant on or near the water, so she can push her eggs inside. In other families such as clubtails (Gomphidae), cruisers (Macromiidae), emeralds (Corduliidae), and skimmers (Libellulidae), the female lays eggs by tapping the surface of the water repeatedly with her abdomen, by shaking the eggs out of her abdomen as she flies along, or by placing the eggs on vegetation. In a few species, the eggs are laid on emergent plants above the water, and development is delayed until these have withered and become immersed.
Life cycle
Dragonflies are hemimetabolous insects; they do not have a pupal stage and undergo an incomplete metamorphosis with a series of nymphal stages from which the adult emerges. Eggs laid inside plant tissues are usually shaped like grains of rice, while other eggs are the size of a pinhead, ellipsoidal, or nearly spherical. A clutch may have as many as 1500 eggs, and they take about a week to hatch into aquatic nymphs or naiads which moult between six and 15 times (depending on species) as they grow. Most of a dragonfly's life is spent as a nymph, beneath the water's surface. The nymph extends its hinged labium (a toothed mouthpart similar to a lower mandible, sometimes termed a "mask" as it is normally folded and held before the face) that can extend forward and retract rapidly to capture prey such as mosquito larvae, tadpoles, and small fish. They breathe through gills in their rectum, and can rapidly propel themselves by suddenly expelling water through the anus. Some naiads, such as the later stages of Antipodophlebia asthenes, hunt on land.
The nymph stage of dragonflies lasts up to five years in large species, and between two months and three years in smaller species. When the naiad is ready to metamorphose into an adult, it stops feeding and makes its way to the surface, generally at night. It remains stationary with its head out of the water, while its respiration system adapts to breathing air, then climbs up a reed or other emergent plant, and moults (ecdysis). Anchoring itself firmly in a vertical position with its claws, its exoskeleton begins to split at a weak spot behind the head. The adult dragonfly crawls out of its nymph exoskeleton, the exuvia, arching backwards when all but the tip of its abdomen is free, to allow its exoskeleton to harden. Curling back upwards, it completes its emergence, swallowing air, which plumps out its body, and pumping haemolymph into its wings, which causes them to expand to their full extent.
Dragonflies in temperate areas can be categorized into two groups: an early group and a later one. In any one area, individuals of a particular "spring species" emerge within a few days of each other. The springtime darner (Basiaeschna janata), for example, is suddenly very common in the spring, but disappears a few weeks later and is not seen again until the following year. By contrast, a "summer species" emerges over a period of weeks or months, later in the year. They may be seen on the wing for several months, but this may represent a whole series of individuals, with new adults hatching out as earlier ones complete their lifespans.
Sex ratios
The sex ratio of male to female dragonflies varies both temporally and spatially. Adult dragonflies have a high male-biased ratio at breeding habitats. The male-biased ratio partly explains why females use different habitats to avoid male harassment. As seen in Hine's emerald dragonfly (Somatochlora hineana), male populations use wetland habitats, while females use dry meadows and marginal breeding habitats, only migrating to the wetlands to lay their eggs or to find mating partners. Unwanted mating is energetically costly for females because it affects the amount of time that they are able to spend foraging.
Flight
Dragonflies are powerful and agile fliers, capable of migrating across the sea, moving in any direction, and changing direction suddenly. In flight, the adult dragonfly can propel itself in six directions: upward, downward, forward, backward, to left and to right. They have four different styles of flight.
Counter-stroking, with forewings beating 180° out of phase with the hindwings, is used for hovering and slow flight. This style is efficient and generates a large amount of lift.
Phased-stroking, with the hindwings beating 90° ahead of the forewings, is used for fast flight. This style creates more thrust, but less lift than counter-stroking.
Synchronised-stroking, with forewings and hindwings beating together, is used when changing direction rapidly, as it maximises thrust.
Gliding, with the wings held out, is used in three situations: free gliding, for a few seconds in between bursts of powered flight; gliding in the updraft at the crest of a hill, effectively hovering by falling at the same speed as the updraft; and in certain dragonflies such as darters, when "in cop" with a male, the female sometimes simply glides while the male pulls the pair along by beating his wings.
The wings are powered directly, unlike most families of insects, with the flight muscles attached to the wing bases. Dragonflies have a high power/weight ratio, and have been documented accelerating at 4 G linearly and 9 G in sharp turns while pursuing prey.
Dragonflies generate lift in at least four ways at different times, including classical lift like an aircraft wing; supercritical lift with the wing above the critical angle, generating high lift and using very short strokes to avoid stalling; and creating and shedding vortices. Some families appear to use special mechanisms, as for example the Libellulidae which take off rapidly, their wings beginning pointed far forward and twisted almost vertically. Dragonfly wings behave highly dynamically during flight, flexing and twisting during each beat. Among the variables are wing curvature, length and speed of stroke, angle of attack, forward/back position of wing, and phase relative to the other wings.
Flight speed
Old and unreliable claims are made that dragonflies such as the southern giant darner can fly up to 97 km/h (60 mph). However, the greatest reliable flight speed records are for other types of insects. In general, large dragonflies like the hawkers have a maximum speed of 36–54 km/h (22–34 mph) with an average cruising speed of about 16 km/h (9.9 mph). Dragonflies can travel at 100 body-lengths per second in forward flight, and three lengths per second backwards.
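To put the body-length figure in context (assuming, purely for illustration, a body length of about 7 cm, typical of a large dragonfly):

$$100 \times 0.07\ \text{m/s} = 7\ \text{m/s} \approx 25\ \text{km/h}$$

which falls between the cruising and maximum speeds quoted above.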
Motion camouflage
In high-speed territorial battles between male Australian emperors (Hemianax papuensis), the fighting dragonflies adjust their flight paths to appear stationary to their rivals, minimizing the chance of being detected as they approach. To achieve the effect, the attacking dragonfly flies towards his rival, choosing his path to remain on a line between the rival and the start of his attack path. The attacker thus looms larger as he closes on the rival, but does not otherwise appear to move. Researchers found that six of 15 encounters involved motion camouflage.
Temperature control
The flight muscles need to be kept at a suitable temperature for the dragonfly to be able to fly. Being cold-blooded, they can raise their temperature by basking in the sun. Early in the morning, they may choose to perch in a vertical position with the wings outstretched, while in the middle of the day, a horizontal stance may be chosen. Another method of warming up used by some larger dragonflies is wing-whirring, a rapid vibration of the wings that causes heat to be generated in the flight muscles. The green darner (Anax junius) is known for its long-distance migrations, and often resorts to wing-whirring before dawn to enable it to make an early start.
Becoming too hot is another hazard, and a sunny or shady position for perching can be selected according to the ambient temperature. Some species have dark patches on the wings which can provide shade for the body, and a few use the obelisk posture to avoid overheating. This behaviour involves doing a "handstand", perching with the body raised and the abdomen pointing towards the sun, thus minimising the amount of solar radiation received. On a hot day, dragonflies sometimes adjust their body temperature by skimming over a water surface and briefly touching it, often three times in quick succession. This may also help to avoid desiccation.
Feeding
Adult dragonflies hunt on the wing using their exceptionally acute eyesight and strong, agile flight. They are almost exclusively carnivorous, eating a wide variety of insects ranging from small midges and mosquitoes to butterflies, moths, damselflies, and smaller dragonflies. A large prey item is subdued by being bitten on the head and is carried by the legs to a perch. Here, the wings are discarded and the prey usually ingested head first. A dragonfly may consume as much as a fifth of its body weight in prey per day. Dragonflies are also some of the insect world's most efficient hunters, catching up to 95% of the prey they pursue.
The nymphs are voracious predators, eating most living things that are smaller than they are. Their staple diet is mostly bloodworms and other insect larvae, but they also feed on tadpoles and small fish. A few species, especially those that live in temporary waters, are likely to leave the water to feed. Nymphs of Cordulegaster bidentata sometimes hunt small arthropods on the ground at night, while some species in the Anax genus have even been observed leaping out of the water to attack and kill full-grown tree frogs.
Eyesight
Dragonfly vision is thought to be like slow motion for humans. Dragonflies see faster than humans do; they see around 200 images per second. A dragonfly can see in 360 degrees, and nearly 80 per cent of the insect's brain is dedicated to its sight.
Predators
Although dragonflies are swift and agile fliers, some predators are fast enough to catch them. These include falcons such as the American kestrel, the merlin, and the hobby; nighthawks, swifts, flycatchers and swallows also take some adults; some species of wasps, too, prey on dragonflies, using them to provision their nests, laying an egg on each captured insect. In the water, various species of ducks and herons eat dragonfly nymphs and they are also preyed on by newts, frogs, fish, and water spiders. Amur falcons, which migrate over the Indian Ocean at a period that coincides with the migration of the globe skimmer dragonfly, Pantala flavescens, may actually be feeding on them while on the wing.
Parasites
Dragonflies are affected by three groups of parasites: water mites, gregarine protozoa, and trematode flatworms (flukes). Water mites, Hydracarina, can kill smaller dragonfly nymphs, and may also be seen on adults. Gregarines infect the gut and may cause blockage and secondary infection. Trematodes are parasites of vertebrates such as frogs, with complex life cycles often involving a period as a stage called a cercaria in a secondary host, a snail. Dragonfly nymphs may swallow cercariae, or these may tunnel through a nymph's body wall; they then enter the gut and form a cyst or metacercaria, which remains in the nymph for the whole of its development. If the nymph is eaten by a frog, the amphibian becomes infected by the adult or fluke stage of the trematode.
Dragonflies and humans
Conservation
Most odonatologists live in temperate areas and the dragonflies of North America and Europe have been the subject of much research. However, the majority of species live in tropical areas and have been little studied. With the destruction of rainforest habitats, many of these species are in danger of becoming extinct before they have even been named. The greatest cause of decline is forest clearance with the consequent drying up of streams and pools which become clogged with silt. The damming of rivers for hydroelectric schemes and the drainage of low-lying land has reduced suitable habitat, as has pollution and the introduction of alien species.
In 1997, the International Union for Conservation of Nature set up a status survey and conservation action plan for dragonflies. This proposes the establishment of protected areas around the world and the management of these areas to provide suitable habitat for dragonflies. Outside these areas, encouragement should be given to modify forestry, agricultural, and industrial practices to enhance conservation. At the same time, more research into dragonflies needs to be done, consideration should be given to pollution control and the public should be educated about the importance of biodiversity.
Habitat degradation has reduced dragonfly populations across the world, for example in Japan. Over 60% of Japan's wetlands were lost in the 20th century, so its dragonflies now depend largely on rice fields, ponds, and creeks. Dragonflies feed on pest insects in rice, acting as a natural pest control. Dragonflies are steadily declining in Africa, and represent a conservation priority.
The dragonfly's long lifespan and low population density makes it vulnerable to disturbance, such as from collisions with vehicles on roads built near wetlands. Species that fly low and slow may be most at risk.
Dragonflies are attracted to shiny surfaces that produce polarized reflections, which they can mistake for water, and they have been known to aggregate close to polished gravestones, solar panels, automobiles, and other such structures on which they attempt to lay eggs. These can have a local impact on dragonfly populations; methods of reducing the attractiveness of structures such as solar panels are under experimentation.
In culture
A blue-glazed faience dragonfly amulet was found by Flinders Petrie at Lahun, from the Late Middle Kingdom of ancient Egypt.
For the Navajo, dragonflies symbolize pure water. Often stylized in a double-barred cross design, dragonflies are a common motif in Zuni pottery, as well as Hopi rock art and Pueblo necklaces.
As a seasonal symbol in Japan, dragonflies are associated with the season of autumn.
In Japan, they are symbols of rebirth, courage, strength, and happiness. They are also depicted frequently in Japanese art and literature, especially haiku poetry. Japanese children catch large dragonflies as a game, using a hair with a small pebble tied to each end, which they throw into the air. The dragonfly mistakes the pebbles for prey, gets tangled in the hair, and is dragged to the ground by the weight.
In both China and Japan, dragonflies have been used in traditional medicine. In Indonesia, adult dragonflies are caught on poles made sticky with birdlime, then fried in oil as a delicacy.
Images of dragonflies are common in Art Nouveau, especially in jewellery designs. They have also been used as a decorative motif on fabrics and home furnishings. Douglas, a British motorcycle manufacturer based in Bristol, named its innovatively designed postwar 350-cc flat-twin model the Dragonfly.
Among the classical names of Japan are Akitsukuni (秋津国), Akitsushima (秋津島), Toyo-akitsushima (豊秋津島). Akitsu is an old word for dragonfly, so one interpretation of Akitsushima is "Dragonfly Island". This is attributed to a legend in which Japan's mythical founder, Emperor Jimmu, was bitten by a mosquito, which was then eaten by a dragonfly.
In Europe, dragonflies have often been seen as sinister. Some English vernacular names, such as "horse-stinger", "devil's darning needle", and "ear cutter", link them with evil and injury. Some of these reference the popular misconception that dragonflies can bite or sting humans. Swedish folklore holds that the devil uses dragonflies to weigh people's souls. The Norwegian name for dragonflies is Øyenstikker ("eye-poker"), and in Portugal, they are sometimes called tira-olhos ("eyes-snatcher"). They are often associated with snakes, as in the Welsh name gwas-y-neidr, "adder's servant". The Southern United States terms "snake doctor" and "snake feeder" refer to a folk belief that dragonflies catch insects for snakes or follow snakes around and stitch them back together if they are injured.
The watercolourist Moses Harris (1731–1785), known for his The Aurelian or natural history of English insects (1766), published in 1780 the first scientific descriptions of several Odonata, including the banded demoiselle, Calopteryx splendens. He was the first English artist to make illustrations of dragonflies accurate enough to be identified to species, though his rough drawing of a nymph with the mask extended appears to be plagiarised.
More recently, dragonfly watching has become popular in America as some birdwatchers seek new groups to observe.
In heraldry, like other winged insects, the dragonfly is typically depicted tergiant (with its back facing the viewer), with its head to chief (at the top).
In poetry and literature
Lafcadio Hearn wrote in his 1901 book A Japanese Miscellany that Japanese poets had created dragonfly haiku "almost as numerous as are the dragonflies themselves in the early autumn." The poet Matsuo Bashō (1644–1694) wrote haiku such as "Crimson pepper pod / add two pairs of wings, and look / darting dragonfly", relating the autumn season to the dragonfly. Hori Bakusui (1718–1783) similarly wrote "Dyed he is with the / Colour of autumnal days, / O red dragonfly."
The poet Lord Tennyson described a dragonfly splitting its old skin and emerging shining metallic blue like "sapphire mail" in his 1842 poem "The Two Voices", with the lines "An inner impulse rent the veil / Of his old husk: from head to tail / Came out clear plates of sapphire mail."
The novelist H. E. Bates described the rapid, agile flight of dragonflies in his 1937 nonfiction book Down the River.
In technology
A dragonfly has been genetically modified with light-sensitive "steering neurons" in its nerve cord to create a cyborg-like "DragonflEye". The neurons contain genes like those in the eye to make them sensitive to light. Miniature sensors, a computer chip, and a solar panel were fitted in a "backpack" over the insect's thorax in front of its wings. Light is sent down flexible light-pipes named optrodes from the backpack into the nerve cord to give steering commands to the insect. The result is a "micro-aerial vehicle that's smaller, lighter and stealthier than anything else that's manmade".
| Biology and health sciences | Odonata | null |
57624 | https://en.wikipedia.org/wiki/Multituberculata | Multituberculata | Multituberculata (commonly known as multituberculates, named for the multiple tubercles of their teeth) is an extinct order of rodent-like mammals with a fossil record spanning over 130 million years. They first appeared in the Middle Jurassic, and reached a peak diversity during the Late Cretaceous and Paleocene. They eventually declined from the mid-Paleocene onwards, disappearing from the known fossil record in the late Eocene. They are the most diverse order of Mesozoic mammals with more than 200 species known, ranging from mouse-sized to beaver-sized. These species occupied a diversity of ecological niches, ranging from burrow-dwelling to squirrel-like arborealism to jerboa-like hoppers. Multituberculates are usually placed as crown mammals outside either of the two main groups of living mammals—Theria, including placentals and marsupials, and Monotremata—but usually as closer to Theria than to monotremes. They are considered to be closely related to Euharamiyida and Gondwanatheria as part of Allotheria.
Description
The multituberculates had a cranial and dental anatomy superficially similar to rodents such as mice and rats, with cheek-teeth separated from the chisel-like front teeth by a wide toothless gap (the diastema). Each cheek-tooth displayed several rows of small cusps (or tubercles, hence the name) that operated against similar rows on the opposing teeth; the exact homology of these cusps to therian ones is still a matter of debate. Unlike rodents, which have ever-growing teeth, multituberculates underwent dental replacement patterns typical of most mammals (though in at least some species the lower incisors continued to erupt long after the root's closure). Multituberculates are notable for the presence of a massive fourth lower premolar, the plagiaulacoid; other mammals, like Plesiadapiformes and diprotodontian marsupials, also have similar premolars in both upper and lower jaws, but in multituberculates this tooth is massive and the upper premolars are not modified this way. In basal multituberculates all three lower premolars were plagiaulacoids, increasing in size posteriorly, but in Cimolodonta only the fourth lower premolar remained, with the third one remaining only as a vestigial peg-like tooth, and in several taxa like taeniolabidoideans, the plagiaulacoid disappeared entirely or was reconverted into a molariform tooth.
Unlike rodents and similar therians, multituberculates had a palinal jaw stroke (front-to-back), instead of a propalinal (back-to-front) or transverse (side-to-side) one; as a consequence, their jaw musculature and cusp orientation is radically different. Palinal jaw strokes are almost entirely absent in modern mammals (with the possible exception of the dugong), but are also present in haramiyidans, argyrolagoideans and tritylodontids, the first of which were historically united with multituberculates on that basis. Multituberculate mastication is thought to have operated in a two-stroke cycle: first, food held in place by the last upper premolar was sliced by the blade-like lower premolars as the dentary moved orthally (upward). Then the lower jaw moved palinally, grinding the food between the molar cusp rows.
The structure of the pelvis in the Multituberculata suggests that they gave birth to tiny, helpless, underdeveloped young, similar to modern marsupials such as kangaroos. However, a 2022 study suggested that they might actually have had long gestation periods like placentals. A 2024 analysis, by contrast, recovered all Allotheria (including multituberculates) outside the crown group of Mammalia, implying that cimolodonts developed placental-like gestation (and viviparity in general) independently, rather than multituberculates and therians having a common viviparous ancestor.
At least two lineages developed hypsodonty, in which tooth enamel extends beyond the gumline: lambdopsalid taeniolabidoideans and sudamericid gondwanatheres.
Studies published in 2018 demonstrated that multituberculates had relatively complex brains, including some brain regions that are absent in therian mammals.
Evolution
Multituberculates first appear in the fossil record during the Jurassic period, and then survived and even dominated for over one hundred million years, longer than any other order of mammaliforms, including placental mammals. The earliest known multituberculates are from the Middle Jurassic (Bathonian, ~168–166 million years ago) of England and Russia, including Hahnotherium and Kermackodon from the Forest Marble Formation of England, and Tashtykia and Tagaria from the Itat Formation of Russia. These forms are only known from isolated teeth, which bear close similarity to those of euharamiyidans, to which they are suspected to be closely related. During the Late Jurassic and Early Cretaceous, primitive multituberculates, collectively grouped into the paraphyletic "Plagiaulacida", were abundant and widespread across Laurasia (including Europe, Asia and North America). During the Aptian stage of the Early Cretaceous, the advanced subgroup Cimolodonta appeared in North America, characterised by a reduced number of lower premolars, with a blade-like lower fourth premolar. By the early Late Cretaceous (Cenomanian), Cimolodonta had replaced all other multituberculate lineages.
During the Late Cretaceous, multituberculates experienced an adaptive radiation, corresponding with a shift towards herbivory. Multituberculates reached their peak diversity during the early Paleocene, shortly after the Cretaceous–Paleogene extinction event, but declined from the mid-Paleocene onwards, likely due to competition with placental mammals such as rodents and ungulates; the group finally became extinct in the late Eocene.
There are some isolated records of multituberculates from the Southern Hemisphere, including the cimolodontan Corriebaatar from the Early Cretaceous of Australia, and fragmentary remains from the Late Cretaceous Maevarano Formation of Madagascar. The family Ferugliotheriidae from the Late Cretaceous of South America, traditionally considered gondwanatherians, may actually be cimolodontan multituberculates.
During the Late Cretaceous and Paleocene the multituberculates radiated into a wide variety of morphotypes, including the squirrel-like arboreal ptilodonts. The peculiar shape of their last lower premolar is their most outstanding feature. These teeth were larger and more elongated than the other cheek-teeth and had an occlusal surface forming a serrated slicing blade. Though it can be assumed that this was used for crushing seeds and nuts, it is believed that most small multituberculates also supplemented their diet with insects, worms, and fruits. Tooth marks attributed to multituberculates are known on Champsosaurus fossils, indicating that at least some of these mammals were scavengers. A ptilodont that thrived in North America was Ptilodus. Thanks to the well-preserved Ptilodus specimens found in the Bighorn Basin, Wyoming, we know that these multituberculates were able to abduct and adduct their big toes, and thus that their foot mobility was similar to that of modern squirrels, which descend trees head first.
Another group of multituberculates, the taeniolabids, were heavier and more massively built, indicating that they lived a fully terrestrial life. The largest specimens probably weighed as much as , making them comparable in size to large rodents like the modern beaver.
Classification
Multituberculata is generally placed within Allotheria alongside Euharamiyida, a clade of mammals known from the Middle Jurassic to Early Cretaceous of Asia and possibly Europe whose members possess several morphological similarities with multituberculates.
Gondwanatheria is a monophyletic group of allotherians that was diverse in the Late Cretaceous of South America, India, Madagascar and possibly Africa, and persisted into the Paleogene of South America and Antarctica. Their placement within Allotheria is highly controversial, with some phylogenies recovering the group as deeply nested within multituberculates, while others recover them as a distinct branch of allotherians separate from multituberculates.
In their 2001 study, Kielan-Jaworowska and Hurum found that most multituberculates could be referred to two suborders: "Plagiaulacida" and Cimolodonta. The exception is the genus Arginbaatar, which shares characteristics with both groups.
"Plagiaulacida" is paraphyletic, representing the more primitive evolutionary grade. Its members are the more basal Multituberculata. Chronologically, they ranged from perhaps the Middle Jurassic until the mid-Cretaceous. This group is further subdivided into three informal groupings: the allodontid line, the paulchoffatiid line, and the plagiaulacid line.
Cimolodonta is, apparently, a natural (monophyletic) suborder. This includes the more derived Multituberculata, which have been identified from the Early Cretaceous to the Eocene. The superfamilies Djadochtatherioidea, Taeniolabidoidea, and Ptilodontoidea are recognized, as is the Paracimexomys group. Additionally, there are the families Cimolomyidae, Boffiidae, Eucosmodontidae, Kogaionidae, Microcosmodontidae and the two genera Uzbekbaatar and Viridomys. More precise placement of these types awaits further discoveries and analysis.
Taxonomy
Based on the combined works of Mikko's Phylogeny Archive and Paleofile.com.
Suborder †Plagiaulacida Simpson 1925
Genus ?†Argillomys Cifelli, Gordon & Lipka 2013
Species †Argillomys marylandensis Cifelli, Gordon & Lipka 2013
Genus ?†Janumys Eaton & Cifelli 2001
Species †Janumys erebos Eaton & Cifelli 2001
Superfamily †Allodontoidea Marsh 1889
Genus †?Glirodon Engelmann & Callison, 2001
Species †G. grandis Engelmann & Callison, 2001
Family †Arginbaataridae Hahn & Hahn, 1983
Genus †Arginbaatar Trofimov, 1980
Species †A. dmitrievae Trofimov, 1980
Family †Zofiabaataridae Bakker, 1992
Genus †Zofiabaatar Bakker & Carpenter, 1990
Species †Z. pulcher Bakker & Carpenter, 1990
Family †Allodontidae Marsh, 1889
Genus †Passumys Cifelli, Davis & Sames 2014
Species †Passumys angelli Cifelli, Davis & Sames 2014
Genus †Ctenacodon Marsh, 1879
Species †C. serratus Marsh, 1879
Species †C. nanus Marsh, 1881
Species †C. laticeps (Marsh, 1881) [Allodon laticeps Marsh 1881]
Species †C. scindens Simpson, 1928
Genus †Psalodon Simpson, 1926
Species †P. potens (Marsh, 1887) [Ctenacodon potens Marsh 1887]
Species †P. fortis (Marsh, 1887) Simpson 1929 [Allodon fortis Marsh 1887]
Species †P. marshi Simpson, 1929
Superfamily †Paulchoffatioidea Hahn 1969 sensu Hahn & Hahn 2003
Genus ?†Mojo Hahn, LePage & Wouters 1987
Species †Mojo usuratus Hahn, LePage & Wouters 1987
Genus ?†Rugosodon Yuan et al., 2013
Species †Rugosodon eurasiaticus Yuan et al., 2013
Family †Pinheirodontidae Hahn & Hahn, 1999
Genus †Bernardodon Hahn & Hahn, 1999
Species †B. atlanticus Hahn & Hahn, 1999
Species †B. sp. Hahn & Hahn, 1999
Genus †Cantalera Badiola, Canudo & Cuenca-Bescos, 2008
Species †Cantalera abadi Badiola, Canudo & Cuenca-Bescos, 2008
Genus †Ecprepaulax Hahn & Hahn, 1999
Species †E. anomala Hahn & Hahn, 1999
Genus †Gerhardodon Kielan-Jaworowska & Ensom, 1992
Species †G. purbeckensis Kielan-Jaworowska & Ensom, 1992
Genus †Iberodon Hahn & Hahn, 1999
Species †I. quadrituberculatus Hahn & Hahn, 1999
Genus †Lavocatia Canudo & Cuenca-Bescós, 1996
Species †L. alfambrensis Canudo & Cuenca-Bescós, 1996
Genus †Pinheirodon Hahn & Hahn, 1999
Species †P. pygmaeus Hahn & Hahn, 1999
Species †P. vastus Hahn & Hahn, 1999
Family †Paulchoffatiidae Hahn, 1969
Genus ?†Galveodon Hahn & Hahn, 1992
Species †G. nannothus Hahn & Hahn, 1992
Genus ?†Sunnyodon Kielan-Jaworowska & Ensom, 1992
Species †S. notleyi Kielan-Jaworowska & Ensom, 1992
Subfamily †Paulchoffatiinae Hahn, 1971
Genus †Paulchoffatia Kühne, 1961
Species †P. delgador Kühne, 1961
Genus †Pseudobolodon Hahn, 1977
Species †P. oreas Hahn, 1977
Species †P. krebsi Hahn & Hahn, 1994
Genus †Henkelodon Hahn, 1987
Species †H. naias Hahn, 1987
Genus †Guimarotodon Hahn, 1969
Species †G. leiriensis Hahn, 1969
Genus †Meketibolodon (Hahn, 1978) Hahn, 1993
Species †M. robustus (Hahn, 1978) Hahn, 1993 [Pseudobolodon robusutus Hahn 1978]
Genus †Plesiochoffatia Hahn & Hahn, 1999 [Parachoffatia Hahn & Hahn 1998 non Mangold 1970]
Species †P. thoas (Hahn & Hahn, 1998) Hahn & Hahn 1999 [Parachoffatia thoa Hahn & Hahn 1998]
Species †P. peparethos (Hahn & Hahn, 1998) Hahn & Hahn 1999 [Parachoffatia peparethos Hahn & Hahn 1998]
Species †P. staphylos (Hahn & Hahn, 1998) Hahn & Hahn 1999 [Parachoffatia staphylos Hahn & Hahn 1998]
Genus †Xenachoffatia Hahn & Hahn, 1998
Species †X. oinopion Hahn & Hahn, 1998
Genus †Bathmochoffatia Hahn & Hahn, 1998
Species †B. hapax Hahn & Hahn, 1998
Genus †Kielanodon Hahn, 1987
Species †K. hopsoni Hahn, 1987
Genus †Meketichoffatia Hahn, 1993
Species †M. krausei Hahn, 1993
Genus †Renatodon Hahn, 2001
Species †Renatodon amalthea Hahn, 2001
Subfamily †Kuehneodontinae Hahn, 1971
Genus †Kuehneodon Hahn, 1969
Species †K. dietrichi Hahn, 1969
Species †K. barcasensis Hahn & Hahn, 2001
Species †K. dryas Hahn, 1977
Species †K. guimarotensis Hahn, 1969
Species †K. hahni Antunes, 1988
Species †K. simpsoni Hahn, 1969
Species †K. uniradiculatus Hahn, 1978
Superfamily †Plagiaulacoidea Ameghino, 1894
Family †Plagiaulacidae Gill, 1872 sensu Kielan-Jaworowska & Hurum, 2001 [Bolodontidae Osborn 1887]
Genus ?†Morrisonodon Hahn & Hahn, 2004
Species †Morrisonodon brentbaatar (Bakker, 1998) Hahn & Hahn, 2004 [Ctenacodon brentbaatar Bakker, 1998]
Genus †Plagiaulax Falconer, 1857
Species †P. becklesii Falconer, 1857
Species †P. dawsoni Woodward, 1891 [Plioprion dawsoni Woodward, 1891; Loxaulax dawsoni (Woodward, 1891) Sloan, 1979]
Genus †Bolodon Owen, 1871 [Plioprion Cope, 1884]
Species †B. crassidens Owen, 1871
Species †B. falconeri Owen, 1871 [Pligiaulax falconeri Owen, 1871; Plioprion falconeri (Owen, 1871)]
Species †B. hydei Cifelli, Davis & Sames, 2014
Species †B. minor Falconer, 1857 [Pligiaulax minor Falconer, 1857; Plioprion minor (Falconer, 1857)]
Species †B. osborni Simpson, 1928 [Plioprion osborni (Simpson, 1928); Ctenacodon osborni Simpson, 1928]
Species ?†B. elongatus Simpson, 1928
Family †Eobaataridae Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Genus †Eobaatar Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Species †E. clemensi Sweetman, 2009
Species †E. hispanicus Hahn & Hahn, 1992
Species †E. magnus Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Species †E. minor Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Species †E. pajaronensis Hahn & Hahn, 2001
Genus †Hakusanobaatar Kusuhashi et al., 2008
Species †H. matsuoi Kusuhashi et al., 2008
Genus †Heishanobaatar Kusuhashi et al., 2010
Species †H. triangulus Kusuhashi et al., 2010
Genus †Iberica Badiola et al., 2011
Species †Iberica hahni Badiola et al., 2011
Genus †Liaobaatar Kusuhashi et al., 2009
Species †L. changi Kusuhashi et al., 2009
Genus †Loxaulax Simpson, 1928 [Parendotherium Crusafont Pairó & Adrover, 1966]
Species †L. valdensis (Woodward, 1911) Simpson, 1928 [Dipriodon valdensis Woodward, 1911]
Species †L. herreroi (Crusafont Pairó & Adrover, 1966) [Parendotherium herreroi Crusafont Pairó & Adrover 1966]
Genus †Monobaatar Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Species †M. mimicus Kielan-Jaworowska, Dashzeveg & Trofimov, 1987
Genus †Sinobaatar Hu & Wang, 2002
Species †S. lingyuanensis Hu & Wang, 2002
Species †S. xiei Kusuhashi et al., 2009
Species †S. fuxinensis Kusuhashi et al., 2009
Genus †Tedoribaatar Kusuhashi et al., 2008
Species †T. reini Kusuhashi et al., 2008
Genus †Teutonodon Martin et al., 2016
Species †Teutonodon langenbergensis Martin et al. 2016
Family †Albionbaataridae Kielan-Jaworowska & Ensom, 1994
Genus †Albionbaatar Kielan-Jaworowska & Ensom, 1994
Species †A. denisae Kielan-Jaworowska & Ensom, 1994
Genus †Kielanobaatar Kusuhashi et al., 2010
Species †K. badaohaoensis Kusuhashi et al., 2010
Genus †Proalbionbaatar Hahn & Hahn, 1998
Species †P. plagiocyrtus Hahn & Hahn, 1998
Suborder †Gondwanatheria McKenna 1971 [Gondwanatheroidea Krause & Bonaparte 1993]
Family †Groeberiidae Patterson, 1952
Genus †Groeberia Patterson 1952
Species †G. minoprioi B. Patterson, 1952
Species †G. pattersoni G. G. Simpson, 1970
Genus †Klohnia Flynn & Wyss 1999
Species †K. charrieri Flynn & Wyss 1999
Species †K. major Goin et al., 2010
Genus ?†Epiklohnia Goin et al., 2010
Species †Epiklohnia verticalis Goin et al., 2010
Genus ?†Praedens Goin et al., 2010
Species †Praedens aberrans Goin et al., 2010
Family †Ferugliotheriidae Bonaparte, 1986
Genus †Ferugliotherium Bonaparte, 1986a [Vucetichia Bonaparte, 1990]
Species †Ferugliotherium windhauseni Bonaparte, 1986a [Vucetichia gracilis Bonaparte, 1990]
Genus †Trapalcotherium Rougier et al., 2008
Species †Trapalcotherium matuastensis Rougier et al., 2008
Family †Sudamericidae Scillato-Yané & Pascual, 1984 [Gondwanatheridae Bonaparte, 1986; Patagonidae Pascual & Carlini, 1987]
Genus †Greniodon Goin et al., 2012
Species †Greniodon sylvanicus Goin et al., 2012
Genus †Vintana Krause et al., 2014
Species †Vintana sertichi Krause et al., 2014
Genus †Dakshina Wilson, Das Sarama & Anantharaman, 2007
Species †Dakshina jederi Wilson, Das Sarama & Anantharaman, 2007
Genus †Gondwanatherium Bonaparte, 1986
Species †Gondwanatherium patagonicum Bonaparte, 1986
Genus †Sudamerica Scillato-Yané & Pascual, 1984
Species †Sudamerica ameghinoi Scillato-Yané & Pascual, 1984
Genus †Lavanify Krause et al., 1997
Species †Lavanify miolaka Krause et al., 1997
Genus †Bharattherium Prasad et al., 2007
Species †Bharattherium bonapartei Prasad et al., 2007
Genus †Patagonia Pascual & Carlini, 1987
Species †Patagonia peregrina Pascual & Carlini, 1987
Suborder †Cimolodonta McKenna, 1975
Genus ?†Allocodon non Marsh, 1881
Species †A. fortis Marsh, 1889
Species †A. lentus Marsh, 1892 [Cimolomys lentus]
Species †A. pumilis Marsh, 1892 [Cimolomys pumilus]
Species †A. rarus Marsh, 1889
Genus ?†Ameribaatar Eaton & Cifelli, 2001
Species †A. zofiae Eaton & Cifelli, 2001
Genus ?†Bubodens Wilson, 1987
Species †Bubodens magnus Wilson, 1987
Genus ?†Clemensodon Krause, 1992
Species †Clemensodon megaloba Krause, 1992 [Kimbetohia cambi, in partim]
Genus ?†Fractinus Higgins 2003
Species †Fractinus palmorum Higgins, 2003
Genus ?†Uzbekbaatar Kielan-Jaworowska & Nesov, 1992
Species †Uzbekbaatar kizylkumensis Kielan-Jaworowska & Nesov, 1992
Genus ?†Viridomys Fox 1971
Species †Viridomys orbatus Fox 1971
Family †Corriebaataridae Rich et al., 2009
Genus ?†Corriebaatar Rich et al., 2009
Species †Corriebaatar marywaltersae Rich et al., 2009
Paracimexomys group
Genus †Paracimexomys Archibald, 1982
Species? †P. crossi Cifelli, 1997
Species? †P. dacicus Grigorescu & Hahn, 1989
Species? †P. oardaensis (Codrea et al., 2014) [Barbatodon oardaensis Codrea et al., 2014]
Species †P. magnus (Sahni, 1972) Archibald, 1982 [Cimexomys magnus Sahni, 1972]
Species †P. magister (Fox, 1971) Archibald, 1982 [Cimexomys magister Fox, 1971]
Species †P. perplexus Eaton & Cifelli, 2001
Species †P. robisoni Eaton & Nelson, 1991
Species †P. priscus (Lillegraven, 1969) Archibald, 1982 [Cimexomys priscus Lillegraven, 1969; genotype Paracimexomys sensu Eaton & Cifelli, 2001]
Species †P. propriscus Hunter, Heinrich & Weishampel 2010
Genus †Cimexomys Sloan & Van Valen, 1965
Species †C. antiquus Fox, 1971
Species †C. gregoryi Eaton, 1993
Species †C. judithae Sahni, 1972 [Paracimexomys judithae (Sahni, 1972) Archibald, 1982]
Species †C. arapahoensis Middleton & Dewar, 2004
Species †C. minor Sloan & Van Valen, 1965
Species? †C. gratus (Jepson, 1930) Lofgren, 1995 [Cimexomys hausoi Archibald, 1983; Eucosmodon gratus Jepson, 1930; Mesodma ambigua? Jepson, 1940; Stygimus gratus Jepson, 1930]
Genus †Bryceomys Eaton, 1995
Species †B. fumosus Eaton, 1995
Species †B. hadrosus Eaton, 1995
Species †B. intermedius Eaton & Cifelli, 2001
Genus †Cedaromys Eaton & Cifelli, 2001
Species †C. bestia (Eaton & Nelson, 1991) Eaton & Cifelli, 2001 [Paracimexomys bestia Eaton & Nelson, 1991]
Species †C. hutchisoni Eaton 2002
Species †C. minimus Eaton 2009
Species †C. parvus Eaton & Cifelli, 2001
Genus †Dakotamys Eaton, 1995
Species? †D. sp. Eaton, 1995
Species †D. malcolmi Eaton, 1995
Species †D. shakespeari Eaton 2013
Family †Boffidae Hahn & Hahn, 1983 sensu Kielan-Jaworowska & Hurum 2001
Genus †Boffius Vianey-Liaud, 1979
Species †Boffius splendidus Vianey-Liaud, 1979 [Boffiidae Hahn & Hahn, 1983 sensu Kielan-Jaworowska & Hurum, 2001]
Family †Cimolomyidae Marsh, 1889 sensu Kielan-Jaworowska & Hurum, 2001
Genus †Paressodon Wilson, Dechense & Anderson, 2010
Species †Paressodon nelsoni Wilson, Dechense & Anderson, 2010
Genus †Cimolomys Marsh, 1889 [?Allacodon Marsh, 1889; Selenacodon Marsh, 1889]
Species †C. clarki Sahni, 1972
Species †C. gracilis Marsh, 1889 [Cimolomys digona Marsh, 1889; Meniscoessus brevis; Ptilodus gracilis Osborn, 1893 non Gidley 1909; Selenacodon brevis Marsh, 1889]
Species †C. trochuus Lillegraven, 1969
Species †C. milliensis Eaton, 1993a
Species ?†C. bellus Marsh, 1889
Genus ?†Essonodon Simpson, 1927
Species †E. browni Simpson, 1927 [Cimolodontidae? Kielan-Jaworowska & Hurum 2001]
Genus ?†Buginbaatar Kielan-Jaworowska & Sochava, 1969
Species †Buginbaatar transaltaiensis Kielan-Jaworowska & Sochava, 1969
Genus ?†Meniscoessus Cope, 1882 [Dipriodon Marsh, 1889; Tripriodon Marsh, 1889 nomen dubium; Triprotodon Chure & McIntosh, 1989 nomen dubium; Selenacodon Marsh, 1889, Halodon Marsh, 1889, Oracodon Marsh, 1889]
Species †M. caperatus Marsh, 1889
Species †M. collomensis Lillegraven, 1987
Species †M. conquistus Cope 1882
Species †M. ferox Fox, 1971a
Species †M. intermedius Fox, 1976b
Species †M. major (Russell, 1936) [Cimolomys major Russell 1937]
Species †M. robustus (Marsh, 1889) [Dipriodon robustus Marsh 1889; Dipriodon lacunatus Marsh, 1889; Tripriodon coelatus Marsh, 1889; Meniscoessus coelatus Marsh, 1889; Selenacodon fragilis Marsh, 1889; Meniscoessus fragilis Marsh, 1889; Halodon sculptus (Marsh, 1889); Cimolomys sculptus Marsh, 1889; Meniscoessus sculptus Marsh, 1889; Oracodon anceps Marsh, 1889; Oracodon conulus Marsh, 1892; Meniscoessus borealis Simpson, 1927c; Meniscoessus greeni Wilson, 1987]
Species †M. seminoensis Eberle & Lillegraven, 1998a
Family †Kogaionidae Rădulescu & Samson, 1996
Genus †Kogaionon Rădulescu & Samson, 1996
Species †K. ungureanui Rădulescu & Samson, 1996
Genus †Hainina Vianey-Liaud, 1979
Species †H. belgica Vianey-Liaud, 1979
Species †H. godfriauxi Vianey-Liaud, 1979
Species †H. pyrenaica Peláez-Campomanes, López-Martínez, Álvarez-Sierra & Daams, 2000
Species †H. vianeyae Peláez-Campomanes, López-Martínez, Álvarez-Sierra & Daams, 2000
Genus †Barbatodon Rădulescu & Samson, 1986
Species †B. transylvanicum Rădulescu & Samson, 1986
Family †Eucosmodontidae Jepsen, 1940 sensu Kielan-Jaworowska & Hurum, 2001 [Eucosmodontidae: Eucosmodontinae Jepsen, 1940 sensu McKenna & Bell, 1997]
Genus †Eucosmodon Matthew & Granger, 1921
Species †E. primus Granger & Simpson, 1929
Species †E. americanus Cope, 1885
Species †E. molestus Cope, 1869 [Neoplagiaulax molestus Cope, 1869]
Genus †Stygimys Sloan & Van Valen, 1965
Species †S. camptorhiza Johnston & Fox, 1984
Species †S. cupressus Fox, 1981
Species †S. kuszmauli [Eucosmodon kuszmauli]
Species †S. jepseni Simpson, 1935
Species †S. teilhardi Granger & Simpson, 1929
Family †Microcosmodontidae Holtzman & Wolberg, 1977 [Eucosmodontidae: Microcosmodontinae Holtzman & Wolberg, 1977 sensu McKenna & Bell, 1997]
Genus †Pentacosmodon Jepsen, 1940
Species †P. pronus Jepsen, 1940 [Djadochtatheroid? (Kielan-Jaworowska & Hurum, 2001)]
Genus †Acheronodon Archibald, 1982
Species †A. garbani Archibald, 1982
Genus †Microcosmodon Jepsen, 1930
Species †M. conus Jepsen, 1930
Species †M. rosei Krause, 1980
Species †M. arcuatus Johnston & Fox, 1984
Species †M. woodi Holtzman & Wolberg, 1977 [Eucosmodontine?]
Species †M. harleyi Weil, 1998
Superfamily †Ptilodontoidea Cope, 1887 sensu McKenna & Bell, 1997 e Kielan-Jaworowska & Hurum, 2001
Family †Cimolodontidae Marsh, 1889 sensu Kielan-Jaworowska & Hurum, 2001
Genus †Liotomus Lemoine, 1882 [Neoctenacodon Lemoine 1891]
Species? †L. marshi (Lemoine, 1882) Cope, 1884 [Neoctenacodon marshi Lemoine, 1882; Neoplagiaulax marshi (Lemoine 1882); Plagiaulax marshi (Lemoine 1882)] [Eucosmodontidae? McKenna & Bell, 1997]
Genus †Yubaatar Xu et al., 2015
Species †Yubaatar zhongyuanensis Xu et al., 2015
Genus †Anconodon Jepsen, 1940
Species? †A. lewisi (Simpson 1935) Sloan, 1987
Species †A. gidleyi (Simpson, 1935) [Ptilodus gidleyi Simpson, 1935]
Species †A. cochranensis (Russell, 1929) [Liotomus russelli (Simpson, 1935); Anconodon russelli (Simpson, 1935) Sloan, 1987; Ectopodon cochranensis (Russell, 1967)]
Genus †Cimolodon Marsh, 1889 [Nanomys Marsh, 1889, Nanomyops Marsh, 1892]
Species †C. agilis Marsh, 1889
Species †C. foxi Eaton, 2002
Species †C. gracilis Marsh, 1889
Species †C. electus Fox, 1971
Species †C. nitidus Marsh, 1889 [Allacodon rarus Marsh, 1892 sensu Clemens, 1964a; Nanomys minutus Marsh, 1889; Nanomyops minutus (Marsh, 1889) Marsh, 1892; Halodon serratus Marsh, 1889; Ptilodus serratus (Marsh, 1889) Gidley 1909]
Species †C. parvus Marsh, 1889
Species †C. peregrinus Donohue, Wilson & Breithaupt, 2013
Species †C. similis Fox, 1971
Species †C. wardi Eaton, 2006
Family Incertae sedis
Genus †Neoliotomus Jepsen, 1930
Species †N. conventus Jepsen, 1930
Species †N. ultimus (Granger & Simpson, 1928)
Family †Neoplagiaulacidae Ameghino, 1890 [Ptilodontidae: Neoplagiaulacinae Ameghino, 1890 sensu McKenna & Bell, 1997]
Genus †Mesodma Jepsen, 1940
Species? †M. hensleighi Lillegraven, 1969
Species? †M. senecta Fox, 1971
Species †M. ambigua Jepsen, 1940
Species? †M. pygmaea Sloan, 1987
Species †M. formosa (Marsh, 1889) [Halodon formosus Marsh, 1889]
Species †M. primaeva (Lambe, 1902)
Species †M. thompsoni Clemens, 1964
Genus †Ectypodus Matthew & Granger, 1921 [Charlesmooria Kühne, 1969]
Species †E. aphronorus Sloan, 1981
Species? †E. childei Kühne, 1969
Species? †E. elaphus Scott, 2005
Species? †E. lovei (Sloan, 1966) Krishtlaka & Black, 1975
Species †E. musculus Matthew & Granger, 1921
Species †E. powelli Jepsen, 1940
Species? †E. simpsoni Jepsen, 1930
Species †E. szalayi Sloan, 1981
Species †E. tardus Jepsen, 1930
Genus †Mimetodon Jepsen, 1940
Species †M. krausei Sloan, 1981
Species †M. nanophus Holtzman, 1978 [Neoplagiaulax nanophus Holtzman, 1978]
Species †M. siberlingi (Simpson, 1935) Schiebout, 1974
Species †M. churchilli Jepsen, 1940
Genus †Neoplagiaulax Lemoine, 1882
Species †N. annae Vianey-Liaud, 1986
Species? †N. burgessi Archibald, 1982
Species †N. cimolodontoides Scott, 2005
Species †N. copei Lemoine, 1885
Species †N. donaldorum Scott & Krause, 2006
Species †N. eocaenus Lemoine, 1880
Species †N. grangeri Simpson, 1935
Species †N. hazeni Jepsen, 1940
Species †N. hunteri Krishtalka, 1973
Species †N. jepi Sloan, 1987
Species †N. kremnus Johnston & Fox, 1984
Species †N. macintyrei Sloan, 1981
Species †N. macrotomeus Wilson, 1956
Species †N. mckennai Sloan, 1987
Species †N. nelsoni Sloan, 1987
Species †N. nicolai Vianey-Liaud, 1986
Species †N. paskapooensis Scott, 2005
Species? †N. serrator Scott, 2005
Species †N. sylvani Vianey-Liaud, 1986
Genus †Parectypodus Jepsen, 1930
Species †P. armstrongi Johnston & Fox, 1984
Species? †P. corystes Scott, 2003
Species? †P. foxi Storer, 1991
Species †P. laytoni Jepsen, 1940
Species †P. lunatus Krause, 1982 [P. childei Kühne, 1969]
Species †P. simpsoni Jepsen, 1940
Species †P. sinclairi Simpson, 1935
Species †P. sloani Schiebout, 1974
Species †P. trovessartianus Cope, 1882 [P. trouessarti; Ptilodus; Mimetodon; Neoplagiaulax]
Species †P. sylviae Rigby, 1980 [Ectypodus sylviae Rigby, 1980]
Species? †P. vanvaleni Sloan, 1981
Genus †Cernaysia Vianey-Liaud, 1986
Species †C. manueli Vianey-Liaud, 1986
Species †C. davidi Vianey-Liaud, 1986
Genus †Krauseia Vianey-Liaud, 1986
Species †K. clemensi Sloan, 1981 [Parectypodus clemensi Sloan, 1981]
Genus †Xyronomys Rigby, 1980
Species †X. swainae Rigby, 1980 [Xironomys (sic); ?Eucosmodontidae]
Genus †Xanclomys Rigby, 1980
Species †X. mcgrewi Rigby, 1980
Genus †Mesodmops Tong & Wang, 1994
Species †M. dawsonae Tong & Wang, 1994
Family †Ptilodontidae Cope, 1887 [Ptilodontidae: Ptilodontinae Cope, 1887 sensu McKenna & Bell, 1997]
Genus †Kimbetohia Simpson, 1936
Species †K. cambi [Granger, Gregory & Colbert in Matthew, 1937, or Simpson, 1936]
Species †K. sp. cf. K. cambi
Genus †Ptilodus Cope, 1881 [Chirox Cope, 1884]
Species? †P. fractus
Species †P. kummae Krause, 1977
Species †P. gnomus Scott, Fox & Youzwyshyn, 2002 [cf. Ectypodus hazeni (Jepsen, 1940) Gazin, 1956]
Species †P. mediaevus Cope, 1881 [Ptilodus plicatus (Cope, 1884); Chirox plicatus Cope, 1884; P. ferronensis Gazin, 1941]
Species †P. montanus Douglass, 1908 [P. gracilis Gidley, 1909; P. admiralis Hay, 1930]
Species †P. tsosiensis Sloan, 1981
Species †P. wyomingensis Jepsen, 1940
Genus †Baiotomeus Krause, 1987
Species †B. douglassi Simpson, 1935 [Ptilodus; Mimetodon; Neoplagiaulax]
Species †B. lamberti Krause, 1987
Species †B. russelli Scott, Fox & Youzwyshyn, 2002
Species †B. rhothonion Scott, 2003
Genus †Prochetodon Jepsen, 1940
Species †P. cavus Jepsen, 1940
Species †P. foxi Krause, 1987
Species †P. taxus Krause, 1987
Species? †P. speirsae Scott, 2004
Superfamily †Taeniolabidoidea Granger & Simpson, 1929 sensu Kielan-Jaworowska & Hurum, 2001
Genus †Prionessus Matthew & Granger, 1925
Species †P. lucifer Matthew & Granger, 1925
Family †Lambdopsalidae
Genus †Lambdopsalis Chow & Qi, 1978
Species †L. bulla Chow & Qi, 1978
Genus †Sphenopsalis Matthew, Granger & Simpson, 1928
Species †S. nobilis Matthew, Granger & Simpson, 1928
Family †Taeniolabididae Granger & Simpson, 1929
Genus †Taeniolabis Cope, 1882
Species †T. lamberti Simmons, 1987
Species †T. taoensis Cope, 1882
Genus †Kimbetopsalis
Species †K. simmonsae
Superfamily †Djadochtatherioidea Kielan-Jaworowska & Hurum, 1997 sensu Kielan-Jaworowska & Hurum, 2001 [Djadochtatheria Kielan-Jaworowska & Hurum, 1997]
Genus? †Bulganbaatar Kielan-Jaworowska, 1974
Species? †B. nemegtbaataroides Kielan-Jaworowska, 1974
Genus †Nemegtbaatar Kielan-Jaworowska, 1974
Species? †N. gobiensis Kielan-Jaworowska, 1974
Family †Chulsanbaataridae Kielan-Jaworowska, 1974
Genus †Chulsanbaatar Kielan-Jaworowska, 1974
Species †C. vulgaris Kielan-Jaworowska, 1974
Family †Sloanbaataridae Kielan-Jaworowska, 1974
Genus †Kamptobaatar Kielan-Jaworowska, 1970
Species? †K. kuczynskii Kielan-Jaworowska, 1970
Genus †Nessovbaatar Kielan-Jaworowska & Hurum, 1997
Species †N. multicostatus Kielan-Jaworowska & Hurum, 1997
Genus †Sloanbaatar Kielan-Jaworowska, 1974
Species †S. mirabilis Kielan-Jaworowska, 1974 [Sloanbaatarinae]
Family †Djadochtatheriidae Kielan-Jaworowska & Hurum, 1997
Genus †Djadochtatherium Simpson, 1925
Species †D. matthewi Simpson, 1925 [Catopsalis matthewi Simpson, 1925]
Genus †Catopsbaatar Kielan-Jaworowska, 1974
Species †C. catopsaloides (Kielan-Jaworowska, 1974) Kielan-Jaworowska, 1994 [Djadochtatherium catopsaloides Kielan-Jaworowska, 1974]
Genus †Tombaatar Kielan-Jaworowska, 1974
Species †T. sabuli Rougier, Novacek & Dashzeveg, 1997
Genus †Kryptobaatar Kielan-Jaworowska, 1970 [Gobibaatar Kielan-Jaworowska, 1970, Tugrigbaatar Kielan-Jaworowska & Dashzeveg, 1978]
Species †K. saichanensis Kielan-Jaworowska & Dashzeveg, 1978 [Tugrigbaatar saichaenensis Kielan-Jaworowska & Dashzeveg, 1978??]
Species †K. dashzevegi Kielan-Jaworowska, 1970
Species †K. mandahuensis Smith, Guo & Sun, 2001
Species †K. gobiensis Kielan-Jaworowska, 1970 [Gobibaatar parvus Kielan-Jaworowska, 1970 ]
Phylogeny
Paleoecology
Behaviour
Multituberculates are some of the earliest mammals to display complex social behaviours. One species, Filikomys, from the Late Cretaceous of North America, engaged in multi-generational group nesting and burrowing.
Extinction
The extinction of multituberculates has been a topic of controversy for several decades. After at least 88 million years of dominance over most mammalian assemblages, multituberculates reached the peak of their diversity in the early Palaeocene, before gradually declining across the final stages of the epoch and the Eocene, finally disappearing in the early Oligocene.
The last multituberculate species, Ectypodus childei, went extinct near the end of the Eocene in North America. It is unclear why this particular species persisted for so long when all of its counterparts succumbed to replacement by rodents.
Traditionally, the extinction of multituberculates has been linked to the rise of rodents (and, to a lesser degree, earlier placental competitors like hyopsodonts and Plesiadapiformes), which supposedly competitively excluded multituberculates from most mammalian faunas.
However, the idea that multituberculates were replaced by rodents and other placentals has been criticised by several authors. For one thing, it relies on the assumption that these mammals were "inferior" to more derived placentals, and it ignores the fact that rodents and multituberculates had co-existed for at least 15 million years. According to some researchers, the multituberculate "decline" was shaped by sharp extinction events, most notably after the Tiffanian, when a sudden drop in diversity occurred. Finally, the youngest known multituberculates do not exemplify patterns of competitive exclusion: the Oligocene Ectypodus was a rather generalist species, not a specialist. This combination of factors suggests that, rather than gradually declining due to pressure from rodents and similar placentals, multituberculates simply could not cope with climatic and vegetation changes, as well as the rise of new predatory eutherians, such as miacids.
More recent studies show a mixed picture. Multituberculate faunas in North America and Europe did indeed decline in correlation with the introduction of rodents to these areas. However, Asian multituberculate faunas co-existed with rodents with minimal extinction events, implying that competition was not the main cause of the extinction of Asiatic multituberculates. As a whole, it seems that Asian multituberculates, unlike North American and European species, never recovered from the KT event, which allowed the evolution and propagation of rodents in the first place. A recent study does indeed indicate that eutherians recovered more quickly from the KT event than multituberculates. Conversely, another study has shown that placental radiation did not start significantly until after the decline of multituberculates.
| Biology and health sciences | Stem-mammals | Animals |
57630 | https://en.wikipedia.org/wiki/Monsoon | Monsoon | A monsoon () is traditionally a seasonal reversing wind accompanied by corresponding changes in precipitation but is now used to describe seasonal changes in atmospheric circulation and precipitation associated with annual latitudinal oscillation of the Intertropical Convergence Zone (ITCZ) between its limits to the north and south of the equator. Usually, the term monsoon is used to refer to the rainy phase of a seasonally changing pattern, although technically there is also a dry phase. The term is also sometimes used to describe locally heavy but short-term rains.
The major monsoon systems of the world consist of the West African, Asian–Australian, the North American, and South American monsoons.
The term was first used in English in British India and neighboring countries to refer to the big seasonal winds blowing from the Bay of Bengal and Arabian Sea in the southwest bringing heavy rainfall to the area.
Etymology
The etymology of the word monsoon is not wholly certain. The English monsoon came from Portuguese ultimately from Arabic (, "season"), "perhaps partly via early modern Dutch monson".
History
Asian monsoon
Strengthening of the Asian monsoon has been linked to the uplift of the Tibetan Plateau after the collision of the Indian subcontinent and Asia around 50 million years ago. Based on studies of records from the Arabian Sea and of the wind-blown dust in the Loess Plateau of China, many geologists believe the monsoon first became strong around 8 million years ago. More recently, studies of plant fossils in China and new long-duration sediment records from the South China Sea led to a timing of the monsoon beginning 15–20 million years ago and linked to early Tibetan uplift. Testing of this hypothesis awaits deep ocean sampling by the Integrated Ocean Drilling Program. The monsoon has varied significantly in strength since this time, largely linked to global climate change, especially the cycle of the Pleistocene ice ages. A study of Asian monsoonal climate cycles from 123,200 to 121,210 years BP, during the Eemian interglacial, suggests that they had an average duration of around 64 years, with the minimum duration being around 50 years and the maximum approximately 80 years, similar to today.
A study of marine plankton suggested that the South Asian Monsoon (SAM) strengthened around 5 million years ago. Then, during ice periods, the sea level fell and the Indonesian Seaway closed. When this happened, cold waters in the Pacific were impeded from flowing into the Indian Ocean. It is believed that the resulting increase in sea surface temperatures in the Indian Ocean increased the intensity of monsoons. In 2018, a study of the SAM's variability over the past million years found that precipitation resulting from the monsoon was significantly reduced during glacial periods compared to interglacial periods like the present day. The Indian Summer Monsoon (ISM) underwent several intensifications during the warming following the Last Glacial Maximum, specifically during the time intervals corresponding to 16,100–14,600 BP, 13,600–13,000 BP, and 12,400–10,400 BP as indicated by vegetation changes in the Tibetan Plateau displaying increases in humidity brought by an intensifying ISM. Though the ISM was relatively weak for much of the Late Holocene, significant glacial accumulation in the Himalayas still occurred due to cold temperatures brought by westerlies from the west.
During the Middle Miocene, the July ITCZ, the zone of rainfall maximum, migrated northwards, increasing precipitation over southern China during the East Asian Summer Monsoon (EASM) while making Indochina drier. During the Late Miocene Global Cooling (LMCG), from 7.9 to 5.8 million years ago, the East Asian Winter Monsoon (EAWM) became stronger as the subarctic front shifted southwards. An abrupt intensification of the EAWM occurred 5.5 million years ago. The EAWM was still significantly weaker relative to today between 4.3 and 3.8 million years ago but abruptly became more intense around 3.8 million years ago as crustal stretching widened the Tsushima Strait and enabled greater inflow of the warm Tsushima Current into the Sea of Japan. Circa 3.0 million years ago, the EAWM became more stable, having previously been more variable and inconsistent, in addition to being enhanced further amidst a period of global cooling and sea level fall. The EASM was weaker during cold intervals of glacial periods such as the Last Glacial Maximum (LGM) and stronger during interglacials and warm intervals of glacial periods. Another EAWM intensification event occurred 2.6 million years ago, followed by yet another one around 1.0 million years ago. During Dansgaard–Oeschger events, the EASM grew in strength, but it has been suggested to have decreased in strength during Heinrich events. The EASM expanded its influence deeper into the interior of Asia as sea levels rose following the LGM; it also underwent a period of intensification during the Middle Holocene, around 6,000 years ago, due to orbital forcing made more intense by the fact that the Sahara at the time was much more vegetated and emitted less dust. This Middle Holocene interval of maximum EASM was associated with an expansion of temperate deciduous forest steppe and temperate mixed forest steppe in northern China. By around 5,000 to 4,500 BP, the East Asian monsoon's strength began to wane, weakening from that point until the present day. A particularly notable weakening took place ~3,000 BP. The location of the EASM shifted multiple times over the course of the Holocene: first, it moved southward between 12,000 and 8,000 BP, followed by an expansion to the north between approximately 8,000 and 4,000 BP, and most recently retreated southward once more between 4,000 and 0 BP.
Australian monsoon
The January ITCZ migrated further south to its present location during the Middle Miocene, strengthening the summer monsoon of Australia that had previously been weaker.
Five episodes during the Quaternary, at 2.22 Ma (PL-1), 1.83 Ma (PL-2), 0.68 Ma (PL-3), 0.45 Ma (PL-4) and 0.04 Ma (PL-5), have been identified which showed a weakening of the Leeuwin Current (LC). The weakening of the LC would have an effect on the sea surface temperature (SST) field in the Indian Ocean, as the Indonesian Throughflow generally warms the Indian Ocean. Thus these five intervals were probably ones of considerably lowered SST in the Indian Ocean and would have influenced Indian monsoon intensity. During a weak LC, there is the possibility of reduced intensity of the Indian winter monsoon and a strong summer monsoon, because of a change in the Indian Ocean Dipole due to the reduction in net heat input to the Indian Ocean through the Indonesian Throughflow. Thus a better understanding of the possible links between El Niño, the Western Pacific Warm Pool, the Indonesian Throughflow, the wind pattern off western Australia, and ice volume expansion and contraction can be obtained by studying the behaviour of the LC during the Quaternary at close stratigraphic intervals.
South American monsoon
The South American summer monsoon (SASM) is known to have become weakened during Dansgaard–Oeschger events. The SASM has been suggested to have been enhanced during Heinrich events.
Process
Monsoons were once considered a large-scale sea breeze caused by higher temperature over land than in the ocean. This is no longer considered the cause; the monsoon is now regarded as a planetary-scale phenomenon involving the annual migration of the Intertropical Convergence Zone between its northern and southern limits. The limits of the ITCZ vary according to the land–sea heating contrast, and it is thought that the northern extent of the monsoon in South Asia is influenced by the high Tibetan Plateau. These temperature imbalances happen because oceans and land absorb heat in different ways. Over oceans, the air temperature remains relatively stable for two reasons: water has a relatively high heat capacity (3.9 to 4.2 J g⁻¹ K⁻¹), and both conduction and convection will equilibrate a hot or cold surface with deeper water (up to 50 metres). In contrast, dirt, sand, and rocks have lower heat capacities (0.19 to 0.35 J g⁻¹ K⁻¹), and they can only transmit heat into the earth by conduction and not by convection. Therefore, bodies of water stay at a more even temperature, while land temperatures are more variable.
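To make the contrast concrete, a rough back-of-the-envelope comparison (illustrative arithmetic using mid-range values from the heat capacities quoted above, not a figure from the literature) shows how much more a gram of land surface warms than a gram of water for the same absorbed energy:

$$\Delta T = \frac{Q}{mc}, \qquad \frac{\Delta T_{\text{land}}}{\Delta T_{\text{water}}} = \frac{c_{\text{water}}}{c_{\text{land}}} \approx \frac{4.0\ \text{J g}^{-1}\,\text{K}^{-1}}{0.27\ \text{J g}^{-1}\,\text{K}^{-1}} \approx 15$$

In practice the surface contrast is smaller than this factor of ~15, because the ocean also spreads absorbed heat through tens of metres of water by mixing, but the direction of the effect is the same.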
During warmer months sunlight heats the surfaces of both land and oceans, but land temperatures rise more quickly. As the land's surface becomes warmer, the air above it expands and an area of low pressure develops. Meanwhile, the ocean remains at a lower temperature than the land, and the air above it retains a higher pressure. This difference in pressure causes sea breezes to blow from the ocean to the land, bringing moist air inland. The moist air rises to a higher altitude over land and then flows back toward the ocean, completing the cycle. As it rises over the land, however, the air cools; this decreases its ability to hold water, causing precipitation over the land. This is why summer monsoons cause so much rain over land.
In the colder months, the cycle is reversed. Then the land cools faster than the oceans and the air over the land has higher pressure than air over the ocean. This causes the air over the land to flow to the ocean. When humid air rises over the ocean, it cools, and this causes precipitation over the oceans. (The cool air then flows towards the land to complete the cycle.)
Most summer monsoons have a dominant westerly component and a strong tendency to ascend and produce copious amounts of rain (because of the condensation of water vapor in the rising air). The intensity and duration, however, are not uniform from year to year. Winter monsoons, by contrast, have a dominant easterly component and a strong tendency to diverge, subside and cause drought.
Similar rainfall is caused when moist ocean air is lifted upwards by mountains, surface heating, convergence at the surface, divergence aloft, or by storm-produced outflows at the surface. However the lifting occurs, the air cools as it expands at lower pressure, and this produces condensation.
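As a rough rule of thumb (standard atmospheric values, not specific to any one monsoon system), unsaturated rising air cools at the dry adiabatic lapse rate

$$\Gamma_d = \frac{g}{c_p} \approx \frac{9.8\ \text{m s}^{-2}}{1005\ \text{J kg}^{-1}\,\text{K}^{-1}} \approx 9.8\ \text{K km}^{-1},$$

so an air parcel lifted about 2 km cools by roughly 20 °C, usually enough for it to reach saturation and begin condensing; once saturated, latent heat release slows the cooling to roughly 4–7 K per kilometre.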
Global monsoon
Summary table
Africa (West African and Southeast African)
The monsoon of western Sub-Saharan Africa is the result of the seasonal shifts of the Intertropical Convergence Zone and the great seasonal temperature and humidity differences between the Sahara and the equatorial Atlantic Ocean. The ITCZ migrates northward from the equatorial Atlantic in February, reaches western Africa on or near June 22, then moves back to the south by October. The dry, northeasterly trade winds, and their more extreme form, the harmattan, are interrupted by the northern shift in the ITCZ and resultant southerly, rain-bearing winds during the summer. The semiarid Sahel and Sudan depend upon this pattern for most of their precipitation.
North America
The North American monsoon (NAM) occurs from late June or early July into September, originating over Mexico and spreading into the southwest United States by mid-July. It affects Mexico along the Sierra Madre Occidental as well as Arizona, New Mexico, Nevada, Utah, Colorado, West Texas and California. It pushes as far west as the Peninsular Ranges and Transverse Ranges of Southern California, but rarely reaches the coastal strip (a wall of desert thunderstorms only a half-hour's drive away is a common summer sight from the sunny skies along the coast during the monsoon). The North American monsoon is known to many as the Summer, Southwest, Mexican or Arizona monsoon. It is also sometimes called the Desert monsoon, as much of the affected area lies within the Mojave and Sonoran deserts. However, it is controversial whether the North and South American weather patterns with incomplete wind reversal should be counted as true monsoons.
Asia
The Asian monsoons may be classified into a few sub-systems, such as the Indian Subcontinental Monsoon which affects the Indian subcontinent and surrounding regions including Nepal, and the East Asian Monsoon which affects southern China, Taiwan, Korea and parts of Japan.
South Asian monsoon
Southwest monsoon
The southwestern summer monsoons occur from June through September. The Thar Desert and adjoining areas of the northern and central Indian subcontinent heat up considerably during the hot summers. This causes a low pressure area over the northern and central Indian subcontinent. To fill this void, the moisture-laden winds from the Indian Ocean rush into the subcontinent. These winds, rich in moisture, are drawn towards the Himalayas. The Himalayas act like a high wall, blocking the winds from passing into Central Asia, and forcing them to rise. As the clouds rise, their temperature drops, and precipitation occurs. Some areas of the subcontinent receive up to of rain annually.
The southwest monsoon is generally expected to begin around the beginning of June and fade away by the end of September. On reaching the southernmost point of the Indian Peninsula, the moisture-laden winds, owing to its topography, become divided into two parts: the Arabian Sea Branch and the Bay of Bengal Branch.
The Arabian Sea Branch of the Southwest Monsoon first hits the Western Ghats of the coastal state of Kerala, India, making Kerala the first state in India to receive rain from the Southwest Monsoon. This branch of the monsoon moves northwards along the Western Ghats (Konkan and Goa), with precipitation on coastal areas west of the Western Ghats. The eastern areas of the Western Ghats do not receive much rain from this monsoon, as the wind does not cross the Western Ghats.
The Bay of Bengal Branch of the Southwest Monsoon flows over the Bay of Bengal heading towards north-east India and Bengal, picking up more moisture from the Bay of Bengal. The winds arrive at the Eastern Himalayas with large amounts of rain. Mawsynram, situated on the southern slopes of the Khasi Hills in Meghalaya, India, is one of the wettest places on Earth. After arriving at the Eastern Himalayas, the winds turn towards the west, travelling over the Indo-Gangetic Plain at a rate of roughly 1–2 weeks per state, pouring rain all along their way. June 1 is regarded as the date of onset of the monsoon in India, as indicated by the arrival of the monsoon in the southernmost state of Kerala.
The monsoon accounts for nearly 80% of the rainfall in India. Indian agriculture (which accounts for 25% of the GDP and employs 70% of the population) is heavily dependent on the rains, especially for growing crops such as cotton, rice, oilseeds and coarse grains. A delay of a few days in the arrival of the monsoon can badly affect the economy, as evidenced by the numerous droughts in India in the 1990s.
The monsoon is widely welcomed and appreciated by city-dwellers as well, for it provides relief from the climax of summer heat in June. However, the roads take a battering every year. Often houses and streets are waterlogged and slums are flooded despite drainage systems. A lack of city infrastructure coupled with changing climate patterns causes severe economic loss including damage to property and loss of lives, as evidenced in the 2005 flooding in Mumbai that brought the city to a standstill. Bangladesh and certain regions of India like Assam and West Bengal, also frequently experience heavy floods during this season. Recently, areas in India that used to receive scanty rainfall throughout the year, like the Thar Desert, have surprisingly ended up receiving floods due to the prolonged monsoon season.
The influence of the Southwest Monsoon is felt as far north as China's Xinjiang. It is estimated that about 70% of all precipitation in the central part of the Tian Shan Mountains falls during the three summer months, when the region is under the monsoon influence; about 70% of that is directly of "cyclonic" (i.e., monsoon-driven) origin (as opposed to "local convection"). The effects also extend westwards to the Mediterranean, where, however, the impact of the monsoon is to induce drought via the Rodwell–Hoskins mechanism.
Northeast monsoon
Around September, with the sun retreating south, the northern landmass of the Indian subcontinent begins to cool off rapidly, and air pressure begins to build over northern India. The Indian Ocean and its surrounding atmosphere still hold their heat, causing cold wind to sweep down from the Himalayas and Indo-Gangetic Plain towards the vast spans of the Indian Ocean south of the Deccan peninsula. This is known as the Northeast Monsoon or Retreating Monsoon.
While travelling towards the Indian Ocean, the cold dry wind picks up some moisture from the Bay of Bengal and pours it over peninsular India and parts of Sri Lanka. Cities like Chennai, which get less rain from the Southwest Monsoon, receive rain from this monsoon. About 50% to 60% of the rain received by the state of Tamil Nadu is from the Northeast Monsoon. In Southern Asia, the northeastern monsoons take place from October to December, when the surface high-pressure system is strongest. The jet stream in this region splits into the southern subtropical jet and the polar jet. The subtropical flow directs northeasterly winds to blow across southern Asia, creating dry air streams which produce clear skies over India. Meanwhile, a low pressure system known as a monsoon trough develops over South-East Asia and Australasia, and winds are directed toward Australia. In the Philippines, the northeast monsoon is called the Amihan.
East Asian monsoon
The East Asian monsoon affects large parts of Indochina, the Philippines, China, Taiwan, Korea, Japan, and Siberia. It is characterised by a warm, rainy summer monsoon and a cold, dry winter monsoon. The rain occurs in a concentrated belt that stretches east–west except in East China where it is tilted east-northeast over Korea and Japan. The seasonal rain is known as Meiyu in China, Jangma in Korea, and Bai-u in Japan, with the latter two resembling frontal rain.
The onset of the summer monsoon is marked by a period of premonsoonal rain over South China and Taiwan in early May. From May through August, the summer monsoon shifts through a series of dry and rainy phases as the rain belt moves northward, beginning over Indochina and the South China Sea (May), to the Yangtze River Basin and Japan (June) and finally to northern China and Korea (July). When the monsoon ends in August, the rain belt moves back to southern China.
Australia
The rainy season occurs from September to February, and it is a major source of energy for the Hadley circulation during boreal winter. It is associated with the development of the Siberian High and the movement of the heating maxima from the Northern Hemisphere to the Southern Hemisphere. North-easterly winds flow down Southeast Asia and are turned north-westerly/westerly by Borneo's topography towards Australia. This forms a cyclonic circulation vortex over Borneo, which, together with descending cold surges of winter air from higher latitudes, causes significant weather phenomena in the region. Examples are the formation of a rare low-latitude tropical storm in 2001, Tropical Storm Vamei, and the devastating flood of Jakarta in 2007.
The onset of the monsoon over Australia tends to follow the heating maxima down Vietnam and the Malay Peninsula (September), to Sumatra, Borneo and the Philippines (October), to Java, Sulawesi (November), Irian Jaya and northern Australia (December, January). However, the monsoon is not a simple response to heating but a more complex interaction of topography, wind and sea, as demonstrated by its abrupt rather than gradual withdrawal from the region. The Australian monsoon (the "Wet") occurs in the southern summer when the monsoon trough develops over Northern Australia. Over three-quarters of annual rainfall in Northern Australia falls during this time.
Europe
The European Monsoon (more commonly known as the return of the westerlies) is the result of a resurgence of westerly winds from the Atlantic, where they become laden with moisture and rain. These westerly winds are a common phenomenon during the European winter, but they ease as spring approaches in late March and through April and May. The winds pick up again in June, which is why this phenomenon is also referred to as "the return of the westerlies".
The rain usually arrives in two waves, at the beginning of June, and again in mid- to late June. The European monsoon is not a monsoon in the traditional sense in that it does not meet all the requirements to be classified as such. Instead, the return of the westerlies is more regarded as a conveyor belt that delivers a series of low-pressure centres to Western Europe, where they create unsettled weather. These storms generally feature significantly lower-than-average temperatures, fierce rain or hail, thunder, and strong winds.
The return of the westerlies affects Europe's Northern Atlantic coastline, more precisely Ireland, Great Britain, the Benelux countries, western Germany, northern France and parts of Scandinavia.
| Physical sciences | Winds | null |
57688 | https://en.wikipedia.org/wiki/Anxiety%20disorder | Anxiety disorder | Anxiety disorders are a group of mental disorders characterized by significant and uncontrollable feelings of anxiety and fear such that a person's social, occupational, and personal functions are significantly impaired. Anxiety may cause physical and cognitive symptoms, such as restlessness, irritability, easy fatigue, difficulty concentrating, increased heart rate, chest pain, abdominal pain, and a variety of other symptoms that may vary based on the individual.
In casual discourse, the words anxiety and fear are often used interchangeably. In clinical usage, they have distinct meanings; anxiety is clinically defined as an unpleasant emotional state for which the cause is either not readily identified or perceived to be uncontrollable or unavoidable, whereas fear is clinically defined as an emotional and physiological response to a recognized external threat. The umbrella term 'anxiety disorder' refers to a number of specific disorders that include fears (phobias) and/or anxiety symptoms.
There are several types of anxiety disorders, including generalized anxiety disorder, hypochondriasis, specific phobia, social anxiety disorder, separation anxiety disorder, agoraphobia, panic disorder, and selective mutism. Individual disorders can be diagnosed using the specific and unique symptoms, triggering events, and timing. A medical professional must evaluate a person before diagnosing them with an anxiety disorder to ensure that their anxiety cannot be attributed to another medical illness or mental disorder. It is possible for an individual to have more than one anxiety disorder during their life or to have more than one anxiety disorder at the same time. Comorbid mental disorders or substance use disorders are common in those with anxiety. Comorbid depression (lifetime prevalence) is seen in 20–70% of those with social anxiety disorder, 50% of those with panic disorder and 43% of those with general anxiety disorder. The 12-month prevalence of alcohol or substance use disorders in those with anxiety disorders is 16.5%.
Worldwide, anxiety disorders are the second most common type of mental disorders after depressive disorders. Anxiety disorders affect nearly 30% of adults at some point in their lives, with an estimated 4% of the global population currently experiencing an anxiety disorder. However, anxiety disorders are treatable, and a number of effective treatments are available. Most people are able to lead normal, productive lives with some form of treatment.
Types
Generalized anxiety disorder
Generalized anxiety disorder (GAD) is a common disorder characterized by long-lasting anxiety that is not focused on any one object or situation. Those with generalized anxiety disorder experience non-specific persistent fear and worry and become overly concerned with everyday matters. Generalized anxiety disorder is "characterized by chronic excessive worry accompanied by three or more of the following symptoms: restlessness, fatigue, concentration problems, irritability, muscle tension, and sleep disturbance". Generalized anxiety disorder is the most common anxiety disorder to affect older adults. Anxiety can be a symptom of a medical problem or a substance use disorder, and medical professionals must be aware of this. A diagnosis of GAD is made when a person has been excessively worried about an everyday problem for six months or more. These stresses can include family life, work, social life, or their own health. A person may find that they have problems making daily decisions and remembering commitments as a result of a lack of concentration and/or preoccupation with worry. A symptom can be a strained appearance, with increased sweating from the hands, feet, and axillae, along with tearfulness, which can suggest depression. Before a diagnosis of anxiety disorder is made, physicians must rule out drug-induced anxiety and other medical causes.
In children, GAD may be associated with headaches, restlessness, abdominal pain, and heart palpitations. Typically, it begins around eight to nine years of age.
Specific phobias
The largest category of anxiety disorders is that of specific phobias, which includes all cases in which fear and anxiety are triggered by a specific stimulus or situation. Between 5% and 12% of the population worldwide has specific phobias. According to the National Institute of Mental Health, a phobia is an intense fear of or aversion to specific objects or situations. Individuals with a phobia typically anticipate terrifying consequences from encountering the object of their fear, which can be anything from an animal to a location to a bodily fluid to a particular situation. Common phobias are flying, blood, water, highway driving, and tunnels. When people are exposed to their phobia, they may experience trembling, shortness of breath, or rapid heartbeat. People with specific phobias often go to extreme lengths to avoid encountering their phobia. People with specific phobias understand that their fear is not proportional to the actual potential danger, but they can still become overwhelmed by it.
Panic disorder
With panic disorder, a person has brief attacks of intense terror and apprehension, often marked by trembling, shaking, confusion, dizziness, or difficulty breathing. These panic attacks are defined by the APA as fear or discomfort that abruptly arises and peaks in less than ten minutes but can last for several hours. Attacks can be triggered by stress, irrational thoughts, general fear, fear of the unknown, or even when engaging in exercise. However, sometimes the trigger is unclear, and attacks can arise without warning. To help prevent an attack, one can avoid the trigger. This can mean avoiding places, people, types of behaviors, or certain situations that have been known to cause a panic attack. This being said, not all attacks can be prevented.
In addition to recurrent and unexpected panic attacks, a diagnosis of panic disorder requires that said attacks have chronic consequences: either worry over the attacks' potential implications, persistent fear of future attacks, or significant changes in behavior related to the attacks. As such, those with panic disorder experience symptoms even outside of specific panic episodes. Often, normal changes in heartbeat are noticed, leading them to think something is wrong with their heart or they are about to have another panic attack. In some cases, a heightened awareness (hypervigilance) of body functioning occurs during panic attacks, wherein any perceived physiological change is interpreted as a possible life-threatening illness (i.e., extreme hypochondriasis).
Panic disorder commonly co-occurs with other anxiety conditions, reflecting the repeated activation of the brain's fight-or-flight response. Childhood experiences also appear to be an important contributing factor: children who experience abuse or have low self-esteem show an increased tendency to later develop disorders such as generalized anxiety disorder and panic disorder.
Agoraphobia
Agoraphobia is a specific anxiety disorder wherein an individual is afraid of being in a place or situation where escape is difficult or embarrassing or where help may be unavailable. Agoraphobia is strongly linked with panic disorder and is often precipitated by the fear of having a panic attack. A common manifestation involves needing to be in constant view of a door or other escape route. In addition to the fears themselves, the term agoraphobia is often used to refer to avoidance behaviors that individuals often develop. For example, following a panic attack while driving, someone with agoraphobia may develop anxiety over driving and will therefore avoid driving. These avoidance behaviors can have serious consequences and often reinforce the very fear that caused them. In a severe case of agoraphobia, the person may never leave their home.
Social anxiety disorder
Social anxiety disorder (SAD), also known as social phobia, describes an intense fear and avoidance of negative public scrutiny, public embarrassment, humiliation, or social interaction. This fear can be specific to particular social situations (such as public speaking) or it can be experienced in most or all social situations. Roughly 7% of American adults have social anxiety disorder, and more than 75% of people experience their first symptoms in their childhood or early teenage years. Social anxiety often manifests specific physical symptoms, including blushing, sweating, rapid heart rate, and difficulty speaking. As with all phobic disorders, those with social anxiety often attempt to avoid the source of their anxiety; in the case of social anxiety, this is particularly problematic, and in severe cases, it can lead to complete social isolation.
Children are also affected by social anxiety disorder, although their associated symptoms are different from those of teenagers and adults. They may experience difficulty processing or retrieving information, sleep deprivation, disruptive behaviors in class, and irregular class participation.
Social physique anxiety (SPA) is a sub-type of social anxiety involving concern over the evaluation of one's body by others. SPA is common among adolescents, especially females.
Post-traumatic stress disorder
Post-traumatic stress disorder (PTSD) was once classified as an anxiety disorder (it has since been moved to trauma- and stressor-related disorders in the DSM-5) and results from a traumatic experience. PTSD affects approximately 3.5% of U.S. adults every year, and an estimated one in eleven people will be diagnosed with PTSD in their lifetime. Post-traumatic stress can result from an extreme situation, such as combat, natural disaster, rape, hostage situations, child abuse, bullying, or even a serious accident. It can also result from long-term (chronic) exposure to a severe stressor, for example, soldiers who endure individual battles but cannot cope with continuous combat. Common symptoms include hypervigilance, flashbacks, avoidant behaviors, anxiety, anger, and depression. In addition, individuals may experience sleep disturbances. People who have PTSD often try to detach themselves from their friends and family and have difficulty maintaining these close relationships. There are a number of treatments that form the basis of the care plan for those with PTSD; such treatments include cognitive behavioral therapy (CBT), prolonged exposure therapy, stress inoculation therapy, medication, psychotherapy, and support from family and friends.
Post-traumatic stress disorder research began with US military veterans of the Vietnam War, as well as natural and non-natural disaster victims. Studies have found the degree of exposure to a disaster to be the best predictor of PTSD.
Separation anxiety disorder
Separation anxiety disorder (SepAD) is the feeling of excessive and inappropriate levels of anxiety over being separated from a person or place. Separation anxiety is a normal part of development in babies or children, and it is only when this feeling is excessive or inappropriate that it can be considered a disorder. Separation anxiety disorder affects roughly 7% of adults and 4% of children, but childhood cases tend to be more severe; in some instances, even a brief separation can produce panic. Treating a child earlier may prevent problems. This may include training the parents and family on how to deal with it. Often, the parents will reinforce the anxiety because they do not know how to properly work through it with the child. In addition to parent training and family therapy, medication, such as SSRIs, can be used to treat separation anxiety.
Obsessive–compulsive disorder
Obsessive–compulsive disorder (OCD) is not classified as an anxiety disorder in the DSM-5 or the ICD-11, although it was classified as one in the earlier DSM-IV and ICD-10. OCD manifests in the form of obsessions (distressing, persistent, and intrusive thoughts or images) and compulsions (urges to repeatedly perform specific acts or rituals) that are not caused by drugs or physical disorders and which cause anxiety or distress along with varying degrees of functional impairment. OCD affects roughly 1–2% of adults (somewhat more women than men) and under 3% of children and adolescents.
A person with OCD knows that the symptoms are unreasonable and struggles against both the thoughts and the behavior. Their symptoms could be related to external events they fear, such as their home burning down because they forgot to turn off the stove, or they could worry that they will behave inappropriately. The compulsive rituals are personal rules they follow to relieve discomfort, such as needing to verify that the stove is turned off a specific number of times before leaving the house.
It is not certain why some people have OCD, but behavioral, cognitive, genetic, and neurobiological factors may be involved. Risk factors include family history, being single, being of a higher socioeconomic class, or not being in paid employment. Of those with OCD, about 20% of people will overcome it, and symptoms will at least reduce over time for most people (a further 50%).
Selective mutism
Selective mutism (SM) is a disorder in which a person who is normally capable of speech does not speak in specific situations or to specific people. Selective mutism usually co-exists with shyness or social anxiety. People with selective mutism stay silent even when the consequences of their silence include shame, social ostracism, or even punishment. Selective mutism affects about 0.8% of people at some point in their lives.
Testing for selective mutism is important because doctors must determine if it is an issue associated with the child's hearing or movements associated with the jaw or tongue and if the child can understand when others are speaking to them. Generally, cognitive behavioral therapy (CBT) is the recommended approach for treating selective mutism, but prospective long-term outcome studies are lacking.
Diagnosis
The diagnosis of anxiety disorders is made by symptoms, triggers, and a person's personal and family histories. There are no objective biomarkers or laboratory tests that can diagnose anxiety. It is important for a medical professional to evaluate a person for other medical and mental causes of prolonged anxiety because treatments will vary considerably.
Numerous questionnaires have been developed for clinical use and can provide an objective scoring system. Symptoms may vary between the different anxiety disorder subtypes. Generally, symptoms must be present for at least six months, occur more days than not, and significantly impair a person's ability to function in daily life. Symptoms may include: feeling nervous, anxious, or on edge; worrying excessively; difficulty concentrating; restlessness; and irritability.
Questionnaires developed for clinical use include the State-Trait Anxiety Inventory (STAI), the Generalized Anxiety Disorder 7 (GAD-7), the Beck Anxiety Inventory (BAI), the Zung Self-Rating Anxiety Scale, and the Taylor Manifest Anxiety Scale. Other questionnaires combine anxiety and depression measurements, such as the Hamilton Anxiety Rating Scale, the Hospital Anxiety and Depression Scale (HADS), the Patient Health Questionnaire (PHQ), and the Patient-Reported Outcomes Measurement Information System (PROMIS). Examples of specific anxiety questionnaires include the Liebowitz Social Anxiety Scale (LSAS), the Social Interaction Anxiety Scale (SIAS), the Social Phobia Inventory (SPIN), the Social Phobia Scale (SPS), and the Social Anxiety Questionnaire (SAQ-A30).
The GAD-7 has a sensitivity of 57–94% and a specificity of 82–88% in the diagnosis of generalized anxiety disorder. All screening questionnaires, if positive, should be followed by a clinical interview, including assessment of impairment and distress, avoidance behaviors, and symptom history and persistence, to definitively diagnose an anxiety disorder. Some organizations support routinely screening all adults for anxiety disorders, with the US Preventive Services Task Force recommending screening for all adults younger than 65.
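To illustrate why a positive screen alone is not diagnostic, the following sketch applies Bayes' rule to mid-range GAD-7 figures; the 5% prevalence value is an illustrative assumption, not a figure from this article.

```python
# Illustrative positive predictive value (PPV) for an anxiety screen.
# Sensitivity and specificity are taken from the mid-range of the GAD-7
# figures quoted above; the 5% population prevalence is an assumption
# chosen only to make the arithmetic concrete.
sensitivity = 0.80   # P(positive test | disorder present)
specificity = 0.85   # P(negative test | disorder absent)
prevalence = 0.05    # assumed prevalence of generalized anxiety disorder

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.0%}")  # ~22% under these assumptions
```

Under these assumed values, most positive screens would be false positives, which illustrates why a follow-up clinical interview is required before a diagnosis is made.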
Differential diagnosis
Anxiety disorders differ from developmentally normal fear or anxiety by being excessive or persisting beyond developmentally appropriate periods. They differ from transient fear or anxiety, often stress-induced, by being persistent (e.g., typically lasting 6 months or more), although the criterion for duration is intended as a general guide with allowance for some degree of flexibility and is sometimes of shorter duration in children.
The diagnosis of an anxiety disorder requires first ruling out an underlying medical cause. Diseases that may present similarly to an anxiety disorder include certain endocrine diseases (hypo- and hyperthyroidism, hyperprolactinemia), metabolic disorders (diabetes), deficiency states (low levels of vitamin D, B2, B12, or folic acid), gastrointestinal diseases (celiac disease, non-celiac gluten sensitivity, inflammatory bowel disease), heart diseases, blood diseases (anemia), and neurodegenerative diseases (Parkinson's disease, dementia, multiple sclerosis, Huntington's disease).
Several drugs can also cause or worsen anxiety, whether through intoxication, withdrawal, or chronic use. These include alcohol, tobacco, cannabis, sedatives (including prescription benzodiazepines), opioids (including prescription painkillers and illicit drugs like heroin), stimulants (such as caffeine, cocaine, and amphetamines), hallucinogens, and inhalants.
Prevention
Focus is increasing on the prevention of anxiety disorders. There is tentative evidence to support the use of cognitive behavioral therapy and mindfulness therapy. A 2013 review found no effective measures to prevent GAD in adults. A 2017 review found that psychological and educational interventions had a small benefit for the prevention of anxiety. Research indicates that predictors of the emergence of anxiety disorders partly differ from the factors that predict their persistence.
An important window for preventing anxiety disorders is childhood, and parents play a substantial role in whether their child develops anxiety later in life. Interventions have been tested that educate parents of young children on how to respond to early anxiety and keep it from becoming a larger problem in the child's adolescent and adult life. Because such interventions are relatively new, there is little information on their long-term results, but early findings appear positive.
Perception and discrimination
Stigma
People with an anxiety disorder may be challenged by prejudices and stereotypes held by other people, most likely as a result of misconceptions around anxiety and anxiety disorders. Misconceptions found in a data analysis from the National Survey of Mental Health Literacy and Stigma include: (1) many people believe anxiety is not a real medical illness; and (2) many people believe that people with anxiety could turn it off if they wanted to. For people experiencing the physical and mental symptoms of an anxiety disorder, stigma and negative social perception can make an individual less likely to seek treatment.
When people with mental illness internalize prejudice and direct it against themselves, this is called self-stigma.
There is no explicit evidence for the exact cause of stigma towards anxiety. Stigma can be divided by social scale into the macro, intermediate, and micro levels. The macro level encompasses society as a whole, including the influence of mass media. The intermediate level includes healthcare professionals and their perspectives. The micro level concerns the individual's contribution to the process through self-stigmatization.
Mental disorders have become prevalent among college students in early adulthood, and anxiety is one of the most common and fastest growing, driven by factors such as social pressures, academic demands, and career worries. This affects not only many of today's youth but also their overall quality of life. Because mental health still carries a strong negative stigma, it is important to bring the issue to light rather than ignore it, and to identify ways it can be lessened for future generations.
Stigma can be described in three conceptual ways: cognitive, emotional, and behavioral. This allows for differentiation between stereotypes, prejudice, and discrimination.
Treatment
Treatment options include psychotherapy, medications and lifestyle changes. There is no clear evidence as to whether psychotherapy or medication is more effective; the specific medication decision can be made by a doctor and patient with consideration for the patient's specific circumstances and symptoms. If, while on treatment with a chosen medication, the person's anxiety does not improve, another medication may be offered. Specific treatments will vary by sub-type of anxiety disorder, a person's other medical conditions, and medications.
Psychological techniques
Cognitive behavioral therapy (CBT) is effective for anxiety disorders and is a first-line treatment. CBT is the most widely studied and preferred form of psychotherapy for anxiety disorders, and it appears to be equally effective when carried out via the internet as in sessions completed face-to-face. There are specific CBT curricula or strategies for each type of anxiety disorder. CBT has similar effectiveness to pharmacotherapy; in a meta-analysis, it was associated with medium to large effect sizes for GAD, panic disorder, and social anxiety disorder. CBT has low dropout rates, and its positive effects have been shown to be maintained for at least 12 months. CBT is sometimes given as once-weekly sessions for 8–20 weeks, but regimens vary widely. Booster sessions may be needed for patients who have a relapse of symptoms.
Exposure and response prevention (ERP) has been found effective for treating OCD.
Mindfulness-based programs also appear to be effective for managing anxiety disorders. It is unclear if meditation has an effect on anxiety, and transcendental meditation appears to be no different from other types of meditation.
A 2015 Cochrane review of Morita therapy for anxiety disorder in adults found insufficient evidence to draw a conclusion.
Medications
First-line choices for medications include SSRIs or SNRIs to treat generalized anxiety disorder, social anxiety disorder or panic disorder. For adults, there is no good evidence supporting which specific medication in the SSRI or SNRI class is best for treating anxiety, so cost often drives drug choice. Fluvoxamine is effective in treating a range of anxiety disorders in children and adolescents.
Fluoxetine, sertraline, and paroxetine can also help with some forms of anxiety in children and adolescents. If the chosen medicine is effective, it is recommended that it be continued for at least a year to reduce the risk of relapse.
Benzodiazepines are a second-line option for the pharmacologic treatment of anxiety. They are associated with moderate to high effect sizes with regard to symptom relief, with an onset usually within one week. Clonazepam has a longer half-life and may allow once-daily dosing. Benzodiazepines may also be used alongside SNRIs or SSRIs to reduce anxiety symptoms initially, and they may potentially be continued long term. Benzodiazepines are not a first-line pharmacologic treatment of anxiety disorders, and they carry risks of physical dependence, psychological dependence, overdose death (especially when combined with opioids), misuse, cognitive impairment, falls, and motor vehicle crashes.
Buspirone and pregabalin are second-line treatments for people who do not respond to SSRIs or SNRIs. Pregabalin and gabapentin are effective in treating some anxiety disorders, but there is concern regarding their off-label use due to the lack of strong scientific evidence for their efficacy in multiple conditions and their proven side effects.
Medications need to be used with care among older adults, who are more likely to have side effects because of coexisting physical disorders. Adherence problems are more likely among older people, who may have difficulty understanding, seeing, or remembering instructions.
In general, medications are not seen as helpful for specific phobias, but benzodiazepines are sometimes used to help resolve acute episodes. In 2007, data were sparse for the efficacy of any drug.
Lifestyle and diet
Lifestyle changes include exercise, for which there is moderate evidence for some improvement, regularizing sleep patterns, reducing caffeine intake, and stopping smoking. Stopping smoking has benefits for anxiety as great as or greater than those of medications. A meta-analysis found that 2000 mg/day or more of omega-3 polyunsaturated fatty acids, such as fish oil, tended to reduce anxiety in placebo-controlled and uncontrolled studies, particularly in people with more significant symptoms.
Cannabis
There is little evidence for the use of cannabis in treating anxiety disorders.
Treatments for children
Both therapy and a number of medications have been found to be useful for treating childhood anxiety disorders. Therapy is generally preferred to medication.
Cognitive behavioral therapy (CBT) is a good first-line therapy approach. Studies have also gathered substantial evidence for treatments that are not CBT-based, expanding the options for those who do not respond to CBT. Although studies have demonstrated the effectiveness of CBT for anxiety disorders in children and adolescents, evidence that it is more effective than treatment as usual, medication, or wait-list controls is inconclusive. Like adults, children may undergo psychotherapy, cognitive-behavioral therapy, or counseling. Family therapy is a form of treatment in which the child meets with a therapist together with the primary guardians and siblings. Each family member may attend individual therapy, but family therapy is typically a form of group therapy. Art and play therapy are also used. Art therapy is most commonly used when the child will not or cannot verbally communicate, due to trauma or a disability that leaves them nonverbal. Participating in art activities allows the child to express what they otherwise may not be able to communicate to others. In play therapy, the child is allowed to play however they please as a therapist observes them. The therapist may intercede from time to time with a question, comment, or suggestion. This is often most effective when the family of the child plays a role in the treatment.
Epidemiology
Globally, as of 2010, approximately 273 million (4.5% of the population) had an anxiety disorder. It is more common in females (5.2%) than males (2.8%).
In Europe, Africa, and Asia, lifetime rates of anxiety disorders are between 9 and 16%, and yearly rates are between 4 and 7%. In the United States, the lifetime prevalence of anxiety disorders is about 29%, and between 11 and 18% of adults have the condition in a given year. This difference is affected by the range of ways in which different cultures interpret anxiety symptoms and what they consider to be normative behavior. In general, anxiety disorders represent the most prevalent psychiatric condition in the United States, outside of substance use disorder.
Like adults, children can experience anxiety disorders; between 10 and 20 percent of all children will develop a full-fledged anxiety disorder prior to the age of 18, making anxiety the most common mental health issue in young people. Anxiety disorders in children are often more challenging to identify than their adult counterparts, owing to the difficulty many parents face in discerning them from normal childhood fears. Likewise, anxiety in children is sometimes misdiagnosed as attention deficit hyperactivity disorder, or, due to the tendency of children to interpret their emotions physically (as stomachaches, headaches, etc.), anxiety disorders may initially be confused with physical ailments.
Anxiety in children has a variety of causes; sometimes anxiety is rooted in biology and may be a product of another existing condition, such as autism spectrum disorder. Gifted children are also often more prone to excessive anxiety than non-gifted children. Other cases of anxiety arise from the child having experienced a traumatic event of some kind, and in some cases, the cause of the child's anxiety cannot be pinpointed.
Anxiety in children tends to manifest along age-appropriate themes, such as fear of going to school (not related to bullying) or not performing well enough at school, fear of social rejection, fear of something happening to loved ones, etc. What separates disordered anxiety from normal childhood anxiety is the duration and intensity of the fears involved.
According to a 2011 study, people high in hypercompetitive traits are at increased risk of both anxiety and depression.
| Biology and health sciences | Mental disorder | null |
57713 | https://en.wikipedia.org/wiki/Dermatitis | Dermatitis | Dermatitis is a term used for different types of skin inflammation, typically characterized by itchiness, redness and a rash. In cases of short duration, there may be small blisters, while in long-term cases the skin may become thickened. The area of skin involved can vary from small to covering the entire body. Dermatitis is also called eczema but the same term is often used for the most common type of skin inflammation, atopic dermatitis.
The exact cause of the condition is often unclear. Cases may involve a combination of allergy and poor venous return. The type of dermatitis is generally determined by the person's history and the location of the rash. For example, irritant dermatitis often occurs on the hands of those who frequently get them wet. Allergic contact dermatitis occurs upon exposure to an allergen, causing a hypersensitivity reaction in the skin.
Atopic dermatitis may be prevented with essential fatty acids and is typically treated with moisturizers and steroid creams. The steroid creams should generally be of mid- to high strength and used for less than two weeks at a time, as side effects can occur. Antibiotics may be required if there are signs of skin infection. Contact dermatitis is typically treated by avoiding the allergen or irritant. Antihistamines may help with sleep and decrease nighttime scratching.
Dermatitis was estimated to affect 245 million people globally in 2015, or 3.34% of the world population. Atopic dermatitis is the most common type and generally starts in childhood. In the United States, it affects about 10–30% of people. Contact dermatitis is twice as common in females as in males. Allergic contact dermatitis affects about 7% of people at some point in their lives. Irritant contact dermatitis is common, especially among people with certain occupations; exact rates are unclear.
Terminology
The terms dermatitis and eczema are sometimes used synonymously. However, the term eczema is often used to specifically mean atopic dermatitis (also known as atopic eczema). Terminology may also differ between countries. In some languages, dermatitis and eczema mean the same thing, while in other languages dermatitis implies an acute condition and eczema a chronic one.
Signs and symptoms
There are several types of dermatitis including atopic dermatitis, contact dermatitis, stasis dermatitis and seborrhoeic dermatitis. Dermatitis symptoms vary with all different forms of the condition. Although every type of dermatitis has different symptoms, there are certain signs that are common for all of them, including redness of the skin, swelling, itching and skin lesions with sometimes oozing and scarring. Also, the area of the skin on which the symptoms appear tends to be different with every type of dermatitis, whether on the neck, wrist, forearm, thigh or ankle.
Although the location may vary, the primary symptom of this condition is itchy skin. More rarely, it may appear on the genital area, such as the vulva or scrotum. Symptoms of this type of dermatitis may be very intense and may come and go. Irritant contact dermatitis is usually more painful than itchy.
Although the symptoms of atopic dermatitis vary from person to person, the most common symptoms are dry, itchy skin and, on lighter skin, redness. On darker skin this redness may not appear, and the dermatitis can instead look dark brown or purple in color. Typical affected skin areas include the folds of the arms, the back of the knees, wrists, face, and hands. Perioral dermatitis refers to a red bumpy rash around the mouth.
Dermatitis herpetiformis symptoms include itching, stinging and a burning sensation. Papules and vesicles are commonly present. The small red bumps experienced in this type of dermatitis are usually about 1 cm in size, red in color and may be found symmetrically grouped or distributed on the upper or lower back, buttocks, elbows, knees, neck, shoulders and scalp.
The symptoms of seborrhoeic dermatitis, on the other hand, tend to appear gradually, from dry or greasy scaling of the scalp (dandruff) to scaling of facial areas, sometimes with itching, but without hair loss. In newborns, the condition causes a thick and yellowish scalp rash, often accompanied by a diaper rash. In severe cases, symptoms may appear along the hairline, behind the ears, on the eyebrows, on the bridge of the nose, around the nose, on the chest, and on the upper back.
Complications
People with eczema should not receive the smallpox vaccination due to risk of developing eczema vaccinatum, a potentially severe and sometimes fatal complication. Other major health risks for people with dermatitis are viral and bacterial infections: patients with atopic dermatitis have deficiencies in the barrier-forming proteins and lipids of the skin, along with defects in dendritic cells, and as a result are less able to keep foreign invaders out, leading to recurring infections. If left untreated, these infections may be life-threatening, so skin barrier improvement (such as daily moisturizing to minimize transepidermal water loss) and anti-inflammatory therapy are recommended as preventative measures.
Cause
The cause of dermatitis is unknown but is presumed to be a combination of genetic and environmental factors. Eczema is not contagious.
Environmental
The hygiene hypothesis postulates that the cause of asthma, eczema, and other allergic diseases is an unusually clean environment in childhood which leads to an insufficient human microbiota. It is supported by epidemiologic studies for asthma. The hypothesis states that exposure to bacteria and other immune system modulators is important during development, and missing out on this exposure increases the risk for asthma and allergy. One systematic review of literature on eczema found that urban areas have an increased prevalence of eczema compared to rural areas. While it has been suggested that eczema may sometimes be an allergic reaction to the excrement from house dust mites, with up to 5% of people showing antibodies to the mites, the overall role this plays awaits further corroboration.
Malnutrition
Essential fatty acid deficiency results in a dermatitis similar to that seen in zinc or biotin deficiency.
Genetic
A number of genes have been associated with eczema, one of which affects production of filaggrin. Genome-wide studies found three new genetic variants associated with eczema: OVOL1, ACTL9 and IL4-KIF3A.
Eczema occurs about three times more frequently in individuals with celiac disease and about two times more frequently in relatives of those with celiac disease, potentially indicating a genetic link between the conditions.
Diagnosis
Diagnosis of eczema is based mostly on the history and physical examination. In uncertain cases, skin biopsy may be taken for a histopathologic diagnosis of dermatitis. Those with eczema may be especially prone to misdiagnosis of food allergies.
Patch tests are used in the diagnosis of allergic contact dermatitis.
Classification
The term eczema refers to a set of clinical characteristics. Classification of the underlying diseases has been haphazard with numerous different classification systems, and many synonyms being used to describe the same condition.
A type of dermatitis may be described by location (e.g., hand eczema), by specific appearance (eczema craquele or discoid) or by possible cause (varicose eczema). Further adding to the confusion, many sources use the term eczema interchangeably for the most common type: atopic dermatitis.
The European Academy of Allergology and Clinical Immunology (EAACI) published a position paper in 2001, which simplifies the nomenclature of allergy-related diseases, including atopic and allergic contact eczemas. Non-allergic eczemas are not affected by this proposal.
Histopathologic classification
By histopathology, superficial dermatitis (in the epidermis, papillary dermis, and superficial vascular plexus) can basically be classified into either of the following groups:
Vesiculobullous lesions
Pustular dermatosis
Non-vesiculobullous, non-pustular
With epidermal changes
Without epidermal changes. These characteristically have a superficial perivascular inflammatory infiltrate and can be classified by type of cell infiltrate:
Lymphocytic (most common)
Lymphoeosinophilic
Lymphoplasmacytic
Mast cell
Lymphohistiocytic
Neutrophilic
Common types
Atopic
Atopic dermatitis is an allergic disease believed to have a hereditary component and often runs in families whose members have asthma. Itchy rash is particularly noticeable on the head and scalp, neck, inside of elbows, behind knees, and buttocks. It is very common in developed countries and rising. Irritant contact dermatitis is sometimes misdiagnosed as atopic dermatitis. Stress can cause atopic dermatitis to worsen.
Contact
Contact dermatitis is of two types: allergic (resulting from a delayed reaction to an allergen, such as poison ivy, nickel, or Balsam of Peru), and irritant (resulting from direct reaction to a detergent, such as sodium lauryl sulfate, for example).
Some substances act both as allergen and irritants (wet cement, for example). Other substances cause a problem after sunlight exposure, bringing on phototoxic dermatitis. About three-quarters of cases of contact eczema are of the irritant type, which is the most common occupational skin disease. Contact eczema is curable, provided the offending substance can be avoided and its traces removed from one's environment. (ICD-10 L23; L24; L56.1; L56.0)
Seborrhoeic
Seborrhoeic dermatitis or seborrheic dermatitis is a condition sometimes classified as a form of eczema that is closely related to dandruff. It causes dry or greasy peeling of the scalp, eyebrows, and face, and sometimes trunk. In newborns, it causes a thick, yellow, crusty scalp rash called cradle cap, which seems related to lack of biotin and is often curable. (ICD-10 L21; L21.0)
There is a connection between seborrheic dermatitis and Malassezia fungus, and antifungals such as anti-dandruff shampoo can be helpful in treating it.
Less common types
Dyshidrosis
Dyshidrosis (dyshidrotic eczema, pompholyx, vesicular palmoplantar dermatitis) only occurs on palms, soles, and sides of fingers and toes. Tiny opaque bumps called vesicles, thickening, and cracks are accompanied by itching, which gets worse at night. A common type of hand eczema, it worsens in warm weather. (ICD-10 L30.1)
Discoid
Discoid eczema (nummular eczema, exudative eczema, microbial eczema) is characterized by round spots of oozing or dry rash, with clear boundaries, often on lower legs. It is usually worse in winter. The cause is unknown, and the condition tends to come and go. (ICD-10 L30.0)
Venous
Venous eczema (gravitational eczema, stasis dermatitis, varicose eczema) occurs in people with impaired circulation, varicose veins, and edema, and is particularly common in the ankle area of people over 50. There is redness, scaling, darkening of the skin, and itching. The disorder predisposes to leg ulcers. (ICD-10 I83.1)
Herpetiformis
Dermatitis herpetiformis (Duhring's disease) causes an intensely itchy and typically symmetrical rash on arms, thighs, knees, and back. It is directly related to celiac disease, can often be put into remission with an appropriate diet, and tends to get worse at night. (ICD-10 L13.0)
Hyperkeratotic
Hyperkeratotic hand dermatitis presents with hyperkeratotic, fissure-prone, erythematous areas of the middle or proximal palm, and the volar surfaces of the fingers may also be involved.
Neurodermatitis
Neurodermatitis (lichen simplex chronicus, localized scratch dermatitis) is an itchy, thickened, pigmented patch of eczema that results from habitual rubbing and scratching. Usually, there is only one spot, and it is often curable through behaviour modification and anti-inflammatory medication. Prurigo nodularis is a related disorder showing multiple lumps. (ICD-10 L28.0; L28.1)
Autoeczematization
Autoeczematization (id reaction, auto sensitization) is an eczematous reaction to an infection with parasites, fungi, bacteria, or viruses. It is completely curable with the clearance of the original infection that caused it. The appearance varies depending on the cause. It always occurs some distance away from the original infection. (ICD-10 L30.2)
Viral
There are eczemas overlaid by viral infections (eczema herpeticum or vaccinatum), and eczemas resulting from underlying disease (e.g., lymphoma).
Eczemas originating from ingestion of medications, foods, and chemicals have not yet been clearly systematized. Other rare eczematous disorders exist in addition to those listed here.
Prevention
There have been various studies on the prevention of dermatitis through diet, none of which have proven any positive effect.
Exclusive breastfeeding of infants during at least the first few months may decrease the risk. There is no good evidence that a mother's diet during pregnancy or breastfeeding affects the risk, nor is there evidence that delayed introduction of certain foods is useful. There is tentative evidence that probiotics in infancy may reduce rates, but it is insufficient to recommend their use. There is moderate-certainty evidence that the use of skin care interventions such as emollients within the first year of an infant's life is not effective in preventing eczema. In fact, it may increase the risk of skin infection and of unwanted effects such as allergic reaction to certain moisturizers and a stinging sensation.
Healthy diet
There has not been adequate evaluation of changing the diet to reduce eczema. There is some evidence that infants with an established egg allergy may have a reduction in symptoms if eggs are eliminated from their diets. Benefits have not been shown for other elimination diets, though the studies are small and poorly executed. Establishing that there is a food allergy before dietary change could avoid unnecessary lifestyle changes.
Fatty acids
Oils with fatty acids that have been studied to prevent dermatitis include:
Corn oil: Linoleic acid (LA)
Fish oil: Eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA)
Hemp seed oil: Linoleic acid (LA), and alpha-Linolenic acid (ALA)
In the 1950s Arild Hansen showed that infants fed skimmed milk developed essential fatty acid deficiency which was characterized by an increased food intake, poor growth, and a scaly dermatitis, and was cured by the administration of corn oil.
Management
There is no known cure for some types of dermatitis, with treatment aiming to control symptoms by reducing inflammation and relieving itching. Contact dermatitis is treated by avoiding what is causing it.
Seborrheic dermatitis is treated with antifungals such as anti-dandruff shampoo.
Lifestyle
Bathing once or more a day is recommended, usually for five to ten minutes in warm water. Soaps should be avoided, as they tend to strip the skin of natural oils and lead to excessive dryness. The American Academy of Dermatology suggests using a controlled amount of bleach diluted in a bath to help with atopic dermatitis.
People can wear clothing designed to manage the itching, scratching and peeling.
House dust mite reduction and avoidance measures have been studied in low quality trials and have not shown evidence of improving eczema.
Moisturizers
Low-quality evidence indicates that moisturizing agents (emollients) may reduce eczema severity and lead to fewer flares. In children, oil-based formulations appear to be better, and water-based formulations are not recommended. It is unclear if moisturizers that contain ceramides are more or less effective than others. Products that contain dyes, perfumes, or peanuts should not be used. Occlusive dressings at night may be useful.
Some moisturizers or barrier creams may reduce irritation in occupational irritant hand dermatitis, a skin disease that can affect people in jobs that regularly come into contact with water, detergents, chemicals or other irritants. Some emollients may reduce the number of flares in people with dermatitis.
Medications
Corticosteroids
If symptoms are well controlled with moisturizers, steroids may only be required when flares occur. Corticosteroids are effective in controlling and suppressing symptoms in most cases, and once-daily use is generally enough. For mild to moderate eczema a weak steroid may be used (e.g., hydrocortisone), while in more severe cases a higher-potency steroid (e.g., clobetasol propionate) may be used. In severe cases, oral or injectable corticosteroids may be used. While these usually bring about rapid improvements, they have greater side effects.
Long-term use of topical steroids may result in skin atrophy, striae, and telangiectasia. They are therefore typically used with caution on delicate skin such as the face or groin, though they are generally well tolerated. Red burning skin, where the skin turns red upon stopping steroid use, has been reported among adults who use topical steroids at least daily for more than a year.
Antihistamines
There is little evidence supporting the use of antihistamine medications for the relief of dermatitis. Sedative antihistamines, such as diphenhydramine, may be useful in those who are unable to sleep due to eczema. Second generation antihistamines have minimal evidence of benefit. Of the second generation antihistamines studied, fexofenadine is the only one to show evidence of improvement in itching with minimal side effects.
Immunosuppressants
Topical immunosuppressants like pimecrolimus and tacrolimus may be better in the short term and appear equal to steroids after a year of use. Their use is reasonable in those who do not respond to or are not tolerant of steroids. Treatments are typically recommended for short or fixed periods of time rather than indefinitely. Tacrolimus 0.1% has generally proved more effective than pimecrolimus and equal in effect to mid-potency topical steroids. Topical use of pimecrolimus or tacrolimus is not associated with an increased risk of cancer.
When eczema is severe and does not respond to other forms of treatment, systemic immunosuppressants are sometimes used. Immunosuppressants can cause significant side effects and some require regular blood tests. The most commonly used are ciclosporin, azathioprine, and methotrexate.
Dupilumab is a new medication that improves eczema lesions, especially moderate to severe eczema. Dupilumab, a monoclonal antibody, suppresses inflammation by targeting the interleukin-4 receptor.
Antifungals
Antifungals are used in the treatment of seborrheic dermatitis.
Others
In September 2021, ruxolitinib cream (Opzelura) was approved by the U.S. Food and Drug Administration (FDA) for the topical treatment of mild to moderate atopic dermatitis. It is a topical Janus kinase inhibitor.
Light therapy
Narrowband UVB
Atopic dermatitis (AD) may be treated with narrowband UVB, which increases 25-hydroxyvitamin D3 levels in individuals with AD.
Light therapy using heliotherapy, balneophototherapy, or psoralen plus UVA (PUVA) has tentative support, but the quality of the evidence is not very good compared with that for narrowband UVB and UVA1. UVB is more effective than UVA1 for the treatment of atopic dermatitis.
Overexposure to ultraviolet light carries its own risks, particularly that of skin cancer.
Alternative medicine
Topical
Chiropractic spinal manipulation lacks evidence to support its use for dermatitis. There is little evidence supporting the use of psychological treatments. While dilute bleach baths have been used for infected dermatitis there is little evidence for this practice.
Supplements
Sulfur: There is currently no scientific evidence for the claim that sulfur treatment relieves eczema.
Chinese herbology: It is unclear whether Chinese herbs help or harm. Dietary supplements are commonly used by people with eczema.
Neither evening primrose oil nor borage seed oil taken orally have been shown to be effective. Both are associated with gastrointestinal upset.
Probiotics are likely to make little to no difference in symptoms.
Prognosis
Most cases are well managed with topical treatments and ultraviolet light. About 2% of cases are not. In more than 60% of young children, the condition subsides by adolescence.
Epidemiology
Globally dermatitis affected approximately 230 million people as of 2010 (3.5% of the population). Dermatitis is most commonly seen in infancy, with female predominance of eczema presentations occurring during the reproductive period of 15–49 years. In the UK about 20% of children have the condition, while in the United States about 10% are affected.
Although little data on the rates of eczema over time exists prior to the 1940s, the rate of eczema has been found to have increased substantially in the latter half of the 20th century, with eczema in school-aged children found to have increased between the late 1940s and 2000. In the developed world there has been a rise in the rate of eczema over time. The incidence and lifetime prevalence of eczema in England have also been seen to increase in recent times.
Dermatitis affected about 10% of U.S. workers in 2010, representing over 15 million workers with dermatitis. Prevalence rates were higher among females than among males and among those with some college education or a college degree compared to those with a high school diploma or less. Workers employed in healthcare and social assistance industries and life, physical, and social science occupations had the highest rates of reported dermatitis. About 6% of dermatitis cases among U.S. workers were attributed to work by a healthcare professional, indicating that the prevalence rate of work-related dermatitis among workers was at least 0.6%.
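The "at least 0.6%" figure follows directly from multiplying the two rates quoted above; a minimal check (the variable names are illustrative):

```python
# Reconstructing the "at least 0.6%" figure from the two rates quoted above.
overall_prevalence = 0.10        # ~10% of U.S. workers had dermatitis in 2010
fraction_work_attributed = 0.06  # ~6% of those cases attributed to work

work_related_prevalence = overall_prevalence * fraction_work_attributed
print(f"{work_related_prevalence:.1%}")  # 0.6% of workers
```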
Etymology and history
The term atopic dermatitis was coined in 1933 by Wise and Sulzberger. Sulfur as a topical treatment for eczema was fashionable in the Victorian and Edwardian eras.
The word dermatitis is from the Greek derma 'skin' and -itis 'inflammation', and eczema is from ekzema 'eruption'.
Society and culture
Some cosmetics are marketed as hypoallergenic to imply that their use is less likely to lead to an allergic reaction than other products. However, the term hypoallergenic is not regulated, and no research has been done showing that products labeled hypoallergenic are less problematic than any others. In 1977, courts overruled the U.S. Food and Drug Administration's regulation of the use of the term hypoallergenic. In 2019, the European Union released a document about claims made concerning cosmetics, but this was issued as guidance, not a regulation.
Research
Monoclonal antibodies are under preliminary research to determine their potential as treatments for atopic dermatitis, with only dupilumab showing evidence of efficacy, as of 2018.
| Biology and health sciences | Specific diseases | Health |
57759 | https://en.wikipedia.org/wiki/Biophoton | Biophoton | Biophotons (from the Greek βίος meaning "life" and φῶς meaning "light") are photons of light in the ultraviolet and low visible light range that are produced by a biological system. They are non-thermal in origin, and the emission of biophotons is technically a type of bioluminescence, though the term "bioluminescence" is generally reserved for higher luminance systems (typically with emitted light visible to the naked eye, using biochemical means such as luciferin/luciferase). The term biophoton used in this narrow sense should not be confused with the broader field of biophotonics, which studies the general interaction of light with biological systems.
Biological tissues typically produce an observed radiant emittance in the visible and ultraviolet frequencies ranging from 10⁻¹⁷ to 10⁻²³ W/cm² (approximately 1–1000 photons/cm²/second). This low level of light has a much weaker intensity than the visible light produced by bioluminescence, but biophotons are detectable above the background of thermal radiation that is emitted by tissues at their normal temperature.
While detection of biophotons has been reported by several groups, hypotheses that such biophotons indicate the state of biological tissues and facilitate a form of cellular communication are still under investigation. Alexander Gurwitsch, who discovered the existence of biophotons, was awarded the Stalin Prize in 1941 for his work.
Detection and measurement
Biophotons may be detected with photomultipliers or by means of an ultra-low-noise CCD camera to produce an image, using an exposure time of typically 15 minutes for plant materials. Photomultiplier tubes have been used to measure biophoton emissions from fish eggs, and some applications have measured biophotons from animals and humans. Electron-multiplying CCD (EM-CCD) cameras optimized for the detection of ultraweak light have also been used to detect the bioluminescence produced by yeast cells at the onset of their growth.
The typical observed radiant emittance of biological tissues in the visible and ultraviolet frequencies ranges from 10⁻¹⁷ to 10⁻²³ W/cm², with a photon count from a few to nearly 1000 photons per cm² in the range of 200 nm to 800 nm.
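As a rough consistency check on these units, a radiant emittance can be converted to a photon rate using the energy per photon, E = hc/λ. The sketch below assumes a representative wavelength of 500 nm (a mid-visible value not specified in this article):

```python
# Convert a radiant emittance in W/cm^2 to a photon rate, assuming a
# representative mid-visible wavelength of 500 nm (an illustrative choice;
# biophoton emission spans roughly 200-800 nm).
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
wavelength = 500e-9  # m

energy_per_photon = h * c / wavelength  # ~4.0e-19 J per photon

for emittance in (1e-17, 1e-23):        # W/cm^2, the range quoted above
    photons_per_cm2_s = emittance / energy_per_photon
    print(f"{emittance:.0e} W/cm^2 -> {photons_per_cm2_s:.1e} photons/cm^2/s")
# ~2.5e+01 and ~2.5e-05 photons/cm^2/s at the two ends of the range.
```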
Proposed physical mechanisms
Chemi-excitation via oxidative stress by reactive oxygen species or catalysis by enzymes (e.g., peroxidase, lipoxygenase) is a common event in the biomolecular milieu. Such reactions can lead to the formation of triplet excited species, which release photons upon returning to a lower energy level in a process analogous to phosphorescence. That this process is a contributing factor to spontaneous biophoton emission has been indicated by studies demonstrating that biophoton emission can be increased by depleting assayed tissue of antioxidants or by addition of carbonyl derivatizing agents. Further support is provided by studies indicating that emission can be increased by addition of reactive oxygen species.
Plants
Imaging of biophotons from leaves has been used as a method for assaying R gene responses. These genes and their associated proteins are responsible for pathogen recognition and activation of defense signaling networks leading to the hypersensitive response, which is one of the mechanisms of the resistance of plants to pathogen infection. It involves the generation of reactive oxygen species (ROS), which have crucial roles in signal transduction or as toxic agents leading to cell death.
Biophotons have also been observed in the roots of stressed plants. In healthy cells, the concentration of ROS is minimized by a system of biological antioxidants. However, heat shock and other stresses change the equilibrium between oxidative stress and antioxidant activity; for example, a rapid rise in temperature induces biophoton emission mediated by ROS.
Hypothesized involvement in cellular communication
In the 1920s, the Russian embryologist Alexander Gurwitsch reported "ultraweak" photon emissions from living tissues in the UV-range of the spectrum. He named them "mitogenetic rays" because his experiments convinced him that they had a stimulating effect on cell division.
In the 1970s Fritz-Albert Popp and his research group at the University of Marburg (Germany) showed that the spectral distribution of the emission fell over a wide range of wavelengths, from 200 to 750 nm. Popp's work on the biophoton emission's statistical properties, namely the claims on its coherence, was criticised for lack of scientific rigour.
One proposed biophoton mechanism focuses on injured cells, which are under higher levels of oxidative stress; this stress is one source of light and could be deemed to constitute either a "distress signal" or a background chemical process, but the mechanism has yet to be demonstrated. The difficulty of teasing out the effects of any supposed biophotons amid the other numerous chemical interactions between cells makes it difficult to devise a testable hypothesis. A 2010 review article discusses various published theories on this kind of signaling.
The hypothesis of cellular communication by biophotons has been highly criticised for failing to explain how cells could detect photonic signals several orders of magnitude weaker than the natural background illumination.
| Physical sciences | Bosons | Physics |
57762 | https://en.wikipedia.org/wiki/Psychiatric%20medication | Psychiatric medication | A psychiatric or psychotropic medication is a psychoactive drug taken to exert an effect on the chemical makeup of the brain and nervous system. Thus, these medications are used to treat mental illnesses. These medications are typically made of synthetic chemical compounds and are usually prescribed in psychiatric settings, potentially involuntarily during commitment. Since the mid-20th century, such medications have been leading treatments for a broad range of mental disorders and have decreased the need for long-term hospitalization, thereby lowering the cost of mental health care. The recidivism or rehospitalization of the mentally ill is at a high rate in many countries, and the reasons for the relapses are under research.
History
Several significant psychiatric drugs were developed in the mid-20th century. In 1948, lithium was first used as a psychiatric medicine. One of the most important discoveries was chlorpromazine, an antipsychotic that was first given to a patient in 1952. In the same decade, Julius Axelrod carried out research into the interaction of neurotransmitters, which provided a foundation for the development of further drugs. The popularity of these drugs has increased significantly since then, with millions prescribed annually.
The introduction of these drugs brought profound changes to the treatment of mental illness. It meant that more patients could be treated without the need for confinement in a psychiatric hospital. It was one of the key reasons why many countries moved towards deinstitutionalization, closing many of these hospitals so that patients could be treated at home, in general hospitals and smaller facilities. Use of physical restraints such as straitjackets also declined.
As of 2013, the 10 most prescribed psychiatric drugs by number of prescriptions were alprazolam, sertraline, citalopram, fluoxetine, lorazepam, trazodone, escitalopram, duloxetine, bupropion XL, and venlafaxine XR.
Administration
Psychiatric medications are prescription medications, requiring a prescription from a physician, such as a psychiatrist, or from a psychiatric-mental health nurse practitioner (PMHNP), before they can be obtained. Some U.S. states and territories, following the creation of the prescriptive authority for psychologists movement, have granted prescriptive privileges to clinical psychologists who have undergone additional specialised education and training in medical psychology. In addition to the familiar dosage in pill form, psychiatric medications are evolving into more novel methods of drug delivery. New technologies include transdermal, transmucosal, inhalation, suppository, and depot injection forms.
Research
Psychopharmacology studies a wide range of substances with various types of psychoactive properties. The professional and commercial fields of pharmacology and psychopharmacology do not typically focus on psychedelic or recreational drugs, and so the majority of studies are conducted on psychiatric medication. While studies are conducted on all psychoactive drugs by both fields, psychopharmacology focuses on psychoactive and chemical interactions within the brain. Physicians who research psychiatric medications are psychopharmacologists, specialists in the field of psychopharmacology.
Adverse and withdrawal effects
Psychiatric disorders, including depression, psychosis, and bipolar disorder, are common and gaining more acceptance in the United States. The most commonly used classes of medications for these disorders are antidepressants, antipsychotics, and lithium. Unfortunately, these medications are associated with significant neurotoxicities.
Psychiatric medications carry risk for neurotoxic adverse effects. The occurrence of neurotoxic effects can potentially reduce drug compliance. Some adverse effects can be treated symptomatically by using adjunct medications such as anticholinergics (antimuscarinics). Some rebound or withdrawal adverse effects, such as the possibility of a sudden or severe emergence or re-emergence of psychosis in antipsychotic withdrawal, may appear when the drugs are discontinued, or discontinued too rapidly.
Medicine combinations with clinically untried risks
While clinical trials of psychiatric medications, like other medications, typically test medicines separately, there is a practice in psychiatry (more so than in somatic medicine) of polypharmacy: using combinations of medicines that have never been tested together in clinical trials, even though each medicine involved has passed clinical trials separately. It is argued that this presents a risk of adverse effects, especially brain damage, in real-life mixed-medication psychiatry that is not visible in clinical trials of one medicine at a time (similar to mixed drug abuse causing significantly more damage than the additive effects of the individual drugs). Outside clinical trials, there is evidence for an increase in mortality when psychiatric patients are transferred to polypharmacy with an increased number of medications being mixed.
Types
There are five main groups of psychiatric medications.
Antidepressants, which treat disparate disorders such as clinical depression, dysthymia, anxiety disorders, eating disorders and borderline personality disorder.
Antipsychotics, which treat psychotic disorders such as schizophrenia and psychotic symptoms occurring in the context of other disorders such as mood disorders. They are also used for the treatment of bipolar disorder.
Anxiolytics, which treat anxiety disorders and include hypnotics and sedatives.
Mood stabilizers, which treat bipolar disorder and schizoaffective disorder.
Stimulants, which treat disorders such as attention deficit hyperactivity disorder and narcolepsy.
Antidepressants
Antidepressants are drugs used to treat clinical depression, and they are also often used for anxiety and other disorders. Most antidepressants increase the availability of serotonin, norepinephrine, and/or dopamine, usually by hindering their reuptake or, less commonly, their breakdown. A commonly used class of antidepressants are the selective serotonin reuptake inhibitors (SSRIs), which act on serotonin transporters in the brain to increase levels of serotonin in the synaptic cleft. Another is the serotonin-norepinephrine reuptake inhibitors (SNRIs), which increase both serotonin and norepinephrine. Antidepressants often take 3–5 weeks to have a noticeable effect as the regulation of receptors in the brain adapts. There are multiple classes of antidepressants with different mechanisms of action. Another type of antidepressant is the monoamine oxidase inhibitor (MAOI), which is thought to block the action of monoamine oxidase, an enzyme that breaks down serotonin and norepinephrine. MAOIs are not used as first-line treatment due to the risk of hypertensive crisis related to the consumption of foods containing the amino acid tyramine.
Common antidepressants:
Fluoxetine (Prozac), SSRI
Paroxetine (Paxil, Seroxat), SSRI
Citalopram (Celexa), SSRI
Escitalopram (Lexapro), SSRI
Sertraline (Zoloft), SSRI
Duloxetine (Cymbalta), SNRI
Venlafaxine (Effexor), SNRI
Bupropion (Wellbutrin), NDRI
Mirtazapine (Remeron), NaSSA
Isocarboxazid (Marplan), MAOI
Phenelzine (Nardil), MAOI
Tranylcypromine (Parnate), MAOI
Amitriptyline (Elavil), TCA
Antipsychotics
Antipsychotics are drugs used to treat various symptoms of psychosis, such as those caused by psychotic disorders or schizophrenia. Atypical antipsychotics are also used as mood stabilizers in the treatment of bipolar disorder, and they can augment the action of antidepressants in major depressive disorder.
Antipsychotics are sometimes referred to as neuroleptic drugs, and some antipsychotics have historically been marketed as "major tranquilizers".
There are two categories of antipsychotics: typical antipsychotics and atypical antipsychotics. Most antipsychotics are available only by prescription.
Common antipsychotics:
Anxiolytics and hypnotics
Benzodiazepines are effective as hypnotics, anxiolytics, anticonvulsants, myorelaxants and amnesics. Because they carry less risk of overdose and toxicity, they have widely supplanted barbiturates, although barbiturates (such as pentobarbital) are still used for euthanasia.
Developed in the 1950s onward, benzodiazepines were originally thought to be non-addictive at therapeutic doses, but are now known to cause withdrawal symptoms similar to barbiturates and alcohol. Benzodiazepines are generally recommended for short-term use.
Z-drugs are a group of drugs with effects generally similar to benzodiazepines, which are used in the treatment of insomnia.
Common benzodiazepines and z-drugs include:
Mood stabilizers
In 1949, the Australian John Cade discovered that lithium salts could control mania, reducing the frequency and severity of manic episodes. This brought the now widely used drug lithium carbonate into mainstream medicine; it was also the first mood stabilizer to be approved by the U.S. Food & Drug Administration.
Besides lithium, several anticonvulsants and atypical antipsychotics have mood stabilizing activity. The mechanism of action of mood stabilizers is not well understood.
Common non-antipsychotic mood stabilizers include:
Lithium (Lithobid, Eskalith), the oldest mood stabilizer
Anticonvulsants
Carbamazepine (Tegretol) and the related compound oxcarbazepine (Trileptal)
Valproic acid, and salts (Depakene, Depakote)
Lamotrigine (Lamictal)
Stimulants
A stimulant is a drug that stimulates the central nervous system, increasing arousal, attention and endurance. Stimulants are used in psychiatry to treat attention deficit-hyperactivity disorder. Because the medications can be addictive, patients with a history of drug abuse are typically monitored closely or treated with a non-stimulant.
Common stimulants:
Methylphenidate (Ritalin, Concerta), a norepinephrine-dopamine reuptake inhibitor
Dexmethylphenidate (Focalin), the active dextro-enantiomer of methylphenidate
Serdexmethylphenidate/dexmethylphenidate (Azstarys)
Mixed amphetamine salts (Adderall), a 3:1 mix of dextro/levo-enantiomers of amphetamine
Dextroamphetamine (Dexedrine), the dextro-enantiomer of amphetamine
Lisdexamfetamine (Vyvanse), a prodrug containing the dextro-enantiomer of amphetamine
Methamphetamine (Desoxyn), a potent but infrequently prescribed amphetamine
Controversies
Professionals such as David Rosenhan, Peter Breggin, Paula Caplan, Thomas Szasz and Stuart A. Kirk maintain that psychiatry engages "in the systematic medicalization of normality". More recently these concerns have come from insiders who have worked for and promoted the APA (e.g., Robert Spitzer, Allen Frances).
Scholars such as Cooper, Foucault, Goffman, Deleuze and Szasz believe that pharmacological "treatment" is only a placebo effect, and that administration of drugs is just a religion in disguise and ritualistic chemistry. Other scholars have argued against psychiatric medication on the grounds that significant aspects of mental illness are related to the psyche or to environmental factors, whereas medication works exclusively on a pharmacological basis.
Antipsychotics have been associated with decreases in brain volume over time, which may indicate a neurotoxic effect. However, untreated psychosis has also been associated with decreases in brain volume, and treatments have been shown to improve cognitive functioning.
| Biology and health sciences | Psychiatric drugs | Health |
57763 | https://en.wikipedia.org/wiki/Aerosol | Aerosol | An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols can be generated from natural or human causes. The term aerosol commonly refers to the mixture of particulates in air, and not to the particulate matter alone. Examples of natural aerosols are fog, mist or dust. Examples of human caused aerosols include particulate air pollutants, mist from the discharge at hydroelectric dams, irrigation mist, perfume from atomizers, smoke, dust, sprayed pesticides, and medical treatments for respiratory illnesses.
Several types of atmospheric aerosol have a significant effect on Earth's climate: volcanic, desert dust, sea-salt, that originating from biogenic sources and human-made. Volcanic aerosol forms in the stratosphere after an eruption as droplets of sulfuric acid that can prevail for up to two years, and reflect sunlight, lowering temperature. Desert dust, mineral particles blown to high altitudes, absorb heat and may be responsible for inhibiting storm cloud formation. Human-made sulfate aerosols, primarily from burning oil and coal, affect the behavior of clouds. When aerosols absorb pollutants, it facilitates the deposition of pollutants to the surface of the earth as well as to bodies of water. This has the potential to be damaging to both the environment and human health.
Ship tracks are clouds that form around the exhaust released by ships into the still ocean air. Water molecules collect around the tiny particles (aerosols) from exhaust to form a cloud seed. More and more water accumulates on the seed until a visible cloud is formed. In the case of ship tracks, the cloud seeds are stretched over a long narrow path where the wind has blown the ship's exhaust, so the resulting clouds resemble long strings over the ocean.
The warming caused by human-produced greenhouse gases has been somewhat offset by the cooling effect of human-produced aerosols. In 2020, regulations on fuel significantly cut sulfur dioxide emissions from international shipping by approximately 80%, leading to an unexpected global geoengineering termination shock.
The liquid or solid particles in an aerosol have diameters typically less than 1 μm. Larger particles with a significant settling speed make the mixture a suspension, but the distinction is not clear. In everyday language, aerosol often refers to a dispensing system that delivers a consumer product from a spray can.
Diseases can spread by means of small droplets in the breath, sometimes called bioaerosols.
Definitions
Aerosol is defined as a suspension system of solid or liquid particles in a gas. An aerosol includes both the particles and the suspending gas, which is usually air. Meteorologists and climatologists often refer to them as particulate matter, and classification into size ranges such as PM2.5 or PM10 is useful in the field of atmospheric pollution, as these size ranges largely determine the harmful effects on human health. Frederick G. Donnan presumably first used the term aerosol during World War I to describe an aero-solution, clouds of microscopic particles in air. This term developed analogously to the term hydrosol, a colloid system with water as the dispersed medium. Primary aerosols contain particles introduced directly into the gas; secondary aerosols form through gas-to-particle conversion.
Key aerosol groups include sulfates, organic carbon, black carbon, nitrates, mineral dust, and sea salt; these usually clump together to form a complex mixture. Various types of aerosol, classified according to physical form and how they were generated, include dust, fume, mist, smoke and fog.
There are several measures of aerosol concentration. Environmental science and environmental health often use the mass concentration (M), defined as the mass of particulate matter per unit volume, in units such as μg/m3. Also commonly used is the number concentration (N), the number of particles per unit volume, in units such as number per m3 or number per cm3.
Particle size has a major influence on particle properties, and the aerosol particle radius or diameter (dp) is a key property used to characterise aerosols.
Aerosols vary in their dispersity. A monodisperse aerosol, producible in the laboratory, contains particles of uniform size. Most aerosols, however, as polydisperse colloidal systems, exhibit a range of particle sizes. Liquid droplets are almost always nearly spherical, but scientists use an equivalent diameter to characterize the properties of various shapes of solid particles, some very irregular. The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The equivalent volume diameter (de) is defined as the diameter of a sphere of the same volume as that of the irregular particle. Also commonly used is the aerodynamic diameter, da.
Generation and applications
People generate aerosols for various purposes, including:
as test aerosols for calibrating instruments, performing research, and testing sampling equipment and air filters;
to deliver deodorants, paints, and other consumer products in sprays;
for dispersal and agricultural application
for medical treatment of respiratory disease; and
in fuel injection systems and other combustion technology.
Some devices for generating aerosols are:
Aerosol spray
Atomizer nozzle or nebulizer
Electrospray
Electronic cigarette
Vibrating orifice aerosol generator (VOAG)
In the atmosphere
Although all hydrometeors, solid and liquid, can be described as aerosols, a distinction is commonly made between such dispersions (i.e. clouds) containing activated drops and crystals, and aerosol particles. The atmosphere of Earth contains aerosols of various types and concentrations, including quantities of:
natural inorganic materials: fine dust, sea salt, or water droplets
natural organic materials: smoke, pollen, spores, or bacteria
anthropogenic products of combustion such as: smoke, ashes or dusts
Aerosols can be found in urban ecosystems in various forms, for example:
Dust
Cigarette smoke
Mist from aerosol spray cans
Soot or fumes in car exhaust
The presence of aerosols in the Earth's atmosphere can influence its climate, as well as human health.
Effects
Volcanic eruptions release large amounts of sulphuric acid, hydrogen sulfide and hydrochloric acid into the atmosphere. These substances form aerosols and eventually return to earth as acid rain, having a number of adverse effects on the environment and human life.
Aerosols interact with the Earth's energy budget in two ways, directly and indirectly.
E.g., a direct effect is that aerosols scatter and absorb incoming solar radiation. This will mainly lead to a cooling of the surface (solar radiation is scattered back to space) but may also contribute to a warming of the surface (caused by the absorption of incoming solar energy). This will be an additional element to the greenhouse effect and therefore contributing to the global climate change.
The indirect effects refer to the aerosol interfering with formations that interact directly with radiation. For example, they are able to modify the size of the cloud particles in the lower atmosphere, thereby changing the way clouds reflect and absorb light and therefore modifying the Earth's energy budget.
There is evidence to suggest that anthropogenic aerosols actually offset the effects of greenhouse gases in some areas, which is why the Northern Hemisphere shows slower surface warming than the Southern Hemisphere, although that just means that the Northern Hemisphere will absorb the heat later through ocean currents bringing warmer waters from the South. On a global scale however, aerosol cooling decreases greenhouse-gases-induced heating without offsetting it completely.
Aerosols in the 20 μm range show a particularly long persistence time in air-conditioned rooms due to their "jet rider" behaviour (they move with air jets but fall out gravitationally in slowly moving air); as this aerosol size is most effectively adsorbed in the human nose, the primary initial infection site in COVID-19, such aerosols may contribute to the pandemic.
Aerosol particles with an effective diameter smaller than 10 μm can enter the bronchi, while the ones with an effective diameter smaller than 2.5 μm can enter as far as the gas exchange region in the lungs, which can be hazardous to human health.
Size distribution
For a monodisperse aerosol, a single number—the particle diameter—suffices to describe the size of the particles. However, more complicated particle-size distributions describe the sizes of the particles in a polydisperse aerosol. This distribution defines the relative amounts of particles, sorted according to size. One approach to defining the particle size distribution uses a list of the sizes of every particle in a sample. However, this approach proves tedious to ascertain in aerosols with millions of particles and awkward to use. Another approach splits the size range into intervals and finds the number (or proportion) of particles in each interval. These data can be presented in a histogram with the area of each bar representing the proportion of particles in that size bin, usually normalised by dividing the number of particles in a bin by the width of the interval so that the area of each bar is proportionate to the number of particles in the size range that it represents. If the width of the bins tends to zero, the frequency function is:
$\mathrm{d}f = f(d_p)\,\mathrm{d}(d_p)$

where
$d_p$ is the diameter of the particles
$\mathrm{d}f$ is the fraction of particles having diameters between $d_p$ and $d_p + \mathrm{d}(d_p)$
$f(d_p)$ is the frequency function

Therefore, the area under the frequency curve between two sizes a and b represents the total fraction of the particles in that size range:

$f_{ab} = \int_a^b f(d_p)\,\mathrm{d}(d_p)$

It can also be formulated in terms of the total number density N:

$\mathrm{d}N = N(d_p)\,\mathrm{d}(d_p), \qquad \int_0^\infty N(d_p)\,\mathrm{d}(d_p) = N$

Assuming spherical aerosol particles, the aerosol surface area per unit volume (S) is given by the second moment:

$S = \pi \int_0^\infty N(d_p)\,d_p^2\,\mathrm{d}(d_p)$

And the third moment gives the total volume concentration (V) of the particles:

$V = \frac{\pi}{6} \int_0^\infty N(d_p)\,d_p^3\,\mathrm{d}(d_p)$
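To make the moment integrals concrete, here is a minimal Python sketch that approximates N, S, and V from a binned number size distribution; the bin edges and counts are hypothetical example values, not data from the text.

```python
import numpy as np

# Hypothetical binned number size distribution: bin edges in micrometres
# and particle number concentration in each bin (particles per cm^3).
bin_edges_um = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
counts_per_cm3 = np.array([800.0, 400.0, 150.0, 40.0, 5.0])

# Midpoint diameter of each bin, used as the representative particle size.
d_p = 0.5 * (bin_edges_um[:-1] + bin_edges_um[1:])

N = counts_per_cm3.sum()                             # total number concentration
S = np.sum(np.pi * d_p**2 * counts_per_cm3)          # second moment: surface area (um^2/cm^3)
V = np.sum((np.pi / 6.0) * d_p**3 * counts_per_cm3)  # third moment: volume (um^3/cm^3)

print(f"N = {N:.0f} cm^-3, S = {S:.1f} um^2/cm^3, V = {V:.1f} um^3/cm^3")
```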
The particle size distribution can be approximated. The normal distribution usually does not suitably describe particle size distributions in aerosols because of the skewness associated with a long tail of larger particles. Also, for a quantity that varies over a large range, as many aerosol sizes do, the width of the distribution implies negative particle sizes, which is not physically realistic. However, the normal distribution can be suitable for some aerosols, such as test aerosols, certain pollen grains and spores.
A more widely chosen log-normal distribution gives the number frequency as:
$\mathrm{d}f = \frac{1}{d_p\,\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(\ln d_p - \ln \bar{d}_p)^2}{2\sigma^2}\right)\mathrm{d}(d_p)$

where:
$\sigma$ is the standard deviation of the size distribution and
$\bar{d}_p$ is the arithmetic mean diameter.
The log-normal distribution has no negative values, can cover a wide range of values, and fits many observed size distributions reasonably well.
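As a quick numerical sanity check of the log-normal form, the sketch below (with hypothetical values for the standard deviation and mean diameter) confirms that the frequency function integrates to approximately one over positive diameters.

```python
import numpy as np

def lognormal_frequency(d_p, sigma, d_mean):
    """Log-normal number frequency f(d_p), with sigma the standard deviation
    of the size distribution and d_mean the mean diameter (as in the text)."""
    return (1.0 / (d_p * sigma * np.sqrt(2.0 * np.pi))
            * np.exp(-(np.log(d_p) - np.log(d_mean)) ** 2 / (2.0 * sigma ** 2)))

# Hypothetical parameters: sigma = 0.7, mean diameter 0.3 um.
d = np.linspace(1e-4, 50.0, 500_000)  # diameters in micrometres
f = lognormal_frequency(d, sigma=0.7, d_mean=0.3)

# Trapezoidal integration; the total fraction should come out close to 1.
total = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d))
print(f"integral of f over d_p = {total:.4f}")
```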
Other distributions sometimes used to characterise particle size include: the Rosin-Rammler distribution, applied to coarsely dispersed dusts and sprays; the Nukiyama–Tanasawa distribution, for sprays of extremely broad size ranges; the power function distribution, occasionally applied to atmospheric aerosols; the exponential distribution, applied to powdered materials; and for cloud droplets, the Khrgian–Mazin distribution.
Physics
Terminal velocity of a particle in a fluid
For low values of the Reynolds number (<1), true for most aerosol motion, Stokes' law describes the force of resistance on a solid spherical particle in a fluid. However, Stokes' law is only valid when the velocity of the gas at the surface of the particle is zero. For small particles (< 1 μm) that characterize aerosols, however, this assumption fails. To account for this failure, one can introduce the Cunningham correction factor, always greater than 1. Including this factor, one finds the relation between the resisting force on a particle and its velocity:
$F_D = \frac{3\pi\eta V d}{C_c}$

where
$F_D$ is the resisting force on a spherical particle of diameter $d$
$\eta$ is the dynamic viscosity of the gas
$V$ is the particle velocity
$C_c$ is the Cunningham correction factor.
This allows us to calculate the terminal velocity of a particle undergoing gravitational settling in still air. Neglecting buoyancy effects, we find:
$V_{TS} = \frac{\rho_p\, d^2\, g\, C_c}{18\eta}$

where
$V_{TS}$ is the terminal settling velocity of the particle, $\rho_p$ is the particle density, and $g$ is the acceleration due to gravity.
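The settling relation above can be evaluated directly. The sketch below computes the terminal velocity of a hypothetical 1 μm unit-density sphere in room-temperature air, using a widely tabulated empirical approximation for the Cunningham correction factor.

```python
import math

def cunningham(d_m, mfp=68e-9):
    """Approximate Cunningham slip correction factor for a particle of
    diameter d_m (metres); mfp is the gas mean free path (~68 nm for air)."""
    kn = 2.0 * mfp / d_m
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def terminal_velocity(d_m, rho_p=1000.0, eta=1.81e-5, g=9.81):
    """Terminal settling velocity V_TS = rho_p d^2 g C_c / (18 eta),
    in m/s, for still room-temperature air (eta in Pa s)."""
    return rho_p * d_m**2 * g * cunningham(d_m) / (18.0 * eta)

# A 1 um unit-density sphere settles at roughly 35 um/s.
print(f"{terminal_velocity(1e-6) * 1e6:.1f} um/s")
```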
The terminal velocity can also be derived for other kinds of forces. If Stokes' law holds, then the resistance to motion is directly proportional to speed. The constant of proportionality is the mechanical mobility (B) of a particle:

$B = \frac{V}{F_D} = \frac{C_c}{3\pi\eta d}$
A particle traveling at any reasonable initial velocity approaches its terminal velocity exponentially with an e-folding time equal to the relaxation time:
$V(t) = V_f + (V_0 - V_f)\,e^{-t/\tau}$

where:
$V(t)$ is the particle speed at time t
$V_f$ is the final particle speed
$V_0$ is the initial particle speed
$\tau = \rho_p d^2 C_c / (18\eta)$ is the relaxation time.
To account for the effect of the shape of non-spherical particles, a correction factor known as the dynamic shape factor is applied to Stokes' law. It is defined as the ratio of the resistive force of the irregular particle to that of a spherical particle with the same volume and velocity:
$\chi = \frac{F_D}{3\pi\eta V d_e}$

where:
$\chi$ is the dynamic shape factor and $d_e$ is the equivalent volume diameter.
Aerodynamic diameter
The aerodynamic diameter of an irregular particle is defined as the diameter of the spherical particle with a density of 1000 kg/m3 and the same settling velocity as the irregular particle.
Neglecting the slip correction, the particle settles at the terminal velocity proportional to the square of the aerodynamic diameter, da:
$V_{TS} = \frac{\rho_0\, d_a^2\, g}{18\eta}$

where
$\rho_0$ = standard particle density (1000 kg/m3).
This equation gives the aerodynamic diameter:

$d_a = d_e \left(\frac{\rho_p}{\rho_0\,\chi}\right)^{1/2}$

where $\rho_p$ is the particle density and $\chi$ is its dynamic shape factor.
One can apply the aerodynamic diameter to particulate pollutants or to inhaled drugs to predict where in the respiratory tract such particles deposit. Pharmaceutical companies typically use aerodynamic diameter, not geometric diameter, to characterize particles in inhalable drugs.
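As a worked example of this conversion, the sketch below maps an equivalent volume diameter to an aerodynamic diameter using the relation above; the dust-particle density and shape factor are hypothetical example values.

```python
import math

def aerodynamic_diameter(d_e, rho_p, chi=1.0, rho_0=1000.0):
    """Aerodynamic diameter d_a = d_e * sqrt(rho_p / (rho_0 * chi)),
    neglecting the slip correction; densities in kg/m^3, d_e in any unit."""
    return d_e * math.sqrt(rho_p / (rho_0 * chi))

# A hypothetical 2 um mineral dust particle (density 2600 kg/m^3,
# dynamic shape factor 1.4) behaves aerodynamically like a ~2.7 um sphere:
print(f"{aerodynamic_diameter(2.0, rho_p=2600.0, chi=1.4):.2f} um")
```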
Dynamics
The previous discussion focused on single aerosol particles. In contrast, aerosol dynamics explains the evolution of complete aerosol populations. The concentrations of particles will change over time as a result of many processes. External processes that move particles outside a volume of gas under study include diffusion, gravitational settling, and electric charges and other external forces that cause particle migration. A second set of processes internal to a given volume of gas include particle formation (nucleation), evaporation, chemical reaction, and coagulation.
A differential equation called the Aerosol General Dynamic Equation (GDE) characterizes the evolution of the number density of particles in an aerosol due to these processes.
$\frac{\partial n_i}{\partial t} = -\nabla \cdot (n_i\,\mathbf{q}) + \nabla \cdot (D_p \nabla n_i) + \left(\frac{\partial n_i}{\partial t}\right)_{\text{growth}} + \left(\frac{\partial n_i}{\partial t}\right)_{\text{coag}} - \nabla \cdot (n_i\,\mathbf{q}_F)$

In words: change in time = convective transport + Brownian diffusion + gas-particle interactions + coagulation + migration by external forces,

where:
$n_i$ is the number density of particles of size category $i$
$\mathbf{q}$ is the particle velocity
$D_p$ is the particle Stokes-Einstein diffusivity
$\mathbf{q}_F$ is the particle velocity associated with an external force
Coagulation
As particles and droplets in an aerosol collide with one another, they may undergo coalescence or aggregation. This process leads to a change in the aerosol particle-size distribution, with the mode increasing in diameter as total number of particles decreases. On occasion, particles may shatter apart into numerous smaller particles; however, this process usually occurs primarily in particles too large for consideration as aerosols.
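For a rough feel of coagulation timescales, the idealised case of monodisperse coagulation with a constant collision kernel K has the closed-form decay N(t) = N0 / (1 + K N0 t / 2) for the total number concentration; the sketch below evaluates it for hypothetical, urban-like values of N0 and K.

```python
def total_number(t_s, n0=1e11, k=5e-16):
    """Total particle number concentration (per m^3) after t_s seconds of
    monodisperse coagulation with constant kernel k (m^3/s).
    n0 and k are hypothetical example values."""
    return n0 / (1.0 + 0.5 * k * n0 * t_s)

for t in (0, 600, 3600, 86400):  # now, 10 min, 1 h, 1 day
    print(f"t = {t:6d} s  ->  N = {total_number(t):.3e} m^-3")
```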
Dynamics regimes
The Knudsen number of the particle defines three different dynamical regimes that govern the behaviour of an aerosol:

$K_n = \frac{2\lambda}{d}$

where $\lambda$ is the mean free path of the suspending gas and $d$ is the diameter of the particle. For particles in the free molecular regime, $K_n \gg 1$: particles are small compared to the mean free path of the suspending gas. In this regime, particles interact with the suspending gas through a series of "ballistic" collisions with gas molecules. As such, they behave similarly to gas molecules, tending to follow streamlines and diffusing rapidly through Brownian motion. The mass flux equation in the free molecular regime is:

$I = \frac{\pi a^2\, \bar{C}_A\, \alpha}{k_b T}\,(P_{\infty} - P_{A})$
where a is the particle radius, P∞ and PA are the pressures far from the droplet and at the surface of the droplet respectively, kb is the Boltzmann constant, T is the temperature, CA is mean thermal velocity and α is mass accommodation coefficient. The derivation of this equation assumes constant pressure and constant diffusion coefficient.
Particles are in the continuum regime when $K_n \ll 1$. In this regime, the particles are large compared to the mean free path of the suspending gas, meaning that the suspending gas acts as a continuous fluid flowing round the particle. The molecular flux in this regime is:

$I_{\mathrm{cont}} = \frac{4\pi a\, D_{AB}\, M_A}{R T}\,(P_{A\infty} - P_{AS})$
where a is the radius of the particle A, MA is the molecular mass of the particle A, DAB is the diffusion coefficient between particles A and B, R is the ideal gas constant, T is the temperature (in absolute units like kelvin), and PA∞ and PAS are the pressures at infinite and at the surface respectively.
The transition regime contains all the particles in between the free molecular and continuum regimes, or $K_n \approx 1$. The forces experienced by a particle are a complex combination of interactions with individual gas molecules and macroscopic interactions. The semi-empirical equation describing mass flux is:

$I = I_{\mathrm{cont}}\,\frac{1 + K_n}{1 + 1.71 K_n + 1.33 K_n^2}$
where Icont is the mass flux in the continuum regime. This formula is called the Fuchs-Sutugin interpolation formula. These equations do not take into account the heat release effect.
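A short sketch of the interpolation formula, normalising the continuum flux to one: it shows the correction approaching 1 in the continuum limit and strongly suppressing the flux at large Knudsen numbers.

```python
def fuchs_sutugin(kn):
    """Fuchs-Sutugin transition-regime correction applied to the
    continuum-regime mass flux."""
    return (1.0 + kn) / (1.0 + 1.71 * kn + 1.33 * kn**2)

i_cont = 1.0  # continuum-regime flux, normalised for illustration
for kn in (0.01, 0.1, 1.0, 10.0):
    print(f"Kn = {kn:5.2f}  ->  I/I_cont = {i_cont * fuchs_sutugin(kn):.3f}")
```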
Partitioning
Aerosol partitioning theory governs condensation on and evaporation from an aerosol surface, respectively. Condensation of mass causes the mode of the particle-size distributions of the aerosol to increase; conversely, evaporation causes the mode to decrease. Nucleation is the process of forming aerosol mass from the condensation of a gaseous precursor, specifically a vapor. Net condensation of the vapor requires supersaturation, a partial pressure greater than its vapor pressure. This can happen for three reasons:
Lowering the temperature of the system lowers the vapor pressure.
Chemical reactions may increase the partial pressure of a gas or lower its vapor pressure.
The addition of additional vapor to the system may lower the equilibrium vapor pressure according to Raoult's law.
There are two types of nucleation processes. Gases preferentially condense onto surfaces of pre-existing aerosol particles, known as heterogeneous nucleation. This process causes the diameter at the mode of particle-size distribution to increase with constant number concentration. With sufficiently high supersaturation and no suitable surfaces, particles may condense in the absence of a pre-existing surface, known as homogeneous nucleation. This results in the addition of very small, rapidly growing particles to the particle-size distribution.
Activation
Water coats particles in aerosols, making them activated, usually in the context of forming a cloud droplet (such as natural cloud seeding by aerosols from trees in a forest). Following the Kelvin equation (based on the curvature of liquid droplets), smaller particles need a higher ambient relative humidity to maintain equilibrium than larger particles do. The following formula gives relative humidity at equilibrium:

$\mathrm{RH} = \frac{p_s}{p_0} \times 100\% = S \times 100\%$

where $p_s$ is the saturation vapor pressure above a particle at equilibrium (around a curved liquid droplet), $p_0$ is the saturation vapor pressure (flat surface of the same liquid) and $S$ is the saturation ratio.
The Kelvin equation for saturation vapor pressure above a curved surface is:

$p_s = p_0 \exp\!\left(\frac{2\sigma M}{\rho R T r_p}\right)$

where $r_p$ is the droplet radius, $\sigma$ the surface tension of the droplet, $\rho$ the density of the liquid, $M$ the molar mass, $T$ the temperature, and $R$ the molar gas constant.
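To make the curvature effect concrete, the sketch below evaluates the Kelvin ratio p_s/p_0 for water droplets at 298 K, using standard approximate property values for water; it shows the sharply higher equilibrium saturation required by nanometre-scale droplets.

```python
import math

def kelvin_ratio(r_p, sigma=0.072, rho=997.0, molar_mass=0.018, temp=298.0):
    """Equilibrium saturation ratio p_s/p_0 over a droplet of radius r_p (m),
    from the Kelvin equation; defaults approximate liquid water at 298 K."""
    R = 8.314  # molar gas constant, J/(mol K)
    return math.exp(2.0 * sigma * molar_mass / (rho * R * temp * r_p))

for r_nm in (5, 10, 50, 100, 1000):
    print(f"r = {r_nm:5d} nm  ->  p_s/p_0 = {kelvin_ratio(r_nm * 1e-9):.4f}")
```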
Solution to the general dynamic equation
There are no general solutions to the general dynamic equation (GDE); common methods used to solve the general dynamic equation include:
Moment method
Modal/sectional method, and
Quadrature method of moments/Taylor-series expansion method of moments, and
Monte Carlo method.
Detection
Aerosols can be measured in situ or with remote sensing techniques, either ground-based or airborne.
In situ observations
Some available in situ measurement techniques include:
Aerosol mass spectrometer (AMS)
Differential mobility analyzer (DMA)
Electrical aerosol spectrometer (EAS)
Aerodynamic particle sizer (APS)
Aerodynamic aerosol classifier (AAC)
Wide range particle spectrometer (WPS)
Micro-Orifice Uniform Deposit Impactor (MOUDI)
Condensation particle counter (CPC)
Epiphaniometer
Electrical low pressure impactor (ELPI)
Aerosol particle mass-analyser (APM)
Centrifugal Particle Mass Analyser (CPMA)
Remote sensing approach
Remote sensing approaches include:
Sun photometer
Lidar
Imaging spectroscopy
Size selective sampling
Particles can deposit in the nose, mouth, pharynx and larynx (the head airways region), deeper within the respiratory tract (from the trachea to the terminal bronchioles), or in the alveolar region. The location of deposition of aerosol particles within the respiratory system strongly determines the health effects of exposure to such aerosols. This phenomenon led people to invent aerosol samplers that select a subset of the aerosol particles that reach certain parts of the respiratory system.
Examples of these subsets of the particle-size distribution of an aerosol, important in occupational health, include the inhalable, thoracic, and respirable fractions. The fraction that can enter each part of the respiratory system depends on the deposition of particles in the upper parts of the airway. The inhalable fraction of particles, defined as the proportion of particles originally in the air that can enter the nose or mouth, depends on external wind speed and direction and on the particle-size distribution by aerodynamic diameter. The thoracic fraction is the proportion of the particles in ambient aerosol that can reach the thorax or chest region. The respirable fraction is the proportion of particles in the air that can reach the alveolar region. To measure the respirable fraction of particles in air, a pre-collector is used with a sampling filter. The pre-collector excludes particles as the airways remove particles from inhaled air. The sampling filter collects the particles for measurement. It is common to use cyclonic separation for the pre-collector, but other techniques include impactors, horizontal elutriators, and large pore membrane filters.
Two alternative size-selective criteria, often used in atmospheric monitoring, are PM10 and PM2.5. PM10 is defined by ISO as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 10 μm aerodynamic diameter and PM2.5 as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 2.5 μm aerodynamic diameter. PM10 corresponds to the "thoracic convention" as defined in ISO 7708:1995, Clause 6; PM2.5 corresponds to the "high-risk respirable convention" as defined in ISO 7708:1995, 7.1. The United States Environmental Protection Agency replaced the older standards for particulate matter based on Total Suspended Particulate with another standard based on PM10 in 1987 and then introduced standards for PM2.5 (also known as fine particulate matter) in 1997.
| Physical sciences | Chemical mixtures: General | null |
57866 | https://en.wikipedia.org/wiki/Kudzu | Kudzu | Kudzu (), also called Japanese arrowroot or Chinese arrowroot, is a group of climbing, coiling, and trailing deciduous perennial vines native to much of East Asia, Southeast Asia, and some Pacific islands. It is invasive in many parts of the world, primarily North America.
The vine densely climbs over other plants and trees and grows so rapidly that it smothers and kills them by blocking most of the sunlight and taking root space. The plants are in the genus Pueraria, in the pea family Fabaceae, subfamily Faboideae. The name is derived from kuzu, the Japanese name for East Asian arrowroot (Pueraria montana var. lobata). Where these plants are naturalized, they can be invasive and are considered noxious weeds. The plant is edible, but often sprayed with herbicides.
Taxonomy
The name kudzu describes one or more species in the genus Pueraria that are closely related, and some of them are considered to be varieties rather than full species. The morphological differences between the subspecies of P. montana are subtle; they can breed with each other, and introduced kudzu populations in the United States apparently have ancestry from more than one of the subspecies. They are:
P. montana
Pueraria montana var. chinensis (Ohwi) Sanjappa & Pradeep (= P. chinensis)
Pueraria montana var. lobata (Willd.) Sanjappa & Pradeep (= P. lobata)
Pueraria montana var. thomsonii (Benth.) Wiersema ex D.B. Ward (= P. thomsonii)
P. edulis
P. phaseoloides – proposed to be moved to Neustanthus
Various other species in Pueraria sensu stricto are also known as "kudzu" with an adjective, but they are not as widely cultivated or introduced.
Ecology
Kudzu has been referred to as "quasi-wild" due to its long history of cultivation, selective breeding into various cultivars, and subsequent return to wild conditions. Some researchers suggest that humans are the main predator of kudzu in its native range, and that human use and cultivation of kudzu both contributes to its success as an invasive species and is a form of biological control for kudzu.
Propagation
Kudzu spreads by vegetative reproduction via stolons (runners) that root at the nodes to form new plants and by rhizomes. Kudzu also spreads by seeds, which are contained in pods and mature in the autumn, although this is rare. One or two viable seeds are produced per cluster of pods. The hard-coated seeds can remain viable for several years, and can successfully germinate only when soil is persistently soggy for 5–7 days, with temperatures above 20 °C (68 °F).
Once germinated, saplings must be kept in a well-drained medium that retains high moisture. During this stage of growth, kudzu must receive as much sunlight as possible. Kudzu saplings are sensitive to mechanical disturbance and are damaged by chemical fertilizers. They do not tolerate long periods of shade or high water tables. Kudzu is able to withstand environments ranging from sunny to shady upon reaching its mature stage; however, forest edges with greater light availability are optimal.
Invasive species
Kudzu's environmental and ecological damage results from its outcompeting other species for a resource. Kudzu competes with native flora for light, and acts to block their access to this vital resource by growing over them and shading them with its leaves. Native plants may then die as a result. When kudzu invades an ecosystem, it makes the leaf litter more labile, thereby lessening the carbon sequestration ability of the soil; this feeds climate change.
Americas
Kudzu is an infamous weed in the United States, where it can be found in 32 states. It is common along roadsides and other disturbed areas throughout most of the southeast, as far north as rural areas of Pulaski County, Illinois. The vine has become a sore point in Southern US culture. Estimates of its rate of spread differ wildly; popular accounts describe it as blanketing vast areas each year, although in 2015 the United States Forest Service estimated the actual rate of spread to be far lower.
A small patch of kudzu was discovered in 2009 in Leamington, Ontario, the second-warmest growing region of Canada after south coastal British Columbia.
Kudzu was introduced from Japan into the United States at the Japanese pavilion in the 1876 Centennial Exposition in Philadelphia. It was also shown at the Chicago World's Fair. It remained a garden plant until the Dust Bowl era (1930s–1940s), when the vine was marketed as a way for farmers to stop soil erosion. The new Soil Conservation Service grew seventy million kudzu seedlings and paid $8 an acre to anyone who would sow the vine. Road and rail builders planted kudzu to stabilize steep slopes. Farmer and journalist Channing Cope, dubbed "kudzu kid" in a 1949 Time profile, popularised it in the South as a fix for eroded soils. He started the Kudzu Club of America, which, by 1943, had 20,000 members. The club aimed to plant kudzu widely across the South, and cultivation peaked by 1945. Once Soil Conservation Service payments ended, much of the kudzu was destroyed as farmers turned the land over to more profitable uses. The Soil Conservation Service stopped promoting kudzu altogether by the 1950s.
Kudzu's ongoing mythos as a mile-a-minute invader is likely due to its visibility coating trees at wooded roadsides, thriving in the sunshine at the forest edge. Despite kudzu's notoriety, Asian privet and invasive roses have each proved to be greater threats in the United States.
Europe
In Europe, kudzu has been included since 2016 on the list of Invasive Alien Species of Union concern (the Union list). This means that this species cannot be imported, cultivated, transported, commercialized, planted, or intentionally released into the environment anywhere in the European Union.
There are only some kudzu populations in certain regions of Italy and Switzerland. In Switzerland it occurs almost exclusively in Ticino, where it has been found in the wild since at least 1956. Most outbreaks are concentrated around Lake Lugano and Lake Maggiore, where the climate (hot summers and mild winters) encourages its growth. However, outbreaks in peripheral areas such as the Onsernone Valley and Lower Leventina are likely due to the illegal disposal of plant waste. A plan is currently in place to reduce and eventually eradicate the kudzu population in Ticino.
Other regions
During World War II, kudzu was introduced to Vanuatu and Fiji by United States Armed Forces to serve as camouflage for equipment and has become a major weed.
In Australia, kudzu is also becoming a problem in Queensland, the Northern Territory and New South Wales.
In New Zealand, kudzu was declared an "unwanted organism" and was added to the Biosecurity New Zealand register in 2002.
Control
Crown removal
Destroying the full underground system, which can be extremely large and deep, is not necessary for successful long-term control of kudzu. Killing or removing the kudzu root crown and all rooting runners is sufficient. The root crown is a fibrous knob of tissue that sits on top of the roots. Crowns form from multiple vine nodes that root to the ground, and range from pea- to basketball-sized. These crowns and attached tuberous roots can weigh 400 or 500 pounds (180 to 225 kilograms) and extend up to twenty feet (six meters) into the ground. The age of the crowns is correlated to how deep they are in the ground. Nodes and crowns are the source of all kudzu vines, and roots cannot produce vines. If any portion of a root crown remains after attempted removal, the kudzu plant may still grow back.
Mechanical methods of control involve cutting off crowns from roots, usually just below ground level. This immediately kills the plant. Cutting off the above-ground vines is not sufficient for an immediate kill. Destroying all removed crown material is necessary. Buried crowns can regenerate into healthy kudzu. Transporting crowns in soil removed from a kudzu infestation is one common way that kudzu unexpectedly spreads and shows up in new locations.
Close mowing every week, regular heavy grazing for many successive years, or repeated cultivation may be effective, as this serves to deplete root reserves. If done in the spring, cutting off vines must be repeated. Regrowth appears to exhaust the plant's stored carbohydrate reserves. Harvested kudzu can be fed to livestock, burned, or composted.
In the United States, the city of Chattanooga, Tennessee undertook a trial program in 2010 using goats and llamas to graze on the plant. Similar efforts to reduce widespread nuisance kudzu growth have also been undertaken in the cities of Winston-Salem, North Carolina and Tallahassee, Florida.
Prescribed burning is used on old extensive infestations to remove vegetative cover and promote seed germination for removal or treatment. While fire is not an effective way to kill kudzu, equipment, such as a skid loader, can later remove crowns and kill kudzu with minimal disturbance or erosion of soil.
Herbicide
A systemic herbicide, for example, glyphosate, triclopyr, or picloram, can be applied directly on cut stems, which is an effective means of transporting the herbicide into the kudzu's extensive root system. Herbicides can be used after other methods of control, such as mowing, grazing, or burning, which can allow for an easier application of the chemical to the weakened plants. In large-scale forestry infestations, soil-active herbicides have been shown to be highly effective.
After initial herbicidal treatment, follow-up treatments and monitoring are usually necessary, depending on how long the kudzu has been growing in an area. Up to 10 years of supervision may be needed after the initial chemical placement to make sure the plant does not return.
Fungi
Since 1998, the United States' Agricultural Research Service has experimented with using the fungus Myrothecium verrucaria as a biologically based herbicide against kudzu. A diacetylverrucarol spray based on M. verrucaria works under a variety of conditions (including the absence of dew), causes minimal injury to many of the other woody plants in kudzu-infested habitats, and takes effect quickly enough that kudzu treated with it in the morning starts showing evidence of damage by midafternoon. Initial formulations of the herbicide produced toxic levels of other trichothecenes as byproducts, though the ARS discovered that growing M. verrucaria in a fermenter on a liquid diet (instead of a solid) limited or eliminated the problem.
Uses
Soil improvement and preservation
Kudzu has been used as a form of erosion control and to enhance the soil. As a legume, it increases the nitrogen in the soil by a symbiotic relationship with nitrogen-fixing bacteria. Its deep taproots also transfer valuable minerals from the subsoil to the topsoil, thereby improving the topsoil. In the deforested section of the central Amazon Basin in Brazil, it has been used for improving the soil pore-space in clay latosols, thus freeing even more water for plants than in the soil prior to deforestation.
Animal feed
Kudzu can be used by grazing animals, as it is high in quality as a forage and palatable to livestock. It can be grazed until frost and even slightly after. Kudzu had been used in the southern United States specifically to feed goats on land that had limited resources. Kudzu hay typically has a 22–23% crude protein content and over 60% total digestible nutrient value. The quality of the leaves decreases as vine content increases relative to the leaf content. Kudzu also has low forage yields despite its rate of growth, yielding around two to four tons of dry matter per acre annually. It is also difficult to bale due to its vining growth and its slowness in shedding water. This makes it necessary to place kudzu hay under sheltered protection after being baled. Fresh kudzu is readily consumed by all types of grazing animals, but frequent grazing over three to four years can ruin even established stands. Thus, kudzu only serves well as a grazing crop on a temporary basis.
Basketry
Kudzu fiber has long been used for fiber art and basketry. The long runners which propagate the kudzu fields and the larger vines which cover trees make excellent weaving material. Some basketmakers use the material green. Others use it after splitting it in half, allowing it to dry and then rehydrating it using hot water. Both traditional and contemporary basketry artists use kudzu.
Phytochemicals and uses
Kudzu contains isoflavones, including puerarin (about 60% of the total isoflavones), daidzein, daidzin (structurally related to genistein), mirificin, and salvianolic acid, among numerous others identified. In traditional Chinese medicine, where it is known as gé gēn (gegen), kudzu is considered one of the 50 fundamental herbs thought to have therapeutic effects, although there is no high-quality clinical research to indicate it has any activity or therapeutic use in humans. Combinations of icariin, astragalus, and puerarin have been reported to mitigate iron overload in the cerebral cortex of mice with Alzheimer's disease. Adverse effects may occur if kudzu is taken by people with hormone-sensitive cancer or those taking tamoxifen, antidiabetic medications, or methotrexate.
Food
The roots contain starch, which has traditionally been used as a food ingredient in East and Southeast Asia. In Vietnam, the starch, called bột sắn dây, is flavoured with pomelo oil and then used as a drink in the summer. In Korea, the plant root is made into chikcha (칡차; "arrowroot tea"), used in traditional medicine, and its processed starch is used for culinary purposes, for example as the primary ingredient of chik-naengmyeon (칡냉면). In Japan, the plant is known as kuzu and the starch named kuzuko. Kuzuko is used in dishes including kuzumochi, mizu manjū, and kuzuyu. It also serves as a thickener for sauces, and can substitute for cornstarch.
The flowers are used to make a jelly that tastes similar to grape jelly. Roots, flowers, and leaves of kudzu show antioxidant activity that suggests food uses. Nearby bee colonies may forage on kudzu nectar during droughts as a last resort, producing a low-viscosity red or purple honey that tastes of grape jelly or bubblegum.
Folk medicine
Kudzu has also been used for centuries in East Asia as folk medicine using herbal teas and tinctures. Kudzu powder is used in Japan to make an herbal tea called kuzuyu. Kakkonto () is a herbal drink with its origin in traditional Chinese medicine, intended for people having various mild illnesses, such as headache.
Fiber
Kudzu fiber, known as ko-hemp, is used traditionally to make clothing and paper, and has also been investigated for industrial-scale use. Kudzu fiber is a bast fiber similar to hemp or linen, and has been used for clothing in China for at least 6,000 years and in Japan for at least 1,500 years. In ancient China, kudzu was one of three main clothing and textile materials, with silk and ramie being the other two.
Kudzu fiber is still used in Japan, primarily to weave cloth for garments worn in the summer.
Other uses
It may become a valuable asset for the production of cellulosic ethanol. In the Southern United States, kudzu is used to make soaps, lotions, and compost.
| Biology and health sciences | Fabales | Plants |
57875 | https://en.wikipedia.org/wiki/Soap | Soap | Soap is a salt of a fatty acid (sometimes other carboxylic acids) used for cleaning and lubricating products as well as other applications. In a domestic setting, soaps, specifically "toilet soaps", are surfactants usually used for washing, bathing, and other types of housekeeping. In industrial settings, soaps are used as thickeners, components of some lubricants, emulsifiers, and catalysts.
Soaps are often produced by mixing fats and oils with a base. Humans have used soap for millennia; evidence exists for the production of soap-like materials in ancient Babylon around 2800 BC.
Types
Toilet soaps
In a domestic setting, "soap" usually refers to what is technically called a toilet soap, used for household and personal cleaning. Toilet soaps are salts of fatty acids with the general formula (RCO2−)M+, where M is Na (sodium) or K (potassium).
When used for cleaning, soap solubilizes particles and grime, which can then be separated from the article being cleaned. The insoluble oil/fat "dirt" become associated inside micelles, tiny spheres formed from soap molecules with polar hydrophilic (water-attracting) groups on the outside and encasing a lipophilic (fat-attracting) pocket, which shields the oil/fat molecules from the water, making them soluble. Anything that is soluble will be washed away with the water. In hand washing, as a surfactant, when lathered with a little water, soap kills microorganisms by disorganizing their membrane lipid bilayer and denaturing their proteins. It also emulsifies oils, enabling them to be carried away by running water.
When used in hard water, soap does not lather well but forms soap scum (related to metallic soaps, see below).
Non-toilet soaps
So-called metallic soaps are key components of most lubricating greases and thickeners. A commercially important example is lithium stearate. Greases are usually emulsions of calcium soap or lithium soap and mineral oil. Many other metallic soaps are also useful, including those of aluminium, sodium, and mixtures thereof. Such soaps are also used as thickeners to increase the viscosity of oils. In ancient times, lubricating greases were made by the addition of lime to olive oil, which would produce calcium soaps. Metal soaps are also included in modern artists' oil paints formulations as a rheology modifier. Metal soaps can be prepared by neutralizing fatty acids with metal oxides:
2 RCO2H + CaO → (RCO2)2Ca + H2O
A cation from an organic base such as ammonium can be used instead of a metal; ammonium nonanoate is an ammonium-based soap that is used as an herbicide.
Another class of non-toilet soaps are resin soaps, which are produced in the paper industry by the action of tree rosin with alkaline reagents used to separate cellulose from raw wood. A major component of such soaps is the sodium salt of abietic acid. Resin soaps are used as emulsifiers.
Soapmaking
The production of toilet soaps usually entails saponification of triglycerides, which are vegetable or animal oils and fats. An alkaline solution (often lye or sodium hydroxide) induces saponification whereby the triglyceride fats first hydrolyze into salts of fatty acids. Glycerol (glycerin) is liberated. The glycerin is sometimes left in the soap product as a softening agent, although it is sometimes separated. Handmade soap can differ from industrially made soap in that an excess of fat or coconut oil beyond that needed to consume the alkali is used (in a cold-pour process, this excess fat is called "superfatting"), and the glycerol left in acts as a moisturizing agent. However, the glycerine also makes the soap softer. The addition of glycerol and processing of this soap produces glycerin soap. Superfatted soap is more skin-friendly than one without extra fat, although it can leave a "greasy" feel. Sometimes, an emollient is added, such as jojoba oil or shea butter. Sand or pumice may be added to produce a scouring soap. The scouring agents serve to remove dead cells from the skin surface being cleaned. This process is called exfoliation.
To make antibacterial soap, compounds such as triclosan or triclocarban can be added. There is some concern that use of antibacterial soaps and other products might encourage antimicrobial resistance in microorganisms.
The type of alkali metal used determines the kind of soap product. Sodium soaps, prepared from sodium hydroxide, are firm, whereas potassium soaps, derived from potassium hydroxide, are softer or often liquid. Historically, potassium hydroxide was extracted from the ashes of bracken or other plants. Lithium soaps also tend to be hard. These are used exclusively in greases.
For making toilet soaps, triglycerides (oils and fats) are derived from coconut, olive, or palm oils, as well as tallow. Triglyceride is the chemical name for the triesters of fatty acids and glycerin. Tallow, i.e., rendered fat, is the most available triglyceride from animals. Each species offers quite different fatty acid content, resulting in soaps of distinct feel. The seed oils give softer but milder soaps. Soap made from pure olive oil, sometimes called Castile soap or Marseille soap, is reputed for its particular mildness. The term "Castile" is also sometimes applied to soaps from a mixture of oils with a high percentage of olive oil.
Gallery
History
Proto-soaps in the Ancient world
Proto-soaps, which mixed fat and alkali and were used for cleansing, are mentioned in Sumerian, Babylonian and Egyptian texts.
The earliest recorded evidence of the production of soap-like materials dates back to around 2800 BC in ancient Babylon. A formula for making a soap-like substance was written on a Sumerian clay tablet around 2500 BC. This was produced by heating a mixture of oil and wood ash, the earliest recorded chemical reaction, and used for washing woolen clothing.
The Ebers papyrus (Egypt, 1550 BC) indicates the ancient Egyptians used a soap-like product as a medicine and created this by combining animal fats or vegetable oils with a soda ash substance called trona. Egyptian documents mention a similar substance was used in the preparation of wool for weaving.
In the reign of Nabonidus (556–539 BC), a recipe for a soap-like substance consisted of uhulu [ashes], cypress [oil] and sesame [seed oil] "for washing the stones for the servant girls".
True soaps in the Ancient world
True soaps, which we might recognise as soaps today, differed from proto-soaps. They foamed, were made deliberately, and could be produced in a hard or soft form because of an understanding of lye sources. It is uncertain who first invented true soap.
Knowledge of how to produce true soap emerged at some point between early mentions of proto-soaps and the first century AD. Alkali was used to clean textiles such as wool for thousands of years, but soap only forms when there is enough fat, and experiments show that washing wool does not create visible quantities of soap. Experiments by Sally Pointer show that the repeated laundering of materials used in perfume-making leads to noticeable amounts of soap forming. This fits with other evidence from Mesopotamian culture.
Pliny the Elder, whose writings chronicle life in the first century AD, describes soap as "an invention of the Gauls". The word sapo, Latin for soap, has been connected to a mythical Mount Sapo, a hill near the River Tiber where animals were sacrificed. But in all likelihood, the word was borrowed from an early Germanic language and is cognate with Latin sebum, "tallow". It first appears in Pliny the Elder's account, Historia Naturalis, which discusses the manufacture of soap from tallow and ashes. There he mentions its use in the treatment of scrofulous sores, as well as among the Gauls as a dye to redden hair, which the men in Germania were more likely to use than women. The Romans avoided washing with harsh soaps before encountering the milder soaps used by the Gauls around 58 BC. Aretaeus of Cappadocia, writing in the 2nd century AD, observes among "Celts, which are men called Gauls, those alkaline substances that are made into balls [...] called soap". The Romans' preferred method of cleaning the body was to massage oil into the skin and then scrape away both the oil and any dirt with a strigil. The standard design is a curved blade with a handle, all of which is made of metal.
The 2nd-century AD physician Galen describes soap-making using lye and prescribes washing to carry away impurities from the body and clothes. The use of soap for personal cleanliness became increasingly common in this period. According to Galen, the best soaps were Germanic, and soaps from Gaul were second best. Zosimos of Panopolis, circa 300 AD, describes soap and soapmaking.
In the Southern Levant, the ashes from barilla plants, such as species of Salsola, saltwort (Seidlitzia rosmarinus) and Anabasis, were used to make potash. Traditionally, olive oil was used instead of animal lard throughout the Levant; it was boiled in a copper cauldron for several days. As the boiling progressed, alkali ashes and smaller quantities of quicklime were added and constantly stirred. In the case of lard, constant stirring was required while the mixture was kept lukewarm until it began to trace. Once it began to thicken, the brew was poured into a mold and left to cool and harden for two weeks. After hardening, it was cut into smaller cakes. Aromatic herbs, such as yarrow leaves, lavender and germander, were often added to the rendered soap to impart their fragrance.
Ancient China
A detergent similar to soap was manufactured in ancient China from the seeds of Gleditsia sinensis. Another traditional detergent is a mixture of pig pancreas and plant ash called zhuyizi. Soap made of animal fat did not appear in China until the modern era. Soap-like detergents were not as popular as ointments and creams.
Islamic Golden Age
Hard toilet soap with a pleasant smell was produced in the Middle East during the Islamic Golden Age, when soap-making became an established industry. Recipes for soap-making are described by Muhammad ibn Zakariya al-Razi (c. 865–925), who also gave a recipe for producing glycerine from olive oil. In the Middle East, soap was produced from the interaction of fatty oils and fats with alkali. In Syria, soap was produced using olive oil together with alkali and lime. Soap was exported from Syria to other parts of the Muslim world and to Europe.
A 12th-century document describes the process of soap production. It mentions the key ingredient, alkali (a word derived from the Arabic al-qaly, "ashes"), which later became crucial to modern chemistry.
By the 13th century, the manufacture of soap in the Middle East had become a major cottage industry, with sources in Nablus, Fes, Damascus, and Aleppo.
Medieval Europe
Soapmakers in Naples were members of a guild in the late sixth century (then under the control of the Eastern Roman Empire), and in the eighth century, soap-making was well known in Italy and Spain. The Carolingian capitulary De Villis, dating to around 800 and representing the royal will of Charlemagne, mentions soap as one of the products the stewards of royal estates are to tally. Medieval Spain was a leading producer of soap by 800, and soapmaking began in the Kingdom of England about 1200. Soapmaking is mentioned both as "women's work" and as the produce of "good workmen", alongside other necessities such as the produce of carpenters, blacksmiths, and bakers.
In Europe, soap in the 9th century was produced from animal fats and had an unpleasant smell. This changed when olive oil began to be used in soap formulas instead, after which much of Europe's soap production moved to the Mediterranean olive-growing regions. Hard toilet soap was introduced to Europe by Arabs and gradually spread as a luxury item. It was often perfumed.
By the 15th century, the manufacture of soap in Christendom often took place on an industrial scale, with sources in Antwerp, Castile, Marseille, Naples and Venice.
16th–17th century
In France, by the second half of the 16th century, the semi-industrialized professional manufacture of soap was concentrated in a few centers of Provence—Toulon, Hyères, and Marseille—which supplied the rest of France. In Marseilles, by 1525, production was concentrated in at least two factories, and soap production at Marseille tended to eclipse the other Provençal centers.
English manufacture tended to concentrate in London. The demand for high-quality hard soap was significant enough during the Tudor period that barrels of ashes were imported for the manufacture of soap.
Finer soaps were later produced in Europe from the 17th century, using vegetable oils (such as olive oil) as opposed to animal fats. Many of these soaps are still produced, both industrially and by small-scale artisans. Castile soap is a popular example of the vegetable-only soaps derived from the oldest "white soap" of Italy. In 1634 Charles I granted the newly formed Society of Soapmakers a monopoly in soap production; the Society produced certificates from 'foure Countesses, and five Viscountesses, and divers other Ladies and Gentlewomen of great credite and quality, besides common Laundresses and others', testifying that 'the New White Soap washeth whiter and sweeter than the Old Soap'.
During the Restoration era (February 1665 – August 1714) a soap tax was introduced in England, which meant that until the mid-1800s, soap was a luxury, used regularly only by the well-to-do. The soap manufacturing process was closely supervised by revenue officials who made sure that soapmakers' equipment was kept under lock and key when not being supervised. Moreover, soap could not be produced by small makers because of a law that stipulated that soap boilers must manufacture a minimum quantity of one imperial ton at each boiling, which placed the process beyond the reach of the average person. The soap trade was boosted and deregulated when the tax was repealed in 1853.
Modern period
Industrially manufactured bar soaps became available in the late 18th century, as advertising campaigns in Europe and America promoted popular awareness of the relationship between cleanliness and health. In modern times, the use of soap has become commonplace in industrialized nations due to a better understanding of the role of hygiene in reducing the population size of pathogenic microorganisms.
Until the Industrial Revolution, soapmaking was conducted on a small scale and the product was rough. In 1780, James Keir established a chemical works at Tipton for the manufacture of alkali from the sulfates of potash and soda, to which he afterwards added a soap manufactory; the extraction method was based on a discovery of Keir's. In 1790, Nicolas Leblanc discovered how to make alkali from common salt. Andrew Pears started making a high-quality, transparent soap, Pears soap, in 1807 in London. His son-in-law, Thomas J. Barratt, became the brand manager (the first of its kind) for Pears in 1865. In 1882, Barratt recruited English actress and socialite Lillie Langtry to become the poster-girl for Pears soap, making her the first celebrity to endorse a commercial product.
William Gossage produced low-priced, good-quality soap from the 1850s. Robert Spear Hudson began manufacturing a soap powder in 1837, initially by grinding the soap with a mortar and pestle. American manufacturer Benjamin T. Babbitt introduced marketing innovations that included the sale of bar soap and distribution of product samples. William Hesketh Lever and his brother, James, bought a small soap works in Warrington in 1886 and founded what is still one of the largest soap businesses, formerly called Lever Brothers and now called Unilever. These soap businesses were among the first to employ large-scale advertising campaigns.
Liquid soap
Liquid soap was invented in the nineteenth century; in 1865, William Sheppard patented a liquid version of soap. In 1898, B.J. Johnson developed a soap derived from palm and olive oils; his company, the B.J. Johnson Soap Company, introduced "Palmolive" brand soap that same year. This new brand of soap became popular rapidly, and to such a degree that B.J. Johnson Soap Company changed its name to Palmolive.
In the early 1900s, other companies began to develop their own liquid soaps. Such products as Pine-Sol and Tide appeared on the market, making the process of cleaning things other than skin, such as clothing, floors, and bathrooms, much easier.
Liquid soap also works better for more traditional or non-machine washing methods, such as using a washboard.
| Technology | Food, water and health | null |
57877 | https://en.wikipedia.org/wiki/Sodium%20hydroxide | Sodium hydroxide | Sodium hydroxide, also known as lye and caustic soda, is an inorganic compound with the formula NaOH. It is a white solid ionic compound consisting of sodium cations (Na+) and hydroxide anions (OH−).
Sodium hydroxide is a highly corrosive base and alkali that decomposes lipids and proteins at ambient temperatures and may cause severe chemical burns. It is highly soluble in water, and readily absorbs moisture and carbon dioxide from the air. It forms a series of hydrates NaOH·nH2O. The monohydrate NaOH·H2O crystallizes from water solutions between 12.3 and 61.8 °C. The commercially available "sodium hydroxide" is often this monohydrate, and published data may refer to it instead of the anhydrous compound.
As one of the simplest hydroxides, sodium hydroxide is frequently used alongside neutral water and acidic hydrochloric acid to demonstrate the pH scale to chemistry students.
Sodium hydroxide is used in many industries: in the making of wood pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2022 was approximately 83 million tons.
Properties
Physical properties
Pure sodium hydroxide is a colorless crystalline solid that melts at 318 °C without decomposition and boils at 1,388 °C. It is highly soluble in water, with a lower solubility in polar solvents such as ethanol and methanol. Sodium hydroxide is insoluble in ether and other non-polar solvents.
Similar to the hydration of sulfuric acid, dissolution of solid sodium hydroxide in water is a highly exothermic reaction where a large amount of heat is liberated, posing a threat to safety through the possibility of splashing. The resulting solution is usually colorless and odorless. As with other alkaline solutions, it feels slippery with skin contact due to the process of saponification that occurs between and natural skin oils.
Viscosity
Concentrated (50%) aqueous solutions of sodium hydroxide have a characteristic viscosity, 78 mPa·s, that is much greater than that of water (1.0 mPa·s) and near that of olive oil (85 mPa·s) at room temperature. The viscosity of aqueous , as with any liquid chemical, is inversely related to its temperature, i.e., its viscosity decreases as temperature increases, and vice versa. The viscosity of sodium hydroxide solutions plays a direct role in its application as well as its storage.
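As an illustration of that temperature dependence, the sketch below fits an Arrhenius-type model through the 25 °C value quoted above; the activation energy is an assumed, purely illustrative parameter, not a measured one.

```python
import math

# Arrhenius-type viscosity model: eta(T) = A * exp(Ea / (R * T)).
# Only the 25 °C datum (78 mPa·s, from the text) is taken as given;
# Ea is a made-up illustrative value.
R = 8.314          # J/(mol·K)
Ea = 25_000        # J/mol, assumed for illustration
eta_25 = 78.0      # mPa·s at 298.15 K (50% NaOH solution)
A = eta_25 / math.exp(Ea / (R * 298.15))

def eta(t_celsius):
    T = t_celsius + 273.15
    return A * math.exp(Ea / (R * T))

for t in (25, 40, 60, 80):
    print(f"{t} °C: {eta(t):.0f} mPa·s")  # viscosity falls as T rises
```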
Hydrates
Sodium hydroxide can form several hydrates NaOH·nH2O, which result in a complex solubility diagram that was described in detail by Spencer Umfreville Pickering in 1893. The known hydrates and the approximate ranges of temperature and concentration (mass percent of NaOH) of their saturated water solutions are:
Heptahydrate, NaOH·7H2O: from −28 °C (18.8%) to −24 °C (22.2%).
Pentahydrate, NaOH·5H2O: from −24 °C (22.2%) to −17.7 °C (24.8%).
Tetrahydrate, NaOH·4H2O, α form: from −17.7 °C (24.8%) to 5.4 °C (32.5%).
Tetrahydrate, NaOH·4H2O, β form: metastable.
Trihemihydrate, NaOH·3.5H2O: from 5.4 °C (32.5%) to 15.38 °C (38.8%) and then to 5.0 °C (45.7%).
Trihydrate, NaOH·3H2O: metastable.
Dihydrate, NaOH·2H2O: from 5.0 °C (45.7%) to 12.3 °C (51%).
Monohydrate, NaOH·H2O: from 12.3 °C (51%) to 65.10 °C (69%) then to 62.63 °C (73.1%).
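These ranges lend themselves to a coarse lookup table. The sketch below simply transcribes the list above (metastable forms omitted); it is only a rough approximation of the full Pickering solubility diagram, which also depends on concentration.

```python
# Approximate saturation temperature ranges (°C) for NaOH hydrates,
# transcribed from the list above. The real phase diagram (Pickering,
# 1893) is far richer; metastable forms are omitted here.
HYDRATE_RANGES = [
    ("heptahydrate NaOH·7H2O",      -28.0, -24.0),
    ("pentahydrate NaOH·5H2O",      -24.0, -17.7),
    ("tetrahydrate NaOH·4H2O (α)",  -17.7,   5.4),
    ("trihemihydrate NaOH·3.5H2O",    5.4,  15.38),
    ("dihydrate NaOH·2H2O",           5.0,  12.3),
    ("monohydrate NaOH·H2O",         12.3,  65.10),
]

def hydrates_at(t_celsius):
    """Hydrates whose saturated solutions exist at this temperature."""
    return [name for name, lo, hi in HYDRATE_RANGES if lo <= t_celsius <= hi]

print(hydrates_at(10.0))
# ['trihemihydrate NaOH·3.5H2O', 'dihydrate NaOH·2H2O']
```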
Early reports refer to hydrates with n = 0.5 or n = 2/3, but later careful investigations failed to confirm their existence.
The only hydrates with stable melting points are NaOH·H2O (65.10 °C) and NaOH·3.5H2O (15.38 °C). The other hydrates, except the metastable NaOH·3H2O and NaOH·4H2O (β), can be crystallized from solutions of the proper composition, as listed above. However, solutions of NaOH can be easily supercooled by many degrees, which allows the formation of hydrates (including the metastable ones) from solutions with different concentrations.
For example, when a solution of NaOH and water with 1:2 mole ratio (52.6% NaOH by mass) is cooled, the monohydrate normally starts to crystallize (at about 22 °C) before the dihydrate. However, the solution can easily be supercooled down to −15 °C, at which point it may quickly crystallize as the dihydrate. When heated, the solid dihydrate might melt directly into a solution at 13.35 °C; however, once the temperature exceeds 12.58 °C it often decomposes into solid monohydrate and a liquid solution. Even the n = 3.5 hydrate is difficult to crystallize, because the solution supercools so much that other hydrates become more stable.
A hot water solution containing 73.1% (mass) of NaOH is a eutectic that solidifies at about 62.63 °C as an intimate mix of anhydrous NaOH and monohydrate crystals.
A second stable eutectic composition is 45.4% (mass) of NaOH, that solidifies at about 4.9 °C into a mixture of crystals of the dihydrate and of the 3.5-hydrate.
The third stable eutectic has 18.4% (mass) of NaOH. It solidifies at about −28.7 °C as a mixture of water ice and the heptahydrate NaOH·7H2O.
When solutions with less than 18.4% NaOH are cooled, water ice crystallizes first, leaving the NaOH in solution.
The α form of the tetrahydrate has density 1.33 g/cm3. It melts congruently at 7.55 °C into a liquid with 35.7% NaOH and density 1.392 g/cm3, and therefore floats on it like ice on water. However, at about 4.9 °C it may instead melt incongruently into a mixture of solid NaOH·3.5H2O and a liquid solution.
The β form of the tetrahydrate is metastable, and often transforms spontaneously to the α form when cooled below −20 °C. Once initiated, the exothermic transformation is complete in a few minutes, with a 6.5% increase in volume of the solid. The β form can be crystallized from supercooled solutions at −26 °C, and melts partially at −1.83 °C.
The "sodium hydroxide" of commerce is often the monohydrate (density 1.829 g/cm3). Physical data in technical literature may refer to this form, rather than the anhydrous compound.
Crystal structure
NaOH and its monohydrate form orthorhombic crystals with the space groups Cmcm (oS8) and Pbca (oP24), respectively. The monohydrate cell dimensions are a = 1.1825, b = 0.6213, c = 0.6069 nm. The atoms are arranged in a hydrargillite-like layer structure, with each sodium atom surrounded by six oxygen atoms, three each from hydroxide ions and three from water molecules. The hydrogen atoms of the hydroxyls form strong bonds with oxygen atoms within each O layer. Adjacent O layers are held together by hydrogen bonds between water molecules.
Chemical properties
Reaction with acids
Sodium hydroxide reacts with protic acids to produce water and the corresponding salts. For example, when sodium hydroxide reacts with hydrochloric acid, sodium chloride is formed:
NaOH + HCl → NaCl + H2O
In general, such neutralization reactions are represented by one simple net ionic equation:
OH− + H+ → H2O
This type of reaction with a strong acid releases heat, and hence is exothermic. Such acid–base reactions can also be used for titrations. However, sodium hydroxide is not used as a primary standard because it is hygroscopic and absorbs carbon dioxide from air.
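As a minimal illustration of such an acid–base titration, the sketch below standardizes an NaOH sample against HCl using the 1:1 stoichiometry above; all volumes and the acid molarity are hypothetical illustration values.

```python
# Minimal titration arithmetic for NaOH + HCl -> NaCl + H2O (1:1 mole
# ratio). The volumes and molarity below are hypothetical.
def naoh_molarity(v_hcl_mL, m_hcl, v_naoh_mL):
    """Concentration of the NaOH sample from the HCl volume at the endpoint."""
    moles_hcl = (v_hcl_mL / 1000) * m_hcl   # mol of acid delivered
    return moles_hcl / (v_naoh_mL / 1000)   # 1:1 stoichiometry

print(naoh_molarity(v_hcl_mL=23.4, m_hcl=0.100, v_naoh_mL=25.0))  # ≈0.0936 M
```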
Reaction with acidic oxides
Sodium hydroxide also reacts with acidic oxides, such as sulfur dioxide. Such reactions are often used to "scrub" harmful acidic gases (like SO2 and CO2) produced in the burning of coal and thus prevent their release into the atmosphere. For example,
2 NaOH + SO2 → Na2SO3 + H2O
Reaction with metals and oxides
Glass reacts slowly with aqueous sodium hydroxide solutions at ambient temperatures to form soluble silicates. Because of this, glass joints and stopcocks exposed to sodium hydroxide have a tendency to "freeze". Flasks and glass-lined chemical reactors are damaged by long exposure to hot sodium hydroxide, which also frosts the glass. Sodium hydroxide does not attack iron at room temperature, since iron does not have amphoteric properties (i.e., it only dissolves in acid, not base).
Nevertheless, at high temperatures (e.g. above 500 °C), iron can react endothermically with sodium hydroxide to form iron(III) oxide, sodium metal, and hydrogen gas. This is due to the lower enthalpy of formation of iron(III) oxide (−824.2 kJ/mol) compared to sodium hydroxide (−500 kJ/mol) and the positive entropy change of the reaction, which implies spontaneity at high temperatures (ΔG < 0) and non-spontaneity at low temperatures (ΔG > 0). Consider the following reaction between molten sodium hydroxide and finely divided iron filings:
4 Fe + 6 NaOH → 2 Fe2O3 + 6 Na + 3 H2
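A quick check of these figures: using the quoted formation enthalpies (elements in their standard states contribute zero), the reaction enthalpy comes out positive, i.e. endothermic, and with a positive entropy change the Gibbs energy flips sign at high temperature. The entropy value in the sketch is an assumed, illustrative number only.

```python
# Sign check for 4 Fe + 6 NaOH -> 2 Fe2O3 + 6 Na + 3 H2, using the
# formation enthalpies quoted in the text (kJ/mol); pure elements
# contribute zero.
dHf = {"Fe2O3": -824.2, "NaOH": -500.0}

dH = 2 * dHf["Fe2O3"] - 6 * dHf["NaOH"]   # ≈ +1351.6 kJ -> endothermic
dS = 1.0   # kJ/(mol·K); assumed positive value, purely illustrative

def dG(T_kelvin):
    """Gibbs free energy change; negative means spontaneous."""
    return dH - T_kelvin * dS

print(dH)                  # 1351.6
print(dG(300), dG(1500))   # positive at low T, negative at high T
```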
A few transition metals, however, may react quite vigorously with sodium hydroxide under milder conditions.
In 1986, an aluminium road tanker in the UK was mistakenly used to transport 25% sodium hydroxide solution, causing pressurization of the contents and damage to the tanker. The pressurization is due to the hydrogen gas produced in the reaction between sodium hydroxide and aluminium:
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Precipitant
Unlike sodium hydroxide, which is soluble, the hydroxides of most transition metals are insoluble, and therefore sodium hydroxide can be used to precipitate transition metal hydroxides. The following colours are observed:
Copper - blue
Iron(II) - green
Iron(III) - yellow / brown
Zinc and lead salts dissolve in excess sodium hydroxide to give a clear solution of sodium zincate (Na2ZnO2) or sodium plumbate (Na2PbO2).
Aluminium hydroxide is used as a gelatinous flocculant to filter out particulate matter in water treatment. Aluminium hydroxide is prepared at the treatment plant from aluminium sulfate by reacting it with sodium hydroxide or bicarbonate.
Saponification
Sodium hydroxide can be used for the base-driven hydrolysis of esters (also called saponification), amides and alkyl halides. However, the limited solubility of sodium hydroxide in organic solvents means that the more soluble potassium hydroxide (KOH) is often preferred. Touching a sodium hydroxide solution with bare hands, while not recommended, produces a slippery feeling. This happens because oils on the skin such as sebum are converted to soap.
Although sodium hydroxide is soluble in propylene glycol, propylene glycol is unlikely to replace water in saponification, because propylene glycol reacts with the fat before the sodium hydroxide and fat can react.
Production
Sodium hydroxide is industrially produced, first as a 32% solution, and then evaporated to a 50% solution by variations of the electrolytic chloralkali process. Chlorine gas is also produced in this process. Solid sodium hydroxide is obtained from this solution by the evaporation of water. Solid sodium hydroxide is most commonly sold as flakes, prills, and cast blocks.
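The concentration step from 32% to 50% is a simple mass balance: the NaOH stays in the liquor while only water leaves. A minimal sketch:

```python
# Mass balance for evaporating a 32% NaOH liquor up to 50%: the NaOH
# mass is conserved, so only water is removed.
def water_to_remove(feed_kg, c_in=0.32, c_out=0.50):
    naoh = feed_kg * c_in       # kg NaOH, unchanged by evaporation
    product = naoh / c_out      # kg of 50% solution produced
    return feed_kg - product    # kg water evaporated

print(water_to_remove(1000))    # 360.0 kg of water per tonne of 32% feed
```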
In 2022, world production was estimated at 83 million dry tonnes of sodium hydroxide, and demand was estimated at 51 million tonnes. In 1998, total world production was around 45 million tonnes. North America and Asia each contributed around 14 million tonnes, while Europe produced around 10 million tonnes. In the United States, the major producer of sodium hydroxide is Olin, which has annual production around 5.7 million tonnes from sites at Freeport, Texas; Plaquemine, Louisiana; St. Gabriel, Louisiana; McIntosh, Alabama; Charleston, Tennessee; Niagara Falls, New York; and Bécancour, Canada. Other major US producers include Oxychem, Westlake, Shintech, and Formosa. All of these companies use the chloralkali process.
Historically, sodium hydroxide was produced by treating sodium carbonate with calcium hydroxide (slaked lime) in a metathesis reaction which takes advantage of the fact that sodium hydroxide is soluble, while calcium carbonate is not. This process was called causticizing.
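The causticizing metathesis is Na2CO3 + Ca(OH)2 → 2 NaOH + CaCO3. A minimal stoichiometric sketch of the reagent demand, using standard molar masses:

```python
# Stoichiometry of causticizing: Na2CO3 + Ca(OH)2 -> 2 NaOH + CaCO3.
# Molar masses in g/mol (standard values).
M = {"Na2CO3": 105.99, "CaOH2": 74.09, "NaOH": 40.00}

def reagents_per_kg_naoh():
    mol_naoh = 1000 / M["NaOH"]     # mol NaOH in 1 kg
    mol_rxn = mol_naoh / 2          # each reaction event yields 2 NaOH
    return (mol_rxn * M["Na2CO3"] / 1000,   # kg soda ash
            mol_rxn * M["CaOH2"] / 1000)    # kg slaked lime

soda, lime = reagents_per_kg_naoh()
print(f"{soda:.2f} kg Na2CO3 and {lime:.2f} kg Ca(OH)2 per kg NaOH")
# ≈1.32 kg Na2CO3 and ≈0.93 kg Ca(OH)2
```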
The sodium carbonate for this reaction was produced by the Leblanc process in the early 19th century, or the Solvay process in the late 19th century. The conversion of sodium carbonate to sodium hydroxide was superseded entirely by the chloralkali process, which produces sodium hydroxide in a single process.
Sodium hydroxide is also produced by combining pure sodium metal with water. The byproducts are hydrogen gas and heat, often resulting in a flame.
This reaction is commonly used for demonstrating the reactivity of alkali metals in academic environments; however, it is not used commercially aside from a reaction within the mercury cell chloralkali process where sodium amalgam is reacted with water.
Uses
Sodium hydroxide is a popular strong base used in industry. Sodium hydroxide is used in the manufacture of sodium salts and detergents, pH regulation, and organic synthesis. In bulk, it is most often handled as an aqueous solution, since solutions are cheaper and easier to handle.
Sodium hydroxide is used in many scenarios where it is desirable to increase the alkalinity of a mixture, or to neutralize acids. For example, in the petroleum industry, sodium hydroxide is used as an additive in drilling mud to increase alkalinity in bentonite mud systems, to increase the mud viscosity, and to neutralize any acid gas (such as hydrogen sulfide and carbon dioxide) which may be encountered in the geological formation as drilling progresses. Another use is in salt spray testing where pH needs to be regulated. Sodium hydroxide is used with hydrochloric acid to balance pH. The resultant salt, NaCl, is the corrosive agent used in the standard neutral pH salt spray test.
Poor quality crude oil can be treated with sodium hydroxide to remove sulfurous impurities in a process known as caustic washing. Sodium hydroxide reacts with weak acids such as hydrogen sulfide and mercaptans to yield non-volatile sodium salts, which can be removed. The waste which is formed is toxic and difficult to deal with, and the process is banned in many countries because of this. In 2006, Trafigura used the process and then dumped the waste in Ivory Coast.
Other common uses of sodium hydroxide include:
for making soaps and detergents. Sodium hydroxide is used for hard bar soap, while potassium hydroxide is used for liquid soaps. Sodium hydroxide is used more often than potassium hydroxide because it is cheaper and a smaller quantity is needed.
as drain cleaners that convert pipe-clogging fats and grease into soap, which dissolves in water
for making artificial textile fibres such as rayon
in the manufacture of paper. Around 56% of sodium hydroxide produced is used by industry, 25% of which is used in the paper industry.
in purifying bauxite ore from which aluminium metal is extracted. This is known as the Bayer process.
de-greasing metals
oil refining
making dyes and bleaches
in water treatment plants for pH regulation
to treat bagels and pretzel dough, giving the distinctive shiny finish
Chemical pulping
Sodium hydroxide is also widely used in pulping of wood for making paper or regenerated fibers. Along with sodium sulfide, sodium hydroxide is a key component of the white liquor solution used to separate lignin from cellulose fibers in the kraft process. It also plays a key role in several later stages of the process of bleaching the brown pulp resulting from the pulping process. These stages include oxygen delignification, oxidative extraction, and simple extraction, all of which require a strong alkaline environment with a pH > 10.5 at the end of the stages.
Tissue digestion
In a similar fashion, sodium hydroxide is used to digest tissues, as in a process that was used with farm animals at one time. This process involved placing a carcass into a sealed chamber, then adding a mixture of sodium hydroxide and water (which breaks the chemical bonds that keep the flesh intact). This eventually turns the body into a liquid with a dark brown color, and the only solids that remain are bone hulls, which can be crushed between one's fingertips.
Sodium hydroxide is frequently used in the process of decomposing roadkill dumped in landfills by animal disposal contractors. Due to its availability and low cost, it has been used by criminals to dispose of corpses. Italian serial killer Leonarda Cianciulli used this chemical to turn dead bodies into soap. In Mexico, a man who worked for drug cartels admitted disposing of over 300 bodies with it.
Sodium hydroxide is a dangerous chemical due to its ability to hydrolyze protein. If a dilute solution is spilled on the skin, burns may result if the area is not washed thoroughly and for several minutes with running water. Splashes in the eye can be more serious and can lead to blindness.
Dissolving amphoteric metals and compounds
Strong bases attack aluminium. Sodium hydroxide reacts with aluminium and water to release hydrogen gas. The aluminium takes an oxygen atom from sodium hydroxide, which in turn takes an oxygen atom from water, and releases two hydrogen atoms. The reaction thus produces hydrogen gas and sodium aluminate. In this reaction, sodium hydroxide acts as an agent to make the solution alkaline, which aluminium can dissolve in.
2 Al + 2 NaOH + 2 H2O → 2 NaAlO2 + 3 H2
Sodium aluminate is an inorganic chemical that is used as an effective source of aluminium hydroxide for many industrial and technical applications. Pure sodium aluminate (anhydrous) is a white crystalline solid having a formula variously given as NaAlO2, Na2O·Al2O3, Na2Al2O4, or, for the hydrated form, NaAl(OH)4. Formation of sodium tetrahydroxoaluminate(III), or hydrated sodium aluminate, is given by:
2 Al + 2 NaOH + 6 H2O → 2 NaAl(OH)4 + 3 H2
This reaction can be useful in etching, removing anodizing, or converting a polished surface to a satin-like finish, but without further passivation such as anodizing or alodining the surface may become degraded, either under normal use or in severe atmospheric conditions.
In the Bayer process, sodium hydroxide is used in the refining of alumina containing ores (bauxite) to produce alumina (aluminium oxide) which is the raw material used to produce aluminium via the electrolytic Hall-Héroult process. Since the alumina is amphoteric, it dissolves in the sodium hydroxide, leaving impurities less soluble at high pH such as iron oxides behind in the form of a highly alkaline red mud.
Other amphoteric metals are zinc and lead which dissolve in concentrated sodium hydroxide solutions to give sodium zincate and sodium plumbate respectively.
Esterification and transesterification reagent
Sodium hydroxide is traditionally used in soap making (cold process soap, saponification). In the nineteenth century it was used to make hard soap rather than a liquid product, because hard soap was easier to store and transport.
For the manufacture of biodiesel, sodium hydroxide is used as a catalyst for the transesterification of methanol and triglycerides. This only works with anhydrous sodium hydroxide, because combined with water the fat would turn into soap, which would be tainted with methanol. NaOH is used more often than potassium hydroxide because NaOH, which is produced from common salt, is cheaper, and a smaller quantity is needed.
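As a rough sizing sketch for such a batch: the molar mass below approximates a triolein-like oil, and the 6:1 methanol excess and roughly 1% catalyst loading are common rules of thumb, all assumptions rather than figures from this article.

```python
# Rough transesterification batch sizing:
#   triglyceride + 3 CH3OH -> 3 methyl esters + glycerol (NaOH catalyst).
# 885 g/mol approximates a triolein-like oil; 6:1 methanol:oil mole ratio
# and ~1% NaOH by oil weight are assumed rules of thumb.
M_OIL, M_MEOH = 885.0, 32.04

def batch(oil_kg, meoh_to_oil=6.0, naoh_frac=0.01):
    mol_oil = oil_kg * 1000 / M_OIL
    meoh_kg = mol_oil * meoh_to_oil * M_MEOH / 1000
    return meoh_kg, oil_kg * naoh_frac

meoh, naoh = batch(100)
print(f"{meoh:.1f} kg methanol, {naoh:.1f} kg NaOH per 100 kg oil")
# ≈21.7 kg methanol, 1.0 kg NaOH
```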
Skincare ingredient
Sodium hydroxide is an ingredient used in some skincare and cosmetic products, such as facial cleansers, creams, lotions, and makeup. It is typically used in low concentration as a pH balancer, due to its highly alkaline nature.
Food preparation
Food uses of sodium hydroxide include washing or chemical peeling of fruits and vegetables, chocolate and cocoa processing, caramel coloring production, poultry scalding, soft drink processing, and thickening ice cream. Olives are often soaked in sodium hydroxide for softening; pretzels and German lye rolls are glazed with a sodium hydroxide solution before baking to make them crisp. Owing to the difficulty in obtaining food grade sodium hydroxide in small quantities for home use, sodium carbonate is often used in place of sodium hydroxide. It is known as E number E524.
Specific foods processed with sodium hydroxide include:
German pretzels are poached in a boiling sodium carbonate solution or cold sodium hydroxide solution before baking, which contributes to their unique crust.
Lye water is an essential ingredient in the crust of the traditional baked Chinese moon cakes.
Most yellow coloured Chinese noodles are made with lye water but are commonly mistaken for containing egg.
One variety of zongzi uses lye water to impart a sweet flavor.
Sodium hydroxide causes gelling of egg whites in the production of century eggs.
Some methods of preparing olives involve subjecting them to a lye-based brine.
The Filipino dessert (kakanin) called kutsinta uses a small quantity of lye water to help give the rice flour batter a jelly-like consistency. A similar process is also used in the kakanin known as pitsi-pitsi or pichi-pichi, except that the mixture uses grated cassava instead of rice flour.
The Norwegian dish known as lutefisk.
Bagels are often boiled in a lye solution before baking, contributing to their shiny crust.
Hominy is dried maize (corn) kernels reconstituted by soaking in lye-water. These expand considerably in size and may be further processed by frying to make corn nuts or by drying and grinding to make grits. Hominy is used to create masa, a popular flour used in Mexican cuisine to make corn tortillas and tamales. Nixtamal is similar, but uses calcium hydroxide instead of sodium hydroxide.
Cleaning agent
Sodium hydroxide is frequently used as an industrial cleaning agent where it is often called "caustic". It is added to water, heated, and then used to clean process equipment, storage tanks, etc. It can dissolve grease, oils, fats and protein-based deposits. It is also used for cleaning waste discharge pipes under sinks and drains in domestic properties. Surfactants can be added to the sodium hydroxide solution in order to stabilize dissolved substances and thus prevent redeposition. A sodium hydroxide soak solution is used as a powerful degreaser on stainless steel and glass bakeware. It is also a common ingredient in oven cleaners.
A common use of sodium hydroxide is in the production of parts washer detergents. Parts washer detergents based on sodium hydroxide are some of the most aggressive parts washer cleaning chemicals. The sodium hydroxide-based detergents include surfactants, rust inhibitors and defoamers. A parts washer heats water and the detergent in a closed cabinet and then sprays the heated sodium hydroxide and hot water at pressure against dirty parts for degreasing applications. Sodium hydroxide used in this manner replaced many solvent-based systems in the early 1990s when trichloroethane was outlawed by the Montreal Protocol. Water and sodium hydroxide detergent-based parts washers are considered to be an environmental improvement over the solvent-based cleaning methods.
Sodium hydroxide is used in the home as a drain opener to unblock clogged drains, usually in the form of a dry crystal or a thick liquid gel. The alkali dissolves greases to produce water-soluble products. It also hydrolyzes proteins, such as those found in hair, which may block water pipes. These reactions are sped up by the heat generated when sodium hydroxide and the other chemical components of the cleaner dissolve in water. Such alkaline drain cleaners and their acidic versions are highly corrosive and should be handled with great caution.
Relaxer
Sodium hydroxide is used in some relaxers to straighten hair. However, because of the high incidence and intensity of chemical burns, manufacturers of chemical relaxers use other alkaline chemicals in preparations available to consumers. Sodium hydroxide relaxers are still available, but they are used mostly by professionals.
Paint stripper
A solution of sodium hydroxide in water was traditionally used as the most common paint stripper on wooden objects. Its use has become less common, because it can damage the wood surface, raising the grain and staining the colour.
Water treatment
Sodium hydroxide is sometimes used during water purification to raise the pH of water supplies. Increased pH makes the water less corrosive to plumbing and reduces the amount of lead, copper and other toxic metals that can dissolve into drinking water.
Historical uses
Sodium hydroxide has been used for detection of carbon monoxide poisoning, with blood samples of such patients turning to a vermilion color upon the addition of a few drops of sodium hydroxide. Today, carbon monoxide poisoning can be detected by CO oximetry.
In cement mixes, mortars, concrete, grouts
Sodium hydroxide is used in some cement mix plasticisers. This helps homogenise cement mixes, preventing segregation of sands and cement, decreases the amount of water required in a mix and increases workability of the cement product, be it mortar, render or concrete.
Safety
Like other corrosive acids and alkalis, a few drops of sodium hydroxide solution can readily decompose proteins and lipids in living tissues via amide and ester hydrolysis, which consequently causes chemical burns and may induce permanent blindness upon contact with the eyes. Solid alkali is also corrosive in the presence of water, such as water vapor. Thus, protective equipment, like rubber gloves, safety clothing and eye protection, should always be used when handling this chemical or its solutions. The standard first aid measure for alkali spills on the skin is, as for other corrosives, irrigation with large quantities of water; washing is continued for at least ten to fifteen minutes.
Moreover, dissolution of sodium hydroxide is highly exothermic, and the resulting heat may cause heat burns or ignite flammables. It also produces heat when reacted with acids.
Sodium hydroxide is mildly corrosive to glass, which can cause damage to glazing or cause ground glass joints to bind. Sodium hydroxide is corrosive to several metals, like aluminium which reacts with the alkali to produce flammable hydrogen gas on contact.
Storage
Careful storage is needed when handling sodium hydroxide for use, especially bulk volumes. Following proper NaOH storage guidelines and maintaining worker/environment safety is always recommended given the chemical's burn hazard.
Sodium hydroxide is often stored in bottles for small-scale laboratory use, within intermediate bulk containers (medium volume containers) for cargo handling and transport, or within large stationary storage tanks with volumes up to 100,000 gallons for manufacturing or waste water plants with extensive NaOH use. Common materials that are compatible with sodium hydroxide and often utilized for NaOH storage include: polyethylene (HDPE, usual, XLPE, less common), carbon steel, polyvinyl chloride (PVC), stainless steel, and fiberglass reinforced plastic (FRP, with a resistant liner).
Sodium hydroxide must be stored in airtight containers to preserve its normality as it will absorb water and carbon dioxide from the atmosphere.
History
Sodium hydroxide was first prepared by soap makers. A procedure for making sodium hydroxide appeared as part of a recipe for making soap in an Arab book of the late 13th century: (Inventions from the Various Industrial Arts), which was compiled by al-Muzaffar Yusuf ibn 'Umar ibn 'Ali ibn Rasul (d. 1295), a king of Yemen. The recipe called for passing water repeatedly through a mixture of alkali (Arabic: , where is ash from saltwort plants, which are rich in sodium; hence alkali was impure sodium carbonate) and quicklime (calcium oxide, CaO), whereby a solution of sodium hydroxide was obtained. European soap makers also followed this recipe. When in 1791 the French chemist and surgeon Nicolas Leblanc (1742–1806) patented a process for mass-producing sodium carbonate, natural "soda ash" (impure sodium carbonate that was obtained from the ashes of plants that are rich in sodium) was replaced by this artificial version. However, by the 20th century, the electrolysis of sodium chloride had become the primary method for producing sodium hydroxide.
| Physical sciences | Inorganic compounds | null |
57880 | https://en.wikipedia.org/wiki/In%20vitro%20fertilisation | In vitro fertilisation | In vitro fertilisation (IVF) is a process of fertilisation in which an egg is combined with sperm in vitro ("in glass"). The process involves monitoring and stimulating a woman's ovulatory process, then removing an ovum or ova (egg or eggs) from her ovaries and enabling a man's sperm to fertilise them in a culture medium in a laboratory. After a fertilised egg (zygote) undergoes embryo culture for 2–6 days, it is transferred by catheter into the uterus, with the intention of establishing a successful pregnancy.
IVF is a type of assisted reproductive technology used to treat infertility, enable gestational surrogacy, and, in combination with pre-implantation genetic testing, avoid the transmission of abnormal genetic conditions. When a fertilised egg from egg and sperm donors implants in the uterus of a genetically unrelated surrogate, the resulting child is also genetically unrelated to the surrogate. Some countries have banned or otherwise regulated the availability of IVF treatment, giving rise to fertility tourism. Financial cost and age may also restrict the availability of IVF as a means of carrying a healthy pregnancy to term.
In July 1978, Louise Brown was the first child successfully born after her mother received IVF treatment. Brown was born as a result of natural-cycle IVF, where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital (later Dr Kershaw's Hospice) in Royton, Oldham, England. Robert Edwards was awarded the Nobel Prize in Physiology or Medicine in 2010. (The physiologist co-developed the treatment together with Patrick Steptoe and embryologist Jean Purdy but the latter two were not eligible for consideration as they had died: the Nobel Prize is not awarded posthumously.)
When assisted by egg donation and IVF, many women who have reached menopause, have infertile partners, or have idiopathic female-fertility issues, can still become pregnant. After the IVF treatment, some couples get pregnant without any fertility treatments. In 2023, it was estimated that twelve million children had been born worldwide using IVF and other assisted reproduction techniques. A 2019 study that evaluated the use of 10 adjuncts with IVF (screening hysteroscopy, DHEA, testosterone, GH, aspirin, heparin, antioxidants, seminal plasma and PRP) suggested that (with the exception of hysteroscopy) these adjuncts should be avoided until there is more evidence to show that they are safe and effective.
Terminology
The Latin term in vitro, meaning "in glass", is used because early biological experiments involving cultivation of tissues outside the living organism were carried out in glass containers, such as beakers, test tubes, or Petri dishes. The modern scientific term "in vitro" refers to any biological procedure that is performed outside the organism in which it would normally have occurred, to distinguish it from an in vivo procedure (such as in vivo fertilisation), where the tissue remains inside the living organism in which it is normally found.
A colloquial term for babies conceived as the result of IVF, "test tube babies", refers to the tube-shaped containers of glass or plastic resin, called test tubes, that are commonly used in chemistry and biology labs. However, IVF is usually performed in Petri dishes, which are both wider and shallower and often used to cultivate cultures.
IVF is a form of assisted reproductive technology.
History
The first successful birth of a child after IVF treatment, Louise Brown, occurred in 1978. Louise Brown was born as a result of natural cycle IVF where no stimulation was made. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Robert G. Edwards, the physiologist who co-developed the treatment, was awarded the Nobel Prize in Physiology or Medicine in 2010. His co-workers, Patrick Steptoe and Jean Purdy, were not eligible for consideration as the Nobel Prize is not awarded posthumously.
The second successful birth of a 'test tube baby' occurred in India on October 3, 1978, just 67 days after Louise Brown was born. The girl, named Durga, was conceived in vitro using a method developed independently by Subhash Mukhopadhyay, a physician and researcher from Hazaribag. Mukhopadhyay had been performing experiments on his own with primitive instruments and a household refrigerator. However, state authorities prevented him from presenting his work at scientific conferences, and it was many years before Mukhopadhyay's contribution was acknowledged in works dealing with the subject.
Adriana Iliescu held the record as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66, a record passed in 2006. After the IVF treatment some couples are able to get pregnant without any fertility treatments. In 2018 it was estimated that eight million children had been born worldwide using IVF and other assisted reproduction techniques.
Medical uses
Indications
IVF may be used to overcome female infertility when it is due to problems with the fallopian tubes, making in vivo fertilisation difficult. It can also assist in male infertility, in those cases where there is a defect in sperm quality; in such situations intracytoplasmic sperm injection (ICSI) may be used, where a sperm cell is injected directly into the egg cell. This is used when sperm has difficulty penetrating the egg. ICSI is also used when sperm numbers are very low. When indicated, the use of ICSI has been found to increase the success rates of IVF.
According to the UK's National Institute for Health and Care Excellence (NICE) guidelines, IVF treatment is appropriate in cases of unexplained infertility for people who have not conceived after 2 years of regular unprotected sexual intercourse.
In people with anovulation, IVF may be an alternative after 7–12 attempted cycles of ovulation induction, since the latter is less expensive and easier to control.
Success rates
IVF success rates are the percentage of all IVF procedures that result in favourable outcomes. Depending on the type of calculation used, this outcome may represent the number of confirmed pregnancies, called the pregnancy rate, or the number of live births, called the live birth rate. Due to advances in reproductive technology, live birth rates by cycle five of IVF have increased from 76% in 2005 to 80% in 2010, despite a reduction in the number of embryos being transferred (which decreased the multiple birth rate from 25% to 8%).
The success rate depends on variable factors such as age of the woman, cause of infertility, embryo status, reproductive history, and lifestyle factors. Younger candidates of IVF are more likely to get pregnant. People older than 41 are more likely to get pregnant with a donor egg. People who have been previously pregnant are in many cases more successful with IVF treatments than those who have never been pregnant.
Live birth rate
The live birth rate is the percentage of all IVF cycles that lead to a live birth. This rate does not include miscarriage or stillbirth; multiple-order births, such as twins and triplets, are counted as one pregnancy.
A 2021 summary compiled by the Society for Assisted Reproductive Technology (SART) reports the average IVF success rates in the United States per age group using non-donor eggs.
In 2006, Canadian clinics reported a live birth rate of 27%. Birth rates in younger patients were slightly higher, with a success rate of 35.3% for those 21 and younger, the youngest group evaluated. Success rates for older patients were also lower and decreased with age, with 37-year-olds at 27.4% and no live births for those older than 48, the oldest group evaluated. Some clinics exceeded these rates, but it is impossible to determine if that is due to superior technique or patient selection, since it is possible to artificially increase success rates by refusing to accept the most difficult patients or by steering them into oocyte donation cycles (which are compiled separately). Further, pregnancy rates can be increased by the placement of several embryos at the risk of increasing the chance for multiples.
Because not each IVF cycle that is started will lead to oocyte retrieval or embryo transfer, reports of live birth rates need to specify the denominator, namely IVF cycles started, IVF retrievals, or embryo transfers. The SART summarised 2008–9 success rates for US clinics for fresh embryo cycles that did not involve donor eggs and gave live birth rates by the age of the prospective mother, with a peak at 41.3% per cycle started and 47.3% per embryo transfer for patients under 35 years of age.
IVF attempts in multiple cycles result in increased cumulative live birth rates. Depending on the demographic group, one study reported 45% to 53% for three attempts, and 51% to 71% to 80% for six attempts.
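A back-of-the-envelope model of why cumulative rates climb: if each cycle had a constant, independent success probability p (a strong simplification; real cohorts are heterogeneous, so this overstates late-cycle gains), the cumulative rate after n cycles would be 1 − (1 − p)^n, as in this sketch:

```python
# Cumulative live-birth probability under an assumed constant, independent
# per-cycle rate p. Real patient cohorts are heterogeneous, so this is only
# an illustration of why rates climb with repeated attempts.
def cumulative(p, cycles):
    return 1 - (1 - p) ** cycles

for n in (1, 3, 6):
    print(n, f"{cumulative(0.25, n):.0%}")  # 1: 25%, 3: 58%, 6: 82%
```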
According to the 2021 National Summary Report compiled by the Society for Assisted Reproductive Technology (SART), the mean number of embryo transfers needed by patients achieving a live birth varies with age group.
Effective from 15 February 2021 the majority of Australian IVF clinics publish their individual success rate online via YourIVFSuccess.com.au. This site also contains a predictor tool.
Pregnancy rate
Pregnancy rate may be defined in various ways. In the United States, SART and the Centers for Disease Control (and appearing in the table in the Success Rates section above) include statistics on positive pregnancy test and clinical pregnancy rate.
A 2019 summary compiled by SART reported pregnancy rates for non-donor eggs (first embryo transfer) in the United States.
In 2006, Canadian clinics reported an average pregnancy rate of 35%. A French study estimated that 66% of patients starting IVF treatment finally succeed in having a child (40% during the IVF treatment at the centre and 26% after IVF discontinuation). Achievement of having a child after IVF discontinuation was mainly due to adoption (46%) or spontaneous pregnancy (42%).
Miscarriage rate
According to a study done by the Mayo Clinic, miscarriage rates for IVF are somewhere between 15 and 25% for those under the age of 35. In naturally conceived pregnancies, the rate of miscarriage is between 10 and 20% for those under the age of 35. Risk of miscarriage, regardless of the method of conception, does increase with age.
Predictors of success
The main potential factors that influence pregnancy (and live birth) rates in IVF have been suggested to be maternal age, duration of infertility or subfertility, bFSH and number of oocytes, all reflecting ovarian function. Optimal age is 23–39 years at time of treatment.
Biomarkers that affect the pregnancy chances of IVF include:
Antral follicle count, with higher count giving higher success rates.
Anti-Müllerian hormone levels, with higher levels indicating higher chances of pregnancy, as well as of live birth after IVF, even after adjusting for age.
Level of DNA fragmentation (as measured, e.g., by the Comet assay), advanced maternal age and semen quality.
People with ovary-specific FMR1 genotypes including het-norm/low have significantly decreased pregnancy chances in IVF.
Progesterone elevation on the day of induction of final maturation is associated with lower pregnancy rates in IVF cycles in women undergoing ovarian stimulation using GnRH analogues and gonadotrophins. At this time, compared to a progesterone level below 0.8 ng/ml, a level between 0.8 and 1.1 ng/ml confers an odds ratio of pregnancy of approximately 0.8, and a level between 1.2 and 3.0 ng/ml confers an odds ratio of pregnancy of between 0.6 and 0.7. On the other hand, progesterone elevation does not seem to confer a decreased chance of pregnancy in frozen–thawed cycles and cycles with egg donation.
Characteristics of cells from the cumulus oophorus and the membrana granulosa, which are easily aspirated during oocyte retrieval. These cells are closely associated with the oocyte and share the same microenvironment, and the rate of expression of certain genes in such cells are associated with higher or lower pregnancy rate.
An endometrial thickness (EMT) of less than 7 mm decreases the pregnancy rate by an odds ratio of approximately 0.4 compared to an EMT of over 7 mm. However, such low thickness rarely occurs, and any routine use of this parameter is regarded as not justified.
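Several of the figures above are odds ratios rather than probabilities. Converting an odds ratio into an adjusted probability, for an assumed baseline rate, works as in this sketch (the 30% baseline is hypothetical; the 0.4 odds ratio is the thin-endometrium figure quoted above):

```python
# Converting an odds ratio (OR) to an adjusted probability, given an
# assumed baseline pregnancy rate. The 30% baseline is hypothetical;
# OR = 0.4 is the thin-endometrium figure quoted above.
def apply_odds_ratio(p_baseline, odds_ratio):
    odds = p_baseline / (1 - p_baseline)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

print(f"{apply_odds_ratio(0.30, 0.4):.1%}")  # ≈14.6%
```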
Other determinants of outcome of IVF include:
As maternal age increases, the likelihood of conception decreases and the chance of miscarriage increases.
With increasing paternal age, especially 50 years and older, the rate of blastocyst formation decreases.
Tobacco smoking reduces the chances of IVF producing a live birth by 34% and increases the risk of an IVF pregnancy miscarrying by 30%.
A body mass index (BMI) over 27 causes a 33% decrease in likelihood to have a live birth after the first cycle of IVF, compared to those with a BMI between 20 and 27. Also, pregnant people who are obese have higher rates of miscarriage, gestational diabetes, hypertension, thromboembolism and problems during delivery, as well as leading to an increased risk of fetal congenital abnormality. Ideal body mass index is 19–30, and many clinics restrict this BMI range as a criterion for initiation of the IVF process.
Salpingectomy or laparoscopic tubal occlusion before IVF treatment increases chances for people with hydrosalpinges.
Success with previous pregnancy and/or live birth increases chances
Low alcohol/caffeine intake increases success rate
The number of embryos transferred in the treatment cycle
Embryo quality
Some studies also suggest that autoimmune disease may also play a role in decreasing IVF success rates by interfering with the proper implantation of the embryo after transfer.
Aspirin is sometimes prescribed to people for the purpose of increasing the chances of conception by IVF, but there was no evidence to show that it is safe and effective.
A 2013 review and meta-analysis of randomised controlled trials of acupuncture as an adjuvant therapy in IVF found no overall benefit, and concluded that an apparent benefit detected in a subset of published trials, in which the control group (those not using acupuncture) experienced a lower-than-average rate of pregnancy, requires further study, due to the possibility of publication bias and other factors.
A Cochrane review found that endometrial injury performed in the month prior to ovarian induction appeared to increase both the live birth rate and clinical pregnancy rate in IVF compared with no endometrial injury. There was no evidence of a difference between the groups in miscarriage, multiple pregnancy or bleeding rates. Evidence suggested that endometrial injury on the day of oocyte retrieval was associated with a lower live birth or ongoing pregnancy rate.
Intake of antioxidants (such as N-acetyl-cysteine, melatonin, vitamin A, vitamin C, vitamin E, folic acid, myo-inositol, zinc or selenium) has not been associated with a significantly increased live birth rate or clinical pregnancy rate in IVF according to Cochrane reviews. The review found that oral antioxidants given to the sperm donor with male factor or unexplained subfertility may improve live birth rates, but more evidence is needed.
A Cochrane review in 2015 found no evidence regarding the effect of preconception lifestyle advice on the chance of a live birth outcome.
Method
Theoretically, IVF could be performed by collecting the contents from the fallopian tubes or uterus after natural ovulation, mixing it with sperm, and reinserting the fertilised ova into the uterus. However, without additional techniques, the chances of pregnancy would be extremely small. The additional techniques that are routinely used in IVF include ovarian hyperstimulation to generate multiple eggs, ultrasound-guided transvaginal oocyte retrieval directly from the ovaries, co-incubation of eggs and sperm, as well as culture and selection of resultant embryos before embryo transfer into a uterus.
Ovarian hyperstimulation
Ovarian hyperstimulation is the stimulation to induce development of multiple follicles of the ovaries. It should start with response prediction based on factors such as age, antral follicle count and level of anti-Müllerian hormone. The resulting prediction (e.g. poor or hyper-response to ovarian hyperstimulation) determines the protocol and dosage for ovarian hyperstimulation.
Ovarian hyperstimulation also includes suppression of spontaneous ovulation, for which two main methods are available: Using a (usually longer) GnRH agonist protocol or a (usually shorter) GnRH antagonist protocol. In a standard long GnRH agonist protocol the day when hyperstimulation treatment is started and the expected day of later oocyte retrieval can be chosen to conform to personal choice, while in a GnRH antagonist protocol it must be adapted to the spontaneous onset of the previous menstruation. On the other hand, the GnRH antagonist protocol has a lower risk of ovarian hyperstimulation syndrome (OHSS), which is a life-threatening complication.
For the ovarian hyperstimulation in itself, injectable gonadotropins (usually FSH analogues) are generally used under close monitoring. Such monitoring frequently checks the estradiol level and, by means of gynecologic ultrasonography, follicular growth. Typically approximately 10 days of injections will be necessary.
When stimulating ovulation after suppressing endogenous secretion, it is necessary to supply exogenous gonadotropins. The most common is human menopausal gonadotropin (hMG), which is obtained from the urine of menopausal women. Other pharmacological preparations include FSH+LH or corifollitropin alfa.
Natural IVF
There are several methods termed natural cycle IVF:
IVF using no drugs for ovarian hyperstimulation, while drugs for ovulation suppression may still be used.
IVF using ovarian hyperstimulation, including gonadotropins, but with a GnRH antagonist protocol so that the cycle initiates from natural mechanisms.
Frozen embryo transfer; IVF using ovarian hyperstimulation, followed by embryo cryopreservation, followed by embryo transfer in a later, natural, cycle.
IVF using no drugs for ovarian hyperstimulation was the method for the conception of Louise Brown. This method can be used successfully when people want to avoid taking ovarian stimulating drugs with their associated side-effects. HFEA has estimated the live birth rate to be approximately 1.3% per IVF cycle using no hyperstimulation drugs for women aged between 40 and 42.
Mild IVF is a method in which a small dose of ovarian stimulating drugs is used for a short duration during a natural menstrual cycle, aimed at producing 2–7 eggs and creating healthy embryos. This method appears to be an advance in the field to reduce complications and side-effects for women, and it is aimed at quality, not quantity, of eggs and embryos. One study comparing a mild treatment (mild ovarian stimulation with GnRH antagonist co-treatment combined with single embryo transfer) to a standard treatment (stimulation with a GnRH agonist long-protocol and transfer of two embryos) found that the proportions of cumulative pregnancies that resulted in term live birth after 1 year were 43.4% with mild treatment and 44.7% with standard treatment. Mild IVF can be cheaper than conventional IVF and with a significantly reduced risk of multiple gestation and OHSS.
Final maturation induction
When the ovarian follicles have reached a certain degree of development, induction of final oocyte maturation is performed, generally by an injection of human chorionic gonadotropin (hCG). Commonly, this is known as the "trigger shot." hCG acts as an analogue of luteinising hormone, and ovulation would occur between 38 and 40 hours after a single hCG injection, but the egg retrieval is usually performed between 34 and 36 hours after hCG injection, that is, just prior to when the follicles would rupture. This allows the egg retrieval procedure to be scheduled for a time when the eggs are fully mature. hCG injection confers a risk of ovarian hyperstimulation syndrome. Using a GnRH agonist instead of hCG eliminates most of the risk of ovarian hyperstimulation syndrome, but with a reduced delivery rate if the embryos are transferred fresh. For this reason, many centers will freeze all oocytes or embryos following agonist trigger.
Egg retrieval
The eggs are retrieved from the patient using a technique called transvaginal ultrasound aspiration, in which an ultrasound-guided needle is passed through the vaginal wall into the ovarian follicles. Through this needle, the oocytes and follicular fluid are aspirated, and the follicular fluid is then passed to an embryologist to identify ova. It is common to remove between ten and thirty eggs. The retrieval process, which lasts approximately 20 to 40 minutes, is performed under conscious sedation or general anesthesia to ensure patient comfort. The follicular fluid, containing the retrieved eggs, is promptly transferred to the embryology laboratory for subsequent processing.
Egg and sperm preparation
In the laboratory, for ICSI treatments, the identified eggs are stripped of surrounding cells (also known as cumulus cells) and prepared for fertilisation. An oocyte selection may be performed prior to fertilisation to select eggs that can be fertilised, as they are required to be in metaphase II. Oocytes in the metaphase I stage can be kept in culture so that they can undergo sperm injection later. In the meantime, semen is prepared for fertilisation by removing inactive cells and seminal fluid in a process called sperm washing. If semen is being provided by a sperm donor, it will usually have been prepared for treatment before being frozen and quarantined, and it will be thawed ready for use.
Co-incubation
The sperm and the egg are incubated together at a ratio of about 75,000:1 in a culture media in order for the actual fertilisation to take place. A review in 2013 came to the result that a duration of this co-incubation of about 1 to 4 hours results in significantly higher pregnancy rates than 16 to 24 hours. In most cases, the egg will be fertilised during co-incubation and will show two pronuclei. In certain situations, such as low sperm count or motility, a single sperm may be injected directly into the egg using intracytoplasmic sperm injection (ICSI). The fertilised egg is passed to a special growth medium and left for about 48 hours until the embryo consists of six to eight cells.
In gamete intrafallopian transfer, eggs are removed from the woman and placed in one of the fallopian tubes, along with the man's sperm. This allows fertilisation to take place inside the woman's body. Therefore, this variation is actually an in vivo fertilisation, not in vitro.
Embryo culture
The main durations of embryo culture are until cleavage stage (day two to four after co-incubation) or the blastocyst stage (day five or six after co-incubation). Embryo culture until the blastocyst stage confers a significant increase in live birth rate per embryo transfer, but also confers a decreased number of embryos available for transfer and embryo cryopreservation, so the cumulative clinical pregnancy rates are increased with cleavage stage transfer. Transfer day two instead of day three after fertilisation has no differences in live birth rate. There are significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births having from embryos cultured until the blastocyst stage compared with cleavage stage.
Embryo selection
Laboratories have developed grading methods to judge ovocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. Since 2009 where the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems has shown to improve to pregnancy rates further. However, when all different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live-birth, pregnancy, stillbirth or miscarriage to choose between them. Active efforts to develop a more accurate embryo selection analysis based on Artificial Intelligence and Deep Learning are underway. Embryo Ranking Intelligent Classification Assistant (ERICA), is a clear example. This Deep Learning software substitutes manual classifications with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies on this area are still pending and current feasibility studies support its potential.
Embryo transfer
The number to be transferred depends on the number available, the age of the patient and other health and diagnostic factors. In countries such as Canada, the UK, Australia and New Zealand, a maximum of two embryos are transferred except in unusual circumstances. In the UK and according to HFEA regulations, a woman over 40 may have up to three embryos transferred, whereas in the US, there is no legal limit on the number of embryos which may be transferred, although medical associations have provided practice guidelines. Most clinics and country regulatory bodies seek to minimise the risk of multiple pregnancy, as it is not uncommon for multiple embryos to implant if multiple embryos are transferred. Embryos are transferred to the patient's uterus through a thin, plastic catheter, which goes through their vagina and cervix. Several embryos may be passed into the uterus to improve chances of implantation and pregnancy.
Luteal support
Luteal support is the administration of medication, generally progesterone, progestins, hCG, or GnRH agonists, and often accompanied by estradiol, to increase the success rate of implantation and early embryogenesis, thereby complementing and/or supporting the function of the corpus luteum. A Cochrane review found that hCG or progesterone given during the luteal phase may be associated with higher rates of live birth or ongoing pregnancy, but that the evidence is not conclusive. Co-treatment with GnRH agonists appears to improve outcomes, by a live birth rate RD of +16% (95% confidence interval +10 to +22%). On the other hand, growth hormone or aspirin as adjunctive medication in IVF have no evidence of overall benefit.
Expansions
There are various expansions or additional techniques that can be applied in IVF, which are usually not necessary for the IVF procedure itself, but would be virtually impossible or technically difficult to perform without concomitantly performing methods of IVF.
Preimplantation genetic screening or diagnosis
Preimplantation genetic screening (PGS) or preimplantation genetic diagnosis (PGD) has been suggested to be able to be used in IVF to select an embryo that appears to have the greatest chances for successful pregnancy. However, a systematic review and meta-analysis of existing randomised controlled trials came to the result that there is no evidence of a beneficial effect of PGS with cleavage-stage biopsy as measured by live birth rate. On the contrary, for those of advanced maternal age, PGS with cleavage-stage biopsy significantly lowers the live birth rate. Technical drawbacks, such as the invasiveness of the biopsy, and non-representative samples because of mosaicism are the major underlying factors for inefficacy of PGS.
Still, as an expansion of IVF, patients who can benefit from PGS/PGD include:
Those who have a family history of inherited disease
Those who want prenatal sex discernment. This can be used to diagnose monogenic disorders with sex linkage. It can potentially be used for sex selection, wherein a fetus is aborted if having an undesired sex.
Those who already have a child with an incurable disease and need compatible cells from a second healthy child to cure the first, resulting in a "saviour sibling" that matches the sick child in HLA type.
PGS screens for numeral chromosomal abnormalities while PGD diagnosis the specific molecular defect of the inherited disease. In both PGS and PGD, individual cells from a pre-embryo, or preferably trophectoderm cells biopsied from a blastocyst, are analysed during the IVF process. Before the transfer of a pre-embryo back to a person's uterus, one or two cells are removed from the pre-embryos (8-cell stage), or preferably from a blastocyst. These cells are then evaluated for normality. Typically within one to two days, following completion of the evaluation, only the normal pre-embryos are transferred back to the uterus. Alternatively, a blastocyst can be cryopreserved via vitrification and transferred at a later date to the uterus. In addition, PGS can significantly reduce the risk of multiple pregnancies because fewer embryos, ideally just one, are needed for implantation.
Cryopreservation
Cryopreservation can be performed as oocyte cryopreservation before fertilisation, or as embryo cryopreservation after fertilisation.
The Rand Consulting Group has estimated there to be 400,000 frozen embryos in the United States in 2006. The advantage is that patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle. Or, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another aspiring parent, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm. Also, oocyte cryopreservation can be used for those who are likely to lose their ovarian reserve due to undergoing chemotherapy.
By 2017, many centres have adopted embryo cryopreservation as their primary IVF therapy, and perform few or no fresh embryo transfers. The two main reasons for this have been better endometrial receptivity when embryos are transferred in cycles without exposure to ovarian stimulation and also the ability to store the embryos while awaiting the results of preimplantation genetic testing.
The outcome from using cryopreserved embryos has uniformly been positive with no increase in birth defects or development abnormalities.
Other expansions
Intracytoplasmic sperm injection (ICSI) is where a single sperm is injected directly into an egg. Its main usage as an expansion of IVF is to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in conjunction with sperm donation. It can be used in teratozoospermia, since once the egg is fertilised abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology.
Additional methods of embryo profiling. For example, methods are emerging in making comprehensive analyses of up to entire genomes, transcriptomes, proteomes and metabolomes which may be used to score embryos by comparing the patterns with ones that have previously been found among embryos in successful versus unsuccessful pregnancies.
Assisted zona hatching (AZH) can be performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.
In egg donation and embryo donation, the resultant embryo after fertilisation is inserted in another person than the one providing the eggs. These are resources for those with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilised in the laboratory with sperm, and the resulting healthy embryos are returned to the recipient's uterus.
In oocyte selection, the oocytes with optimal chances of live birth can be chosen. It can also be used as a means of preimplantation genetic screening.
Embryo splitting can be used for twinning to increase the number of available embryos.
Cytoplasmic transfer is where the cytoplasm from a donor egg is injected into an egg with compromised mitochondria. The resulting egg is then fertilised with sperm and introduced into a uterus, usually that of the person who provided the recipient egg and nuclear DNA. Cytoplasmic transfer was created to aid those who experience infertility due to deficient or damaged mitochondria, contained within an egg's cytoplasm.
Complications and health effects
Multiple births
The major complication of IVF is the risk of multiple births. This is directly related to the practice of transferring multiple embryos at embryo transfer. Multiple births are related to increased risk of pregnancy loss, obstetrical complications, prematurity, and neonatal morbidity with the potential for long term damage. Strict limits on the number of embryos that may be transferred have been enacted in some countries (e.g. Britain, Belgium) to reduce the risk of high-order multiples (triplets or more), but are not universally followed or accepted. Spontaneous splitting of embryos in the uterus after transfer can occur, but this is rare and would lead to identical twins. A double blind, randomised study followed IVF pregnancies that resulted in 73 infants, and reported that 8.7% of singleton infants and 54.2% of twins had a birth weight of less than . There is some evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer; but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.
Sex ratio distortions
Certain kinds of IVF have been shown to lead to distortions in the sex ratio at birth. Intracytoplasmic sperm injection (ICSI), which was first applied in 1991, leads to slightly more female births (51.3% female). Blastocyst transfer, which was first applied in 1984, leads to significantly more male births (56.1% male). Standard IVF done at the second or third day leads to a normal sex ratio.
Epigenetic modifications caused by extended culture leading to the death of more female embryos has been theorised as the reason why blastocyst transfer leads to a higher male sex ratio; however, adding retinoic acid to the culture can bring this ratio back to normal. A second theory is that the male-biased sex ratio may due to a higher rate of selection of male embryos. Male embryos develop faster in vitro, and thus may appear more viable for transfer.
Spread of infectious disease
By sperm washing, the risk that a chronic disease in the individual providing the sperm would infect the birthing parent or offspring can be brought to negligible levels.
If the sperm donor has hepatitis B, The Practice Committee of the American Society for Reproductive Medicine advises that sperm washing is not necessary in IVF to prevent transmission, unless the birthing partner has not been effectively vaccinated. In women with hepatitis B, the risk of vertical transmission during IVF is no different from the risk in spontaneous conception. However, there is not enough evidence to say that ICSI procedures are safe in women with hepatitis B in regard to vertical transmission to the offspring.
Regarding potential spread of HIV/AIDS, Japan's government prohibited the use of IVF procedures in which both partners are infected with HIV. Despite the fact that the ethics committees previously allowed the Ogikubo, Tokyo Hospital, located in Tokyo, to use IVF for couples with HIV, the Ministry of Health, Labour and Welfare of Japan decided to block the practice. Hideji Hanabusa, the vice president of the Ogikubo Hospital, states that together with his colleagues, he managed to develop a method through which scientists are able to remove HIV from sperm.
In the United States, people seeking to be an embryo recipient undergo infectious disease screening required by the Food and Drug Administration (FDA), and reproductive tests to determine the best placement location and cycle timing before the actual embryo transfer occurs. The amount of screening the embryo has already undergone is largely dependent on the genetic parents' own IVF clinic and process. The embryo recipient may elect to have their own embryologist conduct further testing.
Other risks to the egg provider/retriever
A risk of ovarian stimulation is the development of ovarian hyperstimulation syndrome, particularly if hCG is used for inducing final oocyte maturation. This results in swollen, painful ovaries. It occurs in 30% of patients. Mild cases can be treated with over the counter medications and cases can be resolved in the absence of pregnancy. In moderate cases, ovaries swell and fluid accumulated in the abdominal cavities and may have symptoms of heartburn, gas, nausea or loss of appetite. In severe cases, patients have sudden excess abdominal pain, nausea, vomiting and will result in hospitalisation.
During egg retrieval, there exists a small chance of bleeding, infection, and damage to surrounding structures such as bowel and bladder (transvaginal ultrasound aspiration) as well as difficulty in breathing, chest infection, allergic reactions to medication, or nerve damage (laparoscopy).
Ectopic pregnancy may also occur if a fertilised egg develops outside the uterus, usually in the fallopian tubes and requires immediate destruction of the foetus.
IVF does not seem to be associated with an elevated risk of cervical cancer, nor with ovarian cancer or endometrial cancer when neutralising the confounder of infertility itself. Nor does it seem to impart any increased risk for breast cancer.
Regardless of pregnancy result, IVF treatment is usually stressful for patients. Neuroticism and the use of escapist coping strategies are associated with a higher degree of distress, while the presence of social support has a relieving effect. A negative pregnancy test after IVF is associated with an increased risk for depression, but not with any increased risk of developing anxiety disorders. Pregnancy test results do not seem to be a risk factor for depression or anxiety among men in the case of relationships between two cisgender, heterosexual people. Hormonal agents such as gonadotropin-releasing hormone agonist (GnRH agonist) are associated with depression.
Studies show that there is an increased risk of venous thrombosis or pulmonary embolism during the first trimester of IVF. When looking at long-term studies comparing patients who received or did not receive IVF, there seems to be no correlation with increased risk of cardiac events. There are more ongoing studies to solidify this.
Spontaneous pregnancy has occurred after successful and unsuccessful IVF treatments. Within 2 years of delivering an infant conceived through IVF, subfertile patients had a conception rate of 18%.
Birth defects
A review in 2013 came to the result that infants resulting from IVF (with or without ICSI) have a relative risk of birth defects of 1.32 (95% confidence interval 1.24–1.42) compared to naturally conceived infants. In 2008, an analysis of the data of the National Birth Defects Study in the US found that certain birth defects were significantly more common in infants conceived through IVF, notably septal heart defects, cleft lip with or without cleft palate, esophageal atresia, and anorectal atresia; the mechanism of causality is unclear. However, in a population-wide cohort study of 308,974 births (with 6,163 using assisted reproductive technology and following children from birth to age five) researchers found: "The increased risk of birth defects associated with IVF was no longer significant after adjustment for parental factors." Parental factors included known independent risks for birth defects such as maternal age, smoking status, etc. Multivariate correction did not remove the significance of the association of birth defects and ICSI (corrected odds ratio 1.57), although the authors speculate that underlying male infertility factors (which would be associated with the use of ICSI) may contribute to this observation and were not able to correct for these confounders. The authors also found that a history of infertility elevated risk itself in the absence of any treatment (odds ratio 1.29), consistent with a Danish national registry study and "implicates patient factors in this increased risk." The authors of the Danish national registry study speculate: "our results suggest that the reported increased prevalence of congenital malformations seen in singletons born after assisted reproductive technology is partly due to the underlying infertility or its determinants."
Other risks to the offspring
If the underlying infertility is related to abnormalities in spermatogenesis, male offspring will have a higher risk for sperm abnormalities. In some cases genetic testing may be recommended to help assess the risk of transmission of defects to progeny and to consider whether treatment is desirable.
IVF does not seem to confer any risks regarding cognitive development, school performance, social functioning, and behaviour. Also, IVF infants are known to be as securely attached to their parents as those who were naturally conceived, and IVF adolescents are as well-adjusted as those who have been naturally conceived.
Limited long-term follow-up data suggest that IVF may be associated with an increased incidence of hypertension, impaired fasting glucose, increase in total body fat composition, advancement of bone age, subclinical thyroid disorder, early adulthood clinical depression and binge drinking in the offspring. It is not known, however, whether these potential associations are caused by the IVF procedure in itself, by adverse obstetric outcomes associated with IVF, by the genetic origin of the children or by yet unknown IVF-associated causes. Increases in embryo manipulation during IVF result in more deviant fetal growth curves, but birth weight does not seem to be a reliable marker of fetal stress.
IVF, including ICSI, is associated with an increased risk of imprinting disorders (including Prader–Willi syndrome and Angelman syndrome), with an odds ratio of 3.7 (95% confidence interval 1.4 to 9.7).
An IVF-associated incidence of cerebral palsy and neurodevelopmental delay are believed to be related to the confounders of prematurity and low birthweight. Similarly, an IVF-associated incidence of autism and attention-deficit disorder are believed to be related to confounders of maternal and obstetric factors.
Overall, IVF does not cause an increased risk of childhood cancer. Studies have shown a decrease in the risk of certain cancers and an increased risks of certain others including retinoblastoma, hepatoblastoma and rhabdomyosarcoma.
Controversial cases
Mix-ups
In some cases, laboratory mix-ups (misidentified gametes, transfer of wrong embryos) have occurred, leading to legal action against the IVF provider and complex paternity suits. An example is the case of a woman in California who received the embryo of another couple and was notified of this mistake after the birth of her son. This has led to many authorities and individual clinics implementing procedures to minimise the risk of such mix-ups. The HFEA, for example, requires clinics to use a double witnessing system, the identity of specimens is checked by two people at each point at which specimens are transferred. Alternatively, technological solutions are gaining favour, to reduce the manpower cost of manual double witnessing, and to further reduce risks with uniquely numbered RFID tags which can be identified by readers connected to a computer. The computer tracks specimens throughout the process and alerts the embryologist if non-matching specimens are identified. Although the use of RFID tracking has expanded in the US, it is still not widely adopted.
Preimplantation genetic diagnosis or screening
Pre-implantation genetic diagnosis (PGD) is criticised for giving select demographic groups disproportionate access to a means of creating a child possessing characteristics that they consider "ideal". Many fertile couples now demand equal access to embryonic screening so that their child can be just as healthy as one created through IVF. Mass use of PGD, especially as a means of population control or in the presence of legal measures related to population or demographic control, can lead to intentional or unintentional demographic effects such as the skewed live-birth sex ratios seen in China following implementation of its one-child policy.
While PGD was originally designed to screen for embryos carrying hereditary genetic diseases, the method has been applied to select features that are unrelated to diseases, thus raising ethical questions. Examples of such cases include the selection of embryos based on histocompatibility (HLA) for the donation of tissues to a sick family member, the diagnosis of genetic susceptibility to disease, and sex selection.
These examples raise ethical issues because of the morality of eugenics. It becomes frowned upon because of the advantage of being able to eliminate unwanted traits and selecting desired traits. By using PGD, individuals are given the opportunity to create a human life unethically and rely on science and not by natural selection.
For example, a deaf British couple, Tom and Paula Lichy, have petitioned to create a deaf baby using IVF. Some medical ethicists have been very critical of this approach. Jacob M. Appel wrote that "intentionally culling out blind or deaf embryos might prevent considerable future suffering, while a policy that allowed deaf or blind parents to select for such traits intentionally would be far more troublesome."
Industry corruption
Robert Winston, professor of fertility studies at Imperial College London, had called the industry "corrupt" and "greedy" stating that "one of the major problems facing us in healthcare is that IVF has become a massive commercial industry," and that "what has happened, of course, is that money is corrupting this whole technology", and accused authorities of failing to protect couples from exploitation: "The regulatory authority has done a consistently bad job. It's not prevented the exploitation of people, it's not put out very good information to couples, it's not limited the number of unscientific treatments people have access to". The IVF industry has been described as a market-driven construction of health, medicine and the human body.
The industry has been accused of making unscientific claims, and distorting facts relating to infertility, in particular through widely exaggerated claims about how common infertility is in society, in an attempt to get as many couples as possible and as soon as possible to try treatments (rather than trying to conceive naturally for a longer time). This risks removing infertility from its social context and reducing the experience to a simple biological malfunction, which not only can be treated through bio-medical procedures, but should be treated by them.
Older patients
All pregnancies can be risky, but there are greater risk for mothers who are older and are over the age of 40. As people get older, they are more likely to develop conditions such as gestational diabetes and pre-eclampsia. If the mother does conceive over the age of 40, their offspring may be of lower birth weight, and more likely to requires intensive care. Because of this, the increased risk is a sufficient cause for concern. The high incidence of caesarean in older patients is commonly regarded as a risk.
Those conceiving at 40 have a greater risk of gestational hypertension and premature birth. The offspring is at risk when being born from older mothers, and the risks associated with being conceived through IVF.
Adriana Iliescu held the record for a while as the oldest woman to give birth using IVF and a donor egg, when she gave birth in 2004 at the age of 66. In September 2019, a 74-year-old woman became the oldest-ever to give birth after she delivered twins at a hospital in Guntur, Andhra Pradesh.
Pregnancy after menopause
Although menopause is a natural barrier to further conception, IVF has allowed people to be pregnant in their fifties and sixties. People whose uteruses have been appropriately prepared receive embryos that originated from an egg donor. Therefore, although they do not have a genetic link with the child, they have a physical link through pregnancy and childbirth. Even after menopause, the uterus is fully capable of carrying out a pregnancy.
Same-sex couples, single and unmarried parents
A 2009 statement from the ASRM found no persuasive evidence that children are harmed or disadvantaged solely by being raised by single parents, unmarried parents, or homosexual parents. It did not support restricting access to assisted reproductive technologies on the basis of a prospective parent's marital status or sexual orientation. A 2018 study found that children's psychological well-being did not differ when raised by either same-sex parents or heterosexual parents, even finding that psychological well-being was better amongst children raised by same-sex parents.
Ethical concerns include reproductive rights, the welfare of offspring, nondiscrimination against unmarried individuals, homosexual, and professional autonomy.
A controversy in California focused on the question of whether physicians opposed to same-sex relationships should be required to perform IVF for a lesbian couple. Guadalupe T. Benitez, a lesbian medical assistant from San Diego, sued doctors Christine Brody and Douglas Fenton of the North Coast Woman's Care Medical Group after Brody told her that she had "religious-based objections to treating her and homosexuals in general to help them conceive children by artificial insemination," and Fenton refused to authorise a refill of her prescription for the fertility drug Clomid on the same grounds. The California Medical Association had initially sided with Brody and Fenton, but the case, North Coast Women's Care Medical Group v. Superior Court, was decided unanimously by the California State Supreme Court in favour of Benitez on 19 August 2008.
Nadya Suleman came to international attention after having twelve embryos implanted, eight of which survived, resulting in eight newborns being added to her existing six-child family. The Medical Board of California sought to have fertility doctor Michael Kamrava, who treated Suleman, stripped of his licence. State officials allege that performing Suleman's procedure is evidence of unreasonable judgment, substandard care, and a lack of concern for the eight children she would conceive and the six she was already struggling to raise. On 1 June 2011 the Medical Board issued a ruling that Kamrava's medical licence be revoked effective 1 July 2011.
Transgender parents
The research on transgender reproduction and family planning is limited. A 2020 comparative study of children born to a transgender father and cisgender mother via donor sperm insemination in France showed no significant differences to IVF and naturally conceived children of cisgender parents.
Transgender men can experience challenges in pregnancy and birthing from the cis-normative structure within the medical system, as well as psychological challenges such as renewed gender dysphoria. The effect of continued testosterone therapy during pregnancy and breastfeeding is undetermined. Ethical concerns include reproductive rights, reproductive justice, physician autonomy, and transphobia within the health care setting.
Anonymous donors
Alana Stewart, who was conceived using donor sperm, began an online forum for donor children called AnonymousUS in 2010. The forum welcomes the viewpoints of anyone involved in the IVF process. In May 2012, a court ruled making anonymous sperm and egg donation in British Columbia illegal.
In the U.K., Sweden, Norway, Germany, Italy, New Zealand, and some Australian states, donors are not paid and cannot be anonymous.
In 2000, a website called Donor Sibling Registry was created to help biological children with a common donor connect with each other.
Leftover embryos or eggs, unwanted embryos
There may be leftover embryos or eggs from IVF procedures if the person for whom they were originally created has successfully carried one or more pregnancies to term, and no longer wishes to use them. With the patient's permission, these may be donated to help others conceive by means of third party reproduction.
In embryo donation, these extra embryos are given to others for transfer, with the goal of producing a successful pregnancy. Embryo recipients have genetic issues or poor-quality embryos or eggs of their own. The resulting child is considered the child of whoever birthed them, and not the child of the donor, the same as occurs with egg donation or sperm donation. As per The National Infertility Association, typically, genetic parents donate the eggs or embryos to a fertility clinic where they are preserved by oocyte cryopreservation or embryo cryopreservation until a carrier is found for them. The process of matching the donation with the prospective parents is conducted by the agency itself, at which time the clinic transfers ownership of the embryos to the prospective parent(s).
Alternatives to donating unused embryos are destroying them (or having them transferred at a time when pregnancy is very unlikely), keeping them frozen indefinitely, or donating them for use in research (rendering them non-viable). Individual moral views on disposing of leftover embryos may depend on personal views on the beginning of human personhood and the definition and/or value of potential future persons, and on the value that is given to fundamental research questions. Some people believe donation of leftover embryos for research is a good alternative to discarding the embryos when patients receive proper, honest and clear information about the research project, the procedures and the scientific values.
During the embryo selection and transfer phases, many embryos may be discarded in favour of others. This selection may be based on criteria such as genetic disorders or the sex. One of the earliest cases of special gene selection through IVF was the case of the Collins family in the 1990s, who selected the sex of their child.
The ethic issues remain unresolved as no worldwide consensus exists in science, religion, and philosophy on when a human embryo should be recognised as a person. For those who believe that this is at the moment of conception, IVF becomes a moral question when multiple eggs are fertilised, begin development, and only a few are chosen for uterus transfer.
If IVF were to involve the fertilisation of only a single egg, or at least only the number that will be transferred, then this would not be an issue. However, this has the chance of increasing costs dramatically as only a few eggs can be attempted at a time. As a result, the couple must decide what to do with these extra embryos. Depending on their view of the embryo's humanity or the chance the couple will want to try to have another child, the couple has multiple options for dealing with these extra embryos. Couples can choose to keep them frozen, donate them to other infertile couples, thaw them, or donate them to medical research. Keeping them frozen costs money, donating them does not ensure they will survive, thawing them renders them immediately unviable, and medical research results in their termination. In the realm of medical research, the couple is not necessarily told what the embryos will be used for, and as a result, some can be used in stem cell research.
In February 2024, the Alabama Supreme Court ruled in LePage v. Center for Reproductive Medicine that cryopreserved embryos were "persons" or "extrauterine children". After Dobbs v. Jackson Women's Health Organization (2022), some antiabortionists had hoped to get a judgement that fetuses and embryos were "person[s]".
Religious response
The Catholic Church opposes all kinds of assisted reproductive technology and artificial contraception, on the grounds that they separate the procreative goal of marital sex from the goal of uniting married couples.
The Catholic Church permits the use of a small number of reproductive technologies and contraceptive methods such as natural family planning, which involves charting ovulation times, and allows other forms of reproductive technologies that allow conception to take place from normative sexual intercourse, such as a fertility lubricant. Pope Benedict XVI had publicly re-emphasised the Catholic Church's opposition to in vitro fertilisation, saying that it replaces love between a husband and wife.
The Catechism of the Catholic Church, in accordance with the Catholic understanding of natural law, teaches that reproduction has an "inseparable connection" to the sexual union of married couples. In addition, the church opposes IVF because it might result in the disposal of embryos; in Catholicism, an embryo is viewed as an individual with a soul that must be treated as a person. The Catholic Church maintains that it is not objectively evil to be infertile, and advocates adoption as an option for such couples who still wish to have children.
Hindus welcome IVF as a gift for those who are unable to bear children and have declared doctors related to IVF to be conducting punya as there are several characters who were claimed to be born without intercourse, mainly Kaurav and five Pandavas.
Regarding the response to IVF by Islam, a general consensus from the contemporary Sunni scholars concludes that IVF methods are immoral and prohibited. However, Gad El-Hak Ali Gad El-Hak's ART fatwa includes that:
IVF of an egg from the wife with the sperm of her husband and the transfer of the fertilised egg back to the uterus of the wife is allowed, provided that the procedure is indicated for a medical reason and is carried out by an expert physician.
Since marriage is a contract between the wife and husband during the span of their marriage, no third party should intrude into the marital functions of sex and procreation. This means that a third party donor is not acceptable, whether he or she is providing sperm, eggs, embryos, or a uterus. The use of a third party is tantamount to zina, or adultery.
Within the Orthodox Jewish community the concept is debated as there is little precedent in traditional Jewish legal textual sources. Regarding laws of sexuality, religious challenges include masturbation (which may be regarded as "seed wasting"), laws related to sexual activity and menstruation (niddah) and the specific laws regarding intercourse. An additional major issue is that of establishing paternity and lineage. For a baby conceived naturally, the father's identity is determined by a legal presumption (chazakah) of legitimacy: rov bi'ot achar ha'baal – a woman's sexual relations are assumed to be with her husband. Regarding an IVF child, this assumption does not exist and as such Rabbi Eliezer Waldenberg (among others) requires an outside supervisor to positively identify the father. Reform Judaism has generally approved IVF.
Society and culture
Many women of sub-Saharan Africa choose to foster their children to infertile women. IVF enables these infertile women to have their own children, which imposes new ideals to a culture in which fostering children is seen as both natural and culturally important. Many infertile women are able to earn more respect in their society by taking care of the children of other mothers, and this may be lost if they choose to use IVF instead. As IVF is seen as unnatural, it may even hinder their societal position as opposed to making them equal with fertile women. It is also economically advantageous for infertile women to raise foster children as it gives these children greater ability to access resources that are important for their development and also aids the development of their society at large. If IVF becomes more popular without the birth rate decreasing, there could be more large family homes with fewer options to send their newborn children. This could result in an increase of orphaned children and/or a decrease in resources for the children of large families. This would ultimately stifle the children's and the community's growth.
In the US, the pineapple has emerged as a symbol of IVF users, possibly because some people thought, without scientific evidence, that eating pineapple might slightly increase the success rate for the procedure.
Emotional involvement with children
Studies have indicated that IVF mothers show greater emotional involvement with their child, and they enjoy motherhood more than mothers by natural conception. Similarly, studies have indicated that IVF fathers express more warmth and emotional involvement than fathers by adoption and natural conception and enjoy fatherhood more. Some IVF parents become overly involved with their children.
Men and IVF
Research has shown that men largely view themselves as "passive contributors" since they have "less physical involvement" in IVF treatment. Despite this, many men feel distressed after seeing the toll of hormonal injections and ongoing physical intervention on their female partner. Fertility was found to be a significant factor in a man's perception of his masculinity, driving many to keep the treatment a secret. In cases where the men did share that he and his partner were undergoing IVF, they reported to have been teased, mainly by other men, although some viewed this as an affirmation of support and friendship. For others, this led to feeling socially isolated. In comparison with females, males showed less deterioration in mental health in the years following a failed treatment. However, many men did feel guilt, disappointment and inadequacy, stating that they were simply trying to provide an "emotional rock" for their partners.
Ability to withdraw consent
In certain countries, including Austria, Italy, Estonia, Hungary, Spain and Israel, the male does not have the full ability to withdraw consent to storage or use of embryos once they are fertilised. In the United States, the matter has been left to the courts on a more or less ad hoc basis. If embryos are implanted and a child is born contrary to the wishes of the male, he still has legal and financial responsibilities of a father.
Availability and utilisation
Cost
Costs of IVF can be broken down into direct and indirect costs. Direct costs include the medical treatments themselves, including doctor consultations, medications, ultrasound scanning, laboratory tests, the actual IVF procedure, and any associated hospital charges and administrative costs. Indirect costs includes the cost of addressing any complications with treatments, compensation for the gestational surrogate, patients' travel costs, and lost hours of productivity. These costs can be exaggerated by the increasing age of the woman undergoing IVF treatment (particularly those over the age of 40), and the increase costs associated with multiple births. For instance, a pregnancy with twins can cost up to three times that of a singleton pregnancy. While some insurances cover one cycle of IVF, it takes multiple cycles of IVF to have a successful outcome. A study completed in Northern California reveals that the IVF procedure alone that results in a successful outcome costs $61,377, and this can be more costly with the use of a donor egg.
The cost of IVF rather reflects the costliness of the underlying healthcare system than the regulatory or funding environment, and ranges, on average for a standard IVF cycle and in 2006 United States dollars, between $12,500 in the United States to $4,000 in Japan. In Ireland, IVF costs around €4,000, with fertility drugs, if required, costing up to €3,000. The cost per live birth is highest in the United States ($41,000) and United Kingdom ($40,000) and lowest in Scandinavia and Japan (both around $24,500).
The high cost of IVF is also a barrier to access for disabled individuals, who typically have lower incomes, face higher health care costs, and seek health care services more often than non-disabled individuals.
Navigating insurance coverage for transgender expectant parents presents a unique challenge. Insurance plans are designed to cater towards a specific population, meaning that some plans can provide adequate coverage for gender-affirming care but fail to provide fertility services for transgender patients. Additionally, insurance coverage is constructed around a person's legally recognised sex and not their anatomy; thus, transgender people may not get coverage for the services they need, including transgender men for fertility services.
Use by LGBT individuals
Same-sex couples
In larger urban centres, studies have noted that lesbian, gay, bisexual, transgender and queer (LGBTQ+) populations are among the fastest-growing users of fertility care. IVF is increasingly being used to allow lesbian and other LGBT couples to share in the reproductive process through a technique called reciprocal IVF. The eggs of one partner are used to create embryos which the other partner carries through pregnancy. For gay male couples, many elect to use IVF through gestational surrogacy, where one partner's sperm is used to fertilise a donor ovum, and the resulting embryo is transplanted into a surrogate carrier's womb. There are various IVF options available for same-sex couples including, but not limited to, IVF with donor sperm, IVF with a partner's oocytes, reciprocal IVF, IVF with donor eggs, and IVF with gestational surrogate. IVF with donor sperm can be considered traditional IVF for lesbian couples, but reciprocal IVF or using a partner's oocytes are other options for lesbian couples trying to conceive to include both partners in the biological process. Using a partner's oocytes is an option for partners who are unsuccessful in conceiving with their own, and reciprocal IVF involves undergoing reproduction with a donor egg and sperm that is then transferred to a partner who will gestate. Donor IVF involves conceiving with a third party's eggs. Typically, for gay male couples hoping to use IVF, the common techniques are using IVF with donor eggs and gestational surrogates.
Transgender parents
Many LGBT communities centre their support around cisgender gay, lesbian and bisexual people and neglect to include proper support for transgender people. The same 2020 literature review analyses the social, emotional and physical experiences of pregnant transgender men. A common obstacle faced by pregnant transgender men is the possibility of gender dysphoria. Literature shows that transgender men report uncomfortable procedures and interactions during their pregnancies as well as feeling misgendered due to gendered terminology used by healthcare providers. Outside of the healthcare system, pregnant transgender men may experience gender dysphoria due to cultural assumptions that all pregnant people are cisgender women. These people use three common approaches to navigating their pregnancy: passing as a cisgender woman, hiding their pregnancy, or being out and visibly pregnant as a transgender man. Some transgender and gender diverse patients describe their experience in seeking gynaecological and reproductive health care as isolating and discriminatory, as the strictly binary healthcare system often leads to denial of healthcare coverage or unnecessary revelation of their transgender status to their employer.
Many transgender people retain their original sex organs and choose to have children through biological reproduction. Advances in assisted reproductive technology and fertility preservation have broadened the options transgender people have to conceive a child using their own gametes or a donor's. Transgender men and women may opt for fertility preservation before any gender affirming surgery, but it is not required for future biological reproduction. It is also recommended that fertility preservation is conducted before any hormone therapy. Additionally, while fertility specialists often suggest that transgender men discontinue their testosterone hormones prior to pregnancy, research on this topic is still inconclusive. However, a 2019 study found that transgender male patients seeking oocyte retrieval via assisted reproductive technology (including IVF) were able to undergo treatment four months after stopping testosterone treatment, on average. All patients experienced menses and normal AMH, FSH and E2 levels and antral follicle counts after coming off testosterone, which allowed for successful oocyte retrieval. Despite assumptions that the long-term androgen treatment negatively impacts fertility, oocyte retrieval, an integral part of the IVF process, does not appear to be affected.
Biological reproductive options available to transgender women include, but are not limited to, IVF and IUI with the trans woman's sperm and a donor or a partner's eggs and uterus. Fertility treatment options for transgender men include, but are not limited to, IUI or IVF using his own eggs with a donor's sperm and/or donor's eggs, his uterus, or a different uterus, whether that is a partner's or a surrogate's.
Use by disabled individuals
People with disabilities who wish to have children are equally or more likely than the non-disabled population to experience infertility, yet disabled individuals are much less likely to have access to fertility treatment such as IVF. There are many extraneous factors that hinder disabled individuals access to IVF, such as assumptions about decision-making capacity, sexual interests and abilities, heritability of a disability, and beliefs about parenting ability. These same misconceptions about people with disabilities that once led health care providers to sterilise thousands of women with disabilities now lead them to provide or deny reproductive care on the basis of stereotypes concerning people with disabilities and their sexuality.
Not only do misconceptions about disabled individuals parenting ability, sexuality, and health restrict and hinder access to fertility treatment such as IVF, structural barriers such as providers uneducated in disability healthcare and inaccessible clinics severely hinder disabled individuals access to receiving IVF.
By country
Australia
In Australia, the average age of women undergoing ART treatment is 35.5 years among those using their own eggs (one in four being 40 or older) and 40.5 years among those using donated eggs. While IVF is available in Australia, Australians using IVF are unable to choose their baby's gender.
Cameroon
Ernestine Gwet Bell supervised the first Cameroonian child born by IVF in 1998.
Canada
In Canada, one cycle of IVF treatment can cost between $7,750 to $12,250 CAD, and medications alone can cost between $2,500 to over $7,000 CAD. The funding mechanisms that influence accessibility in Canada vary by province and territory, with some provinces providing full, partial or no coverage.
New Brunswick provides partial funding through their Infertility Special Assistance Fund – a one time grant of up to $5,000. Patients may only claim up to 50% of treatment costs or $5,000 (whichever is less) occurred after April 2014. Eligible patients must be a full-time New Brunswick resident with a valid Medicare card and have an official medical infertility diagnosis by a physician.
In December 2015, the Ontario provincial government enacted the Ontario Fertility Program for patients with medical and non-medical infertility, regardless of sexual orientation, gender or family composition. Eligible patients for IVF treatment must be Ontario residents under the age of 43 and have a valid Ontario Health Insurance Plan card and have not already undergone any IVF cycles. Coverage is extensive, but not universal. Coverage extends to certain blood and urine tests, physician/nurse counselling and consultations, certain ultrasounds, up to two cycle monitorings, embryo thawing, freezing and culture, fertilisation and embryology services, single transfers of all embryos, and one surgical sperm retrieval using certain techniques only if necessary. Drugs and medications are not covered under this Program, along with psychologist or social worker counselling, storage and shipping of eggs, sperm or embryos, and the purchase of donor sperm or eggs.
China
IVF is expensive in China and not generally accessible to unmarried women. In August 2022, China's National Health Authority announced that it will take steps to make assisted reproductive technology more accessible, including by guiding local governments to include such technology in its national medical system.
Croatia
No egg or sperm donations take place in Croatia, however using donated sperm or egg in ART and IUI is allowed. With donated eggs, sperm or embryo, a heterosexual couple and single women have legal access to IVF. Male or female couples do not have access to ART as a form of reproduction. The minimum age for males and females to access ART in Croatia is 18 there is no maximum age. Donor anonymity applies, but the born child can be given access to the donor's identity at a certain age
India
The penetration of the IVF market in India is quite low, with only 2,800 cycles per million infertile people in the reproductive age group (20–44 years), as compared to China, which has 6,500 cycles. The key challenges are lack of awareness, affordability and accessibility. Since 2018, however, India has become a destination for fertility tourism, because of lower costs than in the Western world. In December 2021, the Lok Sabha passed the Assisted Reproductive Technology (Regulation) Bill 2020, to regulate ART services including IVF centres, sperm and egg banks.
Israel
Israel has the highest rate of IVF in the world, with 1,657 procedures performed per million people per year. Couples without children can receive funding for IVF for up to two children. The same funding is available for people without children who will raise up to two children in a single parent home. IVF is available for people aged 18 to 45. The Israeli Health Ministry says it spends roughly $3450 per procedure.
Sweden
One, two or three IVF treatments are government subsidised for people who are younger than 40 and have no children. The rules for how many treatments are subsidised, and the upper age limit for the people, vary between different county councils. Single people are treated, and embryo adoption is allowed. There are also private clinics that offer the treatment for a fee.
United Kingdom
Availability of IVF in England is determined by Clinical Commissioning Groups (CCGs). The National Institute for Health and Care Excellence (NICE) recommends up to 3 cycles of treatment for people under 40 years old with minimal success conceiving after 2 years of unprotected sex. Cycles will not be continued for people who are older than 40 years. CCGs in Essex, Bedfordshire and Somerset have reduced funding to one cycle, or none, and it is expected that reductions will become more widespread. Funding may be available in "exceptional circumstances" – for example if a male partner has a transmittable infection or one partner is affected by cancer treatment. According to the campaign group Fertility Fairness "at the end of 2014 every CCG in England was funding at least one cycle of IVF". Prices paid by the NHS in England varied between under £3,000 to more than £6,000 in 2014/5. In February 2013, the cost of implementing the NICE guidelines for IVF along with other treatments for infertility was projected to be £236,000 per year per 100,000 members of the population.
IVF increasingly appears on NHS treatments blacklists. In August 2017 five of the 208 CCGs had stopped funding IVF completely and others were considering doing so. By October 2017 only 25 CCGs were delivering the three recommended NHS IVF cycles to eligible people under 40. Policies could fall foul of discrimination laws if they treat same sex couples differently from heterosexual ones. In July 2019 Jackie Doyle-Price said that women were registering with surgeries further away from their own home in order to get around CCG rationing policies.
The Human Fertilisation and Embryology Authority said in September 2018 that parents who are limited to one cycle of IVF, or have to fund it themselves, are more likely choose to implant multiple embryos in the hope it increases the chances of pregnancy. This significantly increases the chance of multiple births and the associated poor outcomes, which would increase NHS costs. The president of the Royal College of Obstetricians and Gynaecologists said that funding 3 cycles was "the most important factor in maintaining low rates of multiple pregnancies and reduce(s) associated complications".
United States
In the United States, overall availability of IVF in 2005 was 2.5 IVF physicians per 100,000 population, and utilisation was 236 IVF cycles per 100,000. 126 procedures are performed per million people per year. Utilisation highly increases with availability and IVF insurance coverage, and to a significant extent also with percentage of single persons and median income. In the US, an average cycle, from egg retrieval to embryo implantation, costs $12,400, and insurance companies that do cover treatment, even partially, usually cap the number of cycles they pay for. As of 2015, more than 1 million babies had been born utilising IVF technologies.
In the US, as of September 2023, 21 states and the District of Columbia had passed laws for fertility insurance coverage. In 15 of those jurisdictions, some level of IVF coverage is included, and in 17, some fertility preservation services are included. Eleven states require coverage for both fertility preservation and IVF: Colorado, Connecticut, Delaware, Maryland, Maine, New Hampshire, New Jersey, New York, Rhode Island, Utah, and Washington D.C. The states that have infertility coverage laws are Arkansas, California, Colorado, Connecticut, Delaware, Hawaii, Illinois, Louisiana, Maryland, Massachusetts, Montana, New Hampshire, New Jersey, New York, Ohio, Rhode Island, Texas, Utah, and West Virginia. As of July 2023, New York was reportedly the only state Medicaid program to cover IVF. These laws differ by state but many require an egg be fertilised with sperm from a spouse and that in order to be covered you must show you cannot become pregnant through penile-vaginal sex. These requirements are not possible for a same-sex couple to meet.
Many fertility clinics in the United States limit the upper age at which people are eligible for IVF to 50 or 55 years. These cut-offs make it difficult for people older than 55 to utilise the procedure.
Legal status
In 2003, government agencies in China banned the use of IVF by unmarried people or by couples with certain infectious diseases.
In India, the use of IVF as a means of sex selection (preimplantation genetic diagnosis) is banned under the Pre-Conception and Pre-Natal Diagnostic Techniques Act, 1994.
Sunni Muslim nations generally allow IVF between married couples when conducted with their own respective sperm and eggs, but not with donor eggs from other couples. Iran, which is Shi'a Muslim, has a more complex scheme: it bans sperm donation but allows the donation of both fertilised and unfertilised eggs. Fertilised eggs are donated from married couples to other married couples, while unfertilised eggs are donated in the context of mut'ah, or temporary marriage, to the father.
By 2012 Costa Rica was the only country in the world with a complete ban on IVF technology, the practice having been ruled unconstitutional by the nation's Supreme Court because it "violated life". Costa Rica had been the only country in the western hemisphere that forbade IVF. A law project sent reluctantly by the government of President Laura Chinchilla was rejected by parliament. President Chinchilla did not publicly state her position on the question of IVF, and given the massive influence of the Catholic Church in her government any change in the status quo seemed very unlikely. In spite of the Costa Rican government and strong religious opposition, the IVF ban was struck down by the Inter-American Court of Human Rights in a decision of 20 December 2012. The court said that a long-standing Costa Rican guarantee of protection for every human embryo violated the reproductive freedom of infertile couples because it prohibited them from using IVF, which often involves the disposal of embryos not implanted in a woman's uterus. On 10 September 2015, President Luis Guillermo Solís signed a decree legalising in-vitro fertilisation. The decree was added to the country's official gazette on 11 September. Opponents of the practice have since filed a lawsuit before the country's Constitutional Court.
All major restrictions on single but infertile people using IVF were lifted in Australia in 2002 after a final appeal to the Australian High Court was rejected on procedural grounds in the Leesa Meldrum case. A federal court had ruled in 2000 that the existing Victorian ban on all single women and lesbians using IVF constituted sex discrimination. Victoria's government announced changes to its IVF law in 2007 eliminating remaining restrictions on fertile single women and lesbians, leaving South Australia as the only state maintaining them.
United States
Despite strong popular support (7 out of 10 adults consider IVF access a good thing and 67% believe that health insurance plans should cover IVF), IVF can involve complicated legal issues and has become a contentious issue in US politics. Federal regulations include screening requirements and restrictions on donations, but these generally do not affect heterosexually intimate partners. Doctors may be required to provide treatments to unmarried or LGBTQ couples under non-discrimination laws, as for example in California. The state of Tennessee proposed a bill in 2009 that would have defined donor IVF as adoption. During the same session, another bill proposed barring adoption from any unmarried and cohabitating couple, and activist groups stated that passing the first bill would effectively stop unmarried women from using IVF. Neither of these bills passed.
In 2023, the Practice Committee of the American Society for Reproductive Medicine (ASRM) updated its guidelines for the definition of "infertility" to include those who need medical interventions "in order to achieve a successful pregnancy either as an individual or with a partner." In many states, legal and financial decisions about the provision of infertility treatments reference this "official" definition. On September 29, 2024, California Governor Gavin Newsom signed SB 729, legislation that aligns with the ASRM definition of "infertility".
In the United States, much of the opposition to the use of IVF is associated with the anti-abortion movement, evangelicals, and denominations such as the Southern Baptists. Current legal opposition to IVF and other fertility treatment access has stemmed from recent court rulings regarding women's reproductive healthcare. In the 2022 Dobbs v. Jackson Women's Health Organization decision, the U.S. Supreme Court overturned the 1973 Roe v. Wade decision which had federally protected the right to abortion. The 2024 Alabama Supreme Court decision regarding IVF has since threatened IVF access and legality in the U.S. Frozen embryos at an IVF clinic were accidentally destroyed, resulting in a lawsuit in which the attorneys for the plaintiffs sought damages under the Wrongful Death of a Minor Act. The court ruled in favor of the plaintiffs, setting a state-level precedent that embryos and fetuses have the same rights as minor children, regardless of whether they are in utero. This has created confusion over the status of unused embryos and questions surrounding when life begins. After the court's decision, numerous IVF clinics in Alabama halted IVF treatment services for fear of civil and criminal liability associated with the new rights granted to embryos. Since then, laws proposing embryonic personhood have been introduced in 13 other states, creating fear of further state restrictions. The ruling raised concerns from The National Infertility Association and the American Society for Reproductive Medicine that the decision would mean Alabama's bans on abortion prohibit IVF as well, and the University of Alabama at Birmingham health system paused IVF treatments. Eight days later the Alabama legislature voted to protect IVF providers and patients from criminal or civil liability.
The Right to IVF Act, federal legislation that would have codified a right to fertility treatments and provided insurance coverage for in vitro fertilisation treatments, was twice brought to a vote in the Senate in 2024. Both times it was blocked by Senate Republicans, of whom only Lisa Murkowski and Susan Collins voted to move the bill forward.
Few American courts have addressed the issue of the "property" status of a frozen embryo. This issue might arise in the context of a divorce case, in which a court would need to determine which spouse would be able to decide the disposition of the embryos. It could also arise in the context of a dispute between a sperm donor and egg donor, even if they were unmarried. In 2015, an Illinois court held that such disputes could be decided by reference to any contract between the parents-to-be. In the absence of a contract, the court would weigh the relative interests of the parties.
Alternatives
Some alternatives to IVF are:
Artificial insemination, including intracervical insemination and intrauterine insemination of semen. It requires that a woman ovulates, but is a relatively simple procedure, and can be used in the home for self-insemination without medical practitioner assistance. Beneficiaries of artificial insemination include single people who wish to give birth to their own child, people in lesbian relationships, and women in heterosexual relationships whose male partner is infertile or has a physical impairment that prevents full intercourse from taking place.
Ovulation induction (in the sense of medical treatment aiming for the development of one or two ovulatory follicles) is an alternative for people with anovulation or oligoovulation, since it is less expensive and easier to control. It generally involves antiestrogens such as clomifene citrate or letrozole, and is followed by natural or artificial insemination.
Surrogacy, the process in which a surrogate agrees to bear a child for another person or persons, who will become the child's parent(s) after birth. People may seek a surrogacy arrangement when pregnancy is medically impossible, when pregnancy risks are too dangerous for the intended gestational carrier, or when a single man or a male couple wish to have a child.
Adoption whereby a person assumes the parenting of another, usually a child, from that person's biological or legal parent or parents.
| Biology and health sciences | Health and fitness: General | Health |
57893 | https://en.wikipedia.org/wiki/Ficus | Ficus | Ficus is a genus of about 850 species of woody trees, shrubs, vines, epiphytes and hemiepiphytes in the family Moraceae. Collectively known as fig trees or figs, they are native throughout the tropics with a few species extending into the semi-warm temperate zone. The common fig (F. carica) is a temperate species native to southwest Asia and the Mediterranean region (from Afghanistan to Portugal), which has been widely cultivated from ancient times for its fruit, also referred to as figs. The fruit of most other species are also edible though they are usually of only local economic importance or eaten as bushfood. However, they are extremely important food resources for wildlife. Figs are also of considerable cultural importance throughout the tropics, both as objects of worship and for their many practical uses.
Description
Ficus is a pantropical genus of trees, shrubs, and vines occupying a wide variety of ecological niches; most are evergreen, but some deciduous species are found in areas outside of the tropics and at higher elevations. Fig species are characterized by their unique inflorescence and distinctive pollination syndrome, which uses wasp species belonging to the family Agaonidae for pollination. Adult plants vary in size from Ficus benghalensis (the Indian banyan), a tall and spreading tree with many adventitious roots which may cover a hectare (2.5 acres) or more of ground, to Ficus nana of New Guinea, which never exceeds one meter (forty inches) in height and width.
Specific identification of many of the species can be difficult, but members of the genus Ficus are relatively easy to recognize. Many have aerial roots and a distinctive shape or habit, and their fruits distinguish them from other plants. The fruit of Ficus is an inflorescence enclosed in an urn-like structure called a syconium, which is lined on the inside with the fig's tiny flowers that develop into multiple ovaries on the inside surface. In essence, the fig fruit is a fleshy stem with multiple tiny flowers that fruit and coalesce.
Notably, three vegetative traits together are unique to figs. All figs present a white to yellowish latex, some in copious quantities; the twig shows paired stipules, or circular scars if the stipules have fallen off; and the lateral veins at the base of the leaf are steep, forming a tighter angle with the midrib than the other lateral veins, a feature referred to as "triveined".
Current molecular clock estimates indicate that Ficus is a relatively ancient genus, being at least 60 million years old, and possibly as old as 80 million years. The main radiation of extant species, however, may have taken place more recently, between 20 and 40 million years ago.
Some better-known species that represent the diversity of the genus include, alongside the common fig, whose fingered fig leaf is well known in art and iconography: the weeping fig (F. benjamina), a hemiepiphyte with thin, tough leaves on pendulous stalks adapted to its rain forest habitat; the rough-leaved sandpaper figs from Australia; and the creeping fig (F. pumila), a vine whose small, hard leaves form a dense carpet of foliage over rocks or garden walls.
Moreover, figs with different plant habits have undergone adaptive radiation in different biogeographic regions, leading to very high levels of alpha diversity. In the tropics, Ficus commonly is the most species-rich plant genus in a particular forest. In Asia, as many as 70 or more species can co-exist. Ficus species richness declines with an increase in latitude in both hemispheres.
A description of fig tree cultivation is set out in Ibn al-'Awwam's 12th-century agricultural work, the Book on Agriculture.
Ecology
Figs are keystone species in many tropical forest ecosystems. Their fruit are a key resource for some frugivores, including fruit bats and primates such as capuchin monkeys, langurs, gibbons and mangabeys. They are even more important for birds such as Asian barbets, pigeons, hornbills, fig-parrots and bulbuls, which may subsist almost entirely on figs when these are plentiful. Many Lepidoptera caterpillars feed on fig leaves, for example several Euploea species (crow butterflies), the plain tiger (Danaus chrysippus), the giant swallowtail (Papilio cresphontes), the brown awl (Badamia exclamationis), and Chrysodeixis eriosoma, as well as Choreutidae and Copromorphidae moths. The citrus long-horned beetle (Anoplophora chinensis) has larvae that feed on wood, including that of fig trees; it can become a pest in fig plantations. Similarly, the sweet potato whitefly (Bemisia tabaci) is frequently found as a pest on figs grown as potted plants and is spread through the export of these plants to other localities. For a list of other diseases common to fig trees, see List of foliage plant diseases (Moraceae).
Fig fruit and reproduction system
Many fig species are grown for their fruits, though only Ficus carica is cultivated to any extent for this purpose. A fig "fruit" is a type of multiple fruit known as a syconium, derived from an arrangement of many small flowers on an inverted, nearly closed receptacle. The many small flowers are unseen unless the fig is cut open.
The fruit typically has a bulbous shape with a small opening (the ostiole) at the outward end that allows access to pollinators. The flowers are pollinated by very small wasps, such as Pegoscapus, that crawl through the opening in search of a suitable place to lay eggs. Without this pollinator service fig trees could not reproduce by seed. In turn, the flowers provide a safe haven and nourishment for the next generation of wasps. This accounts for the frequent presence of wasp larvae in the fruit, and has led to a coevolutionary relationship. Technically, a fig fruit proper is only one of the many tiny matured, seed-bearing gynoecia found inside one fig; if a fresh fig is cut open, the individual fruits appear as fleshy "threads", each bearing a single seed inside. The genus Dorstenia, also in the fig family (Moraceae), exhibits similar tiny flowers arranged on a receptacle, but in this case the receptacle is a more or less flat, open surface.
Fig plants can be monoecious (hermaphrodite) or gynodioecious (hermaphrodite and female). Nearly half of fig species are gynodioecious, and therefore have some plants with inflorescences (syconia) bearing long-styled pistillate flowers, and other plants with staminate flowers mixed with short-styled pistillate flowers. The long-styled flowers tend to prevent wasps from laying their eggs within the ovules, while the short-styled flowers are accessible for egg laying.
All the native fig trees of the American continent are hermaphrodites, as are species like the Indian banyan (F. benghalensis), weeping fig (F. benjamina), Indian rubber plant (F. elastica), fiddle-leaved fig (F. lyrata), Moreton Bay fig (F. macrophylla), Chinese banyan (F. microcarpa), sacred fig (F. religiosa) and sycamore fig (F. sycomorus). The common fig (Ficus carica) is a gynodioecious plant, as are the lofty fig or clown fig (F. aspera), Roxburgh fig (F. auriculata), mistletoe fig (F. deltoidea), F. pseudopalma, creeping fig (F. pumila) and related species. The hermaphrodite common figs are called "inedible figs" or "caprifigs"; in traditional culture in the Mediterranean region they were considered food for goats (Capra aegagrus). In the female fig trees, the male flower parts fail to develop; they produce the "edible figs". Fig wasps grow in common fig caprifigs but not in the female syconia, because the female flower is too long for the wasp to successfully lay her eggs in it. Nonetheless, the wasp pollinates the flower with pollen from the caprifig it grew up in. In many situations, the wasp pollinator is unable to escape and dies within the fruit. When the wasp dies, it is broken down by enzymes (ficain) inside the fig. Fig wasps are not known to transmit any diseases harmful to humans.
When a caprifig ripens, another caprifig must be ready to be pollinated. In temperate climes, wasps hibernate in figs, and there are distinct crops. Caprifigs have three crops per year; common figs have two. The first crop (breba) is larger and juicier, and usually eaten fresh. In cold climates the breba crop is often destroyed by spring frosts. Some parthenocarpic cultivars of common figs do not require pollination at all, and will produce a crop of figs (albeit sterile) in the absence of caprifigs and fig wasps.
Depending on the species, each fruit can contain hundreds or even thousands of seeds. Figs can be propagated by seeds, cuttings, air-layering or grafting. However, as with any plant, figs grown from seed are not necessarily genetically identical to the parent and are only propagated this way for breeding purposes.
Mutualism with the pollinating fig wasps
The unique fig pollination system involves tiny, highly specific wasps, known as fig wasps, that enter these nearly closed inflorescences through the ostiole to both pollinate and lay their own eggs. Each species of fig is pollinated by one or a few specialised wasp species, and therefore planting fig species outside their native range results in effectively sterile individuals. For example, in Hawaii, some 60 species of figs have been introduced, but only four of the wasps that fertilize them, so only those species of figs produce viable seeds there and can become invasive species. This is an example of mutualism, in which each organism (fig plant and fig wasp) benefits the other, in this case reproductively.
The intimate association between fig species and their wasp pollinators, along with the high incidence of one-to-one plant-pollinator ratios, has long led scientists to believe that figs and wasps are a clear example of coevolution. Evidence from morphology and reproductive behavior, such as the correspondence between fig and wasp larval maturation rates, has been cited as support for this hypothesis for many years. Additionally, recent genetic and molecular dating analyses have shown a very close correspondence in the character evolution and speciation phylogenies of these two clades.
According to a meta-analysis of molecular data for 119 fig species, 35% (41) have multiple pollinator wasp species. The real proportion is higher because not all wasp species were detected. Conversely, some wasp species pollinate multiple host fig species. Molecular techniques, like microsatellite markers and mitochondrial sequence analysis, have allowed the discovery of multiple genetically distinct, cryptic wasp species. Not all these cryptic species are sister taxa, and thus some must have experienced a host fig shift at some point. These cryptic species lacked evidence of genetic introgression or backcrosses, indicating limited fitness for hybrids and effective reproductive isolation and speciation.
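As a quick worked check of the proportion quoted above (a restatement of the study's own figures, not a new result):

\[
\frac{41}{119} \approx 0.345 \approx 35\%
\]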
The existence of cryptic species suggests that neither the number of symbionts nor their evolutionary relationships are necessarily fixed ecologically. While the morphological characteristics that facilitate the fig-wasp mutualisms are likely to be shared more fully in closer relatives, the absence of unique pairings would make it impossible to do a one-to-one tree comparison and difficult to determine cospeciation.
Systematics
With over 800 species, Ficus is by far the largest genus in the Moraceae, and is one of the largest genera of flowering plants currently described. The species currently classified within Ficus were originally split into several genera in the mid-1800s, providing the basis for a subgeneric classification when reunited into one genus in 1867. This classification put functionally dioecious species into four subgenera based on floral characters. In 1965, E. J. H. Corner reorganized the genus on the basis of breeding system, uniting these four dioecious subgenera into a single dioecious subgenus Ficus. Monoecious figs were classified within the subgenera Urostigma, Pharmacosycea and Sycomorus.
This traditional classification has been called into question by recent phylogenetic studies employing genetic methods to investigate the relationships between representative members of the various sections of each subgenus. Of Corner's original subgeneric divisions of the genus, only Sycomorus is supported as monophyletic in the majority of phylogenetic studies. Notably, there is no clear split between dioecious and monoecious lineages. One of the two sections of Pharmacosycea, a monoecious group, forms a monophyletic clade basal to the rest of the genus, which includes the other section of Pharmacosycea, the rest of the monoecious species, and all of the dioecious species. These remaining species are divided into two main monophyletic lineages (though the statistical support for these lineages is not as strong as for the monophyly of the more derived clades within them). One consists of all sections of Urostigma except for section Urostigma s. s. The other includes section Urostigma s. s., subgenus Sycomorus, and the species of subgenus Ficus, though the relationships of the sections of these groups to one another are not well resolved.
Selected species
As of April 2024, there are 880 accepted Ficus species according to Plants of the World Online.
Subgenus Ficus
Ficus amplissima Sm. – bat fig
Ficus carica L. – common fig
Ficus daimingshanensis Chang
Ficus deltoidea Jack – mistletoe fig
Ficus erecta Thunb. – Japanese fig
Ficus fulva Reinw. ex Blume
Ficus grossularioides Burman f. – white-leaved fig
Ficus neriifolia Sm.
Ficus palmata Forssk.
Ficus pandurata Hance
Ficus simplicissima Lour. (synonym Ficus hirta Vahl)
Ficus triloba Buch.-Ham. ex Voigt
Subgenus Pharmacosycea
Ficus crassiuscula Standl.
Ficus gigantosyce Dugand
Ficus insipida Willd.
Ficus lacunata Kvitvik
Ficus maxima Mill.
Ficus mutabilis Bureau
Ficus nervosa Heyne ex Roth
Ficus pulchella Schott
Ficus yoponensis Desv.
Subgenus Sycidium
Ficus andamanica Corner
Ficus aspera G.Forst.
Ficus assamica Miq.
Ficus bojeri Baker
Ficus capreifolia Delile
Ficus coronata Spin – creek sandpaper fig
Ficus fraseri Miq. – shiny sandpaper fig
Ficus heterophylla L.f.
Ficus lateriflora Vahl
Ficus montana Burm.f. – oakleaf fig
Ficus opposita Miq. – sweet sandpaper fig
Ficus phaeosyce K.Schum. & Lauterb.
Ficus tinctoria G.Forst. – dye fig
Ficus ulmifolia Lam.
Ficus wassa Roxb.
Ficus parietalis
Ficus sinuata
Ficus hampelas
Subgenus Sycomorus
Ficus auriculata Lour. – Roxburgh fig
Ficus bernaysii King
Ficus brusii Weiblen – lowland form of breadfruit, kapiak
Ficus dammaropsis Diels – highland breadfruit, kapiak
Ficus fistulosa Blume
Ficus hispida L.
Ficus nota Merr. – tibig
Ficus pseudopalma Blanco
Ficus racemosa L. – cluster fig
Ficus septica Burm.f. – hauli tree
Ficus sycomorus L., 1753 – sycamore fig (Africa)
Ficus variegata Blume
Subgenus Synoecia
The following species are typically spreading or climbing lianas:
Ficus hederacea Roxb.
Ficus pantoniana King – climbing fig
Ficus pumila L. – creeping fig
Ficus pumila var. awkeotsang (Makino) Corner – jelly fig
Ficus punctata Thunb.
Ficus sagittata J. König ex Vahl
Ficus sarmentosa Buch.-Ham. ex Sm.
Ficus trichocarpa Blume
Ficus villosa Blume
Subgenus Urostigma
Ficus abutilifolia Miq.
Ficus albert-smithii Standl.
Ficus altissima Blume
Ficus amazonica Miq.
Ficus americana Aubl.
Ficus aripuanensis Berg & Kooy
Ficus arpazusa Carauta and Diaz – Brazil
Ficus aurea Nutt. – Florida strangler fig
Ficus beddomei King – thavital
Ficus benghalensis L. – Indian banyan
Ficus benjamina L. – weeping fig
Ficus binnendijkii Miq.
Ficus bizanae Hutch. & Burtt-Davy
Ficus blepharophylla Vázquez Avila
Ficus broadwayi Urb.
Ficus burtt-davyi Hutch.
Ficus calyptroceras Miq.
Ficus castellviana Dugand
Ficus catappifolia Kunth & Bouché
Ficus citrifolia Mill. – short-leaved fig
Ficus consociata Bl.
Ficus cordata Thunb.
Ficus costata Ait.
Ficus crassipes F.M.Bailey – round-leaved banana fig
Ficus craterostoma Mildbr. & Burret
Ficus cyathistipula Warb.
Ficus cyclophylla (Miq.) Miq.
Ficus dendrocida Kunth
Ficus depressa Bl.
Ficus destruens F.White
Ficus drupacea Thunb.
Ficus elastica Hornem. – rubber plant
Ficus exasperata Vahl.
Ficus faulkneriana Berg
Ficus fergusonii (King) T.B.Worth. ex Corner
Ficus glaberrima Blume
Ficus glumosa Delile
Ficus greiffiana Dugand
Ficus hirsuta Schott
Ficus ilicina Miq.
Ficus kerkhovenii Valeton – Johore fig
Ficus kurzii King
Ficus luschnathiana Miq.
Ficus ingens Miq.
Ficus krukovii Standl.
Ficus lacor Buch.-Ham.
Ficus lapathifolia Miq.
Ficus lauretana Vázquez Avila
Ficus lutea Vahl
Ficus lyrata Warb. – fiddle-leaved fig
Ficus maclellandii King – Alii fig
Ficus macrophylla Desf. ex Pers. – Moreton Bay fig
Ficus malacocarpa Standl.
Ficus mariae Berg, Emygdio & Carauta
Ficus mathewsii Miq.
Ficus matiziana Dugand
Ficus microcarpa L. – Chinese banyan
Ficus muelleriana Berg
Ficus natalensis Hochst. – Natal fig
Ficus obliqua G.Forst. – small-leaved fig
Ficus obtusifolia Kunth
Ficus pakkensis Standl.
Ficus pallida Vahl
Ficus panurensis Standl.
Ficus pertusa L.f.
Ficus petiolaris Kunth
Ficus pisocarpa Bl.
Ficus platypoda Cunn. – desert fig
Ficus pleurocarpa DC. – banana fig
Ficus polita Vahl
Ficus religiosa L. – sacred fig
Ficus roraimensis Berg
Ficus rubiginosa Desf. – Port Jackson fig
Ficus rumphii Blume
Ficus salicifolia Vahl – willow-leaved fig
Ficus sansibarica Warb.
Ficus schippii Standl.
Ficus schultesii Dugand
Ficus schumacheri Griseb.
Ficus sphenophylla Standl.
Ficus stuhlmannii Warb.
Ficus subcordata Bl.
Ficus subpisocarpa Gagnep.
Ficus subpuberula Corner
Ficus sumatrana Miq.
Ficus superba Miq.
Ficus superba var. henneana (Miq.) Corner
Ficus thonningii Blume
Ficus trichopoda Baker
Ficus trigona L.f.
Ficus trigonata L.
Ficus triradiata Corner – red-stipule fig
Ficus ursina Standl.
Ficus velutina Willd.
Ficus verruculosa Warb.
Ficus virens Aiton – white fig
Ficus virens var. sublanceolata (Miq.) Corner – sour fig
Ficus watkinsiana F.M.Bailey – Watkins' fig
Unknown subgenus
Ficus bibracteata
Ficus callosa Willd.
Ficus cristobalensis
Ficus hebetifolia
Ficus punctata
Ficus tsjahela Burm.f.
Ficus nymphaeifolia Mill.
Uses
The wood of fig trees is often soft, and the latex precludes its use for many purposes. It was used to make mummy caskets in Ancient Egypt. Certain fig species (mainly F. cotinifolia, F. insipida and F. padifolia) are traditionally used in Mesoamerica to produce papel amate (Nahuatl: āmatl). Mutuba (F. natalensis) is used to produce barkcloth in Uganda. The shape of the leaves of pou (F. religiosa) inspired one of the standard kbach rachana, decorative elements in Cambodian architecture. Indian banyan (F. benghalensis) and the Indian rubber plant, as well as other species, have use in herbalism. The inner bark of an unknown type of wild fig, locally known as urú, was once used by an indigenous people of Bolivia to produce a fibrous cloth used for clothing.
Figs have figured prominently in some human cultures. There is evidence that figs, specifically the common fig (F. carica) and sycamore fig (Ficus sycomorus), were among the first plant species deliberately bred for agriculture in the Middle East, starting more than 11,000 years ago. Nine subfossil F. carica figs dated to about 9400–9200 BCE were found in the early Neolithic village Gilgal I (in the Jordan Valley, 13 km, or 8.1 mi, north of Jericho). These were a parthenocarpic type and thus apparently an early cultivar. This find predates the first known cultivation of grain in the Middle East by many hundreds of years.
Cultivation
Numerous species of fig are found in cultivation in domestic and office environments, including:
F. carica, common fig – hardy to . Shrub or small tree which can be grown outdoors in mild temperate regions, producing substantial harvests of fruit. Many cultivars are available.
F. benjamina, weeping fig, ficus – hardy to . Widely used as an indoor plant for the home or the office. It benefits from the dry, warm atmosphere of centrally-heated interiors, and can grow to substantial heights in a favoured position. Several variegated cultivars are available.
F. elastica, rubber plant – hardy to : widely cultivated as a houseplant; several cultivars with variegated leaves
F. lyrata, fiddle-leaf fig – hardy to
F. maclellandii – hardy to
F. microcarpa, Indian laurel – hardy to
F. pumila, creeping fig – hardy to
F. rubiginosa, Port Jackson fig – hardy to
Cultural and spiritual significance
Fig trees have profoundly influenced culture through several religious traditions. Among the more famous species are the sacred fig tree (Pipal, bodhi, bo, or po, Ficus religiosa) and other banyan figs such as Ficus benghalensis. The oldest living plant of known planting date is a Ficus religiosa tree known as the Sri Maha Bodhi, planted in the temple at Anuradhapura, Sri Lanka, by King Tissa in 288 BCE. In Asia, figs are important in Buddhism and Hinduism. The Buddha is traditionally held to have found bodhi (enlightenment) while meditating for 49 days under a sacred fig. The same species was Ashvattha, the "world tree" of Hinduism. The Plaksa Pra-sravana was said to be a fig tree between the roots of which the Sarasvati River sprang forth; it is usually held to be a sacred fig but more probably is Ficus virens. In Jainism, the consumption of any fruit belonging to this genus is prohibited. The common fig is one of two significant trees in Islam, and there is a sura in the Quran named "The Fig" or At-Tin (سوره تین). The common fig tree is cited in the Bible: Adam and Eve cover their nakedness with fig leaves. The fig fruit is one of the traditional crops of Israel, and is included in the list of foods found in the Promised Land. The fig tree was sacred in ancient Greece and Cyprus, where it was a symbol of fertility.
Famous fig trees
Ashvattha – the world tree of Hinduism, held to be a supernatural F. religiosa
Bodhi tree – a F. religiosa
Charybdis Fig Tree of Homer's Odyssey, presumably a F. carica
Curtain Fig Tree – a F. virens
Ficus Ruminalis – a F. carica
Plaksa – another supernatural fig in Hinduism; usually identified as F. religiosa but is probably F. virens
Santa Barbara's Moreton Bay Fig Tree – a F. macrophylla
Sri Maha Bodhi – another F. religiosa, planted in 288 BCE, the oldest human-planted tree on record
The Barren Fig Tree – in Matthew 21:19 of the Christian Bible, Jesus put a curse on a fig tree and used it as an example to believers of the power of faith in the only true God
The Great Banyan – a F. benghalensis, a clonal colony and once the largest organism known
Vidurashwatha – "Vidura's Sacred Fig Tree", a village in India named after a famous F. religiosa that until recently stood there
Wonderboom – the largest fig tree in Pretoria, South Africa, which has grown very large through self-layering (limbs lying on the ground take root)
External links
Figweb – major reference site for the genus Ficus
World checklist of Ficus species from the Catalogue of Life, 845 species supplied by M. Hassler's World Plants.
Video: Interaction of figs and fig wasps—Multi-award-winning documentary
Fruits of Warm Climates: Fig
BBC: Fig fossil clue to early farming
Checklist of Ficus species in Borneo Island
Video
How the fig tree strangles other plants for survival in the rainforest
| Biology and health sciences | Rosales | Plants |
57925 | https://en.wikipedia.org/wiki/Pituitary%20gland | Pituitary gland | The pituitary gland or hypophysis is an endocrine gland in vertebrates. In humans, the pituitary gland is located at the base of the brain, protruding off the bottom of the hypothalamus. The human pituitary gland is oval shaped, about 1 cm in diameter, and about the size of a kidney bean.
There are two main lobes of the pituitary, an anterior lobe and a posterior lobe, joined by a small intermediate lobe. The anterior lobe (adenohypophysis) is the glandular part that produces and secretes several hormones. The posterior lobe (neurohypophysis) secretes neurohypophysial hormones produced in the hypothalamus. The two lobes have different origins, and both are controlled by the hypothalamus.
Hormones secreted from the pituitary gland help to control growth, blood pressure, energy management, all functions of the sex organs, thyroid gland, metabolism, as well as some aspects of pregnancy, childbirth, breastfeeding, water/salt concentration at the kidneys, temperature regulation, and pain relief.
Structure
In humans, the pituitary gland rests upon the hypophyseal fossa of the sphenoid bone, in the center of the middle cranial fossa. It sits in a protective bony enclosure called the sella turcica, covered by the dural fold diaphragma sellae.
The pituitary gland is composed of the anterior pituitary, the posterior pituitary, and an intermediate lobe that joins them. The intermediate lobe is avascular and almost absent in humans. In many animals, these three lobes are distinct. The intermediate lobe is present in many animal species, particularly in rodents such as mice and rats, which have been used extensively to study pituitary development and function. In all animals, the fleshy, glandular anterior pituitary is distinct from the neural composition of the posterior pituitary, which is an extension of the hypothalamus.
The height of the pituitary gland ranges from 5.3 to 7.0 mm. The volume of the pituitary gland ranges from 200 to 440 mm³. Its most common shape, found in 46% of people, is flat; it is convex in 31.2% and concave in 22.8%.
Anterior
The anterior pituitary lobe (adenohypophysis) arises from an evagination of the oral ectoderm (Rathke's pouch). This contrasts with the posterior pituitary, which originates from neuroectoderm.
Endocrine cells of the anterior pituitary are controlled by regulatory hormones released by parvocellular neurosecretory cells in the hypothalamic capillaries leading to infundibular blood vessels, which in turn lead to a second capillary bed in the anterior pituitary. This vascular relationship constitutes the hypophyseal portal system. Diffusing out of the second capillary bed, the hypothalamic releasing hormones then bind to anterior pituitary endocrine cells, upregulating or downregulating their release of hormones.
The anterior lobe of the pituitary can be divided into the pars tuberalis (pars infundibularis) and pars distalis (pars glandularis), which constitutes ~80% of the gland. The pars intermedia (the intermediate lobe) lies between the pars distalis and the pars tuberalis, and is rudimentary in the human, although in other species it is more developed. The anterior lobe develops from a depression in the dorsal wall of the pharynx (stomodeal part) known as Rathke's pouch.
The anterior pituitary contains several different types of cells that synthesize and secrete hormones. Usually there is one type of cell for each major hormone formed in the anterior pituitary. With special stains attached to high-affinity antibodies that bind each distinctive hormone, at least five types of cells can be differentiated.
Posterior
The posterior pituitary consists of the posterior lobe and the pituitary stalk (infundibulum) that connects it to the hypothalamus. It develops as an extension of the hypothalamus, from the floor of the third ventricle. The posterior pituitary hormones are synthesized by cell bodies in the hypothalamus. The magnocellular neurosecretory cells, of the supraoptic and paraventricular nuclei located in the hypothalamus, project axons down the infundibulum to terminals in the posterior pituitary. This simple arrangement differs sharply from that of the adjacent anterior pituitary, which does not develop from the hypothalamus.
The release of pituitary hormones by both the anterior and posterior lobes is under the control of the hypothalamus, albeit in different ways.
Function
The anterior pituitary regulates several physiological processes by secreting hormones. This includes stress (by secreting ACTH), growth (by secreting GH), reproduction (by secreting FSH and LH), metabolism rate (by secreting TSH) and lactation (by secreting prolactin). The intermediate lobe synthesizes and secretes melanocyte-stimulating hormone. The posterior pituitary (or neurohypophysis) is a lobe of the gland that is functionally connected to the hypothalamus by the median eminence via a small tube called the pituitary stalk (also called the infundibular stalk or the infundibulum). It regulates hydroelectrolytic stability (by secreting ADH), uterine contraction during labor and human attachment (by secreting oxytocin).
Anterior
The anterior pituitary synthesizes and secretes hormones. All releasing hormones (-RH) referred to can also be referred to as releasing factors (-RF).
Somatotropes:
Growth hormone (GH), also known as somatotropin, is released under the influence of hypothalamic growth hormone-releasing hormone (GHRH), and is inhibited by hypothalamic somatostatin.
Corticotropes:
Corticotropes secrete hormones cleaved from the precursor protein proopiomelanocortin, including adrenocorticotropic hormone (ACTH), beta-endorphin, and melanocyte-stimulating hormone.
Thyrotropes:
Thyroid-stimulating hormone (TSH) is released under the influence of hypothalamic thyrotropin-releasing hormone (TRH) and is inhibited by somatostatin.
Gonadotropes:
Luteinizing hormone (LH), stimulated by gonadotropin-releasing hormone (GnRH)
Follicle-stimulating hormone (FSH), also stimulated by gonadotropin-releasing hormone (GnRH), as well as by activin
Lactotropes:
Prolactin (PRL), whose release is inconsistently stimulated by hypothalamic TRH, oxytocin, vasopressin, vasoactive intestinal peptide, angiotensin II, neuropeptide Y, galanin, substance P, bombesin-like peptides (gastrin-releasing peptide, neuromedin B and C), and neurotensin, and inhibited by hypothalamic dopamine.
These hormones are released from the anterior pituitary under the influence of the hypothalamus. Hypothalamic hormones are secreted to the anterior lobe by way of a special capillary system, called the hypothalamic-hypophysial portal system.
There is also a non-endocrine cell population called folliculostellate cells.
Posterior
The posterior pituitary stores and secretes (but does not synthesize) the following important endocrine hormones:
Magnocellular neurons:
Antidiuretic hormone (ADH, also known as vasopressin and arginine vasopressin AVP), the majority of which is released from the supraoptic nucleus in the hypothalamus.
Oxytocin, most of which is released from the paraventricular nucleus in the hypothalamus. Oxytocin is one of the few hormones to create a positive feedback loop. For example, uterine contractions stimulate the release of oxytocin from the posterior pituitary, which, in turn, increases uterine contractions. This positive feedback loop continues throughout labour.
Hormones
Hormones secreted from the pituitary gland help control the following body processes:
Growth (GH)
Blood pressure
Some aspects of pregnancy and childbirth including stimulation of uterine contractions
Breast milk production
Sex organ functions in both sexes
Thyroid gland function
Metabolic conversion of food into energy
Water and osmolarity regulation in the body
Water balance via the control of reabsorption of water by the kidneys
Temperature regulation
Pain relief
Development
The development of the pituitary gland, or hypophysis, is a complex process that occurs early in embryonic life and involves contributions from two distinct embryonic tissues.
1. Embryological Origin
The pituitary gland develops from two embryonic tissues:
Rathke's pouch: An ectodermal outpocketing from the roof of the primitive oral cavity, or stomodeum, which gives rise to the anterior pituitary (adenohypophysis).
Infundibulum: A downward extension from the neuroectoderm of the diencephalon in the brain, which forms the posterior pituitary (neurohypophysis).
2. Developmental Stages
Formation of Rathke's pouch (4th week of gestation):
During the 4th week, an invagination of the oral ectoderm occurs, creating Rathke's pouch.
Differentiation and Migration (5th to 6th week):
Rathke's pouch grows towards the developing brain. The upper part of the pouch eventually constricts and detaches from the oral cavity.
Cells in Rathke's pouch differentiate to form three parts of the adenohypophysis: the pars distalis, pars intermedia, and pars tuberalis.
Formation of the Posterior Pituitary (4th to 8th week):
The infundibulum from the diencephalon elongates downward, forming a stalk that connects with Rathke's pouch. This stalk will develop into the pars nervosa, or posterior pituitary.
Specialized cells from the hypothalamus, known as pituicytes, migrate to the posterior pituitary, where they help store and release hormones such as oxytocin and vasopressin.
3. Hormone Production and Functional Maturity
By around the 12th to 16th week of gestation, the anterior pituitary begins producing hormones like growth hormone (GH), adrenocorticotropic hormone (ACTH), and others essential for fetal development.
The posterior pituitary functions primarily in storage, as it stores hormones produced by the hypothalamus and releases them into the bloodstream.
4. Final Structural Differentiation
The pituitary gland achieves its final form, including the complete separation of the anterior and posterior lobes, by the end of the first trimester.
The gland remains connected to the hypothalamus by the pituitary stalk, allowing it to integrate signals from the brain and regulate various endocrine functions in the body.
This dual-origin structure and function are what make the pituitary gland a unique and critical component of the endocrine system, acting as a bridge between the nervous and endocrine systems.
5. Pituitary stem cells: the pituitary contains stem cells that can differentiate into the various types of hormone-producing cells in times of physiological need. In the neonate, these stem cells undergo a massive wave of differentiation specifically to gonadotropes, which forms the basis of most of the adult gonadotrope population, though some gonadotropes of embryonic origin remain.
Clinical significance
Some of the diseases involving the pituitary gland are:
Central diabetes insipidus caused by a deficiency of vasopressin
Gigantism and acromegaly caused by an excess of growth hormone in childhood and adult, respectively
Hypothyroidism caused by a deficiency of thyroid-stimulating hormone
Hyperpituitarism, the increased (hyper) secretion of one or more of the hormones normally produced by the pituitary gland
Hypopituitarism, the decreased (hypo) secretion of one or more of the hormones normally produced by the pituitary gland
Panhypopituitarism, a decreased secretion of most of the pituitary hormones
Pituitary tumours
Pituitary adenomas, noncancerous tumors that occur in the pituitary gland
All of the functions of the pituitary gland can be adversely affected by an over- or under-production of associated hormones.
The pituitary gland is important for mediating the stress response, via the hypothalamic–pituitary–adrenal axis (HPA axis). Critically, pituitary gland growth during adolescence can be altered by early life stress such as childhood maltreatment or maternal dysphoric behavior.
It has been demonstrated that, after controlling for age, sex, and BMI, larger quantities of DHEA and DHEA-S tended to be linked to larger pituitary volume. Additionally, a correlation between pituitary gland volume and Social Anxiety subscale scores was identified which provided a basis for exploring mediation. Again controlling for age, sex, and BMI, DHEA and DHEA-S have been found to be predictive of larger pituitary gland volume, which was also associated with increased ratings of social anxiety. This research provides evidence that pituitary gland volume mediates the link between higher DHEA(S) levels (associated with relatively early adrenarche) and traits associated with social anxiety. Children who experience early adrenarcheal development tend to have larger pituitary gland volume compared to children with later adrenarcheal development.
History
Etymology
Pituitary gland
The Greek physician Galen referred to the pituitary gland using only the Ancient Greek word for gland. He described the pituitary gland as part of a series of secretory organs for the excretion of nasal mucus. The anatomist Andreas Vesalius translated this as glans, in quam pituita destillat, "gland in which slime (pituita) drips". Besides this 'descriptive' name, Vesalius used glandula pituitaria, from which the English name pituitary gland is ultimately derived.
The expression glandula pituitaria is still used as an official synonym beside hypophysis in the official Latin nomenclature Terminologia Anatomica. In the seventeenth century the supposed function of the pituitary gland, producing nasal mucus, was debunked, so the expression glandula pituitaria and its English equivalent pituitary gland can only be justified from a historical point of view. The synonym is included merely because the main term, hypophysis, is a much less popular term.
Hypophysis
Note: hypophysial (or hypophyseal) means "related to the hypophysis (pituitary gland)".
The anatomist Samuel Thomas von Sömmerring coined the name hypophysis, which consists of ὑπό ('under') and φύειν ('to grow'). In later Greek, ὑπόφυσις was used by Greek physicians to mean 'outgrowth'. Sömmerring also used the equivalent expression appendix cerebri, with appendix meaning 'appendage'. In various languages, including German and Dutch, the terms for the gland are derived from appendix cerebri.
Other animals
The pituitary gland is found in all vertebrates, but its structure varies among different groups.
The division of the pituitary described above is typical of mammals, and is also true, to varying degrees, of all tetrapods. However, only in mammals does the posterior pituitary have a compact shape. In lungfish, it is a relatively flat sheet of tissue lying above the anterior pituitary, but in amphibians, reptiles, and birds, it becomes increasingly well developed. The intermediate lobe is, in general, not well developed in any tetrapod species and is entirely absent in birds.
The structure of the pituitary in fish, apart from the lungfish, is generally different from that in other animals. In general, the intermediate lobe tends to be well developed, and may equal the remainder of the anterior pituitary in size. The posterior lobe typically forms a sheet of tissue at the base of the pituitary stalk, and in most cases sends irregular finger-like projections into the tissue of the anterior pituitary, which lies directly beneath it. The anterior pituitary is typically divided into two regions, a more anterior rostral portion and a posterior proximal portion, but the boundary between the two is often not clearly marked. In elasmobranchs, there is an additional, ventral lobe beneath the anterior pituitary proper.
The arrangement in lampreys, which are among the most primitive of all fish, may indicate how the pituitary originally evolved in ancestral vertebrates. Here, the posterior pituitary is a simple flat sheet of tissue at the base of the brain, and there is no pituitary stalk. Rathke's pouch remains open to the outside, close to the nasal openings. Closely associated with the pouch are three distinct clusters of glandular tissue, corresponding to the intermediate lobe, and the rostral and proximal portions of the anterior pituitary. These various parts are separated by meningeal membranes, suggesting that the pituitary of other vertebrates may have formed from the fusion of a pair of separate, but associated, glands.
Most armadillos also possess a neural secretory gland very similar in form to the posterior pituitary, but located in the tail and associated with the spinal cord. This may have a function in osmoregulation.
There is a structure analogous to the pituitary in the octopus brain.
Intermediate lobe
Although rudimentary in humans (and often considered part of the anterior pituitary), the intermediate lobe located between the anterior and posterior pituitary is important to many animals. For instance, in fish, it is believed to control physiological color change. In adult humans, it is just a thin layer of cells between the anterior and posterior pituitary. The intermediate lobe produces melanocyte-stimulating hormone (MSH), although this function is often (imprecisely) attributed to the anterior pituitary.
The intermediate lobe is, in general, not well developed in tetrapods, and is entirely absent in birds.
| Biology and health sciences | Endocrine system | Biology |
57947 | https://en.wikipedia.org/wiki/Theobroma%20cacao | Theobroma cacao | Theobroma cacao (cacao tree or cocoa tree) is a small evergreen tree in the family Malvaceae. Its seeds, cocoa beans, are used to make chocolate liquor, cocoa solids, cocoa butter and chocolate. Although the tree is native to the tropics of the Americas, the largest producer of cocoa beans in 2022 was Ivory Coast. The plant's leaves are alternate, entire, unlobed, long and broad.
Description
Flowers
The flowers are produced in clusters directly on the trunk and older branches; this is known as cauliflory. The flowers are small, with a pink calyx. The floral formula, used to represent the structure of a flower using numbers, is ✶ K5 C5 A(5°+5²) G(5).
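For readers unfamiliar with the notation, the formula can be typeset as a sketch in LaTeX. The symbol glosses below follow the standard floral-formula conventions rather than anything stated in the source, and marking the gynoecium as superior with an underline is an assumption based on common usage:

% Floral formula of Theobroma cacao as a LaTeX sketch.
% \star   : actinomorphic (radially symmetric) flower
% K_5     : calyx of 5 sepals
% C_5     : corolla of 5 petals
% A_(...) : androecium of 5 staminodes plus 5 paired fertile stamens, fused
% G_(5)   : gynoecium of 5 fused carpels (underline = superior ovary, assumed)
\[
\star \quad K_{5}\; C_{5}\; A_{(5^{\circ}+5^{2})}\; \underline{G}_{(5)}
\]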
While many of the world's flowers are pollinated by bees (Hymenoptera) or butterflies and moths (Lepidoptera), cacao flowers are pollinated by tiny flies, Forcipomyia biting midges. Using the natural pollinator, Forcipomyia midges, produced more fruit than using artificial pollination.
Fruit
The fruit, called a cacao pod, is ovoid, ripening yellow to orange. The pod contains 20 to 60 seeds, usually called "beans", embedded in a white pulp.
The seeds are the main ingredient of chocolate, while the pulp is used in some countries to prepare refreshing juice, smoothies, jelly, and cream. Usually discarded until practices changed in the 21st century, the fermented pulp may be distilled into an alcoholic beverage. Each seed contains a significant amount of fat (40–50%) as cocoa butter.
The fruit's active constituent is the stimulant theobromine, a compound similar to caffeine.
Nomenclature
The generic name Theobroma is derived from the Greek for "food of the gods": from theos (θεός), meaning 'god' or 'divine', and broma (βρῶμα), meaning 'food'. The specific name cacao is the Hispanization of the name given to the plant in indigenous Mesoamerican languages such as Tzeltal, Kʼicheʼ and Classic Maya; Sayula Popoluca; and Nahuatl, in which the name means "bean of the cocoa-tree".
Taxonomy
Cacao (Theobroma cacao) is one of 26 species belonging to the genus Theobroma classified under the subfamily Byttnerioideae of the mallow family Malvaceae.
In 2008, researchers proposed a new classification based upon morphological, geographic, and genomic criteria: 10 groups have been named according to their geographic origin or the traditional cultivar name. These groups are: Amelonado, Criollo, Nacional, Contamana, Curaray, Cacao guiana, Iquitos, Marañon, Nanay, and Purús.
Distribution and domestication
T. cacao is widely distributed from southeastern Mexico to the Amazon basin. There were originally two hypotheses about its domestication; one said that there were two foci for domestication, one in the Lacandon Jungle area of Mexico and another in lowland South America. More recent studies of patterns of DNA diversity, however, suggest that this is not the case. One study sampled 1241 trees and classified them into 10 distinct genetic clusters. This study also identified areas, for example around Iquitos in modern Peru and Ecuador, where representatives of several genetic clusters originated more than 5000 years ago, leading to the development of the Nacional cocoa bean variety. This result suggests that this is where T. cacao was originally domesticated, probably for the pulp that surrounds the beans, which is eaten as a snack and fermented into a mildly alcoholic beverage. Using the DNA sequences and comparing them with data derived from climate models and the known conditions suitable for cacao, one study refined the view of domestication, linking the area of greatest cacao genetic diversity to a bean-shaped area that encompasses Ecuador, the border between Brazil and Peru and the southern part of the Colombian–Brazilian border. Climate models indicate that at the peak of the last ice age 21,000 years ago, when habitat suitable for cacao was at its most reduced, this area was still suitable, and so provided a refugium for the species.
Cacao trees grow well as understory plants in humid forest ecosystems. This is equally true of abandoned cultivated trees, making it difficult to distinguish truly wild trees from those whose parents may originally have been cultivated.
Cultivation
In 2016, cocoa beans were cultivated worldwide by large agroindustrial plantations and small producers alike, with the bulk of production coming from millions of farmers with small plots. A tree begins to bear when it is four or five years old. A mature tree may have 6,000 flowers in a year, yet only about 20 pods. About 1,200 seeds (40 pods) are required to produce 1 kg (2.2 lb) of cocoa paste.
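A short worked check of how these figures fit together (illustrative arithmetic only; the 1 kg basis is the conventional figure used above):

\[
\frac{1200\ \text{seeds}}{40\ \text{pods}} = 30\ \text{seeds per pod}
\]

This falls within the 20 to 60 seeds per pod quoted earlier; and at about 20 pods per mature tree per year against 40 pods per kilogram, one tree yields on the order of 0.5 kg of cocoa paste annually.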
Historically, chocolate makers have recognized three main cultivar groups of cacao beans used to make cocoa and chocolate: Forastero, Criollo and Trinitario. The most prized, rare, and expensive is the Criollo group, the cocoa bean used by the Maya. Only 10% of chocolate is made from Criollo, which is arguably less bitter and more aromatic than any other bean. In November 2000, the cacao beans coming from Chuao were awarded an appellation of origin under the title cacao de Chuao (Spanish for 'cacao of Chuao').
Beans of the Forastero group are used in 80% of chocolate; the main and most ubiquitous variety is the Amelonado variety, while Arriba varieties (such as the Nacional variety) are less commonly found in Forastero produce. Forastero trees are significantly hardier and more disease-resistant than Criollo trees, resulting in cheaper cacao beans.
Major cocoa bean processors include Hershey's, Nestlé and Mars. Chocolate can be made from T. cacao through a series of steps that involve harvesting, fermenting of the T. cacao pulp, drying, roasting, and then extraction. Roasting T. cacao using superheated steam was found to be better than conventional oven-roasting because it resulted in the same quality of cocoa beans in a shorter time.
Production
In 2022, world production of cocoa beans was 5.9 million tonnes, led by Ivory Coast with 38% of the total. Other major producers were Ghana (19%) and Indonesia (11%).
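For a sense of the absolute tonnages implied by those shares, a rough worked calculation (illustrative arithmetic only, rounded; Mt = million tonnes):

\[
0.38 \times 5.9\ \text{Mt} \approx 2.2\ \text{Mt (Ivory Coast)},\qquad
0.19 \times 5.9\ \text{Mt} \approx 1.1\ \text{Mt (Ghana)},\qquad
0.11 \times 5.9\ \text{Mt} \approx 0.65\ \text{Mt (Indonesia)}
\]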
Conservation
The pests and diseases to which cacao is subject, along with climate change, mean that new varieties will be needed to respond to these challenges. Breeders rely on the genetic diversity conserved in field genebanks to create new varieties, because cacao has recalcitrant seeds that cannot be stored in a conventional genebank. In an effort to improve the diversity available to breeders, and ensure the future of the field genebanks, experts have drawn up A Global Strategy for the Conservation and Use of Cacao Genetic Resources, as the Foundation for a Sustainable Cocoa Economy. The strategy has been adopted by the cacao producers and their clients, and seeks to improve the characterization of cacao diversity, the sustainability and diversity of the cacao collections, the usefulness of the collections, and to ease access to better information about the conserved material. Some natural areas of cacao diversity are protected by various forms of conservation, for example national parks. However, a recent study of genetic diversity and predicted climates suggests that many of those protected areas will no longer be suitable for cacao by 2050. It also identifies an area around Iquitos in Peru that will remain suitable for cacao and that is home to considerable genetic diversity, and recommends that this area be considered for protection. Other projects, such as the International Cocoa Quarantine Centre, aim to combat cacao diseases and preserve genetic diversity.
Phytopathogens (parasitic organisms) cause much damage to Theobroma cacao plantations around the world. Many of these phytopathogens, including several of the pests named below, have been analyzed using mass spectrometry, which can guide the choice of the correct approach for eliminating a specific phytopathogen. The method was found to be quick, reproducible, and accurate, showing promise for preventing damage to Theobroma cacao by various phytopathogens.
The bacterium Streptomyces camerooniansis was found to be beneficial for T. cacao, promoting plant growth by accelerating seed germination, inhibiting the growth of various microorganisms (such as different oomycetes, fungi, and bacteria), and preventing rotting by Phytophthora megakarya.
Pests
Various plant pests and diseases can cause serious problems for cacao production.
Insects
Cocoa mirids or capsids worldwide (but especially Sahlbergella singularis and Distantiella theobroma in West Africa and Helopeltis spp. in Southeast Asia)
Bathycoelia thalassina – West Africa
Conopomorpha cramerella (cocoa pod borer – in Southeast Asia)
Carmenta theobromae – Central and South America
Fungi
Moniliophthora roreri (frosty pod rot)
Moniliophthora perniciosa (witches' broom)
Ceratocystis cacaofunesta (mal de machete or Ceratocystis wilt)
Verticillium dahliae
Oncobasidium theobromae (vascular streak dieback)
Oomycetes
Phytophthora spp. (black pod) especially Phytophthora megakarya in West Africa
Viruses
Cacao swollen shoot virus
Mistletoe
Rats and other vertebrate pests (squirrels, woodpeckers, etc.)
Genome
The genome of T. cacao is diploid, its size is 430 Mbp, and it comprises 10 chromosome pairs (2n=2x=20). In September 2010, a team of scientists announced a draft sequence of the cacao genome (Matina1-6 genotype). In a second, unrelated project, the International Cocoa Genome Sequencing Consortium (ICGS), coordinated by CIRAD, published the sequence of the Criollo cacao genome (from a landrace from Belize, B97-61/B2) in December 2010 (online, with paper publication in January 2011). In their publication, they reported a detailed analysis of the genomic and genetic data.
The sequence of the cacao genome identified 28,798 protein-coding genes, compared with the roughly 23,000 protein-coding genes of the human genome. About 20% of the cacao genome consists of transposable elements, a low proportion compared with other plant species. Many genes were identified as coding for flavonoids, aromatic terpenes, theobromine and many other metabolites involved in cocoa flavor and quality traits, among which a relatively high proportion code for polyphenols, which constitute up to 8% of cacao pods by dry weight. The cacao genome appears close to the hypothetical hexaploid ancestor of all dicotyledonous plants; it has been proposed that the 21 chromosomes of that hypothetical hexaploid ancestor underwent major fusions, leading to cacao's 10 chromosome pairs.
The genome sequence enables cacao molecular biology and the breeding of elite varieties through marker-assisted selection, in particular for genetic resistance to the fungal, oomycete and viral diseases responsible for huge yield losses each year. Since 2017–18, due to concerns about the survivability of cacao plants in an era of global warming in which climates become more extreme in the narrow band of latitudes where cacao is grown (20 degrees north and south of the equator), the commercial company Mars, Incorporated, and the University of California, Berkeley, have been using CRISPR to adjust the plant's DNA for improved hardiness in hot climates.
History of cultivation
Domestication
The cacao tree, native to the Amazon rainforest, was first domesticated at least 5,300 years ago by the Mayo-Chinchipe culture in equatorial South America, as evidenced by the Santa Ana-La Florida (SALF) site in present-day southeastern Ecuador (Zamora-Chinchipe Province), before being introduced to Mesoamerica.
In Mesoamerica, ceramic vessels with residues from the preparation of cacao beverages have been found from the Early Formative (1900–900 BC) period. For example, one such vessel found at an Olmec archaeological site on the Gulf Coast of Veracruz, Mexico dates cacao's preparation by pre-Olmec peoples as early as 1750 BC. On the Pacific coast of Chiapas, Mexico, a Mokaya archaeological site provides evidence of even earlier cacao beverages, dating to 1900 BC. The initial domestication was probably related to the making of a fermented alcoholic beverage. In 2018, researchers who analysed the genome of cultivated cacao trees concluded that the domesticated cacao trees all originated from a single domestication event that occurred about 3,600 years ago somewhere in Central America.
Ancient uses
Several mixtures of cacao are described in ancient texts, for ceremonial or medicinal, as well as culinary, purposes. Some mixtures included maize, chili, vanilla (Vanilla planifolia), and honey. Archaeological evidence for use of cacao, while relatively sparse, has come from the recovery of whole cacao beans at Uaxactun, Guatemala and from the preservation of wood fragments of the cacao tree at Belize sites including Cuello and Pulltrouser Swamp. In addition, analysis of residues from ceramic vessels has found traces of theobromine and caffeine in early formative vessels from Puerto Escondido, Honduras (1100–900 BC) and in middle formative vessels from Colha, Belize (600–400 BC) using similar techniques to those used to extract chocolate residues from four classic period (around 400 AD) vessels from a tomb at the Maya archaeological site of Rio Azul. As cacao is the only known commodity from Mesoamerica containing both of these alkaloid compounds, it seems likely these vessels were used as containers for cacao drinks. In addition, cacao is named in a hieroglyphic text on one of the Rio Azul vessels. Cacao is also believed to have been ground by the Aztecs and mixed with tobacco for smoking purposes. Cocoa was being domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC.
The Maya believed that cacao was discovered by the gods in a mountain that also contained other delectable foods to be used by them. According to Maya mythology, the Plumed Serpent gave cacao to the Maya after humans were created from maize by the divine grandmother goddess Xmucane. The Maya celebrated an annual festival in April to honor their cacao god, Ek Chuah, an event that included the sacrifice of a dog with cacao-colored markings, additional animal sacrifices, offerings of cacao, feathers and incense, and an exchange of gifts. In a similar creation story, the Mexica (Aztec) god Quetzalcoatl discovered cacao (meaning "bitter water") in a mountain filled with other plant foods. Cacao was offered regularly to a pantheon of Mexica deities, and the Madrid Codex depicts priests lancing their ear lobes (autosacrifice) and covering the cacao with blood as a suitable sacrifice to the gods. The cacao beverage was used as a ritual only by men, as it was believed to be an intoxicating food unsuitable for women and children.
Cacao beans constituted both a ritual beverage and a major currency system in pre-Columbian Mesoamerican civilizations. At one point, the Aztec empire received a yearly tribute of 980 loads of cacao, in addition to other goods. Each load represented exactly 8,000 beans. The buying power of quality beans was such that 80–100 beans could buy a new cloth mantle. The use of cacao beans as currency is also known to have spawned counterfeiters during the Aztec empire.
Modern history
The first European knowledge about chocolate came in the form of a beverage which was first introduced to the Spanish at their meeting with Moctezuma in the Aztec capital of Tenochtitlan in 1519. Cortés and others noted the vast quantities of this beverage the Aztec emperor consumed, and how it was carefully whipped by his attendants beforehand. Examples of cacao beans, along with other agricultural products, were brought back to Spain at that time, but it seems the beverage made from cacao was introduced to the Spanish court in 1544 by Kekchi Maya nobles brought from the New World to Spain by Dominican friars to meet Prince Philip. Within a century, chocolate had spread to France, England and elsewhere in Western Europe. Demand for this beverage led the French to establish cacao plantations in the Caribbean, while Spain subsequently developed its own cacao plantations in its Venezuelan and Philippine colonies (Bloom 1998, Coe 1996). A painting by Dutch Golden Age artist Albert Eckhout shows a wild cacao tree in mid-seventeenth century Dutch Brazil. The Nahuatl-derived Spanish word cacao entered scientific nomenclature in 1753 after the Swedish naturalist Linnaeus published his taxonomic binomial system and coined the genus and species Theobroma cacao. Traditional pre-Hispanic beverages made with cacao are still consumed in Mesoamerica. These include the Oaxacan beverage known as tejate.
| Biology and health sciences | Malvales | Plants |
57965 | https://en.wikipedia.org/wiki/Moraceae | Moraceae | The Moraceae—often called the mulberry family or fig family—are a family of flowering plants comprising about 38 genera and over 1100 species. Most are widespread in tropical and subtropical regions, less so in temperate climates; however, their distribution is cosmopolitan overall. The only synapomorphy within the Moraceae is presence of laticifers and milky sap in all parenchymatous tissues, but generally useful field characters include two carpels sometimes with one reduced, compound inconspicuous flowers, and compound fruits. The family includes well-known plants such as the fig, banyan, breadfruit, jackfruit, mulberry, and Osage orange. The 'flowers' of Moraceae are often pseudanthia (reduced inflorescences).
Description
Overall
The family ranges from colossal trees such as the Indian banyan (Ficus benghalensis), which can cover five acres (two hectares) of ground, to Dorstenia barnimiana, a small, stemless, bulbous succulent 2–5 cm in diameter that produces a single peltate leaf on a 4–15 cm petiole. These two species differ in weight by a factor of approximately one billion.
Flowers
The individual flowers are often small, with a single-whorled or absent perianth. Most flowers have either petals or sepals, but not both (a condition known as monochlamydeous), and bear pistils and stamens in different flowers (diclinous). Except in Brosimum gaudichaudii and Castilla elastica, the perianth in all species of the Moraceae contains sepals. If the flower has an inflexed stamen, pollen is released and distributed by wind; if the stamen is straight, insect pollination is most likely. Insect pollination occurs in Antiaropsis, Artocarpus, Castilla, Dorstenia, Ficus, and Mesogyne.
Leaves
The leaves show as much diversity as the flowers. They can be singly attached to the stem or alternating; they may be lobed or unlobed, and can be evergreen or deciduous, depending on the species in question. The red mulberry can host numerous leaf types on the same tree: leaves can be both lobed and unlobed, appearing very different while coexisting on the same plant.
Fruits and seeds
Plant species in the Moraceae are best known for their fruits. Most species produce a fleshy fruit containing seeds. Examples include the breadfruit from Artocarpus altilis, the mulberry from Morus rubra, the fig from Ficus carica, and the jackfruit from Artocarpus heterophyllus.
Taxonomy
Formerly included within the now-defunct order Urticales, the family has been placed by recent molecular studies within the Rosales, in a clade called the urticalean rosids that also includes Ulmaceae, Celtidaceae, Cannabaceae, and Urticaceae. Cecropia, which has variously been placed in the Moraceae, Urticaceae, or its own family, Cecropiaceae, is now included in the Urticaceae.
Dioecy (having individuals with separate sexes) appears to be the primitive state in Moraceae. Monoecy has evolved independently at least four times within the family.
Phylogeny
Modern molecular phylogenetic studies have suggested the relationships among the tribes and genera listed in the following section.
Tribes and genera
Moraceae comprises 48 genera in seven tribes.
Artocarpeae
Artocarpus (73 spp.)
Batocarpus (3 spp.)
Clarisia (4 spp.)
Chlorophoreae (syn. Maclureae)
Maclura (13 spp.)
Parartocarpeae
Hullettia (2 spp.)
Parartocarpus (2 spp.)
Pseudostreblus (1 sp.)
Olmedieae (syn. Castilleae)
Antiaris (1 sp.)
Antiaropsis (2 spp.)
Castilla (3 spp.)
Helicostylis (8 spp.)
Maquira (4 spp.)
Mesogyne (1 sp.)
Naucleopsis (25 spp.)
Olmedia (1 sp.)
Perebea (10 spp.)
Poulsenia (1 sp.)
Pseudolmedia (11 spp.)
Sparattosyce (2 spp.)
Streblus (5 spp.)
Dorstenieae
Bleekrodea (3 spp.)
Bosqueiopsis (1 sp.)
Brosimum (19 spp.)
Broussonetia (4 spp.)
Dorstenia (122 spp.)
Fatoua (3 spp.)
Malaisia (1 sp.)
Scyphosyce (3 spp.)
Sloetia (1 sp.)
Sloetiopsis (1 sp.)
Trilepisium (2 spp.)
Utsetela (2 spp.)
Ficeae
Ficus (880 spp.)
Moreae
Afromorus
Ampalis (2 spp.)
Bagassa (1 sp.)
Maillardia (2 spp.)
Milicia (2 spp.)
Morus (17 spp.)
Paratrophis (12 spp.)
Sorocea (22 spp.)
Taxotrophis (6 spp.)
Trophis (5 spp.)
Other genera accepted by Plants of the World Online:
Allaeanthus (4 spp.)
Calaunia (1 sp.)
Hijmania (4 spp.)
Prainea (2 spp.)
Treculia (5 spp.)
Fossil genera and species
In addition to the living species, a number of fossil genera have been ascribed to the family:
†Aginoxylon
†Aginoxylon moroides
†Artocarpidium
†Artocarpoides
†Arthmiocarpus
†Artocarpoxylon
†Becktonia
†Becktonia hantonensis
†Cornerocarpon
†Cornerocarpon copiosum
†Coussapoites
†Coussapoites veracruzianus
†Cudranioxylon
†Cudranioxylon engolismense
†Ficofolium
†Ficofolium weylandii
†Ficonium
†Ficonium nitidum
†Ficonium silesiacum
†Ficonium solanderi
†Milicioxylon
†Milicioxylon kachchhense
†Moraceoipollenites
†Moricites
†Moroidea
†Moroidea baltica
†Moroidea caucasica
†Moroidea cretacea
†Moroidea hordwellensis
†Moroidea reticulata
†Moroidea tymensis
†Moroxylon
†Myrianthoxylon
†Myrianthoxylon chaloneri
†Ovicarpum
†Palaeokalopanax
†Palaeokalopanax kamtschaticus
†Palaeokalopanax vollosovitschii
†Paleoficus
†Protoficus
†Protoficus crenulata
†Protoficus crispans
†Protoficus dentatus
†Protoficus insignis
†Protoficus lacera
†Protoficus nervosa
†Protoficus saportae
†Protoficus sezannensis
†Soroceaxylon
†Soroceaxylon entrerriense
†Ungerites (syn. Ficoxylon)
†Ungerites tropicus
†Welkoetoxylon
†Welkoetoxylon multiseriatum
Evolution
While the fossil record of Moraceae goes back to the late Cretaceous, molecular clock estimates suggest that the family had begun to diversify by the mid-Cretaceous, with some major clades emerging during the Tertiary period.
Distribution
Moraceae can be found throughout the world with a cosmopolitan distribution. The majority of species originate in the Old World tropics, particularly in Asia and the Pacific islands.
| Biology and health sciences | Rosales | null |
57980 | https://en.wikipedia.org/wiki/Shortwave%20radio | Shortwave radio | Shortwave radio is radio transmission using radio frequencies in the shortwave bands (SW). There is no official definition of the band range, but it always includes all of the high frequency band (HF), which extends from 3 to 30 MHz (approximately 100 to 10 metres in wavelength). It lies between the medium frequency band (MF) and the bottom of the VHF band.
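The band limits quoted above follow directly from the relation between wavelength and frequency, λ = c/f. A minimal Python sketch of that conversion (rounded constant; the snippet is illustrative only):

```python
# Free-space wavelength for the HF band edges: lambda = c / f.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Return the free-space wavelength in metres for a frequency in hertz."""
    return C / freq_hz

for f_mhz in (3, 30):
    print(f"{f_mhz} MHz -> {wavelength_m(f_mhz * 1e6):.1f} m")
# Output: 3 MHz -> 99.9 m, 30 MHz -> 10.0 m (the "100 to 10 metres" above)
```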
Radio waves in the shortwave band can be reflected or refracted from a layer of electrically charged atoms in the atmosphere called the ionosphere. Therefore, short waves directed at an angle into the sky can be reflected back to Earth at great distances, beyond the horizon. This is called skywave or "skip" propagation. Thus shortwave radio can be used for communication over very long distances, in contrast to radio waves of higher frequency, which travel in straight lines (line-of-sight propagation) and are generally limited by the visual horizon, about 64 km (40 miles).
Shortwave broadcasts of radio programs played an important role in international broadcasting for many decades, serving both to provide news and information and as a propaganda tool for an international audience. The heyday of international shortwave broadcasting was during the Cold War between 1960 and 1990.
With the wide implementation of other technologies for the long-distance distribution of radio programs, such as satellite radio, cable broadcasting and IP-based transmissions, shortwave broadcasting lost importance. Initiatives for the digitization of broadcasting did not bear fruit either, and relatively few broadcasters continue to broadcast programs on shortwave.
However, shortwave remains important in war zones, such as in the Russo-Ukrainian war, and shortwave broadcasts can be transmitted over thousands of miles from a single transmitter, making it difficult for government authorities to censor them. Shortwave radio is also often used by aircraft.
History
Development
The name "shortwave" originated during the beginning of radio in the early 20th century, when the radio spectrum was divided into long wave (LW), medium wave (MW), and short wave (SW) bands based on the length of the wave. Shortwave radio received its name because the wavelengths in this band are shorter than 200 m (1,500 kHz) which marked the original upper limit of the medium frequency band first used for radio communications. The broadcast medium wave band now extends above the 200 m / 1,500 kHz limit.
Early long-distance radio telegraphy used long waves, below 300 kilohertz (kHz) / above 1000 m. The drawbacks to this system included a very limited spectrum available for long-distance communication and the need for very expensive transmitters, receivers and gigantic antennas. Long waves are also difficult to beam directionally, resulting in a major loss of power over long distances. Prior to the 1920s, the shortwave frequencies above 1.5 MHz were regarded as useless for long-distance communication and were designated in many countries for amateur use.
Guglielmo Marconi, pioneer of radio, commissioned his assistant Charles Samuel Franklin to carry out a large-scale study into the transmission characteristics of short-wavelength waves and to determine their suitability for long-distance transmissions. Franklin rigged up a large antenna at Poldhu Wireless Station, Cornwall, running on 25 kW of power. In June and July 1923, wireless transmissions were completed during nights on 97 meters (about 3 MHz) from Poldhu to Marconi's yacht Elettra in the Cape Verde Islands.
In September 1924, Marconi arranged for transmissions to be made day and night on 32 meters (about 9.4 MHz) from Poldhu to his yacht in the harbour at Beirut, to which he had sailed, and was "astonished" to find he could receive signals "throughout the day". Franklin went on to refine the directional transmission by inventing the curtain array aerial system. In July 1924, Marconi entered into contracts with the British General Post Office (GPO) to install high-speed shortwave telegraphy circuits from London to Australia, India, South Africa and Canada as the main element of the Imperial Wireless Chain. The UK-to-Canada shortwave "Beam Wireless Service" went into commercial operation on 25 October 1926. Beam Wireless Services from the UK to Australia, South Africa and India went into service in 1927.
Shortwave communications began to grow rapidly in the 1920s. By 1928, more than half of long-distance communications had moved from transoceanic cables and longwave wireless services to shortwave, and the overall volume of transoceanic shortwave communications had vastly increased. Shortwave stations had cost and efficiency advantages over massive longwave wireless installations. However, some commercial longwave communications stations remained in use until the 1960s. Long-distance radio circuits also reduced the need for new cables, although the cables maintained their advantages of high security and a much more reliable and better-quality signal than shortwave.
The cable companies began to lose large sums of money in 1927. A serious financial crisis threatened viability of cable companies that were vital to strategic British interests. The British government convened the Imperial Wireless and Cable Conference in 1928 "to examine the situation that had arisen as a result of the competition of Beam Wireless with the Cable Services". It recommended and received government approval for all overseas cable and wireless resources of the Empire to be merged into one system controlled by a newly formed company in 1929, Imperial and International Communications Ltd. The name of the company was changed to Cable and Wireless Ltd. in 1934.
A resurgence of long-distance cables began in 1956 with the laying of TAT-1 across the Atlantic Ocean, the first voice frequency cable on this route. This provided 36 high-quality telephone channels and was soon followed by even higher-capacity cables all around the world. Competition from these cables soon ended the economic viability of shortwave radio for commercial communication.
Amateur use of shortwave propagation
Amateur radio operators also discovered that long-distance communication was possible on shortwave bands. Early long-distance services used surface wave propagation at very low frequencies, which are attenuated along the path at wavelengths shorter than 1,000 meters. Longer distances and higher frequencies using this method meant more signal loss. This, and the difficulties of generating and detecting higher frequencies, made discovery of shortwave propagation difficult for commercial services.
Radio amateurs may have conducted the first successful transatlantic tests in December 1921, operating in the 200 meter mediumwave band (near 1,500 kHz, inside the modern AM broadcast band), which at that time was the shortest wavelength / highest frequency available to amateur radio. In 1922 hundreds of North American amateurs were heard in Europe on 200 meters and at least 20 North American amateurs heard amateur signals from Europe. The first two-way communications between North American and Hawaiian amateurs began in 1922 at 200 meters. Although operation on wavelengths shorter than 200 meters was technically illegal (but tolerated at the time as the authorities mistakenly believed that such frequencies were useless for commercial or military use), amateurs began to experiment with those wavelengths using newly available vacuum tubes shortly after World War I.
Extreme interference at the longer edge of the 150–200 meter band – the official wavelengths allocated to amateurs by the Second National Radio Conference in 1923 – forced amateurs to shift to shorter and shorter wavelengths; however, amateurs were limited by regulation to wavelengths longer than 150 meters (2 MHz). A few fortunate amateurs who obtained special permission for experimental communications at wavelengths shorter than 150 meters completed hundreds of long-distance two-way contacts on 100 meters (3 MHz) in 1923 including the first transatlantic two-way contacts.
By 1924 many additional specially licensed amateurs were routinely making transoceanic contacts at distances of 6,000 miles (9,600 km) and more. On 21 September 1924 several amateurs in California completed two-way contacts with an amateur in New Zealand. On 19 October amateurs in New Zealand and England completed a 90 minute two-way contact nearly halfway around the world. On 10 October the Third National Radio Conference made three shortwave bands available to U.S. amateurs at 80 meters (3.75 MHz), 40 meters (7 MHz) and 20 meters (14 MHz). These were allocated worldwide, while the 10 meter band (28 MHz) was created by the Washington International Radiotelegraph Conference on 25 November 1927. The 15 meter band (21 MHz) was opened to amateurs in the United States on 1 May 1952.
Propagation characteristics
Shortwave radio frequency energy is capable of reaching any location on the Earth because it can be reflected back to Earth by the ionosphere (a phenomenon known as "skywave propagation"). A typical phenomenon of shortwave propagation is the occurrence of a skip zone where reception fails. With a fixed working frequency, large changes in ionospheric conditions may create skip zones at night.
As a result of the multi-layer structure of the ionosphere, propagation often occurs simultaneously on different paths, scattered by the ‘E’ or ‘F’ layer and with different numbers of hops, a phenomenon that may be disruptive for certain techniques. Particularly at the lower frequencies of the shortwave band, absorption of radio frequency energy in the lowest ionospheric layer, the ‘D’ layer, may impose a serious limit. This is due to collisions of electrons with neutral molecules, which absorb some of a radio frequency's energy and convert it to heat. Predictions of skywave propagation depend on the following factors (a rough worked estimate follows this list):
The distance from the transmitter to the target receiver.
Time of day. During the day, frequencies higher than approximately 12 MHz can travel longer distances than lower ones. At night, this property is reversed.
For lower frequencies, the dependence on the time of day is mainly due to the lowest ionospheric layer, the ‘D’ layer, which forms only during the day when photons from the sun break up atoms into ions and free electrons.
Season. During the winter months of the Northern or Southern hemispheres, the AM/MW broadcast band tends to be more favorable because of longer hours of darkness.
Solar flares produce a large increase in D region ionization – so great, sometimes for periods of several minutes, that skywave propagation is nonexistent.
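As a rough illustration of how hop geometry and ionospheric conditions combine, the classic secant law estimates the maximum usable frequency (MUF) from the layer's critical frequency and the angle of incidence. The sketch below uses a flat-Earth approximation and hypothetical input values; it is a back-of-envelope aid, not a propagation-prediction tool:

```python
import math

def muf_mhz(f_critical_mhz: float, hop_distance_km: float,
            layer_height_km: float = 300.0) -> float:
    """Estimate the maximum usable frequency for a single-hop skywave path.

    Secant law: MUF ~= f_critical / cos(theta), where theta is the angle of
    incidence at the reflecting layer (flat-Earth approximation).
    """
    theta = math.atan((hop_distance_km / 2) / layer_height_km)
    return f_critical_mhz / math.cos(theta)

# Assumed values: 7 MHz critical frequency, 2,000 km hop, F layer at 300 km
print(f"MUF ~ {muf_mhz(7.0, 2000):.1f} MHz")   # roughly 24 MHz
```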
Types of modulation
Several different types of modulation are used to incorporate information in a short-wave signal.
Audio modes
AM
Amplitude modulation is the simplest type and the most commonly used for shortwave broadcasting. The instantaneous amplitude of the carrier is controlled by the amplitude of the signal (speech, or music, for example). At the receiver, a simple detector recovers the desired modulation signal from the carrier.
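A minimal Python sketch of this scheme (all parameters are illustrative): the carrier's instantaneous amplitude is scaled by the message, and a simple detector only needs to track the resulting envelope.

```python
import numpy as np

fs = 48_000   # sample rate, Hz
fc = 10_000   # carrier frequency, Hz (kept far below fs for a clean example)
fm = 440      # message tone, Hz
m = 0.8       # modulation index; keeping m <= 1 avoids overmodulation

t = np.arange(0, 0.01, 1 / fs)
message = np.cos(2 * np.pi * fm * t)

# AM: the carrier amplitude follows (1 + m * message)
am_signal = (1 + m * message) * np.cos(2 * np.pi * fc * t)

# A crude envelope detector: rectify, then smooth with a moving average
envelope = np.convolve(np.abs(am_signal), np.ones(48) / 48, mode="same")
```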
SSB
Single-sideband transmission is a form of amplitude modulation that, in effect, filters the result of modulation. An amplitude-modulated signal has frequency components both above and below the carrier frequency. If one set of these components is eliminated, along with the residual carrier, only the remaining set is transmitted. This reduces the power needed for transmission, as most of the energy sent by an AM signal is in the carrier, which is not needed to recover the information contained in the signal. It also reduces signal bandwidth, enabling less than one-half the AM signal bandwidth to be used.
The drawback is that the receiver is more complicated, since it must re-create the carrier to recover the signal. Small errors in the detection process greatly affect the pitch of the received signal, so single sideband is not used for music or general broadcasting. Single sideband is used for long-range voice communications by ships and aircraft, citizens band, and amateur radio operators. In amateur radio operation, lower sideband (LSB) is customarily used below 10 MHz and upper sideband (USB) above 10 MHz, while non-amateur services use USB regardless of frequency.
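A compact way to see how one sideband can be removed is the phasing method, which uses the Hilbert transform of the message. The sketch below (illustrative parameters; SciPy is assumed available) generates upper- and lower-sideband signals from a test tone:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000
fc = 10_000
t = np.arange(0, 0.01, 1 / fs)
x = np.cos(2 * np.pi * 440 * t)        # message tone

x_hat = np.imag(hilbert(x))            # Hilbert transform of the message

# Phasing method: one quadrature term cancels one sideband
usb = x * np.cos(2 * np.pi * fc * t) - x_hat * np.sin(2 * np.pi * fc * t)
lsb = x * np.cos(2 * np.pi * fc * t) + x_hat * np.sin(2 * np.pi * fc * t)
```

Each output occupies only the audio bandwidth on one side of the carrier frequency, with no residual carrier, matching the bandwidth and power savings described above.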
VSB
Vestigial sideband transmits the carrier and one complete sideband, but filters out most of the other sideband. It is a compromise between AM and SSB, enabling simple receivers to be used, but requires almost as much transmitter power as AM. Its main advantage is that only half the bandwidth of an AM signal is used. It is used by the Canadian standard time signal station CHU. Vestigial sideband was used for analog television and by ATSC, the digital TV system used in North America.
NFM
Narrow-band frequency modulation (NBFM or NFM) is used typically above 20 MHz. Because of the larger bandwidth required, NBFM is commonly used for VHF communication. Regulations limit the bandwidth of a signal transmitted in the HF bands, and the advantages of frequency modulation are greatest if the FM signal has a wide bandwidth. NBFM is limited to short-range transmissions due to the multiphasic distortions created by the ionosphere.
DRM
Digital Radio Mondiale (DRM) is a digital modulation for use on bands below 30 MHz. It is a digital signal, like the data modes below, but is used for transmitting audio, like the analog modes above.
Data modes
CW
Continuous wave (CW) is on-and-off keying of a sine-wave carrier, used for Morse code communications and Hellschreiber facsimile-based teleprinter transmissions. It is a data mode, although often listed separately. It is typically received via lower or upper SSB modes.
RTTY, FAX, SSTV
Radioteletype, fax, digital, slow-scan television, and other systems use forms of frequency-shift keying or audio subcarriers on a shortwave carrier. These generally require special equipment to decode, such as software on a computer equipped with a sound card.
Note that on modern computer-driven systems, digital modes are typically sent by coupling a computer's sound output to the SSB input of a radio.
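A sketch of this idea for one common mode, audio frequency-shift keying (AFSK): bits are rendered as audio tones, which the SSB transmitter shifts up to RF. The baud rate and tone pair below are values conventionally used for RTTY, but the snippet is an illustration, not a complete modem (a real one would keep phase continuous across bit boundaries and add start/stop framing):

```python
import numpy as np

fs = 48_000
baud = 45.45                          # classic RTTY signalling rate
mark_hz, space_hz = 2125.0, 2295.0    # a common RTTY audio tone pair

def afsk(bits: str) -> np.ndarray:
    """Render a bit string as alternating mark/space audio tones."""
    n = int(fs / baud)                # samples per bit
    t = np.arange(n) / fs
    tones = []
    for b in bits:
        f = mark_hz if b == "1" else space_hz
        tones.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)

waveform = afsk("10110")   # ready to feed the sound card / SSB audio input
```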
Users
Established users of the shortwave radio bands include:
International broadcasting, primarily of government-sponsored propaganda or international news (for example, the BBC World Service), and religious or cultural stations directed at foreign audiences: the most common use of all.
Domestic broadcasting: to widely dispersed populations with few longwave, mediumwave and FM stations serving them; or for speciality political, religious and alternative media networks; or of individual commercial and non-commercial paid broadcasts.
Oceanic air traffic control uses the HF/shortwave band for long-distance communication to aircraft over the oceans and poles, which are far beyond the range of traditional VHF frequencies. Modern systems also include satellite communications, such as ADS-C/CPDLC.
Two-way radio communications by marine and maritime HF stations, aeronautical users, and ground-based stations. For example, two-way shortwave communication is still used in remote regions by the Royal Flying Doctor Service of Australia.
"Utility" stations transmitting messages not intended for the general public, such as merchant shipping, marine weather, and ship-to-shore stations; for aviation weather and air-to-ground communications; for military communications; for long-distance governmental purposes, and for other non-broadcast communications.
Amateur radio operators on the 80/75, 60, 40, 30, 20, 17, 15, 12, and 10-meter bands. Licenses are granted by authorized government agencies.
Time signal and radio clock stations: In North America, WWV radio and WWVH radio transmit at these frequencies: 2.5 MHz, 5 MHz, 10 MHz, and 15 MHz; and WWV also transmits on 20 MHz. The CHU radio station in Canada transmits on the following frequencies: 3.33 MHz, 7.85 MHz, and 14.67 MHz. Other similar radio clock stations transmit on various shortwave and longwave frequencies around the world. The shortwave transmissions are primarily intended for human reception, while the longwave stations are generally used for automatic synchronization of watches and clocks.
Sporadic or non-traditional users of the shortwave bands include:
Clandestine stations. These are stations that broadcast on behalf of various political movements, such as rebel or insurrectionist forces. They may advocate civil war, insurrection, or rebellion against the government in charge of the country to which they are directed. Clandestine broadcasts may emanate from transmitters located in rebel-controlled territory or from outside the country entirely, using another country's transmission facilities.
Numbers stations. These stations regularly appear and disappear all over the shortwave radio band, but are unlicensed and untraceable. It is believed that numbers stations are operated by government agencies to communicate with clandestine operatives working within foreign countries; however, no definitive proof of such use has emerged. Because the vast majority of these broadcasts contain nothing but the recitation of blocks of numbers, in various languages, with occasional bursts of music, they have become known colloquially as "numbers stations". Perhaps the most noted is the "Lincolnshire Poacher" station, named after the 18th-century English folk song, a snippet of which is transmitted just before the sequences of numbers.
Unlicensed two-way radio activity by individuals such as taxi drivers, bus drivers and fishermen in various countries can be heard on various shortwave frequencies. Such unlicensed transmissions by "pirate" or "bootleg" two-way radio operators can often cause signal interference to licensed stations. Unlicensed business radio (taxis, trucking companies, among numerous others) land mobile systems may be found in the 20–30 MHz region, while unlicensed marine mobile and other similar users may be found over the entire shortwave range.
Pirate radio broadcasters who feature programming such as music, talk and other entertainment, can be heard sporadically and in various modes on the shortwave bands. Pirate broadcasters take advantage of the better propagation characteristics to achieve more range compared to the AM or FM broadcast bands.
Over-the-horizon radar: From 1976 to 1989, the Soviet Union's Russian Woodpecker over-the-horizon radar system blotted out numerous shortwave broadcasts daily.
Ionospheric heaters used for scientific experimentation such as the High Frequency Active Auroral Research Program in Alaska, and the Sura ionospheric heating facility in Russia.
Shortwave broadcasting
See International broadcasting for details on the history and practice of broadcasting to foreign audiences.
See List of shortwave radio broadcasters for a list of international and domestic shortwave radio broadcasters.
See Shortwave relay station for the actual kinds of integrated technologies used to bring high power signals to listeners.
Frequency allocations
The World Radiocommunication Conference (WRC), organized under the auspices of the International Telecommunication Union, allocates bands for various services in conferences every few years. The last WRC took place in 2023.
As of WRC-97 in 1997, these bands were allocated for international broadcasting. AM shortwave broadcasting channels are allocated with a 5 kHz separation for traditional analog audio broadcasting.
Although countries generally follow the assigned bands, there may be small differences between countries or regions. For example, in the official bandplan of the Netherlands, the 49 m band starts at 5.95 MHz, the 41 m band ends at 7.45 MHz, the 11 m band starts at 25.67 MHz, and the 120 m, 90 m, and 60 m bands are absent altogether. International broadcasters sometimes operate outside the normal WRC-allocated bands or use off-channel frequencies. This is done for practical reasons, or to attract attention in crowded bands (60 m, 49 m, 40 m, 41 m, 31 m, 25 m).
DRM, the digital audio broadcasting format for shortwave, operates in 10 kHz or 20 kHz channels. There are ongoing discussions about specific band allocations for DRM, as it is mainly transmitted in the 10 kHz format.
The power used by shortwave transmitters ranges from less than one watt for some experimental and amateur radio transmissions to 500 kilowatts and higher for intercontinental broadcasters and over-the-horizon radar. Shortwave transmitting centers often use specialized antenna designs (like the ALLISS antenna technology) to concentrate radio energy at the target area.
Advantages
Shortwave possesses a number of advantages over newer technologies:
Difficulty of censoring programming by authorities in restrictive countries. Whereas authorities can monitor and censor the Internet, over-the-air television, cable television, satellite television, satellite radio, mobile phones, landline phones, and satellite phones with relative ease, they face technical difficulties monitoring which shortwave stations are being listened to. For example, during the attempted coup against Soviet President Mikhail Gorbachev, when his access to communications was limited (e.g. his phones, television and radio were cut off), Gorbachev was able to stay informed by means of the BBC World Service on shortwave.
Low-cost shortwave radios are widely available in all but the most repressive countries in the world. Simple shortwave regenerative receivers can be easily built with a few parts.
In many countries (particularly in most developing nations and in the Eastern bloc during the Cold War era) ownership of shortwave receivers has been and continues to be widespread (in many of these countries some domestic stations also used shortwave).
Many newer shortwave receivers are portable and can be battery-operated, making them useful in difficult circumstances. Newer technology includes hand-cranked radios which provide power without batteries.
Shortwave radios can be used in situations where over-the-air television, cable television, satellite television, landline phones, mobile phones, satellite phones, satellite communications, or the Internet are temporarily or permanently unavailable (or unaffordable).
Shortwave radio travels much farther than broadcast FM (88–108 MHz). Shortwave broadcasts can be easily transmitted over a distance of several thousand miles, including from one continent to another.
Particularly in tropical regions, SW is somewhat less prone to interference from thunderstorms than medium wave radio, and is able to cover a large geographic area with relatively low power (and hence cost). Therefore, in many of these countries it is widely used for domestic broadcasting.
Very little infrastructure is required for long-distance two-way communications using shortwave radio. All one needs is a pair of transceivers, each with an antenna, and a source of energy (such as a battery, a portable generator, or the electrical grid). This makes shortwave radio one of the most robust means of communications, which can be disrupted only by interference or bad ionospheric conditions. Modern digital transmission modes such as MFSK and Olivia are even more robust, allowing successful reception of signals well below the noise floor of a conventional receiver.
Disadvantages
Shortwave radio's benefits are sometimes regarded as being outweighed by its drawbacks, including:
In most Western countries, shortwave radio ownership is usually limited to enthusiasts, since most new standard radios do not receive the shortwave band. Therefore, Western audiences are limited.
In the developed world, shortwave reception is very difficult in urban areas because of excessive noise from switched-mode power adapters, fluorescent or LED light sources, internet modems and routers, computers and many other sources of radio interference.
Audio quality may be limited due to interference and the modes that are used.
Shortwave listening
The Asia-Pacific Telecommunity estimated that approximately 600 million shortwave broadcast-radio receivers were in use in 2002. WWCR claims that there are 1.5 billion shortwave receivers worldwide.
Many hobbyists listen to shortwave broadcasters. In some cases, the goal is to hear as many stations from as many countries as possible (DXing); others listen to specialized shortwave utility, or "ute", transmissions such as maritime, naval, aviation, or military signals. Others focus on intelligence signals from numbers stations, which transmit cryptic broadcasts usually intended for intelligence operations, or on the two-way communications of amateur radio operators. Some shortwave listeners behave analogously to "lurkers" on the Internet, in that they listen only and never attempt to transmit their own signals. Other listeners participate in clubs, actively send and receive QSL cards, or become involved with amateur radio and start transmitting on their own.
Many listeners tune the shortwave bands for the programmes of stations broadcasting to a general audience (such as Radio Taiwan International, China Radio International, Voice of America, Radio France Internationale, BBC World Service, Voice of Korea, Radio Free Sarawak etc.). Today, through the evolution of the Internet, the hobbyist can listen to shortwave signals via remotely controlled or web controlled shortwave receivers around the world, even without owning a shortwave radio. Many international broadcasters offer live streaming audio on their websites and a number have closed their shortwave service entirely, or severely curtailed it, in favour of internet transmission.
Shortwave listeners, or SWLs, can obtain QSL cards from broadcasters, utility stations or amateur radio operators as trophies of the hobby. Some stations even give out special certificates, pennants, stickers and other tokens and promotional materials to shortwave listeners.
Shortwave broadcasts and music
Some musicians have been attracted to the unique aural characteristics of shortwave radio which – due to the nature of amplitude modulation, varying propagation conditions, and the presence of interference – generally has lower fidelity than local broadcasts (particularly via FM stations). Shortwave transmissions often have bursts of distortion, and "hollow" sounding loss of clarity at certain aural frequencies, altering the harmonics of natural sound and creating at times a strange "spacey" quality due to echoes and phase distortion. Evocations of shortwave reception distortions have been incorporated into rock and classical compositions, by means of delays or feedback loops, equalizers, or even playing shortwave radios as live instruments. Snippets of broadcasts have been mixed into electronic sound collages and live musical instruments, by means of analogue tape loops or digital samples. Sometimes the sounds of instruments and existing musical recordings are altered by remixing or equalizing, with various distortions added, to replicate the garbled effects of shortwave radio reception.
The first attempts by serious composers to incorporate radio effects into music may be those of the Russian physicist and musician Léon Theremin, who perfected a form of radio oscillator as a musical instrument in 1928 (regenerative circuits in radios of the time were prone to breaking into oscillation, adding various tonal harmonics to music and speech); and in the same year, the development of a French instrument called the Ondes Martenot by its inventor Maurice Martenot, a French cellist and former wireless telegrapher. Karlheinz Stockhausen used shortwave radio and effects in works including Hymnen (1966–1967), Kurzwellen (1968) – adapted for the Beethoven Bicentennial in Opus 1970 with filtered and distorted snippets of Beethoven pieces – Spiral (1968), Pole, Expo (both 1969–1970), and Michaelion (1997).
Cypriot composer Yannis Kyriakides incorporated shortwave numbers station transmissions in his 1999 ConSPIracy cantata.
Holger Czukay, a student of Stockhausen, was one of the first to use shortwave in a rock music context. In 1975, German electronic music band Kraftwerk recorded a full length concept album around simulated radiowave and shortwave sounds, entitled Radio-Activity. The The's Radio Cineola monthly broadcasts drew heavily on shortwave radio sound.
Shortwave's future
The development of direct broadcasts from satellites has reduced the demand for shortwave receiver hardware, but there are still a great number of shortwave broadcasters. A new digital radio technology, Digital Radio Mondiale (DRM), is expected to improve the quality of shortwave audio from very poor to adequate. The future of shortwave radio is threatened by the rise of power line communication (PLC), also known as Broadband over Power Lines (BPL), which uses a data stream transmitted over unshielded power lines. As the BPL frequencies used overlap with shortwave bands, severe distortions can make listening to analog shortwave radio signals near power lines difficult or impossible.
During the 2022 Russian invasion of Ukraine, the BBC World Service launched two new shortwave frequencies for listeners in Ukraine and Russia, broadcasting English-language news updates in an effort to avoid censorship by the Russian state. American commercial shortwave broadcasters WTWW and WRMI also redirected much of their programming to Ukraine.
| Technology | Broadcasting | null |
57986 | https://en.wikipedia.org/wiki/Gladius | Gladius | Gladius is a Latin word properly referring to the type of sword that was used by ancient Roman foot soldiers from the 3rd century BC until the 3rd century AD. Linguistically, within Latin, the word also came to mean "sword", regardless of the type used.
Early ancient Roman swords were similar to those of the Greeks, called xiphe (singular: xiphos). From the 3rd century BC, however, the Romans adopted a weapon based on the sword of the Celtiberians of Hispania in service to Carthage during the Punic Wars, known in Latin as the gladius hispaniensis, meaning "Hispanic-type sword". The Romans improved the weapon and modified it depending on how their battle units waged war, creating over time new types of gladii such as the Mainz gladius and the Pompeii gladius. Finally, in the third century AD the heavy Roman infantry replaced the gladius with the spatha (already common among Roman cavalrymen), relegating the gladius to the light Roman infantry.
A fully equipped Roman legionary after the consulships of Gaius Marius was armed with a sword (gladius), a shield (scutum), one or two javelins (pila), often a dagger (pugio), and perhaps, in the later empire period, darts (plumbatae). Conventionally, soldiers threw pila to disable the enemy's shields and disrupt enemy formations before engaging in close combat, for which they drew the gladius. A soldier generally led with the shield and thrust with the sword.
Etymology
Gladius is a Latin masculine noun; its nominative plural is gladii. In Latin, however, gladius refers to any sword, not only the sword described here. The word appears in literature as early as the plays of Plautus (Casina, Rudens).
Gladius is generally believed to be a Celtic loanword in Latin (perhaps via an Etruscan intermediary), derived from an ancient Celtic word for "sword" (the source of the modern Welsh and Breton words for "sword" and of the Old Irish and Modern Irish words, the last perhaps itself a loan from Welsh); the root of the word may survive in the Old Irish verb claidid "digs, excavates" and is anciently attested in the Gallo-Brittonic place-name element cladia/clado "ditch, trench, valley hollow".
Modern English words derived from gladius include gladiator ("swordsman") and gladiolus ("little sword", from the diminutive form of gladius), a flowering plant with sword-shaped leaves.
Predecessors and origins
According to Polybius, the sword used by the Roman army during the Battle of Telamon in 225 BC, though deemed superior to the cumbersome Gallic swords, was mainly useful for thrusting. These thrusting swords, used before the adoption of the gladius, were possibly based on the Greek xiphos. Later, during the Battle of Cannae in 216 BC, the Romans found Hannibal's Celtiberian mercenaries wielding swords that excelled at both slashing and thrusting. A text attributed to Polybius describes the adoption of this design by the Romans even before the end of the war, which the canonical Polybius reaffirms by calling the later Roman sword gladius hispaniensis in Latin and iberiké machaira in Greek. It is believed that Scipio Africanus promoted the change after the Battle of Cartagena in 209 BC, after which he set the inhabitants to producing weapons for the Roman army.
In 70 BC, both Claudius Quadrigarius and Livy related the story of Titus Manlius Torquatus using a "Hispanic sword" (gladius Hispanus) in a duel with a Gaul in 361 BC. However, the gladius was not yet used by the Romans in the 4th century BC, so this has traditionally been considered a terminological anachronism caused by the long-established naming convention. It is possible that the Celtiberian sword was first adopted by the Romans after encounters with Carthaginian mercenaries of that nationality during the First Punic War (264–241 BC), not the second. In any case, the gladius hispaniensis became particularly known in 200 BC during the Second Macedonian War, in which Macedonian soldiers were horrified at what Roman swords could do after an early cavalry skirmish. It has been suggested that the sword used by Roman cavalrymen differed from the infantry model, but most academics have discarded this view.
Arguments for the Celtiberian origin of the weapon have been reinforced in recent decades by the discovery of early Roman gladii that appear to be copies of Celtiberian models. The weapon developed in Iberia after La Tène I models, which were adapted to traditional Celtiberian techniques during the late 4th and early 3rd centuries BC. These weapons are quite original in their design, so they cannot be confused with Gallic types. As for the origin of the word gladius, one theory proposes that it was borrowed from *kladi- during the Gallic wars, relying on the principle that K often became G in Latin. Ennius attests the word; gladius may have replaced ensis, which until then was used mainly by poets.
Manufacturing
Technique
By the time of the Roman Republic, which flourished during the Iron Age, steel and the steel-making process were known to the classical world. Pure iron is relatively soft, but it is never found in nature. Natural iron ore contains various impurities in solid solution, which harden the reduced metal by producing irregular-shaped metallic crystals. The gladius was generally made of steel.
In Roman times, workers reduced ore in a bloomery furnace. The resulting pieces were called blooms, which they further worked to remove slag inclusions from the porous surface.
A recent metallurgical study of two Etrurian swords, one in the form of a Greek kopis from 7th-century BC Vetulonia, the other in the form of a gladius Hispaniensis from 4th-century BC Clusium (Chiusi), gives insight into the manufacture of Roman swords. The Chiusi sword comes from Romanized Etruria; thus, regardless of the names of the forms (which the authors do not identify), the authors believe the process was continuous from the Etruscans to the Romans.
The Vetulonian sword was crafted by the pattern-welding process from five blooms reduced at a temperature of . Five strips of varying carbon content were created. The central core of the sword contained the highest carbon content, 0.15–0.25%. On its edges were placed four strips of low-carbon steel, 0.05–0.07%, and the whole was welded together by forging with hammer blows. Each blow raised the temperature enough to produce a friction weld at that spot. Forging continued until the steel was cold, producing some central annealing. The sword was long.
The Chiusian sword was created from a single bloom by forging from a temperature of . The carbon content increased from 0.05–0.08% at the back side of the sword to 0.35–0.4% on the blade, from which the authors deduce that some form of carburization may have been used. The sword was long and was characterized by a wasp-waist close to the hilt.
Romans continued to forge swords, both as composites and from single pieces. Inclusions of sand and rust weakened the two swords in the study, and no doubt limited the strength of swords during the Roman period.
Production
The craftsmen with the strategic task of making the gladii were called gladiarii. They were part of the Roman legions as fabri, enjoying the status of immunes. There were also public workshops, fabricae, dedicated to the making of the gladii. Epigraphic attestations of the gladiarii have been found in Italy, especially in areas of ancient metallurgic tradition such as Capua and Aquileia.
Description
The word gladius acquired a general meaning as any type of sword. This use appears as early as the 1st century AD in the Biography of Alexander the Great by Quintus Curtius Rufus. The republican authors, however, appear to mean a specific type of sword, which is now known from archaeology to have had variants.
Gladii were two-edged for cutting and had a tapered point for stabbing during thrusting. A solid grip was provided by a knobbed hilt added on, possibly with ridges for the fingers. Blade strength was achieved by welding together strips, in which case the sword had a channel down the centre, or by fashioning a single piece of high-carbon steel, rhomboidal in cross-section. The owner's name was often engraved or punched on the blade.
The hilt of a Roman sword was the capulus. It was often ornate, especially the sword-hilts of officers and dignitaries.
Stabbing was a very efficient technique, as stabbing wounds, especially in the abdominal area, were almost always deadly. However, the gladius in some circumstances was used for cutting or slashing, as is indicated by Livy's account of the Macedonian Wars, wherein the Macedonian soldiers were horrified to see dismembered bodies.
Though the primary infantry attack was thrusting at stomach height, they were trained to take any advantage, such as slashing at kneecaps beneath the shield wall.
The gladius was sheathed in a scabbard mounted on a belt or shoulder strap. Some say the soldier reached across his body to draw it, and others claim that the position of the shield made this method of drawing impossible. A centurion wore it on the opposite side as a mark of distinction.
Towards the end of the 2nd century AD and during the 3rd century the spatha gradually took the place of the gladius in the Roman legions.
Types
Several different designs were used; among collectors and historical reenactors, the three primary kinds are known as the Mainz gladius, the Fulham gladius, and the Pompeii gladius (these names refer to where or how the canonical example was found). More recent archaeological finds have uncovered an earlier version, the gladius Hispaniensis.
The differences between these varieties are subtle. The original Hispanic sword, which was used during the republic, had a slight "wasp-waist" or "leaf-blade" curvature. The Mainz variety came into use on the frontier in the early empire. It kept the curvature, but shortened and widened the blade and made the point triangular. At home, the less battle-effective Pompeii version came into use. It eliminated the curvature, lengthened the blade, and diminished the point. The Fulham was a compromise, with straight edges and a long point.
Gladius Hispaniensis
The gladius Hispaniensis was a Roman sword used from around 216 BC until 20 BC. Its blade had a length of , and the sword was long. The width of the sword was . It was the largest and heaviest of the gladii, weighing or . This gladius was also the earliest and longest blade. It had a pronounced leaf-shape.
Mainz gladius
The Mainz gladius has a blade of heavily corroded iron and a sheath of tinned and gilded bronze. The blade was long and in width. The sword was long. The sword weighed . The point of the sword was more triangular than that of the gladius Hispaniensis, but the Mainz gladius still had wasp-waisted curves. The decoration on the scabbard illustrates the ceding of military victory to Augustus by Tiberius after a successful Alpine campaign. Augustus is semi-nude, and sits in the pose of Jupiter, flanked by the Roman gods of Victory and Mars Ultor, while Tiberius, in military dress, presents Augustus with a statuette of Victory.
Fulham gladius
The Fulham gladius or Mainz-Fulham gladius was a Roman sword that was used after Aulus Plautius' invasion of Britain in 43 AD. The Romans used it until the end of the 1st century. The Fulham gladius has a triangular tip. The length of the blade is . The length of the sword is . The width of the blade is . The sword weighs (wooden hilt). A full-size replica can be seen at Fulham Palace, Fulham.
Pompeii gladius
The Pompeii gladius was named by modern historians after the Roman town of Pompeii. This type of gladius was by far the most popular one. Four examples of the type were found in Pompeii, with others turning up elsewhere. The sword has parallel cutting edges and a triangular tip. It is the shortest of the gladii, and is often confused with the spatha, a longer, slashing weapon initially used by mounted auxiliaries. Over the years, the Pompeii type got longer, and these later versions are called semi-spathes. The length of the blade was . The length of the sword is . The width of the blade is . The sword weighs (wooden hilt).
| Technology | Swords | null |
58005 | https://en.wikipedia.org/wiki/Airship | Airship | An airship, dirigible balloon or dirigible is a type of aerostat (lighter-than-air) aircraft that can navigate through the air flying under its own power. Aerostats use buoyancy from a lifting gas that is less dense than the surrounding air to achieve the lift needed to stay airborne.
In early dirigibles, the lifting gas used was hydrogen, because of its high lifting capacity and ready availability, but its inherent flammability led to several fatal accidents that rendered hydrogen airships obsolete. The alternative lifting gas, helium, is not flammable, but it is rare and relatively expensive. Significant amounts were first discovered in the United States, and for a while helium was only available for airship use in North America. Most airships built since the 1960s have used helium, though some have used hot air.
The envelope of an airship may form the gasbag, or it may contain a number of gas-filled cells. An airship also has engines, crew, and optionally also payload accommodation, typically housed in one or more gondolas suspended below the envelope.
The main types of airship are non-rigid, semi-rigid and rigid airships. Non-rigid airships, often called "blimps", rely solely on internal gas pressure to maintain the envelope shape. Semi-rigid airships maintain their shape by internal pressure, but have some form of supporting structure, such as a fixed keel, attached to it. Rigid airships have an outer structural framework that maintains the shape and carries all structural loads, while the lifting gas is contained in one or more internal gasbags or cells. Rigid airships were first flown by Count Ferdinand von Zeppelin and the vast majority of rigid airships built were manufactured by the firm he founded, Luftschiffbau Zeppelin. As a result, rigid airships are often called zeppelins.
Airships were the first aircraft capable of controlled powered flight, and were most commonly used before the 1940s; their use decreased as their capabilities were surpassed by those of aeroplanes. Their decline was accelerated by a series of high-profile accidents, including the 1930 crash and burning of the British R101 in France, the 1933 and 1935 storm-related crashes of the U.S. Navy's twin helium-filled airborne aircraft carriers, the USS Akron and USS Macon respectively, and the 1937 burning of the German hydrogen-filled Hindenburg. From the 1960s, helium airships have been used where the ability to hover for a long time outweighs the need for speed and manoeuvrability, such as advertising, tourism, camera platforms, geological surveys and aerial observation.
Terminology
Airship
During the pioneer years of aeronautics, terms such as "airship", "air-ship", "air ship" and "ship of the air" meant any kind of navigable or dirigible flying machine. In 1919 Frederick Handley Page was reported as referring to "ships of the air", with smaller passenger types as "air yachts". In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". Nowadays the term "airship" is used only for powered, dirigible balloons, with sub-types being classified as rigid, semi-rigid or non-rigid. Semi-rigid construction is the most recent, following advances in deformable structures and the need to reduce the weight and volume of airships. Semi-rigid airships have a minimal structure that maintains their shape together with the overpressure of the gas envelope.
Aerostat
An aerostat is an aircraft that remains aloft using buoyancy or static lift, as opposed to the aerodyne, which obtains lift by moving through the air. Airships are a type of aerostat. The term aerostat has also been used to indicate a tethered or moored balloon as opposed to a free-floating balloon. Modern aerostats can lift substantial payloads to high altitude and stay in the air for extended periods of time, particularly when powered by an on-board generator or when the tether contains electrical conductors. Due to this capability, aerostats can be used as platforms for telecommunication services. For instance, Platform Wireless International Corporation announced in 2001 that it would use a tethered airborne payload to deliver cellular phone service to a region in Brazil. The European Union's ABSOLUTE project was also reportedly exploring the use of tethered aerostat stations to provide telecommunications during disaster response.
Blimp
A blimp is a non-rigid aerostat. In British usage it refers to any non-rigid aerostat, including barrage balloons and other kite balloons, having a streamlined shape and stabilising tail fins. Some blimps may be powered dirigibles, as in early versions of the Goodyear Blimp. Later Goodyear dirigibles, though technically semi-rigid airships, have still been called "blimps" by the company.
Zeppelin
The term zeppelin originally referred to airships manufactured by the German Zeppelin Company, which built and operated the first rigid airships in the early years of the twentieth century. The initials LZ, for Luftschiff Zeppelin (German for "Zeppelin airship"), usually prefixed their craft's serial identifiers.
Streamlined rigid (or semi-rigid) airships are often referred to as "Zeppelins", because of the fame that this company acquired due to the number of airships it produced, although its early rival was the Parseval semi-rigid design.
Hybrid airship
Hybrid airships fly with a positive aerostatic contribution, usually equal to the empty weight of the system, while the variable payload is sustained by propulsion or aerodynamic lift.
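As a rough illustration of this force balance (not from the source; all figures are assumed for the example), the split between aerostatic and dynamic lift can be sketched as:

```python
# Hedged sketch of the hybrid-airship force balance described above.
# All numbers are illustrative assumptions, not data for any real aircraft.
g = 9.81                      # m/s^2

empty_weight = 20_000 * g     # N, assumed empty mass of 20 t
aerostatic_lift = 20_000 * g  # N, buoyancy sized to roughly match the empty weight
payload_weight = 5_000 * g    # N, assumed 5 t of variable payload

# The payload must be carried by aerodynamic lift and/or vectored thrust:
required_dynamic_lift = (empty_weight + payload_weight) - aerostatic_lift
print(f"Aerodynamic/thrust lift needed: {required_dynamic_lift / 1000:.0f} kN")  # ~49 kN
```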
Classification
Airships are classified according to their method of construction into rigid, semi-rigid and non-rigid types.
Rigid
A rigid airship has a rigid framework covered by an outer skin or envelope. The interior contains one or more gasbags, cells or balloons to provide lift. Rigid airships are typically unpressurised and can be made to virtually any size. Most, but not all, of the German Zeppelin airships have been of this type.
Semi-rigid
A semi-rigid airship has some kind of supporting structure but the main envelope is held in shape by the internal pressure of the lifting gas. Typically the airship has an extended, usually articulated keel running along the bottom of the envelope to stop it kinking in the middle by distributing suspension loads into the envelope, while also allowing lower envelope pressures.
Non-rigid
Non-rigid airships are often called "blimps". Most, but not all, of the American Goodyear airships have been blimps.
A non-rigid airship relies entirely on internal gas pressure to retain its shape during flight. Unlike the rigid design, the non-rigid airship's gas envelope has no compartments. However, it still typically has smaller internal bags containing air (ballonets). As altitude is increased, the lifting gas expands and air from the ballonets is expelled through valves to maintain the hull's shape. To return to sea level, the process is reversed: air is forced back into the ballonets by scooping air from the engine exhaust and using auxiliary blowers.
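The interplay between ballonet volume and the maximum attainable "pressure height" can be sketched numerically. The following is a minimal illustration, not from the source: the 25% sea-level ballonet fraction, the isothermal gas expansion, and the standard-atmosphere (ISA) pressure model are all assumptions.

```python
# Minimal sketch of the ballonet / pressure-height relationship described above.
# Assumptions: 25% of hull volume is ballonet air at sea level, gas expansion is
# roughly isothermal, and ambient pressure follows the ISA troposphere model.

def isa_pressure_ratio(h_m: float) -> float:
    """Ambient pressure at altitude h divided by sea-level pressure (ISA)."""
    return (1.0 - 0.0065 * h_m / 288.15) ** 5.2561

ballonet_fraction = 0.25               # assumed air share of hull volume at sea level
gas_fraction = 1.0 - ballonet_fraction

# Climbing, the lifting gas expands as ambient pressure falls, and ballonet air
# is valved overboard to keep the hull volume constant.  The ballonets are empty
# (the "pressure height") once the gas fills the whole hull: p(h)/p0 = gas_fraction.
h = 0.0
while isa_pressure_ratio(h) > gas_fraction:
    h += 10.0
print(f"Approximate pressure height: {h:.0f} m")  # ~2400 m for a 25% ballonet
```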
Construction
Envelope
The envelope is the structure which contains the buoyant gas. Envelopes in the early 19th century were made from goldbeater's skin, selected for its low weight, relatively high strength, and impermeability compared to paper or linen. By the 1920s, cotton fabric treated with natural rubber had become the predominant envelope material. The natural rubber was succeeded by neoprene in the 1930s and by nylon and PET in the 1950s. A few airships have been metal-clad, the most successful of which was the Detroit-built ZMC-2, which logged 2,265 hours of flight time from 1929 to 1941 before being scrapped, as it was considered too small for operational use on anti-submarine patrols.
The exact determination of the pressure distribution on an airship envelope remains a difficult problem, and one that has attracted major scientists such as Theodore von Kármán.
The envelope may contain ballonets (see below), which allow the density of the buoyant gas to be adjusted by adding or subtracting envelope volume.
Ballonet
A ballonet is an air bag inside the outer envelope of an airship which, when inflated, reduces the volume available for the lifting gas, making it more dense. Because air is denser than the lifting gas, inflating the ballonet also reduces the overall lift, while deflating it increases lift. In this way, the ballonet can be used to adjust the lift as required by controlling the buoyancy. By inflating or deflating ballonets strategically, the pilot can control the airship's altitude and attitude.
Ballonets are typically used in non-rigid or semi-rigid airships, commonly with multiple ballonets located both fore and aft to maintain balance and to control the pitch of the airship, as sketched below.
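A minimal sketch of the trim arithmetic, under the simplifying assumption that ballonet air enters at roughly ambient density while the hull's displaced volume stays fixed (figures are illustrative, not from the source):

```python
# Hedged sketch: inflating a ballonet admits outside air into the hull, adding
# weight without changing the volume of air the hull displaces, so net lift falls;
# deflating it does the opposite.  Inflating fore vs. aft ballonets shifts the
# weight longitudinally, which is what trims the airship's pitch.
RHO_AIR = 1.225   # kg/m^3 at sea level (ISA)
G = 9.81          # m/s^2

def lift_change(ballonet_air_m3: float) -> float:
    """Net lift change in newtons (negative = less lift) from admitting air."""
    return -RHO_AIR * ballonet_air_m3 * G

# Admitting 50 m^3 of air into the forward ballonet costs about 600 N of lift
# at the nose, pitching the ship down:
print(f"{lift_change(50.0):.0f} N")
```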
Lifting gas
Lifting gas is generally hydrogen, helium or hot air.
Hydrogen gives the highest lift and is inexpensive and easily obtained, but it is highly flammable and can detonate if mixed with air. Helium is completely non-flammable, but it gives lower performance and, being a rare element, is much more expensive.
Thermal airships use a heated lifting gas, usually air, in a fashion similar to hot air balloons. The first such airship was flown in 1973 by the British company Cameron Balloons. A comparison of the three gases is sketched below.
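A small numerical comparison (not from the source; standard sea-level densities are assumed, with hot air taken at about 100 °C):

```python
# Sketch comparing gross static lift per cubic metre for the three lifting
# gases named above, using buoyancy = (rho_air - rho_gas) * V * g.
RHO = {                         # kg/m^3 at roughly 15 C and 1 atm
    "air (15 C)":      1.225,
    "hydrogen":        0.090,
    "helium":          0.179,
    "hot air (100 C)": 0.946,   # ideal-gas density of air at 373 K
}

for gas in ("hydrogen", "helium", "hot air (100 C)"):
    lift_kg_per_m3 = RHO["air (15 C)"] - RHO[gas]
    print(f"{gas:16s} lifts ~{lift_kg_per_m3:.2f} kg per m^3")
# hydrogen ~1.14, helium ~1.05 (about 8% less), hot air only ~0.28 kg/m^3,
# which is why thermal airships need much larger envelopes per unit of load.
```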
Gondola
Propulsion and control
Small airships carry their engine(s) in their gondola. Where there were multiple engines on larger airships, these were placed in separate nacelles, termed power cars or engine cars. To allow asymmetric thrust to be applied for maneuvering, these power cars were mounted towards the sides of the envelope, away from the centre line gondola. This also raised them above the ground, reducing the risk of a propeller strike when landing. Widely spaced power cars were also termed wing cars, from the use of "wing" to mean being on the side of something, as in a theater, rather than the aerodynamic device. These engine cars carried a crew during flight who maintained the engines as needed, but who also worked the engine controls, throttle etc., mounted directly on the engine. Instructions were relayed to them from the pilot's station by a telegraph system, as on a ship.
Burning fuel for propulsion progressively reduces the airship's overall weight. In hydrogen airships, this is usually dealt with by simply venting some of the cheap hydrogen lifting gas. In helium airships, water is often condensed from the engine exhaust and stored as ballast instead.
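The bookkeeping can be sketched roughly as follows (not from the source): the water yield assumes ideal combustion of an octane-like fuel, and the 70% recovery factor and 1,000 kg fuel burn are illustrative assumptions.

```python
# Hedged sketch of the fuel-burn compensation described above.
NET_LIFT_H2 = 1.14        # kg of lift per m^3 of hydrogen at sea level (approx.)
WATER_PER_KG_FUEL = 1.4   # kg of exhaust water per kg of octane-like fuel (ideal)

fuel_burned = 1000.0      # kg, assumed for illustration

# Hydrogen ship: vent enough lifting gas to shed 1000 kg of surplus lift.
vented_m3 = fuel_burned / NET_LIFT_H2
print(f"Hydrogen to vent: ~{vented_m3:.0f} m^3")         # ~880 m^3

# Helium ship: condense exhaust water as ballast instead; assume only ~70%
# of the ideal water yield is actually recovered by the condensers.
ballast_kg = 0.7 * WATER_PER_KG_FUEL * fuel_burned
print(f"Water ballast recovered: ~{ballast_kg:.0f} kg")  # ~980 kg
```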
Fins and rudders
To control the airship's direction and stability, it is equipped with fins and rudders. Fins are typically located on the tail section and provide stability and resistance to rolling. Rudders are movable surfaces on the tail that allow the pilot to steer the airship left or right.
Empennage
The empennage refers to the tail section of the airship, which includes the fins, rudders, and other aerodynamic surfaces. It plays a crucial role in maintaining stability and controlling the airship's attitude.
Fuel and power systems
Airships require a source of power to operate their propulsion systems. This includes engines, generators, or batteries, depending on the type of airship and its design. Fuel tanks or batteries are typically located within the envelope or gondola.
Navigation and communication equipment
To navigate safely and communicate with ground control or other aircraft, airships are equipped with a range of instruments, including GPS systems, radios, radar, and navigation lights.
Landing gear
Some airships have landing gear that allows them to land on runways or other surfaces. This landing gear may include wheels, skids, or landing pads.
Performance
Efficiency
The main advantage of airships is that they require far less energy to remain in flight than other air vehicles. The proposed Varialift airship, powered by a mixture of solar-powered engines and conventional jet engines, would use only an estimated 8 percent of the fuel required by jet aircraft. Furthermore, utilizing the jet stream could allow for a faster and more energy-efficient cargo transport alternative to maritime shipping. This is one of the reasons why China has recently embraced their use.
History
Early pioneers
17th–18th century
In 1670, the Jesuit Father Francesco Lana de Terzi, sometimes referred to as the "Father of Aeronautics", published a description of an "Aerial Ship" supported by four copper spheres from which the air was evacuated. Although the basic principle is sound, such a craft was unrealizable then and remains so to the present day, since external air pressure would cause the spheres to collapse unless their thickness was such as to make them too heavy to be buoyant. A hypothetical craft constructed using this principle is known as a vacuum airship.
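The impossibility claim can be checked with a back-of-the-envelope calculation (a sketch, not from the source; textbook material constants for copper are assumed, along with the classical thin-shell buckling formula):

```python
# Rough check of the vacuum-airship argument above.  A thin evacuated copper
# sphere floats only if its shell mass is below the mass of the displaced air:
#     4*pi*R^2 * t * rho_cu < (4/3)*pi*R^3 * rho_air  =>  t/R < rho_air/(3*rho_cu)
# Classical elastic buckling limits the external pressure such a shell can bear:
#     p_cr = 2*E*(t/R)^2 / sqrt(3*(1 - nu^2))
import math

rho_air, rho_cu = 1.225, 8960.0   # kg/m^3
E, nu = 117e9, 0.34               # Young's modulus (Pa) and Poisson's ratio, copper

t_over_R = rho_air / (3 * rho_cu)                      # thickest shell that floats
p_cr = 2 * E * t_over_R**2 / math.sqrt(3 * (1 - nu**2))

print(f"t/R for neutral buoyancy: {t_over_R:.1e}")     # ~4.6e-5
print(f"Collapse pressure: {p_cr:.0f} Pa vs ~101325 Pa ambient")
# The shell buckles at a few hundred pascals, far below atmospheric pressure,
# so the spheres would indeed collapse unless made too heavy to float.
```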
In 1709, the Brazilian-Portuguese Jesuit priest Bartolomeu de Gusmão made a hot air balloon, the Passarola, ascend to the skies before an astonished Portuguese court. On August 8, 1709, in the courtyard of the Casa da Índia in Lisbon, Gusmão gave the first demonstration of the Passarola. The balloon caught fire without leaving the ground, but in a second demonstration it rose to 95 meters in height. It was a small balloon of thick brown paper, filled with hot air produced by the "fire of material contained in a clay bowl embedded in the base of a waxed wooden tray". The event was witnessed by King John V of Portugal and the future Pope Innocent XIII.
A more practical dirigible airship was described by Lieutenant Jean Baptiste Marie Meusnier in a paper entitled "Mémoire sur l'équilibre des machines aérostatiques" (Memorandum on the equilibrium of aerostatic machines) presented to the French Academy on 3 December 1783. The 16 water-color drawings published the following year depict a streamlined envelope with internal ballonets that could be used for regulating lift: this was attached to a long carriage that could be used as a boat if the vehicle was forced to land in water. The airship was designed to be driven by three propellers and steered with a sail-like aft rudder. In 1784, Jean-Pierre Blanchard fitted a hand-powered propeller to a balloon, the first recorded means of propulsion carried aloft. In 1785, he crossed the English Channel in a balloon equipped with flapping wings for propulsion and a birdlike tail for steering.
19th century
The 19th century saw continued attempts to add methods of propulsion to balloons. Rufus Porter built and flew scale models of his "Aerial Locomotive", but never a successful full-size implementation. The Australian William Bland sent designs for his "Atmotic airship" to the Great Exhibition held in London in 1851, where a model was displayed. This was an elongated balloon with a steam engine driving twin propellers suspended underneath. The lift of the balloon was estimated at 5 tons and the car with the fuel at 3.5 tons, giving a payload of 1.5 tons. Bland believed that the machine could fly from Sydney to London in less than a week.
In 1852, Henri Giffard became the first person to make an engine-powered flight when he flew in a steam-powered airship. Airships would develop considerably over the next two decades. In 1863, Solomon Andrews flew his aereon design, an unpowered, controllable dirigible in Perth Amboy, New Jersey and offered the device to the U.S. Military during the Civil War. He flew a later design in 1866 around New York City and as far as Oyster Bay, New York. This concept used changes in lift to provide propulsive force, and did not need a powerplant. In 1872, the French naval architect Dupuy de Lome launched a large navigable balloon, which was driven by a large propeller turned by eight men. It was developed during the Franco-Prussian war and was intended as an improvement to the balloons used for communications between Paris and the countryside during the siege of Paris, but was completed only after the end of the war.
In 1872, Paul Haenlein flew an airship with an internal combustion engine running on the coal gas used to inflate the envelope, the first use of such an engine to power an aircraft. Charles F. Ritchel made a public demonstration flight in 1878 of his hand-powered one-man rigid airship, and went on to build and sell five of his aircraft.
In 1874, Micajah Clark Dyer filed U.S. Patent 154,654 "Apparatus for Navigating the Air". It is believed successful trial flights were made between 1872 and 1874, but detailed dates are not available. The apparatus used a combination of wings and paddle wheels for navigation and propulsion.
In 1883, the first electric-powered flight was made by Gaston Tissandier, who fitted a Siemens electric motor to an airship.
The first fully controllable free flight was made in 1884 by Charles Renard and Arthur Constantin Krebs in the French Army airship La France. La France made the first airship flight to land where it took off, covering the circuit in 23 minutes with the aid of an electric motor and a battery. It made seven flights in 1884 and 1885.
In 1888, the Campbell Air Ship, designed by Professor Peter C. Campbell, was built by the Novelty Air Ship Company. It was lost at sea in 1889 while being flown by Professor Hogan during an exhibition flight.
From 1888 to 1897, Friedrich Wölfert built three airships powered by Daimler Motoren Gesellschaft-built petrol engines, the last of which, Deutschland, caught fire in flight and killed both occupants in 1897. The 1888 version used a single cylinder Daimler engine and flew from Canstatt to Kornwestheim.
In 1897, an airship with an aluminium envelope was built by the Hungarian-Croatian engineer David Schwarz. It made its first flight at Tempelhof field in Berlin after Schwarz had died. His widow, Melanie Schwarz, was paid 15,000 marks by Count Ferdinand von Zeppelin to release the industrialist Carl Berg from his exclusive contract to supply Schwarz with aluminium.
From 1897 to 1899, Konstantin Danilewsky, a medical doctor and inventor from Kharkiv (now Ukraine, then the Russian Empire), built four muscle-powered airships. About 200 ascents were made within the framework of an experimental flight program at two locations, with no significant incidents.
Early 20th century
In July 1900, the Luftschiff Zeppelin LZ1 made its first flight. This led to the most successful airships of all time: the Zeppelins, named after Count Ferdinand von Zeppelin who began working on rigid airship designs in the 1890s, leading to the flawed LZ1 in 1900 and the more successful LZ2 in 1906. The Zeppelin airships had a framework composed of triangular lattice girders covered with fabric that contained separate gas cells. At first multiplane tail surfaces were used for control and stability: later designs had simpler cruciform tail surfaces. The engines and crew were accommodated in "gondolas" hung beneath the hull driving propellers attached to the sides of the frame by means of long drive shafts. Additionally, there was a passenger compartment (later a bomb bay) located halfway between the two engine compartments.
Alberto Santos-Dumont was a wealthy young Brazilian who lived in France and had a passion for flying. He designed 18 balloons and dirigibles before turning his attention to fixed-winged aircraft.
On 19 October 1901 he flew his airship Number 6 from the Parc Saint Cloud to and around the Eiffel Tower and back in under thirty minutes. This feat earned him the Deutsch de la Meurthe prize of 100,000 francs. Many inventors were inspired by Santos-Dumont's small airships. Many airship pioneers, such as the American Thomas Scott Baldwin, financed their activities through passenger flights and public demonstration flights. Stanley Spencer built the first British airship with funds from advertising baby food on the sides of the envelope. Others, such as Walter Wellman and Melvin Vaniman, set their sights on loftier goals, attempting two polar flights in 1907 and 1909, and two trans-Atlantic flights in 1910 and 1912.
In 1902 the Spanish engineer Leonardo Torres Quevedo published details of an innovative airship design in Spain and France under a title translating as "Improvements in dirigible aerostats". With a non-rigid body and internal bracing wires, it overcame the flaws of both rigid (zeppelin-type) structures and flexible ones, giving the airships more stability in flight and the capability of using heavier engines and carrying a greater passenger load. The system was called "auto-rigid". In 1905, helped by Captain A. Kindelán, he built the airship "Torres Quevedo" at the Guadalajara military base. In 1909 he patented an improved design that he offered to the French Astra company, which started mass-producing it in 1911 as the Astra-Torres airship. This type of envelope was employed in the United Kingdom in the Coastal, C Star, and North Sea airships. The distinctive three-lobed design was widely used during the Great War by the Entente powers for diverse tasks, principally convoy protection and anti-submarine warfare. Its wartime success even drew the attention of the Imperial Japanese Navy, which acquired a model in 1922.

Torres also drew up designs for a "docking station" and made alterations to airship designs to resolve the many problems airship engineers faced in docking dirigibles. In 1910, he proposed attaching an airship's nose to a mooring mast and allowing the airship to weathervane with changes of wind direction. A metal column erected on the ground, to the top of which the bow would be directly attached by a cable, would allow a dirigible to be moored at any time, in the open, regardless of wind speed. Additionally, Torres' design called for improved, accessible temporary landing sites where airships could be moored for the disembarkation of passengers. The final patent was presented in February 1911 in Belgium, and later to France and the United Kingdom in 1912, under the title "Improvements in Mooring Arrangements for Airships".
Other airship builders were also active before the war: from 1902 the French company Lebaudy Frères specialized in semirigid airships such as the Patrie and the République, designed by their engineer Henri Julliot, who later worked for the American company Goodrich; the German firm Schütte-Lanz built the wooden-framed SL series from 1911, introducing important technical innovations; another German firm Luft-Fahrzeug-Gesellschaft built the Parseval-Luftschiff (PL) series from 1909, and Italian Enrico Forlanini's firm had built and flown the first two Forlanini airships.
On May 12, 1902, the Brazilian inventor and aeronaut Augusto Severo de Albuquerque Maranhão and his French mechanic, Georges Saché, died when their airship, the Pax, crashed while flying over Paris. A marble plaque at number 81 Avenue du Maine in Paris commemorates the location of the accident. The Catastrophe of the Balloon "Le Pax" is a 1902 short silent film recreation of the disaster, directed by Georges Méliès.
In Britain, the Army built their first dirigible, the Nulli Secundus, in 1907. The Navy ordered the construction of an experimental rigid in 1908. Officially known as His Majesty's Airship No. 1 and nicknamed the Mayfly, it broke its back in 1911 before making a single flight. Work on a successor did not start until 1913.
The German airship passenger service DELAG (Deutsche-Luftschiffahrts-AG) was established in 1910.
In 1910 Walter Wellman unsuccessfully attempted an aerial crossing of the Atlantic Ocean in the airship America.
World War I
The prospect of airships as bombers had been recognized in Europe well before the airships were up to the task. H. G. Wells' The War in the Air (1908) described the obliteration of entire fleets and cities by airship attack. The Italian forces became the first to use dirigibles for a military purpose during the Italo–Turkish War, the first bombing mission being flown on 10 March 1912. World War I marked the airship's real debut as a weapon. The Germans, French, and Italians all used airships for scouting and tactical bombing roles early in the war, and all learned that the airship was too vulnerable for operations over the front. The decision to end operations in direct support of armies was made by all in 1917.
Many in the German military believed they had found the ideal weapon with which to counteract British naval superiority and strike at Britain itself, while more realistic airship advocates believed the zeppelin's value was as a long range scout/attack craft for naval operations. Raids on England began in January 1915 and peaked in 1916: following losses to the British defenses only a few raids were made in 1917–18, the last in August 1918. Zeppelins proved to be terrifying but inaccurate weapons. Navigation, target selection and bomb-aiming proved to be difficult under the best of conditions, and the cloud cover that was frequently encountered by the airships reduced accuracy even further. The physical damage done by airships over the course of the war was insignificant, and the deaths that they caused amounted to a few hundred. Nevertheless, the raids caused a significant diversion of British resources to defense efforts. The airships were initially immune to attack by aircraft and anti-aircraft guns: as the pressure in their envelopes was only just higher than ambient air, holes had little effect. But following the introduction of a combination of incendiary and explosive ammunition in 1916, their flammable hydrogen lifting gas made them vulnerable to the defending aeroplanes. Several were shot down in flames by British defenders, and many others destroyed in accidents. New designs capable of reaching greater altitude were developed, but although this made them immune from attack it made their bombing accuracy even worse.
Countermeasures by the British included sound detection equipment, searchlights and anti-aircraft artillery, followed by night fighters in 1915. One tactic used early in the war, when the airships' limited range meant they had to fly from forward bases and the only zeppelin production facilities were in Friedrichshafen, was the bombing of airship sheds by the British Royal Naval Air Service. Later in the war, the development of the aircraft carrier led to the first successful carrier-based air strike in history: on the morning of 19 July 1918, seven Sopwith 2F.1 Camels were launched from HMS Furious and struck the airship base at Tønder, destroying zeppelins L 54 and L 60.
The British Army had abandoned airship development in favour of aeroplanes before the start of the war, but the Royal Navy had recognized the need for small airships to counteract the submarine and mine threat in coastal waters. From February 1915, it developed the SS (Sea Scout) class of blimp. These had a small envelope and at first used aircraft fuselages, without the wing and tail surfaces, as control cars. Later, more advanced blimps with purpose-built gondolas were used. The NS (North Sea) class were the largest and most effective non-rigid airships in British service, with a crew of 10 and an endurance of 24 hours. Six bombs were carried, as well as three to five machine guns. British blimps were used for scouting, mine clearance, and convoy patrol duties. During the war, the British operated over 200 non-rigid airships. Several were sold to Russia, France, the United States, and Italy. The large number of trained crews, low attrition rate and constant experimentation in handling techniques meant that at the war's end Britain was the world leader in non-rigid airship technology.
The Royal Navy continued development of rigid airships until the end of the war. Eight rigid airships had been completed by the armistice (No. 9r, four 23 Class, two R23X Class and one R31 Class), although several more were in an advanced state of completion by the war's end. Both France and Italy continued to use airships throughout the war. France preferred the non-rigid type, whereas Italy flew 49 semi-rigid airships in both the scouting and bombing roles.
Aeroplanes had almost entirely replaced airships as bombers by the end of the war, and Germany's remaining zeppelins were destroyed by their crews, scrapped or handed over to the Allied powers as war reparations. The British rigid airship program, which had mainly been a reaction to the potential threat of the German airships, was wound down.
The interwar period
Britain, the United States and Germany built rigid airships between the two world wars. Italy and France made limited use of Zeppelins handed over as war reparations. Italy, the Soviet Union, the United States and Japan mainly operated semi-rigid airships.
Under the terms of the Treaty of Versailles, Germany was not allowed to build airships of greater capacity than a million cubic feet. Two small passenger airships, LZ 120 Bodensee and its sister ship LZ 121 Nordstern, were built immediately after the war but were confiscated following the sabotage of the wartime Zeppelins that were to have been handed over as war reparations: Bodensee was given to Italy and Nordstern to France. On May 12, 1926, the Italian-built semi-rigid airship Norge was the first aircraft to fly over the North Pole.
The British R33 and R34 were near-identical copies of the German L 33, which had come down almost intact in Yorkshire on 24 September 1916. Despite being almost three years out of date by the time they were launched in 1919, they became two of the most successful airships in British service. The establishment of the Royal Air Force (RAF) in early 1918 produced a hybrid British airship program: the RAF was not interested in airships while the Admiralty was, so a deal was made whereby the Admiralty would design any future military airships and the RAF would handle manpower, facilities and operations. On 2 July 1919, R34 began the first double crossing of the Atlantic by an aircraft. It landed at Mineola, Long Island on 6 July after 108 hours in the air; the return crossing began on 8 July and took 75 hours. This feat failed to generate enthusiasm for continued airship development, and the British airship program was rapidly wound down.
During World War I, the U.S. Navy acquired its first airship, the DN-1, but it was destroyed while being inflated shortly after delivery to the Navy. After the war, the U.S. Navy contracted to buy the R 38, which was being built in Britain, but before it was handed over it was destroyed by a structural failure during a test flight.
America then started constructing the USS Shenandoah, designed by the Bureau of Aeronautics and based on the Zeppelin L 49. Assembled in Hangar No. 1 and first flown on 4 September 1923 at Lakehurst, New Jersey, it was the first airship to be inflated with the noble gas helium, which was then so scarce that the Shenandoah contained most of the world's supply. A second airship, the USS Los Angeles, was built by the Zeppelin company as compensation for the airships that should have been handed over as war reparations under the terms of the Versailles Treaty but had been sabotaged by their crews. This construction order saved the Zeppelin works from the threat of closure. The success of the Los Angeles, which operated for eight years, encouraged the U.S. Navy to invest in its own, larger airships. When the Los Angeles was delivered, the two airships had to share the limited supply of helium, and thus alternated between operation and overhaul.
In 1922, Sir Dennistoun Burney suggested a plan for a subsidised air service throughout the British Empire using airships (the Burney Scheme). Following the coming to power of Ramsay MacDonald's Labour government in 1924, the scheme was transformed into the Imperial Airship Scheme, under which two airships were built, one by a private company and the other by the Royal Airship Works under Air Ministry control. The two designs were radically different. The "capitalist" ship, the R100, was more conventional, while the "socialist" ship, the R101, had many innovative design features. Construction of both took longer than expected, and the airships did not fly until 1929. Neither airship was capable of the service intended, though the R100 did complete a proving flight to Canada and back in 1930. On 5 October 1930, the R101, which had not been thoroughly tested after major modifications, crashed on its maiden voyage to India at Beauvais in France killing 48 of the 54 people aboard. Among the dead were the craft's chief designer and the Secretary of State for Air. The disaster ended British interest in airships.
In 1925 the Zeppelin company started construction of the Graf Zeppelin (LZ 127), the largest airship that could be built in the company's existing shed, intended to stimulate interest in passenger airships. The Graf Zeppelin burned blau gas, similar to propane, stored in large gas bags below the hydrogen cells, as fuel. Since its density was similar to that of air, it avoided the weight change as fuel was used, and thus the need to valve hydrogen. The Graf Zeppelin had an impressive safety record, flying a great distance (including the first circumnavigation of the globe by airship) without a single passenger injury.
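The buoyancy arithmetic behind the blau-gas choice can be sketched as follows (an illustration, not from the source; the fuel-gas density is an assumed value close to that of air):

```python
# Small sketch of why a fuel gas with roughly air density avoids buoyancy
# changes: as fuel is consumed, its cell volume is in effect replaced by air
# of nearly the same weight, so net lift stays constant and no hydrogen need
# be valved.  A liquid fuel, by contrast, simply makes the ship lighter.
RHO_AIR = 1.225    # kg/m^3
RHO_BLAU = 1.2     # kg/m^3, assumed: close to air density

def lift_change_per_m3(rho_fuel_gas: float) -> float:
    """kg of net lift change per m^3 of gaseous fuel consumed (air replaces it)."""
    return rho_fuel_gas - RHO_AIR

print(f"blau gas:   {lift_change_per_m3(RHO_BLAU):+.3f} kg/m^3")  # ~0: neutral
print(f"light fuel: {lift_change_per_m3(0.70):+.3f} kg/m^3")      # ship grows heavier
```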
The U.S. Navy experimented with the use of airships as airborne aircraft carriers, developing an idea pioneered by the British. The USS Los Angeles was used for initial experiments, and the USS Akron and USS Macon, the world's largest airships at the time, were used to test the principle in naval operations. Each carried four F9C Sparrowhawk fighters in its hangar, and could carry a fifth on the trapeze. The idea had mixed results. By the time the Navy started to develop a sound doctrine for using the ZRS-type airships, the later of the two, USS Macon, had been wrecked. Meanwhile, the seaplane had become more capable, and was considered a better investment.
Eventually, the U.S. Navy lost all three U.S.-built rigid airships to accidents. USS Shenandoah flew into a severe thunderstorm over Noble County, Ohio while on a poorly planned publicity flight on 3 September 1925. It broke into pieces, killing 14 of its crew. USS Akron was caught in a severe storm and flown into the surface of the sea off the shore of New Jersey on 3 April 1933. It carried no life boats and few life vests, so 73 of its crew of 76 died from drowning or hypothermia. USS Macon was lost after suffering a structural failure offshore near Point Sur Lighthouse on 12 February 1935. The failure caused a loss of gas, which was made much worse when the aircraft was driven over pressure height causing it to lose too much helium to maintain flight. Only two of its crew of 83 died in the crash thanks to the inclusion of life jackets and inflatable rafts after the Akron disaster.
The Empire State Building was completed in 1931 with a dirigible mast, in anticipation of future passenger airship service, but no airship ever used the mast. Various entrepreneurs experimented with commuting and shipping freight via airship.
In the 1930s, the German Zeppelins successfully competed with other means of transport. They could carry significantly more passengers than other contemporary aircraft while providing amenities similar to those on ocean liners, such as private cabins, observation decks, and dining rooms. Less importantly, the technology was potentially more energy-efficient than heavier-than-air designs. Zeppelins were also faster than ocean liners. On the other hand, operating airships was quite involved. Often the crew would outnumber passengers, and on the ground large teams were necessary to assist mooring and very large hangars were required at airports.
By the mid-1930s, only Germany still pursued airship development. The Zeppelin company continued to operate the Graf Zeppelin on passenger service between Frankfurt and Recife in Brazil, taking 68 hours. Even with the small Graf Zeppelin, the operation was almost profitable. In the mid-1930s, work began on an airship designed specifically to operate a passenger service across the Atlantic. The Hindenburg (LZ 129) completed a successful 1936 season, carrying passengers between Lakehurst, New Jersey and Germany. The year 1937 started with the most spectacular and widely remembered airship accident. Approaching the Lakehurst mooring mast minutes before landing on 6 May 1937, the Hindenburg suddenly burst into flames and crashed to the ground. Of the 97 people aboard, 35 died (13 passengers and 22 aircrew), along with one American ground-crewman. The disaster happened before a large crowd, was filmed, and a radio news reporter was recording the arrival; it was a disaster that theater goers could see and hear in newsreels. The Hindenburg disaster shattered public confidence in airships and brought a definitive end to their "golden age". The day after the disaster, the Graf Zeppelin landed safely in Germany after its return flight from Brazil; this was the last international passenger airship flight.
Hindenburg's identical sister ship, the Graf Zeppelin II (LZ 130), could not carry commercial passengers without helium, which the United States refused to sell to Germany. The Graf Zeppelin II made several test flights and conducted some electronic espionage until 1939, when it was grounded at the beginning of the war. The two Graf Zeppelins were scrapped in April 1940.
Development of airships continued only in the United States and, to a lesser extent, the Soviet Union. The Soviet Union had several semi-rigid and non-rigid airships. The semi-rigid dirigible SSSR-V6 OSOAVIAKhIM was among the largest of these craft, and it set the endurance record of the time at over 130 hours aloft. It crashed into a mountain in 1938, killing 13 of the 19 people on board. While this was a severe blow to the Soviet airship program, non-rigid airships continued to operate until 1950.
World War II
While Germany determined that airships were obsolete for military purposes in the coming war and concentrated on the development of aeroplanes, the United States pursued a program of military airship construction even though it had not developed a clear military doctrine for airship use. When the Japanese attacked Pearl Harbor on 7 December 1941, bringing the United States into World War II, the U.S. Navy had 10 nonrigid airships:
4 K-class: K-2, K-3, K-4 and K-5 designed as patrol ships, all built in 1938.
3 L-class: L-1, L-2 and L-3 as small training ships, produced in 1938.
1 G-class, built in 1936 for training.
2 TC-class that were older patrol airships designed for land forces, built in 1933. The U.S. Navy acquired both from the United States Army in 1938.
Only the K- and TC-class airships were suitable for combat, and they were quickly pressed into service against Japanese and German submarines, which were then sinking American shipping within visual range of the American coast. U.S. Navy command, remembering the airships' anti-submarine success in World War I, immediately requested new modern antisubmarine airships, and on 2 January 1942 formed the ZP-12 patrol unit at Lakehurst from the four K airships. The ZP-32 patrol unit was formed from two TC and two L airships a month later, based at NAS Moffett Field in Sunnyvale, California. An airship training base was created there as well. The status of submarine-hunting Goodyear airships in the early days of World War II has created significant confusion. Although various accounts refer to the airships Resolute and Volunteer as operating as "privateers" under a Letter of Marque, Congress never authorized a commission, nor did the President sign one.
In the years 1942–44, approximately 1,400 airship pilots and 3,000 support crew members were trained in the military airship crew training program, and airship military personnel grew from 430 to 12,400. The U.S. airships were produced by the Goodyear factory in Akron, Ohio. From 1942 until 1945, 154 airships were built for the U.S. Navy (133 K-class, 10 L-class, 7 G-class and 4 M-class), along with 5 L-class airships for civilian customers (serial numbers L-4 to L-8).
The primary airship tasks were patrol and convoy escort near the American coastline. They also served as an organizing centre for the convoys, directing ship movements, and were used in naval search and rescue operations. Rarer duties included aerial photographic reconnaissance, naval mine-laying and mine-sweeping, parachute unit transport and deployment, and cargo and personnel transportation. They were deemed quite successful in their duties, with the highest combat readiness factor in the entire U.S. air forces (87%).
During the war, some 532 ships without airship escort were sunk near the U.S. coast by enemy submarines. Of the roughly 89,000 ships in convoys escorted by blimps, only one, the tanker Persephone, was sunk by the enemy. Airships engaged submarines with depth charges and, less frequently, with other on-board weapons. They were excellent at driving submarines down, where their limited speed and range prevented them from attacking convoys. The weapons available to airships were so limited that until the advent of the homing torpedo they had little chance of sinking a submarine.
Only one airship was ever destroyed by a U-boat: on the night of 18/19 July 1943, K-74 of the ZP-21 division was patrolling the coastline near Florida. Using radar, the airship located a surfaced German submarine. K-74 made her attack run, but the U-boat opened fire first. K-74's depth charges did not release as she crossed the U-boat, and she received serious damage, losing gas pressure and an engine, but landed in the water without loss of life. The crew was rescued by patrol boats in the morning, but one crewman, Aviation Machinist's Mate Second Class Isadore Stessel, died from a shark attack. The U-boat, U-134, was slightly damaged and a day or so later was attacked by aircraft, sustaining damage that forced it to return to base. It was finally sunk on 24 August 1943 by a British Vickers Wellington near Vigo, Spain.
Fleet Airship Wing One operated from Lakehurst, New Jersey, Glynco, Georgia, Weeksville, North Carolina, South Weymouth NAS Massachusetts, Brunswick NAS and Bar Harbor Maine, Yarmouth, Nova Scotia, and Argentia, Newfoundland.
Some Navy blimps saw action in the European war theater. In 1944–45, the U.S. Navy moved an entire squadron of eight Goodyear K-class blimps (K-89, K-101, K-109, K-112, K-114, K-123, K-130 and K-134) with flight and maintenance crews from Weeksville Naval Air Station in North Carolina to Naval Air Station Port Lyautey, French Morocco. Their mission was to locate and destroy German U-boats in the relatively shallow waters around the Strait of Gibraltar, where magnetic anomaly detection (MAD) was viable. PBY aircraft had been searching these waters, but MAD required low-altitude flying that was dangerous at night for these aircraft. The blimps were considered a perfect solution for a round-the-clock MAD barrier across the Strait of Gibraltar, with the PBYs flying the day shift and the blimps the night shift. The first two blimps (K-123 and K-130) left South Weymouth NAS on 28 May 1944 and flew to Argentia, Newfoundland, the Azores, and finally to Port Lyautey, where they completed the first transatlantic crossing by nonrigid airships on 1 June 1944. The blimps of USN Blimp Squadron ZP-14 (Blimpron 14, a.k.a. the Africa Squadron) also conducted mine-spotting and mine-sweeping operations in key Mediterranean ports, as well as various escort missions, including the convoy carrying United States President Franklin D. Roosevelt and British Prime Minister Winston Churchill to the Yalta Conference in 1945. Airships from the ZP-12 unit took part in the sinking of the last U-boat before the German capitulation, sinking U-881 on 6 May 1945 together with the destroyers USS Atherton and USS Moberly.
Other airships patrolled the Caribbean. Fleet Airship Wing Two, headquartered at Naval Air Station Richmond, covered the Gulf of Mexico from Richmond and Key West, Florida, Houma, Louisiana, as well as Hitchcock and Brownsville, Texas. FAW 2 also patrolled the northern Caribbean from San Julian, the Isle of Pines (now called Isla de la Juventud) and Guantánamo Bay, Cuba, as well as Vernam Field, Jamaica.
Navy blimps of Fleet Airship Wing Five (ZP-51) operated from bases in Trinidad, British Guiana and Paramaribo, Suriname. Fleet Airship Wing Four operated along the coast of Brazil. Two squadrons, ZP-41 and ZP-42, flew from bases at Amapá, Igarapé-Açu, São Luís, Fortaleza, Fernando de Noronha, Recife, Maceió, Ipitanga (near Salvador, Bahia), Caravelas, Vitória and the hangar built for the Graf Zeppelin at Santa Cruz, Rio de Janeiro.
Fleet Airship Wing Three operated squadrons, ZP-32 from Moffett Field, ZP-31 at NAS Santa Ana, and ZP-33 at NAS Tillamook, Oregon. Auxiliary fields were at Del Mar, Lompoc, Watsonville and Eureka, California, North Bend and Astoria, Oregon, as well as Shelton and Quillayute in Washington.
From 2 January 1942 until the end of war airship operations in the Atlantic, the blimps of the Atlantic fleet made 37,554 flights and flew 378,237 hours. Of the over 70,000 ships in convoys protected by blimps, only one was sunk by a submarine while under blimp escort.
The Soviet Union flew a single airship during the war. The W-12, built in 1939, entered service in 1942 for paratrooper training and equipment transport. It made 1,432 flights carrying 300 metric tons of cargo up to 1945. On 1 February 1945, the Soviets commissioned a second airship, a Pobeda-class (Victory-class) unit used for mine-sweeping and wreckage clearing in the Black Sea; it crashed on 21 January 1947. Another W-class, W-12bis Patriot, was commissioned in 1947 and was used mostly for crew training, parades and propaganda until the mid-1950s.
Postwar period
Although airships are no longer used for major cargo and passenger transport, they are still used for other purposes such as advertising, sightseeing, surveillance, research and advocacy.
There were several studies and proposals for nuclear-powered airships, starting with a 1954 study by F. W. Locke Jr. for the US Navy. In 1957 Edwin J. Kirschner published the book The Zeppelin in the Atomic Age, which promoted the use of atomic airships. In 1959 Goodyear presented a plan for a nuclear-powered airship for both military and commercial use. Several other proposals and papers were published over the following decades.
In the 1980s, Per Lindstrand and his team introduced the GA-42 airship, the first airship to use fly-by-wire flight control, which considerably reduced the pilot's workload.
An airship was prominently featured in the James Bond film A View to a Kill, released in 1985. The Skyship 500 had the livery of Zorin Industries.
The world's largest thermal airship, the AS-300, was constructed by the Per Lindstrand company for French botanists in 1993. The AS-300 carried an underslung raft, which was positioned by the airship on top of tree canopies in the rain forest, allowing the botanists to carry out their treetop research without significant damage to the rainforest. When research was finished at a given location, the airship returned to pick up and relocate the raft.
In June 1987, the U.S. Navy awarded a US$168.9 million contract to Westinghouse Electric and Airship Industries of the UK to find out whether an airship could be used as an airborne platform to detect the threat of sea-skimming missiles, such as the Exocet. At 2.5 million cubic feet, the Westinghouse/Airship Industries Sentinel 5000 (Redesignated YEZ-2A by the U.S. Navy) prototype design was to have been the largest blimp ever constructed. Additional funding for the Naval Airship Program was killed in 1995 and development was discontinued.
The SVAM CA-80 airship, produced in 2000 by Shanghai Vantage Airship Manufacture Co., Ltd., had a successful trial flight in September 2001. It was designed for advertising and publicity, aerial photography, scientific testing, tourism and surveillance duties. It was certified as a grade-A Hi-Tech introduction program (No. 20000186) in Shanghai, and the CAAC authority granted a type design approval and certificate of airworthiness for the airship.
In the 1990s the Zeppelin company returned to the airship business. Its new model, designated the Zeppelin NT, made its maiden flight on 18 September 1997. As of 2009, there were four NT aircraft flying; a fifth was completed in March 2009, and an expanded NT-14 (14,000 cubic metres of helium, capable of carrying 19 passengers) was under construction. One was sold to a Japanese company and was planned to be flown to Japan in the summer of 2004; owing to delays in getting permission from the Russian government, the company decided to transport the airship to Japan by sea. One of the four NT craft is in South Africa, carrying diamond detection equipment from De Beers, an application at which the very stable, low-vibration NT platform excels. The project included design adaptations for high temperature operation and a desert climate, as well as a separate mooring mast and a very heavy mooring truck. NT-4 belonged to Airship Ventures of Moffett Field, Mountain View in the San Francisco Bay Area, and provided sight-seeing tours.
Blimps are used for advertising and as TV camera platforms at major sporting events. The most iconic of these are the Goodyear Blimps. Goodyear operates three blimps in the United States, and The Lightship Group, now The AirSign Airship Group, operates up to 19 advertising blimps around the world. Airship Management Services owns and operates three Skyship 600 blimps. Two operate as advertising and security ships in North America and the Caribbean. Airship Ventures operated a Zeppelin NT for advertising, passenger service and special mission projects. They were the only airship operator in the U.S. authorized to fly commercial passengers, until closing their doors in 2012.
Skycruise Switzerland AG owns and operates two Skyship 600 blimps, one of which operates regularly over Switzerland on sightseeing tours.
The Switzerland-based Skyship 600 has also played other roles over the years. For example, it was flown over Athens during the 2004 Summer Olympics as a security measure. In November 2006, it carried advertising calling it The Spirit of Dubai as it began a publicity tour from London to Dubai, UAE on behalf of The Palm Islands, the world's largest man-made islands created as a residential complex.
Los Angeles-based Worldwide Aeros Corp. produces FAA Type Certified Aeros 40D Sky Dragon airships.
In May 2006, the U.S. Navy began to fly airships again after a hiatus of nearly 44 years. The program uses a single American Blimp Company A-170 nonrigid airship, with designation MZ-3A. Operations focus on crew training and research, and the platform integrator is Northrop Grumman. The program is directed by the Naval Air Systems Command and is being carried out at NAES Lakehurst, the original centre of U.S. Navy lighter-than-air operations in previous decades.
In November 2006 the U.S. Army bought an A380+ airship from the American Blimp Corporation through a systems-level contract with Northrop Grumman and Booz Allen Hamilton. The airship started flight tests in late 2007, with a primary goal of carrying a payload to altitude under remote control and autonomous waypoint navigation; the program was also to demonstrate carrying a heavier payload, and the platform could be used for intelligence collection. In 2008, the CA-150 airship was launched by Vantage Airship. An improved modification of the model CA-120, it has a larger volume and increased passenger capacity, and is the largest manned nonrigid airship in China at present.
In late June 2014 the Electronic Frontier Foundation flew the GEFA-FLUG AS 105 GD/4 blimp AE Bates (owned by, and in conjunction with, Greenpeace) over the NSA's Bluffdale Utah Data Center in protest.
Postwar projects
Hybrid designs such as the Heli-Stat airship/helicopter, the Aereon aerostatic/aerodynamic craft, and the CycloCrane (a hybrid aerostatic/rotorcraft), struggled to take flight. The Cyclocrane was also interesting in that the airship's envelope rotated along its longitudinal axis.
In 2005, a short-lived project of the U.S. Defense Advanced Research Projects Agency (DARPA) was Walrus HULA, which explored the potential for using airships as long-distance, heavy-lift craft. The primary goal of the research program was to determine the feasibility of building an airship capable of carrying a heavy payload over a long distance and landing at an unimproved location without the use of external ballast or ground equipment (such as masts). In 2005, two contractors, Lockheed Martin and US Aeros Airships, were each awarded approximately $3 million to do feasibility studies of designs for WALRUS. Congress removed funding for Walrus HULA in 2006.
Modern airships
Military
In 2010, the U.S. Army awarded a $517 million (£350.6 million) contract to Northrop Grumman and partner Hybrid Air Vehicles to develop a Long Endurance Multi-Intelligence Vehicle (LEMV) system, in the form of three HAV 304s. The project was cancelled in February 2012 because it was behind schedule and over budget, and because of the forthcoming U.S. withdrawal from Afghanistan, where the craft was intended to be deployed. The HAV 304 was subsequently repurchased by Hybrid Air Vehicles, modified and reassembled in Bedford, UK, and renamed the Airlander 10. As of 2018, it was being tested in readiness for its UK flight test programme.
A-NSE, a French company, manufactures and operates airships and aerostats. For two years, A-NSE has been testing its airships for the French Army. Its airships and aerostats are operated to provide intelligence, surveillance, and reconnaissance (ISR) support, and include many innovative features such as water-ballast take-off and landing systems, variable-geometry envelopes and thrust-vectoring systems.
The U.S. government has funded two major projects in the high altitude arena. The Composite Hull High Altitude Powered Platform (CHHAPP), sponsored by the U.S. Army Space and Missile Defense Command and sometimes called the HiSentinel High-Altitude Airship, made a five-hour test flight in September 2005. The second project, the high-altitude airship (HAA), is sponsored by DARPA. In 2005, DARPA awarded a contract for nearly $150 million to Lockheed Martin for prototype development. First flight of the HAA was planned for 2008 but suffered programmatic and funding delays. The HAA project evolved into the High Altitude Long Endurance-Demonstrator (HALE-D). The U.S. Army and Lockheed Martin launched the first-of-its-kind HALE-D on July 27, 2011. During the climb an anomaly occurred and the company decided to abort the mission; the airship made a controlled descent into an unpopulated area of southwest Pennsylvania.
On 31 January 2006 Lockheed Martin made the first flight of their secretly built hybrid airship designated the P-791. The design is very similar to the SkyCat, unsuccessfully promoted for many years by the British company Advanced Technologies Group (ATG).
Dirigibles have been used in the War in Afghanistan for reconnaissance purposes, as they allow for constant monitoring of a specific area through cameras mounted on the airships.
Passenger transport
In the 1990s, the successor of the original Zeppelin company in Friedrichshafen, the Zeppelin Luftschifftechnik GmbH, reengaged in airship construction. The first experimental craft (later christened Friedrichshafen) of the type "Zeppelin NT" flew in September 1997. Though larger than common blimps, the Neue Technologie (New Technology) zeppelins are much smaller than their giant ancestors and not actually Zeppelin-types in the classical sense. They are sophisticated semirigids. Apart from the greater payload, their main advantages compared to blimps are higher speed and excellent maneuverability. Meanwhile, several Zeppelin NT have been produced and operated profitably in joyrides, research flights and similar applications.
In June 2004, a Zeppelin NT was sold for the first time to a Japanese company, Nippon Airship Corporation, for tourism and advertising mainly around Tokyo. It was also given a role at the 2005 Expo in Aichi. The aircraft began a flight from Friedrichshafen to Japan, stopping at Geneva, Paris, Rotterdam, Munich, Berlin, Stockholm and other European cities to carry passengers on short legs of the flight. Russian authorities denied overflight permission, so the airship had to be dismantled and shipped to Japan rather than following the historic Graf Zeppelin flight from Germany to Japan.
In 2008, Airship Ventures Inc. began operations from Moffett Federal Airfield near Mountain View, California and until November 2012 offered tours of the San Francisco Bay Area for up to 12 passengers.
Exploration
In November 2005, De Beers, a diamond mining company, launched an airship exploration program over the remote Kalahari Desert. A Zeppelin NT, equipped with a Bell Geospace gravity gradiometer, was used to find potential diamond mines by scanning the local geography for low-density rock formations, known as kimberlite pipes. On 21 September 2007, the airship was severely damaged by a whirlwind while in Botswana. One crew member, who was on watch aboard the moored craft, was slightly injured but released after overnight observation in hospital.
Thermal
Several companies, such as Cameron Balloons in Bristol, United Kingdom, build hot-air airships. These combine the structures of both hot-air balloons and small airships. The envelope is the normal cigar shape, complete with tail fins, but is inflated with hot air instead of helium to provide the lifting force. A small gondola, carrying the pilot and passengers, a small engine, and the burners to provide the hot air are suspended below the envelope, beneath an opening through which the burners protrude.
Hot-air airships typically cost less to buy and maintain than modern helium-based blimps, and can be quickly deflated after flights. This makes them easy to carry in trailers or trucks and inexpensive to store. They are usually very slow moving, with a typical top speed of . They are mainly used for advertising, but at least one has been used in rainforests for wildlife observation, as they can be easily transported to remote areas.
Unmanned remote
Remote-controlled (RC) airships, a type of unmanned aerial system (UAS), are sometimes used for commercial purposes such as advertising, aerial video and photography, as well as for recreation. They are particularly common as an advertising mechanism at indoor stadiums. While RC airships are sometimes flown outdoors, doing so for commercial purposes is illegal in the US unless the unmanned airship is certified under Part 121.
Adventures
In 2008, French adventurer Stephane Rousson attempted to cross the English Channel in a muscle-powered, pedal-driven airship.
Stephane Rousson also flies the Aérosail, a sky sailing yacht.
Current design projects
Today, with large, fast, and more cost-efficient fixed-wing aircraft and helicopters available, it is unknown whether huge airships can operate profitably in regular passenger transport, although, as energy costs rise, attention is once again returning to these lighter-than-air vessels as a possible alternative. At the very least, the idea of comparatively slow, "majestic" cruising at relatively low altitude in a comfortable atmosphere has retained some appeal. Some niches for airships emerged during and after World War II, such as long-duration observation, antisubmarine patrol, platforms for TV camera crews, and advertising; these generally require only small and flexible craft, and have thus been better served by cheaper (non-passenger) blimps.
Heavy lifting
It has periodically been suggested that airships could be employed for cargo transport, especially for delivering extremely heavy loads over great distances to areas with poor infrastructure; this has been called roadless trucking. Airships could also be used for heavy lifting over short distances, for example on construction sites; this is described as heavy-lift, short-haul. In both cases, the airships serve as heavy haulers. One recent enterprise of this sort was the CargoLifter project, in which a hybrid (thus not entirely Zeppelin-type) airship even larger than the Hindenburg was projected. Around 2000, CargoLifter AG built the world's largest self-supporting hall, measuring long, wide and high, about south of Berlin. In May 2002, the project was stopped for financial reasons and the company had to file for bankruptcy. The enormous CargoLifter hangar was later converted to house the Tropical Islands Resort. Although no rigid airships are currently used for heavy lifting, hybrid airships are being developed for such purposes. The AEREON 26, tested in 1971, was described in John McPhee's The Deltoid Pumpkin Seed.
An impediment to the large-scale development of airships as heavy haulers has been figuring out how they can be used in a cost-efficient way. In order to have a significant economic advantage over ocean transport, cargo airships must be able to deliver their payload faster than ocean carriers but more cheaply than airplanes. William Crowder, a fellow at the Logistics Management Institute, has calculated that cargo airships are only economical when they can transport 500 to 1,000 tons, approximately the same as a super-jumbo aircraft. The large initial investment required to build such a large airship has been a hindrance to production, especially given the risk inherent in a new technology. The chief commercial officer of the company hoping to sell the LMH-1, a cargo airship currently being developed by Lockheed Martin, believes that airships can be economical in hard-to-reach locations such as mining operations in northern Canada that currently require ice roads.
Metal-clad airships
A metal-clad airship has a very thin metal envelope rather than the usual fabric. The shell may be either internally braced or monocoque, as in the ZMC-2, the only such airship ever to fly successfully, which made many flights following its first flight in 1929. The shell may be gas-tight, as in a non-rigid blimp, or the design may employ internal gas bags, as in a rigid airship. Compared with a fabric envelope, metal cladding is expected to be more durable.
Hybrid airships
A hybrid airship is a general term for an aircraft that combines characteristics of heavier-than-air (aeroplane or helicopter) and lighter-than-air technology. Examples include helicopter/airship hybrids intended for heavy-lift applications and dynamic-lift airships intended for long-range cruising. Most airships, when fully loaded with cargo and fuel, are ballasted to be heavier than air, and must therefore use their propulsion system and shape to create the aerodynamic lift necessary to stay aloft. All airships can be operated to be slightly heavier than air during periods of flight such as descent. Accordingly, the term "hybrid airship" refers to craft that obtain a significant portion of their lift from aerodynamic lift or other kinetic means.
For example, the Aeroscraft is a buoyancy assisted air vehicle that generates lift through a combination of aerodynamics, thrust vectoring and gas buoyancy generation and management, and for much of the time will fly heavier than air. Aeroscraft is Worldwide Aeros Corporation's continuation of DARPA's now cancelled Walrus HULA (Hybrid Ultra Large Aircraft) project.
The Patroller P3 hybrid airship, developed by Advanced Hybrid Aircraft Ltd of BC, Canada, is a relatively small () buoyant craft, manned by a crew of five and with an endurance of up to 72 hours. Flight tests with a 40%-scale RC model demonstrated that such a craft can be launched and landed without a large team of ground handlers. The design features a special "winglet" for aerodynamic lift control.
Airships in space exploration
Airships have been proposed as a potentially cheap alternative to surface rocket launches for achieving Earth orbit. JP Aerospace has proposed the Airship to Orbit project, which intends to float a multi-stage airship up to mesospheric altitudes of 55 km (180,000 ft) and then use ion propulsion to accelerate to orbital speed; at these heights, air resistance would not be a significant obstacle to achieving such speeds. The company has not yet built any of the three stages.
NASA has proposed the High Altitude Venus Operational Concept, which comprises a series of five missions including crewed missions to the atmosphere of Venus in airships. Pressures on the surface of the planet are too high for human habitation, but at a specific altitude the pressure is equal to that found on Earth and this makes Venus a potential target for human colonization.
Hypothetically, there could be an airship lifted by a vacuum—that is, by material that can contain nothing at all inside but withstand the atmospheric pressure from the outside. It is, at this point, science fiction, although NASA has posited that some kind of vacuum airship could eventually be used to explore the surface of Mars.
Cruiser feeder transport airship
The EU FP7 MAAT Project has studied an innovative cruiser/feeder airship system for the stratosphere, with a cruiser remaining airborne for long periods and feeders, flying as piloted balloons, connecting it to the ground.
Airships for humanitarian and cargo transport
Google co-founder Sergey Brin founded LTA Research in 2015 to develop airships for humanitarian and cargo transport. In September 2023, the company's 124-meter-long, helium-filled airship Pathfinder 1 received a special airworthiness certificate from the FAA.
The certificate allowed the largest airship since the ill-fated Hindenburg to begin flight tests at Moffett Field, a joint civil-military airport in Silicon Valley.
Comparison with heavier-than-air aircraft
The advantage of airships over aeroplanes is that static lift sufficient for flight is generated by the lifting gas and requires no engine power. This was an immense advantage before the middle of World War I and remained an advantage for long-distance or long-duration operations until World War II. Modern concepts for high-altitude airships include photovoltaic cells to reduce the need to land to refuel, thus they can remain in the air until consumables expire. This similarly reduces or eliminates the need to consider variable fuel weight in buoyancy calculations.
The disadvantages are that an airship has a very large reference area and comparatively large drag coefficient, thus a larger drag force compared to that of aeroplanes and even helicopters. Given the large frontal area and wetted surface of an airship, a practical limit is reached around , only about one-third the typical airspeed of a modern commercial airplane. Thus, airships are used where speed is not critical.
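To make the scaling concrete, drag follows the standard drag equation F = ½ρv²C_dA, so doubling speed quadruples the drag acting on the airship's large reference area. Below is a minimal illustrative sketch; the drag coefficient, reference area, and speeds are assumed example values, not figures from any particular airship:

```python
# Minimal sketch of the standard drag equation F = 0.5 * rho * v^2 * Cd * A.
# Cd, A, and the speeds are assumed example values for illustration only.
RHO = 1.225  # sea-level air density, kg/m^3

def drag_force_n(v_ms: float, cd: float, area_m2: float) -> float:
    """Aerodynamic drag in newtons at airspeed v_ms (m/s)."""
    return 0.5 * RHO * v_ms**2 * cd * area_m2

CD, AREA = 0.03, 1500.0  # assumed blunt-hull coefficient and reference area
for v in (15.0, 30.0, 60.0):  # m/s
    print(f"{v*3.6:>5.0f} km/h -> drag ≈ {drag_force_n(v, CD, AREA)/1000:.0f} kN")
# Drag grows with the square of speed, which is why a practical speed
# limit emerges for such a large body.
```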
The lift capability of an airship is equal to the buoyant force minus the weight of the airship. This assumes standard air-temperature and pressure conditions. Corrections are usually made for water vapor and impurity of lifting gas, as well as percentage of inflation of the gas cells at liftoff. Based on specific lift (lifting force per unit volume of gas), the greatest static lift is provided by hydrogen (11.15 N/m3 or 71 lbf/1000 cu ft) with helium (10.37 N/m3 or 66 lbf/1000 cu ft) a close second.
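As a quick illustration of those specific-lift figures, gross static lift is simply specific lift multiplied by gas volume; this minimal sketch uses the values quoted above, with an assumed example volume:

```python
# Gross static lift from the specific-lift values quoted in the text
# (standard conditions, pure gas, fully inflated cells). Net lift would
# be this figure minus the weight of the airship itself.
SPECIFIC_LIFT_N_PER_M3 = {
    "hydrogen": 11.15,  # N/m^3
    "helium": 10.37,    # N/m^3
}

def gross_lift_newtons(gas: str, volume_m3: float) -> float:
    return SPECIFIC_LIFT_N_PER_M3[gas] * volume_m3

VOLUME = 5000.0  # m^3, an assumed example volume
for gas in ("hydrogen", "helium"):
    lift_n = gross_lift_newtons(gas, VOLUME)
    print(f"{gas}: {lift_n/1000:.1f} kN ≈ {lift_n/9.81:.0f} kg of gross lift")
```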
In addition to static lift, an airship can obtain a certain amount of dynamic lift from its engines. Dynamic lift in past airships has been about 10% of the static lift. Dynamic lift allows an airship to "take off heavy" from a runway similar to fixed-wing and rotary-wing aircraft. This requires additional weight in engines, fuel, and landing gear, negating some of the static lift capacity.
The altitude at which an airship can fly largely depends on how much lifting gas it can lose to expansion before stasis is reached. The ultimate altitude record for a rigid airship was set in 1917 by the L-55 under the command of Hans-Kurt Flemming, when he forced the airship to while attempting to cross France after the "Silent Raid" on London. Having vented gas at altitude, the L-55 lost lift during its descent over Germany and crashed. While such venting of gas was necessary for the survival of airships in the later years of World War I, it was impractical for commercial operations, or for operations of helium-filled military airships. The highest flight made by a hydrogen-filled passenger airship was on the Graf Zeppelin's around-the-world flight.
The greatest disadvantage of the airship is size, which is essential to increasing performance: as size increases, the problems of ground handling increase geometrically. As the German Navy changed from the P class of 1915, with a volume of over , to the larger Q class of 1916, the R class of 1917, and finally the W class of 1918, at almost , ground-handling problems reduced the number of days the Zeppelins were able to make patrol flights. Availability declined from 34% in 1915 to 24.3% in 1916 and finally 17.5% in 1918.
So long as the power-to-weight ratios of aircraft engines remained low and specific fuel consumption high, the airship had an edge for long-range or long-duration operations. As those figures improved, the balance shifted rapidly in the aeroplane's favour. By mid-1917, the airship could no longer survive in a combat situation where the threat was aeroplanes. By the late 1930s, the airship barely had an advantage over the aeroplane on intercontinental over-water flights, and that advantage had vanished by the end of World War II.
That applies to direct tactical confrontations; current high-altitude airship projects, by contrast, are planned to survey areas hundreds of kilometres in radius, often much farther than the normal engagement range of a military aeroplane. For example, a radar mounted on a vessel platform high has a radio horizon at range, while a radar at altitude has a radio horizon at range. This is significant for detecting low-flying cruise missiles or fighter-bombers.
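The radio-horizon comparison follows from the standard 4/3-earth approximation, d ≈ 4.12√h with d in kilometres and h in metres. A minimal sketch follows; since this article's own height and range values are missing, the antenna heights below are assumed examples:

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate radio horizon using the standard 4/3-earth-radius
    rule of thumb: d [km] ≈ 4.12 * sqrt(h [m])."""
    return 4.12 * math.sqrt(antenna_height_m)

# Assumed example heights: a shipboard mast vs a high-altitude airship.
for h in (30.0, 20000.0):
    print(f"antenna at {h:>7.0f} m -> horizon ≈ {radio_horizon_km(h):.0f} km")
# A mast tens of metres up sees a few tens of kilometres; a platform
# at stratospheric altitude sees hundreds.
```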
Safety
The most commonly used lifting gas, helium, is inert and therefore presents no fire risk. A series of vulnerability tests was carried out by the UK Defence Evaluation and Research Agency (DERA) on a Skyship 600. Since the internal gas pressure was maintained at only 1–2% above the surrounding air pressure, the vehicle proved highly tolerant of physical damage and of attack by small-arms fire or missiles. Several hundred high-velocity bullets were fired through the hull, and even two hours later the vehicle would still have been able to return to base; the ordnance passed through the envelope without causing critical helium loss. The results, and a related mathematical model, have also been presented for a hypothetical airship of Zeppelin NT size. In all instances of light armament fire evaluated under both test and live conditions, the airship was able to complete its mission and return to base.
Licensing
In the United Kingdom, the basic pilot licence for airships is the PPL(As), or private pilot licence (airships), which requires a minimum of 35 hours of instruction on airships. To fly commercially, a Commercial Pilot Licence (Airships) is required.
Microwave oven
A microwave oven or simply microwave is an electric oven that heats and cooks food by exposing it to electromagnetic radiation in the microwave frequency range. This induces polar molecules in the food to rotate and produce thermal energy in a process known as dielectric heating. Microwave ovens heat foods quickly and efficiently because excitation is fairly uniform in the outer of a homogeneous, high-water-content food item.
The development of the cavity magnetron in the United Kingdom made possible the production of electromagnetic waves of a small enough wavelength (microwaves) to efficiently heat up water molecules. American electrical engineer Percy Spencer is generally credited with developing and patenting the world's first commercial microwave oven after World War II, from British radar technology developed before and during the war. Named the "Radarange", it was first sold in 1947.
Raytheon later licensed its patents for a home-use microwave oven that was introduced by Tappan in 1955, but it was still too large and expensive for general home use. Sharp Corporation introduced the first microwave oven with a turntable between 1964 and 1966. The countertop microwave oven was introduced in 1967 by the Amana Corporation. After microwave ovens became affordable for residential use in the late 1970s, their use spread into commercial and residential kitchens around the world, and prices fell rapidly during the 1980s. In addition to cooking food, microwave ovens are used for heating in many industrial processes.
Microwave ovens are a common kitchen appliance and are popular for reheating previously cooked foods and cooking a variety of foods. They rapidly heat foods which can easily burn or turn lumpy if cooked in conventional pans, such as hot butter, fats, chocolate, or porridge. Microwave ovens usually do not directly brown or caramelize food, since they rarely attain the necessary temperature to produce Maillard reactions. Exceptions occur in cases where the oven is used to heat frying-oil and other oily items (such as bacon), which attain far higher temperatures than that of boiling water.
Microwave ovens have a limited role in professional cooking, because the boiling-range temperatures of a microwave oven do not produce the flavorful chemical reactions that frying, browning, or baking at a higher temperature produces. However, such high-heat sources can be added to microwave ovens in the form of a convection microwave oven.
History
Early developments
The exploitation of high-frequency radio waves for heating substances was made possible by the development of vacuum tube radio transmitters around 1920. By 1930 the application of short waves to heat human tissue had developed into the medical therapy of diathermy. At the 1933 Chicago World's Fair, Westinghouse demonstrated the cooking of foods between two metal plates attached to a 10 kW, 60 MHz shortwave transmitter. The Westinghouse team, led by I. F. Mouromtseff, found that foods like steaks and potatoes could be cooked in minutes.
The 1937 United States patent application by Bell Laboratories states:
However, lower-frequency dielectric heating, as described in the aforementioned patent, is (like induction heating) an electromagnetic heating effect, the result of the so-called near-field effects that exist in an electromagnetic cavity that is small compared with the wavelength of the electromagnetic field. This patent proposed radio frequency heating, at 10 to 20 megahertz (wavelength 30 to 15 meters, respectively). Heating from microwaves that have a wavelength that is small relative to the cavity (as in a modern microwave oven) is due to "far-field" effects that are due to classical electromagnetic radiation that describes freely propagating light and microwaves suitably far from their source. Nevertheless, the primary heating effect of all types of electromagnetic fields at both radio and microwave frequencies occurs via the dielectric heating effect, as polarized molecules are affected by a rapidly alternating electric field.
Cavity magnetron
The invention of the cavity magnetron made possible the production of electromagnetic waves of a small enough wavelength (microwaves). The cavity magnetron was a crucial component in the development of short wavelength radar during World War II. In 1937–1940, a multi-cavity magnetron was built by British physicist Sir John Turton Randall, FRSE and coworkers, for the British and American military radar installations in World War II. A higher-powered microwave generator that worked at shorter wavelengths was needed, and in 1940, at the University of Birmingham in England, Randall and Harry Boot produced a working prototype. They invented a valve that could produce pulses of microwave radio energy at a wavelength of 10 cm, an unprecedented discovery.
Sir Henry Tizard traveled to the US in late September 1940 to offer Britain's most valuable technical secrets including the cavity magnetron in exchange for US financial and industrial support (see Tizard Mission). An early 6 kW version, built in England by the General Electric Company Research Laboratories, Wembley, London, was given to the U.S. government in September 1940. The cavity magnetron was later described by American historian James Phinney Baxter III as "[t]he most valuable cargo ever brought to our shores". Contracts were awarded to Raytheon and other companies for the mass production of the cavity magnetron.
Discovery
In 1945, the heating effect of a high-power microwave beam was independently and accidentally discovered by Percy Spencer, an American self-taught engineer from Howland, Maine. Employed by Raytheon at the time, he noticed that microwaves from an active radar set he was working on started to melt a candy bar he had in his pocket. The first food deliberately cooked by Spencer was popcorn, and the second was an egg, which exploded in the face of one of the experimenters.
To verify his finding, Spencer created a high-density electromagnetic field by feeding microwave power from a magnetron into a metal box from which it had no way to escape. When food was placed in the box with the microwave energy, the temperature of the food rose rapidly. On 8 October 1945, Raytheon filed a United States patent application for Spencer's microwave cooking process, and an oven that heated food using microwave energy from a magnetron was soon placed in a Boston restaurant for testing.
Another independent discovery of microwave oven technology was by British scientists, including James Lovelock, who in the 1950s used it to reanimate cryogenically frozen hamsters.
Commercial availability
In 1947, Raytheon built the "Radarange", the first commercially available microwave oven. It was almost tall, weighed , and cost about US$5,000 each. It consumed 3 kilowatts, about three times as much as today's microwave ovens, and was water-cooled. The name was the winning entry in an employee contest. An early Radarange was installed (and remains) in the galley of the nuclear-powered passenger/cargo ship NS Savannah. An early commercial model introduced in 1954 consumed 1.6 kilowatts and sold for US$2,000 to US$3,000. Raytheon licensed its technology to the Tappan Stove company of Mansfield, Ohio in 1952. Under contract to Whirlpool, Westinghouse, and other major appliance manufacturers looking to add matching microwave ovens to their conventional oven lines, Tappan produced several variations of their built-in model from roughly 1955 to 1960. Due to maintenance requirements (some units were water-cooled), the built-in installation, and cost—US$1,295—sales were limited.
Japan's Sharp Corporation began manufacturing microwave ovens in 1961. Between 1964 and 1966, Sharp introduced the first microwave oven with a turntable, an alternative means of promoting more even heating of food. In 1965, Raytheon, looking to expand its Radarange technology into the home market, acquired Amana to gain more manufacturing capability. In 1967, they introduced the first popular home model, the countertop Radarange, at a price of US$495. Unlike the Sharp models, which rotated the food, the Amana design used a motor-driven mode stirrer rotating in the top of the oven cavity, allowing the food to remain stationary.
In the 1960s, Litton bought Studebaker's Franklin Manufacturing assets, which had been manufacturing magnetrons and building and selling microwave ovens similar to the Radarange. Litton developed a new configuration of the microwave oven: the short, wide shape that is now common. The magnetron feed was also unique. This resulted in an oven that could survive a no-load condition: an empty microwave oven where there is nothing to absorb the microwaves. The new oven was shown at a trade show in Chicago, and helped begin a rapid growth of the market for home microwave ovens. Sales volume of 40,000 units for the U.S. industry in 1970 grew to one million by 1975. Market penetration was even faster in Japan, due to a less expensive re-engineered magnetron.
Several other companies joined in the market, and for a time most systems were built by defence contractors, who were most familiar with the magnetron. Litton was particularly well known in the restaurant business.
Residential use
While uncommon today, combination microwave-ranges were offered by major appliance manufacturers through much of the 1970s as a natural progression of the technology. Both Tappan and General Electric offered units that appeared to be conventional stove top/oven ranges, but included microwave capability in the conventional oven cavity. Such ranges were attractive to consumers since both microwave energy and conventional heating elements could be used simultaneously to speed cooking, and there was no loss of countertop space. The proposition was also attractive to manufacturers as the additional component cost could better be absorbed compared with countertop units where pricing was increasingly market-sensitive.
By 1972, Litton (Litton Atherton Division, Minneapolis) introduced two new microwave ovens, priced at $349 and $399, to tap into the market estimated at $750 million by 1976, according to Robert I Bruder, president of the division. While prices remained high, new features continued to be added to home models. Amana introduced automatic defrost in 1974 on their RR-4D model, and was the first to offer a microprocessor controlled digital control panel in 1975 with their RR-6 model.
The late 1970s saw an explosion of low-cost countertop models from many major manufacturers.
Formerly found only in large industrial applications, microwave ovens increasingly became a standard fixture of residential kitchens in developed countries. By 1986, roughly 25% of households in the U.S. owned a microwave oven, up from only about 1% in 1971; the U.S. Bureau of Labor Statistics reported that over 90% of American households owned a microwave oven in 1997. In Australia, a 2008 market research study found that 95% of kitchens contained a microwave oven and that 83% of them were used daily. In Canada, fewer than 5% of households had a microwave oven in 1979, but more than 88% of households owned one by 1998. In France, 40% of households owned a microwave oven in 1994, but that number had increased to 65% by 2004.
Adoption has been slower in less-developed countries, as households with disposable income concentrate on more important household appliances like refrigerators and ovens. In India, for example, only about 5% of households owned a microwave oven in 2013, well behind refrigerators at 31% ownership. However, microwave ovens are gaining popularity. In Russia, for example, the number of households with a microwave oven grew from almost 24% in 2002 to almost 40% in 2008. Almost twice as many households in South Africa owned microwave ovens in 2008 (38.7%) as in 2002 (19.8%). Microwave oven ownership in Vietnam in 2008 was at 16% of households, versus 30% ownership of refrigerators; this rate was up significantly from 6.7% microwave oven ownership in 2002, with 14% ownership for refrigerators that year.
Consumer household microwave ovens usually come with a cooking power of between 600 and 1200 watts. Microwave cooking power, also referred to as output wattage, is lower than its input wattage, which is the manufacturer's listed power rating.
The size of household microwave ovens can vary, but they usually have an internal volume of around , and external dimensions of approximately wide, deep and tall. Countertop microwaves typically weigh between 23 and 45 lb (10 and 20 kg).
Microwaves can be turntable or flatbed. Turntable ovens include a glass plate or tray. Flatbed ones do not include a plate, so they have a flat and wider cavity.
By position and type, the US DOE classifies them as (1) countertop or (2) over-the-range and built-in (a wall-oven model for a cabinet, or a drawer model).
A traditional microwave has only two power output levels: fully on and fully off. Intermediate heat settings are achieved using duty-cycle modulation, switching between full power and off every few seconds, with proportionally more time on for higher settings.
An inverter type, however, can sustain a lower power level for a lengthy duration without having to switch itself off and on repeatedly. Apart from offering superior cooking ability, these microwaves are generally more energy-efficient.
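A minimal numerical sketch of the two schemes follows; the 1000 W rating and 20-second cycle period are assumed example values. Both deliver the same average power at a 50% setting, but the traditional oven does so in full-power bursts:

```python
# Duty-cycle power modulation vs. inverter operation, sketched over time.
RATED_POWER_W = 1000.0  # assumed full-power rating
PERIOD_S = 20.0         # assumed on/off cycle period for the traditional oven

def traditional_power(t: float, setting: float) -> float:
    """Full power for the first `setting` fraction of each cycle, else off."""
    return RATED_POWER_W if (t % PERIOD_S) < setting * PERIOD_S else 0.0

def inverter_power(t: float, setting: float) -> float:
    """Continuous reduced power at the same setting."""
    return RATED_POWER_W * setting

samples = [t * 0.1 for t in range(600)]  # 60 s at 0.1 s resolution
avg_trad = sum(traditional_power(t, 0.5) for t in samples) / len(samples)
avg_inv = sum(inverter_power(t, 0.5) for t in samples) / len(samples)
print(f"average power: traditional ≈ {avg_trad:.0f} W, inverter = {avg_inv:.0f} W")
# The traditional oven alternates 1000 W bursts with dead time, while the
# inverter holds a steady 500 W, heating more evenly.
```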
As of , the majority of countertop microwave ovens (regardless of brand) sold in the United States were manufactured by the Midea Group.
Categories
Domestic microwave ovens are typically marked with the microwave-safe symbol, next to the device's approximate IEC 60705 output power rating in watts (typically 600 W, 700 W, 800 W, 900 W or 1000 W), and a voluntary Heating Category (A–E).
Principles
A microwave oven heats food by passing microwave radiation through it. Microwaves are a form of non-ionizing electromagnetic radiation with a frequency in the so-called microwave region (300 MHz to 300 GHz). Microwave ovens use frequencies in one of the ISM (industrial, scientific, medical) bands, which are otherwise used for communication amongst devices that do not need a license to operate, so they do not interfere with other vital radio services.
It is a common misconception that microwave ovens heat food by operating at a special resonance of water molecules in the food. Instead, microwave ovens heat by causing molecules to rotate under the influence of a rapidly alternating electric field, and a higher-wattage oven produces faster cooking times. Typically, consumer ovens work around a nominal 2.45 gigahertz (GHz) – a wavelength of 12.2 cm, in the 2.4 GHz to 2.5 GHz ISM band – while large industrial and commercial ovens often use 915 megahertz (MHz) – a wavelength of about 32.8 cm. Among other differences, the longer wavelength of a commercial microwave oven allows the initial heating to begin deeper within the food or liquid, and therefore to spread evenly through its bulk sooner, as well as raising the temperature deep within the food more quickly.
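Those wavelengths follow directly from λ = c/f; a quick check:

```python
# Wavelength from frequency: lambda = c / f.
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    return C / freq_hz * 100.0

for label, f in (("consumer, 2.45 GHz", 2.45e9), ("industrial, 915 MHz", 915e6)):
    print(f"{label}: {wavelength_cm(f):.1f} cm")
# -> about 12.2 cm and 32.8 cm; the longer 915 MHz wave penetrates deeper.
```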
A microwave oven takes advantage of the electric dipole structure of water molecules, fats, and many other substances in the food, using a process known as dielectric heating. These molecules have a partial positive charge at one end and a partial negative charge at the other. In an alternating electric field, they continually spin as they try to align themselves with the field; this can happen over a wide range of frequencies. The field's energy is absorbed by the dipole molecules as rotational energy, and collisions with non-dipole molecules make those move faster as well. This energy spreads deeper into the substance as molecular rotation and translational movement, raising the temperature of the food. Once the electric field's energy has been absorbed, heat spreads gradually through the object just as in any other heat transfer by contact with a hotter body.
Defrosting
Microwave heating is more efficient on liquid water than on frozen water, where the movement of molecules is more restricted. Defrosting is done at a low power setting, allowing time for conduction to carry heat to still frozen parts of food. Dielectric heating of liquid water is also temperature-dependent: At 0 °C, dielectric loss is greatest at a field frequency of about 10 GHz, and for higher water temperatures at higher field frequencies.
Fats and sugar
Sugars and triglycerides (fats and oils) absorb microwaves due to the dipole moments of their hydroxyl groups or ester groups. Microwave heating is less efficient on fats and sugars than on water because they have a smaller molecular dipole moment.
Although fats and sugar typically absorb energy less efficiently than water, paradoxically their temperatures rise faster and higher than water when cooking: Fats and oils require less energy delivered per gram of material to raise their temperature by 1 °C than does water (they have lower specific heat capacity) and they begin cooling off by "boiling" only after reaching a higher temperature than water (the temperature they require to vaporize is higher), so inside microwave ovens they normally reach higher temperatures – sometimes much higher. This can induce temperatures in oil or fatty foods like bacon far above the boiling point of water, and high enough to induce some browning reactions, much in the manner of conventional broiling (UK: grilling), braising, or deep fat frying.
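A worked comparison makes the specific-heat point concrete. The heat capacities below are standard textbook values (about 4.18 J/g·K for water and roughly 2 J/g·K for a typical cooking oil), while the energy and mass are assumed example figures:

```python
# Temperature rise per unit energy: dT = E / (m * c).
SPECIFIC_HEAT_J_PER_G_K = {
    "water": 4.18,          # standard textbook value
    "vegetable oil": 2.0,   # approximate typical value
}

def temp_rise_c(substance: str, energy_j: float, mass_g: float) -> float:
    return energy_j / (mass_g * SPECIFIC_HEAT_J_PER_G_K[substance])

# Assumed example: 10 kJ delivered into 100 g of each substance.
for substance in SPECIFIC_HEAT_J_PER_G_K:
    dt = temp_rise_c(substance, energy_j=10_000.0, mass_g=100.0)
    print(f"100 g of {substance}: +{dt:.0f} °C per 10 kJ absorbed")
# Water: ~24 °C; oil: ~50 °C — and oil keeps heating well past 100 °C,
# since it does not boil away at water's boiling point.
```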
Consumers most often notice the effect as unexpected damage to plastic containers, because microwaving foods high in sugar, starch, or fat generates higher temperatures than microwaving watery foods does. Foods high in water content and with little oil rarely exceed the boiling temperature of water and so do not damage plastic.
Cookware
Cookware must be transparent to microwaves. Conductive cookware, such as metal pots, reflects microwaves, and prevents the microwaves from reaching the food. Cookware made of materials with high electrical permittivity will absorb microwaves, resulting in the cookware heating rather than the food. Cookware made of melamine resin is a common type of cookware that will heat in a microwave oven, reducing the effectiveness of the microwave oven and creating a hazard from burns or shattered cookware.
Thermal runaway
Microwave heating can cause localized thermal runaways in some materials with low thermal conductivity which also have dielectric constants that increase with temperature. An example is glass, which can exhibit thermal runaway in a microwave oven to the point of melting if preheated. Additionally, microwaves can melt certain types of rocks, producing small quantities of molten rock. Some ceramics can also be melted, and may even become clear upon cooling. Thermal runaway is more typical of electrically conductive liquids such as salty water.
Penetration
Another misconception is that microwave ovens cook food "from the inside out", meaning from the center of the entire mass of food outwards. This idea arises from heating behavior seen when an absorbent layer of water lies beneath a less absorbent, drier layer at the surface of a food; in this case, the deposition of heat energy inside the food can exceed that at its surface. It can also occur if the inner layer has a lower heat capacity than the outer layer, causing it to reach a higher temperature, or if the inner layer is more thermally conductive than the outer layer, making it feel hotter despite having a lower temperature. In most cases, however, with a uniformly structured or reasonably homogeneous food item, microwaves are absorbed in the outer layers of the item at a similar level to the inner layers.
Depending on water content, the depth of initial heat deposition may be several centimetres or more with microwave ovens, in contrast with broiling / grilling (infrared) or convection heating methods which thinly deposit heat at the food surface. Penetration depth of microwaves depends on food composition and the frequency, with lower microwave frequencies (longer wavelengths) penetrating deeper.
Energy consumption
In use, microwave ovens can be as low as 50% efficient at converting electricity into microwaves, but energy-efficient models can exceed 64% efficiency. Stovetop cooking is 40–90% efficient, depending on the type of appliance used.
Because they are used fairly infrequently, the average residential microwave oven consumes only 72 kWh per year. Globally, microwave ovens used an estimated 77 TWh per year in 2018, or 0.3% of global electricity generation.
A 2000 study by Lawrence Berkeley National Laboratory found that the average microwave drew almost 3 watts of standby power when not being used, which would total approximately 26 kWh per year. New efficiency standards imposed in 2016 by the United States Department of Energy require less than 1 watt, or approximately 9 kWh per year, of standby power for most types of microwave ovens.
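These annual figures are simply standby power multiplied by the hours in a year; a quick check of the arithmetic:

```python
# Annual standby energy: kWh/year = watts * hours_per_year / 1000.
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_kwh(standby_watts: float) -> float:
    return standby_watts * HOURS_PER_YEAR / 1000.0

print(f"3 W standby -> {annual_kwh(3):.0f} kWh/year")  # ~26 kWh, pre-2016 average
print(f"1 W standby -> {annual_kwh(1):.0f} kWh/year")  # ~9 kWh, 2016 standard
```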
Components
A microwave oven generally consists of:
a high-voltage DC power source, either:
a large high voltage transformer with a voltage doubler (a high-voltage capacitor and a diode)
an electronic power converter usually based around an inverter.
a cavity magnetron, which converts the high-voltage DC electric energy to microwave radiation
a magnetron control circuit (usually with a microcontroller)
a short waveguide (to couple microwave power from the magnetron into the cooking chamber)
a turntable and/or a metal waveguide stirring fan
a control panel
In most ovens, the magnetron is driven by a linear transformer which can only feasibly be switched completely on or off. (One variant of the GE Spacemaker had two taps on the transformer primary, for high and low power modes.) Usually choice of power level does not affect intensity of the microwave radiation; instead, the magnetron is cycled on and off every few seconds, thus altering the large scale duty cycle. Newer models use inverter power supplies that use pulse-width modulation to provide effectively continuous heating at reduced power settings, so that foods are heated more evenly at a given power level and can be heated more quickly without being damaged by uneven heating.
The microwave frequencies used in microwave ovens are chosen based on regulatory and cost constraints. The first is that they should be in one of the industrial, scientific, and medical (ISM) frequency bands set aside for unlicensed purposes. For household purposes, 2.45 GHz has the advantage over 915 MHz in that 915 MHz is only an ISM band in some countries (ITU Region 2) while 2.45 GHz is available worldwide. Three additional ISM bands exist in the microwave frequencies, but are not used for microwave cooking. Two of them are centered on 5.8 GHz and 24.125 GHz, but are not used for microwave cooking because of the very high cost of power generation at these frequencies. The third, centered on 433.92 MHz, is a narrow band that would require expensive equipment to generate sufficient power without creating interference outside the band, and is only available in some countries.
The cooking chamber is similar to a Faraday cage to prevent the waves from coming out of the oven. Even though there is no continuous metal-to-metal contact around the rim of the door, choke connections on the door edges act like metal-to-metal contact, at the frequency of the microwaves, to prevent leakage. The oven door usually has a window for easy viewing, with a layer of conductive mesh some distance from the outer panel to maintain the shielding. Because the size of the perforations in the mesh is much less than the microwaves' wavelength (12.2 cm for the usual 2.45 GHz), microwave radiation cannot pass through the door, while visible light (with its much shorter wavelength) can.
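The mesh's screening behaviour can be sketched with the circular-waveguide cutoff rule of thumb: a hole of diameter d strongly attenuates waves below roughly f_c = 1.841·c/(πd), the TE11 cutoff of a circular aperture. A minimal sketch, where the 1 mm perforation diameter is an assumed typical value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cutoff_frequency_ghz(hole_diameter_m: float) -> float:
    """TE11 cutoff of a circular aperture treated as a waveguide:
    f_c = 1.841 * c / (pi * d). Below f_c, fields decay exponentially
    through the hole rather than propagating."""
    return 1.841 * C / (math.pi * hole_diameter_m) / 1e9

d = 1e-3  # assumed 1 mm mesh perforation
print(f"cutoff ≈ {cutoff_frequency_ghz(d):.0f} GHz vs 2.45 GHz oven frequency")
# ~176 GHz >> 2.45 GHz: the mesh behaves as a solid wall to the microwaves,
# while visible light (hundreds of THz) passes straight through.
```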
Control panel
Modern microwave ovens use either an analog dial-type timer or a digital control panel for operation. Control panels feature an LED, LCD or vacuum fluorescent display, buttons for entering the cook time and a power level selection feature. A defrost option is typically offered, as either a power level or a separate function. Some models include pre-programmed settings for different food types, typically taking weight as input. In the 1990s, brands such as Panasonic and GE began offering models with a scrolling-text display showing cooking instructions.
Power settings are commonly implemented not by actually varying the power output, but by switching the emission of microwave energy off and on at intervals. The highest setting thus represents continuous power. Defrost might represent power for two seconds followed by no power for five seconds. To indicate cooking has completed, an audible warning such as a bell or a beeper is usually present, and/or "End" usually appears on the display of a digital microwave.
Microwave control panels are often considered awkward to use and are frequently employed as examples for user interface design.
Variants and accessories
A variant of the conventional microwave oven is the convection microwave oven. A convection microwave oven is a combination of a standard microwave oven and a convection oven. It allows food to be cooked quickly, yet come out browned or crisped, as from a convection oven. Convection microwave ovens are more expensive than conventional microwave ovens. Some convection microwave ovens—those with exposed heating elements—can produce smoke and burning odors as food spatter from earlier microwave-only use is burned off the heating elements. Some ovens use high speed air; these are known as impingement ovens and are designed to cook food quickly in restaurants, but cost more and consume more power.
In 2000, some manufacturers began offering high power quartz halogen bulbs to their convection microwave oven models, marketing them under names such as "Speedcook", "Advantium", "Lightwave" and "Optimawave" to emphasize their ability to cook food rapidly and with good browning. The bulbs heat the food's surface with infrared (IR) radiation, browning surfaces as in a conventional oven. The food browns while also being heated by the microwave radiation and heated through conduction through contact with heated air. The IR energy which is delivered to the outer surface of food by the lamps is sufficient to initiate browning caramelization in foods primarily made up of carbohydrates and Maillard reactions in foods primarily made up of protein. These reactions in food produce a texture and taste similar to that typically expected of conventional oven cooking rather than the bland boiled and steamed taste that microwave-only cooking tends to create.
In order to aid browning, sometimes an accessory browning tray is used, usually composed of glass or porcelain. It makes food crisp by oxidizing the top layer until it turns brown. Ordinary plastic cookware is unsuitable for this purpose because it could melt.
Frozen dinners, pies, and microwave popcorn bags often contain a susceptor made from thin aluminium film in the packaging or included on a small paper tray. The metal film absorbs microwave energy efficiently and consequently becomes extremely hot and radiates in the infrared, concentrating the heating of oil for popcorn or even browning surfaces of frozen foods. Heating packages or trays containing susceptors are designed for a single use and are then discarded as waste.
Heating characteristics
Microwave ovens produce heat directly within the food, but despite the common misconception that microwaved food cooks from the inside out, 2.45 GHz microwaves can only penetrate approximately into most foods. The inside portions of thicker foods are mainly heated by heat conducted from the outer .
Uneven heating in microwaved food can be partly due to the uneven distribution of microwave energy inside the oven, and partly due to the different rates of energy absorption in different parts of the food. The first problem is reduced by a stirrer, a type of fan that reflects microwave energy to different parts of the oven as it rotates, or by a turntable or carousel that turns the food; turntables, however, may still leave spots, such as the center of the oven, which receive uneven energy distribution.
The locations of dead spots and hot spots in a microwave oven can be mapped by placing a damp piece of thermal paper in the oven: where the water-saturated paper is heated by the microwave radiation, the dye darkens, providing a visual representation of the field. If multiple layers of paper are arranged in the oven with sufficient distance between them, a three-dimensional map can be created. Many store receipts are printed on thermal paper, which makes this easy to do at home.
The second problem is due to food composition and geometry, and must be addressed by the cook, by arranging the food so that it absorbs energy evenly, and periodically testing and shielding any parts of the food that overheat. In some materials with low thermal conductivity, where dielectric constant increases with temperature, microwave heating can cause localized thermal runaway. Under certain conditions, glass can exhibit thermal runaway in a microwave oven to the point of melting.
Due to this phenomenon, microwave ovens set at too-high power levels may even start to cook the edges of frozen food while the inside remains frozen. Another case of uneven heating can be observed in baked goods containing berries: the berries absorb more energy than the drier surrounding bread and cannot dissipate the heat, owing to the bread's low thermal conductivity, so they often overheat relative to the rest of the food. "Defrost" settings either use low power levels or repeatedly switch the power off and on, allowing time for heat to be conducted within frozen foods from areas that absorb heat more readily to those that heat more slowly. In turntable-equipped ovens, placing food off-center on the turntable tray rather than exactly in the center results in more even heating throughout.
There are microwave ovens on the market that allow full-power defrosting. They do this by exploiting the properties of LSM (longitudinal-section magnetic) electromagnetic modes. LSM full-power defrosting may actually achieve more even results than slow defrosting.
Microwave heating can be deliberately uneven by design. Some microwavable packages (notably pies) may include materials containing ceramic or aluminium flakes, which are designed to absorb microwaves and heat up, aiding baking or crust preparation by depositing more energy shallowly in these areas. The technical term for such a microwave-absorbing patch is a susceptor. Such ceramic patches affixed to cardboard are positioned next to the food and are typically smoky blue or gray in colour, usually making them easily identifiable; the cardboard sleeves included with Hot Pockets, which have a silver surface on the inside, are a good example of such packaging. Microwavable cardboard packaging may also contain overhead ceramic patches that function in the same way.
Effects on food and nutrients
Any form of cooking diminishes overall nutrient content in food, particularly water-soluble vitamins common in vegetables, but the key variables are how much water is used in the cooking, how long the food is cooked, and at what temperature. Nutrients are primarily lost by leaching into cooking water, which tends to make microwave cooking effective, given the shorter cooking times it requires and that the water heated is within the food. Like other heating methods, microwaving converts vitamin B12 from an active to an inactive form; the amount of conversion depends on the temperature reached as well as the cooking time. Boiled food reaches a maximum of (the boiling point of water), whereas microwaved food can get internally hotter than this, leading to faster breakdown of vitamin B12. The higher rate of loss is partially offset by the shorter cooking times required.
Spinach retains nearly all its folate when cooked in a microwave oven; when boiled, it loses about 77%, leaching nutrients into the cooking water. Bacon cooked by microwave oven has significantly lower levels of nitrosamines than conventionally cooked bacon. Steamed vegetables tend to maintain more nutrients when microwaved than when cooked on a stovetop. Microwave blanching is 3–4 times more effective than boiled-water blanching at retaining the water-soluble vitamins folate, thiamin and riboflavin, with the exception of ascorbic acid, of which 29% is lost (compared with a 16% loss with boiled-water blanching).
Safety benefits and features
All microwave ovens use a timer to switch off the oven at the end of the cooking time.
Microwave ovens heat food without getting hot themselves. Taking a pot off a stove, unless it is an induction cooktop, leaves a potentially dangerous heating element or trivet that remains hot for some time. Likewise, when taking a casserole out of a conventional oven, one's arms are exposed to the very hot walls of the oven. A microwave oven does not pose this problem.
Food and cookware taken out of a microwave oven are rarely much hotter than . Cookware used in a microwave oven is often much cooler than the food because the cookware is transparent to microwaves; the microwaves heat the food directly and the cookware is indirectly heated by the food. Food and cookware from a conventional oven, on the other hand, are the same temperature as the rest of the oven; a typical cooking temperature is . That means that conventional stoves and ovens can cause more serious burns.
The lower temperature of cooking (the boiling point of water) is a significant safety benefit compared with baking in the oven or frying, because it eliminates the formation of tars and char, which are carcinogenic. Microwave radiation also penetrates deeper than direct heat, so that the food is heated by its own internal water content; in contrast, direct heat can burn the surface while the inside is still cold. Pre-heating food in a microwave oven before putting it into the grill or pan reduces the time needed to heat up the food and reduces the formation of carcinogenic char. Unlike frying and baking, microwaving does not produce acrylamide in potatoes; however, unlike deep-frying at high temperatures, it is of only limited effectiveness in reducing glycoalkaloid (i.e., solanine) levels. Acrylamide has been found in other microwaved products such as popcorn.
Use in cleaning kitchen sponges
Studies have investigated the use of the microwave oven to clean non-metallic domestic sponges which have been thoroughly wetted. A 2006 study found that microwaving wet sponges for 2 minutes (at 1000-watt power) removed 99% of coliforms, E. coli, and MS2 phages. Bacillus cereus spores were killed at 4 minutes of microwaving.
A 2017 study was less affirmative: About 60% of the germs were killed but the remaining ones quickly re-colonized the sponge.
Issues
High temperatures
Closed containers
Closed containers, such as eggs, can explode when heated in a microwave oven due to the increased pressure from steam. Intact fresh egg yolks outside the shell also explode as a result of superheating. Insulating plastic foams of all types generally contain closed air pockets, and are generally not recommended for use in a microwave oven, as the air pockets explode and the foam (which can be toxic if consumed) may melt. Not all plastics are microwave-safe, and some plastics absorb microwaves to the point that they may become dangerously hot.
Fires
Products that are heated for too long can catch fire. Though this is inherent to any form of cooking, the rapid cooking and unattended nature of the use of microwave ovens results in additional hazard.
Superheating
In rare cases, water and other homogeneous liquids can superheat when heated in a microwave oven in a container with a smooth surface. That is, the liquid reaches a temperature slightly above its normal boiling point without bubbles of vapour forming inside the liquid. The boiling process can start explosively when the liquid is disturbed, such as when the user takes hold of the container to remove it from the oven or while adding solid ingredients such as powdered creamer or sugar. This can result in spontaneous boiling (nucleation) which may be violent enough to eject the boiling liquid from the container and cause severe scalding.
Metal objects
Contrary to popular assumptions, metal objects can be safely used in a microwave oven, but with some restrictions. Any metal or conductive object placed into the microwave oven acts as an antenna to some degree, resulting in an electric current. This causes the object to act as a heating element. This effect varies with the object's shape and composition, and is sometimes utilized for cooking.
Any object containing pointed metal can create an electric arc (sparks) when microwaved. This includes cutlery, crumpled aluminium foil (though some foil used in microwave ovens is safe, see below), twist-ties containing metal wire, the metal wire carry-handles in oyster pails, or almost any metal formed into a poorly conductive foil or thin wire, or into a pointed shape. Forks are a good example: the tines of the fork respond to the electric field by producing high concentrations of electric charge at the tips. This has the effect of exceeding the dielectric breakdown strength of air, about 3 megavolts per meter (3×10⁶ V/m). The air forms a conductive plasma, which is visible as a spark. The plasma and the tines may then form a conductive loop, which may be a more effective antenna, resulting in a longer-lived spark. When dielectric breakdown occurs in air, some ozone and nitrogen oxides are formed, both of which are unhealthy in large quantities.
Microwaving an individual smooth metal object without pointed ends, for example, a spoon or shallow metal pan, usually does not produce sparking. Thick metal wire racks can be part of the interior design in microwave ovens (see illustration). In a similar way, the interior wall plates with perforating holes which allow light and air into the oven, and allow interior-viewing through the oven door, are all made of conductive metal formed in a safe shape.
The effect of microwaving thin metal films can be seen clearly on a Compact Disc or DVD (particularly the factory pressed type). The microwaves induce electric currents in the metal film, which heats up, melting the plastic in the disc and leaving a visible pattern of concentric and radial scars. Similarly, porcelain with thin metal films can also be destroyed or damaged by microwaving. Aluminium foil is thick enough to be used in microwave ovens as a shield against heating parts of food items, if the foil is not badly warped. When wrinkled, aluminium foil is generally unsafe in microwaves, as manipulation of the foil causes sharp bends and gaps that invite sparking. The USDA recommends that aluminium foil used as a partial food shield in microwave oven cooking cover no more than one quarter of a food object, and be carefully smoothed to eliminate sparking hazards.
Another hazard is the resonance of the magnetron tube itself. If the microwave oven is run without an object to absorb the radiation, a standing wave forms. The energy is reflected back and forth between the tube and the cooking chamber. This may cause the tube to overload and burn out. High reflected power may also cause magnetron arcing, possibly resulting in primary power fuse failure, though such a causal relationship is not easily established. Thus, dehydrated food, or food wrapped in metal which does not arc, is problematic for overload reasons, without necessarily being a fire hazard.
Certain foods such as grapes, if properly arranged, can produce an electric arc. Prolonged arcing from food carries similar risks to arcing from other sources as noted above.
Some other objects that may spark are plastic/holographic-print Thermos flasks and other heat-retaining containers (such as Starbucks novelty cups) and cups with metal lining. If any bit of the metal is exposed, the entire outer shell can burst off the object or melt.
The high electric fields generated inside a microwave oven can be illustrated by placing a radiometer or neon glow-bulb inside the cooking chamber; the field creates a glowing plasma inside the low-pressure bulb of the device.
Direct microwave exposure
Direct microwave exposure is not generally possible, as microwaves emitted by the source in a microwave oven are confined in the oven by the material out of which the oven is constructed. Furthermore, ovens are equipped with redundant safety interlocks, which remove power from the magnetron if the door is opened. This safety mechanism is required by United States federal regulations. Tests have shown confinement of the microwaves in commercially available ovens to be so nearly universal as to make routine testing unnecessary. According to the United States Food and Drug Administration's Center for Devices and Radiological Health, a U.S. federal standard limits the amount of microwaves that can leak from an oven throughout its lifetime to 5 milliwatts of microwave radiation per square centimeter at approximately 5 cm (2 in) from the surface of the oven. This is far below the exposure level currently considered harmful to human health.
The radiation produced by a microwave oven is non-ionizing. It therefore does not have the cancer risks associated with ionizing radiation such as X-rays and high-energy particles. Long-term rodent studies to assess cancer risk have so far failed to identify any carcinogenicity from microwave radiation, even with chronic exposure levels (i.e., a large fraction of the life span) far greater than humans are likely to encounter from any leaking ovens. However, with the oven door open, the radiation may cause damage by heating. Microwave ovens are sold with a protective interlock so that the oven cannot be run when the door is open or improperly latched.
Microwaves generated in microwave ovens cease to exist once the electrical power is turned off. They do not remain in the food when the power is turned off, any more than light from an electric lamp remains in the walls and furnishings of a room when the lamp is turned off. They do not make the food or the oven radioactive. In contrast with conventional cooking, the nutritional content of some foods may be altered differently, but generally in a positive way by preserving more micronutrients – see above. There is no indication of detrimental health issues associated with microwaved food.
There are, however, a few cases where people have been exposed to direct microwave radiation, either from appliance malfunction or deliberate action. This exposure generally results in physical burns to the body, as human tissue, particularly the outer fat and muscle layers, has a similar composition to some foods that are typically cooked in microwave ovens and so experiences similar dielectric heating effects when exposed to microwave electromagnetic radiation.
Chemical exposure
The use of unmarked plastics for microwave cooking raises the issue of plasticizers leaching into the food.
The plasticizers which received the most attention are bisphenol A (BPA) and phthalates, although it is unclear whether other plastic components present a toxicity risk. Other issues include melting and flammability. A claimed risk of dioxin release into food has been dismissed as a red herring that distracts from actual safety issues.
Some current plastic containers and food wraps are specifically designed to resist radiation from microwaves. Products may use the term "microwave safe", may carry a microwave symbol (three lines of waves, one above the other) or simply provide instructions for proper microwave oven use. Any of these is an indication that a product is suitable for microwaving when used in accordance with the directions provided.
Plastic containers can release microplastics into food when heated in microwave ovens.
Uneven heating
Microwave ovens are frequently used for reheating leftover food, and bacterial contamination may not be suppressed if the microwave oven is used improperly. If a safe temperature is not reached, this can result in foodborne illness, as with other reheating methods. While microwave ovens can destroy bacteria as well as conventional ovens can, they cook rapidly and may not cook as evenly, similar to frying or grilling, leading to a risk that some regions of the food fail to reach recommended temperatures. Therefore, a standing period after cooking to allow temperatures in the food to equalize is recommended, as is the use of a food thermometer to verify internal temperatures.
Interference
Microwave ovens, although shielded for safety purposes, still emit low levels of microwave radiation. This is not harmful to humans, but can sometimes interfere with Wi-Fi, Bluetooth, and other devices that communicate in the 2.4 GHz band, particularly at close range.
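A small sketch of why the two collide: the oven's operating frequency sits near the middle of the 2.4 GHz ISM band used by Wi-Fi. The channel centre frequencies below follow the standard 802.11 2.4 GHz plan:

```python
# Locate a 2.45 GHz oven within the 2.4 GHz Wi-Fi channel plan.

C = 299_792_458  # speed of light, m/s
oven_hz = 2.45e9

print(f"Oven wavelength: {C / oven_hz * 100:.2f} cm")  # ~12.24 cm

# 802.11 channels 1-13: centres from 2.412 GHz, spaced 5 MHz apart
channels = {ch: 2.412e9 + (ch - 1) * 5e6 for ch in range(1, 14)}
closest = min(channels, key=lambda ch: abs(channels[ch] - oven_hz))
print(f"Closest Wi-Fi channel to the oven frequency: {closest} "
      f"({channels[closest] / 1e9:.3f} GHz)")  # channel 9, 2.452 GHz
```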
Conventional transformer-based ovens do not emit continuously over the mains cycle, yet can still cause significant slowdowns for many metres around the oven, whereas inverter-based ovens, which emit continuously while operating, can stop nearby networking entirely.
| Technology | Household appliances | null |
58041 | https://en.wikipedia.org/wiki/Bromeliaceae | Bromeliaceae | The Bromeliaceae (the bromeliads) are a family of monocot flowering plants of about 80 genera and 3700 known species, native mainly to the tropical Americas, with several species found in the American subtropics and one in tropical west Africa, Pitcairnia feliciana.
It is among the basal families within the Poales and is the only family within the order that has septal nectaries and inferior ovaries. These inferior ovaries characterize the Bromelioideae, a subfamily of the Bromeliaceae. The family includes both epiphytes, such as Spanish moss (Tillandsia usneoides), and terrestrial species, such as the pineapple (Ananas comosus). Many bromeliads are able to store water in a structure formed by their tightly overlapping leaf bases. However, the family is diverse enough to include the tank bromeliads, grey-leaved epiphyte Tillandsia species that gather water only from leaf structures called trichomes, and many desert-dwelling succulents.
The largest bromeliad is Puya raimondii, which grows several metres tall in vegetative growth and bears a flower spike taller still; the smallest is Spanish moss.
Description
Bromeliads are mostly herbaceous perennials, although a few have a more tree-like habit. Many are more or less succulent or have other adaptations to resist drought. They may be terrestrial or epiphytic, rarely climbing (e.g. Pitcairnia species). Some species of Tillandsia (e.g. Spanish moss, Tillandsia usneoides) are aerophytes, which have very reduced root systems and absorb water directly from the air. Many terrestrial and epiphytic bromeliads have their leaves in the form of vase-shaped rosettes which accumulate water. These rosettes, called "tanks", can hold as much as ten liters (eighteen pints) of water, and be little biotic communities unto themselves. One individual tank was found to contain the following: four harvestmen, a spider, three species of wood lice, a centipede, a "jumping millipede"[sic], a pseudoscorpion, "various metallic beetles", earwigs, a tree seedling, Chironomia fly larva, an ant colony, an earthworm, numerous mites, and a small frog. Individual leaves are not divided and have parallel veins without cross connections. The epidermis of the leaf contains silica. Bromeliad flowers are aggregated into inflorescences of various forms. The flowers have bracts, often brightly coloured, and distinct calyces of three sepals and corollas of three petals. The flowers have nectaries. They are pollinated by insects, birds (often hummingbirds) or bats, or more rarely (in Navia) they are wind-pollinated. Fruits are variable, typically taking the form of a capsule or a berry.
Bromeliads are able to live in an array of environmental conditions due to their many adaptations. Trichomes, in the form of scales or hairs, allow bromeliads to capture water in cloud forests and help to reflect sunlight in desert environments. Bromeliads with leaf vases can capture water and nutrients in the absence of a well-developed root system. Many bromeliads also use crassulacean acid metabolism (CAM) photosynthesis to create sugars. This adaptation allows bromeliads in hot or dry climates to open their stomata at night rather than during the day, which reduces water loss. Both CAM and epiphytism have evolved multiple times within the family, with some taxa reverting to C3 photosynthesis as they radiated into less arid climates.
Evolution
Bromeliads are among the more recent plant groups to have emerged. They are thought to have originated in the tepuis of the Guiana Shield approximately 100 million years ago. The greatest number of extant basal species is found in the Andean highlands of South America. However, the family did not diverge into its extant subfamilies until about 19 million years ago. The long period between the origin and diversification of bromeliads, during which no extant species evolved, suggests that there was much speciation and extinction during that time, which would explain the genetic distance of the Bromeliaceae from other families within the Poales.
Based on molecular phylogenetic studies, the family is divided into eight subfamilies.
The most basal genus, Brocchinia (subfamily Brocchinioideae), is endemic to the Guiana Shield, and is placed as the sister group to the remaining genera in the family. The subfamilies Lindmanioideae and Navioideae are endemic to the Guiana Shield as well.
The West African species Pitcairnia feliciana is the only bromeliad not endemic to the Americas, and is thought to have reached Africa via long-distance dispersal about 12 million years ago.
Radiation of Tillandsioideae and Hechtia
The first groups to leave the Guiana Shield were the subfamily Tillandsioideae, which spread gradually into northern South America, and the genus Hechtia (Hechtioideae), which spread to Central America via long-distance dispersal. Both of these movements occurred approximately 15.4 million years ago. When it reached the Andes mountains, the speciation of Tillandsioideae occurred quite rapidly, largely due to the Andean uplift, which was also occurring rapidly from 14.2 to 8.7 million years ago. The uplift greatly altered the region's geological and climatic conditions, creating a new mountainous environment for the epiphytic tillandsioids to colonize. These new conditions directly drove the speciation of the Tillandsioideae, and also drove the speciation of their animal pollinators, such as hummingbirds.
Evolution of the Bromelioideae
Around 5.5 million years ago, a clade of epiphytic bromelioids arose in Serra do Mar, a lush mountainous region on the coast of Southeastern Brazil. This is thought to have been caused not only by the uplift of Serra do Mar itself at that time, but also because of the continued uplift of the distant Andes mountains, which impacted the circulation of air and created a cooler, wetter climate in Serra do Mar. These epiphytes thrived in this humid environment, since their trichomes rely on water in the air rather than from the ground like terrestrial plants. Many epiphytic bromeliads with the tank habit also speciated here.
Even before this, a few other bromelioids had already dispersed to the Brazilian shield while the climate was still arid, likely through a gradual process of short-distance dispersal. These make up the terrestrial members of the Bromelioideae, which have highly xeromorphic characters.
Classification
The family Bromeliaceae is currently placed in the order Poales.
Subfamilies
The family Bromeliaceae is organized into eight subfamilies:
Brocchinioideae
Lindmanioideae
Tillandsioideae
Hechtioideae
Navioideae
Pitcairnioideae
Puyoideae
Bromelioideae
Bromeliaceae were originally split into three subfamilies based on morphological seed characters: Bromelioideae (seeds in baccate fruits), Tillandsioideae (plumose seeds), and Pitcairnioideae (seeds with wing-like appendages). However, molecular evidence has revealed that while Bromelioideae and Tillandsioideae are monophyletic, Pitcairnioideae as traditionally defined is paraphyletic and should be split into six subfamilies: Brocchinioideae, Lindmanioideae, Hechtioideae, Navioideae, Pitcairnioideae, and Puyoideae.
Brocchinioideae is defined as the most basal branch of Bromeliaceae based on both morphological and molecular evidence, namely genes in chloroplast DNA.
Lindmanioideae is the next most basal branch distinguished from the other subfamilies by convolute sepals and chloroplast DNA.
Hechtioideae is also defined based on analyses of chloroplast DNA; similar morphological adaptations to arid environments also found in other groups (namely the genus Puya) are attributed to convergent evolution.
Navioideae is split from Pitcairnioideae based on its cochlear sepals and chloroplast DNA.
Puyoideae has been re-classified multiple times and its monophyly remains controversial according to analyses of chloroplast DNA.
Genera
Plants of the World Online (PoWO) accepts 72 genera, as listed below. A few more genera are accepted by the Encyclopaedia of Bromeliads, including Josemania and Mezobromelia, which PoWO sinks into Cipuropsis.
Acanthostachys Klotzsch
Aechmea Ruiz & Pav.
Alcantarea Harms
Ananas Mill., including Pseudananas Hassl. ex Harms (includes the pineapple)
Androlepis Brongn. ex Houllet
Araeococcus Brongn.
Barfussia Manzan. & W.Till
Billbergia Thunb.
Brewcaria L.B.Sm., Steyerm. & H.Rob, synonym of Navia in PoWO
Brocchinia Schult.f.
Bromelia L.
Canistropsis (Mez) Leme
Canistrum E.Morren
Catopsis Griseb.
Cipuropsis Ule
Connellia N.E.Br.
Cottendorfia Schult.f.
Cryptanthus Otto & A.Dietr.
Deinacanthon Mez
Deuterocohnia Mez
Disteganthus Lem.
Dyckia Schult.f.
Edmundoa Leme
Eduandrea Leme, W.Till, G.K.Br., J.R.Grant & Govaerts
Encholirium Mart. ex Schult.f.
Fascicularia Mez
Fernseea Baker
Forzzaea Leme, S.Heller & Zizka
Fosterella L.B.Sm.
Glomeropitcairnia Mez
Goudaea W.Till & Barfuss
Gregbrownia W.Till & Barfuss
Greigia Regel
Guzmania Ruiz & Pav.
Hechtia Klotzsch
Hohenbergia Schult.f.
Hohenbergiopsis L.B.Sm. & Read
Hoplocryptanthus (Mez) Leme, S.Heller & Zizka
Hylaeaicum (Ule ex Mez) Leme, Forzza, Zizka & Aguirre-Santoro
Jagrantia Barfuss & W.Till
Josemania W.Till & Barfuss
Karawata J.R.Maciel & G.M.Sousa
Lapanthus Louzada & Versieux
Lemeltonia Barfuss & W.Till
Lindmania Mez
Lutheria Barfuss & W.Till
Lymania Read
Mezobromelia L.B.Sm.
Navia Schult.f.
Neoglaziovia Mez
Neoregelia L.B.Sm.
Nidularium Lem.
Ochagavia Phil.
Orthophytum Beer
Pitcairnia L'Her., including subgenus Pepinia
Portea K. Koch
Pseudaechmea L.B.Sm. & Read, synonym of Billbergia in PoWO
Pseudalcantarea (Mez) Pinzón & Barfuss
Pseudaraeococcus (Mez) R.A.Pontes & Versieux
Puya Molina
Quesnelia Gaudich.
Racinaea M.A.Spencer & L.B.Sm.
Rokautskyia Leme, S.Heller & Zizka
Ronnbergia E.Morren & André
Sequencia Givnish
Sincoraea Ule
Steyerbromelia L.B.Sm.
Stigmatodon Leme, G.K.Br. & Barfuss
Tillandsia L.
Ursulaea Read & H.U.Baensch, synonym of Aechmea in PoWO
Vriesea Lindl.
Wallisia (Regel) É.Morren
Waltillia Leme, Barfuss & Halbritt.
Werauhia J.R.Grant
Wittmackia Mez
Wittrockia Lindm.
Zizkaea W.Till & Barfuss
Hybrid genera
Intergeneric hybrid genera accepted by Plants of the World Online include:
× Cryptbergia R.G.Wilson & C.L.Wilson = Cryptanthus × Billbergia
× Guzlandsia Gouda = Guzmania × Tillandsia
× Hohenmea B.R.Silva & L.F.Sousa = Hohenbergia × Aechmea
× Niduregelia Leme = Nidularium × Neoregelia
Distribution and habitat
Plants in the Bromeliaceae are widely represented in their natural climates across the Americas. One species (Pitcairnia feliciana) can be found in Africa. They can be found at altitudes from sea level to 4,200 meters, from rainforests to deserts. Some 1,814 species are epiphytes, some are lithophytes, and some are terrestrial. Accordingly, these plants can be found in the Andean highlands, from northern Chile to Colombia, in the Sechura Desert of coastal Peru, in the cloud forests of Central and South America, in the southern United States from southern Virginia to Florida to Texas, and in far southern Arizona.
Ecology
Bromeliads often serve as phytotelmata, accumulating water between their leaves. One study found 175,000 bromeliads per hectare (2.5 acres) in one forest; that many bromeliads can sequester 50,000 liters (more than 13,000 gallons) of water. The aquatic habitat created as a result is host to a diverse array of invertebrates, especially aquatic insect larvae, including those of mosquitoes. These bromeliad invertebrates benefit their hosts by increasing nitrogen uptake into the plant. A study of 209 plants from the Yasuní Scientific Reserve in Ecuador identified 11,219 animals, representing more than 350 distinct species, many of which are found only on bromeliads. Examples include some species of ostracods, small salamanders, and tree frogs. Jamaican bromeliads are home to Metopaulias depressus, a small reddish-brown crab which has evolved social behavior to protect its young from predation by Diceratobasis macrogaster, a species of damselfly whose larvae live in bromeliads. Some bromeliads even form homes for other species of bromeliads.
Trees or branches that receive more sunlight tend to host more bromeliads, whereas west-facing sectors receive less sunlight and therefore support fewer bromeliads. In addition, thicker trees have more bromeliads, possibly because they are older and have greater structural complexity.
Cultivation and uses
Humans have been using bromeliads for thousands of years. The Incas, Aztecs, Maya and others used them for food, protection, fiber and ceremony, just as they are still used today. European interest began when Spanish conquistadors returned with pineapple, which became so popular as an exotic food that the image of the pineapple was adapted into European art and sculpture. In 1776, the species Guzmania lingulata was introduced to Europe, causing a sensation among gardeners unfamiliar with such a plant. In 1828, Aechmea fasciata was brought to Europe, followed by Vriesea splendens in 1840. These transplants were so successful that they are still among the most widely grown bromeliad varieties.
In the 19th century, breeders in Belgium, France and the Netherlands started hybridizing plants for wholesale trade. Many exotic varieties were produced until World War I, which halted breeding programs and led to the loss of some species. The plants experienced a resurgence of popularity after World War II. Since then, Dutch, Belgian and North American nurseries have greatly expanded bromeliad production.
Only one bromeliad, the pineapple (Ananas comosus), is a commercially important food crop. Bromelain, a common ingredient in meat tenderizer, is extracted from pineapple stems. Many other bromeliads are popular ornamental plants, grown as both garden and houseplants.
Bromeliads are important food plants for many peoples. For example, the Pima of Mexico occasionally consume flowers of Tillandsia erubescens and T. recurvata due to their high sugar content; in Argentina and Bolivia, the shoot apices of T. rubella and T. maxima are consumed; in Venezuela, indigenous coastal tribes eat a sour-tasting but sweet-smelling berry, known as 'Maya', of Bromelia chrysantha as a fruit or in fermented beverages; in Chile, the sweet fruit of Greigia sphacelata, known as 'chupones', is consumed raw.
Collectors
Édouard André was a French collector and explorer whose many discoveries of bromeliads in the Cordilleras of South America influenced the horticulturists who followed. He served as a source of inspiration to 20th-century collectors, in particular Mulford B. Foster and Lyman Smith of the United States, Werner Rauh of Germany, and Michelle Jenkins of Australia.
| Biology and health sciences | Poales | null |
58046 | https://en.wikipedia.org/wiki/Figure-eight%20knot | Figure-eight knot | The figure-eight knot or figure-of-eight knot is a type of stopper knot. It is very important in both sailing and rock climbing as a method of stopping ropes from running out of retaining devices. Like the overhand knot, which will jam under strain, often requiring the rope to be cut, the figure-eight will also jam, but is usually more easily undone than the overhand knot.
The stevedore knot is an extension of the simple figure-eight knot, with an additional turn made before the end is finally tightened.
Different types
Figure-eight loop
The figure-eight loop is used like an overhand loop knot. It can be used in prusik climbing, in conjunction with a climbing harness, a climbing rope, and a locking carabiner designed for climbing, to ascend or descend with minimal equipment and effort.
Figure-eight bend
The figure-eight bend knot is used to "splice" together two ropes, not necessarily of equal diameter. The knot is tied by first forming a loose figure-eight knot in one rope (the larger-diameter one if they are unequal), then threading the other rope's running end through the first knot in reverse: entering beside the first knot's running end and paralleling the first rope's path through the figure eight until the second rope's running end lies parallel against the first rope's standing end. The result is two figure-eight knots, each partly inside the other and tightening its hold on the other when they are pulled in opposite directions. This can be a permanent or temporary splice. While it prevents the ropes from slipping relative to each other, it is a typical knot in having less strength than the straight ropes.
Offset figure-eight bend
The offset figure-eight bend is a poor knot that has been implicated in the deaths of several rock climbers.
Stein knot
The stein knot (also known as a stone knot) is a variation of the figure-eight knot. It is used to secure a rope that has already been passed around a post or through a ring. It is quick and easy to tie and untie, and is a rigging device rather than a true knot. In canyoneering, it is used to isolate rope strands so that one person can rappel while another is getting on the rappel, or to give rappellers the option of using a single or a double rope. It is also used in basketmaking.
Symbolic use
In heraldry, this knot is known as the Savoy knot.
In the United States Navy, a figure-of-eight badge was formerly worn by enlisted men who had successfully completed the apprentice rating.
In The Scout Association in the United Kingdom, awards for gallantry and long service are represented by a cloth figure-of-eight knot emblem in various colours.
| Technology | Flexible components | null |
58134 | https://en.wikipedia.org/wiki/Proterozoic | Proterozoic | The Proterozoic is the third of the four geologic eons of Earth's history, spanning the time interval from 2500 to 538.8 Mya, and is the longest eon of Earth's geologic time scale. It is preceded by the Archean and followed by the Phanerozoic, and is the most recent part of the Precambrian "supereon".
The Proterozoic is subdivided into three geologic eras (from oldest to youngest): the Paleoproterozoic, Mesoproterozoic and Neoproterozoic. It covers the time from the appearance of free oxygen in Earth's atmosphere to just before the proliferation of complex life on the Earth during the Cambrian Explosion. The name Proterozoic combines two words of Greek origin: próteros, meaning "former, earlier", and zōḗ, meaning "life".
Well-identified events of this eon were the transition to an oxygenated atmosphere during the Paleoproterozoic; the evolution of eukaryotes via symbiogenesis; several global glaciations, which produced the 300-million-year-long Huronian glaciation (during the Siderian and Rhyacian periods of the Paleoproterozoic) and the hypothesized Snowball Earth (during the Cryogenian period in the late Neoproterozoic); and the Ediacaran period (635–538.8 Ma), which was characterized by the evolution of abundant soft-bodied multicellular organisms such as sponges, algae, cnidarians, bilaterians and the sessile Ediacaran biota (some of which had evolved sexual reproduction) and provides the first obvious fossil evidence of life on Earth.
The Proterozoic record
The geologic record of the Proterozoic Eon is more complete than that for the preceding Archean Eon. In contrast to the deep-water deposits of the Archean, the Proterozoic features many strata that were laid down in extensive shallow epicontinental seas; furthermore, many of those rocks are less metamorphosed than Archean rocks, and many are unaltered. Studies of these rocks have shown that the eon continued the massive continental accretion that had begun late in the Archean Eon. The Proterozoic Eon also featured the first definitive supercontinent cycles and mountain building activity (orogeny).
There is evidence that the first known glaciations occurred during the Proterozoic. The first began shortly after the beginning of the eon, and there is evidence of at least four more during the Neoproterozoic Era at its end, possibly climaxing with the hypothesized Snowball Earth of the Sturtian and Marinoan glaciations.
The accumulation of oxygen
One of the most important events of the Proterozoic was the accumulation of oxygen in the Earth's atmosphere. Though oxygen is believed to have been released by photosynthesis as far back as the Archean Eon, it could not build up to any significant degree until mineral sinks of unoxidized sulfur and iron had been exhausted. Until roughly 2.3 billion years ago, oxygen was probably only 1% to 2% of its current level. The banded iron formations, which provide most of the world's iron ore, are one mark of that mineral sink process. Their accumulation ceased after 1.9 billion years ago, after the iron in the oceans had all been oxidized.
Red beds, which are colored by hematite, indicate an increase in atmospheric oxygen 2 billion years ago; such massive iron oxide formations are not found in older rocks. The oxygen buildup was probably due to two factors: exhaustion of the chemical sinks, and an increase in carbon burial, which sequestered organic compounds that would otherwise have been oxidized by the atmosphere.
The first surge in atmospheric oxygen at the beginning of the Proterozoic is called the Great Oxygenation Event, or alternatively the Oxygen Catastrophe, reflecting the mass extinction of almost all life on Earth, which at the time was virtually all obligately anaerobic. A second, later surge in oxygen concentrations, called the Neoproterozoic Oxygenation Event, occurred during the middle and late Neoproterozoic and drove the rapid evolution of multicellular life towards the end of the era.
Subduction processes
The Proterozoic Eon was a very tectonically active period in the Earth's history. Rising oxygen levels changed ocean and atmospheric chemistry, enabling extensive geological change, and widespread volcanism drove further geologic change.
The late Archean Eon to Early Proterozoic Eon corresponds to a period of increasing crustal recycling, suggesting subduction. Evidence for this increased subduction activity comes from the abundance of old granites originating mostly after 2.6 Ga.
The occurrence of eclogite (a type of metamorphic rock created by high pressure, greater than 1 GPa) is explained using a model that incorporates subduction. The lack of eclogites dating to the Archean Eon suggests that conditions at that time did not favor the formation of high-grade metamorphism and therefore did not achieve the same levels of subduction as occurred in the Proterozoic Eon.
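To put that pressure in perspective, a minimal lithostatic estimate, assuming a mean overburden density of roughly 3000 kg/m³, gives the burial depth needed to reach 1 GPa:

$$h \approx \frac{P}{\rho g} = \frac{10^{9}\ \mathrm{Pa}}{(3000\ \mathrm{kg\,m^{-3}})(9.8\ \mathrm{m\,s^{-2}})} \approx 34\ \mathrm{km},$$

a depth near or below the base of typical continental crust, consistent with rock being carried down in a subduction zone rather than merely buried.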
As a result of remelting of basaltic oceanic crust due to subduction, the cores of the first continents grew large enough to withstand the crustal recycling processes.
The long-term tectonic stability of those cratons is why we find continental crust ranging up to a few billion years in age. It is believed that 43% of modern continental crust was formed in the Proterozoic, 39% in the Archean, and only 18% in the Phanerozoic. Studies by Condie (2000) and Rino et al. (2004) suggest that crust production happened episodically. Isotopic dating of Proterozoic granitoids indicates several episodes of rapid increase in continental crust production. The reason for these pulses is unknown, but they seem to have decreased in magnitude after every period.
Supercontinent tectonic history
Evidence of collision and rifting between continents raises the question as to what exactly were the movements of the Archean cratons composing Proterozoic continents. Paleomagnetic and geochronological dating mechanisms have allowed the deciphering of Precambrian Supereon tectonics. Tectonic processes of the Proterozoic Eon left evidence, such as orogenic belts and ophiolite complexes, closely resembling that of tectonic activity today. Hence, most geologists would conclude that the Earth was active at that time. It is also commonly accepted that during the Precambrian, the Earth went through several supercontinent breakup and rebuilding cycles (Wilson cycles).
In the late Proterozoic (most recent), the dominant supercontinent was Rodinia (~1000–750 Ma). It consisted of a series of continents attached to a central craton, Laurentia, which forms the core of the North American continent. An example of an orogeny (mountain building process) associated with the construction of Rodinia is the Grenville orogeny located in eastern North America. Rodinia formed after the breakup of the supercontinent Columbia and prior to the assemblage of the supercontinent Gondwana (~500 Ma). The defining orogenic event associated with the formation of Gondwana was the collision of Africa, South America, Antarctica and Australia, forming the Pan-African orogeny.
Columbia was dominant in the early-mid Proterozoic and not much is known about continental assemblages before then. There are a few plausible models that explain tectonics of the early Earth prior to the formation of Columbia, but the current most plausible hypothesis is that prior to Columbia, there were only a few independent cratons scattered around the Earth (not necessarily a supercontinent, like Rodinia or Columbia).
Life
The Proterozoic can be roughly divided into seven biostratigraphic zones which correspond to informal time periods. The first was the Labradorian, lasting from 2.0–1.65 Ga. It was followed by the Anabarian, which lasted from 1.65–1.2 Ga and was itself followed by the Turukhanian from 1.2–1.03 Ga. The Turukhanian was succeeded by the Uchuromayan, lasting from 1.03–0.85 Ga, which was in turn succeeded by the Yuzhnouralian, lasting from 0.85–0.63 Ga. The final two zones were the Amadeusian, spanning the first half of the Ediacaran from 0.63–0.55 Ga, and the Belomorian, spanning from 0.55–0.542 Ga.
The emergence of advanced single-celled eukaryotes began after the Oxygen Catastrophe. This may have been due to an increase in oxidized nitrates, which eukaryotes use but cyanobacteria do not require. It was also during the Proterozoic that the first symbiotic relationships between mitochondria (found in nearly all eukaryotes) and chloroplasts (found in plants and some protists only) and their hosts evolved.
By the late Palaeoproterozoic, eukaryotic organisms had become moderately biodiverse. The blossoming of eukaryotes such as acritarchs did not preclude the expansion of cyanobacteria – in fact, stromatolites reached their greatest abundance and diversity during the Proterozoic, peaking roughly 1.2 billion years ago.
The earliest fossils possessing features typical of fungi date to the Paleoproterozoic Era, some 2.4 billion years ago; these multicellular benthic organisms had filamentous structures capable of anastomosis.
The Viridiplantae evolved sometime in the Palaeoproterozoic or Mesoproterozoic, according to molecular data.
Eukaryote fossils from before the Cryogenian are sparse, and rates of species appearance, change, and extinction seem to have been low and relatively constant. This contrasts with the Ediacaran and early Cambrian periods, in which the quantity and variety of speciations, changes, and extinctions exploded.
Classically, the boundary between the Proterozoic and the Phanerozoic eons was set at the base of the Cambrian Period when the first fossils of animals, including trilobites and archeocyathids, as well as the animal-like Caveasphaera, appeared. In the second half of the 20th century, a number of fossil forms have been found in Proterozoic rocks, particularly in ones from the Ediacaran, proving that multicellular life had already become widespread tens of millions of years before the Cambrian Explosion in what is known as the Avalon Explosion. Nonetheless, the upper boundary of the Proterozoic has remained fixed at the base of the Cambrian, which is currently placed at 538.8 Ma.
| Physical sciences | Geological timescale | Earth science |
58161 | https://en.wikipedia.org/wiki/Dust%20devil | Dust devil | A dust devil (also known regionally as a dirt devil) is a strong, well-formed, and relatively short-lived whirlwind. Its size ranges from small (18 in/half a metre wide and a few yards/metres tall) to large (more than 30 ft/10 m wide and more than half a mile/1 km tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property.
They are comparable to tornadoes in that both are a weather phenomenon involving a vertically oriented rotating column of wind. Most tornadoes are associated with a larger parent circulation, the mesocyclone on the back of a supercell thunderstorm. Dust devils form as a swirling updraft under sunny conditions during fair weather, rarely coming close to the intensity of a tornado.
Formation
Dust devils form when a pocket of hot air near the surface rises quickly through cooler air above it, forming an updraft. If conditions are just right, the updraft may begin to rotate. As the air rapidly rises, the column of hot air is stretched vertically, thereby moving mass closer to the axis of rotation, which causes intensification of the spinning effect by conservation of angular momentum. The secondary flow in the dust devil causes other hot air to speed horizontally inward to the bottom of the newly forming vortex. As more hot air rushes in toward the developing vortex to replace the air that is rising, the spinning effect becomes further intensified and self-sustaining. A dust devil, fully formed, is a funnel-like chimney through which hot air moves, both upwards and in a circle. As the hot air rises, it cools, loses its buoyancy and eventually ceases to rise. As it rises, it displaces air which descends outside the core of the vortex. This cool air returning acts as a balance against the spinning hot-air outer wall and keeps the system stable.
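The intensification step is conservation of angular momentum made concrete: for an air parcel of mass m circling the axis at radius r with tangential speed v, and neglecting friction,

$$L = m v r = \mathrm{const} \quad\Longrightarrow\quad v_{2} = v_{1}\,\frac{r_{1}}{r_{2}},$$

so air drawn inward from, say, a 10 m radius to a 1 m core spins up roughly tenfold.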
The spinning effect, along with surface friction, usually produces forward momentum. The dust devil may be sustained if it moves over nearby sources of hot surface air.
As the available hot air near the surface is channelled up the dust devil, surrounding cooler air will eventually be sucked in. Once this occurs, the effect is dramatic: the dust devil dissipates in seconds. This usually happens when the dust devil is moving too slowly, depleting the hot air beneath it, or when it enters terrain where surface temperatures are cooler.
Certain conditions increase the likelihood of dust devil formation.
Flat barren terrain, desert or tarmac: Flat conditions increase the likelihood that the hot-air "fuel" will be in near-constant supply. Dusty or sandy conditions will cause particles to become caught up in the vortex, making the dust devil easily visible, but are not necessary for the formation of the vortex.
Clear skies or lightly cloudy conditions: The surface needs to absorb significant amounts of solar energy to heat the air near the surface and create ideal dust devil conditions.
Light or no wind and cool atmospheric temperature: The underlying factor for sustainability of a dust devil is the extreme difference in temperature between the near-surface air and the atmosphere. Windy conditions will destabilize the spinning effect of a dust devil.
Intensity and duration
Most dust devils are small and weak, often less than 3 feet (0.9 m) in diameter with maximum winds averaging about 45 miles per hour (70 km/h), and they often dissipate less than a minute after forming. On rare occasions, a dust devil can grow very large and intense, sometimes reaching a diameter of up to 300 feet (90 m) with winds in excess of 60 mph (100 km/h), and can last for upwards of 20 minutes before dissipating. Because of their small diameter, the Coriolis force is insignificant within a dust devil, so dust devils with anticyclonic rotation also occur.
Hazards
Dust devils typically do not cause injuries, but rare, severe dust devils have caused damage and even deaths in the past. One such dust devil struck the Coconino County Fairgrounds in Flagstaff, Arizona, on September 14, 2000, causing extensive damage to several temporary tents, stands and booths, as well as some permanent fairgrounds structures. Several injuries were reported, but there were no fatalities. Based on the degree of damage left behind, it is estimated that the dust devil produced winds as high as 75 mph (120 km/h), which is equivalent to an EF0 tornado. On May 19, 2003, a dust devil lifted the roof off a two-story building in Lebanon, Maine, causing it to collapse and kill a man inside (NCDC Event Details, National Climatic Data Center; retrieved 2008-06-05). On June 18, 2008, a woman near Casper, Wyoming was killed when a dust devil caused a small scorer's shed at a youth baseball field to flip on top of her; she had been trying to shelter from the dust devil by going behind the shed. At East El Paso, Texas in 2010, three children in an inflatable jump house were picked up by a dust devil and lifted over 10 feet (3 m), travelling over a fence and landing in a backyard three houses away; this rare weather incident was the subject of a United States Air Force Weather Squadron study (Clarence Giles, "Air Force Weather Squadron forecasts, studies weather to keep servicemembers safe", Fort Bliss Bugle, Unit News p. 1A, January 12, 2011; archived at https://web.archive.org/web/20150518114436/http://fortblissbugle.com/air-force-weather-squadron-forecasts-studies-weather-to-keep-servicemembers-safe/). In Commerce City, Colorado in 2018, a powerful dust devil hurled two porta-potties into the air; no one was injured. In 2019, a large dust devil in Yucheng county, Henan province, China killed 2 children and injured 18 children and 2 adults when an inflatable jump house was lifted into the air.
Dust devils have been implicated in around 100 aircraft accidents. While many incidents have been simple taxiing problems, a few have had fatal consequences. Dust devils are also considered major hazards among skydivers and paragliding pilots, as they can cause a parachute or a paraglider to collapse with little to no warning, at altitudes considered too low to cut away, and contribute to the serious injury or death of parachutists. Such was the case on June 1, 1996, when a dust devil caused a skydiver's parachute to collapse close to the ground; he later died from the injuries he sustained. Dust devils can also contribute to wildfires: in one case in Engebæk, Billund Municipality, Denmark in 1868, a dust devil tossed tuft into a heater, starting a wildfire that possibly extended over 10,000 to 50,000 hectares or more.
Electrical activities
Dust devils, even small ones, can produce radio noise and electrical fields greater than 10,000 volts per meter. A dust devil picks up small dirt and dust particles. As the particles whirl around, they become electrically charged through contact or frictional charging (triboelectrification). The whirling charged particles also create a magnetic field that fluctuates between 3 and 30 times each second.
These electric fields may assist the vortices in lifting material off the ground and into the atmosphere. Field experiments indicate that a dust devil can lift 1 gram of dust per second from each square metre (10 lb/s from each acre) of ground over which it passes. A large dust devil measuring about 100 metres (330 ft) across at its base can lift about 15 metric tonnes (17 short tons) of dust into the air in 30 minutes. Giant dust storms that sweep across the world's deserts contribute 8% of the mineral dust in the atmosphere each year during the handful of storms that occur. In comparison, the significantly smaller dust devils that twist across the deserts during the summer lift about three times as much dust, thus having a greater combined impact on the dust content of the atmosphere. When this occurs, they are often called sand pillars.
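The quoted figures are mutually consistent, as a quick arithmetic check shows (a sketch only; the 1 g/s per square metre lift rate is itself a field estimate):

```python
import math

# Check the "about 15 metric tonnes in 30 minutes" figure against the
# measured lift rate of ~1 g of dust per second per square metre.
diameter_m = 100.0        # base diameter of a large dust devil
lift_rate = 1e-3          # kg of dust per second per m^2
duration_s = 30 * 60      # 30 minutes

area = math.pi * (diameter_m / 2) ** 2        # ~7,854 m^2
total_kg = area * lift_rate * duration_s
print(f"Base area: {area:,.0f} m^2")
print(f"Dust lifted in 30 min: {total_kg / 1000:.1f} tonnes")  # ~14.1 t
```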
Martian dust devils
Alternate names
In Australia, a dust devil is more commonly known as a "willy willy".
In Ireland, dust devils are known as "sí gaoithe" or "fairy wind".
Related phenomena
Ash devils
Hot cinders underneath freshly deposited ash in recently burned areas may sometimes generate numerous dust devils. The lighter weight and darker color of the ash can make these devils visible hundreds of feet into the air.
Ash devils form in the same way as dust devils and are often seen on unstable days in the burn scars of recent fires.
Coal devils are common at the coal town of Tsagaan Khad in South Gobi Province, Mongolia. They occur when dust devils pick up large amounts of stockpiled coal. Their dark color makes them resemble some tornadoes.
Fire whirls
Fire whirls or swirls, sometimes called fire devils or fire tornadoes, can be seen during intense fires in combustible building structures or, more commonly, in forest or bush fires. A fire whirl is a vortex-shaped formation of burning gases being released from the combustible material. The genesis of the vortex is probably similar to that of a dust devil. Unlike a dust devil, however, the fire-gas vortex probably cannot reach heights much greater than the visible height of the vertical flames, because turbulence in the surrounding gases inhibits the creation of a stable boundary layer between the rotating, rising gases and their surroundings.
Hay devils
A "hay devil" is a gentle whirlwind that forms in the warm air above fields of freshly-cut hay. A vortex forms from a column of hot air rising from the ground on calm, sunny days, tossing and swirling stalks and clumps of hay harmlessly through the air, often to the delight of children and onlookers.
Snow devils
The same conditions can produce a snow whirlwind, also known as a snow devil or "snownado", although differential heating is more difficult in snow-covered areas.
Steam devils
A steam devil is a small vortex column of saturated air, of varying height but small diameter, that forms when cold air lies over a much warmer body of water or a saturated surface. Steam devils are also often observed in the steam rising from power plants.
| Physical sciences | Storms | Earth science |
58246 | https://en.wikipedia.org/wiki/Nitrocellulose | Nitrocellulose | Nitrocellulose (also known as cellulose nitrate, flash paper, flash cotton, guncotton, pyroxylin and flash string, depending on form) is a highly flammable compound formed by nitrating cellulose through exposure to a mixture of nitric acid and sulfuric acid. One of its first major uses was as guncotton, a replacement for gunpowder as propellant in firearms. It was also used to replace gunpowder as a low-order explosive in mining and other applications. In the form of collodion it was also a critical component in an early photographic emulsion, the use of which revolutionized photography in the 1860s. In the 20th century it was adapted to automobile lacquer and adhesives.
Production
The process uses a mixture of nitric acid and sulfuric acid to convert cellulose into nitrocellulose. The quality of the cellulose is important. Hemicellulose, lignin, pentosans, and mineral salts give inferior nitrocelluloses. In precise chemical terms, nitrocellulose is not a nitro compound, but a nitrate ester. The glucose repeat unit (anhydroglucose) within the cellulose chain has three OH groups, each of which can form a nitrate ester. Thus, nitrocellulose can denote mononitrocellulose, dinitrocellulose, and trinitrocellulose, or a mixture thereof. With fewer OH groups than the parent cellulose, nitrocelluloses do not aggregate by hydrogen bonding. The overarching consequence is that the nitrocellulose is soluble in organic solvents such as acetone and esters; e.g., ethyl acetate, methyl acetate, ethyl carbonate. Most lacquers are prepared from the dinitrate, whereas explosives are mainly the trinitrate.
The chemical equation for the formation of the trinitrate is
3 HNO3 + C6H7(OH)3O2 → C6H7(ONO2)3O2 + 3 H2O
The yields are about 85%, with losses attributed to complete oxidation of the cellulose to oxalic acid.
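A short sketch of the stoichiometry behind that yield figure, using the trinitration equation above and standard atomic masses:

```python
# Mass of cellulose trinitrate obtainable per gram of cellulose.
M = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    return sum(M[el] * n for el, n in formula.items())

cellulose_unit = {"C": 6, "H": 10, "O": 5}           # anhydroglucose
trinitrate_unit = {"C": 6, "H": 7, "N": 3, "O": 11}  # C6H7(ONO2)3O2

ratio = molar_mass(trinitrate_unit) / molar_mass(cellulose_unit)
print(f"Theoretical: {ratio:.2f} g per g cellulose")   # ~1.83
print(f"At 85% yield: {0.85 * ratio:.2f} g per g")     # ~1.56
```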
Use
The principal uses of cellulose nitrate are in the production of lacquers and coatings, explosives, and celluloid.
In terms of lacquers and coatings, nitrocellulose dissolves readily in organic solvents, which upon evaporation leave a colorless, transparent, flexible film. Nitrocellulose lacquers have been used as a finish on furniture and musical instruments.
Guncotton, dissolved at about 25% in acetone, forms a lacquer used in preliminary stages of wood finishing to develop a hard finish with a deep lustre. It is normally the first coat applied, then it is sanded and followed by other coatings that bond to it.
Nail polish contains nitrocellulose, as it is inexpensive, dries quickly to a hard film, and does not damage skin.
The explosive applications are diverse, and nitrate content is typically higher for propellant applications than for coatings. For space flight, nitrocellulose was used by Copenhagen Suborbitals on several missions as a means of jettisoning components of the rocket/space capsule and deploying recovery systems. However, after several flights it proved not to have the desired explosive properties in a near-vacuum environment. In 2014, the Philae comet lander failed to deploy its harpoons because its 0.3 grams of nitrocellulose propulsion charges failed to fire during the landing.
Other uses
Collodion, a solution of nitrocellulose, is used today in topical skin applications, such as liquid skin and in the application of salicylic acid, the active ingredient in Compound W wart remover.
Laboratory uses
Membrane filters made of a mesh of nitrocellulose threads with various porosities are used in laboratory procedures for particle retention and cell capture in liquid or gaseous solutions and, conversely, for obtaining particle-free filtrates.
A nitrocellulose slide, nitrocellulose membrane, or nitrocellulose paper is a sticky membrane used for immobilizing nucleic acids in Southern blots and northern blots. It is also used for immobilization of proteins in western blots and in atomic force microscopy, owing to its nonspecific affinity for amino acids. Nitrocellulose is widely used as a support in diagnostic tests where antigen-antibody binding occurs; e.g., pregnancy tests, U-albumin tests, and CRP tests. Glycine and chloride ions make protein transfer more efficient.
Radon tests for alpha track etches use nitrocellulose.
Adolph Noé developed a method of peeling coal balls using nitrocellulose.
It is used to coat playing cards and to bind staples together in office staplers.
Hobbies
In 1846, nitrated cellulose was found to be soluble in ether and alcohol. The solution was named collodion and was soon used as a dressing for wounds.
In 1851, Frederick Scott Archer invented the wet collodion process as a replacement for albumen in early photographic emulsions, binding light-sensitive silver halides to a glass plate.
Magicians' flash paper consists of sheets of pure nitrocellulose, which burn almost instantly with a bright flash, leaving no ash or smoke.
As a medium for cryptographic one-time pads, flash paper makes the disposal of the pad complete, secure, and efficient.
Nitrocellulose lacquer is spin-coated onto aluminium or glass discs, then a groove is cut with a lathe, to make one-off phonograph records, used as masters for pressing or for play in dance clubs. They are referred to as acetate discs.
Depending on the manufacturing process, nitrocellulose is esterified to varying degrees. Table tennis balls, guitar picks, and some photographic films have fairly low esterification levels and burn comparatively slowly with some charred residue.
Historical uses
Early work on nitration of cellulose
In 1832 Henri Braconnot discovered that nitric acid, when combined with starch or wood fibers, would produce a lightweight combustible explosive material, which he named xyloïdine. A few years later in 1838, another French chemist, Théophile-Jules Pelouze (teacher of Ascanio Sobrero and Alfred Nobel), treated paper and cardboard in the same way. Jean-Baptiste Dumas obtained a similar material, which he called nitramidine.
Guncotton
Around 1846 Christian Friedrich Schönbein, a German-Swiss chemist, discovered a more practical formulation. As he was working in the kitchen of his home in Basel, he spilled a mixture of nitric acid (HNO3) and sulfuric acid (H2SO4) on the kitchen table. He reached for the nearest cloth, a cotton apron, and wiped it up. He hung the apron on the stove door to dry, and as soon as it was dry, a flash occurred as the apron ignited. His preparation method was the first to be widely used. The method was to immerse one part of fine cotton in 15 parts of an equal blend of sulfuric acid and nitric acid. After two minutes, the cotton was removed and washed in cold water to set the esterification level and to remove all acid residue. The cotton was then slowly dried at a temperature below 40 °C (104 °F). Schönbein collaborated with the Frankfurt professor Rudolf Christian Böttger, who had discovered the process independently in the same year.
By coincidence, a third chemist, the Brunswick professor F. J. Otto had also produced guncotton in 1846 and was the first to publish the process, much to the disappointment of Schönbein and Böttger.
The patent rights for the manufacture of guncotton were obtained by John Hall & Son in 1846, and industrial manufacture of the explosive began at a purpose-built factory at Marsh Works in Faversham, Kent, a year later. The manufacturing process was not properly understood and few safety measures were put in place. A serious explosion in July that killed almost two dozen workers resulted in the immediate closure of the plant. Guncotton manufacture ceased for over 15 years until a safer procedure could be developed.
The British chemist Frederick Augustus Abel developed the first safe process for guncotton manufacture, which he patented in 1865. The washing and drying times of the nitrocellulose were both extended to 48 hours and repeated eight times over. The acid mixture was changed to two parts sulfuric acid to one part nitric. Nitration can be controlled by adjusting acid concentrations and reaction temperature. Nitrocellulose is soluble in a mixture of ethanol and ether until nitrogen concentration exceeds 12%. Soluble nitrocellulose, or a solution thereof, is sometimes called collodion.
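The nitrogen figures quoted here and below map directly onto the degree of nitration d (nitrate esters per glucose unit, 0 ≤ d ≤ 3): each substitution replaces an OH group (17.01 g/mol) with ONO2 (62.00 g/mol), so

$$\%N(d) = \frac{14.007\,d}{162.14 + 45.0\,d} \times 100,$$

which gives about 11.1% N at d = 2 (dinitrate) and about 14.1% N at d = 3 (trinitrate). The 12% solubility threshold, and the 13% figure cited below for insoluble guncotton, thus correspond to partially trinitrated material, roughly d ≈ 2.3 and d ≈ 2.6 respectively.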
Guncotton containing more than 13% nitrogen (sometimes called insoluble nitrocellulose) was prepared by prolonged exposure to hot, concentrated acids for limited use as a blasting explosive or for warheads of underwater weapons such as naval mines and torpedoes. Safe and sustained production of guncotton began at the Waltham Abbey Royal Gunpowder Mills in the 1860s, and the material rapidly became the dominant explosive, becoming the standard for military warheads, although it remained too potent to be used as a propellant. More-stable and slower-burning collodion mixtures were eventually prepared using less concentrated acids at lower temperatures for smokeless powder in firearms. The first practical smokeless powder made from nitrocellulose, for firearms and artillery ammunition, was invented by French chemist Paul Vieille in 1884.
Jules Verne viewed the development of guncotton with optimism. He referred to the substance several times in his novels. His adventurers carried firearms employing this substance. In his From the Earth to the Moon, guncotton was used to launch a projectile into space.
Because of their fluffy and nearly white appearance, nitrocellulose products are often referred to as cottons, e.g. lacquer cotton, celluloid cotton, and gun cotton.
Guncotton was originally made from cotton (as the source of cellulose) but contemporary methods use highly processed cellulose from wood pulp. While guncotton is dangerous to store, the hazards it presents can be minimized by storing it dampened with various liquids, such as alcohol. For this reason, accounts of guncotton usage dating from the early 20th century refer to "wet guncotton."
The power of guncotton made it suitable for blasting. As a projectile driver, it had around six times the gas generation of an equal volume of black powder and produced less smoke and less heating.
Artillery shells filled with gun cotton were widely used during the American Civil War, and its use was one of the reasons the conflict was seen as the "first modern war." In combination with breech-loading artillery, such high explosive shells could cause greater damage than previous solid cannonballs.
During the First World War, British authorities were slow to introduce grenades, with soldiers at the front improvising by filling ration tin cans with gun cotton, scrap and a basic fuse.
Further research indicated the importance of washing the acidified cotton. Unwashed nitrocellulose (sometimes called pyrocellulose) may spontaneously ignite and explode at room temperature, as the evaporation of water results in the concentration of unreacted acid.
Film
In 1855, the first human-made plastic, nitrocellulose (branded Parkesine, patented in 1862), was created by Alexander Parkes from cellulose treated with nitric acid and a solvent. In 1868, American inventor John Wesley Hyatt developed a plastic material he named Celluloid, improving on Parkes' invention by plasticizing the nitrocellulose with camphor so that it could be processed into a photographic film. This was used commercially as "celluloid", a highly flammable plastic that until the mid-20th century formed the basis for lacquers and photographic film.
On May 2, 1887, Hannibal Goodwin filed a patent for "a photographic pellicle and process of producing same ... especially in connection with roller cameras", but the patent was not granted until September 13, 1898. In the meantime, George Eastman had already started production of roll-film using his own process.
Nitrocellulose was used as the first flexible film base, beginning with Eastman Kodak products in August 1889. Camphor is used as a plasticizer for nitrocellulose film, often called nitrate film. Goodwin's patent was sold to Ansco, which successfully sued Eastman Kodak for infringement of the patent; in 1914, Goodwin Film was awarded $5,000,000.
Nitrate film fires
Disastrous fires related to celluloid or "nitrate film" became regular occurrences in the motion picture industry throughout the silent era and for many years after the arrival of sound film. Projector fires and spontaneous combustion of nitrate footage stored in studio vaults and in other structures were often blamed during the early to mid 20th century for destroying or heavily damaging cinemas, inflicting many serious injuries and deaths, and for reducing to ashes the master negatives and original prints of tens of thousands of screen titles, turning many of them into lost films. Even when nitrate stock did not start the blaze, flames from other sources spread to large nearby film collections, producing intense and highly destructive fires.
In 1914, the same year that Goodwin Film was awarded $5,000,000 from Kodak for patent infringement, nitrate film fires incinerated a significant portion of the United States' early cinematic history. In that year alone, five very destructive fires occurred at four major studios and a film-processing plant. Millions of feet of film burned on March 19 at the Eclair Moving Picture Company in Fort Lee, New Jersey. Later that same month, many more reels and film cans of negatives and prints also burned at Edison Studios in New York City, in the Bronx. On May 13, a fire at Universal Pictures' Colonial Hall "film factory" in Manhattan consumed another extensive collection. Yet again, on June 13 in Philadelphia, a fire and a series of explosions ignited inside the 186-square-meter (2,000-square-foot) film vault of the Lubin Manufacturing Company and quickly wiped out virtually all of that studio's pre-1914 catalogue. Then a second fire hit the Edison Company at another location on December 9, at its film-processing complex in West Orange, New Jersey. That catastrophic fire started inside a film-inspection building and caused over $7,000,000 in property damages. Even after film technology changed, archives of older films remained vulnerable; the 1965 MGM vault fire burned many films that were decades old.
The use of volatile nitrocellulose film for motion pictures led many cinemas to fireproof their projection rooms with wall coverings made of asbestos. Those additions were intended to prevent or at least delay the migration of flames beyond the projection areas. A training film for projectionists included footage of a controlled ignition of a reel of nitrate film, which continued to burn even when fully submerged in water. Once burning, it is extremely difficult to extinguish. Unlike most other flammable materials, nitrocellulose does not need a source of air to continue burning, since it contains sufficient oxygen within its molecular structure to sustain a flame. For this reason, immersing burning film in water may not extinguish it, and could actually increase the amount of smoke produced. Owing to public safety precautions, the United Kingdom's Health and Safety Executive to this day forbids transportation of nitrate film by post or public transit, or disposal with household refuse.
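The claim that nitrocellulose carries its own oxygen can be made quantitative with the standard oxygen-balance figure for a compound CcHhNnOo; the sketch below uses the idealized, fully nitrated repeat unit C6H7N3O11, whereas actual film base was nitrated to lower levels:

```python
# Oxygen balance: the percentage by which a compound's own oxygen
# falls short of (negative) or exceeds (positive) what complete
# combustion to CO2 and H2O would require.

def oxygen_balance(c: int, h: int, n: int, o: int) -> float:
    """Oxygen balance (%) of a compound CcHhNnOo."""
    mw = 12.011 * c + 1.008 * h + 14.007 * n + 15.999 * o
    return -1600 * (2 * c + h / 2 - o) / mw

# Cellulose trinitrate repeat unit: moderately negative, but the
# nitrate groups still supply most of the oxygen needed to burn,
# which is why water immersion does not reliably extinguish it.
print(f"{oxygen_balance(6, 7, 3, 11):.1f} %")  # about -24 %
```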
Cinema fires caused by the ignition of nitrocellulose film stock commonly occurred as well. In Ireland in 1926, such a fire was blamed for the Dromcolliher cinema tragedy in County Limerick in which 48 people died. Then in 1929 at the Glen Cinema in Paisley, Scotland, a film-related fire killed 69 children. Today, nitrate film projection is rare, highly regulated, and requires extensive precautions, including extra health-and-safety training for projectionists. A special projector certified to run nitrate films has many modifications, among them the chambering of the feed and takeup reels in thick metal covers with small slits to allow the film to run through them. The projector is additionally modified to accommodate several fire extinguishers with nozzles aimed at the film gate. The extinguishers automatically trigger if a piece of film near the gate starts to burn. While this triggering would likely damage or destroy a significant portion of the projector's components, it would contain a fire and prevent far greater damage. Projection rooms may also be required to have automatic metal covers for the projection windows, preventing the spread of fire to the auditorium. Today, the Dryden Theatre at the George Eastman Museum is one of a few theaters in the world that is capable of safely projecting nitrate films and regularly screens them to the public. The BFI Southbank in London is the only cinema in the United Kingdom licensed to show nitrate film.
The use of nitrate film and its fire hazards were not limited to motion pictures or commercial still photography. The film was also used for many years in medicine, where its hazardous nature was most acute in X-ray photography. In 1929, several tons of stored X-ray film at the Cleveland Clinic in Ohio were ignited by steam from a broken heating pipe. The tragedy claimed 123 lives, both during the fire itself and in the days that followed, as hospitalized victims succumbed to the smoke they had inhaled from the burning film, which was laced with toxic gases such as sulfur dioxide and hydrogen cyanide. Related fires in other medical facilities prompted the growing disuse of nitrocellulose stock for X-rays by 1933, nearly two decades before its use was discontinued for motion-picture films in favour of cellulose acetate film, more commonly known as "safety film".
Nitrocellulose decomposition and new "safety" stocks
Nitrocellulose was found to decompose gradually, releasing nitric acid that further catalyzes the decomposition (eventually reducing the film to a flammable powder). Decades later, storage at low temperatures was discovered as a means of delaying these reactions indefinitely. Many films produced during the early 20th century were lost through this accelerating, self-catalyzed disintegration or through studio warehouse fires, and many others were deliberately destroyed specifically to avoid the fire risk. Salvaging old films is a major problem for film archivists (see film preservation).
Nitrocellulose film base manufactured by Kodak can be identified by the word "nitrate" in dark letters along one edge; the word in clear letters on a dark background indicates that the film was printed from a nitrate-base original negative or projection print, but the film in hand may itself be a later print or copy negative made on safety film. Acetate film manufactured during the era when nitrate film was still in use was marked "Safety" or "Safety Film" along one edge in dark letters. The 8 mm, 9.5 mm, and 16 mm film stocks, intended for amateur and other nontheatrical use, were never manufactured with a nitrate base in the West, though rumors exist of 16 mm nitrate film having been produced in the former Soviet Union and China.
Nitrate dominated the market for professional-use 35 mm motion picture film from the industry's origins to the early 1950s. Cellulose acetate-based safety film, notably cellulose diacetate and cellulose acetate propionate, was produced in that gauge for small-scale use in niche applications, such as printing advertisements and other short films so they could be sent through the mail without fire-safety precautions. However, the early generations of safety film base had two major disadvantages relative to nitrate: they were much more expensive to manufacture and considerably less durable in repeated projection. The cost of the safety precautions associated with the use of nitrate was significantly lower than the cost of using any of the safety bases available before 1948. These drawbacks were eventually overcome with the launch of cellulose triacetate base film by Eastman Kodak in 1948, and cellulose triacetate superseded nitrate as the film industry's mainstay base very quickly. While Kodak had discontinued some nitrate film stocks earlier, it stopped producing various nitrate roll films in 1950 and ceased production of nitrate 35 mm motion picture film in 1951.
The crucial advantage cellulose triacetate had over nitrate was that it was no more of a fire risk than paper (the stock is often referred to as "non-flam": this is broadly accurate, although it is still combustible, just not in as volatile or as dangerous a way as nitrate), while it almost matched the cost and durability of nitrate. It remained in almost exclusive use in all film gauges until the 1980s, when polyester/PET film began to supersede it for intermediate and release printing.
Polyester is much more resistant to polymer degradation than either nitrate or triacetate. Although triacetate does not decompose in as dangerous a way as nitrate does, it is still subject to a process known as deacetylation, often nicknamed "vinegar syndrome" (due to the acetic acid smell of decomposing film) by archivists, which causes the film to shrink, deform, become brittle and eventually unusable. PET, like cellulose mononitrate, is less prone to stretching than other available plastics. By the late 1990s, polyester had almost entirely superseded triacetate for the production of intermediate elements and release prints.
Triacetate remains in use for most camera negative stocks because it can be "invisibly" spliced using solvents during negative assembly, while polyester film is usually spliced using adhesive tape patches, which leave visible marks in the frame area (though ultrasonic splicing in the frame-line area can be invisible). Also, polyester film is so strong that it will not break under tension and may cause serious damage to expensive camera or projector mechanisms in the event of a film jam, whereas triacetate film breaks easily, reducing the risk of damage. Many opposed the use of polyester for release prints for this reason, and because ultrasonic splicers are very expensive, beyond the budgets of many smaller theaters. In practice, though, this has not proved to be as much of a problem as was feared; rather, with the increased use of automated long-play systems in cinemas, the greater strength of polyester has been a significant advantage in lessening the risk of a performance being interrupted by a film break.
Despite its self-oxidizing hazards, nitrate is still highly regarded because the stock is more transparent than its replacements, and older films used denser silver in the emulsion; the combination results in a notably more luminous image with a high contrast ratio.
Fabric
The solubility of nitrocellulose was the basis for the first "artificial silk", made by Georges Audemars in 1855, which he called "Rayon". However, Hilaire de Chardonnet was the first to patent a nitrocellulose fiber marketed as "artificial silk", at the Paris Exhibition of 1889. Commercial production started in 1891, but the product was flammable and more expensive than cellulose acetate or cuprammonium rayon, and production therefore ceased early in the 1900s. Nitrocellulose was briefly known as "mother-in-law silk".
Frank Hastings Griffin invented the double-godet, a special stretch-spinning process that changed artificial silk to rayon, rendering it usable in many industrial products such as tire cords and clothing. Nathan Rosenstein invented the "spunize process" by which he turned rayon from a hard fiber to a fabric. This allowed rayon to become a popular raw material in textiles.
Coatings
Nitrocellulose lacquer, manufactured by DuPont among others, was the primary material for painting automobiles for many years. Durability concerns, the complexity of "multiple stage" modern finishes, and other factors, including environmental regulation and the cost of application relative to "poly" finishes, led manufacturers to choose newer technologies, but lacquer remains a favorite of hobbyists for historical reasons and for the ease with which a professional finish can be obtained. Most automobile "touch up" paints are still made from lacquer because of its fast drying, easy application, and superior adhesion, regardless of the material used for the original finish. Guitar finishes sometimes shared color codes with the automobiles of the day. Gibson still uses nitrocellulose lacquer on all of its guitars, as does Fender when reproducing historically accurate instruments. The lacquer yellows and cracks over time, and custom shops will reproduce this aging to make instruments appear vintage. Guitars made by smaller shops (luthiers) also often use "nitro", as it has an almost mythical status among guitarists.
Hazards
Because of its explosive nature, not all applications of nitrocellulose were successful. In 1869, with elephants having been poached to near extinction, the billiards industry offered a US$10,000 prize for the best replacement for ivory billiard balls. John Wesley Hyatt won it with a new material he had invented, camphored nitrocellulose, the first thermoplastic, better known as celluloid. The invention enjoyed brief popularity, but the Hyatt balls were extremely flammable, and sometimes portions of the outer shell would explode upon impact. An owner of a billiard saloon in Colorado wrote to Hyatt about the explosive tendencies, saying that he did not mind very much personally, but for the fact that every man in his saloon immediately pulled a gun at the sound. The process Hyatt patented in 1881 involved placing the mass of nitrocellulose in a rubber bag, which was then placed in a cylinder of liquid and heated. Pressure applied to the liquid compressed the nitrocellulose uniformly into a sphere as the heat vaporized the solvents; the ball was then cooled and turned to a uniform sphere. In light of the explosive results, this process was nicknamed the "Hyatt gun method".
An overheated container of dry nitrocellulose is believed to be the initial cause of the 2015 Tianjin explosions.
| Physical sciences | Amides and amines | Chemistry |
58256 | https://en.wikipedia.org/wiki/Wax | Wax | Waxes are a diverse class of organic compounds that are lipophilic, malleable solids near ambient temperatures. They include higher alkanes and lipids, typically with melting points above about 40 °C (104 °F), melting to give low viscosity liquids. Waxes are insoluble in water but soluble in nonpolar organic solvents such as hexane, benzene and chloroform. Natural waxes of different types are produced by plants and animals and occur in petroleum.
Chemistry
Waxes are organic compounds that characteristically consist of long aliphatic alkyl chains, although aromatic compounds may also be present. Natural waxes may contain unsaturated bonds and include various functional groups such as fatty acids, primary and secondary alcohols, ketones, aldehydes and fatty acid esters. Synthetic waxes often consist of homologous series of long-chain aliphatic hydrocarbons (alkanes or paraffins) that lack functional groups.
Plant and animal waxes
Waxes are synthesized by many plants and animals. Those of animal origin typically consist of wax esters derived from a variety of carboxylic acids and fatty alcohols. In waxes of plant origin, characteristic mixtures of unesterified hydrocarbons may predominate over esters. The composition depends not only on species, but also on the geographic location of the organism.
Animal waxes
The best-known animal wax is beeswax, used in constructing the honeycombs of beehives, but other insects also secrete waxes. A major component of beeswax is myricyl palmitate, an ester of triacontanol and palmitic acid. Spermaceti occurs in large amounts in the head oil of the sperm whale; one of its main constituents is cetyl palmitate, another ester of a fatty acid and a fatty alcohol. Lanolin is a wax obtained from wool, consisting of esters of sterols.
Plant waxes
Plants secrete waxes into and on the surface of their cuticles as a way to control evaporation, wettability and hydration. The epicuticular waxes of plants are mixtures of substituted long-chain aliphatic hydrocarbons, containing alkanes, alkyl esters, fatty acids, primary and secondary alcohols, diols, ketones and aldehydes. From the commercial perspective, the most important plant wax is carnauba wax, a hard wax obtained from the Brazilian palm Copernicia prunifera. Containing the ester myricyl cerotate, it has many applications, such as confectionery and other food coatings, car and furniture polish, floss coating, and surfboard wax. Other more specialized vegetable waxes include jojoba oil, candelilla wax and ouricury wax.
Modified plant and animal waxes
Plant and animal based waxes or oils can undergo selective chemical modifications to produce waxes with more desirable properties than are available in the unmodified starting material. This approach has relied on green chemistry approaches including olefin metathesis and enzymatic reactions and can be used to produce waxes from inexpensive starting materials like vegetable oils.
Petroleum derived waxes
Although many natural waxes contain esters, paraffin waxes are hydrocarbons, mixtures of alkanes usually in a homologous series of chain lengths. These materials represent a significant fraction of petroleum and are refined by vacuum distillation. Paraffin waxes are mixtures of saturated n- and iso-alkanes, naphthenes, and alkyl- and naphthene-substituted aromatic compounds. A typical alkane paraffin wax consists of hydrocarbons with the general formula CnH2n+2, such as hentriacontane, C31H64. The degree of branching has an important influence on the properties. Microcrystalline wax is a less commonly produced petroleum-based wax that contains a higher percentage of isoparaffinic (branched) and naphthenic hydrocarbons.
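As a quick arithmetic check of that general formula (simple algebra, not an additional source), substituting the chain length of hentriacontane gives:

$$\mathrm{C}_n\mathrm{H}_{2n+2},\quad n = 31 \;\Rightarrow\; \mathrm{C}_{31}\mathrm{H}_{2(31)+2} = \mathrm{C}_{31}\mathrm{H}_{64}$$

matching the composition C31H64 cited above.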
Millions of tons of paraffin waxes are produced annually. They are used in foods (such as chewing gum and cheese wrapping), in candles and cosmetics, as non-stick and waterproofing coatings and in polishes.
Montan wax
Montan wax is a fossilized wax extracted from coal and lignite. It is very hard, reflecting its high concentration of saturated fatty acids and alcohols. Although dark brown and odorous, it can be purified and bleached to give commercially useful products.
Polyethylene and related derivatives
About 200 million kilograms of polyethylene waxes are consumed annually.
Polyethylene waxes are manufactured by one of three methods:
The direct polymerization of ethylene, potentially including co-monomers;
The thermal degradation of high molecular weight polyethylene resin;
The recovery of low molecular weight fractions from high molecular weight resin production.
Each production technique generates products with slightly different properties. Key properties of low molecular weight polyethylene waxes are viscosity, density and melt point.
Polyethylene waxes produced by degradation of, or recovery from, polyethylene resin streams contain very low molecular weight materials that must be removed to prevent volatilization and potential fire hazards during use. Polyethylene waxes manufactured by these methods are usually stripped of low molecular weight fractions to yield a flash point above 500 °F (260 °C). Many polyethylene resin plants produce a low molecular weight stream often referred to as low polymer wax (LPW). LPW is unrefined and contains volatile oligomers and residual, corrosive catalyst, and may contain other foreign material and water. Refining LPW into a polyethylene wax involves removing the oligomers and hazardous catalyst; proper refining is especially important for applications requiring FDA or other regulatory certification.
Uses
Waxes are mainly consumed industrially as components of complex formulations, often for coatings. The main use of polyethylene and polypropylene waxes is in the formulation of colourants for plastics. Waxes confer matting effects (i.e., to confer non-glossy finishes) and wear resistance to paints. Polyethylene waxes are incorporated into inks in the form of dispersions to decrease friction. They are employed as release agents, find use as slip agents in furniture, and confer corrosion resistance.
Candles
Waxes such as paraffin wax or beeswax, and hard fats such as tallow, are used to make candles for lighting and decoration. Soy wax, another candle fuel, is made by hydrogenating soybean oil.
Wood products
Waxes are used as finishes and coatings for wood products. Beeswax is frequently used as a lubricant on drawer slides where wood to wood contact occurs.
Other uses
Sealing wax was used to close important documents in the Middle Ages, and wax tablets were used as writing surfaces. Several types of wax were distinguished in the Middle Ages: four named kinds (Ragusan, Montenegro, Byzantine, and Bulgarian); "ordinary" waxes from Spain, Poland, and Riga; unrefined waxes; and colored waxes (red, white, and green). Waxes are used to make waxed paper, impregnating and coating paper and card to waterproof it or make it resistant to staining, or to modify its surface properties. Waxes are also used in shoe polishes, wood polishes, and automotive polishes, as mold release agents in mold making, as a coating for many cheeses, and to waterproof leather and fabric. Wax has been used since antiquity as a temporary, removable model in lost-wax casting of gold, silver and other materials.
Wax with colorful pigments added has been used as a medium in encaustic painting, and is used today in the manufacture of crayons, china markers and colored pencils. Carbon paper, used for making duplicate typewritten documents, was coated with carbon black suspended in wax, typically montan wax; it has largely been superseded by photocopiers and computer printers. Lipstick and mascara are blends of various fats and waxes colored with pigments, and both beeswax and lanolin are used in other cosmetics. Ski wax is used in skiing and snowboarding, and the sports of surfing and skateboarding often use wax to enhance the performance of equipment.
Some waxes are considered food-safe and are used to coat wooden cutting boards and other items that come into contact with food. Beeswax or coloured synthetic wax is used to decorate Easter eggs in Romania, Ukraine, Poland, Lithuania and the Czech Republic. Paraffin wax is used in making chocolate covered sweets.
Wax is also used in wax bullets, which are used as simulation aids, and for wax sculpturing.
Specific examples
Animal waxes
Beeswax – produced by honey bees
Chinese wax – produced by the scale insect Ceroplastes ceriferus
Lanolin (wool wax) – from the sebaceous glands of sheep
Shellac wax – from the lac insect Kerria lacca
Spermaceti – from the head cavities and blubber of the sperm whale
Vegetable waxes
Bayberry wax – from the surface wax of the fruits of the bayberry shrub, Myrica faya
Candelilla wax – from the Mexican shrubs Euphorbia cerifera and Euphorbia antisyphilitica
Carnauba wax – from the leaves of the carnauba palm, Copernicia cerifera
Castor wax – catalytically hydrogenated castor oil
Esparto wax – a byproduct of making paper from esparto grass (Macrochloa tenacissima)
Japan wax – a vegetable triglyceride (not a true wax), from the berries of Rhus and Toxicodendron species
Jojoba oil – a liquid wax ester, from the seed of Simmondsia chinensis.
Ouricury wax – from the Brazilian feather palm, Syagrus coronata.
Rice bran wax – obtained from rice bran (Oryza sativa)
Soy wax – from soybean oil
Tallow tree wax – from the seeds of the tallow tree Triadica sebifera.
Mineral waxes
Ceresin waxes
Montan wax – extracted from lignite and brown coal
Ozocerite – found in lignite beds
Peat waxes
Petroleum waxes
Paraffin wax – made of long-chain alkane hydrocarbons
Microcrystalline wax – with very fine crystalline structure
| Physical sciences | Hydrocarbons | Chemistry |
58261 | https://en.wikipedia.org/wiki/Honey%20bee | Honey bee | A honey bee (often misspelled honeybee, akin to writing carpenterbee instead of carpenter bee) is a eusocial flying insect within the genus Apis of the bee clade, all native to mainland Afro-Eurasia. After bees spread naturally throughout Africa and Eurasia, humans became responsible for the current cosmopolitan distribution of honey bees, introducing multiple subspecies into South America (early 16th century), North America (early 17th century), and Australia (early 19th century).
Honey bees are known for their construction of perennial colonial nests from wax, the large size of their colonies, and their surplus production and storage of honey, distinguishing their hives as a prized foraging target of many animals, including honey badgers, bears and human hunter-gatherers. Only eight surviving species of honey bee are recognized, with a total of 43 subspecies, though historically anywhere from seven to eleven species have been recognized. Honey bees represent only a small fraction of the roughly 20,000 known species of bees.
The best known honey bee is the western honey bee (Apis mellifera), which was domesticated for honey production and crop pollination. The only other domesticated bee is the eastern honey bee (Apis cerana), which occurs in South, Southeast, and East Asia. Only members of the genus Apis are true honey bees, but some other types of bees produce and store honey and have been kept by humans for that purpose, including the stingless bees belonging to the genus Melipona and the Indian stingless or dammar bee Tetragonula iridipennis. Modern humans also use beeswax in making candles, soap, lip balms and various cosmetics, as a lubricant and in mould-making using the lost wax process.
Etymology and name
The genus name Apis is Latin for "bee". Although modern dictionaries may refer to Apis as either honey bee or honeybee, entomologist Robert Snodgrass asserts that correct usage requires two words, i.e., honey bee, because it is a kind or type of bee. It is incorrect to run the two words together, as in dragonfly or butterfly, which are appropriate because dragonflies and butterflies are not flies. Honey bee, not honeybee, is the listed common name in the Integrated Taxonomic Information System, the Entomological Society of America Common Names of Insects Database, and the Tree of Life Web Project.
Origin, systematics, and distribution
Honey bees appear to have their center of origin in South and Southeast Asia (including the Philippines), as all the extant species except Apis mellifera are native to that region. Notably, living representatives of the earliest lineages to diverge (Apis florea and Apis andreniformis) have their center of origin there.
The first Apis bees appear in the fossil record at the Eocene–Oligocene boundary (34 mya), in European deposits. The origin of these prehistoric honey bees does not necessarily indicate Europe as the place of origin of the genus, only that the bees were present in Europe by that time. Few fossil deposits are known from South Asia, the suspected region of honey bee origin, and fewer still have been thoroughly studied.
No Apis species existed in the New World during human times before the introduction of A. mellifera by Europeans. Only one fossil species is documented from the New World, Apis nearctica, known from a single 14 million-year-old specimen from Nevada.
The close relatives of modern honey bees – e.g., bumblebees and stingless bees – are also social to some degree, and social behavior is considered to be a trait that predates the origin of the genus. Among the extant members of Apis, the more basal species make single, exposed combs, while the more recently evolved species nest in cavities and have multiple combs, which has greatly facilitated their domestication.
Species
While about 20,000 species of bees exist, only eight species of honey bee are recognized, with a total of 43 subspecies, although historically seven to eleven species have been recognized: Apis andreniformis (the black dwarf honey bee); Apis cerana (the eastern honey bee); Apis dorsata (the giant honey bee); Apis florea (the red dwarf honey bee); Apis koschevnikovi (Koschevnikov's honey bee); Apis laboriosa (the Himalayan giant honey bee); Apis mellifera (the western honey bee); and Apis nigrocincta (the Philippine honey bee).
Honey bees are the only extant members of the tribe Apini. Today's honey bees constitute three clades: Micrapis (the dwarf honey bees), Megapis (the giant honey bee), and Apis (the western honey bee and its close relatives).
Most species have historically been cultured or at least exploited for honey and beeswax by humans indigenous to their native ranges. Only two species have been truly domesticated: Apis mellifera and Apis cerana. A. mellifera has been cultivated at least since the time of the building of the Egyptian pyramids, and only that species has been moved extensively beyond its native range.
Micrapis
Apis florea and Apis andreniformis are small honey bees of southern and southeastern Asia. They make very small, exposed nests in trees and shrubs. Their stings are often incapable of penetrating human skin, so the hive and swarms can be handled with minimal protection. They occur largely sympatrically, though they are very distinct evolutionarily and are probably the result of allopatric speciation, their distribution later converging.
Given that A. florea is more widely distributed and A. andreniformis is considerably more aggressive, honey is usually harvested, if at all, from the former only. They are the most ancient extant lineage of honey bees, perhaps diverging from the other lineages in the Bartonian (some 40 million years ago or slightly later), though they do not appear to have diverged from each other long before the Neogene. A. florea has a smaller wingspan than its sister species and is completely yellow with the exception of the scutellum of workers, which is black.
Megapis
Two species are recognized in the subgenus Megapis. They usually build single or a few exposed combs on high tree limbs, on cliffs, and sometimes on buildings. They can be very fierce. Periodically robbed of their honey by human "honey hunters", colonies are easily capable of stinging a human being to death if provoked.
Apis dorsata, the giant honey bee, is native and widespread across most of South and Southeast Asia.
A. d. binghami, the Indonesian giant honey bee, is classified as the Indonesian subspecies of the giant honey bee or a distinct species; in the latter case, A. d. breviligula and/or other lineages would probably also have to be considered species.
Apis laboriosa, the Himalayan giant honey bee, was initially described as a distinct species. Later, it was included in A. dorsata as a subspecies based on the biological species concept, though authors applying a genetic species concept have suggested it should be considered a separate species and more recent research has confirmed this classification. Essentially restricted to the Himalayas, it differs little from the giant honey bee in appearance, but has extensive behavioral adaptations that enable it to nest in the open at high altitudes despite low ambient temperatures. It is the largest living honey bee.
Apis
Eastern Apis species include three or four species, including A. koschevnikovi, A. nigrocincta, and A. cerana. The genetics of the western honey bee (A. mellifera) are unclear.
Koschevnikov's honey bee
Koschevnikov's honey bee (Apis koschevnikovi) is often referred to in the literature as the "red bee of Sabah"; however, while A. koschevnikovi is pale reddish in Sabah State, Borneo, Malaysia, it is a dark, coppery colour in the Malay Peninsula and Sumatra, Indonesia. Its habitat is limited to the tropical evergreen forests of the Malay Peninsula, Borneo and Sumatra; it does not live in the tropical evergreen rain forests that extend into Thailand, Myanmar, Cambodia and Vietnam.
Philippine honey bee
Apis nigrocincta is a cavity-nesting species. The species has rust-coloured scapes, legs, and clypeuses, with reddish-tan hair colour that covers most of the body.
Eastern honey bee
Apis cerana, the eastern honey bee proper, is the traditional honey bee of southern and eastern Asia. One of its subspecies, the Indian honey bee (A. c. indica), was domesticated and kept in hives in a fashion similar to A. mellifera, though on a more limited, regional scale.
Its relationships to the Bornean honey bee A. c. nuluensis and to Apis nigrocincta from the Philippines have not yet been satisfactorily resolved; some researchers argue that these are indeed distinct species, but that A. cerana as defined is still paraphyletic, consisting of several separate species, while other researchers argue that cerana is a single monophyletic species.
Western honey bee
A. mellifera, the most common domesticated species, was first domesticated before 2600 BC and was the third insect to have its genome mapped. It seems to have originated in eastern tropical Africa and spread from there to Europe and eastwards into Asia to the Tian Shan range. It is variously called the European, western, or common honey bee in different parts of the world. Many subspecies have adapted to the local geographic and climatic environments; in addition, breeds such as the Buckfast bee have been bred. Behavior, colour, and anatomy can be quite different from one subspecies or even strain to another.
A. mellifera phylogeny is the most enigmatic of all honey bee species. It seems to have diverged from its eastern relatives only during the Late Miocene. This would fit the hypothesis that the ancestral stock of cave-nesting honey bees was separated into the western group of East Africa and the eastern group of tropical Asia by desertification in the Middle East and adjacent regions, which caused declines of food plants and trees that provided nest sites, eventually causing gene flow to cease.
The diversity of A. mellifera subspecies is probably the product of a largely Early Pleistocene radiation aided by climate and habitat changes during the last ice age. That the western honey bee has been intensively managed by humans for many millennia – including hybridization and introductions – has apparently increased the speed of its evolution and confounded the DNA sequence data to a point where little of substance can be said about the exact relationships of many A. mellifera subspecies.
Apis mellifera is not native to the Americas, so it was not present when the European explorers and colonists arrived. However, other native bee species were kept and traded by indigenous peoples. European colonists first brought the German honey bee (A. m. mellifera) to the Americas in 1622, followed later by the Italian honey bee (A. m. ligustica) and others. Many of the crops that depend on western honey bees for pollination have also been imported since colonial times. Escaped swarms (known as "wild" honey bees, but actually feral) spread rapidly as far as the Great Plains, usually preceding the colonists. Honey bees did not naturally cross the Rocky Mountains; they were transported by the Mormon pioneers to Utah in the late 1840s, and by ship to California in the early 1850s.
Africanized honey bee
Africanized honey bees (known colloquially as "killer bees") are hybrids between European stock and the East African lowland subspecies A. m. scutellata. They are often more aggressive than European honey bees and do not create as much of a honey surplus, but are more resistant to disease and are better foragers. Accidentally released from quarantine in Brazil, they have spread to North America and constitute a pest in some regions. However, these strains do not overwinter well, so they are not often found in the colder, more northern parts of North America. The original breeding experiment for which the East African lowland honey bees were brought to Brazil in the first place has continued (though not as originally intended). Novel hybrid strains of domestic and re-domesticated Africanized honey bees combine high resilience to tropical conditions and good yields. They are popular among beekeepers in Brazil.
Living and fossil honey bees (Apini: Apis)
Tribe Apini Latreille
Genus Apis Linnaeus (sensu lato)
henshawi species group (†Priorapis Engel, †Synapis Cockerell)
†A. vetusta Engel
†A. henshawi Cockerell
†A. petrefacta (Říha)
†A. miocenica Hong
†A. "longtibia" Zhang
†A. "Miocene 1"
armbrusteri species group (†Cascapis Engel)
†A. armbrusteri Zeuner
†A. nearctica, species novus
florea species group (Micrapis Ashmead)
A. florea Fabricius
A. andreniformis Smith
dorsata species group (Megapis Ashmead)
†A. lithohermaea Engel
A. dorsata Fabricius
A. laboriosa Smith
mellifera species group (Apis Linnaeus sensu stricto)
mellifera subgroup
A. mellifera Linnaeus (Apis Linnaeus sensu strictissimo)
cerana subgroup (Sigmatapis Maa)
A. cerana Fabricius
A. nigrocincta Smith
A. koschevnikovi Enderlein
Life cycle
As in a few other types of eusocial bees, a colony generally contains one queen bee, a female; seasonally up to a few thousand drone bees, or males; and tens of thousands of female worker bees. Details vary among the different species of honey bees, but common features include:
Eggs are laid singly in a cell in a wax honeycomb, produced and shaped by the worker bees. Using her spermatheca, the queen can choose to fertilize the egg she is laying, usually depending on which cell she is laying it into. Drones develop from unfertilised eggs and are haploid, while females (queens and worker bees) develop from fertilised eggs and are diploid. Larvae are initially fed with royal jelly produced by worker bees, later switching to honey and pollen. The exception is a larva fed solely on royal jelly, which will develop into a queen bee. The larva undergoes several moultings before spinning a cocoon within the cell, and pupating.
Young worker bees, sometimes called "nurse bees", clean the hive and feed the larvae. When their royal jelly-producing glands begin to atrophy, they begin building comb cells. They progress to other within-colony tasks as they become older, such as receiving nectar and pollen from foragers, and guarding the hive. Later still, a worker takes her first orientation flights and finally leaves the hive and typically spends the remainder of her life as a forager.
Worker bees cooperate to find food and use a pattern of "dancing" (known as the bee dance or waggle dance) to communicate information regarding resources with each other; this dance varies from species to species, but all living species of Apis exhibit some form of the behavior. If the resources are very close to the hive, they may also exhibit a less specific dance commonly known as the "round dance".
Honey bees also perform tremble dances, which recruit receiver bees to collect nectar from returning foragers.
Virgin queens go on mating flights away from their home colony to a drone congregation area and mate with multiple drones before returning. The drones die in the act of mating. Queen honey bees do not mate with drones from their home colony.
Colonies are established not by solitary queens, as in most bees, but by groups known as "swarms", which consist of a mated queen and a large contingent of worker bees. This group moves en masse to a nest site which was scouted by worker bees beforehand and whose location is communicated with a special type of dance. Once the swarm arrives, they immediately construct a new wax comb and begin to raise new worker brood. This type of nest founding is not seen in any other living bee genus, though several groups of vespid wasps also found new nests by swarming (sometimes including multiple queens). Also, stingless bees will start new nests with large numbers of worker bees, but the nest is constructed before a queen is escorted to the site, and this worker force is not a true "swarm".
Winter survival
In cold climates, honey bees stop flying as temperatures fall and crowd into the central area of the hive to form a "winter cluster". The worker bees huddle around the queen bee at the center of the cluster, shivering to keep the core warm; the target temperature is lower at the start of winter, during the broodless period, and rises once the queen resumes laying. The worker bees rotate through the cluster from the outside to the inside so that no bee gets too cold, while the outside edges of the cluster remain considerably cooler than the core. The colder the weather is outside, the more compact the cluster becomes. During winter, the bees consume their stored honey to produce body heat; the amount of honey consumed is a function of winter length and severity. In addition, certain bees, including the western honey bee as well as Apis cerana, are known to engage in effective methods of nest thermoregulation during periods of varying temperature in both summer and winter. During the summer, this is achieved through fanning and evaporation of water collected in the field.
Pollination
Of all the honey bee species, only A. mellifera has been used extensively for commercial pollination of fruit and vegetable crops. The scale of these pollination services is commonly measured in the billions of dollars, credited with adding about 9% to the value of crops across the world. However, despite contributing substantially to crop pollination, there is debate about the potential spillover to natural landscapes and competition between managed honey bees and many of the ~20,000 species of wild pollinators.
Species of Apis are generalist floral visitors and pollinate many species of flowering plants, but because of their "generalized" nature, they often do so inefficiently. Without specialized adaptations for specific flowers, their ability to reach pollen and nectar is often limited. This, combined with their behavioural flexibility, may be why they are the most commonly documented pollen thieves. Indeed, for plant species with more specialized pollinators, experiments show that increased honey bee visitation can actually reduce pollination, both where honey bees are non-native and even where they are native. Moreover, their tendency to visit all species in a given area means that the pollen they carry for any one species is often very diluted. As such, they can provide some pollination to many plants, but most plants have some native pollinator that is more effective at pollinating that species. When honey bees are present as an invasive species in an area, they compete for flowers with native pollinators and can crowd out the native species.
Claims of human dependency
Western honey bees have been described as essential to human food production, leading to claims that without their pollination humanity would starve or die out. Apples, blueberries, and cherries, for example, are 90 percent dependent on honeybee pollination. Albert Einstein is sometimes misquoted as saying "If bees disappeared off the face of the earth, man would only have four years left to live". Einstein did not say this and there is no science to support this prediction.
Many important crops need no insect pollination at all. The ten most important crops, comprising 60% of all human food energy, fall into this category: plantains are sterile and propagated by cuttings, as is cassava; potatoes, yams, and sweet potatoes are root vegetables propagated by tubers; soybeans are self-pollinated; and rice, wheat, sorghum, and maize are wind-pollinated, as are most other grasses.
No crops originating in the New World depend on the western honey bee (Apis mellifera) at all, as the bee is an invasive species brought over with colonists in the last few centuries. Tomatoes, peppers, squash, and all other New World crops evolved with native pollinators such as squash bees, bumble bees, and other native bees. The stingless bees kept by indigenous peoples of the Americas are only distant relatives of the honey bees, in the genus Melipona.
Still, honey bees are considered "crucial to the food supply, pollinating more than 100 of the crops we eat, including nuts, vegetables, berries, citrus and melons." The USDA reports "Three-fourths of the world’s flowering plants and about 35 percent of the world’s food crops depend on animal pollinators to reproduce" and honey bees "pollinate 80 percent of all flowering plants, including more than 130 types of fruits and vegetables."
Nutrition
Honey bees obtain all of their nutritional requirements from a diverse combination of pollen and nectar. Pollen is the only natural protein source for honey bees. Adult worker honey bees consume 3.4–4.3 mg of pollen per day to meet a dry matter requirement of 66–74% protein. Rearing one larva requires 125–187.5 mg of pollen, or 25–37.5 mg of protein, for proper development. Dietary proteins are broken down into amino acids, ten of which are considered essential to honey bees: methionine, tryptophan, arginine, lysine, histidine, phenylalanine, isoleucine, threonine, leucine, and valine. Of these amino acids, honey bees require the highest concentrations of leucine, isoleucine, and valine; however, elevated concentrations of arginine and lysine are required for brood rearing. In addition to these amino acids, some B vitamins, including biotin, folic acid, nicotinamide, riboflavin, thiamine, pantothenate and, most importantly, pyridoxine, are required to rear larvae. Pyridoxine is the most prevalent B vitamin found in royal jelly, and its concentration varies throughout the foraging season, with the lowest concentrations found in May and the highest in July and August. Honey bees lacking dietary pyridoxine were unable to rear brood.
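As a consistency check on those larval figures (plain arithmetic, not an additional source), both ends of the stated ranges correspond to pollen that is about 20% protein by mass:

$$\frac{25\ \text{mg protein}}{125\ \text{mg pollen}} = \frac{37.5\ \text{mg protein}}{187.5\ \text{mg pollen}} = 0.20$$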
Pollen is also a lipid source for honey bees, with lipid content ranging from 0.8% to 18.9%. Lipids are metabolized during the brood stage into precursors required for future biosynthesis. The fat-soluble vitamins A, D, E, and K are not considered essential but have been shown to significantly improve the number of brood reared. Honey bees ingest phytosterols from pollen to produce 24-methylenecholesterol and other sterols, as they cannot directly synthesize cholesterol from phytosterols. Nurse bees have the ability to selectively transfer sterols to larvae through brood food.
Nectar is collected by foraging worker bees as a source of water and carbohydrates in the form of sucrose. The dominant monosaccharides in honey bee diets are fructose and glucose, but the most common circulating sugar in hemolymph is trehalose, a disaccharide consisting of two glucose molecules. Adult worker honey bees require 4 mg of utilizable sugars per day, and larvae require about 59.4 mg of carbohydrates for proper development.
Honey bees require water to maintain osmotic homeostasis, prepare liquid brood food, and to cool the hive through evaporation. A colony's water needs can generally be met by nectar foraging as it has high water content. Occasionally on hot days or when nectar is limited, foragers will collect water from streams or ponds to meet the needs of the hive.
Beekeeping
The only domesticated species of honey bee are A. mellifera and A. cerana, and they are often maintained, fed, and transported by beekeepers. In Japan, where A. mellifera is vulnerable to local hornets and disease, the Japanese honey bee A. cerana japonica is used in its place. Modern hives also enable beekeepers to transport bees, moving from field to field as the crop needs pollinating and allowing the beekeeper to charge for the pollination services they provide, revising the historical role of the self-employed beekeeper, and favoring large-scale commercial operations. Bees of various types other than honey bees are also domesticated and used for pollination or other means around the world, including Tetragonula iridipennis in India, the blue orchard bee for tree nut and fruit pollination in the United States, and a number of species of Bombus (bumblebees) for pollination in various regions globally, such as tomatoes, which are not effectively pollinated by honey bees.
Colony collapse disorder
Primarily in places where western honey bees were imported by humans, periodic collapses in western honey bee populations have occurred at least since the late 19th century.
However, as humans continued to manipulate the western honey bee and deliberately transferred them on a global scale, diseases simultaneously spread and harmed managed colonies. Colony losses have occurred periodically throughout history. Fungus, mites, and starvation have all been thought to be the cause of the deaths. Limited occurrences resembling CCD were documented as early as 1869. Colony collapses were called "May Disease" in Colorado in 1891 and 1896.
Starting in the first decade of the 21st century, abnormally high die-offs (30–70% of hives) of western honey bee colonies have occurred in North America. This has been dubbed "colony collapse disorder" (CCD) and was at first unexplained. It seems to be caused by a combination of factors rather than a single pathogen or poison, possibly including neonicotinoid pesticides or Israeli acute paralysis virus.
A survey by the University of Maryland and Auburn University published in 2023 found that the number of United States honey bee colonies "remained relatively stable", although 48% of colonies were lost in the year ending April 1, 2023, against a 12-year average annual mortality rate of 39.6%. The loss was 39% in 2021–2022 and 50.8% in 2020–2021. Beekeepers told the surveying scientists that a 21% loss over the winter is acceptable, and more than three-fifths of those surveyed said their 2022–2023 losses were higher than that.
Parasites
Acarapis woodi
Acarapis woodi (or "tracheal mites") are parasitic mites which live and reproduce in adult bees' tracheae, or respiratory tubes, piercing the tube walls with their mouthparts to feed on haemolymph. To infest new hosts, the mites must find newly emerged bees; after three days, the bristles (setae) guarding the spiracles are firm enough to prevent the mites' entry into the tracheae. Mite infestations are known as acarine, and have been called "Isle of Wight disease".
Galleria mellonella
Larval stages of the moth Galleria mellonella parasitize both wild and cultivated honey bees, in particular Apis mellifera and Apis cerana. Eggs are laid within the hive, and the larvae that hatch tunnel through and destroy the honeycombs that contain bee larvae and their honey stores. The tunnels they create are lined with silk, which entangles and starves emerging bees. Destruction of honeycombs also results in honey leaking and being wasted. Both G. mellonella adults and larvae are possible vectors for pathogens that can infect bees, including the Israeli acute paralysis virus and the black queen cell virus.
To manage the moth, temperature treatments are possible, but these also distort the wax of the honeycombs. Chemical fumigants, particularly CO2, are also used.
Varroa mites
Varroa mites are arguably the biggest threat to honey bees in the United States. These mites invade hives and reproduce by laying eggs on pupae. The hatching mites feed on the pupae, causing deformities as well as spreading disease. If not detected and treated early, the mite population may increase to such an extent that the hive succumbs to the diseases and deformities the mites cause. It was widely believed that the mites drank the blood of bees; however, a 2018 study published in PNAS showed that they actually feed on the fat body tissue of live bees, not the blood.
Mite treatment is accomplished by several methods, including treatment strips and acid vaporization.
Bee products
Honey
Honey is the complex substance made when bees ingest nectar, process it, and store it in honeycombs. All living species of Apis have had their honey gathered by indigenous peoples for consumption. A. mellifera and A. cerana are the only species whose honey has been harvested for commercial purposes.
Beeswax
Worker bees of a certain age secrete beeswax from a series of exocrine glands on their abdomens. They use the wax to form the walls and caps of the comb. As with honey, beeswax is gathered by humans for various purposes such as candle making, waterproofing, soap and cosmetics manufacturing, pharmaceuticals, art, furniture polish and more.
Bee bread
Bees collect pollen in their pollen baskets and carry it back to the hive.
Worker bees combine pollen, honey and glandular secretions and allow it to ferment in the comb to make bee bread. The fermentation process releases additional nutrients from the pollen and can produce antibiotics and fatty acids which inhibit spoilage. Bee bread is eaten by nurse bees (younger workers) which produce the protein-rich royal jelly needed by the queen and developing larvae in their hypopharyngeal glands.
In the hive, pollen is used as a protein source necessary during brood-rearing. In certain environments, excess pollen can be collected from the hives of A. mellifera and A. cerana. The product is used as a health supplement. It has been used with moderate success as a source of pollen for hand pollination.
Bees as food
Bee brood – the eggs, larvae or pupae of honey bees – is nutritious and seen as a delicacy in countries such as Indonesia, Mexico, Thailand, and many African countries; it has been consumed since ancient times by the Chinese and Egyptians.
Adult wild honeybees are also consumed as a food in parts of China, including Yunnan. According to a worker at a Yunnan-based specialty restaurant, the bees are best served "deep-fried with salt and pepper", and they are "naturally sweet and tasty". Kellie Schmitt of CNN described the dish as one of "Shanghai's weirdest foods".
Propolis
Propolis is a resinous mixture collected by honey bees from tree buds, sap flows or other botanical sources, which is used as a sealant for unwanted open spaces in the hive. Propolis may cause severe allergic reactions and have adverse interactions with prescription drugs in some individuals. Propolis is also used in wood finishes on string instruments.
Royal jelly
Royal jelly is a honey bee secretion used to nourish the larvae. It is marketed for alleged health benefits, which are unsupported by evidence. It may also cause severe allergic reactions in some individuals.
Sexes and castes
Honey bees have three castes: drones, workers, and queens. Drones are male, while workers and queens are female.
Drones
Drones are typically haploid, having only one set of chromosomes, and exist primarily for reproduction. They are produced by the queen when she chooses not to fertilize an egg, or by an unfertilized laying worker. There are rare instances of diploid drone larvae, a phenomenon that usually arises when there have been more than two generations of brother-sister mating. Sex determination in honey bees is initially governed by a single locus, the complementary sex determiner (csd) gene. Developing bees that are heterozygous at the csd locus develop into females; those that are hemizygous or homozygous at the locus develop into males, the homozygous individuals being the diploid drones. Drones take 24 days to develop and may be produced from summer through to autumn, numbering as many as 500 per hive. They are expelled from the hive during the winter months, when the hive's primary focus is warmth and food conservation. Drones have large eyes used to locate queens during mating flights; they do not defend the hive or kill intruders, and they do not have a stinger.
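The csd rule above amounts to a small decision procedure. The following sketch is illustrative only: the function name and allele labels are invented for this example, and the real locus carries many alleles in natural populations.

    def sex_from_csd(alleles):
        # Illustrative model of honey bee sex determination at the csd locus.
        # `alleles` holds one identifier for a haploid (unfertilized) egg,
        # or two identifiers for a diploid (fertilized) egg.
        if len(alleles) == 1:
            return "male (normal haploid drone)"   # hemizygous -> male
        if alleles[0] == alleles[1]:
            return "male (rare diploid drone)"     # homozygous -> diploid male
        return "female (worker or queen)"          # heterozygous -> female

    print(sex_from_csd(["csd-A"]))           # male (normal haploid drone)
    print(sex_from_csd(["csd-A", "csd-B"]))  # female (worker or queen)
    print(sex_from_csd(["csd-A", "csd-A"]))  # male (rare diploid drone)

Inbreeding raises the chance that a fertilized egg carries two copies of the same allele, which is why diploid drones usually appear only after repeated brother-sister mating.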
Workers
Workers have two sets of chromosomes. They are produced from eggs that the queen has selectively fertilized with stored sperm, and typically develop in 21 days. A typical colony may contain as many as 60,000 worker bees. Workers exhibit a wider range of behaviors than either queens or drones. Their duties change with age in the following order (beginning with cleaning out their own cell after eating through their capped brood cell): feeding brood, receiving nectar, cleaning the hive, guard duty, and foraging. Some workers engage in other specialized behaviors, such as "undertaking" (removing the corpses of their nestmates from inside the hive).
Workers have morphological specializations, including the pollen basket (corbicula), abdominal glands that produce beeswax, brood-feeding glands, and barbs on the sting. Under certain conditions (for example, if the colony becomes queenless), a worker may develop ovaries.
Worker honey bees perform different behavioural tasks that cause them to be exposed to different local environments. The gut microbial composition of workers varies according to the landscape and plant species they forage, such as differences in rapeseed crops, and with different hive tasks, such as nursing or food processing.
Queens
Queen honey bees are created when worker bees feed a single female larva an exclusive diet of a food called "royal jelly". Queens are produced in oversized cells and develop in only 16 days; they differ in physiology, morphology, and behavior from worker bees. In addition to her greater size, the queen has a functional set of ovaries and a spermatheca, which stores and maintains sperm after she has mated. Apis queens practice polyandry, with one female mating with multiple males. The highest documented mating frequency for an Apis queen is in Apis nigrocincta, where queens mate with an extremely high number of males, with observed matings ranging from 42 to 69 drones per queen. The sting of queens is not barbed like a worker's sting, and queens lack the glands that produce beeswax. Once mated, queens may lay up to 2,000 eggs per day. They produce a variety of pheromones that regulate the behavior of workers and help swarms track the queen's location during swarming.
Queen-worker conflict
When a fertile female worker produces drones, a conflict arises between her interests and those of the queen. The worker shares half her genes with her own sons but only one-quarter with her brothers (the queen's sons), so she favours her own offspring over the queen's. The queen, conversely, shares half her genes with her sons and only one-quarter with the sons of fertile female workers. This pits the worker against the queen and against other workers, each trying to maximize reproductive fitness by rearing the offspring most related to them. This relationship leads to a phenomenon known as "worker policing": worker bees that are genetically more related to the queen's sons than to those of the fertile workers patrol the hive and remove worker-laid eggs. Another form of worker-based policing is aggression toward fertile females. Some studies have suggested a queen pheromone may help workers distinguish worker-laid from queen-laid eggs, but others indicate egg viability as the key factor in eliciting the behavior. Worker policing is an example of forced altruism, where the benefits of worker reproduction are minimized and those of rearing the queen's offspring maximized.
In very rare instances workers subvert the policing mechanisms of the hive, laying eggs which are removed at a lower rate by other workers; this is known as anarchic syndrome. Anarchic workers can activate their ovaries at a higher rate and contribute a greater proportion of males to the hive. Although an increase in the number of drones would decrease the overall productivity of the hive, the reproductive fitness of the drones' mother would increase. Anarchic syndrome is an example of selection working in opposite directions at the individual and group levels for the stability of the hive.
Under ordinary circumstances the death (or removal) of a queen increases reproduction in workers, and a significant proportion of workers will have active ovaries in the absence of a queen. The workers of the hive produce a last batch of drones before the hive eventually collapses. Although during this period worker policing is usually absent, in certain groups of bees it continues.
According to kin selection theory, worker policing is not favored if a queen does not mate multiple times. In that case, workers would be related by three-quarters of their genes, and the relatedness balance between the queen's sons and other workers' sons would no longer favor policing: the benefit of policing is negated, and policing is less favored. Experiments confirming this hypothesis have shown a correlation between higher mating rates and increased rates of worker policing in many species of social Hymenoptera.
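The arithmetic behind this prediction can be made explicit; these are the standard haplodiploid relatedness coefficients, not figures from the cited experiments. With a singly mated queen, workers share all of their paternal genes (the father is haploid) and, on average, half of their maternal genes:

$$r_{\text{sister}} = \tfrac{1}{2}\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot 1 = \tfrac{3}{4}, \qquad r_{\text{nephew}} = \tfrac{1}{2}\,r_{\text{sister}} = \tfrac{3}{8} \;>\; \tfrac{1}{4} = r_{\text{brother}}$$

so a worker is more related to another worker's son than to the queen's son, and policing brings no genetic benefit. When the queen mates with many males, most co-workers are half-sisters (r = 1/4), average nephew relatedness falls toward 1/8, below the 1/4 relatedness to brothers, and policing is favored.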
Timeline of reproduction
For Apis mellifera, queens are the central reproducers of their colonies. Although reproduction can occur year-round, it may stop in the late fall due to falling temperatures. If a colony lacks a queen, or if she is unable to reproduce, workers can lay unfertilized eggs that may develop into males. Queens do not reach this role immediately: a queen typically takes 16 days to reach adulthood, plus an additional week before she begins developing and laying eggs. To begin the reproductive cycle of a colony, workers produce queen larvae while simultaneously scouting a site for a new hive. The queen larvae then hatch at the old hive, and the new queens fight one another until only a single queen is left to begin reproducing.
Reproductive strategies
Once a queen matures and is ready to reproduce, she first makes orientation flights to familiarize herself with the area before actually mating. Queens that are ready to mate take between one and six flights, called nuptial flights, across multiple consecutive days. Over the course of her nuptial flights, a queen mates with multiple drones and has little control over the number of times she does so.
The mating behavior of queens is not fully understood, because it takes place in free flight and is therefore difficult to observe despite advances in technology and observation techniques. Mating begins with drones flying to an area the queen is expected to visit and waiting for her to arrive. When the queen appears, she is immediately crowded by drones eager to mate with her. The queen signals that her "sting chamber" is open, which induces a drone to make the physical contact required for mating: a successful drone clasps the queen and releases seminal fluid and spermatozoa into her. Afterwards, part of the drone's reproductive apparatus typically remains inside the queen, apparently serving to deter other drones from mating with her; by blocking subsequent matings, the drone can fertilize a greater share of the queen's eggs. A drone that withdraws from the queen intact is able to mate again, though the chances are slim. In either case, the drone dies within minutes or hours of mating.
Defense
All honey bees live in colonies where the workers sting intruders as a form of defense, and alarmed bees release a pheromone that stimulates the attack response in other bees. The different species of honey bees are distinguished from all other bee species by the possession of small barbs on the sting, but these barbs are found only in the worker bees.
The sting apparatus, including the barbs, may have evolved specifically in response to predation by vertebrates, as the barbs do not usually function (and the sting apparatus does not detach) unless the sting is embedded in fleshy tissue. While the sting can also penetrate the membranes between joints in the exoskeleton of other insects (and is used in fights between queens), in the case of Apis cerana japonica, defense against larger insects such as predatory wasps (e.g. the Asian giant hornet) is usually performed by surrounding the intruder with a mass of defending worker bees, which vibrate their muscles vigorously to raise the temperature of the intruder to a lethal level ("balling"). Heat alone was previously thought to be responsible for killing intruding wasps, but experiments have since demonstrated that the increased temperature, in combination with increased carbon dioxide levels within the ball, produces the lethal effect. This phenomenon is also used to kill a queen perceived as intruding or defective, an action known to beekeepers as 'balling the queen', named for the ball of bees formed.
Defense varies with the bee's habitat. Honey bee species with open combs (e.g., A. dorsata) give would-be predators a warning signal that takes the form of a "wave" spreading as a ripple across a layer of bees densely packed on the surface of the comb when a threat is perceived; it consists of bees momentarily arching their bodies and flicking their wings. In cavity-dwelling species such as Apis cerana, Apis mellifera, and Apis nigrocincta, the entrances to the cavities are guarded, and incoming traffic is checked for intruders. Another act of defense against nest invaders, particularly wasps, is "body shaking", a violent and pendulum-like swaying of the abdomen performed by worker bees.
A 2020 study of Apis cerana in Vietnam found that they use feces and even human urine to defend their hives against raids by hornets (Vespa soror), a strategy not replicated by their European and North American counterparts, though collection and use of feces in nest construction is well-known in stingless bees.
Venom
The stings of honey bees are barbed and therefore embed themselves at the sting site, and the sting apparatus has its own musculature and ganglion, which keep delivering venom even after detachment. The gland that produces the alarm pheromone is also associated with the sting apparatus; the embedded stinger continues to emit alarm pheromone after it has torn loose, attracting other defensive workers to the sting site. The worker dies after the sting becomes lodged and is torn loose from her abdomen. Honey bee venom, known as apitoxin, carries several active components: the most abundant is melittin, and the most biologically active are enzymes, particularly phospholipase A2.
Honey bee venom is under laboratory and clinical research for its potential properties and uses, including reducing the risks of adverse events from bee venom therapy, treating rheumatoid arthritis, and serving as an immunotherapy for protection against allergies from insect stings. Bee venom products are marketed in many countries, but, as of 2018, there were no approved clinical uses for these products, which carry various warnings for potential allergic reactions.
Competition
Where beekeeping increases the number of honey bees in an area, Western honey bees (as an invasive species) and native wild bees often must compete for the limited habitat and food sources available. Western honey bees may also become defensive in response to the seasonal arrival of competition from other colonies, particularly Africanized bees, which may be on the offence and defence year-round because of their tropical origin.
Communication
Honey bees are known to communicate through many different chemicals and odors, as is common in insects. They also rely on a sophisticated dance language that conveys information about the distance and direction to a specific location (typically a nutritional source, e.g., flowers or water). The dance language is also used during the process of colony fission, or swarming, when scouts communicate the location and quality of nesting sites.
The details of the signalling being used vary from species to species; for example, the two smallest species, Apis andreniformis and A. florea, dance on the upper surface of the comb, which is horizontal (not vertical, as in other species), and worker bees orient the dance in the actual compass direction of the resource to which they are recruiting.
Carniolan honey bees (Apis mellifera carnica) use their antennae asymmetrically for social interactions, with a strong lateral preference to use their right antennae.
There has been speculation about honey bee consciousness. Honey bees lack the parts of the brain that humans use for consciousness, such as the cerebral cortex, and indeed the cerebrum itself; however, in humans whose cortex is damaged, the midbrain appears able to support a small amount of consciousness. Honey bees have a tiny structure that appears similar to the human midbrain, so if it functions in the same way, they may be capable of a small amount of simple awareness of their bodies.
Symbolism
The bee was used as a symbol of government by Emperor Napoleon I of France. Both the Hindu Atharva Veda and the ancient Greeks associated lips anointed with honey with the gift of eloquence and even of prescience. The priestess at Delphi was the "Delphic Bee".
The Quran has a Sura (chapter) titled "The Bee". It is named after honey bees, and contains a comparison of the industry and adaptability of honey bees to the industry of man.
In ancient Egyptian mythology, honey bees were believed to be born from the tears of the Sun God, Ra. Because of their divine origin, they were used to represent the Pharaoh. They were also used as a symbol of Lower Egypt in conjunction with the sedge, which represented Upper Egypt.
In Joseph and Asenath, a work composed by ancient Egyptian Jews who may have been affiliated with the Leontopolis temple, bee and honey imagery appears when Asenath converts and is visited by an angel. If the work was indeed connected to the Leontopolis temple, the bees likely represent Levite priests, and the imagery intends to signify the legitimacy of a Jewish temple in Egypt.
A community of honey bees has often been employed by political theorists as a model of human society, from Aristotle and Plato to Virgil. Honey bees, signifying immortality and resurrection, were royal emblems of the Merovingians. The state of Utah is called the "Beehive State", the state emblem is the beehive, the state insect is the honey bee, and a beehive and the word "industry" appear on both the state flag and seal.
| Biology and health sciences | Hymenoptera | Animals |
58264 | https://en.wikipedia.org/wiki/Fig%20wasp | Fig wasp | Fig wasps are wasps of the superfamily Chalcidoidea which spend their larval stage inside fig syconia. Some are pollinators but others simply feed off the plant. The non-pollinators belong to several groups within the superfamily Chalcidoidea, while the pollinators are in the family Agaonidae. Pollinating fig wasps are all gall-makers, non-pollinating fig wasps either make their own galls or usurp the galls of other fig wasps; reports of their being parasitoids are considered dubious.
History
Aristotle recorded in his History of Animals that the fruits of the wild fig (the caprifig) contain psenes (fig wasps); these begin life as grubs (larvae), and the adult psen splits its "skin" (pupa) and flies out of the fig to find and enter a cultivated fig, saving it from dropping. He believed that the psen was generated spontaneously; he did not recognise that the fig was reproducing sexually and that the psen was assisting in that process.
Taxonomy
The fig wasps are a polyphyletic group, including several lineages whose similarities are based upon their shared association with figs. In 2022, family Agaonidae was updated to include only the pollinating fig wasps. Other fig wasps are now included in the families Epichrysomallidae, Eurytomidae, Melanosomellidae, Ormyridae, Pteromalidae, and Torymidae.
Morphological adaptations
In the Agaonidae, the female (as in most Hymenoptera) has four wings, whereas the males are wingless. The primary functions of agaonid males are to mate with the females while still within the fig syconium (inverted flower) and to chew a hole for the females to escape from the fig interior. This is the reverse of sex-linked functions in Strepsiptera and bagworms, where the male has wings and the female never leaves the host.
The non-pollinating fig wasps have developed several impressive morphological adaptations in order to oviposit eggs within the fig syconium. Many species have extremely long ovipositors, so that they can deposit eggs from the outside of the syconium (Subtribe Sycoryctina of Otitesellini and Subfamily Sycophaginae). Others have evolved to enter the syconium in the same way as the Agaonidae, and now resemble the pollinators morphologically (Subtribe Sycoecina of Otitesellini).
Most figs (more than 600 species) have syconia that contain three types of flowers: male, short female, and long female. Female fig wasps can reach the ovaries of short female flowers with their ovipositors, but not long female flowers. Thus, the short female flowers grow wasps, and the long flowers only seeds. Contrary to popular belief, ripe figs are not full of dead wasps and the "crunchy bits" in the fruit are only seeds. The fig actually produces an enzyme called ficain (also known as ficin) which digests the dead wasps and the fig absorbs the nutrients to create the ripe fruits and seeds. Several commercial and ornamental varieties of fig are parthenocarpic and do not require pollination to produce (sterile) fruits; these varieties need not be visited by fig wasps to bear fruit.
Life cycle
The life cycle of the fig wasp is closely intertwined with that of the fig tree it inhabits. The wasps that inhabit a particular tree can be divided into two groups; pollinating and non-pollinating. The pollinating wasps are part of an obligate nursery pollination mutualism with the fig tree, while the non-pollinating wasps feed off the plant without benefiting it. The life cycles of the two groups, however, are similar.
Though the lives of individual species differ, a typical pollinating fig wasp life cycle is as follows. At the beginning of the cycle, a mated mature female pollinator wasp enters the immature "fruit" (actually a stem-like structure known as a syconium) through a small natural opening (the ostiole) and deposits her eggs in the cavity.
Forcing her way through the ostiole, the mated mature female often loses her wings and most of her antennae. To facilitate her passage through the ostiole, the underside of the female's head is covered with short spines that provide purchase on the walls of the ostiole.
In depositing her eggs, the female also deposits pollen she picked up from her original host fig. This pollinates some of the female flowers on the inside surface of the fig and allows them to mature. After the female wasp lays her eggs and follows through with pollination, she dies.
After pollination, there are several species of non-pollinating wasps that deposit their eggs before the figs harden. These wasps act as parasites to either the fig or possibly the pollinating wasps.
As the fig develops, the wasp eggs hatch and develop into larvae. After going through the pupal stage, a mature male's first act is to mate with a female while she is still inside her gall, so females emerge already mated. The males of many species lack wings and cannot survive outside the fig for a sustained period. After mating, a male wasp begins to dig out of the fig, creating a tunnel through which the females escape.
Once out of the fig, the male wasps quickly die. The females find their way out, picking up pollen as they do. They then fly to another tree of the same species, where they deposit their eggs and allow the cycle to begin again.
Coevolution
The fig–wasp mutualism originated between 70 and 90 million years ago as the product of a unique evolutionary event. Since then, cocladogenesis and coadaptation on a coarse scale between wasp genera and fig sections have been demonstrated by both morphological and molecular studies. This illustrates the tendency towards coradiation of figs and wasps. Such strict cospeciation should result in identical phylogenetic trees for the two lineages and recent work mapping fig sections onto molecular phylogenies of wasp genera and performing statistical comparisons has provided strong evidence for cospeciation at that scale.
Groups of genetically well-defined pollinator wasp species coevolve in association with groups of genetically poorly defined figs. The constant hybridization of the figs promotes the constant evolution of new pollinator wasp species. Host switching and pollinator host sharing may contribute to the incredible diversity of figs and fig wasp species like Pegoscapus as they result in hybridization and introgression.
Genera
Fig wasp genera and classification:
Agaonidae
Agaoninae
Agaon
Alfonsiella
Allotriozoon
Blastophaga
Courtella
Deilagaon
Dolichoris
Elisabethiella
Eupristina
Nigeriella
Paragaon
Pegoscapus
Platyscapa
Pleistodontes
Waterstoniella
Wiebesia
Kradibiinae
Ceratosolen
Kradibia
Tetrapusiinae
Tetrapus
Epichrysomallidae
Acophila
Asycobia
Camarothorax
Epichrysomalla
Eufroggattia
Herodotia
Lachaisea
Meselatus
Neosycophila
Odontofroggatia
Parapilkhanivora
Sycobia
Sycobiomorphella
Sycomacophila
Sycophilodes
Sycophilomorpha
Sycotetra
Pteromalidae
Colotrechinae
Podvina
Pteromalinae
Adiyodiella
Apocrypta
Arachonia
Bouceka
Comptoniella
Critogaster
Crossogaster
Diaziella
Dobunabaa
Eujacobsonia
Ficicola
Gaudalia
Grandiana
Grasseiana
Hansonita
Lipothymus
Marginalia
Micranisa
Micrognathophora
Otitesella
Parasycobia
Philocaenus
Philosycus
Philosycella
Philotrypesis
Philoverdance
Robertsia
Seres
Sycoecus
Sycoscapter
Walkerella
Watshamiella
Sycophaginae
Anidarnes
Eukoebelea
Idarnes
Pseudidarnes
Sycophaga
Ormyridae
Ormyrus
Eurytomidae
Bruchophagus
Eurytoma
Ficomila
Syceurytoma
Sycophila
Torymidae
Megastigmus
Physothorax
Torymus
Museum collections
One of the world's major fig wasp collections resides in Leeds Museums and Galleries' Discovery Centre, and was collected by Dr. Steve Compton.
| Biology and health sciences | Hymenoptera | Animals |
58288 | https://en.wikipedia.org/wiki/Louse | Louse | Louse (: lice) is the common name for any member of the clade Phthiraptera, which contains nearly 5,000 species of wingless parasitic insects. Phthiraptera has variously been recognized as an order, infraorder, or a parvorder, as a result of developments in phylogenetic research.
Lice are obligate parasites, living externally on warm-blooded hosts which include every species of bird and mammal, except for monotremes, pangolins, and bats. Lice are vectors of diseases such as typhus.
Chewing lice live among the hairs or feathers of their host and feed on skin and debris, whereas sucking lice pierce the host's skin and feed on blood and other secretions. They usually spend their whole life on a single host, cementing their eggs, called nits, to hairs or feathers. The eggs hatch into nymphs, which moult three times before becoming fully grown, a process that takes about four weeks.
Genetic evidence indicates that lice are a highly modified lineage of Psocoptera (now called Psocodea), commonly known as booklice, barklice or barkflies. The oldest known fossil lice are from the Cretaceous.
Humans host two species of louse—the head louse and the body louse are subspecies of Pediculus humanus; and the pubic louse, Pthirus pubis. The body louse has the smallest genome of any known insect; it has been used as a model organism and has been the subject of much research.
Lice were ubiquitous in human society until at least the Middle Ages. They appear in folktales, songs such as The Kilkenny Louse House, and novels such as James Joyce's Finnegans Wake. They commonly feature in the psychiatric disorder delusional parasitosis. A louse was one of the early subjects of microscopy, appearing in Robert Hooke's 1667 book, Micrographia.
Morphology and diversity
Lice are divided into two groups: sucking lice, which obtain their nourishment from feeding on the sebaceous secretions and body fluids of their host; and chewing lice, which are scavengers, feeding on skin, fragments of feathers or hair, and debris found on the host's body. Many lice are specific to a single species of host and have co-evolved with it. In some cases, they live on only a particular part of the body. Some animals are known to host up to fifteen different species, although one to three is typical for mammals, and two to six for birds. Lice generally cannot survive for long if removed from their host. If their host dies, lice can opportunistically use phoresis to hitch a ride on a fly and attempt to find a new host.
Sucking lice range in length from . They have narrow heads and oval, flattened bodies. They have no ocelli, and their compound eyes are reduced in size or absent. Their antennae are short with three to five segments, and their mouthparts, which are retractable into their head, are adapted for piercing and sucking. There is a cibarial pump at the start of the gut; it is powered by muscles attached to the inside of the cuticle of the head. The mouthparts consist of a proboscis which is toothed, and a set of stylets arranged in a cylinder inside the proboscis, containing a salivary canal (ventrally) and a food canal (dorsally). The thoracic segments are fused, the abdominal segments are separate, and there is a single large claw at the tip of each of the six legs.
Chewing lice are also flattened and can be slightly larger than sucking lice, ranging in length from . They are similar to sucking lice in form but the head is wider than the thorax and all species have compound eyes. There are no ocelli and the mouthparts are adapted for chewing. The antennae have three to five segments and are slender in the suborder Ischnocera, but club-shaped in the suborder Amblycera. The legs are short and robust, and terminated by one or two claws. Some species of chewing lice house symbiotic bacteria in bacteriocytes in their bodies. These may assist in digestion because if the insect is deprived of them, it will die.
Lice are usually cryptically coloured to match the fur or feathers of the host. A louse's colour varies from pale beige to dark grey; however, if feeding on blood, it may become considerably darker.
Female lice are usually more common than males, and some species are parthenogenetic, with young developing from unfertilized eggs. A louse's egg is commonly called a nit. Many lice attach their eggs to their hosts' hair with specialized saliva; the saliva/hair bond is very difficult to sever without specialized products. Lice inhabiting birds, however, may simply leave their eggs in parts of the body inaccessible to preening, such as the interior of feather shafts. Living louse eggs tend to be pale whitish, whereas dead louse eggs are yellower. Lice are exopterygotes, being born as miniature versions of the adult, known as nymphs. The young moult three times before reaching the final adult form, usually within a month after hatching.
Humans host three different kinds of lice: head lice, body lice, and pubic lice. Head lice and body lice are subspecies of Pediculus humanus, and pubic lice are a separate species, Pthirus pubis. Lice infestations can be controlled with lice combs, and medicated shampoos or washes.
Ecology
The average number of lice per host tends to be higher in large-bodied bird species than in small ones. Lice have an aggregated distribution across bird individuals, i.e. most lice live on a few birds, while most birds are relatively free of lice. This pattern is more pronounced in territorial than in colonial—more social—bird species.
Host organisms that dive under water to feed on aquatic prey harbour fewer taxa of lice.
Bird taxa that are capable of exerting stronger antiparasitic defence—such as stronger T cell immune response or larger uropygial glands—harbour more taxa of Amblyceran lice than others.
Reductions in the size of host populations may cause a long-lasting reduction of louse taxonomic richness, for example, birds introduced into New Zealand host fewer species of lice there than in Europe. Louse sex ratios are more balanced in more social hosts and more female-biased in less social hosts, presumably due to the stronger isolation among louse subpopulations (living on separate birds) in the latter case. The extinction of a species results in the extinction of its host-specific lice. Host-switching is a random event that would seem very rarely likely to be successful, but speciation has occurred over evolutionary time-scales so it must be successfully accomplished sometimes.
Lice may reduce host life expectancy if the infestation is heavy, but most seem to have little effect on their host. The habit of dust bathing in domestic hens is probably an attempt by the birds to rid themselves of lice. Lice may transmit microbial diseases and helminth parasites, but most individuals spend their whole life cycle on a single host and are only able to transfer to a new host opportunistically. Ischnoceran lice may reduce the thermoregulation effect of the plumage; thus heavily infested birds lose more heat than others.
Lice infestation is a disadvantage in the context of sexual rivalry.
Evolution
Phthiraptera lice are members of Psocodea (formerly Psocoptera), the order that contains booklice, barklice and barkflies. Within Psocodea, lice are within the suborder Troctomorpha, and most closely related to the family Liposcelididae. The oldest confirmed fossil louse is Archimenopon myanmarensis, an amblyceran from the Cretaceous amber from Myanmar. Another early representative of the group is a bird louse, Megamenopon rasnitsyni, from Eckfelder Maar, Germany, which dates to the Eocene, around 44 million years ago. Saurodectes vrsanskyi from the Early Cretaceous (Aptian) Zaza Formation of Buryatia, Russia, has also been suggested to be a louse, but this is tentative.
Placental mammal lice had a single common ancestor that lived on Afrotheria with this arising from host-switching from an ancient avian host.
Cladogram showing the position of Phthiraptera within Psocodea (figure not reproduced).
Classification
Phthiraptera is clearly a monophyletic grouping, united as the members are by a number of derived features including their parasitism on warm-blooded vertebrates and the combination of their metathoracic ganglia with their abdominal ganglia to form a single ventral nerve junction. The order has traditionally been divided into two suborders, the sucking lice (Anoplura) and the chewing lice (Mallophaga); however, subsequent classifications suggest that the Mallophaga are paraphyletic and four suborders were then recognized:
Anoplura: sucking lice, occurring on mammals exclusively
Rhynchophthirina: parasites of elephants and warthogs
Ischnocera: mostly avian chewing lice, with one family parasitizing mammals
Amblycera: a primitive suborder of chewing lice, widespread on birds, and also occurring on South American and Australian mammals
Upon finding that Phthiraptera was nested within Psocoptera, de Moya et al. in 2021 proposed reducing the rank of Phthiraptera to infraorder and the four suborders to parvorders. These changes were accepted by Psocodea Species File and others, except that Phthiraptera was instead placed as a parvorder under the infraorder Nanopsocetae, with the four subgroups listed above ranked beneath it. These classifications are likely to change in the future as a result of ongoing phylogenetic research.
Nearly 5,000 species of louse have been identified, about 4,000 being parasitic on birds and 800 on mammals. Lice are present on every continent in all the habitats that their host animals occupy. They are found even in the Antarctic, where penguins carry 15 species of lice (in the genera Austrogonoides and Nesiotinus). The oldest known record of the group is Megamenopon rasnitsyni from the Eocene of Germany, but it is essentially a modern form, belonging to Amblycera, so the group as a whole likely has an origin in the Mesozoic.
Phylogeny
Lice have been the subject of significant DNA research in the 2000s that led to discoveries on human evolution. The three species of sucking lice that parasitize human beings belong to two genera, Pediculus and Pthirus: head lice (Pediculus humanus capitis), body lice (Pediculus humanus humanus), and pubic lice (Pthirus pubis). Human head and body lice (genus Pediculus) share a common ancestor with chimpanzee lice, while pubic lice (genus Pthirus) share a common ancestor with gorilla lice. Using phylogenetic and cophylogenetic analysis, Reed et al. hypothesized that Pediculus and Pthirus are sister taxa and monophyletic; in other words, the two genera descended from the same common ancestor. The age of divergence between Pediculus and its common ancestor is estimated at 6-7 million years ago, matching the age predicted by the chimpanzee-hominid divergence. Because parasites rely on their hosts, host-parasite cospeciation events are likely.
Genetic evidence suggests that human ancestors acquired pubic lice from gorillas approximately 3-4 million years ago. Unlike the genus Pediculus, the divergence in Pthirus does not match the age of host divergence, which likely occurred 7 million years ago. Reed et al. propose a Pthirus species host switch around 3-4 million years ago. While it is difficult to determine whether a parasite-host switch occurred in evolutionary history, this explanation is the most parsimonious (containing the fewest evolutionary changes).
Additionally, the DNA differences between head lice and body lice provide corroborating evidence that humans used clothing between 80,000 and 170,000 years ago, before leaving Africa. Human head and body lice occupy distinct ecological zones: head lice live and feed on the scalp, while body lice live on clothing and feed on the body. Because body lice require clothing to survive, the divergence of head and body lice from their common ancestor provides an estimate of the date of introduction of clothing in human evolutionary history.
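The underlying dating logic is ordinary molecular-clock arithmetic: divergence time is the pairwise sequence divergence divided by twice the per-lineage substitution rate, since both lineages accumulate changes independently. A minimal sketch in Python, with deliberately invented numbers (the cited studies used calibrated rates and multiple loci):

    # Molecular-clock sketch; d and mu are hypothetical placeholders.
    def divergence_time(d, mu):
        # d: pairwise divergence (substitutions per site)
        # mu: substitution rate per site per year, per lineage
        return d / (2 * mu)

    d = 0.002   # hypothetical 0.2% divergence between two louse lineages
    mu = 1e-8   # hypothetical substitution rate
    print(f"{divergence_time(d, mu):,.0f} years")  # 100,000 years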
The mitochondrial genomes of the human body louse (Pediculus humanus humanus), the head louse (Pediculus humanus capitis), and the pubic louse (Pthirus pubis) fragmented into a number of minichromosomes at least seven million years ago. Analysis of mitochondrial DNA in human body and head lice reveals greater genetic diversity in African than in non-African lice. Human lice can also shed light on human migratory patterns in prehistory. The dominant theory of anthropologists regarding human migration is the Out of Africa hypothesis. Genetic diversity accumulates over time, and mutations occur at a relatively constant rate; because there is more genetic diversity in African lice, the lice and their human hosts must have existed in Africa before anywhere else.
In human culture
In social history
Lice have been intimately associated with human society throughout history. In the Middle Ages, they were essentially ubiquitous. At the death of Thomas Becket, Archbishop of Canterbury in 1170, it was recorded that "The vermin boiled over like water in a simmering cauldron, and the onlookers burst into alternate weeping and laughing". The clergy often saw lice and other parasites as a constant reminder of human frailty and weakness. Monks and nuns would purposely ignore grooming themselves and suffer from infestations to express their religious devotion. A mediaeval treatment for lice was an ointment made from pork grease, incense, lead, and aloe.
Robert Hooke's 1667 book, Micrographia: or some physiological descriptions of minute bodies made by magnifying glasses with observations and Inquiries thereupon, illustrated a human louse, drawn as seen down an early microscope.
Margaret Cavendish's satirical The Description of a New World, Called The Blazing-World (1668) has "Lice-men" as "mathematicians", investigating nature by trying to weigh the air like the real scientist Robert Boyle.
In 1935 the Harvard medical researcher Hans Zinsser wrote the book Rats, Lice and History, alleging that both body and head lice transmit typhus between humans. Despite this, the modern view is that only the body louse can transmit the disease.
Soldiers in the trenches of the First World War suffered severely from lice, and the typhus they carried. The Germans boasted that they had lice under effective control, but themselves suffered badly from lice in the Second World War on the Eastern Front, especially in the Battle of Stalingrad. "Delousing" became a euphemism for the extermination of Jews in concentration camps such as Auschwitz under the Nazi regime.
In the psychiatric disorder delusional parasitosis, patients express a persistent irrational fear of animals such as lice and mites, imagining that they are continually infested and complaining of itching, with "an unshakable false belief that live organisms are present in the skin".
In science
The human body louse Pediculus humanus humanus has (as of 2010) the smallest known insect genome. This louse can transmit certain diseases, while the human head louse (P. h. capitis), to which it is closely related, cannot. With their simple life history and small genomes, the pair make ideal model organisms to study the molecular mechanisms behind pathogen transmission and vector competence.
In literature and folklore
James Joyce's 1939 book Finnegans Wake has the character Shem the Penman infested with "foxtrotting fleas, the lieabed lice, ... bats in his belfry".
Clifford E. Trafzer's A Chemehuevi Song: The Resilience of a Southern Paiute Tribe retells the story of Sinawavi (Coyote)'s love for Poowavi (Louse). Her eggs are sealed in a basket woven by her mother, who gives it to Coyote, instructing him not to open it before he reaches home. Hearing voices coming from it, however, Coyote opens the basket and the people, the world's first human beings, pour out of it in all directions.
The Irish songwriter John Lyons (b. 1934) wrote the popular song The Kilkenny Louse House. The song contains the lines "Well we went up the stairs and we put out the light, Sure in less than five minutes, I had to show fight. For the fleas and the bugs they collected to march, And over me stomach they formed a great arch". It has been recorded by Christie Purcell (1952), Mary Delaney on From Puck to Appleby (2003), and the Dubliners on Double Dubliners (1972) among others.
Robert Burns dedicated a poem to the louse, inspired by witnessing one on a lady's bonnet in church: "Ye ugly, creepin, blastid wonner, Detested, shunn'd, by saint and sinner, How dare ye set your fit upon her, sae fine lady! Gae somewhere else, and seek your dinner on some poor body." John Milton in Paradise Lost mentioned the biblical plague of lice visited upon pharaoh: "Frogs, lice, and flies must all his palace fill with loathed intrusion, and filled all the land." John Ray recorded a Scottish proverb, "Gie a beggar a bed and he'll repay you with a Louse."
In Shakespeare's Troilus and Cressida, Thersites compares Menelaus, brother of Agamemnon, to a louse: "Ask me not what I would be, if I were not Thersites; for I care not to be the louse of a lazar, so I were not Menelaus."
Woodlouse
The name woodlouse or wood-louse is given to crustaceans of the suborder Oniscidea within the order Isopoda, unrelated to lice. The Oxford English Dictionary's earliest citation of this usage is from 1611.
| Biology and health sciences | Insects and other hexapods | null |
58358 | https://en.wikipedia.org/wiki/Tryptophan | Tryptophan | Tryptophan (symbol Trp or W) is an α-amino acid that is used in the biosynthesis of proteins. Tryptophan contains an α-amino group, an α-carboxylic acid group, and a side chain indole, making it a polar molecule with a non-polar aromatic beta carbon substituent. Tryptophan is also a precursor to the neurotransmitter serotonin, the hormone melatonin, and vitamin B3 (niacin). It is encoded by the codon UGG.
Like other amino acids, tryptophan is a zwitterion at physiological pH, where the amino group is protonated (–NH3+; pKa = 9.39) and the carboxylic acid is deprotonated (–COO−; pKa = 2.38).
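Because the indole side chain does not ionize under physiological conditions, the isoelectric point follows directly from these two constants as their average: pI = (pKa1 + pKa2) / 2 = (2.38 + 9.39) / 2 ≈ 5.89.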
Humans and many animals cannot synthesize tryptophan: they need to obtain it through their diet, making it an essential amino acid.
Tryptophan is named after the digestive enzyme trypsin, which was used in its first isolation from casein proteins. It was assigned the one-letter symbol W because the double ring is visually suggestive of the bulky letter.
Function
Amino acids, including tryptophan, are used as building blocks in protein biosynthesis, and proteins are required to sustain life. Tryptophan is among the less common amino acids found in proteins, but it plays important structural or functional roles whenever it occurs. For instance, tryptophan and tyrosine residues play special roles in "anchoring" membrane proteins within the cell membrane. Tryptophan, along with other aromatic amino acids, is also important in glycan-protein interactions. In addition, tryptophan functions as a biochemical precursor for the following compounds:
Serotonin (a neurotransmitter), synthesized by tryptophan hydroxylase.
Melatonin (a neurohormone) is in turn synthesized from serotonin, via N-acetyltransferase and 5-hydroxyindole-O-methyltransferase enzymes.
Kynurenine, to which tryptophan is mainly (more than 95%) metabolized. Two enzymes, namely indoleamine 2,3-dioxygenase (IDO) in the immune system and the brain, and tryptophan 2,3-dioxygenase (TDO) in the liver, are responsible for the synthesis of kynurenine from tryptophan. The kynurenine pathway of tryptophan catabolism is altered in several diseases, including psychiatric disorders such as schizophrenia, major depressive disorder, and bipolar disorder.
Niacin, also known as vitamin B3, is synthesized from tryptophan via kynurenine and quinolinic acid.
Auxins (a class of phytohormones) are synthesized from tryptophan.
The disorder fructose malabsorption causes improper absorption of tryptophan in the intestine, reduced levels of tryptophan in the blood, and depression.
In bacteria that synthesize tryptophan, high cellular levels of this amino acid activate a repressor protein, which binds to the trp operon. Binding of this repressor to the tryptophan operon prevents transcription of downstream DNA that codes for the enzymes involved in the biosynthesis of tryptophan. So high levels of tryptophan prevent tryptophan synthesis through a negative feedback loop, and when the cell's tryptophan levels go down again, transcription from the trp operon resumes. This permits tightly regulated and rapid responses to changes in the cell's internal and external tryptophan levels.
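The switching logic described above can be illustrated with a toy discrete-time model (the constants below are invented for illustration and are not a model of real trp operon kinetics):

    # Toy negative-feedback sketch of the trp operon described above.
    trp, repressed = 0.0, False
    for _ in range(60):
        synthesis = 0.0 if repressed else 1.0  # operon transcribed only when free
        trp += synthesis - 0.1 * trp           # production minus use and dilution
        repressed = trp > 5.0                  # high tryptophan activates repressor
    print(round(trp, 2), repressed)

The level climbs while the operon is on, crosses the threshold, switches transcription off, decays, and switches back on, settling near the threshold: the tightly regulated, rapid response described above.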
Recommended dietary allowance
In 2002, the U.S. Institute of Medicine set a Recommended Dietary Allowance (RDA) of 5 mg/kg body weight/day of tryptophan for adults 19 years and over.
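For a 70 kg (154 lb) adult, this works out to 5 mg/kg × 70 kg = 350 mg of tryptophan per day.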
Dietary sources
Tryptophan is present in most protein-based foods or dietary proteins. It is particularly plentiful in chocolate, oats, dried dates, milk, yogurt, cottage cheese, red meat, eggs, fish, poultry, sesame, chickpeas, almonds, sunflower seeds, pumpkin seeds, hemp seeds, buckwheat, spirulina, and peanuts. Contrary to the popular belief that cooked turkey contains an abundance of tryptophan, the tryptophan content in turkey is typical of poultry.
Medical use
Depression
Because tryptophan is converted into 5-hydroxytryptophan (5-HTP) which is then converted into the neurotransmitter serotonin, it has been proposed that consumption of tryptophan or 5-HTP may improve depression symptoms by increasing the level of serotonin in the brain. Tryptophan is sold over the counter in the United States (after being banned to varying extents between 1989 and 2005) and the United Kingdom as a dietary supplement for use as an antidepressant, anxiolytic, and sleep aid. It is also marketed as a prescription drug in some European countries for the treatment of major depression. There is evidence that blood tryptophan levels are unlikely to be altered by changing the diet, but consuming purified tryptophan increases the serotonin level in the brain, whereas eating foods containing tryptophan does not.
In 2001 a Cochrane review of the effect of 5-HTP and tryptophan on depression was published. The authors included only studies of a high rigor and included both 5-HTP and tryptophan in their review because of the limited data on either. Of 108 studies of 5-HTP and tryptophan on depression published between 1966 and 2000, only two met the authors' quality standards for inclusion, totaling 64 study participants. The substances were more effective than placebo in the two studies included but the authors state that "the evidence was of insufficient quality to be conclusive" and note that "because alternative antidepressants exist which have been proven to be effective and safe, the clinical usefulness of 5-HTP and tryptophan is limited at present". The use of tryptophan as an adjunctive therapy in addition to standard treatment for mood and anxiety disorders is not supported by the scientific evidence.
Insomnia
The American Academy of Sleep Medicine's 2017 clinical practice guidelines recommended against the use of tryptophan in the treatment of insomnia due to poor effectiveness.
Side effects
Potential side effects of tryptophan supplementation include nausea, diarrhea, drowsiness, lightheadedness, headache, dry mouth, blurred vision, sedation, euphoria, and nystagmus (involuntary eye movements).
Interactions
Tryptophan taken as a dietary supplement (such as in tablet form) has the potential to cause serotonin syndrome when combined with antidepressants of the MAOI or SSRI class or other strongly serotonergic drugs. Because tryptophan supplementation has not been thoroughly studied in a clinical setting, its interactions with other drugs are not well known.
Isolation
The isolation of tryptophan was first reported by Frederick Hopkins in 1901. Hopkins isolated tryptophan from hydrolysed casein, recovering 4–8 g of tryptophan from 600 g of crude casein.
Biosynthesis and industrial production
As an essential amino acid, tryptophan is not synthesized from simpler substances in humans and other animals, so it needs to be present in the diet in the form of tryptophan-containing proteins. Plants and microorganisms commonly synthesize tryptophan from shikimic acid or anthranilate: anthranilate condenses with phosphoribosylpyrophosphate (PRPP), generating pyrophosphate as a by-product. The ring of the ribose moiety is opened and subjected to reductive decarboxylation, producing indole-3-glycerol phosphate; this, in turn, is transformed into indole. In the last step, tryptophan synthase catalyzes the formation of tryptophan from indole and the amino acid serine.
The industrial production of tryptophan is also biosynthetic and is based on the fermentation of serine and indole using either wild-type or genetically modified bacteria such as B. amyloliquefaciens, B. subtilis, C. glutamicum or E. coli. These strains carry mutations that prevent the reuptake of aromatic amino acids or multiple/overexpressed trp operons. The conversion is catalyzed by the enzyme tryptophan synthase.
Society and culture
Showa Denko contamination scandal
There was a large outbreak of eosinophilia-myalgia syndrome (EMS) in the U.S. in 1989, with more than 1,500 cases reported to the CDC and at least 37 deaths. After preliminary investigation revealed that the outbreak was linked to intake of tryptophan, the U.S. Food and Drug Administration (FDA) recalled tryptophan supplements in 1989 and banned most public sales in 1990, with other countries following suit.
Subsequent studies suggested that EMS was linked to specific batches of L-tryptophan supplied by a single large Japanese manufacturer, Showa Denko. It eventually became clear that recent batches of Showa Denko's L-tryptophan were contaminated by trace impurities, which were subsequently thought to be responsible for the 1989 EMS outbreak. However, other evidence suggests that tryptophan itself may be a potentially major contributory factor in EMS. There are also claims that a precursor reached sufficient concentrations to form a toxic dimer.
The FDA loosened its restrictions on sales and marketing of tryptophan in February 2001, but continued to limit the importation of tryptophan not intended for an exempted use until 2005.
The fact that the Showa Denko facility used genetically engineered bacteria to produce the contaminated batches of L-tryptophan later found to have caused the outbreak of eosinophilia-myalgia syndrome has been cited as evidence of a need for "close monitoring of the chemical purity of biotechnology-derived products". Those calling for purity monitoring have, in turn, been criticized as anti-GMO activists who overlook possible non-GMO causes of contamination and threaten the development of biotech.
Turkey meat and drowsiness hypothesis
A common assertion in the US and the UK is that heavy consumption of turkey meat—as seen during Thanksgiving and Christmas—results in drowsiness, due to high levels of tryptophan contained in turkey. However, the amount of tryptophan in turkey is comparable with that of other meats. Drowsiness after eating may be caused by other foods eaten with the turkey, particularly carbohydrates. Ingestion of a meal rich in carbohydrates triggers the release of insulin. Insulin in turn stimulates the uptake of large neutral branched-chain amino acids (BCAA), but not tryptophan, into muscle, increasing the ratio of tryptophan to BCAA in the blood stream. The resulting increased tryptophan ratio reduces competition at the large neutral amino acid transporter (which transports both BCAA and aromatic amino acids), resulting in more uptake of tryptophan across the blood–brain barrier into the cerebrospinal fluid (CSF). Once in the CSF, tryptophan is converted into serotonin in the raphe nuclei by the normal enzymatic pathway. The resultant serotonin is further metabolised into the hormone melatonin—which is an important mediator of the circadian rhythm—by the pineal gland. Hence, these data suggest that "feast-induced drowsiness"—or postprandial somnolence—may be the result of a heavy meal rich in carbohydrates, which indirectly increases the production of melatonin in the brain, and thereby promotes sleep.
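A back-of-the-envelope sketch of this ratio effect (Python; the concentrations are invented for illustration, and equal transporter affinity for all large neutral amino acids is a simplifying assumption):

    # Share of transporter capacity going to tryptophan, assuming transport
    # in proportion to concentration among large neutral amino acids (LNAAs).
    def trp_share(trp, other_lnaa):
        return trp / (trp + other_lnaa)

    before = trp_share(trp=50, other_lnaa=500)  # before the meal (arbitrary units)
    after = trp_share(trp=50, other_lnaa=300)   # insulin has cleared some BCAAs
    print(f"{before:.1%} -> {after:.1%}")       # 9.1% -> 14.3%

Tryptophan's absolute level is unchanged; only its ratio to competitors rises, yet its share of transport across the blood-brain barrier increases accordingly.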
Research
Yeast amino acid metabolism
In 1912 Felix Ehrlich demonstrated that yeast metabolizes the natural amino acids essentially by splitting off carbon dioxide and replacing the amino group with a hydroxyl group. By this reaction, tryptophan gives rise to tryptophol.
Serotonin precursor
Tryptophan affects brain serotonin synthesis when given orally in a purified form and is used to modify serotonin levels for research. Low brain serotonin level is induced by administration of tryptophan-poor protein in a technique called acute tryptophan depletion. Studies using this method have evaluated the effect of serotonin on mood and social behavior, finding that serotonin reduces aggression and increases agreeableness.
Psychedelic effects
Tryptophan produces the head-twitch response (HTR) in rodents when administered at sufficiently high doses. The HTR is induced by serotonergic psychedelics like lysergic acid diethylamide (LSD) and psilocybin and is a behavioral proxy of psychedelic effects. Tryptophan is converted into the trace amine tryptamine and tryptamine is N-methylated by indolethylamine N-methyltransferase (INMT) into N-methyltryptamine (NMT) and N,N-dimethyltryptamine (N,N-DMT), which are known serotonergic psychedelics.
Fluorescence
Tryptophan is an important intrinsic fluorescent probe (amino acid), which can be used to estimate the nature of the microenvironment around the tryptophan residue. Most of the intrinsic fluorescence emissions of a folded protein are due to excitation of tryptophan residues.
| Biology and health sciences | Amino acids | Biology |
58422 | https://en.wikipedia.org/wiki/Aviation | Aviation | Aviation includes the activities surrounding mechanical flight and the aircraft industry. Aircraft includes fixed-wing and rotary-wing types, morphable wings, wing-less lifting bodies, as well as lighter-than-air aircraft such as hot air balloons and airships.
Aviation began in the 18th century with the development of the hot air balloon, an apparatus capable of atmospheric displacement through buoyancy. Clément Ader built the "Ader Éole" in France and made an uncontrolled, powered hop in 1890; this was the first powered aircraft, although it did not achieve controlled flight. Some of the most significant advances in aviation technology came with the controlled gliding flights of Otto Lilienthal in 1896; a further large step came with the construction of the first powered airplane by the Wright brothers in the early 1900s. Since that time, aviation has been technologically revolutionized by the introduction of the jet engine, which made air travel a major form of transport throughout the world.
Etymology
The word aviation was coined by the French writer and former naval officer Gabriel La Landelle in 1863. He originally derived the term from the verb avier (an unsuccessful neologism for "to fly"), itself derived from the Latin word avis ("bird") and the suffix -ation.
History
Early beginnings
There are early legends of human flight such as the stories of Icarus in Greek myth, Jamshid and Shah Kay Kāvus in Persian myth, and the flying automaton of Archytas of Tarentum (428–347 BC). Later, somewhat more credible claims of short-distance human flights appear, such as the winged flights of Abbas ibn Firnas (810–887, recorded in the 17th century), Eilmer of Malmesbury (11th century, recorded in the 12th century), and the hot-air Passarola of Bartholomeu Lourenço de Gusmão (1685–1724).
Lighter than air
The modern age of aviation began with the first untethered human lighter-than-air flight on November 21, 1783, of a hot air balloon designed by the Montgolfier brothers. The usefulness of balloons was limited because they could only travel downwind. It was immediately recognized that a steerable, or dirigible, balloon was required. Jean-Pierre Blanchard flew the first human-powered dirigible in 1784 and crossed the English Channel in one in 1785.
Rigid airships became the first aircraft to transport passengers and cargo over great distances. The best known aircraft of this type were manufactured by the German Zeppelin company.
The most successful Zeppelin was the Graf Zeppelin. It flew over one million miles, including an around-the-world flight in August 1929. However, the dominance of the Zeppelins over the airplanes of that period, which had a range of only a few hundred miles, diminished as airplane design advanced. The "Golden Age" of the airships ended on May 6, 1937, when the Hindenburg caught fire, killing 36 people. The cause of the Hindenburg accident was initially blamed on the use of hydrogen instead of helium as the lift gas, but an internal investigation by the manufacturer revealed that the coating used on the material covering the frame was highly flammable and allowed static electricity to build up in the airship. Changes to the coating formulation reduced the risk of further Hindenburg-type accidents. Although there have been periodic initiatives to revive their use, airships have seen only niche application since that time. There had been previous airship accidents with greater loss of life, such as the crash of the British R38 in 1921, but the Hindenburg was the first to be captured on newsreel.
Heavier than air
In 1799, Sir George Cayley set forth the concept of the modern airplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control.
Otto Lilienthal was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favorably influencing public and scientific opinion about the possibility of flying machines becoming practical.
Lilienthal's work led him to develop the concept of the modern wing. His flight attempts in Berlin in 1891 are seen as the beginning of human flight, and the "Lilienthal Normalsegelapparat" is considered the first airplane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first airplane production company in the world.
Lilienthal is often referred to as either the "father of aviation" or "father of flight".
Early dirigible developments included machine-powered propulsion (Henri Giffard, 1852), rigid frames (David Schwarz, 1896), and improved speed and maneuverability (Alberto Santos-Dumont, 1901).
There are many competing claims for the earliest powered, heavier-than-air flight. The first recorded powered flight was carried out by Clément Ader on October 9, 1890, in his bat-winged, fully self-propelled fixed-wing aircraft, the Ader Éole. It was reportedly the first manned, powered, heavier-than-air flight of a significant distance (about 50 m) but at insignificant altitude from level ground. Seven years later, on October 14, 1897, Ader's Avion III was tested without success in front of two officials from the French War ministry; the report on the trials was not publicized until 1910, as they had been a military secret. In November 1906, Ader claimed to have made a successful flight on October 14, 1897, achieving an "uninterrupted flight" of around 300 metres. Although widely believed at the time, these claims were later discredited.
The Wright brothers made the first successful powered, controlled and sustained airplane flight on December 17, 1903, a feat made possible by their invention of three-axis control and in-house development of an engine with a sufficient power-to-weight ratio. Only a decade later, at the start of World War I, heavier-than-air powered aircraft had become practical for reconnaissance, artillery spotting, and even attacks against ground positions.
Aircraft began to transport people and cargo as designs grew larger and more reliable. The Wright brothers took aloft the first passenger, Charles Furnas, one of their mechanics, on May 14, 1908.
During the 1920s and 1930s great progress was made in the field of aviation, beginning with the first transatlantic flight of Alcock and Brown in 1919 and including Charles Lindbergh's solo transatlantic flight in 1927 and Charles Kingsford Smith's transpacific flight the following year. One of the most successful designs of this period was the Douglas DC-3, which became the first airliner to be profitable carrying passengers exclusively, starting the modern era of passenger airline service. By the beginning of World War II, many towns and cities had built airports, and there were numerous qualified pilots available. One of the first jet engines was developed by Hans von Ohain; it powered the world's first jet-powered flight in 1939, shortly before the war began. The war brought many innovations to aviation, including the first jet aircraft and the first liquid-fueled rockets.
After World War II, especially in North America, there was a boom in general aviation, both private and commercial, as thousands of pilots were released from military service and many inexpensive war-surplus transport and training aircraft became available. Manufacturers such as Cessna, Piper, and Beechcraft expanded production to provide light aircraft for the new middle-class market.
By the 1950s, civil jet development had accelerated, beginning with the de Havilland Comet, though the first widely used passenger jet was the Boeing 707, which was much more economical than other aircraft of its time. At the same time, turboprop propulsion began to appear for smaller commuter planes, making it possible to serve small-volume routes in a much wider range of weather conditions.
Since the 1960s composite material airframes and quieter, more efficient engines have become available, and Concorde provided supersonic passenger service for more than two decades, but the most important lasting innovations have taken place in instrumentation and control. The arrival of solid-state electronics, the Global Positioning System, satellite communications, and increasingly small and powerful computers and LED displays, have dramatically changed the cockpits of airliners and, increasingly, of smaller aircraft as well. Pilots can navigate much more accurately and view terrain, obstructions, and other nearby aircraft on a map or through synthetic vision, even at night or in low visibility.
On June 21, 2004, SpaceShipOne became the first privately funded aircraft to make a spaceflight, opening the possibility of an aviation market capable of leaving the Earth's atmosphere. Meanwhile, the need to decarbonize the aviation industry to face the climate crisis has increased research into aircraft powered by alternative fuels, such as ethanol, electricity, hydrogen, and even solar energy, with flying prototypes becoming more common.
Operations of aircraft
Civil aviation
Civil aviation includes all non-military flying, both general aviation and scheduled air transport.
Air transport
There are five major manufacturers of civil transport aircraft (in alphabetical order):
Airbus, based in Europe
Boeing, based in the United States
Bombardier, based in Canada
Embraer, based in Brazil
United Aircraft Corporation, based in Russia, with its subsidiaries Ilyushin, Tupolev, and Sukhoi
Boeing, Airbus, Ilyushin and Tupolev concentrate on wide-body and narrow-body jet airliners, while Bombardier, Embraer and Sukhoi concentrate on regional airliners. Large networks of specialized parts suppliers from around the world support these manufacturers, who sometimes provide only the initial design and final assembly in their own plants. The Chinese ACAC consortium has also recently entered the civil transport market with its Comac ARJ21 regional jet.
Until the 1970s, most major airlines were flag carriers, sponsored by their governments and heavily protected from competition. Since then, open skies agreements have resulted in increased competition and choice for consumers, coupled with falling fares. The combination of high fuel prices, low fares, high salaries, and crises such as the September 11 attacks and the SARS epidemic have driven many older airlines to government bailouts, bankruptcy, or mergers. At the same time, low-cost carriers such as Ryanair, Southwest and WestJet have flourished.
General aviation
General aviation includes all non-scheduled civil flying, both private and commercial. General aviation may include business flights, air charter, private aviation, flight training, ballooning, paragliding, parachuting, gliding, hang gliding, aerial photography, foot-launched powered hang gliders, air ambulance, crop dusting, traffic reporting, police air patrols and forest fire fighting.
Each country regulates aviation differently, but general aviation usually falls under different regulations depending on whether it is private or commercial and on the type of equipment involved.
Many small aircraft manufacturers serve the general aviation market, with a focus on private aviation and flight training.
The most important recent developments for small aircraft (which form the bulk of the GA fleet) have been the introduction of advanced avionics (including GPS) that were formerly found only in large airliners, and the introduction of composite materials to make small aircraft lighter and faster. Ultralight and homebuilt aircraft have also become increasingly popular for recreational use, since in most countries that allow private aviation, they are much less expensive and less heavily regulated than certified aircraft.
Military aviation
Simple balloons were used as surveillance aircraft as early as the 18th century. Over the years, military aircraft have been built to meet ever increasing capability requirements. Manufacturers of military aircraft compete for contracts to supply their government's arsenal. Aircraft are selected based on factors like cost, performance, and the speed of production.
Types of military aviation
Fighter aircraft's primary function is to destroy other aircraft. (e.g. F-35, Eurofighter Typhoon, F-15, MiG-29, Su-27, and F-22).
Ground attack aircraft are used against tactical earth-bound targets. (e.g. Panavia Tornado, A-10, Il-2, J-22 Orao, AH-64 and Su-25).
Bombers are generally used against more strategic targets, such as factories and oil fields. (e.g. B-2, Tu-95, Mirage IV, and B-52).
Transport aircraft are used to transport hardware and personnel. (e.g. C-17 Globemaster III, C-130 Hercules and Mil Mi-26).
Surveillance and reconnaissance aircraft obtain information about enemy forces. (e.g. RC-135, E-8, U-2, OH-58 and MiG-25R).
Unmanned aerial vehicles (UAVs) are used primarily as reconnaissance fixed-wing aircraft, though many also carry payloads (e.g. MQ-9, RQ-4, and MQ-1C Gray Eagle). Unmanned cargo aircraft are in development.
Missiles deliver warheads, normally explosives.
Air safety
Aviation safety means the state of an aviation system or organization in which risks associated with aviation activities, related to, or in direct support of the operation of aircraft, are reduced and controlled to an acceptable level. It encompasses the theory, practice, investigation, and categorization of flight failures, and the prevention of such failures through regulation, education, and training. It can also be applied in the context of campaigns that inform the public as to the safety of air travel.
Aviation MRO
A maintenance, repair, and overhaul organization (MRO) is a firm that ensures the airworthiness of air transport. According to a 2024 article, "maintenance (M) involves inspecting, cleaning, oiling, and changing aircraft parts after a certain number of flight hours. Repair (R) is restoring the original function of parts and components. Overhaul (O) refers to extensive maintenance, the complete refurbishment of the aircraft, and upgrades in avionics, which can take several weeks to complete." Airlines are legally obligated to certify airworthiness, meaning that a civil aviation authority must approve an aircraft as suitable for safe flight operations. MRO firms are responsible for this process, thoroughly checking and documenting all components' repairs while tracking mechanical, propulsion, and electronic parts. Aviation regulators oversee maintenance practices in the country of aircraft registration, manufacture, or current location. All aircraft maintenance activities must adhere to international regulations that mandate standards.
Aviation accidents and incidents
An aviation accident is defined by the Convention on International Civil Aviation Annex 13 as an occurrence associated with the operation of an aircraft which takes place between the time any person boards the aircraft with the intention of flight until such time as all such persons have disembarked, in which a person is fatally or seriously injured, the aircraft sustains damage or structural failure or the aircraft is missing or is completely inaccessible. An accident in which the damage to the aircraft is such that it must be written off, or in which the plane is destroyed, is called a hull loss accident.
The first fatal aviation accident occurred in a Wright Model A aircraft at Fort Myer, Virginia, US, on September 17, 1908, resulting in injury to the pilot, Orville Wright, and the death of the passenger, Signal Corps Lieutenant Thomas Selfridge. The worst aviation accident in history was the Tenerife airport disaster on March 27, 1977, in which 583 people died after two Boeing 747 jumbo jets, operated by Pan Am and KLM, collided on a runway at Los Rodeos Airport, now known as Tenerife North.
An aviation incident is defined as an occurrence, other than an accident, associated with the operation of an aircraft that affects or could affect the safety of operations.
Air traffic control
Air traffic control (ATC) involves communication with aircraft to help maintain separation – that is, controllers ensure that aircraft are sufficiently far apart horizontally or vertically that there is no risk of collision. Controllers may coordinate position reports provided by pilots, or in high-traffic areas (such as the United States) they may use radar to see aircraft positions.
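As a rough illustration of the separation concept, the sketch below checks whether two aircraft meet either a horizontal or a vertical minimum. The specific values (5 NM laterally, 1,000 ft vertically) are typical en-route figures used here only as assumptions, not a statement of any particular authority's rules:

```python
def is_separated(horiz_dist_nm: float, alt_diff_ft: float,
                 min_horiz_nm: float = 5.0, min_vert_ft: float = 1000.0) -> bool:
    """Aircraft count as separated if they satisfy EITHER the horizontal
    OR the vertical minimum (illustrative values, not official minima)."""
    return horiz_dist_nm >= min_horiz_nm or alt_diff_ft >= min_vert_ft

# Two aircraft 3 NM apart laterally but 2,000 ft apart vertically are separated.
print(is_separated(3.0, 2000.0))  # True
```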
Becoming an air traffic controller in the United States typically requires an associate or bachelor's degree from the Air Traffic Collegiate Training Initiative. The FAA also requires extensive training, along with medical examinations and background checks. Some controllers are required to work weekend, night, and holiday shifts.
There are generally four different types of ATC:
Center controllers, who control aircraft en route between airports
Control towers (including tower, ground control, clearance delivery, and other services), which control aircraft within a small distance (typically 10–15 km horizontal, and 1,000 m vertical) of an airport.
Oceanic controllers, who control aircraft over international waters between continents, generally without radar service.
Terminal controllers, who control aircraft in a wider area (typically 50–80 km) around busy airports
ATC is especially important for aircraft flying under instrument flight rules (IFR), when they may be in weather conditions that do not allow the pilots to see other aircraft. However, in very high-traffic areas, especially near major airports, aircraft flying under visual flight rules (VFR) are also required to follow instructions from ATC.
In addition to separation from other aircraft, ATC may provide weather advisories, terrain separation, navigation assistance, and other services to pilots, depending on their workload.
ATC does not control all flights. The majority of VFR flights in North America are not required to contact ATC (unless they are passing through a busy terminal area or using a major airport), and in many areas, such as northern Canada and northern Scotland at low altitude, air traffic control services are not available even for IFR flights.
Environmental impact
Like all activities involving combustion, operating powered aircraft (from airliners to hot air balloons) releases soot and other pollutants into the atmosphere. Greenhouse gases such as carbon dioxide (CO2) are also produced (a rough fuel-to-CO2 conversion is sketched after this list). In addition, there are environmental impacts specific to aviation: for instance,
Aircraft operating at high altitudes near the tropopause (mainly large jet airliners) emit aerosols and leave contrails, both of which can increase cirrus cloud formation – cloud cover may have increased by up to 0.2% since the birth of aviation. Clouds can have both a cooling and warming effect. They reflect some of the sun's rays back into space, but also block some of the heat radiated by Earth's surface. On average, both thin natural cirrus clouds and contrails have a net warming effect.
Aircraft operating at high altitudes near the tropopause can also release chemicals that interact with greenhouse gases at those altitudes, particularly nitrogen compounds, which interact with ozone, increasing ozone concentrations.
Most light piston aircraft burn avgas, which contains tetraethyllead (TEL). Some lower-compression piston engines can operate on unleaded mogas, and turbine engines and diesel engines – neither of which require lead – are appearing on some newer light aircraft.
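As a back-of-envelope illustration of the CO2 component mentioned above, burning a kilogram of jet fuel (kerosene) yields roughly 3.16 kg of CO2; the figures in this sketch are approximations:

```python
JET_FUEL_CO2_FACTOR = 3.16  # approx. kg CO2 released per kg of kerosene burned

def jet_fuel_co2_kg(fuel_burn_kg: float) -> float:
    """Approximate CO2 released by burning a given mass of jet fuel."""
    return fuel_burn_kg * JET_FUEL_CO2_FACTOR

# A long-haul flight burning 20 t of fuel releases roughly 63 t of CO2.
print(jet_fuel_co2_kg(20_000))  # 63200.0 kg
```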
Another environmental impact of aviation is noise pollution, mainly caused by aircraft taking off and landing. Sonic booms were a problem with supersonic aircraft such as the Concorde.
Silviculture

Silviculture is the practice of controlling the growth, composition/structure, and quality of forests to meet values and needs, specifically timber production.
The name comes from the Latin silva ('forest') and cultura ('growing'). The study of forests and woods is termed silvology. Silviculture also focuses on making sure that the treatments of forest stands are used to conserve and improve their productivity.
Generally, silviculture is the science and art of growing and cultivating forest crops based on a knowledge of silvics, the study of the life history and general characteristics of forest trees and stands, with reference to local/regional factors. The focus of silviculture is the control, establishment and management of forest stands. The distinction between forestry and silviculture is that silviculture is applied at the stand-level, while forestry is a broader concept. Adaptive management is common in silviculture, while forestry can include natural/conserved land without stand-level management and treatments being applied.
Silvicultural systems
The origin of forestry in German-speaking Europe has defined silvicultural systems broadly as high forest, coppice with standards, compound coppice, short rotation coppice, and coppice. There are other systems as well. These varied silvicultural systems include several harvesting methods, which are often wrongly described as silvicultural systems in themselves, but which may also be called rejuvenating or regenerating methods depending on the purpose.
The high forest system is further subdivided in the German tradition:
High forest
Age class forest
Even-aged forestry
Clear cutting
Shelterwood cutting
Seed-tree method
Uneven-aged forestry
The Femel selection cutting (group selection cutting)
Strip selection cutting (strip-and-group felling system)
Shelterwood wedge cutting
Mixed-form regeneration methods
Continuous cover forestry
Uneven-aged forestry
Selection forest
Target diameter harvesting
These names give the impression that these are neatly defined systems, but in practice there are variations within these harvesting methods in accordance with local ecology and site conditions. While an archetypal form of each harvesting technique can be identified (they all originated somewhere with a particular forester, and have been described in the scientific literature), and broad generalizations can be made, these are merely rules of thumb rather than strict blueprints for how techniques might be applied. This misunderstanding has meant that many older English textbooks did not capture the true complexity of silviculture as practiced where it originated in Mitteleuropa.
This silviculture was culturally predicated on wood production in temperate and boreal climates and did not deal with tropical forestry. The misapplication of this philosophy to those tropical forests has been problematic. There is also an alternative silvicultural tradition which developed in Japan and thus created a different biocultural landscape called satoyama.
After harvesting comes regeneration, which may be split into natural and artificial (see below), and tending, which includes release treatments, pruning, thinning and intermediate treatments. It is conceivable that any of these three phases (harvesting, regeneration, and tending) may happen at the same time within a stand, depending on the goal for that particular stand.
Regeneration
Regeneration is basic to the continuation of forested land, as well as to the afforestation of treeless land. Regeneration can take place through self-sown seed ("natural regeneration"), by artificially sown seed, or by planted seedlings. In whichever case, the performance of regeneration depends on its growth potential and the degree to which its environment allows the potential to be expressed. Seed, of course, is needed for all regeneration modes, both for natural or artificial sowing and for raising planting stock in a nursery.
The process of natural regeneration involves the renewal of forests by means of self-sown seeds, root suckers, or coppicing. In natural forests, conifers rely almost entirely on regeneration through seed. Most of the broadleaves, however, are able to regenerate by means of shoots emerging from stumps (coppice) and broken stems.
Seedbed requirements
Any seed, self-sown or artificially applied, requires a seedbed suitable for securing germination.
In order to germinate, a seed requires suitable conditions of temperature, moisture, and aeration. For seeds of many species light is necessary, and it facilitates germination in others, but spruces are not exacting in their light requirements and will germinate without light. White spruce seed germinated at 35 °F (1.7 °C) and 40 °F (4.4 °C) after continuous stratification for one year or longer, and developed radicles less than long in the cold room. When exposed to light, those germinants developed chlorophyll and were normally phototropic with continued elongation.
For survival in the short and medium terms, a germinant needs: a continuing supply of moisture; freedom from lethal temperature; enough light to generate sufficient photosynthate to support respiration and growth, but not enough to generate lethal stress in the seedling; freedom from browsers, tramplers, and pathogens; and a stable root system. Shade is very important to the survival of young seedlings. In the longer term, there must be an adequate supply of essential nutrients and an absence of smothering.
In undisturbed forest, decayed windfallen stemwood provides the most favorable seedbed for germination and survival. Seedlings growing on such sites are less likely to be buried by accumulated snowpack and leaf litter, and less likely to be subject to flooding. Advantages conferred by those microsites include: more light, higher temperatures in the rooting zone, and better mycorrhizal development. According to a 1940 survey in the Porcupine Hills of Manitoba, approximately 90% of spruce seedlings were germinating from this substrate.
Mineral soil seedbeds are more receptive than the undisturbed forest floor, and are generally moister and more readily rewetted than the organic forest floor. However, exposed mineral soil, much more so than organic-surfaced soil, is subject to frost heaving and shrinkage during drought. The forces generated in soil by frost or drought are quite enough to break roots.
The range of microsites occurring on the forest floor can be broadened, and their frequency and distribution influenced by site preparation. Each microsite has its own microclimate. Microclimates near the ground are better characterized by vapour pressure deficit and net incident radiation, rather than the standard measurements of air temperature, precipitation, and wind pattern.
Aspect is an important component of microclimate, especially in relation to temperature and moisture regimes. Germination and seedling establishment of Engelmann spruce were much better on north than on south aspect seedbeds in the Fraser Experimental Forest, Colorado; the ratios of seeds to 5-year-old seedlings were determined as 32:1, 76:1, and 72:1 on north aspect bladed-shaded, bladed-unshaded, and undisturbed-shaded seedbeds, respectively. Clearcut openings adjacent to an adequate seed source, and not more than 6 tree-heights wide, could be expected to secure acceptable regeneration (4,900, 5-year-old trees per hectare), whereas on undisturbed-unshaded north aspects, and on all seedbed treatments tested on south aspects, seed to seedling ratios were so high that the restocking of any clearcut opening would be questionable.
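These seed:seedling ratios translate directly into the seedfall needed for a given stocking target. A minimal sketch in Python, using the Fraser Experimental Forest figures quoted above:

```python
def seeds_needed_per_ha(target_seedlings_per_ha: float,
                        seed_to_seedling_ratio: float) -> float:
    """Seeds that must reach the seedbed to yield the target stocking,
    given an observed seed:seedling ratio."""
    return target_seedlings_per_ha * seed_to_seedling_ratio

# North-aspect ratios quoted above, with the 4,900 trees/ha target:
for label, ratio in [("bladed-shaded", 32), ("bladed-unshaded", 76),
                     ("undisturbed-shaded", 72)]:
    print(label, seeds_needed_per_ha(4900, ratio))
# 156800, 372400, and 352800 seeds/ha respectively
```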
At least seven variable factors may influence seed germination: seed characteristics, light, oxygen, soil reaction (pH), temperature, moisture, and seed enemies. Moisture and temperature are the most influential, and both are affected by exposure. The difficulty of securing natural regeneration of Norway spruce and Scots pine in northern Europe led to the adoption of various forms of reproduction cuttings that provided partial shade or protection to seedlings from hot sun and wind. The main objective of echeloned strips or border-cuttings with northeast exposure was to protect regeneration from overheating; the technique originated in Germany and was deployed successfully by A. Alarik in 1925 and by others in Sweden. On south and west exposures, direct insolation and heat reflected from tree trunks often result in temperatures lethal to young seedlings, as well as desiccation of the surface soil, which inhibits germination. The sun is less injurious on eastern exposures because of the lower temperature in the early morning, related to higher humidity and the presence of dew.
In 1993, Henry Baldwin, after noting that summer temperatures in North America are often higher than those in places where border-cuttings have been found useful, reported the results of a survey of regeneration in a stand of red spruce plus scattered white spruce that had been isolated by clearcutting on all sides, thus furnishing an opportunity to observe regeneration on different exposures in this old-field stand at Dummer, New Hampshire. The regeneration included a surprisingly large number of balsam fir seedlings from the 5% stand component of that species. The maximum density of spruce regeneration, determined 4 rods (20 m) in from the edge of the stand on a north 20°E exposure, was 600,000/ha, with almost 100,000 balsam fir seedlings.
A prepared seedbed remains receptive for a relatively short period, seldom as long as 5 years, sometimes as short as 3 years. Seedbed receptivity on moist, fertile sites decreases with particular rapidity, and especially on such sites, seedbed preparation should be scheduled to take advantage of good seed years. In poor seed years, site preparation can be carried out on mesic and drier sites with more chance of success, because of the generally longer receptivity of seedbeds there than those on moister sites. Although an indifferent seed year can suffice if seed distribution is good and environmental conditions favourable to seedling germination and survival, small amounts of seed are particularly vulnerable to depredation by small mammals. Considerable flexibility is possible in timing site preparation to coincide with cone crops. Treatment can be applied either before any logging takes place, between partial cuts, or after logging. In cut and leave strips, seedbed preparation can be carried out as a single operation, pre-scarifying the leave strips, post-scarifying the cut strips.
Broadcast burning is not recommended as a method of preparing sites for natural regeneration, as it rarely exposes enough mineral soil to be sufficiently receptive, and the charred organic surfaces are a poor seedbed for spruce. A charred surface may get too hot for good germination and may delay germination until fall, with subsequent overwinter mortality of unhardened seedlings. Piling and burning of logging slash, however, can leave suitable exposures of mineral soil.
Season of planting
Artificial regeneration
With a view to reducing the time needed to produce planting stock, experiments were carried out with white spruce and three other coniferous species from Wisconsin seed in the longer, frost-free growing season in Florida, 125 vs. 265 days in central Wisconsin and northern Florida, respectively. As the species studied are adapted to long photoperiods, extended daylengths of 20 hours were applied in Florida. Other seedlings were grown under extended daylength in Wisconsin and with natural daylength in both areas. After two growing seasons, white spruce under long days in Florida were about the same as those in Wisconsin, but twice as tall as plants under natural Wisconsin photoperiods. Under natural days in Florida, with the short local photoperiod, white spruce was severely dwarfed and had a low rate of survival. Black spruce responded similarly. After two growing seasons, long day plants of all 4 species in Florida were well balanced, with good development of both roots and shoots, equaling or exceeding the minimum standards for 2+1 and 2+2 outplanting stock of Lake States species. Their survival when lifted in February and outplanted in Wisconsin equalled that of 2+2 Wisconsin-grown transplants. Artificial extension of the photoperiod in the northern Lake States greatly increased height increment of white and black spruces in the second growing season.
Optimum conditions for seedling growth have been determined for the production of containerized planting stock. Alternating day/night temperatures have been found more suitable than a constant temperature; under a 400 lumens/m2 light regime, day/night temperatures of 28 °C/20 °C have been recommended for white spruce. However, temperature optima are not necessarily the same at different ages and sizes. In 1984, R. Tinus investigated the effects of combinations of day and night temperature on height, caliper, and dry weight of 4 seed sources of Engelmann spruce. The 4 seed sources appeared to have very similar temperature requirements, with night optima about the same as or slightly lower than daylight optima.
Tree provenance is important in artificial regeneration. Good provenance takes into account suitable tree genetics and a good environmental fit for planted / seeded trees in a forest stand. The wrong genotype can lead to failed regeneration, or poor trees that are prone to pathogens and undesired outcomes.
Artificial regeneration, which generally involves planting, has been more common than natural regeneration because it is more dependable. Planting can involve using seedlings (from a nursery), rooted or unrooted cuttings, or seeds.
Whichever method is chosen, it can be assisted by tending techniques, also known as intermediate stand treatments.
The fundamental genetic consideration in artificial regeneration is that seed and planting stock must be adapted to the planting environment. Most commonly, the method of managing seed and stock deployment is through a system of defined seed zones, within which seed and stock can be moved without risk of climatic maladaptation. Ontario adopted a seed zone system in the 1970s based on G.A. Hills' 1952 site regions and provincial resource district boundaries, but Ontario's seed zones are now based on homogeneous climatic regions developed with the Ontario Climate Model. The regulations stipulate that source-identified seedlots may be either a general collection, when only the seed zone of origin is known, or a stand collection from a specific latitude and longitude. The movement of general-collection seed and stock across seed zone boundaries is prohibited, but the use of stand-collection seed and stock in another seed zone is acceptable when the Ontario Climate Model shows that the planting site and place of seed origin are climatically similar. The 12 seed zones for white spruce in Quebec are based mainly on ecological regions, with a few modifications for administrative convenience.
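The zone-transfer rule described above is essentially a two-branch decision. The sketch below is a simplified illustration of an Ontario-style rule; the class and function names are hypothetical, and the climate-similarity judgment (which in Ontario would come from the Ontario Climate Model) is reduced to a boolean input:

```python
from dataclasses import dataclass

@dataclass
class Seedlot:
    zone_of_origin: str
    stand_collection: bool  # True when the collection's latitude/longitude are known

def movement_allowed(lot: Seedlot, planting_zone: str, climates_similar: bool) -> bool:
    """General collections may not cross seed zone boundaries; stand
    collections may, but only where a climate model rates the origin and
    the planting site as climatically similar (hypothetical sketch)."""
    if lot.zone_of_origin == planting_zone:
        return True
    return lot.stand_collection and climates_similar

# A stand collection may cross a zone boundary if the climates match:
print(movement_allowed(Seedlot("Z1", True), "Z2", climates_similar=True))   # True
print(movement_allowed(Seedlot("Z1", False), "Z2", climates_similar=True))  # False
```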
Seed quality varies with source. Seed orchards produce seed of the highest quality, then, in order of decreasing seed quality produced, seed production areas and seed collection areas follow, with controlled general collections and uncontrolled general collections producing the least characterized seed.
Seeds
Dewinging, extraction
When seed is first separated from cones it is mixed with foreign matter, often 2 to 5 times the volume of the seed. The more or less firmly attached membranous wings on the seed must be detached before it is cleaned of foreign matter. The testa must not incur damage during the dewinging process. Two methods have been used, dry and wet. Dry seed may be rubbed gently through a sieve that has a mesh through which only seed without wings can pass. Large quantities of seed can be processed in dewinging machines, which use cylinders of heavy wire mesh and rapidly revolving stiff brushes within to remove the wings. In the wet process, seed with wings attached are spread out 10 cm to 15 cm deep on a tight floor and slightly moistened throughout; light leather flails are used to free seed from the wings. B. Wang described a unique wet dewinging procedure in 1973 using a cement mixer, used at the Petawawa tree seed processing facility. Wings of white and Norway spruce seed can be removed by dampening the seed slightly before it is run through a fanning mill for the last time. Any moistened seed must be dried before fermentation or moulding sets in.
Seed viability
A fluorescein diacetate (FDA) biochemical viability test for several species of conifer seed, including white spruce, estimates the proportion of live seed (viability) in a seedlot, and hence the percentage germination of a seedlot. The accuracy of predicting percentage germination was within +/- 5 for most seedlots. White spruce seed can be tested for viability by an indirect method, such as the FDA test or ultrasound, or by the direct growth method of germination. Samples of white spruce seed inspected in 1928 varied in viability from 50% to 100%, but averaged 93%. A 1915 inspection reported 97% viability for white spruce seed.
Germinative testing
The results of a germination test are commonly expressed as germinative capacity or a germination percentage, which is the percentage of seeds that germinate during a period of time, ending when germination is practically complete. During extraction and processing, white spruce seeds gradually lost moisture, and total germination increased. Mittal et al. (1987) reported that white spruce seed from Algonquin Park, Ontario, obtained the maximum rate (94% in 6 days) and 99% total germination in 21 days after 14-week pre-chilling. The pre-treatment of 1% sodium hypochlorite increased germinability.
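Germinative capacity is a simple proportion. A minimal sketch, with hypothetical seed counts chosen to match the 99% figure above:

```python
def germination_percent(germinated: int, tested: int) -> float:
    """Germinative capacity: percentage of tested seeds that germinate
    over the test period."""
    return 100.0 * germinated / tested

# Hypothetical counts: 198 of 200 seeds germinating gives 99%.
print(germination_percent(198, 200))  # 99.0
```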
Encouraged by Russian success in using ultrasonic waves to improve the germinative energy and percentage germination of seeds of agricultural crops, Timonin (1966) demonstrated benefits to white spruce germination after exposure of seeds to 1, 2, or 4 minutes of ultrasound generated by an M.S.E. ultrasonic disintegrator with a power consumption of 280 VA and power impact of 1.35 amperes. However, no seeds germinated after 6 minutes of exposure to ultrasound.
Seed dormancy
Seed dormancy is a complex phenomenon and is not always consistent within species. Cold stratification of white spruce seed to break dormancy has been specified as a requirement, but Heit (1961) and Hellum (1968) regarded stratification as unnecessary. Cone handling and storage conditions affect dormancy in that cold, humid storage (5 °C, 75% to 95% relative humidity) of the cones prior to extraction seemingly eliminated dormancy by overcoming the need to stratify. Periods of cold, damp weather during the period of cone storage might provide natural cold (stratification) treatment. Once dormancy was removed in cone storage, subsequent kiln-drying and seed storage did not reactivate dormancy.
Haddon and Winston (1982) found a reduction in viability of stratified seeds after 2 years of storage and suggested that stress might have been caused by stratification, e.g., by changes in seed biochemistry, reduced embryo vigor, seed aging or actual damage to the embryo. They further questioned the quality of the 2-year-old seed even though high germination occurred in the samples that were not stratified.
Cold stratification
Stratification is the term applied to the storing of seeds in (and, strictly, in layers with) a moist medium, often peat or sand, with a view to maintaining viability and overcoming dormancy; cold stratification is the term applied to storage at near-freezing temperatures, even if no medium is used. A common method of cold stratification is to soak seed in tap water for up to 24 h, superficially dry it, then store it moist for some weeks or even months at temperatures just above freezing. Although Hellum (1968) found that cold stratification of an Alberta seed source led to irregular germination, with decreasing germination with increasing length of the stratification period, Hocking's (1972) paired test with stratified and nonstratified Alberta seed from several sources revealed no trends in response to stratification. Hocking suggested that seed maturity, handling, and storage needed to be controlled before the need for stratification could be determined. Later, Winston and Haddon (1981) found that the storage of white spruce cones for 4 weeks at 5 °C prior to extraction obviated the need for stratification.
Seed ripeness
Seed maturity cannot be predicted accurately from cone flotation, cone moisture content, or cone specific gravity. The province of British Columbia found that an embryo occupying 90% or more of the corrosion cavity, together with a firm, whitish megagametophyte, is the best predictor for white spruce in that province, while Quebec can forecast seed maturity some weeks in advance by monitoring seed development in relation to heat-sums and the phenological progression of the inflorescence of fireweed (Epilobium angustifolium L.), an associated plant species. Cone collection earlier than one week before seed maturity would reduce seed germination and viability during storage. Four stages of maturation were determined by monitoring carbohydrates, polyols, organic acids, respiration, and metabolic activity. White spruce seeds require a 6-week post-harvest ripening period in the cones to obtain maximum germinability; however, based on cumulative degree-days, seed from the same trees and stand showed that 2-week cone storage was sufficient.
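The heat-sums referred to above are cumulative degree-days. A minimal sketch, assuming a 5 °C base temperature (the base actually used varies by species and jurisdiction):

```python
def growing_degree_days(daily_mean_temps_c, base_c: float = 5.0) -> float:
    """Cumulative degree-days: sum of each day's mean-temperature excess
    above the base temperature, ignoring days below the base."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

# One week of daily means contributes 50 degree-days toward the heat-sum:
print(growing_degree_days([12, 14, 9, 4, 16, 18, 11]))  # 50.0
```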
Forest tree nurseries
See Plant nursery
Forest tree plantations
Plantation establishment criteria
Plantations may be considered successful when outplant performance satisfies certain criteria. The term "free growing" is applied in some jurisdictions. Ontario's "Free-to-Grow" (FTG) equivalent relates to a forest stand that meets a minimum stocking standard and height requirement, and is essentially free of competition from surrounding vegetation that might impede growth. The FTG concept was introduced with the advent of the Forest Management Agreement program in Ontario in 1980 and became applicable to all management units in 1986. Policy, procedures, and methodologies readily applicable by forest unit managers to assess the effectiveness of regeneration programs were still under development during the Class Environmental Assessment hearings.
In British Columbia, the Forest Practices Code (1995) governs performance criteria. To minimize the subjectivity of assessing deciduous competition as to whether or not a plantation is established, minimum specifications of number, health, height, and competition have been specified in British Columbia. However, minimum specifications are still subjectively set and may need to be fine-tuned in order to avoid unwarranted delay in according established status to a plantation. For example, a vigorous white spruce with a strong, multi-budded leading shoot and its crown fully exposed to light on 3 sides would not qualify as free-growing in the current British Columbia Code but would hardly warrant description as unestablished.
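A free-growing assessment is essentially a set of threshold tests on number, height, and competition. The sketch below is illustrative only; the threshold values are assumptions, not the British Columbia or Ontario standards:

```python
def free_growing(stems_per_ha: int, height_m: float, competitor_height_ratio: float,
                 min_stems: int = 1000, min_height_m: float = 1.0,
                 max_ratio: float = 1.25) -> bool:
    """A stand qualifies when minimum stocking and height are met and
    competing vegetation does not overtop crop trees by more than the
    allowed height ratio (all thresholds here are hypothetical)."""
    return (stems_per_ha >= min_stems
            and height_m >= min_height_m
            and competitor_height_ratio <= max_ratio)

# A well-stocked, tall stand can still fail on the competition ratio alone,
# much like the vigorous but overtopped spruce described above.
print(free_growing(1200, 1.8, 1.4))  # False
```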
Competition
Competition arises when individual organisms are sufficiently close together to incur growth constraint through mutual modification of the local environment. Plants may compete for light, moisture and nutrients, but seldom for space per se. Vegetation management directs more of the site's resources into usable forest products, rather than just eliminating all competing plants. Ideally, site preparation ameliorates competition to levels that relieve the outplant of constraints severe enough to cause prolonged check.
The diversity of boreal and sub-boreal broadleaf-conifer mixed tree species stands, commonly referred to as the "mixedwoods", largely precludes the utility of generalizations and calls for the development of management practices incorporating the greater inherent complexity of broadleaf-conifer mixtures, relative to single-species or mixed-species conifer forest. After harvesting or other disturbance, mixedwood stands commonly enter a prolonged period in which hardwoods overtop the coniferous component, subjecting the conifers to intense competition in the understorey. It is well established that the regeneration and growth potential of understorey conifers in mixedwood stands is correlated with the density of competing hardwoods. To help apply "free-to-grow" regulations in British Columbia and Alberta, management guidelines based on distance-dependent relations within a limited radius of crop trees were developed, but Lieffers et al. (2002) found that free-growing stocking standards did not adequately characterize light competition between broadleaf and conifer components in boreal mixedwood stands, and further noted that adequate sampling using current approaches would be operationally prohibitive.
Many promising plantations have failed through lack of tending. Young crop trees are often ill-equipped to fight it out with competition resurgent following initial site preparation and planting.
Perhaps the most direct evaluation of the effect of competition on plantation establishment is provided by an effective herbicide treatment, provided it is performed correctly and without contaminating state waters. The fact that herbicide treatment does not always produce positive results should not obscure the demonstrated potential of herbicides for significantly promoting plantation establishment. Factors that can vitiate the effectiveness of a herbicide treatment include: weather, especially temperature, prior to and during application; weather, especially wind, during application; weather, especially precipitation, in the 12 to 24 hours after application; vegetation characteristics, including species, size, shape, phenological stage, vigour, and distribution of weeds; crop characteristics, including species, phenology, and condition; the effects of other treatments, such as preliminary shearblading, burning or other prescribed or accidental site preparation; and the herbicide used, including dosage, formulation, carrier, spreader, and mode of application. Much can go wrong, but a herbicide treatment can be as good as or better than any other method of site preparation.
Competition indices
The study of competition dynamics requires both a measure of the competition level and a measure of crop response. Various competition indices have been developed, e.g., by Bella (1971) and Hegyi (1974) based on stem diameter, by Arney (1972), Ek and Monserud (1974), and Howard and Newton (1984) based on canopy development, and Daniels (1976), Wagner (1982), and Weiner (1984) with proximity-based models. Studies generally considered tree response to competition in terms of absolute height or basal area, but Zedaker (1982) and Brand (1986) sought to quantify crop tree size and environmental influences by using relative growth measures.
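Of the stem-diameter indices mentioned, Hegyi's (1974) is perhaps the simplest to state: the sum, over neighbouring trees, of each neighbour's diameter relative to the subject tree's, weighted by the inverse of the distance between them. A minimal sketch:

```python
def hegyi_index(subject_dbh_cm: float,
                neighbours: list[tuple[float, float]]) -> float:
    """Hegyi's (1974) competition index:
        CI_i = sum_j (dbh_j / dbh_i) / distance_ij
    where neighbours is a list of (dbh_cm, distance_m) pairs."""
    return sum((dbh_j / subject_dbh_cm) / dist_ij for dbh_j, dist_ij in neighbours)

# A 20 cm subject with a 30 cm neighbour at 4 m and a 15 cm neighbour at 2 m:
print(hegyi_index(20.0, [(30.0, 4.0), (15.0, 2.0)]))  # 0.375 + 0.375 = 0.75
```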
Tending
Tending is the term applied to pre-harvest silvicultural treatment of forest crop trees at any stage after initial planting or seeding. The treatment can be of the crop itself (e.g., spacing, pruning, thinning, and improvement cutting) or of competing vegetation (e.g., weeding, cleaning).
Planting
How many trees should be planted per unit area (spacing) is not an easily answered question. Establishment density targets or regeneration standards have commonly been based on traditional practice, with the implicit aim of getting the stand quickly to the free-to-grow stage. Money is wasted if more trees are planted than are needed to achieve desired stocking rates, and the chance to establish other plantations is proportionately diminished. Ingress (natural regeneration) on a site is difficult to predict and often becomes surprisingly evident only some years after planting has been carried out. Early stand development after harvesting or other disturbance undoubtedly varies greatly among sites, each of which has its own peculiar characteristics.
For all practical purposes, the total volume produced by a stand on a given site is constant and optimum for a wide range of density or stocking. It can be decreased, but not increased, by altering the amount of growing stock to levels outside this range. Initial density affects stand development in that close spacing leads to full site utilization more quickly than wider spacing. Economic operability can be advanced by wide spacing even if total production is less than in closely spaced stands.
Beyond the establishment stage, the relationship of average tree size and stand density is very important. Various density-management diagrams conceptualizing the density-driven stand dynamics have been developed. Smith and Brand's (1988) diagram has mean tree volume on the vertical axis and the number of trees/ha on the horizontal axis: a stand can either have many little trees or a few big ones. The self-thinning line shows the largest number of trees of a given size/ha that can be carried at any given time. However, Willcocks and Bell (1995) caution against using such diagrams unless specific knowledge of the stand trajectory is known.
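The self-thinning line in such diagrams is often approximated by a power law. The sketch below uses Yoda's -3/2 rule with an arbitrary constant, which may differ from the formulation Smith and Brand actually used:

```python
def max_mean_tree_volume_m3(stems_per_ha: float, k: float = 4.0e5) -> float:
    """Self-thinning limit sketched as v = k * N**(-3/2); the constant k is
    species- and site-dependent, and the value here is purely illustrative."""
    return k * stems_per_ha ** -1.5

# Halving stand density raises the maximum mean tree volume by 2**1.5 (~2.83x):
print(max_mean_tree_volume_m3(2000) / max_mean_tree_volume_m3(4000))  # ~2.83
```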
In the Lake States, plantations have been made with the spacing between trees varying from 3 by 3 to 10 by 10 feet (0.9 m by 0.9 m to 3.0 m by 3.0 m). Kittredge recommended that no fewer than 600 established trees per acre (1483/ha) be present during the early life of a plantation. To ensure this, at least 800 trees per acre (1077/ha) should be planted where 85% survival may be expected, and at least 1200/ac (2970/ha) if only half of them can be expected to live. This translates into recommended spacings of 5 by 5 to 8 by 8 feet (1.5 m by 1.5 m to 2.4 m by 2.4 m) for plantings of conifers, including white spruce, in the Lake States.
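Kittredge's recommendation is simply the target stocking divided by expected survival, and square spacing converts to trees per acre via the 43,560 square feet in an acre. A worked sketch of both calculations:

```python
import math

def trees_to_plant_per_acre(target_established: int, expected_survival: float) -> int:
    """Plant enough trees that the expected survivors still meet the target."""
    return math.ceil(target_established / expected_survival)

def trees_per_acre_from_spacing(spacing_ft: float) -> float:
    """Square spacing: one tree per spacing**2 square feet."""
    return 43_560 / spacing_ft ** 2

print(trees_to_plant_per_acre(600, 0.85))  # 706; the recommended 800 adds margin
print(trees_to_plant_per_acre(600, 0.50))  # 1200, matching the figure above
print(trees_per_acre_from_spacing(6.0))    # 1210 trees/acre at 6 x 6 ft
```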
Enrichment planting
A strategy for enhancing natural forests' economic value is to increase their concentration of economically important, indigenous tree species by planting seeds or seedlings for future harvest, which can be accomplished with enrichment planting (EP).
Release treatments
Weeding: A process of getting rid of saplings' or seedlings' competition by mowing, application of herbicide, or other method of removal from the surroundings.
Cleaning: Release of select saplings from competition by overtopping trees of a comparable age. The treatment favors trees of a desired species and stem quality.
Liberation cutting: A treatment that releases tree seedling or saplings by removing older overtopping trees.
Spacing
Over-crowded regeneration tends to stagnate. The problem is aggravated in species that have little self-pruning ability, such as white spruce. Spacing is a thinning (of natural regeneration) in which all trees other than those selected for retention at fixed intervals are cut. The term juvenile spacing is used when most or all of the cut trees are unmerchantable. Spacing can be used to obtain any of a wide range of forest management objectives, but it is especially undertaken to reduce density and control stocking in young stands, prevent stagnation, and shorten the rotation, i.e., to speed the production of trees of a given size. Volume growth of individual trees and the merchantable growth of stands are increased. The primary rationale for spacing is the projected decline in maximum allowable cut; and since wood will be concentrated on fewer, larger, and more uniform stems, operating and milling costs will be minimized.
Methods for spacing may be: manual, using various tools, including power saws, brush saws, and clippers; mechanical, using choppers and mulchers; chemical; or combinations of several methods. One treatment has had notable success in spacing massively overstocked (>100,000 stems/ha) natural regeneration of spruce and fir in Maine. Fitted to a helicopter, the Thru-Valve boom emits herbicide spray droplets 1000 μm to 2000 μm in diameter at very low pressure. Swaths 1.2 m wide and leave strips 2.4 m wide were obtained with "knife-edge" precision when the herbicide was applied by a helicopter flying at a height of 21 m at a speed of 40–48 km/h. It seems likely that no other method could be as cost-effective.
Twenty years after spacing to 2.5 × 2.5 m, 30-year-old mixed stands of balsam fir and white spruce in the Green River watershed, New Brunswick, averaged 156.9 m3/ha.
A spacing study of 3 conifers (white spruce, red pine and jack pine) was established at Moodie, Manitoba, on flat, sandy, nutritionally poor soils with a fresh moisture regime. Twenty years after planting, red pine had the largest average dbh, 15% greater than jack pine, while white spruce dbh was less than half that of the pines. Crown width showed a gradual increase with spacing for all 3 conifers. Results to date were suggesting optimum spacings between 1.8 m and 2.4 m for both pines; white spruce was not recommended for planting on such sites.
Comparable data are generated by espacement trials, in which trees are planted at a range of densities. Spacings of 1.25 m, 1.50 m, 1.75 m, 2.00 m, 2.50 m, and 3.00 m on 4 site classes were used in the 1922 trial at Petawawa, Ontario. In the first of 34 old-field white spruce plantations used to investigate stand development in relation to spacing at Petawawa, Ontario, regular rows were planted at average spacings of from 4 × 4 to 7 × 7 feet (1.22 m × 1.22 m to 2.13 m × 2.13 m). Spacings up to 10 × 10 feet (3.05 m × 3.05 m) were subsequently included in the study. Yield tables based on 50 years of data showed:
Except for merchantable volumes at age 20 and site classes 50 and 60, closer spacings gave greater standing volumes at all ages than did wider spacings, the relative difference decreasing with age.
Merchantable volume as a proportion of total volume increases with age, and is greater at wider than at closer spacings.
Current annual volume increment culminates sooner at closer than at wider spacings.
A smaller espacement trial, begun in 1951 near Thunder Bay, Ontario, included white spruce at spacings of 1.8 m, 2.7 m, and 3.6 m. At the closest spacing, mortality had begun at 37 years, but not at the wider spacings.
The oldest interior spruce espacement trial in British Columbia was established in 1959 near Houston in the Prince Rupert Forest Region. Spacings of 1.2 m, 2.7 m, 3.7 m, and 4.9 m were used, and trees were measured 6, 12, 16, 26, and 30 years after planting. At wide espacements, trees developed larger diameters, crowns, and branches, but (at 30 years) basal area and total volume/ha were greatest in the closest espacement (Table 6.38). In more recent trials in the Prince George Region of British Columbia (Table 6.39) and in Manitoba, planting density of white spruce had no effect on growth after up to 16 growing seasons, even at spacings as low as 1.2 m. The slowness of juvenile growth and of crown closure delays the response to intra-stand competition. Initially, close spacing might even provide a positive nurse effect to offset any negative response to competition.
Thinning
See Thinning
Thinning is an operation that artificially reduces the number of trees growing in a stand with the aim of hastening the development of the remainder. The goal of thinning is to control the amount and distribution of available growing space. By altering stand density, foresters can influence the growth, quality, and health of residual trees. It also provides an opportunity to capture mortality and cull the commercially less desirable, usually smaller and malformed, trees. Unlike regeneration treatments, thinnings are not intended to establish a new tree crop or create permanent canopy openings.
Thinning greatly influences the ecology and micro-meteorology of the stand, lowering the inter-tree competition for water. The removal of any tree from a stand has repercussions on the remaining trees both above-ground and below. Silvicultural thinning is a powerful tool that can be used to influence stand development, stand stability, and the characteristics of the harvestable products.
Tending and thinning regimes and wind and snow damage are intimately related when considering intensive conifer plantations designed for maximum production.
Previous studies have demonstrated that repeated thinnings over the course of a forest rotation increase carbon stores relative to stands that are clear-cut on short rotations and that the carbon benefits differ according to thinning method (e.g., thinning from above versus below).
Precommercial thinning
In the early development of a forest stand, tree density remains high and there is competition among trees for nutrients. When natural regeneration or artificial seeding has resulted in dense, overstocked young stands, natural thinning will in most cases eventually reduce stocking to more silviculturally desirable levels. But by the time some trees reach merchantable size, others will be overmature and defective, and others will still be unmerchantable. To reduce this imbalance and to obtain more economic returns, a cleaning is carried out at an early stage; this is known as precommercial thinning. Generally, precommercial thinning is done once or twice to facilitate the growth of the retained trees.
The yield of merchantable wood can be greatly increased and the rotation shortened by precommercial thinning. Mechanical and chemical methods have been applied, but their costliness has militated against their ready adoption.
Pruning
Pruning, as a silvicultural practice, refers to the removal of the lower branches of the young trees (also giving the shape to the tree) so clear knot-free wood can subsequently grow over the branch stubs. Clear knot-free lumber has a higher value. Pruning has been extensively carried out in the Radiata pine plantations of New Zealand and Chile; however, the development of finger joint technology in the production of lumber and mouldings has led to many forestry companies reconsidering their pruning practices. Brashing is an alternative name for the same process.
Pruning can be done to all trees, or more cost-effectively to a limited number of trees. There are two types of pruning: natural (self-pruning) and artificial pruning. Most self-pruning happens when branches do not receive enough sunlight and die; wind can also contribute to natural pruning by breaking branches. Self-pruning of low branches can be encouraged by planting trees close enough together that energy is put into upward growth toward the light rather than into branchiness. In artificial pruning, workers are paid to cut the branches.
Stand conversion
The term stand conversion refers to a change from one silvicultural system to another and includes species conversion, i.e., a change from one species (or set of species) to another. Such change can be effected intentionally by various silvicultural means, or incidentally by default e.g., when high-grading has removed the coniferous content from a mixedwood stand, which then becomes exclusively self-perpetuating aspen. In general, such sites as these are the most likely to be considered for conversion.
Growth and yield
In discussing yields that might be expected from the Canadian spruce forests, Haddock (1961) noted that Wright's (1959) quotation of spruce yields in the British Isles of 220 cubic feet per acre (15.4 m3/ha) per year and in Germany of 175 cubic feet per acre (12.25 m3/ha) per year was misleading, at least if it was meant to imply that such yields might be approached in the Boreal Forest Region of Canada. Haddock thought that Wright's suggestion of 20 to 40 (average 30) cubic feet per acre (1.4 m3/ha to 2.8 m3/ha (average 2.1 m3/ha) per year was more reasonable, but still somewhat optimistic.
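The unit conversions in this paragraph follow from 1 ft3 = 0.0283 m3 and 1 acre = 0.4047 ha. A small sketch reproducing the quoted figures:

```python
FT3_PER_ACRE_TO_M3_PER_HA = 0.0283168 / 0.404686  # ~0.06997

def ft3_per_acre_to_m3_per_ha(x: float) -> float:
    """Convert a yield quoted in cubic feet per acre to cubic metres per hectare."""
    return x * FT3_PER_ACRE_TO_M3_PER_HA

for ft3_ac in (220, 175, 30):
    print(ft3_ac, "->", round(ft3_per_acre_to_m3_per_ha(ft3_ac), 2), "m3/ha")
# 220 -> 15.39, 175 -> 12.24, 30 -> 2.1, matching the figures quoted above
```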
The principal way forest resource managers influence growth and yield is to manipulate the mixture of species and number (density) and distribution (stocking) of individuals that form the canopy of the stand. Species composition of much of the boreal forest in North America already differs greatly from its pre-exploitation state. There is less spruce and more hardwoods in the second-growth forest than in the original forest; Hearnden et al. (1996) calculated that the spruce cover type had declined from 18% to only 4% of the total forested area in Ontario. Mixedwood occupies a greater proportion of Ontario's second-growth forest (41%) than in the original (36%), but its component of white spruce is certainly much diminished.
Growth performance is certainly influenced by site conditions and thus by the kind and degree of site preparation in relation to the nature of the site. It is important to avoid the assumption that site preparation of a particular designation will have a particular silvicultural outcome. Scarification, for instance, not only covers a wide range of operations that scarify, but also any given way of scarifying can have significantly different results depending on site conditions at the time of treatment. In point of fact, the term is commonly misapplied. Scarification is defined as "loosening the top soil of open areas, or breaking up the forest floor, in preparation for regenerating by direct seeding or natural seedfall", but the term is often misapplied to practices that include scalping, screefing, and blading, which pare off low and surface vegetation, together with most off its roots to expose a weed-free surface, generally in preparation for sowing or planting thereon.
Thus, it is not surprising that literature can be used to support the view that the growth of seedlings on scarified sites is much superior to that of growth on similar sites that have not been scarified, while other evidence supports the contrary view that scarification can reduce growth. Detrimental results can be expected from scarification that impoverishes the rooting zone or exacerbates edaphic or climatic constraints.
Burning site preparation has enhanced spruce seedling growth, but it must be supposed that burning could be detrimental if the nutrient capital is significantly depleted. An obvious factor greatly influencing regeneration is competition from other vegetation. In a pure stand of Norway spruce, for instance, Roussel (1948) found the following relationships:
A factor of some importance in solar radiation–reproduction relationships is excess heating of the soil surface by radiation. This is especially important for seedlings, such as spruce, whose first leaves do not shade the base of the stem at the soil surface. Surface temperatures in sandy soils on occasion reach lethal temperatures of 50 °C to 60 °C.
Common methods of harvesting
Silvicultural regeneration methods combine both the harvest of the timber on the stand and re-establishment of the forest. The proper practice of sustainable forestry should mitigate the potential negative impacts, but all harvest methods will have some impacts on the land and residual stand. The practice of sustainable forestry limits the impacts such that the values of the forest are maintained in perpetuity. Silvicultural prescriptions are specific solutions to a specific set of circumstances and management objectives. Following are some common methods:
Clearcut harvesting
Conventional clearcut harvesting is relatively simple: all trees on a cutblock are felled and bunched with bunches aligned to the skidding direction, and a skidder then drags the bunches to the closest log deck. Feller-buncher operators concentrate on the width of the felled swath, the number of trees in a bunch, and the alignment of the bunch. Providing a perimeter boundary is felled during daylight, night-shift operations can continue without the danger of trespassing beyond the block. Productivity of equipment is maximized because units can work independently of one another.
Clearcutting
An even-aged regeneration method that can employ either natural or artificial regeneration. It involves the complete removal of the forest stand at one time. Clearcutting can be biologically appropriate with species that typically regenerate from stand-replacing fires or other major disturbances, such as lodgepole pine (Pinus contorta). Alternatively, clearcutting can change the dominant species on a stand with the introduction of non-native and invasive species, as was shown at the Blodgett Forest Research Station near Georgetown, California. Additionally, clearcutting can prolong slash decomposition, expose soil to erosion, impact the visual appeal of a landscape, and remove essential wildlife habitat. It is particularly useful in regeneration of tree species such as Douglas-fir (Pseudotsuga menziesii), which is shade intolerant. In addition, the general public's distaste for even-aged silviculture, particularly clearcutting, is likely to result in a greater role for uneven-aged management on public lands as well. Across Europe, and in parts of North America, even-aged, production-orientated and intensively managed plantations are beginning to be regarded in the same way as old industrial complexes: something to abolish or convert to something else.
Clearcutting will impact many site factors important in their effect on regeneration, including air and soil temperatures. Kubin and Kemppainen (1991), for instance, measured temperatures in northern Finland from 1974 through 1985 in three clear-felled areas and in three neighbouring forest stands dominated by Norway spruce. Clear felling had no significant influence on air temperature at 2 m above the ground surface, but the daily air temperature maxima at 10 cm were greater in the clear-felled area than in the uncut forest, while the daily minima at 10 cm were lower. Night frosts were more common in the clear-felled area. Daily soil temperatures at 5 cm depth were 2 °C to 3 °C greater in the clear-felled area than in the uncut forest, and temperatures at depths of 50 cm and 100 cm were 3 °C to 5 °C greater. The differences between the clear-felled and uncut areas did not diminish during the 12 years following cutting.
Coppicing
A regeneration method which depends on the sprouting of cut trees. Most hardwoods, the coast redwood, and certain pines naturally sprout from stumps and can be managed through coppicing. Coppicing is generally used to produce fuelwood, pulpwood, and other products dependent on small trees. A close relative of coppicing is pollarding. Three systems of coppice woodland management are generally recognized: simple coppice, coppice with standards, and the coppice selection system.
In compound coppicing, or coppicing with standards, some of the highest-quality trees are retained for multiple rotations in order to obtain larger trees for different purposes.
Direct seeding
Prochnau (1963), four years after sowing, found that 14% of viable white spruce seed sown on mineral soil had produced surviving seedlings, at a seed:seedling ratio of 7.1:1. With Engelmann spruce, Smith and Clark (1960) obtained average seventh year seed:seedling ratios of 21:1 on scarified seedbeds on dry sites, 38:1 on moist sites, and 111:1 on litter seedbeds.
Group selection
The group selection method is an uneven-aged regeneration method that can be used when mid-tolerant species regeneration is desired. The group selection method can still result in residual stand damage in dense stands, however directional falling can minimize the damage. Additionally, foresters can select across the range of diameter classes in the stand and maintain a mosaic of age and diameter classes.
Classical European silviculture achieved impressive results with systems such as Henri Biolley's in Switzerland, in which the number and size of trees harvested were determined by reference to data collected from every tree in every stand measured every seven years.
While not designed to be applied to boreal mixedwoods, the check method is described briefly here to illustrate the degree of sophistication applied by some European foresters to the management of their forests. The development of management techniques that allowed stand development to be monitored and guided along sustainable paths was in part a response to past experience, particularly in Central European countries, of the negative effects of pure, uniform stands of species often unsuited to the site, which greatly increased the risk of soil degradation and biotic diseases. Increased mortality and decreased increment generated widespread concern, especially after reinforcement by other environmental stresses.
More or less uneven-aged, mixed forests of preponderantly native species, on the other hand, treated along natural lines, have proved to be healthier and more resistant to all kinds of external dangers; and in the long run such stands are more productive and easier to protect.
However, irregular stands of this type are definitely more difficult to manage—new methods and techniques had to be sought particularly for the establishment of inventories, as well as increment control and yield regulation. In Germany, for instance, since the beginning of the nineteenth century under the influence of G.L. Hartig (1764–1837), yield regulation has been effected almost exclusively by allotment or formula methods based on the conception of the uniform normal forest with a regular succession of cutting areas.
In France, on the other hand, efforts were made to apply another kind of forest management, one that aimed to bring all parts of the forest to a state of highest productive capacity in perpetuity. In 1878, the French forester A. Gurnaud (1825–1898) published a description of a check method for determining increment and yield. The method was based on the fact that through careful, selective harvesting, the productivity of the residual stand can be improved, because timber is removed as a cultural operation. In this method, the increment of stands is accurately determined periodically with the object of gradually converting the forest, through selective management and continuous experimentation, to a condition of equilibrium at maximum productive capacity.
Henri Biolley (1858–1939) was the first to apply Gurnaud's inspired ideas to practical forestry. From 1890 on, he managed the forests of his Swiss district according to these principles, devoting himself for almost 50 years to the study of increment and a treatment of stands directed towards the highest production, and proving the practicability of the check method. In 1920, he published this study giving a theoretical basis of management of forests under the check method, describing the procedures to be applied in practice (which he partly developed and simplified), and evaluating the results.
Biolley's pioneering work formed the basis upon which most Swiss forest management practices were later developed, and his ideas have been generally accepted. Today, with the trend of intensifying forest management and productivity in most countries, the ideas and application of careful, continuous treatment of stands with the aid of the volume check method are meeting with ever-growing interest. In Britain and Ireland, for example, there is increased application of continuous cover forestry principles to create permanently irregular structures in many woodlands.
Patch cut
Row and broadcast seeding
Spot and row seeders use less seed than does broadcast ground or aerial seeding but may induce clumping. Row and spot seeding confer greater ability to control seed placement than does broadcast seeding. Also, only a small percentage of the total area needs to be treated.
In the aspen type of the Great Lakes region, direct sowing of the seed of conifers has usually failed. However, Gardner (1980), after trials in Yukon which included broadcast seeding of white spruce seed at 2.24 kg/ha that secured 66.5% stocking in the Scarified Spring Broadcast treatment three years after seeding, concluded that the technique held "considerable promise".
Seed-tree
An even-aged regeneration method that retains widely spaced residual trees in order to provide uniform seed dispersal across a harvested area. In the seed-tree method, 2-12 seed trees per acre (5-30/ha) are left standing in order to regenerate the forest. They will be retained until regeneration has become established, at which point they may be removed. It may not always be economically viable or biologically desirable to re-enter the stand to remove the remaining seed trees. Seed-tree cuts can also be viewed as a clearcut with natural regeneration and can have all of the problems associated with clearcutting. This method is most suited for light-seeded species and those not prone to windthrow.
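The per-acre and per-hectare figures quoted above are consistent, since one hectare is about 2.47 acres:

$$2\ \text{trees/acre} \times 2.47\ \text{acres/ha} \approx 5\ \text{trees/ha}, \qquad 12 \times 2.47 \approx 30\ \text{trees/ha}.$$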
Selection systems
Selection systems are appropriate where uneven stand structure is desired, particularly where the need to retain continuous cover forest for aesthetic or environmental reasons outweighs other management considerations. Selection logging has been suggested as being of greater utility than shelterwood systems in regenerating old-growth Engelmann Spruce Sub-alpine Fir (ESSF) stands in southern British Columbia. In most areas, selection logging favours regeneration of fir more than the more light-demanding spruce. In some areas, selection logging can be expected to favour spruce over less tolerant hardwood species (Zasada 1972) or lodgepole pine.
Shelter spot seeding
The use of shelters to improve germination and survival in spot seedings seeks to capture the benefits of greenhouse culture, albeit miniature. The Hakmet seed shelter, for instance, is a semi-transparent plastic cone 8 cm high, with openings of 7 cm diameter in the 7.5 cm diameter base and 17 mm diameter in the 24 mm diameter top. This miniature greenhouse increases air humidity, reduces soil desiccation, and raises air and soil temperatures to levels more favourable to germination and seedling growth than those offered by unprotected conditions. The shelter is designed to break down after a few years of exposure to ultraviolet radiation.
Seed shelters and spring sowing significantly improved stocking compared with bare spot seeding, but sheltering did not significantly improve growth. Stocking of bare seedspots was extremely low, possibly due to smothering of seedlings by abundant broadleaf and herbaceous litter, particularly that from aspen and red raspberry, and exacerbated by strong competition from graminoids and raspberry.
Cone shelters (Cerkon™) usually produced greater survival than unsheltered seeding on scarified seedspots in trials of direct seeding techniques in interior Alaska, and funnel shelters (Cerbel™) usually produced greater survival than unsheltered seeding on non-scarified seedspots. Both shelter types are manufactured by AB Cerbo in Trollhättan, Sweden. Both are made of light-degradable, white, opaque plastic, and are 8 cm high when installed.
White spruce seed was sown in Alaska on a burned site in summer 1984, and protected by white plastic cones on small spots scarified by hand, or by white funnels placed directly into the residual ash and organic material. A group of six ravens (Corvus corax) was observed in the area about one week after sowing was completed in mid-June. Damage averaged 68% with cones and 50% with funnels on an upland area, and 26% with funnels on a floodplain area. Damage by ravens was only 0.13% on unburned but otherwise similar areas.
In seeding trials in Manitoba between 1960 and 1966 aimed at converting aspen stands to spruce–aspen mixedwoods, seedbeds scarified in 1961 in the Duck Mountain Provincial Forest remained receptive to natural seeding for many years.
Shelterwood
In general terms, the shelterwood system is a series of partial cuts that removes the trees of an existing stand over several years and eventually culminates in a final cut that creates a new even-aged stand. It is an even-aged regeneration method that removes trees in a series of three harvests: 1) preparatory cut; 2) establishment cut; and 3) removal cut. The success of practising a shelterwood system is closely related to: 1. the length of the regeneration period, i.e., the time from the shelterwood cutting to the date when a new generation of trees has been established; 2. the quality of the new tree stand with respect to stand density and growth; and 3. the value increment of the shelter trees. Information on the establishment, survival and growth of seedlings influenced by the cover of shelter trees, as well as on the growth of these trees, is needed as a basis for modelling the economic return of practising a shelterwood system. The method's objective is to establish new forest reproduction under the shelter of the retained trees. Unlike the seed-tree method, residual trees alter understory environmental conditions (i.e., sunlight, temperature, and moisture) that influence tree seedling growth. The method also strikes a middle ground in light availability, admitting less light to competitors while still providing enough for tree regeneration. Hence, shelterwood methods are most often chosen for site types characterized by extreme conditions, in order to create a new tree generation within a reasonable time period. These conditions apply foremost on level ground sites which are either dry and poor or moist and fertile.
Shelterwood systems
Shelterwood systems involve two, three, or exceptionally more partial cuttings. A final cut is made once adequate natural regeneration has been obtained. The shelterwood system is most commonly applied as a two-cut uniform shelterwood, first an initial regeneration (seed) cut, the second a final harvest cut. In stands less than 100 years old, a light preparatory cut can be useful. A series of intermediate cuts at intervals of 10–20 years has been recommended for intensively managed stands.
From operational or economic standpoints, however, there are disadvantages to the shelterwood system: harvesting costs are higher; trees left for deferred cutting may be damaged during the regeneration cut or related extraction operations; the increased risk of blowdown threatens the seed source; damage from bark beetles is likely to increase; regeneration may be damaged during the final cut and related extraction operations; the difficulty of any site preparation would be increased; and incidental damage to regeneration might be caused by any site preparation operations.
Single-tree selection
The single-tree selection method is an uneven-aged regeneration method most suitable when regeneration of shade-tolerant species is desired. It is typical for older and diseased trees to be removed, thinning the stand and allowing younger, healthy trees to grow. Single-tree selection can be very difficult to implement in dense or sensitive stands, and residual stand damage can occur. This method also disturbs the canopy layer the least of all the methods.
Spot seeding
Spot seeding was found to be the most economical and reliable of the direct seeding methods for converting aspen and paper birch to spruce and pine. In the Chippewa National Forest (Lake States), seed-spot sowing of 10 seeds each of white spruce and white pine under 40-year aspen after different degrees of cutting gave second-season results clearly indicating the need to remove or disturb the forest floor to obtain germination of seeded white spruce and white pine.
Spot seeding of coniferous seed, including white spruce, has had occasional success, but several constraining factors commonly limit germination success: the drying out of the forest floor before the roots of germinants reach underlying moisture reserves; and, particularly under hardwoods, the smothering of small seedlings by snow-pressed leaf litter and lesser vegetation. Kittredge and Gervorkiantz (1929) determined that removal of the aspen forest floor increased germination percentage after the second season in seed spots of both white pine and white spruce, in four plots, from 2.5% to 5%, from 8% to 22%, from 1% to 9.5%, and from 0% to 15%.
Spot seeding requires less seed than broadcast seeding and tends to achieve more uniform spacing, albeit sometimes with clumping. The devices used in Ontario for manual spot seeding are the "oil can" seeder, seeding sticks, and shakers. The oil can is a container fitted with a long spout through which a predetermined number of seeds are released with each flick of the seeder.
Strip cutting
Harvesting cutblocks where only a portion of the trees are to be removed is very different from clearcutting. First, trails must be located to provide access for the felling and skidding/forwarding equipment. These trails must be carefully located to ensure that the trees remaining meet the desired quality criteria and stocking density. Second, the equipment must not damage the residual stand. Further desiderata are outlined by Sauder (1995).
The dearth of seed and a deficiency of receptive seedbeds were recognized as major reasons for the lack of success of clearcut harvesting. One remedy attempted in British Columbia and Alberta has been alternate strip cutting. The greater seed source from uncut trees between the cut strips, and the disturbance to the forest floor within the cut strips could be expected to increase the amount of natural regeneration. Trees were cut to a diameter limit in the cut strips, but large trees in the leave strips often proved too much of a temptation and were cut too, thus removing those trees that would otherwise have been the major source of seed.
An unfortunate consequence of strip cutting was the build-up of spruce beetle populations. Shaded slash from the initial cut, together with an increase in the number of windthrown trees in the leave strips, provided conditions ideally suited to the beetle.
Underplanting
DeLong et al. (1991) suggested underplanting 30- to 40-year-old aspen stands, on the basis of the success with which natural spruce regenerates under such stands: "By planting, spacing can be controlled enabling easier protection of the spruce during stand entry for harvesting of the aspen overstorey".
Variable retention
A relatively new silvicultural system for harvesting and regeneration that retains forest structural elements (stumps, logs, snags, trees, understory species and undisturbed layers of forest floor) for at least one rotation in order to preserve environmental values associated with structurally complex forests.
"Uneven-aged and even-aged methods differ in the scale and intensity of disturbance. Uneven-aged methods maintain a mix of tree sizes or ages within a habitat patch by periodically harvesting individual or small groups of trees, Even-aged methods harvest most or all of the overstory and create a fairly uniform habitat patch dominated by trees of the same age". Even-aged management systems have been the prime methods to use when studying the effects on birds.
Mortality
A survey in 1955–56 to determine survival, development, and the reasons for success or failure of conifer pulpwood plantations (mainly of white spruce) in Ontario and Quebec up to 32 years old found that the bulk of the mortality occurred within the first four years of planting, unfavourable site and climate being the main causes of failure.
Advance growth
Naturally regenerated trees in an understorey prior to harvesting constitute a classic case of good news and bad news. Understorey white spruce is of particular importance in mixedwoods dominated by aspen, as in the B15, B18a, and B19a Sections of Manitoba, and elsewhere. Until the latter part of the last century, white spruce understorey was mostly viewed as money in the bank on a long-term, low-interest deposit, with final yield to be realized after slow natural succession, but the resource became increasingly threatened with the intensification of harvesting of aspen. White spruce plantations on mixedwood sites proved expensive, risky, and generally unsuccessful. This prompted efforts to see what might be done about growing aspen and white spruce on the same landbase by protecting existing white spruce advance growth, leaving a range of viable crop trees during the first cut, then harvesting both hardwoods and spruce in the final cut. Information about the understorey component is critical to spruce management planning. The ability of then-current harvesting technology and the crews employed to provide adequate protection for white spruce understories was questioned by Brace and Bella. Specialized equipment and training, perhaps with financial incentives, may be needed to develop procedures that would confer the degree of protection needed for the system to be feasible. Effective understorey management planning requires more than improved mixedwood inventory.
Avoidance of damage to the understorey will always be a desideratum. Sauder's (1990) paper on mixedwood harvesting describes studies designed to evaluate methods of reducing non-trivial damage to understorey residuals, i.e., damage that would compromise their chance of becoming future crop trees. Sauder concluded that: (1) operational measures that protected residual stems may not unduly increase costs, (2) all felling, conifers and hardwoods, needs to be done in one operation to minimize the entry of the feller-buncher into the residual stand, (3) several operational procedures can reduce understorey damage, some of them without incurring extra costs, and (4) successful harvesting of treatment blocks depends primarily on the intelligent location of skid trails and landings. In summary, the key to protecting the white spruce understorey without sacrificing logging efficiency is a combination of good planning, good supervision, the use of appropriate equipment, and conscientious, well-trained operators. Even the best plan will not reduce understorey damage unless its implementation is supervised.
New stands need to be established to provide for future supply of commercial white spruce from 150,000 ha of boreal mixedwoods in four of Rowe's (1972) regional Forest Sections straddling Alberta, Saskatchewan, and Manitoba, roughly from Peace River AB to Brandon MB. In the 1980s, with harvesting using conventional equipment and procedures, a dramatic increase in the demand for aspen posed a serious problem for the associated spruce understorey. Formerly, white spruce in the understories had developed to commercial size through natural succession under the protection of the hardwoods. Brace articulated a widespread concern: "The need for protection of spruce as a component of boreal mixedwoods goes beyond concern for the future commercial softwood timber supply. Concerns also include fisheries and wildlife habitat, aesthetics and recreation, a general dissatisfaction with clearcutting in mixedwoods and a strong interest in mixedwood perpetuation, as expressed recently in 41 public meetings on forestry development in northern Alberta...".
On the basis of tests of three logging systems in Alberta, Brace (1990) affirmed that significant amounts of understorey can be retained using any of those systems provided that sufficient effort is directed towards protection. Potential benefits would include increased short-term softwood timber supply, improved wildlife habitat and cutblock aesthetics, as well as reduced public criticism of previous logging practices. Stewart et al. (2001) developed statistical models to predict the natural establishment and height growth of understorey white spruce in the boreal mixedwood forest in Alberta using data from 148 permanent sample plots and supplementary information about height growth of white spruce regeneration and the amount and type of available substrate. A discriminant model correctly classified 73% of the sites as to presence or absence of a white spruce understorey, based on the amount of spruce basal area, rotten wood, ecological nutrient regime, soil clay fraction, and elevation, although it explained only 30% of the variation in the data. On sites with a white spruce understorey, a regression model related the abundance of regeneration to rotten wood cover, spruce basal area, pine basal area, soil clay fraction, and grass cover (R² = 0.36). About half the seedlings surveyed grew on rotten wood, and only 3% on mineral soil, and seedlings were ten times more likely to have established on these substrates than on litter. Exposed mineral soil covered only 0.3% of the observed transect area.
Advance growth management
Advance growth management, i.e., the use of suppressed understorey trees, can reduce reforestation costs, shorten rotations, avoid denuding the site of trees, and also reduce adverse impacts on aesthetic, wildlife, and watershed values. To be of value, advance growth must have acceptable species composition and distribution, have potential for growth following release, and not be vulnerable to excessive damage from logging.
The age of advance growth is difficult to estimate from its size, as white spruce that appears to be two to three years old may well be more than twenty years old. However, age does not seem to determine the ability of advance growth of spruce to respond to release, and trees older than 100 years have shown rapid rates of growth after release. Nor is there a clear relationship between the size of advance growth and its growth rate when released.
Where advance growth consists of both spruce and fir, the fir is apt to respond to release more quickly than the spruce, although spruce does respond. If the ratio of fir to spruce is large, however, the greater responsiveness of fir to release may subject the spruce to competition severe enough to negate much of the effect of the release treatment. Even temporary relief from shrub competition has increased height growth rates of white spruce in northwestern New Brunswick, enabling the spruce to overtop the shrubs.
Site preparation
Site preparation is any of various treatments applied to a site in order to ready it for seeding or planting. The purpose is to facilitate the regeneration of that site by the chosen method. Site preparation may be designed to achieve, singly or in any combination: improved access, by reducing or rearranging slash, and amelioration of adverse forest floor, soil, vegetation, or other biotic factors. Site preparation is undertaken to ameliorate one or more constraints that would otherwise be likely to thwart the objectives of management. A valuable bibliography on the effects of soil temperature and site preparation on subalpine and boreal tree species has been prepared by McKinnon et al. (2002).
Site preparation is the work that is done before a forest area is regenerated. Burning is one such type of preparation.
Burning
Broadcast burning is commonly used to prepare clearcut sites for planting, e.g., in central British Columbia, and in the temperate region of North America generally.
Prescribed burning is carried out primarily for slash hazard reduction and to improve site conditions for regeneration; all or some of the following benefits may accrue:
a) Reduction of logging slash, plant competition, and humus prior to direct seeding, planting, scarifying or in anticipation of natural seeding in partially cut stands or in connection with seed-tree systems.
b) Reduction or elimination of unwanted forest cover prior to planting or seeding, or prior to preliminary scarification thereto.
c) Reduction of humus on cold, moist sites to favour regeneration.
d) Reduction or elimination of slash, grass, or brush fuels from strategic areas around forested land to reduce the chances of damage by wildfire.
Prescribed burning for preparing sites for direct seeding was tried on a few occasions in Ontario, but none of the burns was hot enough to produce a seedbed that was adequate without supplementary mechanical site preparation.
Changes in soil chemical properties associated with burning include significantly increased pH, which Macadam (1987) in the Sub-boreal Spruce Zone of central British Columbia found persisting more than a year after the burn. Average fuel consumption was 20 to 24 t/ha and the forest floor depth was reduced by 28% to 36%. The increases correlated well with the amounts of slash (both total and ≥7 cm diameter) consumed. The change in pH depends on the severity of the burn and the amount consumed; the increase can be as much as two units, a hundred-fold change. Deficiencies of copper and iron in the foliage of white spruce on burned clearcuts in central British Columbia might be attributable to elevated pH levels.
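The "hundred-fold" figure follows from the logarithmic definition of pH ($\mathrm{pH} = -\log_{10}[\mathrm{H^+}]$): a rise of two units means the hydrogen-ion concentration falls by a factor of

$$10^{\Delta\mathrm{pH}} = 10^{2} = 100.$$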
Even a broadcast slash fire in a clearcut does not give a uniform burn over the whole area. Tarrant (1954), for instance, found only 4% of a 140-ha slash burn had burned severely, 47% had burned lightly, and 49% was unburned. Burning after windrowing obviously accentuates the subsequent heterogeneity.
Marked increases in exchangeable calcium also correlated with the amount of slash at least 7 cm in diameter consumed. Phosphorus availability also increased, both in the forest floor and in the 0 cm to 15 cm mineral soil layer, and the increase was still evident, albeit somewhat diminished, 21 months after burning. However, another study in the same Sub-boreal Spruce Zone found that although phosphorus availability increased immediately after the burn, it had dropped to below pre-burn levels within nine months.
Nitrogen will be lost from the site by burning, though concentrations in remaining forest floor were found by Macadam (1987) to have increased in two of six plots, the others showing decreases. Nutrient losses may be outweighed, at least in the short term, by improved soil microclimate through the reduced thickness of forest floor where low soil temperatures are a limiting factor.
The Picea/Abies forests of the Alberta foothills are often characterized by deep accumulations of organic matter on the soil surface and cold soil temperatures, both of which make reforestation difficult and result in a general deterioration in site productivity; Endean and Johnstone (1974) describe experiments to test prescribed burning as a means of seedbed preparation and site amelioration on representative clear-felled Picea/Abies areas. Results showed that, in general, prescribed burning did not reduce organic layers satisfactorily, nor did it increase soil temperature, on the sites tested. Increases in seedling establishment, survival, and growth on the burned sites were probably the result of slight reductions in the depth of the organic layer, minor increases in soil temperature, and marked improvements in the efficiency of the planting crews. Results also suggested that the process of site deterioration has not been reversed by the burning treatments applied.
Ameliorative intervention
Slash weight (the oven-dry weight of the entire crown and that portion of the stem < 4 inches in diameter) and size distribution are major factors influencing the forest fire hazard on harvested sites. Kiil (1968) provided forest managers interested in applying prescribed burning for hazard reduction and silviculture with a method for quantifying the slash load. In west-central Alberta, he felled, measured, and weighed 60 white spruce, graphed (a) slash weight per merchantable unit volume against diameter at breast height (dbh), and (b) weight of fine slash (<1.27 cm) also against dbh, and produced a table of slash weight and size distribution on one acre of a hypothetical stand of white spruce. When the diameter distribution of a stand is unknown, an estimate of slash weight and size distribution can be obtained from average stand diameter, number of trees per unit area, and merchantable cubic foot volume. The sample trees in Kiil's study had full symmetrical crowns. Densely growing trees with short and often irregular crowns would probably be overestimated; open-grown trees with long crowns would probably be underestimated.
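A minimal sketch of how such a table-building approach could be mechanized is given below. The allometric coefficients A and B are hypothetical placeholders, not Kiil's (1968) published values; only the structure of the calculation (per-tree slash weight as a function of dbh, scaled up by stem density) reflects the method described above.

```python
# Illustrative sketch of a Kiil-style slash-load estimate.
# A and B are hypothetical placeholder coefficients chosen for
# demonstration only; they are NOT Kiil's (1968) published values.

A = 0.05  # hypothetical allometric scale factor (kg per cm^B of dbh)
B = 2.0   # hypothetical allometric exponent

def slash_weight_per_tree(dbh_cm: float) -> float:
    """Oven-dry slash weight (kg) for one tree, modelled as a power
    function of diameter at breast height (dbh)."""
    return A * dbh_cm ** B

def stand_slash_load(avg_dbh_cm: float, stems_per_ha: float) -> float:
    """Approximate stand slash load (tonnes/ha) from average stand
    diameter and stem density, mirroring the shortcut used when the
    full diameter distribution of a stand is unknown."""
    return slash_weight_per_tree(avg_dbh_cm) * stems_per_ha / 1000.0

# Example: a hypothetical stand averaging 25 cm dbh with 800 stems/ha
print(f"Estimated slash load: {stand_slash_load(25.0, 800.0):.1f} t/ha")
```

As the passage notes, any single relation of this kind would tend to over-predict for densely growing trees with short, irregular crowns and under-predict for open-grown trees with long crowns.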
The need to provide shade for young outplants of Engelmann spruce in the high Rocky Mountains is emphasized by the U.S. Forest Service. Acceptable planting spots are defined as microsites on the north and east sides of down logs, stumps, or slash, and lying in the shadow cast by such material. Where the objectives of management specify more uniform spacing, or higher densities, than obtainable from an existing distribution of shade-providing material, redistribution or importing of such material has been undertaken.
Access
Site preparation on some sites might be done simply to facilitate access by planters, or to improve access and increase the number or distribution of microsites suitable for planting or seeding.
Wang et al. (2000) determined field performance of white and black spruces eight and nine years after outplanting on boreal mixedwood sites following site preparation (Donaren disc trenching versus no trenching) in two plantation types (open versus sheltered) in southeastern Manitoba. Donaren trenching slightly reduced the mortality of black spruce but significantly increased the mortality of white spruce. Significant difference in height was found between open and sheltered plantations for black spruce but not for white spruce, and root collar diameter in sheltered plantations was significantly larger than in open plantations for black spruce but not for white spruce. Black spruce open plantations had significantly smaller volume (97 cm³) compared with black spruce sheltered (210 cm³), as well as white spruce open (175 cm³) and sheltered (229 cm³) plantations. White spruce open plantations also had smaller volume than white spruce sheltered plantations. For transplant stock, strip plantations had a significantly higher volume (329 cm³) than open plantations (204 cm³). Wang et al. (2000) recommended that sheltered plantation site preparation should be used.
Mechanical
Up to 1970, no "sophisticated" site preparation equipment had become operational in Ontario, but the need for more efficacious and versatile equipment was increasingly recognized. By this time, improvements were being made to equipment originally developed by field staff, and field testing of equipment from other sources was increasing.
According to J. Hall (1970), in Ontario at least, the most widely used site preparation technique was post-harvest mechanical scarification by equipment front-mounted on a bulldozer (blade, rake, V-plow, or teeth), or dragged behind a tractor (Imsett or S.F.I. scarifier, or rolling chopper). Drag type units designed and constructed by Ontario's Department of Lands and Forests used anchor chain or tractor pads separately or in combination, or were finned steel drums or barrels of various sizes and used in sets alone or combined with tractor pad or anchor chain units.
J. Hall's (1970) report on the state of site preparation in Ontario noted that blades and rakes were found to be well suited to post-cut scarification in tolerant hardwood stands for natural regeneration of yellow birch. Plows were most effective for treating dense brush prior to planting, often in conjunction with a planting machine. Scarifying teeth, e.g., Young's teeth, were sometimes used to prepare sites for planting, but their most effective use was found to be preparing sites for seeding, particularly in backlog areas carrying light brush and dense herbaceous growth. Rolling choppers found application in treating heavy brush but could be used only on stone-free soils. Finned drums were commonly used on jack pine–spruce cutovers on fresh brushy sites with a deep duff layer and heavy slash, and they needed to be teamed with a tractor pad unit to secure good distribution of the slash. The S.F.I. scarifier, after strengthening, had been "quite successful" for two years, promising trials were under way with the cone scarifier and barrel ring scarifier, and development had begun on a new flail scarifier for use on sites with shallow, rocky soils. Recognition of the need to become more effective and efficient in site preparation led the Ontario Department of Lands and Forests to adopt the policy of seeking and obtaining for field testing new equipment from Scandinavia and elsewhere that seemed to hold promise for Ontario conditions, primarily in the north. Thus, testing was begun of the Brackekultivator from Sweden and the Vako-Visko rotary furrower from Finland.
According to J. Charton and A. Peterson, motor-manual scarification is best suited for small restoration projects (fewer than 25,000 trees) or for ecologically sensitive areas such as riparian zones or areas prone to erosion.
According to J. Charton, scarification intensity can affect first-year seedling mortality and growth. Scarification should be matched to site conditions to ensure that it benefits planted seedlings. Since both fireweed and bluejoint grass were shown to moderate soil moisture, reduced scarification intensity may be beneficial to planted seedlings in the wetter areas found on the Kenai Peninsula. However, other factors, such as encouraging natural regeneration to promote pre-beetle species compositions, should be considered. Reforestation managers should weigh the response to scarification in wet areas, balancing planted-seedling survival and growth against the desired level of natural regeneration.
Mounding
Site preparation treatments that create raised planting spots have commonly improved outplant performance on sites subject to low soil temperature and excess soil moisture. Mounding can have a marked influence on soil temperature. Draper et al. (1985), for instance, documented this as well as the effect it had on root growth of outplants (Table 30).
The mounds warmed up quickest, and at soil depths of 0.5 cm and 10 cm averaged 10 and 7 °C higher, respectively, than in the control. On sunny days, daytime surface temperature maxima on the mound and organic mat reached 25 °C to 60 °C, depending on soil wetness and shading. Mounds reached mean soil temperatures of 10 °C at 10 cm depth five days after planting, but the control did not reach that temperature until fifty-eight days after planting. During the first growing season, mounds had three times as many days with a mean soil temperature greater than 10 °C as did the control microsites.
Draper et al.'s (1985) mounds received five times the amount of photosynthetically active radiation (PAR) summed over all sampled microsites throughout the first growing season; the control treatment consistently received about 14% of daily background PAR, while mounds received over 70%. By November, fall frosts had reduced shading, eliminating the differential. Quite apart from its effect on temperature, incident radiation is also important photosynthetically. The average control microsite was exposed to levels of light above the compensation point for only three hours, i.e., one-quarter of the daily light period, whereas mounds received light above the compensation point for eleven hours, i.e., 86% of the same daily period. Assuming that incident light in the 100–600 μE/m²/s intensity range is the most important for photosynthesis, the mounds received over four times the total daily light energy that reached the control microsites.
Orientation of linear site preparation, e.g., disk-trenching
With linear site preparation, orientation is sometimes dictated by topography or other considerations, but where the orientation can be chosen, it can make a significant difference. A disk-trenching experiment in the Sub-boreal Spruce Zone in interior British Columbia investigated the effect on growth of young outplants (lodgepole pine) in 13 microsite planting positions: berm, hinge, and trench; in north, south, east, and west aspects, as well as in untreated locations between the furrows. Tenth-year stem volumes of trees on south, east, and west-facing microsites were significantly greater than those of trees on north-facing and untreated microsites. However, planting spot selection was seen to be more important overall than trench orientation.
In a Minnesota study, the N–S strips accumulated more snow, but the snow melted faster than on E–W strips in the first year after felling. Snow-melt was faster on strips near the centre of the strip-felled area than on border strips adjoining the intact stand. The strips, 50 feet (15.24 m) wide, alternating with uncut strips 16 feet (4.88 m) wide, were felled in a Pinus resinosa stand, aged 90 to 100 years.
| Technology | Trees and forestry | null |
1031639 | https://en.wikipedia.org/wiki/Victoria%20amazonica | Victoria amazonica | Victoria amazonica is a species of flowering plant, the second largest in the water lily family Nymphaeaceae. It is called Vitória-Régia or Iaupê-Jaçanã ("the lilytrotter's waterlily") in Brazil and Atun Sisac ("great flower") in Inca (Quechua). Its native region is tropical South America, specifically Guyana and the Amazon Basin.
Taxonomy
The species is a member of the genus Victoria, placed in the family Nymphaeaceae or sometimes in the Euryalaceae. The first published description of the genus was by John Lindley in October 1837, based on specimens of this plant returned from British Guiana by Robert Schomburgk. Lindley named the genus after the newly ascended Queen Victoria, and the species Victoria regia. The spelling in Schomburgk's description in Athenaeum, published the month before, was given as Victoria Regina. Despite this spelling being adopted by the Botanical Society of London for their new emblem, Lindley's was the version used throughout the 19th century.
An earlier account of the species, Euryale amazonica by Eduard Friedrich Poeppig, in 1832 described an affinity with Euryale ferox. A collection and description was also made by the French botanist Aimé Bonpland in 1825. In 1850 James De Carle Sowerby recognized Poeppig's earlier description and transferred its epithet amazonica. The new name was rejected by Lindley. The current name, Victoria amazonica, did not come into widespread use until the 20th century.
Cytology
The diploid chromosome count of Victoria amazonica is 20.
Description
Victoria amazonica has very large leaves (laminae), commonly called "pads" or "lily pads", up to in diameter, that float on the water's surface on a submerged stalk (petiole), in length, rivaling the length of the green anaconda, a snake local to its habitat. These leaves are enormously buoyant if the weight is distributed evenly over the entire surface of the leaf (as by a piece of plywood, which should be of neutral buoyancy). In 1896 a V. amazonica leaf at Tower Grove Park, Saint Louis, Missouri bore the "unprecedented" weight of 250 pounds (113.6 kg). However, in 1867 William Sowerby of Regents Park Botanic Garden in London placed 426 pounds (193.9 kg) on a leaf only 5' 6" (168 cm) in diameter. One leaf of a specimen grown in Ghent, Belgium bore a load of 498 pounds (226 kg). It is the second-largest waterlily in the world.
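The Regents Park figure is at least physically plausible on a rough Archimedes-principle estimate, treating the leaf as a rigid, watertight disc 168 cm in diameter:

$$A = \pi r^{2} = \pi (0.84\,\mathrm{m})^{2} \approx 2.2\,\mathrm{m}^{2}, \qquad d = \frac{m}{\rho_{\text{water}}\,A} = \frac{194\,\mathrm{kg}}{1000\,\mathrm{kg/m^{3}} \times 2.2\,\mathrm{m^{2}}} \approx 0.09\,\mathrm{m},$$

i.e., the evenly loaded leaf need displace only about 9 cm of water to stay afloat. This is only a back-of-envelope check, not a measurement from the sources cited above.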
V. amazonica is native to the shallow waters of the Amazon River basin, such as oxbow lakes (called igarapés) and bayous (called paranás). In its native habitat, the flowers first begin to open as the sun starts to set and can take up to 48 hours to open fully. These flowers can grow up to 40 cm (16 in) in diameter and 3.5 pounds (1.6 kg) in weight, exceeded in mass only by members of the genus Rafflesia. All of the flowers of one particular plant will, on a given evening, all be in the female phase or all in the male phase, so that pollination must be by a different individual, precluding self-pollination.
The stem and the undersides of the leaves are coated with many small spines to defend against fish and other underwater herbivores, although the spines can also play an offensive role: as the lily unfolds, it aggressively seeks out sunlight, crushing rival plants in the vicinity, depriving plants directly beneath its leaves of this vital resource, and significantly darkening the waters below. Younger giant water lilies are even known to swing their spiny stalks and buds around as they grow, forcibly making space for themselves.
Ecology
Each plant continues to produce flowers for a full growing season, and they have co-evolved a mutualistic relationship with a species of scarab beetle of the genus Cyclocephala as a pollinator. All the buds in a single patch will begin to open at the same time, and as they do, they give off a fruity smell. At this point the flower petals are white, and the beetles are attracted both to the colour and the smell of the flower. At nightfall the flower stops producing the odor, and it closes, trapping the beetles inside its carpellary appendages. Here, the stamens are protected by the paracarpels, and the flower remains closed through the next day. The cavity in which the beetle is trapped is composed of a spongy, starchy tissue that provides nourishment for the beetle. During this time, anthocyanins start to be released by the plant, which in turn changes the petals from white to a reddish pink colour, a sign that the flower has been pollinated. As the beetle feeds inside the flower, the stamens fall inward and the anthers, which have already fallen, drop pollen on the stamens. During the evening of the second day, the flowers will have opened enough to release the beetle, and as it pushes its way through the stamens it becomes covered in pollen. These insects then go on to find a newly opened water lily and pollinate it with the pollen they carry from the previous flower. This process was described in detail by Sir Ghillean Prance and Jorge Arias.
History
Victoria regia, as it was named, was described by Tadeáš Haenke in 1801. It was once the subject of rivalry between Victorian gardeners in England. Always on the lookout for a spectacular new species with which to impress their peers, Victorian "gardeners" such as the Duke of Devonshire and the Duke of Northumberland started a well-mannered competition to become the first to cultivate and bring to flower this enormous lily. In the end, the two aforementioned dukes became the first to achieve this, Joseph Paxton (for the Duke of Devonshire) being the first in November 1849 by replicating the lily's warm swampy habitat (not easy in winter in England with only coal-fired boilers for heating), and a "Mr Ivison" the second and more constantly successful (for Northumberland) at Syon House.
The species captured the imagination of the public and was the subject of several dedicated monographs. The botanical illustrations of cultivated specimens in Fitch and W.J. Hooker's 1851 work Victoria Regia received critical acclaim in the Athenaeum: "they are accurate, and they are beautiful". The Duke of Devonshire presented Queen Victoria with one of the first of these flowers and named it in her honour. The lily, with its ribbed undersurface and leaves veined "like transverse girders and supports", was Paxton's inspiration for The Crystal Palace, a building four times the size of St. Peter's in Rome.
It is depicted in the Guyanese coat of arms.
| Biology and health sciences | Nymphaeales | Plants |
1032059 | https://en.wikipedia.org/wiki/King%20crab | King crab | King crabs are decapod crustaceans of the family Lithodidae that are chiefly found in deep waters and are adapted to cold environments. They are composed of two subfamilies: Lithodinae, which tend to inhabit deep waters, are globally distributed, and comprise the majority of the family's species diversity; and Hapalogastrinae, which are endemic to the North Pacific and inhabit exclusively shallow waters. King crabs superficially resemble true crabs but are generally understood to be closest to the pagurid hermit crabs. This placement of king crabs among the hermit crabs is supported by several anatomical peculiarities which are present only in king crabs and hermit crabs, making them a prominent example of carcinisation among decapods. Several species of king crabs, especially in Alaskan and southern South American waters, are targeted by commercial fisheries and have been subject to overfishing.
Taxonomy
The phylogeny of king crabs as hermit crabs that underwent secondary calcification and left their shells has been suspected since the late 1800s. They are believed to have originated during the Early Miocene in shallow North Pacific waters, where most king crab genera – including all Hapalogastrinae – are distributed and where they exhibit a high amount of morphological diversity.
In 2007, the king crabs were moved from their classification among the hermit crabs in the superfamily Paguroidea into a separate superfamily, Lithodoidea. This was not without controversy, as there is widespread consensus in the scientific community that king crabs are derived from hermit crabs and closely related to pagurid hermit crabs; therefore, a separate superfamily in the classification poorly reflected the phylogenetic relationship of this taxon. In 2023, king crabs were folded back into Paguroidea, with Lithodoidea being considered superseded.
There are 15 known genera of king crabs across two subfamilies. These include:
Hapalogastrinae
Lithodinae
Description
King crabs are distinctive among hermit crabs for their superficial similarity to true crabs. They are a morphologically diverse group, but they all have in common the functionalities of their five pairs of legs, called pereopods: the first and anteriormost set are chelipeds whose right side is generally noticeably more robust than the left; the second, third, and fourth are walking legs tipped with sharp dactyli; and the fifth, used for cleaning, are very small and generally sit inside the branchial chamber. On their underside, they have a short abdomen – composed of calcified plates – which is asymmetrical in females. This abdomen (sometimes called a pleon) is folded against the underside of the cephalothorax and is composed of six segments – called somites or pleonites – and a telson. In Hapalogastrinae, this abdomen is soft, while it is hard and calcified in members of Lithodinae. Lithodids lack any sort of uropod seen in some decapods.
Distribution
King crabs are typically found in deep waters, especially in polar and subpolar regions and near hydrothermal vents and cold seeps. Members of Lithodinae can be found in all five of the world's oceans, namely the Pacific, Atlantic, Indian, Southern, and Arctic, while members of Hapalogastrinae are only found in the North Pacific. Members of Hapalogastrinae exhibit a tolerance for higher temperatures than Lithodinae; whereas Lithodinae tend to live exclusively in deep waters or – less commonly – high-latitude shallow waters, Hapalogastrinae are found only in shallow waters.
Fisheries
Because of their large size, the taste of their meat, and their status as a delicacy, some species of king crabs are caught and sold as food. Red (Paralithodes camtschaticus) and blue (Paralithodes platypus) king crabs are heavily targeted by commercial fisheries in Alaska and have been for several decades. However, populations have fluctuated in the past 25 years, and some areas are currently closed due to overfishing. Alaskan fisheries additionally target the golden king crab (Lithodes aequispinus). In South America, both the southern king crab (Lithodes santolla) and several species of Paralomis are targeted by commercial fisheries, and as a result, the population of L. santolla has seen a dramatic decline.
Symbionts and parasites
Juveniles of species of king crabs, including Neolithodes diomedeae, use a species (Scotoplanes sp. A) of sea cucumber (often known as "sea pigs") as hosts and can be found on top of and under Scotoplanes. The Scotoplanes reduce the risk of predation for the N. diomedeae, while the Scotoplanes are not harmed from being hosts, which supports the consensus that the two organisms have a commensal relationship. Endosymbiotic microorganisms of the order Eccrinida have been found in Paralithodes camtschaticus and Lithodes maja, living in their hindgut between molts.
Some species of king crab, including those of the genera Lithodes, Neolithodes, Paralithodes, and likely Echidnocerus, act as hosts to some parasitic species of Careproctus fish. The Careproctus lays eggs in the gill chamber of the king crab, which serves as a well-protected and aerated area for the eggs to reside until they hatch. On occasion king crabs have been found to host the eggs of multiple species of Careproctus simultaneously. King crabs are additionally parasitized by the rhizocephalan genus Briarosaccus, a type of barnacle. The barnacle irreversibly sterilizes the crab, and over 50% of some king crab populations are affected.
| Biology and health sciences | Crabs and hermit crabs | Animals |
1032610 | https://en.wikipedia.org/wiki/Focus%20%28optics%29 | Focus (optics) | In geometrical optics, a focus, also called an image point, is a point where light rays originating from a point on the object converge. Although the focus is conceptually a point, physically the focus has a spatial extent, called the blur circle. This non-ideal focusing may be caused by aberrations of the imaging optics. Even in the absence of aberrations, the smallest possible blur circle is the Airy disc caused by diffraction from the optical system's aperture; diffraction is the ultimate limit to the light focusing ability of any optical system. Aberrations tend to worsen as the aperture diameter increases, while the Airy circle is smallest for large apertures.
An image, or image point or region, is in focus if light from object points is converged almost as much as possible in the image, and out of focus if light is not well converged. The border between these is sometimes defined using a "circle of confusion" criterion.
A principal focus or focal point is a special focus:
For a lens, or a spherical or parabolic mirror, it is a point onto which collimated light parallel to the axis is focused. Since light can pass through a lens in either direction, a lens has two focal points – one on each side. The distance in air from the lens or mirror's principal plane to the focus is called the focal length; the thin-lens relation after this list makes this explicit.
Elliptical mirrors have two focal points: light that passes through one of these before striking the mirror is reflected such that it passes through the other.
The focus of a hyperbolic mirror is either of two points which have the property that light from one is reflected as if it came from the other.
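For the ideal thin-lens case referred to above, the Gaussian lens equation relates the object distance $s_o$, the image distance $s_i$, and the focal length $f$:

$$\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f}.$$

Collimated light corresponds to $s_o \to \infty$, which gives $s_i = f$: the parallel rays converge at the focal point, one focal length from the lens.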
Diverging (negative) lenses and convex mirrors do not focus a collimated beam to a point. Instead, the focus is the point from which the light appears to be emanating, after it travels through the lens or reflects from the mirror. A convex parabolic mirror will reflect a beam of collimated light to make it appear as if it were radiating from the focal point, or conversely, reflect rays directed toward the focus as a collimated beam. A convex elliptical mirror will reflect light directed towards one focus as if it were radiating from the other focus, both of which are behind the mirror. A convex hyperbolic mirror will reflect rays emanating from the focal point in front of the mirror as if they were emanating from the focal point behind the mirror. Conversely, it can focus rays directed at the focal point that is behind the mirror towards the focal point that is in front of the mirror as in a Cassegrain telescope.
| Physical sciences | Optics | null |
1032780 | https://en.wikipedia.org/wiki/Preventive%20healthcare | Preventive healthcare | Preventive healthcare, or prophylaxis, is the application of healthcare measures to prevent diseases. Disease and disability are affected by environmental factors, genetic predisposition, disease agents, and lifestyle choices, and are dynamic processes that begin before individuals realize they are affected. Disease prevention relies on anticipatory actions that can be categorized as primal, primary, secondary, and tertiary prevention.
Each year, millions of people die of preventable causes. A 2004 study showed that about half of all deaths in the United States in 2000 were due to preventable behaviors and exposures. Leading causes included cardiovascular disease, chronic respiratory disease, unintentional injuries, diabetes, and certain infectious diseases. This same study estimates that 400,000 people die each year in the United States due to poor diet and a sedentary lifestyle. According to estimates made by the World Health Organization (WHO), about 55 million people died worldwide in 2011, and two-thirds of these died from non-communicable diseases, including cancer, diabetes, and chronic cardiovascular and lung diseases. This is an increase from the year 2000, during which 60% of deaths were attributed to these diseases.
Preventive healthcare is especially important given the worldwide rise in the prevalence of chronic diseases and deaths from these diseases. There are many methods for prevention of disease. One of them is prevention of teenage smoking through information giving. It is recommended that adults and children aim to visit their doctor for regular check-ups, even if they feel healthy, to perform disease screening, identify risk factors for disease, discuss tips for a healthy and balanced lifestyle, stay up to date with immunizations and boosters, and maintain a good relationship with a healthcare provider. In pediatrics, some common examples of primary prevention are encouraging parents to turn down the temperature of their home water heater in order to avoid scalding burns, encouraging children to wear bicycle helmets, and suggesting that people use the air quality index (AQI) to check the level of pollution in the outside air before engaging in sporting activities. Some common disease screenings include checking for hypertension (high blood pressure), hyperglycemia (high blood sugar, a risk factor for diabetes mellitus), hypercholesterolemia (high blood cholesterol), screening for colon cancer, depression, HIV and other common types of sexually transmitted disease such as chlamydia, syphilis, and gonorrhea, mammography (to screen for breast cancer), colorectal cancer screening, a Pap test (to check for cervical cancer), and screening for osteoporosis. Genetic testing can also be performed to screen for mutations that cause genetic disorders or predisposition to certain diseases such as breast or ovarian cancer. However, these measures are not affordable for every individual and the cost effectiveness of preventive healthcare is still a topic of debate.
Overview
Preventive healthcare strategies are described as taking place at the primal, primary, secondary, and tertiary prevention levels.
Preventive medicine was advocated in the early twentieth century by Sara Josephine Baker; in the 1940s, Hugh R. Leavell and E. Gurney Clark coined the term primary prevention. They worked at the Harvard and Columbia University Schools of Public Health, respectively, and later expanded the levels to include secondary and tertiary prevention. Goldston (1987) notes that these levels might be better described as "prevention, treatment, and rehabilitation", although the terms primary, secondary, and tertiary prevention are still in use today. The concept of primal prevention has been created much more recently, in relation to the new developments in molecular biology over the last fifty years, more particularly in epigenetics, which point to the paramount importance of environmental conditions, both physical and affective, on the organism during its fetal and newborn life, or so-called primal period of life.
Primal and primordial preventions
Primal prevention is health promotion par excellence. New knowledge in molecular biology, in particular epigenetics, points to how much affective as well as physical environment during fetal and newborn life may determine adult health. This way of promoting health consists mainly in providing future parents with pertinent, unbiased information on primal health and supporting them during their child's primal period of life (i.e., "from conception to first anniversary" according to definition by the Primal Health Research Centre, London). This includes adequate parental leave, ideally for both parents, with kin caregiving and financial help where needed.
Primordial prevention refers to all measures designed to prevent the development of risk factors in the first place, early in life, and even preconception, as Ruth A. Etzel has described it "all population-level actions and measures that inhibit the emergence and establishment of adverse environmental, economic, and social conditions". This could be reducing air pollution or prohibiting endocrine-disrupting chemicals in food-handling equipment and food contact materials.
Primary prevention
Primary prevention consists of traditional health promotion and "specific protection". Health promotion activities include prevention strategies such as health education and lifestyle medicine; these are everyday, non-clinical life choices, such as eating nutritious meals and exercising often, that prevent lifestyle-related medical conditions, improve the quality of life, and create a sense of overall well-being. Preventing disease and creating overall well-being prolongs life expectancy. Health-promotional activities do not target a specific disease or condition but rather promote health and well-being on a very general level. On the other hand, specific protection targets a type or group of diseases and complements the goals of health promotion.
Food
Food is the most basic tool in preventive healthcare. Poor nutrition is linked to various chronic illnesses; a healthy diet and proper nutrition can therefore help prevent illness.
Access
The 2011 National Health Interview Survey performed by the Centers for Disease Control and Prevention was the first national survey to include questions about the ability to pay for food. Difficulty paying for food, medicine, or both is a problem facing one out of three Americans. If better food options were available through food banks, soup kitchens, and other resources for low-income people, obesity and the chronic conditions that accompany it would be better controlled. A food desert is an area with restricted access to healthy foods due to a lack of supermarkets within a reasonable distance; these are often low-income neighborhoods where the majority of residents lack transportation. There have been several grassroots movements since 1995 to encourage urban gardening, using vacant lots to grow food cultivated by local residents. Mobile fresh markets, specially outfitted buses that bring affordable fresh fruits and vegetables to low-income neighborhoods, are another resource for residents of food deserts.
Food education and guidance
It has been proposed that healthy longevity diets be included in standard healthcare, as switching from a "typical Western diet" could often extend life by a decade.
Protective measures
Specific protective measures, such as water purification, sewage treatment, and the development of personal hygienic routines (such as regular hand-washing and safe sex to prevent sexually transmitted infections), became mainstream upon the discovery of infectious disease agents and have decreased the rates of communicable diseases spread in unsanitary conditions.
Scientific advancements in genetics have contributed to the knowledge of hereditary diseases and have facilitated progress in specific protective measures in individuals who are carriers of a disease gene or have an increased predisposition to a specific disease. Genetic testing has allowed physicians to make quicker and more accurate diagnoses and has allowed for tailored treatments or personalized medicine.
Food safety has a significant impact on human health, and food quality monitoring has increased.
Water, including drinking water, is also monitored in many cases to protect health, and there is some monitoring of air pollution. In many cases, environmental standards, such as maximum pollution levels, regulation of chemicals, occupational hygiene requirements, or consumer protection regulations, establish some protection in combination with the monitoring.
Preventive measures like vaccines and medical screenings are also important. Using personal protective equipment (PPE) properly and getting the recommended vaccines and screenings can help decrease the spread of respiratory diseases, protecting healthcare workers as well as their patients.
Secondary prevention
Secondary prevention deals with latent diseases and attempts to prevent an asymptomatic disease from progressing to symptomatic disease. Certain measures can be classified as either primary or secondary prevention, depending on definitions of what constitutes a disease; in general, primary prevention addresses the root cause of a disease or injury, whereas secondary prevention aims to detect and treat a disease early on. Secondary prevention consists of "early diagnosis and prompt treatment" to contain the disease and prevent its spread to other individuals, and "disability limitation" to prevent potential future complications and disabilities from the disease. For example, early diagnosis and prompt treatment for a syphilis patient would include a course of antibiotics to destroy the pathogen, and screening and treatment of any infants born to syphilitic mothers. Disability limitation for syphilitic patients includes continued check-ups on the heart, cerebrospinal fluid, and central nervous system of patients to curb any damaging effects such as blindness or paralysis.
Tertiary prevention
Finally, tertiary prevention attempts to reduce the damage caused by symptomatic disease by focusing on mental, physical, and social rehabilitation. Unlike secondary prevention, which aims to prevent disability, the objective of tertiary prevention is to maximize the remaining capabilities and functions of an already disabled patient. Goals of tertiary prevention include: preventing pain and damage, halting progression and complications from disease, and restoring the health and functions of the individuals affected by disease. For syphilitic patients, rehabilitation includes measures to prevent complete disability from the disease, such as implementing work-place adjustments for the blind and paralyzed or providing counseling to restore normal daily functions to the greatest extent possible.
For these patients, the general use of machinery with adequate ventilation and airflow is suggested in order to halt the progression and complications of disease. A study conducted in nursing homes concluded that using evaporative humidifiers to maintain indoor humidity within the range of 40–60% can reduce respiratory risk, since some pathogens survive less well at these humidity levels, which reduces the viability of airborne disease particles.
Leading causes of preventable death
United States
The leading preventable cause of death in the United States is tobacco; however, poor diet and lack of exercise may soon surpass tobacco as a leading cause of death. These behaviors are modifiable and public health and prevention efforts could make a difference to reduce these deaths.
Worldwide
The leading causes of preventable death worldwide are similar to those in the United States. There are a few differences between the two, such as malnutrition, pollution, and unsafe sanitation, that reflect health disparities between the developing and developed world.
However, several of the leading causes of death, or underlying contributors to earlier death, may not be counted as "preventable" causes of death. A study concluded that pollution was "responsible for approximately 9 million deaths per year" in 2019. Another study concluded that the global mean loss of life expectancy (LLE, a measure similar to years of potential life lost) from air pollution in 2015 was 2.9 years, substantially more than, for example, 0.3 years from all forms of direct violence, albeit a significant fraction of the LLE is considered unavoidable (such as pollution from some natural wildfires).
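As an illustration of how such a metric works, the following minimal Python sketch computes years of potential life lost (YPLL) against a reference age; the reference age of 75 and the sample ages at death are hypothetical illustrations, not figures from the studies cited above.

    # Years of potential life lost (YPLL): sum of (reference_age - age)
    # over deaths occurring before the reference age.
    # All inputs here are hypothetical, for illustration only.
    def ypll(ages_at_death, reference_age=75):
        return sum(reference_age - age
                   for age in ages_at_death if age < reference_age)

    deaths = [45, 62, 80, 70]   # hypothetical ages at death
    print(ypll(deaths))         # (75-45) + (75-62) + (75-70) = 48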
A landmark study conducted by the World Health Organization and the International Labour Organization found that exposure to long working hours is the occupational risk factor with the largest attributable burden of disease, with an estimated 745,000 fatalities from ischemic heart disease and stroke events in 2016. With this study, prevention of exposure to long working hours has emerged as a priority for preventive healthcare in workplace settings.
Child mortality
In 2010, 7.6 million children died before reaching the age of 5. While this is a decrease from 9.6 million in 2000, it was still far from the fourth Millennium Development Goal to decrease child mortality by two-thirds by 2015. Of these deaths, about 64% were due to infection including diarrhea, pneumonia, and malaria. About 40% of these deaths occurred in neonates (children ages 1–28 days) due to pre-term birth complications. The highest number of child deaths occurred in Africa and Southeast Asia. As of 2015 in Africa, almost no progress has been made in reducing neonatal death since 1990. In 2010, India, Nigeria, Democratic Republic of the Congo, Pakistan, and China contributed to almost 50% of global child deaths. Targeting efforts in these countries is essential to reducing the global child death rate.
Child mortality is caused by factors including poverty, environmental hazards, and lack of maternal education. In 2003, the World Health Organization compiled a list of interventions that were judged economically and operationally "feasible", based on the healthcare resources and infrastructure in 42 nations that contribute to 90% of all infant and child deaths, indicating how many infant and child deaths could have been prevented in 2000, assuming universal healthcare coverage.
Preventive methods
Obesity
Obesity is a major risk factor for a wide variety of conditions including cardiovascular diseases, hypertension, certain cancers, and type 2 diabetes. In order to prevent obesity, it is recommended that individuals adhere to a consistent exercise regimen as well as a nutritious and balanced diet. A healthy individual should aim to acquire 10% of their energy from proteins, 15–20% from fat, and over 50% from complex carbohydrates, while avoiding alcohol as well as foods high in fat, salt, and sugar. Sedentary adults should aim for at least half an hour of moderate-level daily physical activity, eventually increasing to include at least 20 minutes of intense exercise three times a week. Preventive healthcare offers many benefits to those who choose to take an active role in it, since the medical system is otherwise geared toward treating acute symptoms of disease only after they have brought a patient to the emergency room. Obesity is an ongoing epidemic in the United States. Healthy eating and regular exercise play a significant role in reducing an individual's risk for type 2 diabetes. A 2008 study concluded that about 23.6 million people in the United States had diabetes, including 5.7 million who had not been diagnosed; 90 to 95 percent of people with diabetes have type 2 diabetes. Diabetes is the main cause of kidney failure, limb amputation, and new-onset blindness in American adults.
Sexually transmitted infections
Sexually transmitted infections (STIs), such as syphilis and HIV, are common but preventable with safe-sex practices. STIs can be asymptomatic or cause a range of symptoms. Preventive measures for STIs are called prophylactics. The term especially applies to the use of condoms, which are highly effective at preventing disease, but also to other devices meant to prevent STIs, such as dental dams and latex gloves. Other means of preventing STIs include education on how to use condoms or other such barrier devices, testing partners before having unprotected sex, receiving regular STI screenings so that infections can be treated and not spread to partners, and, specifically for HIV, regularly taking prophylactic antiretroviral drugs such as Truvada. Post-exposure prophylaxis, started within 72 hours (optimally less than 1 hour) after exposure to high-risk fluids, can also protect against HIV transmission.
Malaria prevention using genetic modification
Genetically modified mosquitoes are being used in developing countries to control malaria. This approach has been subject to objections and controversy.
Thrombosis
Thrombosis is a serious circulatory disease affecting thousands of people each year, usually older persons undergoing surgical procedures, women taking oral contraceptives, and travelers. The consequences of thrombosis can include heart attacks and strokes. Prevention can include exercise, anti-embolism stockings, pneumatic devices, and pharmacological treatments.
Cancer
In recent years, cancer has become a global problem. Low and middle income countries share a majority of the cancer burden largely due to exposure to carcinogens resulting from industrialization and globalization. However, primary prevention of cancer and knowledge of cancer risk factors can reduce over one third of all cancer cases. Primary prevention of cancer can also prevent other diseases, both communicable and non-communicable, that share common risk factors with cancer.
Lung cancer
Lung cancer is the leading cause of cancer-related deaths in the United States and Europe and is a major cause of death in other countries. Tobacco is an environmental carcinogen and the major underlying cause of lung cancer. Between 25% and 40% of all cancer deaths and about 90% of lung cancer cases are associated with tobacco use. Other carcinogens include asbestos and radioactive materials. Both smoking and second-hand exposure from other smokers can lead to lung cancer and eventually death.
Prevention of tobacco use is paramount to prevention of lung cancer. Individual, community, and statewide interventions can prevent or cease tobacco use. 90% of adults in the U.S. who have ever smoked did so prior to the age of 20. In-school prevention/educational programs, as well as counseling resources, can help prevent and cease adolescent smoking. Other cessation techniques include group support programs, nicotine replacement therapy (NRT), hypnosis, and self-motivated behavioral change. Studies have shown long term success rates (>1 year) of 20% for hypnosis and 10%-20% for group therapy.
Cancer screening programs serve as effective sources of secondary prevention. The Mayo Clinic, Johns Hopkins, and Memorial Sloan-Kettering hospitals conducted annual x-ray screenings and sputum cytology tests and found that lung cancer was detected at higher rates and earlier stages, with more favorable treatment outcomes, findings that have been cited in support of widespread investment in such programs.
Legislation can also affect smoking prevention and cessation. In 1992, Massachusetts (United States) voters passed a bill adding an extra 25 cent tax to each pack of cigarettes, despite intense lobbying and $7.3 million spent by the tobacco industry to oppose this bill. Tax revenue goes toward tobacco education and control programs and has led to a decline of tobacco use in the state.
Lung cancer and tobacco smoking are increasing worldwide, especially in China. China is responsible for about one-third of the global consumption and production of tobacco products. Tobacco control policies have been ineffective as China is home to 350 million regular smokers and 750 million passive smokers and the annual death toll is over 1 million. Recommended actions to reduce tobacco use include decreasing tobacco supply, increasing tobacco taxes, widespread educational campaigns, decreasing advertising from the tobacco industry, and increasing tobacco cessation support resources. In Wuhan, China, a 1998 school-based program implemented an anti-tobacco curriculum for adolescents and reduced the number of regular smokers, though it did not significantly decrease the number of adolescents who initiated smoking. This program was therefore effective in secondary but not primary prevention and shows that school-based programs have the potential to reduce tobacco use.
Skin cancer
Skin cancer is the most common cancer in the United States. The most lethal form of skin cancer, melanoma, leads to over 50,000 annual deaths in the United States. Childhood prevention is particularly important because a significant portion of ultraviolet radiation exposure from the sun occurs during childhood and adolescence and can subsequently lead to skin cancer in adulthood. Furthermore, childhood prevention can lead to the development of healthy habits that continue to prevent cancer for a lifetime.
The Centers for Disease Control and Prevention (CDC) recommends several primary prevention methods including: limiting sun exposure between 10 AM and 4 PM, when the sun is strongest, wearing tighter-weave natural cotton clothing, wide-brim hats, and sunglasses as protective covers, using sunscreens that protect against both UV-A and UV-B rays, and avoiding tanning salons. Sunscreen should be reapplied after sweating, exposure to water (through swimming for example) or after several hours of sun exposure. Since skin cancer is very preventable, the CDC recommends school-level prevention programs including preventive curricula, family involvement, participation and support from the school's health services, and partnership with community, state, and national agencies and organizations to keep children away from excessive UV radiation exposure.
Most skin cancer and sun protection data come from Australia and the United States. An international study reported that Australians tended to demonstrate greater knowledge of sun protection and skin cancer compared to other countries. Among children, adolescents, and adults, sunscreen was the most commonly used form of skin protection. However, many adolescents purposely used sunscreen with a low sun protection factor (SPF) in order to get a tan. Various Australian studies have shown that many adults failed to use sunscreen correctly; many applied sunscreen well after their initial sun exposure and/or failed to reapply when necessary. A 2002 case-control study in Brazil showed that only 3% of case participants and 11% of control participants used sunscreen with SPF >15.
Cervical cancer
Cervical cancer ranks among the top three most common cancers among women in Latin America, sub-Saharan Africa, and parts of Asia. Cervical cytology screening aims to detect abnormal lesions in the cervix so that women can undergo treatment prior to the development of cancer. Given that high quality screening and follow-up care has been shown to reduce cervical cancer rates by up to 80%, most developed countries now encourage sexually active women to undergo a Pap test every 3–5 years. Finland and Iceland have developed effective organized programs with routine monitoring and have managed to significantly reduce cervical cancer mortality while using fewer resources than unorganized, opportunistic programs such as those in the United States or Canada.
In developing nations in Latin America, such as Chile, Colombia, Costa Rica, and Cuba, both public and privately organized programs have offered women routine cytological screening since the 1970s. However, these efforts have not resulted in a significant change in cervical cancer incidence or mortality in these nations. This is likely due to low-quality, inefficient testing. By contrast, Puerto Rico, which has offered early screening since the 1960s, witnessed almost a 50% decline in cervical cancer incidence and almost a four-fold decrease in mortality between 1950 and 1990. Brazil, Peru, India, and several high-risk nations in sub-Saharan Africa, which lack organized screening programs, have a high incidence of cervical cancer.
Colorectal cancer
Colorectal cancer is globally the second most common cancer in women and the third-most common in men, and the fourth most common cause of cancer death after lung, stomach, and liver cancer, having caused 715,000 deaths in 2010.
It is also highly preventable; about 80 percent of colorectal cancers begin as benign growths, commonly called polyps, which can be easily detected and removed during a colonoscopy. Other methods of screening for polyps and cancers include fecal occult blood testing. Lifestyle changes that may reduce the risk of colorectal cancer include increasing consumption of whole grains, fruits and vegetables, and reducing consumption of red meat.
Health disparities and barriers to accessing care
Access to healthcare and preventive health services is unequal, as is the quality of care received. A study conducted by the Agency for Healthcare Research and Quality (AHRQ) revealed health disparities in the United States. In the United States, elderly adults (>65 years old) received worse care and had less access to care than their younger counterparts. The same trends are seen when comparing all racial minorities (black, Hispanic, Asian) to white patients, and low-income people to high-income people. Common barriers to accessing and utilizing healthcare resources included lack of income and education, language barriers, and lack of health insurance. Minorities were less likely than whites to possess health insurance, as were individuals who completed less education. These disparities made it more difficult for the disadvantaged groups to have regular access to a primary care provider, receive immunizations, or receive other types of medical care. Additionally, uninsured people tend not to seek care until their diseases progress to chronic and serious states, and they are also more likely to forgo necessary tests and treatments and to leave prescriptions unfilled.
These sorts of disparities and barriers exist worldwide as well. Often, there are gaps of decades in life expectancy between developing and developed countries. For example, Japan has an average life expectancy that is 36 years greater than that in Malawi. Low-income countries also tend to have fewer physicians than high-income countries. In Nigeria and Myanmar, there are fewer than 4 physicians per 100,000 people, while Norway and Switzerland have a ratio that is ten-fold higher. Common barriers worldwide include lack of availability of health services and healthcare providers in the region, great physical distance between the home and health service facilities, high transportation costs, high treatment costs, and social norms and stigma toward accessing certain health services.
Economics of lifestyle-based prevention
With lifestyle factors such as diet and exercise rising to the top of preventable death statistics, the economics of healthy lifestyles is a growing concern. There is little question that positive lifestyle choices provide an investment in health throughout life. Traditional measures of success such as the quality-adjusted life year (QALY) are of great value in gauging this investment, but the QALY does not account for the cost of chronic conditions or future lost earnings because of poor health.
Developing future economic models that would guide both private and public investments, as well as drive future policy, to evaluate the efficacy of positive lifestyle choices on health is a major topic for economists globally. Americans spend over $3 trillion a year on health care but have a higher rate of infant mortality, shorter life expectancies, and a higher rate of diabetes than other high-income nations, because of negative lifestyle choices. Despite these large costs, very little is spent on prevention for lifestyle-caused conditions in comparison. In 2016, the Journal of the American Medical Association estimated that $101 billion was spent in 2013 on the preventable disease of diabetes, and another $88 billion was spent on heart disease. In an effort to encourage healthy lifestyle choices, as of 2010 workplace wellness programs were on the rise, but the economics and effectiveness data were continuing to evolve and develop.
Health insurance coverage impacts lifestyle choices; even intermittent loss of coverage has had negative effects on healthy choices in the U.S. Repeal of the Affordable Care Act (ACA) could significantly impact coverage for many Americans, as well as "The Prevention and Public Health Fund", the U.S.'s first and only mandatory funding stream dedicated to improving public health, including counseling on lifestyle prevention issues such as weight management, alcohol use, and treatment for depression.
Because in the U.S. chronic illnesses predominate as a cause of death and pathways for treating chronic illnesses are complex and multifaceted, prevention is a best practice approach to chronic disease when possible. In many cases, prevention requires mapping complex pathways to determine the ideal point for intervention. Cost-effectiveness of prevention is achievable, but impacted by the length of time it takes to see effects/outcomes of intervention. This makes prevention efforts difficult to fund—particularly in strained financial contexts. Prevention potentially creates other costs as well, due to extending the lifespan and thereby increasing opportunities for illness. In order to assess the cost-effectiveness of prevention, the cost of the preventive measure, savings from avoiding morbidity, and the cost from extending the lifespan need to be considered. Life extension costs become smaller when accounting for savings from postponing the last year of life, which makes up a large fraction of lifetime medical expenditures and becomes cheaper with age. Prevention leads to savings only if the cost of the preventive measure is less than the savings from avoiding morbidity net of the cost of extending the life span. In order to establish reliable economics of prevention for illnesses that are complicated in origin, knowing how best to assess prevention efforts, i.e. developing useful measures and appropriate scope, is required.
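As a minimal sketch of the accounting just described, with all dollar figures as purely hypothetical placeholders: prevention yields net savings only when the savings from avoided morbidity exceed the combined cost of the preventive measure and the cost of the added life-years.

    # Net-savings condition for a preventive measure, as described above.
    # All dollar figures are hypothetical placeholders, not published
    # estimates.
    def net_savings(prevention_cost, morbidity_savings, life_extension_cost):
        return morbidity_savings - life_extension_cost - prevention_cost

    # A positive result means the measure saves money overall.
    print(net_savings(prevention_cost=1_000,
                      morbidity_savings=5_000,
                      life_extension_cost=3_500))   # 500: a net saving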
Effectiveness
There is no general consensus as to whether or not preventive healthcare measures are cost-effective, though many can markedly increase quality of life. There are varying views on what constitutes a "good investment." Some argue that preventive health measures should save more money than they cost, when factoring in treatment costs in the absence of such measures. Others argue in favor of "good value", i.e., conferring significant health benefits even if the measures do not save money. Furthermore, preventive health services are often described as one entity, though they comprise a myriad of different services, each of which can individually lead to net costs, savings, or neither. Greater differentiation of these services is necessary to fully understand both the financial and health effects.
A 2010 study reported that in the United States, vaccinating children, cessation of smoking, daily prophylactic use of aspirin, and screening of breast and colorectal cancers had the most potential to prevent premature death. Preventive health measures that resulted in savings included vaccinating children and adults, smoking cessation, daily use of aspirin, and screening for issues with alcoholism, obesity, and vision failure. These authors estimated that if usage of these services in the United States increased to 90% of the population, there would be net savings of $3.7 billion, only about 0.2% of total 2006 United States healthcare expenditure. Despite the potential for decreasing healthcare spending, utilization of healthcare resources in the United States still remains low, especially among Latinos and African-Americans. Overall, preventive services are difficult to implement because healthcare providers have limited time with patients and must integrate a variety of preventive health measures from different sources.
While these specific services bring about small net savings, not every preventive health measure saves more than it costs. A 1970s study showed that preventing heart attacks by treating hypertension early on with drugs actually did not save money in the long run. The money saved by evading treatment from heart attack and stroke only amounted to about a quarter of the cost of the drugs. Similarly, it was found that the cost of drugs or dietary changes to decrease high blood cholesterol exceeded the cost of subsequent heart disease treatment. Due to these findings, some argue that rather than focusing healthcare reform efforts exclusively on preventive care, the interventions that bring about the highest level of health should be prioritized.
In 2008, Cohen et al. outlined a few arguments made by skeptics of preventive healthcare. Many argue that preventive measures only cost less than future treatment when the proportion of the population that would become ill in the absence of prevention is fairly large. The Diabetes Prevention Program Research Group conducted a 2012 study evaluating the costs and benefits in quality-adjusted life-years or QALYs of lifestyle changes versus taking the drug metformin. They found that neither method brought about financial savings, but were cost-effective nonetheless because they brought about an increase in QALYs. In addition to scrutinizing costs, preventive healthcare skeptics also examine efficiency of interventions. They argue that while many treatments of existing diseases involve use of advanced equipment and technology, in some cases, this is a more efficient use of resources than attempts to prevent the disease. Cohen suggested that the preventive measures most worth exploring and investing in are those that could benefit a large portion of the population to bring about cumulative and widespread health benefits at a reasonable cost.
Cost-effectiveness of childhood obesity interventions
There are at least four nationally implemented childhood obesity interventions in the United States: the Sugar-Sweetened Beverage excise tax (SSB), the TV AD program, active physical education (Active PE) policies, and early care and education (ECE) policies. They each have similar goals of reducing childhood obesity. The effects of these interventions on BMI have been studied, and the cost-effectiveness analysis (CEA) has led to a better understanding of projected cost reductions and improved health outcomes. The Childhood Obesity Intervention Cost-Effectiveness Study (CHOICES) was conducted to evaluate and compare the CEA of these four interventions.
Gortmaker, S.L. et al. (2015) state: "The four initial interventions were selected by the investigators to represent a broad range of nationally scalable strategies to reduce childhood obesity using a mix of both policy and programmatic strategies... 1. an excise tax of $0.01 per ounce of sweetened beverages, applied nationally and administered at the state level (SSB), 2. elimination of the tax deductibility of advertising costs of TV advertisements for "nutritionally poor" foods and beverages seen by children and adolescents (TV AD), 3. state policy requiring all public elementary schools in which physical education (PE) is currently provided to devote ≥50% of PE class time to moderate and vigorous physical activity (Active PE), and 4. state policy to make early child educational settings healthier by increasing physical activity, improving nutrition, and reducing screen time (ECE)." The CHOICES study found that SSB, TV AD, and ECE led to net cost savings. Both SSB and TV AD increased quality-adjusted life years and produced yearly tax revenue of $12.5 billion and $80 million, respectively.
Some challenges with evaluating the effectiveness of child obesity interventions include:
The economic consequences of childhood obesity are both short and long term. In the short term, obesity impairs cognitive achievement and academic performance. Some believe this is secondary to negative effects on mood or energy, but others suggest there may be physiological factors involved. Furthermore, obese children have increased health care expenses (e.g. medications, acute care visits). In the long term, obese children tend to become obese adults with associated increased risk for a chronic condition such as diabetes or hypertension. Any effect on their cognitive development may also affect their contributions to society and socioeconomic status.
In the CHOICES study, it was noted that the effects of these interventions may in fact differ among communities throughout the nation. In addition, it was suggested that the outcomes studied are limited and that these interventions may have additional effects that are not fully appreciated.
Modeling outcomes in such interventions in children over the long term is challenging because advances in medicine and medical technology are unpredictable. The projections from cost-effective analysis may need to be reassessed more frequently.
Economics of U.S. preventive care
As of 2009, the cost-effectiveness of preventive care is a highly debated topic. While some economists argue that preventive care is valuable and potentially cost saving, others believe it is an inefficient waste of resources. Preventive care is composed of a variety of clinical services and programs including annual doctor's check-ups, annual immunizations, and wellness programs; recent models show that these simple interventions can have significant economic impacts.
Clinical preventive services and programs
Research on preventive care addresses the questions of whether it is cost saving or cost-effective and whether there is an economic evidence base for health promotion and disease prevention. The need for and interest in preventive care is driven by the imperative to reduce healthcare costs while improving quality of care and the patient experience. Preventive care can lead to improved health outcomes and cost-savings potential. Services such as health assessments/screenings, prenatal care, and telehealth and telemedicine can reduce morbidity or mortality with low cost or with cost savings. Specifically, health assessments/screenings have cost-savings potential, with varied cost-effectiveness based on screening and assessment type. Inadequate prenatal care can lead to an increased risk of prematurity, stillbirth, and infant death. Because time is itself a scarce resource, preventive care can also help mitigate time costs. Telehealth and telemedicine are one option that has gained consumer interest, acceptance, and confidence, and can improve quality of care and patient satisfaction.
Economics for investment
There are benefits and trade-offs when considering investment in preventive care versus other types of clinical services. Preventive care can be a good investment as supported by the evidence base and can drive population health management objectives. The concepts of cost saving and cost-effectiveness are different and both are relevant to preventive care. Preventive care that may not save money may still provide health benefits; thus, there is a need to compare interventions relative to impact on health and cost.
Preventive care transcends demographics and is applicable to people of every age. The Health Capital Theory underpins the importance of preventive care across the lifecycle and provides a framework for understanding the variances in health and health care that are experienced. It treats health as a stock that provides direct utility. Health depreciates with age, and the aging process can be countered through health investments. The theory further holds that individuals demand good health, that the demand for health investment is a derived demand (i.e., investment in health is due to the underlying demand for good health), and that the efficiency of the health investment process increases with knowledge (i.e., it is assumed that the more educated are more efficient consumers and producers of health).
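As a toy illustration of the stock-and-depreciation idea just described, and only loosely in the spirit of health-capital models, health can be simulated as a stock that decays a little faster with each year of age and is partly replenished by investment; the depreciation schedule and investment level below are invented for demonstration.

    # Toy health-capital dynamics: H[t+1] = H[t] * (1 - delta_t) + investment.
    # The rising depreciation rate mimics health wearing down with age.
    # All parameter values are illustrative assumptions.
    def simulate_health(h0, years, base_delta=0.02, invest=1.0):
        h = h0
        for t in range(years):
            delta = base_delta + 0.001 * t   # depreciation grows with age
            h = h * (1 - delta) + invest     # investment offsets part of the decay
        return h

    print(round(simulate_health(h0=100.0, years=40), 1))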
The prevalence elasticity of demand for prevention can also provide insights into the economics. Demand for preventive care can alter the prevalence rate of a given disease and reduce or even reverse any further growth of prevalence. Reduction in prevalence subsequently leads to reduction in costs. There are a number of organizations and policy actions that are relevant when discussing the economics of preventive care services. The evidence base, viewpoints, and policy briefs from the Robert Wood Johnson Foundation, the Organisation for Economic Co-operation and Development (OECD), and efforts by the U.S. Preventive Services Task Force (USPSTF) all provide examples that improve the health and well-being of populations (e.g., preventive health assessments/screenings, prenatal care, and telehealth/telemedicine). The Affordable Care Act (ACA) has had a major influence on the provision of preventive care services, although it has come under heavy scrutiny and review. According to the Centers for Disease Control and Prevention (CDC), the ACA makes preventive care affordable and accessible through mandatory coverage of preventive services without a deductible, copayment, coinsurance, or other cost sharing.
The U.S. Preventive Services Task Force (USPSTF), a panel of national experts in prevention and evidence-based medicine, works to improve health of Americans by making evidence-based recommendations about clinical preventive services. They do not consider the cost of a preventive service when determining a recommendation. Each year, the organization delivers a report to Congress that identifies critical evidence gaps in research and recommends priority areas for further review.
The National Network of Perinatal Quality Collaboratives (NNPQC), sponsored by the CDC, supports state-based perinatal quality collaboratives (PQCs) in measuring and improving upon health care and health outcomes for mothers and babies. These PQCs have contributed to improvements such as reduction in deliveries before 39 weeks, reductions in healthcare associated bloodstream infections, and improvements in the utilization of antenatal corticosteroids.
Telehealth and telemedicine have seen significant growth and development recently. The Center for Connected Health Policy (the National Telehealth Policy Resource Center) has produced multiple reports and policy briefs on the topic of telehealth and telemedicine and how they contribute to preventive services. Policy actions and provision of preventive services do not guarantee utilization: reimbursement has remained a significant barrier to adoption due to variances in payer- and state-level reimbursement policies and guidelines through government and commercial payers. Americans use preventive services at about half the recommended rate, and cost-sharing, such as deductibles, co-insurance, or copayments, also reduces the likelihood that preventive services will be used. Despite the ACA's enhancement of Medicare benefits and preventive services, there were no effects on preventive service utilization, indicating that other fundamental barriers exist.
Affordable Care Act and preventive healthcare
The Patient Protection and Affordable Care Act, also known as the Affordable Care Act or Obamacare, was passed and became law in the United States on March 23, 2010. The finalized and newly ratified law was to address many issues in the U.S. healthcare system, including expansion of coverage, insurance market reforms, better quality, and the forecast of efficiency and costs. Under the insurance market reforms, the act required that insurance companies no longer exclude people with pre-existing conditions, allow children to be covered on their parents' plan until the age of 26, and expand appeals that dealt with reimbursement denials. The Affordable Care Act also banned the limited coverage imposed by health insurers, and insurance companies were to include coverage for preventive healthcare services. The U.S. Preventive Services Task Force has categorized and rated preventive health services as either A or B, ratings with which insurance companies must comply by presenting full coverage. The Task Force has not only provided graded preventive health services appropriate for coverage; it has also provided many recommendations to clinicians and insurers to promote better preventive care, to ultimately provide better quality of care and lower the burden of costs.
Health insurance
Health insurance companies are willing to pay for preventive care, even though patients are not acutely sick, in the hope that it will prevent them from developing a chronic disease later in life. Today, health insurance plans offered through the Marketplace, mandated by the Affordable Care Act, are required to provide certain preventive care services free of charge to patients. Section 2713 of the Affordable Care Act specifies that all private Marketplace plans and all employer-sponsored private plans (except those grandfathered in) are required to cover, free of charge to patients, preventive care services that are ranked A or B by the U.S. Preventive Services Task Force. The insurance company UnitedHealthcare has published patient guidelines at the beginning of the year explaining its preventive care coverage.
Evaluating incremental benefits
Evaluating the incremental benefits of preventive care requires a longer period of time when compared to acutely ill patients. Inputs into the model such as discounting rate and time horizon can have significant effects on the results. One controversial subject is use of a 10-year time frame to assess cost effectiveness of diabetes preventive services by the Congressional Budget Office.
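To see why the discount rate and time horizon weigh so heavily on such assessments, consider this sketch of the present value of a constant annual health benefit; the benefit size, rates, and horizons are assumptions chosen only to show the sensitivity, not parameters used by the Congressional Budget Office.

    # Present value of a constant annual benefit under different discount
    # rates and time horizons. All inputs are illustrative assumptions.
    def present_value(annual_benefit, rate, years):
        return sum(annual_benefit / (1 + rate) ** t
                   for t in range(1, years + 1))

    for years in (10, 30):
        for rate in (0.03, 0.07):
            pv = present_value(annual_benefit=100.0, rate=rate, years=years)
            print(f"{years}-year horizon at {rate:.0%}: PV = {pv:,.0f}")

Lengthening the horizon or lowering the rate can multiply the measured benefit severalfold, which is one reason a 10-year window is contested for slow-developing conditions such as diabetes.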
Preventive care services mainly focus on chronic disease. The Congressional Budget Office has provided guidance that further research is needed in the area of the economic impacts of obesity in the U.S. before the CBO can estimate budgetary consequences. A bipartisan report published in May 2015 recognizes the potential of preventive care to improve patients' health at individual and population levels while decreasing the healthcare expenditure.
Economic case
Mortality from modifiable risk factors
Chronic diseases such as heart disease, stroke, diabetes, obesity and cancer have become the most common and costly health problems in the United States. In 2014, it was projected that by 2023 the number of chronic disease cases would increase by 42%, resulting in $4.2 trillion in treatment and lost economic output. They are also among the top ten leading causes of mortality. Chronic diseases are driven by risk factors that are largely preventable. Sub-analysis performed on all deaths in the United States in 2000 revealed that almost half were attributed to preventable behaviors including tobacco use, poor diet, physical inactivity and alcohol consumption. More recent analysis reveals that heart disease and cancer alone accounted for nearly 46% of all deaths. Modifiable risk factors are also responsible for a large morbidity burden, resulting in poor quality of life in the present and loss of future life-earning years. It is further estimated that by 2023, focused efforts on the prevention and treatment of chronic disease may result in 40 million fewer chronic disease cases, potentially reducing treatment costs by $220 billion.
Childhood vaccinations
Childhood immunizations are largely responsible for the increase in life expectancy in the 20th century. From an economic standpoint, childhood vaccines demonstrate a very high return on investment. According to Healthy People 2020, for every birth cohort that receives the routine childhood vaccination schedule, direct health care costs are reduced by $9.9 billion and society saves $33.4 billion in indirect costs. The economic benefits of childhood vaccination extend beyond individual patients to insurance plans and vaccine manufacturers, all while improving the health of the population.
Health capital theory
The burden of preventable illness extends beyond the healthcare sector, incurring costs related to lost productivity among workers. Indirect costs related to poor health behaviors and associated chronic disease cost U.S. employers billions of dollars each year.
According to the American Diabetes Association (ADA), medical costs for employees with diabetes are twice as high as for workers without diabetes; the associated indirect costs arise from work-related absenteeism ($5 billion), reduced productivity at work ($20.8 billion), inability to work due to illness-related disability ($21.6 billion), and premature mortality ($18.5 billion). Reported estimates of the cost burden due to increasingly high levels of overweight and obese members of the workforce vary, with best estimates suggesting 450 million additional missed work days, resulting in $153 billion each year in lost productivity, according to the CDC Healthy Workforce.
The health capital model explains how individual investments in health can increase earnings by "increasing the number of healthy days available to work and to earn income." In this context, health can be treated both as a consumption good, wherein individuals desire health because it improves quality of life in the present, and as an investment good because of its potential to increase attendance and workplace productivity over time. Preventive health behaviors such as healthful diet, regular exercise, access to and use of well-care, avoiding tobacco, and limiting alcohol can be viewed as health inputs that result in both a healthier workforce and substantial cost savings.
Quality-adjusted life years
Health benefits of preventive care measures can be described in terms of quality-adjusted life-years (QALYs) saved. A QALY takes into account length and quality of life, and is used to evaluate the cost-effectiveness of medical and preventive interventions. Classically, one year of perfect health is defined as 1 QALY and a year with any degree of less than perfect health is assigned a value between 0 and 1 QALY. As an economic weighting system, the QALY can be used to inform personal decisions, to evaluate preventive interventions and to set priorities for future preventive efforts.
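As a concrete illustration of this weighting, the short sketch below sums per-year quality weights into QALYs and computes a cost-per-QALY ratio; the weights, cost, and QALY gain are hypothetical numbers, not published estimates.

    # QALY arithmetic: each year contributes its quality weight
    # (1 = perfect health, 0 = death). All inputs are hypothetical.
    def qalys(quality_weights):
        return sum(quality_weights)

    def cost_per_qaly(cost, qalys_gained):
        return cost / qalys_gained

    years = [1.0, 0.8, 0.8, 0.5]              # four years at varying quality
    print(round(qalys(years), 2))              # 3.1 QALYs
    print(round(cost_per_qaly(50_000, 0.6)))   # about 83,333 dollars per QALY

Ratios like the second figure are what get compared against benchmarks such as the <$100,000-per-QALY threshold discussed below.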
Cost-saving and cost-effective benefits of preventive care measures are well established. The Robert Wood Johnson Foundation evaluated the prevention cost-effectiveness literature, and found that many preventive measures meet the benchmark of <$100,000 per QALY and are considered to be favorably cost-effective. These include screenings for HIV and chlamydia, cancers of the colon, breast and cervix, vision screening, and screening for abdominal aortic aneurysms in men >60 in certain populations. Alcohol and tobacco screening were found to be cost-saving in some reviews and cost-effective in others. According to the RWJF analysis, two preventive interventions were found to save costs in all reviews: childhood immunizations and counseling adults on the use of aspirin.
Minority populations
Health disparities are increasing in the United States for chronic diseases such as obesity, diabetes, cancer, and cardiovascular disease. Populations at heightened risk for health inequities are the growing proportion of racial and ethnic minorities, including African Americans, American Indians, Hispanics/Latinos, Asian Americans, Alaska Natives and Pacific Islanders.
According to the Racial and Ethnic Approaches to Community Health (REACH), a national CDC program, non-Hispanic blacks currently have the highest rates of obesity (48%), and risk of newly diagnosed diabetes is 77% higher among non-Hispanic blacks, 66% higher among Hispanics/Latinos and 18% higher among Asian Americans compared to non-Hispanic whites. Current U.S. population projections predict that more than half of Americans will belong to a minority group by 2044. Without targeted preventive interventions, medical costs from chronic disease inequities will become unsustainable. Broadening health policies designed to improve delivery of preventive services for minority populations may help reduce substantial medical costs caused by inequities in health care, resulting in a return on investment.
Policies
Chronic disease is a population-level issue; preventing it effectively requires population-level health efforts and national and state-level public policy, rather than individual-level efforts alone. The United States currently employs many public health policy efforts aligned with the preventive health efforts discussed above. The Centers for Disease Control and Prevention supports initiatives such as Health in All Policies and HI-5 (Health Impact in 5 Years), collaborative efforts that aim to consider prevention across sectors and address social determinants of health as a method of primary prevention for chronic disease.
Obesity
Policies that address the obesity epidemic should be proactive and far-reaching, including a variety of stakeholders both in healthcare and in other sectors. Recommendations from the Institute of Medicine in 2012 suggest that "concerted action be taken across and within five environments (physical activity (PA), food and beverage, marketing and messaging, healthcare and worksites, and schools) and all sectors of society (including government, business and industry, schools, child care, urban planning, recreation, transportation, media, public health, agriculture, communities, and home) in order for obesity prevention efforts to truly be successful."
There are dozens of current policies acting at the federal, state, local, and school levels, or some combination of these. Most states require 150 minutes of physical education per week at school, a policy of the National Association of Sport and Physical Education. Some cities, including Philadelphia, employ a sugary food tax. Philadelphia's is part of an amendment to Title 19 of the Philadelphia Code, "Finance, Taxes and Collections", Chapter 19-4100 (Sugar-Sweetened Beverage Tax), approved in 2016, which establishes an excise tax of $0.015 per fluid ounce on distributors of beverages sweetened with both caloric and non-caloric sweeteners. Distributors are required to file a return with the department, and the department can collect taxes, among other responsibilities. These policies can be a source of tax credits: under the Philadelphia policy, businesses can apply for tax credits with the revenue department on a first-come, first-served basis, until the total amount of credits for a particular year reaches one million dollars.
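For a sense of scale, here is a short worked example of the $0.015-per-fluid-ounce rate quoted above, applied to a hypothetical distributor shipment; the shipment size is invented for illustration.

    # Worked example of the Philadelphia rate cited above.
    # The shipment size is a hypothetical assumption.
    RATE_PER_OZ = 0.015                # dollars per fluid ounce
    bottles = 10_000                   # hypothetical shipment
    oz_per_bottle = 20
    tax = bottles * oz_per_bottle * RATE_PER_OZ
    print(f"${tax:,.2f}")              # $3,000.00; one 20 oz bottle bears $0.30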
Recently, advertisements for food and beverages directed at children have received much attention. The Children's Food and Beverage Advertising Initiative (CFBAI) is a self-regulatory program of the food industry. Each participating company makes a public pledge that details its commitment to advertise only foods that meet certain nutritional criteria to children under 12 years old. This is a self-regulated program with policies written by the Council of Better Business Bureaus. The Robert Wood Johnson Foundation funded research to test the efficacy of the CFBAI. The results showed progress in terms of decreased advertising of food products that target children and adolescents.
Childhood immunization policies
Despite nationwide controversies over childhood vaccination and immunization, there are policies and programs at the federal, state, local, and school levels outlining vaccination requirements. All states require children to be vaccinated against certain communicable diseases as a condition for school attendance, though only 18 states allow exemptions for "philosophical or moral reasons." Diseases for which vaccinations form part of the standard ACIP vaccination schedule are diphtheria, tetanus, pertussis (whooping cough), poliomyelitis (polio), measles, mumps, rubella, Haemophilus influenzae type b, hepatitis B, influenza, and pneumococcal infections. The CDC website maintains these schedules.
The CDC website describes a federally funded program, Vaccines for Children (VFC), which provides vaccines at no cost to children who might not otherwise be vaccinated because of inability to pay. Additionally, the Advisory Committee on Immunization Practices (ACIP) is an expert vaccination advisory board that informs vaccination policy and guides on-going recommendations to the CDC, incorporating the most up-to-date cost-effectiveness and risk-benefit evidence in its recommendations.
Gladiolus
Gladiolus (from Latin, the diminutive of gladius, a sword) is a genus of perennial cormous flowering plants in the iris family (Iridaceae).
It is sometimes called the 'sword lily', but is usually called by its generic name (plural gladioli).
The genus occurs in Asia, Mediterranean Europe, South Africa, and tropical Africa. The center of diversity is in the Cape Floristic Region. The genera Acidanthera, Anomalesia, Homoglossum, and Oenostachys, formerly considered distinct, are now included in Gladiolus.
Description
Gladioli grow from round, symmetrical corms (similar to crocuses) that are enveloped in several layers of brownish, fibrous tunics.
Their stems are generally unbranched, producing 1 to 9 narrow, sword-shaped, longitudinally grooved leaves enclosed in a sheath. The lowest leaf is shortened to a cataphyll. The leaf blades can be plane or cruciform in cross section. The adaxial and abaxial surfaces of the leaf exhibit micro-striations with aligned micro-protrusions, which are coated with waxy nano-flakes. This three-level surface structure enables the leaf to shed rainfall droplets in a unidirectional manner owing to its anisotropic superhydrophobicity, as reported by Mahesh C. Dubey et al.
The flowers of unmodified wild species vary from very small to perhaps 40 mm across, with inflorescences bearing anything from one to several flowers. The spectacular giant flower spikes in commerce are the products of centuries of hybridisation and selection.
The flower spikes are large and one-sided, with secund, bisexual flowers, each subtended by 2 leathery, green bracts. The sepals and the petals are almost identical in appearance, and are termed tepals. They are united at their base into a tube-shaped structure. The dorsal tepal is the largest, arching over the three stamens. The outer three tepals are narrower. The perianth is funnel-shaped, with the stamens attached to its base. The style has three filiform, spoon-shaped branches, each expanding towards the apex.
The ovary is 3-locular, developing into oblong or globose, longitudinally dehiscent capsules that contain many winged, brown seeds.
These flowers are variously coloured, ranging from pink to reddish or light purple with white, contrasting markings, or white to cream or orange to red.
Ecology
The South African species were originally pollinated by long-tongued anthophorine bees, but some changes in the pollination system have occurred, allowing pollination by sunbirds, noctuid moths and hawk-moths, long-tongued flies, and several others. In the temperate zones of Europe, many of the large-flowered hybrid gladioli can be pollinated by small, well-known wasps, although these are not very good pollinators because of the large flowers of the plants and the small size of the wasps. Another insect in this zone that visits the nectar of gladioli is the best-known European hawk-moth, Macroglossum stellatarum, which usually pollinates many popular garden flowers such as Petunia, Zinnia, and Dianthus.
Gladioli are used as food plants by the larvae of some Lepidoptera species, including the large yellow underwing, and are also attacked by gladiolus thrips.
Horticulture
Gladioli have been extensively hybridized and a wide range of ornamental flower colours are available from the many varieties. The main hybrid groups have been obtained by crossing between four or five species, followed by selection: 'Grandiflorus', 'Primulines' and 'Nanus'. They can make very good cut flowers for display.
The majority of the species in this genus are diploid with 30 chromosomes (2n=30) but the Grandiflora hybrids are tetraploid and possess 60 chromosomes (2n=4x=60). This is because the main parental species of these hybrids is Gladiolus dalenii which is also tetraploid and includes a wide range of varieties (like the Grandiflora hybrids).
Cultivation
In temperate zones, the corms of most species and hybrids should be lifted in autumn and stored over winter in a frost-free place, then replanted in spring. Some species from Europe and high altitudes in Africa, as well as the small 'Nanus' hybrids, are much hardier and can be left in the ground in regions with sufficiently dry winters; 'Nanus' is hardy in Zones 5–8. The large-flowered types require moisture during the growing season and must be individually staked as soon as the sword-shaped flower heads appear. The leaves must be allowed to die down naturally before lifting and storing the corms. Plants are propagated either from small cormlets produced as offsets by the parent corms, or from seed; in either case, they take several years to reach flowering size. Clumps should be dug up and divided every few years to keep them vigorous.
They are affected by thrips (Thrips simplex) and by the wasp Dasyproctus bipunctatus, which burrows into the flowers, causing them to collapse and die.
Numerous garden cultivars have been developed, of which 'Robinetta' (a G. recurvus hybrid), with pink flowers, has gained the Royal Horticultural Society's Award of Garden Merit.
In culture
Gladiolus is the birth flower of August.
Gladioli are the flowers associated with a fortieth wedding anniversary.
The American ragtime composer Scott Joplin wrote a rag called "Gladiolus Rag".
"Gladiolus" was the word Frank Neuhauser correctly spelled to win the 1st National Spelling Bee in 1925.
The Australian comedian and personality Dame Edna Everage's signature flowers were gladioli, which she referred to as "gladdies".
The ancient Graeco-Roman god Pluto was said to wear a wreath of what is traditionally identified as a type of Gladiolus, called phasganion or xiphion in Koine Greek.
The Mancunian singer Morrissey is known to dance with gladioli hanging from his back pocket or in his hands, especially during the era of The Smiths. This trait of his was made known in the music video for "This Charming Man", where he swung a bunch of yellow gladioli while singing.
Gladioli are traditionally given to people who finish the International Four Days Marches Nijmegen. This likely derives from their association with victory, dating from the time when victorious gladiators were showered with them.
Species
The genus Gladiolus contains about 300 species. The World Checklist of Selected Plant Families recognised over 276 species in 1988 and now accepts 300 species.
There are 260 species of Gladiolus endemic to southern Africa, and 76 in tropical Africa.
About 10 species are native to Eurasia.
The genus Gladiolus has been divided into many sections. Most species, however, are only tentatively placed.
Gladiolus abbreviatus Andrews
Gladiolus abyssinicus (Brongn. ex Lem.) B.D.Jacks.
Gladiolus actinomorphanthus P.A.Duvign. & Van Bockstal
Gladiolus acuminatus F.Bolus
Gladiolus aequinoctialis Herb.
Gladiolus aladagensis Eker & Sağıroğlu
Gladiolus alatus L. (sect. Hebea)
Gladiolus albens Goldblatt & J.C.Manning
Gladiolus amplifolius Goldblatt
Gladiolus anatolicus (Boiss.) Stapf
Gladiolus andringitrae Goldblatt
Gladiolus angustus L. (sect. Blandus) – long-tubed painted lady
Gladiolus antakiensis A.P.Ham.
Gladiolus antandroyi Goldblatt
Gladiolus appendiculatus G.Lewis
Gladiolus aquamontanus Goldblatt & Vlok
Gladiolus arcuatus Klatt
Gladiolus atropictus Goldblatt & J.C.Manning
Gladiolus atropurpureus Baker
Gladiolus atroviolaceus Boiss.
Gladiolus attilae Kit Tan
Gladiolus aurantiacus Klatt
Gladiolus aureus Baker – golden gladiolus
Gladiolus balensis Goldblatt
Gladiolus baumii Harms
Gladiolus bellus C. H. Wright
Gladiolus benguellensis Baker (sect. Ophiolyza)
Gladiolus bilineatus G. J. Lewis
Gladiolus blommesteinii L.Bolus
Gladiolus bojeri (Baker) Goldblatt
Gladiolus bonaespei Goldblatt & M.P.de Vos
Gladiolus boranensis Goldblatt
Gladiolus brachyphyllus Bolus f.
Gladiolus brevifolius Jacq. (sect. Linearifolius)
Gladiolus brevitubus G. Lewis
Gladiolus buckerveldii (L. Bolus) Goldblatt
Gladiolus bullatus Thunb. ex G. Lewis – Caledon bluebell
Gladiolus caeruleus Goldblatt & J.C. Manning
Gladiolus calcaratus G. Lewis
Gladiolus calcicola Goldblatt
Gladiolus canaliculatus Goldblatt
Gladiolus candidus (Rendle) Goldblatt
Gladiolus cardinalis Curtis (sect. Blandus)
Gladiolus carinatus Aiton – occurring in Darling, South Africa, and locally called the "blou pypie" ("blue pipe")
Gladiolus carmineus C. H. Wright (sect. Blandus) – cliff lily
Gladiolus carneus F.Delaroche (sect. Blandus) – large painted lady
Gladiolus caryophyllaceus (Burm. f.) Poiret
Gladiolus cataractarum Oberm.
Gladiolus caucasicus Herb.
Gladiolus ceresianus L. Bolus
Gladiolus chelamontanus Goldblatt
Gladiolus chevalierianus Marais
Gladiolus communis L. (sect. Gladiolus) – common cornflag, (type species)
Gladiolus comptonii G.J.Lewis
Gladiolus crassifolius Baker
Gladiolus crispulatus L. Bolus
Gladiolus cruentus T. Moore (sect. Ophiolyza)
Gladiolus cunonius (L.) Gaertn.
Gladiolus curtifolius Marais
Gladiolus curtilimbus P.A.Duvign. & Van Bockstal ex S.Córdova
Gladiolus cylindraceus G. Lewis
Gladiolus dalenii (sect. Ophiolyza)
Gladiolus davisoniae F.Bolus
Gladiolus debeerstii De Wild.
Gladiolus debilis Ker Gawler (sect. Homoglossum) – small painted lady
Gladiolus decaryi Goldblatt
Gladiolus decoratus Baker
Gladiolus delpierrei Goldblatt
Gladiolus densiflorus Baker
Gladiolus deserticola Goldblatt
Gladiolus dichrous (Bullock) Goldblatt
Gladiolus diluvialis Goldblatt & J.C.Manning
Gladiolus dolichosiphon Goldblatt & J.C.Manning
Gladiolus dolomiticus Oberm.
Gladiolus dzavakheticus Eristavi
Gladiolus ecklonii Lehm.
Gladiolus elliotii Baker (sect. Ophiolyza)
Gladiolus emiliae L. Bolus
Gladiolus engysiphon G. Lewis
Gladiolus equitans Thunb. (sect. Hebea)
Gladiolus erectiflorus Baker
Gladiolus exiguus G. Lewis
Gladiolus exilis G.J.Lewis
Gladiolus fenestratus Goldblatt
Gladiolus ferrugineus Goldblatt & J.C.Manning
Gladiolus filiformis Goldblatt & J.C.Manning
Gladiolus flanaganii Baker – suicide gladiolus
Gladiolus flavoviridis Goldblatt
Gladiolus floribundus Jacq.
Gladiolus fourcadei (L.Bolus) Goldblatt & M.P.de Vos
Gladiolus gandavensis
Gladiolus geardii L. Bolus
Gladiolus goldblattianus Geerinck
Gladiolus gracilis Jacq. (sect. Homoglossum) – reed bells
Gladiolus gracillimus Baker
Gladiolus grandiflorus Andrews (sect. Blandus)
Gladiolus grantii Baker
Gladiolus gregarius Welw. ex Baker (sect. Densiflorus)
Gladiolus griseus Goldblatt & J.C. Manning
Gladiolus gueinzii Kunze
Gladiolus gunnisii (Rendle) Marais
Gladiolus guthriei F. Bol. (sect. Linearifolius)
Gladiolus hajastanicus Gabrieljan
Gladiolus halophilus Boiss. & Heldr.
Gladiolus harmsianus Vaupel
Gladiolus hirsutus Jacq. (sect. Linearifolius) – small pink Afrikaner, lapmuis
Gladiolus hollandii L. Bolus
Gladiolus horombensis Goldblatt
Gladiolus huillensis (Welw. ex Baker) Goldblatt
Gladiolus humilis Stapf
Gladiolus huttonii (N.E.Br.) Goldblatt & M.P.de Vos
Gladiolus hyalinus Jacq.
Gladiolus illyricus W.D.J.Koch – wild gladiolus
Gladiolus imbricatus L.
Gladiolus inandensis Baker
Gladiolus inflatus Thunb.
Gladiolus inflexus Goldblatt & J.C. Manning
Gladiolus insolens Goldblatt & J.C. Manning
Gladiolus intonsus Goldblatt
Gladiolus invenustus G. J. Lewis
Gladiolus involutus D.Delaroche (sect. Hebea)
Gladiolus iroensis (A. Chev.) Marais
Gladiolus italicus P. Mill. (sect. Gladiolus) – Italian gladiolus, cornflag
Gladiolus jonquilodorus Eckl. ex G.J.Lewis
Gladiolus juncifolius Goldblatt
Gladiolus kamiesbergensis G. Lewis
Gladiolus karooicus Goldblatt & J.C.Manning
Gladiolus kotschyanus Boiss.
Gladiolus lapeirousioides Goldblatt
Gladiolus laxiflorus Baker
Gladiolus ledoctei P.A.Duvign. & Van Bockstal
Gladiolus leonensis Marais
Gladiolus leptosiphon Bolus f.
Gladiolus liliaceus Houtt. (sect. Homoglossum)
Gladiolus linearifolius Vaupel
Gladiolus lithicola Goldblatt
Gladiolus longicollis Baker (sect. Homoglossum)
Gladiolus longispathaceus Cufod.
Gladiolus loteniensis Hilliard & Burtt
Gladiolus lundaensis Goldblatt
Gladiolus luteus Lam.
Gladiolus macneilii Oberm.
Gladiolus maculatus Sweet
Gladiolus magnificus (Harms) Goldblatt
Gladiolus malvinus Goldblatt & J.C. Manning
Gladiolus manikaensis Goldblatt
Gladiolus mariae van der Burgt
Gladiolus marlothii G. Lewis
Gladiolus martleyi L. Bolus (sect. Homoglossum)
Gladiolus meliusculus (G. Lewis) Goldblatt & J.C. Manning
Gladiolus melleri Baker (sect. Ophiolyza)
Gladiolus menitskyi Gabrieljan
Gladiolus mensensis (Schweinf.) Goldblatt
Gladiolus meridionalis G.J.Lewis
Gladiolus metallicola Goldblatt
Gladiolus micranthus Baker
Gladiolus microcarpus G. Lewis
Gladiolus microspicatus P.A.Duvign. & Van Bockstal ex S.Córdova
Gladiolus miniatus Eckl.
Gladiolus mirus Vaupel
Gladiolus monticola G. Lewis ex Goldblatt & J.C. Manning
Gladiolus mosambicensis Baker
Gladiolus mostertiae L. Bolus
Gladiolus muenzneri F. Vaup
Gladiolus murgusicus Mikheev
Gladiolus murielae Kelway (syn. G. callianthus) – Abyssinian gladiolus
Gladiolus mutabilis G.J.Lewis
Gladiolus negeliensis Goldblatt
Gladiolus nerineoides G. Lewis
Gladiolus nigromontanus Goldblatt
Gladiolus nyasicus Goldblatt
Gladiolus oatesii Rolfe
Gladiolus ochroleucus Baker (sect. Densiflorus)
Gladiolus oliganthus Baker
Gladiolus oligophlebius Baker
Gladiolus oppositiflorus Herbert (sect. Ophiolyza)
Gladiolus orchidiflorus Andrews (sect. Hebea)
Gladiolus oreocharis Schltr.
Gladiolus ornatus Klatt
Gladiolus overbergensis Goldblatt & M.P.de Vos
Gladiolus palustris Gaudin – marsh gladiolus
Gladiolus papilio Hook. f. (sect. Densiflorus) – goldblotch gladiolus
Gladiolus pappei Baker (sect. Blandus)
Gladiolus pardalinus Goldblatt & J.C. Manning
Gladiolus parvulus Schltr.
Gladiolus patersoniae Bolus f.
Gladiolus pauciflorus Baker ex Oliv.
Gladiolus pavonia Goldblatt & J.C. Manning
Gladiolus permeabilis Delaroche (sect. Hebea)
Gladiolus perrieri Goldblatt
Gladiolus persicus Boiss.
Gladiolus phoenix Goldblatt & J.C.Manning
Gladiolus pole-evansii Verd.
Gladiolus praecostatus Marais
Gladiolus pretoriensis Kuntze
Gladiolus priorii (N. E. Br.) Goldblatt & De Vos
Gladiolus pritzelii Diels
Gladiolus puberulus Vaupel
Gladiolus pubigerus G. Lewis
Gladiolus pulcherrimus (G. Lewis) Goldblatt & J.C. Manning
Gladiolus pungens P.A.Duvign. & Van Bockstal ex S.Córdova
Gladiolus pusillus Goldblatt
Gladiolus quadrangularis (Burm. f.) Ker Gawler
Gladiolus quadrangulus (Delaroche) Barnard
Gladiolus recurvus L. (sect. Homoglossum)
Gladiolus reginae Goldblatt & J.C.Manning
Gladiolus rehmannii Baker
Gladiolus rhodanthus J.C.Manning & Goldblatt
Gladiolus richardsiae Goldblatt
Gladiolus robertsoniae Bolus f.
Gladiolus robiliartianus P.A.Duvign.
Gladiolus rogersii Baker
Gladiolus roseolus Chiov.
Gladiolus roseovenosus Goldblatt & J.C. Manning
Gladiolus rubellus Goldblatt
Gladiolus rudis Lichtst. ex Roem. & Schult.
Gladiolus rufomarginatus G.J.Lewis
Gladiolus rupicola F. Vaupel
Gladiolus saccatus (Klatt) Goldblatt & M.P. de Vos
Gladiolus salmoneicolor P.A.Duvign. & Van Bockstal ex S.Córdova
Gladiolus salteri G. Lewis
Gladiolus saundersii Hook. f. – Saunders' gladiolus, Lesotho lily
Gladiolus saxatilis Goldblatt & J.C.Manning
Gladiolus scabridus Goldblatt & J.C.Manning
Gladiolus schweinfurthii Baker
Gladiolus scullyi Baker
Gladiolus sekukuniensis P.J.D.Winter
Gladiolus sempervirens G.J.Lewis
Gladiolus serapiiflorus Goldblatt
Gladiolus serenjensis Goldblatt
Gladiolus sericeovillosus Hook. f.
Gladiolus serpenticola Goldblatt & J.C. Manning
Gladiolus somalensis Goldblatt & Thulin
Gladiolus speciosus Thunb.
Gladiolus splendens (Sweet) Herbert
Gladiolus stefaniae Oberm.
Gladiolus stellatus G. Lewis
Gladiolus stenolobus Goldblatt
Gladiolus stenosiphon Goldblatt
Gladiolus stokoei G.J.Lewis
Gladiolus subcaeruleus G. Lewis
Gladiolus sudanicus Goldblatt
Gladiolus sufflavus (G. Lewis) Goldblatt & J.C. Manning
Gladiolus sulculatus Goldblatt
Gladiolus symonsii F.Bolus
Gladiolus szovitsii Grossh.
Gladiolus taubertianus Schltr.
Gladiolus tenuis M. Bieb.
Gladiolus teretifolius Goldblatt & De Vos
Gladiolus trichonemifolius Ker Gawl. (sect. Homoglossum)
Gladiolus triphyllus (Sm.) Ker Gawl.
Gladiolus tristis L. (sect. Homoglossum)
Gladiolus tshombeanus P.A.Duvign. & Van Bockstal
Gladiolus uitenhagensis Goldblatt & Vlok
Gladiolus undulatus L. (sect. Blandus) – large white Afrikaner, wall gladiolus
Gladiolus unguiculatus Baker
Gladiolus usambarensis Marais ex Goldblatt
Gladiolus uysiae L. Bolus ex G. Lewis
Gladiolus vaginatus Bolus f. (sect. Homoglossum)
Gladiolus vandermerwei (L. Bolus) Goldblatt & De Vos
Gladiolus variegatus (G.J.Lewis) Goldblatt & J.C.Manning
Gladiolus varius Bolus f.
Gladiolus velutinus De Wild.
Gladiolus venustus G. Lewis (sect. Hebea)
Gladiolus verdickii De Wild. & T.Durand
Gladiolus vernus Oberm.
Gladiolus vigilans Barnard
Gladiolus vinosomaculatus Kies
Gladiolus violaceolineatus G.J.Lewis
Gladiolus virescens Thunb. (sect. Hebea)
Gladiolus virgatus Goldblatt & J.C.Manning
Gladiolus viridiflorus G. Lewis
Gladiolus watermeyeri L.Bolus (sect. Hebea)
Gladiolus watsonioides Baker – Mackinder's gladiolus
Gladiolus watsonius Thunb. (sect. Homoglossum)
Gladiolus wilsonii (Baker) Goldblatt & J.C.Manning
Gladiolus woodii Baker
Gladiolus zambesiacus Baker
Gladiolus zimbabweensis Goldblatt
Known hybrids include:
Gladiolus × colvillii (G. cardinalis × G. tristis): Colville's gladiolus
Gladiolus × gandavensis (G. dalenii × G. oppositiflorus) (sect. Ophiolyza)
Gladiolus × hortulanus
Podcast (https://en.wikipedia.org/wiki/Podcast)
A podcast is a program made available in digital format for download over the Internet. Typically, a podcast is an episodic series of digital audio files that users can download to a personal device and listen to at a time of their choosing. Podcasts are primarily an audio medium, but some distribute video, either as their primary content or as a supplement to audio, a format popularised in recent years by the video platform YouTube.
A podcast series usually features one or more recurring hosts engaged in a discussion about a particular topic or current event. Discussion and content within a podcast can range from carefully scripted to completely improvised. Podcasts combine elaborate and artistic sound production with thematic concerns ranging from scientific research to slice-of-life journalism. Many podcast series provide an associated website with links and show notes, guest biographies, transcripts, additional resources, commentary, and occasionally a community forum dedicated to discussing the show's content.
The cost to the consumer is low, and many podcasts are free to download. Some podcasts are underwritten by corporations or sponsored, with the inclusion of commercial advertisements. In other cases, a podcast could be a business venture supported by some combination of a paid subscription model, advertising or product delivered after sale. Because podcast content is often free, podcasting is often classified as a disruptive medium, adverse to the maintenance of traditional revenue models.
Podcasting is the preparation and distribution of audio or video files using RSS feeds to the devices of subscribed users. A podcaster normally buys this service from a podcast hosting company such as SoundCloud or Libsyn. Hosting companies then distribute these media files to podcast directories and streaming services, such as Apple and Spotify, from which users can listen on their smartphones or digital music and multimedia players.
As of the early 2020s, there were at least 3,369,942 podcasts and 199,483,500 episodes.
Etymology
"Podcast" is a portmanteau of "iPod" and "broadcast". The earliest use of "podcasting" was traced to The Guardian columnist and BBC journalist Ben Hammersley, who coined it in early February 2004 while writing an article for The Guardian newspaper. The term was first used in the audioblogging community in September 2004, when Danny Gregoire introduced it in a message to the iPodder-dev mailing list, from where it was adopted by podcaster Adam Curry. Despite the etymology, the content can be accessed using any computer or similar device that can play media files. The term "podcast" predates Apple's addition of podcasting features to the iPod and the iTunes software.
History
In September 2000, early MP3 player manufacturer i2Go offered a service called MyAudio2Go.com which allowed users to download news stories for listening on a PC or MP3 player. The service was available for about a year until i2Go's demise in 2001.
In October 2000, the concept of attaching sound and video files in RSS feeds was proposed in a draft by Tristan Louis. The idea was implemented by Dave Winer, a software developer and an author of the RSS format.
In August 2004, Adam Curry launched his show Daily Source Code, focused on chronicling his everyday life, delivering news, and discussing the development of podcasting. Curry promoted new and emerging internet audio shows in an attempt to gain traction in the development of what would come to be known as podcasting. Daily Source Code was initially directed at podcast developers. As its audience became interested in the format, these developers were inspired to create and produce their own projects, and a community of pioneer podcasters quickly developed.
iPodderX, released in September 2004 by August Trometer and based on earlier work by Ray Slakinski, was the first GUI application for podcasts.
In June 2005, Apple released iTunes 4.9, which added formal support for podcasts, thus negating the need to use a separate program in order to download and transfer them to a mobile device. Although this made access to podcasts more convenient and widespread, it also effectively ended advancement of podcatchers by independent developers. Additionally, Apple issued cease and desist orders to many podcast application developers and service providers for using the term "iPod" or "Pod" in their products' names.
By 2007, audio podcasts were doing what was historically accomplished via radio broadcasts, which had been the source of radio talk shows and news programs since the 1930s. This shift occurred as a result of the evolution of internet capabilities along with increased consumer access to cheaper hardware and software for audio recording and editing.
As of early 2019, the podcasting industry still generated little overall revenue, although the number of persons who listen to podcasts continues to grow steadily. Edison Research, which issues the Podcast Consumer quarterly tracking report estimated that 90 million persons in the U.S. had listened to a podcast in January 2019. As of 2020, 58% of the population of South Korea and 40% of the Spanish population had listened to a podcast in the last month. 12.5% of the UK population had listened to a podcast in the last week and 22% of the United States population listens to at least one podcast weekly. The form is also acclaimed for its low overhead for a creator to start and maintain their show, merely requiring a microphone, a computer or mobile device, and associated software to edit and upload the final product. Some form of acoustic quieting is also often utilised.
IP issues in trademark and patent law
Trademark applications
Between February 10 and March 25, 2005, Shae Spencer Management, LLC of Fairport, New York filed a trademark application to register the term "podcast" for an "online pre-recorded radio program over the internet". On September 9, 2005, the United States Patent and Trademark Office (USPTO) rejected the application, citing Wikipedia's podcast entry as describing the history of the term. The company amended its application in March 2006, but the USPTO rejected the amended application as not sufficiently differentiated from the original. In November 2006, the application was marked as abandoned.
Apple trademark protections
On September 26, 2006, it was reported that Apple Inc. had started to crack down on businesses using the string "POD" in product and company names. Apple sent a cease and desist letter that week to Podcast Ready, Inc., which markets an application known as "myPodder". Lawyers for Apple contended that the term "pod" has been used by the public to refer to Apple's music player so extensively that it falls under Apple's trademark cover. Such activity was speculated to be part of a bigger campaign for Apple to expand the scope of its existing iPod trademark, which included trademarking "IPOD", "IPODCAST", and "POD". On November 16, 2006, the Apple Trademark Department stated that "Apple does not object to third-party usage of the generic term 'podcast' to accurately refer to podcasting services" and that "Apple does not license the term". However, no statement was made as to whether Apple believed it held rights to the term.
Personal Audio lawsuits
Personal Audio, a company referred to as a "patent troll" by the Electronic Frontier Foundation (EFF), filed a patent on podcasting in 2009 for a claimed invention in 1996. In February 2013, Personal Audio started suing high-profile podcasters for royalties, including The Adam Carolla Show and the HowStuffWorks podcast. In October 2013, the EFF filed a petition with the U.S. Patent and Trademark Office to invalidate the Personal Audio patent. On August 18, 2014, the EFF announced that Adam Carolla had settled with Personal Audio. Finally, on April 10, 2015, the U.S. Patent and Trademark Office invalidated five claims of Personal Audio's podcasting patent.
Production and listening
A podcast generator maintains a central list of the files on a server as a web feed that one can access through the Internet. The listener or viewer uses special client application software on a computer or media player, known as a podcast client, which accesses this web feed, checks it for updates, and downloads any new files in the series. This process can be automated to download new files automatically, so it may seem to listeners as though podcasters broadcast or "push" new episodes to them. Podcast files can be stored locally on the user's device, or streamed directly. There are several different mobile applications that allow people to follow and listen to podcasts. Many of these applications allow users to download podcasts or stream them on demand. Most podcast players or applications allow listeners to skip around the podcast and to control the playback speed. Much podcast listening occurs during commuting; because of restrictions on travel during the COVID-19 pandemic, the number of unique listeners in the US decreased by 15% in the last three weeks of March 2020.
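The feed-polling loop described above is simple enough to sketch. The following Python fragment is a minimal illustration, not any particular client's implementation; the feed URL is a hypothetical placeholder, and only the standard library is used to fetch an RSS feed and list episode enclosures that have not been seen before.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # hypothetical podcast feed

def fetch_new_episodes(feed_url, already_downloaded):
    """Fetch the RSS feed and return enclosure URLs not seen before."""
    with urllib.request.urlopen(feed_url) as response:
        root = ET.fromstring(response.read())
    new_urls = []
    # In RSS 2.0, each episode is an <item> under <channel>, and the
    # <enclosure url="..."> attribute points at the media file itself.
    for item in root.iter("item"):
        enclosure = item.find("enclosure")
        if enclosure is not None:
            url = enclosure.get("url")
            if url and url not in already_downloaded:
                new_urls.append(url)
    return new_urls

# A real podcast client runs a check like this on a schedule and then
# downloads each new URL, which is why episodes appear to be "pushed".
print(fetch_new_episodes(FEED_URL, already_downloaded=set()))
```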
Podcasting has been considered a converged medium (a medium that brings together audio, the web and portable media players), as well as a disruptive technology that has caused some individuals in radio broadcasting to reconsider established practices and preconceptions about audiences, consumption, production and distribution.
Podcasts can be produced at little to no cost and are usually disseminated free-of-charge, which sets this medium apart from the traditional 20th-century model of "gate-kept" media and their production tools. Podcasters can, however, still monetize their podcasts by allowing companies to purchase ad time. They can also garner support from listeners through crowdfunding websites like Patreon, which provide special extras and content to listeners for a fee.
Types of podcasts
Podcasts vary in style, format, and topical content. Podcasts are partially patterned on previous media genres but depart from them systematically in certain computationally observable stylistic respects. The conventions and constraints which govern that variation are emerging and vary over time and markets; podcast listeners have various preferences of styles but conventions to address them and communicate about them are still unformed. Some current examples of types of podcasts are given below. This list is likely to change as new types of content, new technology to consume podcasts, and new use cases emerge.
Enhanced podcasts
An enhanced podcast, also known as a slidecast, is a type of podcast that combines audio with a slide show presentation. It is similar to a video podcast in that it combines dynamically generated imagery with audio synchronization, but it differs in that it uses presentation software to create the imagery and the sequence of display separately from the time of the original audio podcast recording. The Free Dictionary, YourDictionary, and PC Magazine define an enhanced podcast as "an electronic slide show delivered as a podcast". Enhanced podcasts are podcasts that incorporate graphics and chapters. Apple developed an enhanced podcast feature called "Audio Hyperlinking", which it patented in 2012. Enhanced podcasts can be used by businesses or in education. They can be created using QuickTime AAC or Windows Media files. Enhanced podcasts were first used in 2006.
Fiction podcast
A fiction podcast (also referred to as a "scripted podcast" or "audio drama") is similar to a radio drama, but in podcast form. They deliver a fictional story, usually told over multiple episodes and seasons, using multiple voice actors, dialogue, sound effects, and music to enrich the story. Fiction podcasts have attracted a number of well-known actors as voice talents, including Demi Moore and Matthew McConaughey as well as from content producers like Netflix, Spotify, Marvel Comics, and DC Comics. Unlike other genres, downloads of fiction podcasts increased by 19% early in the COVID-19 pandemic.
Podcast novels
A podcast novel (also known as a "serialized audiobook" or "podcast audiobook") is a literary form that combines the concepts of a podcast and an audiobook. Like a traditional novel, a podcast novel is a work of literary fiction; however, it is recorded into episodes that are delivered online over a period of time. The episodes may be delivered automatically via RSS or through a website, blog, or other syndication method. Episodes can be released on a regular schedule, e.g., once a week, or irregularly as each episode is completed. In the same manner as audiobooks, some podcast novels are elaborately narrated with sound effects and separate voice actors for each character, similar to a radio play or scripted podcast, but many have a single narrator and few or no sound effects.
Some podcast novelists give away a free podcast version of their book as a form of promotion. On occasion such novelists have secured publishing contracts to have their novels printed. Podcast novelists have commented that podcasting their novels lets them build audiences even if they cannot get a publisher to buy their books. These audiences then make it easier to secure a printing deal with a publisher at a later date. These podcast novelists also claim the exposure that releasing a free podcast gains them makes up for the fact that they are giving away their work for free.
Video podcasts
A video podcast is a podcast that features video content. Web television series are often distributed as video podcasts. Dead End Days, a serialized dark comedy about zombies released from October 31, 2003, through 2004, is commonly believed to be the first video podcast.
Live podcasts
A number of podcasts are recorded either in total or for specific episodes in front of a live audience. Ticket sales allow the podcasters an additional way of monetizing. Some podcasts create specific live shows to tour which are not necessarily included on the podcast feed. Events including the London Podcast Festival, SF Sketchfest and others regularly give a platform for podcasters to perform live to audiences.
Technology
Software
Podcast episodes are most commonly encoded in the MP3 digital audio format and then hosted on dedicated or shared webserver space. Syndication of podcast episodes across various websites and platforms is based on RSS feeds: an XML-formatted file listing information about each episode and about the podcast itself.
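To make that structure concrete, here is a small, purely illustrative sketch that assembles a one-episode RSS 2.0 feed with Python's standard library. The show title, episode title, and URLs are invented placeholders, and a production feed would carry more metadata (publication dates, iTunes-namespace tags, and so on).

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 podcast feed: one <channel> describing the
# show, containing one <item> per episode.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Show"        # placeholder
ET.SubElement(channel, "link").text = "https://example.com"  # placeholder
ET.SubElement(channel, "description").text = "A sample podcast feed."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Episode 1"
# The enclosure is what makes this a podcast feed: the media file's URL,
# size in bytes, and MIME type, which clients use to download the episode.
ET.SubElement(item, "enclosure", url="https://example.com/ep1.mp3",
              length="12345678", type="audio/mpeg")

print(ET.tostring(rss, encoding="unicode"))
```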
Hardware
The most basic equipment for a podcast is a computer and a microphone. It is helpful to have a sound-proof room and headphones. The computer should have a recording or streaming application installed. Typical microphones for podcasting are connected using USB. If the podcast involves two or more people, each person requires a microphone, and a USB audio interface is needed to mix them together. If the podcast includes video, then a separate webcam might be needed, and additional lighting.
Zebra dove (https://en.wikipedia.org/wiki/Zebra%20dove)
The zebra dove (Geopelia striata), also known as the barred ground dove, or barred dove, is a species of bird of the dove family, Columbidae, native to Southeast Asia. They are small birds with a long tail, predominantly brownish-grey in colour with black-and-white barring. The species is known for its pleasant, soft, staccato cooing calls.
Taxonomy
In 1743 the English naturalist George Edwards included a description and a picture of the zebra dove in his A Natural History of Uncommon Birds. His drawing was made from a live specimen at the home of Admiral Charles Wager in Parsons Green near London. Edwards was told that the dove had been brought from the East Indies. When in 1766 the Swedish naturalist Carl Linnaeus updated his Systema Naturae for the twelfth edition, he included the zebra dove and placed it with all the other pigeons in the genus Columba. Linnaeus included a brief description, coined the binomial name Columba striata and cited Edwards's work. The specific name striata is from the Latin striatus, meaning "striated". The type locality has been restricted to the island of Java in Indonesia. The species is now placed in the genus Geopelia, which was introduced by the English naturalist William Swainson in 1837. The zebra dove is monotypic: no subspecies are recognised.
The zebra dove is closely related to the peaceful dove of Australia and New Guinea and the barred dove of eastern Indonesia. These two were classified as subspecies of the zebra dove until recently and the names peaceful dove and barred dove were often applied to the whole species.
Description
The birds are small and slender with a long, narrow tail. The upperparts are brownish-grey with black-and-white barring. The underparts are pinkish with black bars on the sides of the neck, breast and belly. The face is blue-grey with bare blue skin around the eyes. There are white tips to the tail feathers. Juveniles are duller and paler than the adults. They can also have brown feathers. Zebra doves are 20–23 centimetres in length with a wingspan of 24–26 cm.
Their call is a series of soft, staccato cooing notes. In Thailand and Indonesia, the birds are popular as pets because of their calls and cooing competitions are held to find the bird with the best voice. In Indonesia this bird is called perkutut. In the Philippines they are known as batobatong katigbe ("pebbled katigbe") and kurokutok, onomatopoeic to their calls. They are also known as tukmo in Filipino, a name also given to the spotted dove (Spilopelia chinensis) and other wild doves. In Malaysia this bird is called merbuk.
Distribution and habitat
The native range of the species extends from Southern Thailand, Tenasserim, Peninsular Malaysia, and Singapore to the Indonesian islands of Sumatra and Java. It may also be native to Borneo, Bali, Lombok, Sumbawa, and the Philippine islands.
The zebra dove is popular in captivity and many populations have appeared outside its native range due to birds escaping or being deliberately released. It can now be found in central Thailand, Laos, Borneo, Sulawesi, Hawaii (introduced in 1922), Tahiti (1950), New Caledonia, the Seychelles, the Chagos Archipelago (1960), Mauritius (before 1768), Réunion, and Saint Helena.
It inhabits scrub, farmland, and open country in lowland areas and is commonly seen in parks and gardens. Trapping for the cagebird industry has led to them becoming rare in parts of Indonesia but in most parts of its range it is common. Zebra doves are among the most abundant birds in some places such as Hawaii and the Seychelles.
Behaviour and ecology
Breeding
In its native range the breeding season is from September to June. The males perform a courtship display where they bow and coo while raising and spreading the tail. Upon selection of a nesting site, the female will place herself there and will make guttural sounds to attract males to help build the nest. The nest is a simple platform of leaves and grass blades. It is built in a bush or tree or sometimes on the ground and sometimes on window ledges. One or two white eggs are laid and are incubated by both parents for 13 to 18 days. The young leave the nest within two weeks and can fly well after three weeks.
Feeding
The zebra dove feeds on small grass and weed seeds. They will also eat insects and other small invertebrates. They prefer to forage on bare ground, short grass or roads, scurrying about with rodent-like movements. Unlike other doves, they forage alone or in pairs. Their colouration camouflages them well when on the ground.
Cun (unit) (https://en.wikipedia.org/wiki/Cun%20%28unit%29)
A cun (Pinyin: cùn), often glossed as the Chinese inch, is a traditional Chinese unit of length. Its traditional measure is the width of a person's thumb at the knuckle, whereas the width of the two forefingers denotes 1.5 cun and the width of four fingers (excluding the thumb) side-by-side is 3 cun. It continues to be used to chart acupuncture points on the human body, and in various other applications in traditional Chinese medicine.
The cun was part of a larger decimal system. A cun was made up of 10 fen, which depending on the period approximated lengths or widths of millet grains, and represented one-tenth of a chi ("Chinese foot"). In time the lengths were standardized, although to different values in different jurisdictions. (See Chi (unit) for details.)
In Hong Kong, using the traditional standard, it measures ~3.715 cm (~1.463 in) and is written "tsun". In the twentieth century in the Republic of China, the lengths were standardized to fit the metric system, and in current usage in the People's Republic of China and Taiwan it measures one-thirtieth of a metre, approximately 3.33 cm (~1.312 in).
In Japan, the corresponding unit, the sun, was standardized at one thirty-third of a metre, approximately 30.3 mm (3.03 cm, ~1.193 in, or ~0.09942 ft).
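As a quick arithmetic check, the three standards mentioned above can be compared directly. The Python snippet below simply restates the figures from this article; the dictionary keys are descriptive labels, not official names.

```python
# Lengths of one cun/tsun/sun in centimetres, as given above.
CUN_CM = {
    "Hong Kong tsun (traditional)": 3.715,
    "PRC / Taiwan cun (1/30 m)": 100 / 30,
    "Japanese sun (1/33 m)": 100 / 33,
}

CM_PER_INCH = 2.54

for name, cm in CUN_CM.items():
    print(f"{name}: {cm:.3f} cm = {cm / CM_PER_INCH:.3f} in")
# Prints ~1.463 in, ~1.312 in and ~1.193 in, matching the values above.
```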
Mylonite (https://en.wikipedia.org/wiki/Mylonite)
Mylonite is a fine-grained, compact metamorphic rock produced by dynamic recrystallization of the constituent minerals resulting in a reduction of the grain size of the rock. Mylonites can have many different mineralogical compositions; it is a classification based on the textural appearance of the rock.
Formation
Mylonites are ductilely deformed rocks formed by the accumulation of large shear strain, in ductile fault zones. There are many different views on the formation of mylonites, but it is generally agreed that crystal-plastic deformation must have occurred, and that fracturing and cataclastic flow are secondary processes in the formation of mylonites. Mechanical abrasion of grains by milling does not occur, although this was originally thought to be the process that formed mylonites, which were named from the Greek μύλος mylos, meaning mill. Mylonites form at depths of no less than 4 km.
There are many different mechanisms that accommodate crystal-plastic deformation. In crustal rocks the most important processes are dislocation creep and diffusion creep. Dislocation generation acts to increase the internal energy of crystals. This effect is compensated through grain-boundary-migration recrystallization which reduces the internal energy by increasing the grain boundary area and reducing the grain volume, storing energy at the mineral grain surface. This process tends to organize dislocations into subgrain boundaries. As more dislocations are added to subgrain boundaries, the misorientation across that subgrain boundary will increase until the boundary becomes a high-angle boundary and the subgrain effectively becomes a new grain. This process, sometimes referred to as subgrain rotation recrystallization, acts to reduce the mean grain size. Volume and grain-boundary diffusion, the critical mechanisms in diffusion creep, become important at high temperatures and small grain sizes. Thus some researchers have argued that as mylonites are formed by dislocation creep and dynamic recrystallization, a transition to diffusion creep can occur once the grain size is reduced sufficiently.
Mylonites generally develop in ductile shear zones where high rates of strain are focused. They are the deep crustal counterparts to cataclastic brittle faults that create fault breccias.
Classification
Blastomylonites are coarse-grained, often sugary in appearance, without distinct tectonic banding.
Ultramylonites have usually undergone extreme grain-size reduction. In structural geology, an ultramylonite is a mylonite in which matrix grains make up more than 90% of the rock. Ultramylonite is often hard, dark, cherty to flinty in appearance, and sometimes resembles pseudotachylite or obsidian. Conversely, ultramylonite-like rocks are sometimes in fact deformed pseudotachylyte.
Mesomylonites have undergone an appreciable amount of grain-size reduction, and are defined by their modal percentage of matrix grains being between 50 and 90%.
Protomylonites are mylonites which have experienced limited grain-size reduction, and are defined by their modal percentage of matrix grains being less than 50%. Because mylonitisation is incomplete in these rocks, relict grains and textures are apparent, and some protomylonites can resemble foliated cataclasite or even some schists. These matrix-percentage boundaries are restated in the sketch below.
Phyllonites are phyllosilicate (e.g., chlorite or mica)-rich mylonites. They typically have a well-developed secondary shear (C') fabric.
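A minimal sketch of the matrix-percentage scheme above, written as a Python lookup (the function name is arbitrary; the cutoffs simply restate the definitions given for protomylonite, mesomylonite and ultramylonite):

```python
def classify_mylonite(matrix_percent: float) -> str:
    """Classify a mylonitic rock by modal percentage of matrix grains:
    <50% protomylonite, 50-90% mesomylonite, >90% ultramylonite."""
    if not 0 <= matrix_percent <= 100:
        raise ValueError("matrix percentage must be between 0 and 100")
    if matrix_percent < 50:
        return "protomylonite"
    if matrix_percent <= 90:
        return "mesomylonite"
    return "ultramylonite"

print(classify_mylonite(35))  # protomylonite
print(classify_mylonite(75))  # mesomylonite
print(classify_mylonite(95))  # ultramylonite
```

Blastomylonites and phyllonites fall outside this percentage scheme, being defined by texture and mineralogy instead.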
Interpretation
Determining the displacements that occur in mylonite zones depends on correctly determining the orientations of the finite strain axis and inferring how those orientations change with respect to the incremental strain axis. This is referred to as determining the shear sense. It is common practice to assume that the deformation is plane strain simple shear deformation. This type of strain field assumes that deformation occurs in a tabular zone where displacement is parallel to the shear zone boundary. Furthermore, during deformation the incremental strain axis maintains a 45-degree angle to the shear zone boundary. The finite strain axes are initially parallel to the incremental axis, but rotate away during progressive deformation.
Kinematic indicators are structures in mylonites that allow the sense of shear to be determined. Most kinematic indicators are based on deformation in simple shear and infer sense of rotation of the finite strain axes with respect to the incremental strain axes. Because of the constraints imposed by simple shear, displacement is assumed to occur in the foliation plane in a direction parallel to the mineral stretching lineation. Therefore, a plane parallel to the lineation and perpendicular to the foliation is viewed to determine the shear sense.
The most common shear sense indicators are C/S fabrics, asymmetric porphyroclasts, vein and dike arrays, mantled porphyroclasts and mineral fibers. All of these indicators have a monoclinic symmetry which is directly related to the orientations of the finite strain axes. Although structures like asymmetric folds and boudinage are also related to the orientations of the finite strain axes, these structures can form from distinct strain paths and are not reliable kinematic indicators.
Typhoon (https://en.wikipedia.org/wiki/Typhoon)
A typhoon is a tropical cyclone that develops between 180° and 100°E in the Northern Hemisphere and which produces sustained hurricane-force winds of at least 119 km/h (64 knots). This region is referred to as the Northwestern Pacific Basin, accounting for almost one third of the world's tropical cyclones. The term hurricane refers to a tropical cyclone (again with sustained winds of at least 119 km/h) in the north central and northeast Pacific, and the north Atlantic. In all of the preceding regions, weaker tropical cyclones are called tropical storms. For organizational purposes, the northern Pacific Ocean is divided into three regions: the eastern (North America to 140°W), central (140°W to 180°), and western (180° to 100°E). The Regional Specialized Meteorological Center (RSMC) for tropical cyclone forecasts is in Japan, with other tropical cyclone warning centres for the northwest Pacific in Hawaii (the Joint Typhoon Warning Center), the Philippines, and Hong Kong. Although the RSMC names each system, the main name list itself is coordinated among 18 countries that have territories threatened by typhoons each year.
Within most of the northwestern Pacific, there are no official typhoon seasons, as tropical cyclones form throughout the year. Like any tropical cyclone, a typhoon requires several conditions to form and develop: sufficiently warm sea surface temperatures, atmospheric instability, high humidity in the lower-to-middle levels of the troposphere, enough Coriolis effect to develop a low-pressure centre, a pre-existing low-level focus or disturbance, and low vertical wind shear. Although the majority of storms form between June and November, a few storms may occur between December and May (although tropical cyclone formation is very rare during that time). On average, the northwestern Pacific features the most numerous and intense tropical cyclones globally. Like other basins, they are steered by the subtropical ridge towards the west or northwest, with some systems recurving near and east of Japan. The Philippines receive the brunt of the landfalls, with China and Japan being less often impacted. However, some of the deadliest typhoons in history have struck China. Southern China has the longest record of typhoon impacts for the region, with a thousand-year sample via documents within their archives. Taiwan has received the wettest known typhoon on record for the northwest Pacific tropical cyclone basins. However, Vietnam recognises its typhoon season as lasting from the beginning of June through to the end of November, with an average of four to six typhoons hitting the country annually.
According to the statistics of the Joint Typhoon Warning Center, from 1950 to 2022, the Northwest Pacific generated an average of 26.5 named tropical cyclones each year, of which an average of 16.6 reached typhoon standard or above as defined by the Joint Typhoon Warning Center.
Nomenclature
Etymology
The word typhoon is of either Chinese or Perso-Hindustani origin.
Typhoon may trace to (meaning "winds which long last"), first attested in 1124 in China. It was pronounced as in Min Chinese at the time, but later evolved to [hɔŋ tʰai]. New characters were created to match the sound, no later than 1566. The word was introduced to Mandarin Chinese in the inverted Mandarin order , later picked up by foreign sailors to appear as typhoon. The usage of was not dominant until Chu Coching, the head of meteorology of the national academy from 1929 to 1936, declared it to be the standard term. There were 29 alternative terms for typhoon recorded in a chronicle in 1762, now mostly replaced by , although or continues to be used in Min Chinese- and Wu Chinese- speaking areas from Chaozhou, Guangdong to Taizhou, Zhejiang.
Some English linguists proposed that the English word typhoon traced to the Cantonese pronunciation of (corresponding to Mandarin ), the Cantonese word in turn tracing to Arabic. This claim contradicts the fact that the Cantonese term for typhoon was before the national promotion of . (meaning "winds which long last") was first attested in 280, being the oldest Chinese term for typhoon. No Chinese historical record links it to an Arabic or foreign origin. On the other hand, Chinese records consistently assert that foreigners referred to typhoons as "black wind". "Black wind" eventually entered the vocabulary of Jin Chinese as .
Alternatively, some dictionaries propose that typhoon derived from (طوفان) tūfān, meaning storm in Persian and Hindustani. The root of (طوفان) tūfān possibly traces to the Ancient Greek mythological creature Typhôn. In French, typhon was attested in the sense of storm in 1504. The Portuguese traveler Fernão Mendes Pinto referred to a tufão in his memoir, published in 1614. The earliest form in English was "touffon" (1588); later forms include tuffon, tufon, tuffin, tuffoon, tayfun, tiffoon, and typhawn.
Intensity classifications
A tropical depression is the lowest category that the Japan Meteorological Agency uses and is the term used for a tropical system with 10-minute sustained wind speeds not exceeding 61 km/h (33 knots). A tropical depression is upgraded to a tropical storm should its sustained wind speeds exceed 61 km/h (33 knots). Tropical storms also receive official names from RSMC Tokyo. Should the storm intensify further and reach sustained wind speeds of 89 km/h (48 knots), it will be classified as a severe tropical storm. Once the system's maximum sustained winds reach 118 km/h (64 knots), the JMA will designate the tropical cyclone as a typhoon, the highest category on its scale.
Since 2009 the Hong Kong Observatory has divided typhoons into three different classifications: typhoon, severe typhoon and super typhoon. A typhoon has wind speeds of 64–79 knots (73–91 mph; 118–149 km/h), a severe typhoon has winds of at least 150 km/h (about 81 knots), and a super typhoon has winds of at least 185 km/h (about 100 knots). The United States' Joint Typhoon Warning Center (JTWC) unofficially classifies typhoons with wind speeds of at least 130 knots (67 m/s; 150 mph; 241 km/h), the equivalent of a strong Category 4 storm on the Saffir–Simpson scale, as super typhoons. However, the maximum sustained wind speed measurements that the JTWC uses are based on a 1-minute averaging period, akin to the U.S.'s National Hurricane Center and Central Pacific Hurricane Center. As a result, the JTWC's wind reports are higher than JMA's measurements, as the latter is based on a 10-minute averaging interval.
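The agency scales described above reduce to threshold lookups. The sketch below restates them in Python; the thresholds are the standard published values cited above, the function names are illustrative only, and the averaging-period caveat is noted in the comments.

```python
def jma_category(wind_kt: float) -> str:
    """JMA scale from 10-minute sustained winds in knots."""
    if wind_kt < 34:
        return "tropical depression"
    if wind_kt < 48:
        return "tropical storm"
    if wind_kt < 64:
        return "severe tropical storm"
    return "typhoon"

def hko_typhoon_class(wind_kmh: float) -> str:
    """Hong Kong Observatory subdivision of typhoon-strength systems,
    from 10-minute sustained winds in km/h."""
    if wind_kmh < 118:
        return "below typhoon strength"
    if wind_kmh < 150:
        return "typhoon"
    if wind_kmh < 185:
        return "severe typhoon"
    return "super typhoon"

# The JTWC's "super typhoon" (>= 130 kt) uses 1-minute averaging, so its
# readings run higher than JMA's 10-minute values for the same storm.
print(jma_category(70))        # typhoon
print(hko_typhoon_class(190))  # super typhoon
```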
Genesis
There are six main requirements for tropical cyclogenesis: sufficiently warm sea surface temperatures, atmospheric instability, high humidity in the lower to middle levels of the troposphere, enough Coriolis force to develop a low pressure center, a pre-existing low level focus or disturbance, and low vertical wind shear. While these conditions are necessary for tropical cyclone formation, they do not guarantee that a tropical cyclone will form. Normally, an ocean temperature of 26.5 °C (79.7 °F) spanning a depth of at least 50 m (160 ft) is considered the minimum to maintain the special mesocyclone that is the tropical cyclone. These warm waters are needed to maintain the warm core that fuels tropical systems. A minimum distance of 500 km (300 mi) from the equator is normally needed for tropical cyclogenesis.
Whether it be a depression in the Intertropical Convergence Zone (ITCZ) or monsoon trough, a broad surface front, or an outflow boundary, a low level feature with sufficient vorticity and convergence is required to begin tropical cyclogenesis. About 85 to 90 percent of Pacific typhoons form within the monsoon trough. Even with perfect upper-level conditions and the required atmospheric instability, the lack of a surface focus will prevent the development of organized convection and a surface low. Vertical wind shear of less than 10 m/s (20 kn, 33 ft/s) between the ocean surface and the tropopause is required for tropical cyclone development. Typically with Pacific typhoons, there are two jets of outflow: one to the north ahead of an upper trough in the westerlies, and a second towards the equator.
In general, the westerly wind increases associated with the Madden–Julian oscillation lead to increased tropical cyclogenesis in all tropical cyclone basins. As the oscillation propagates from west to east, it leads to an eastward march in tropical cyclogenesis with time during that hemisphere's summer season. On average, twice per year twin tropical cyclones will form in the western Pacific Ocean, near the 5th parallel north and the 5th parallel south, along the same meridian, or line of longitude. There is an inverse relationship between tropical cyclone activity in the western Pacific basin and the North Atlantic basin, however. When one basin is active, the other is normally quiet, and vice versa. The main reason for this appears to be the phase of the Madden–Julian oscillation, or MJO, which is normally in opposite modes between the two basins at any given time.
Frequency
Nearly one-third of the world's tropical cyclones form within the western Pacific. This makes this basin the most active on Earth. Pacific typhoons have formed year-round, with peak months from August to October. The peak months correspond to that of the Atlantic hurricane seasons. Along with a high storm frequency, this basin also features the most globally intense storms on record. One of the most recent busy seasons was 2013. Tropical cyclones form in any month of the year across the northwest Pacific Ocean and concentrate around June and November in the northern Indian Ocean. The area just northeast of the Philippines is the most active place on Earth for tropical cyclones to exist.
Across the Philippines themselves, activity reaches a minimum in February, before increasing steadily through June and spiking from July through October, with September being the most active month for tropical cyclones across the archipelago. Activity falls off significantly in November, although Typhoon Haiyan, the strongest Philippine typhoon on record, was a November typhoon. The most frequently impacted areas of the Philippines by tropical cyclones are northern and central Luzon and eastern Visayas. A ten-year average of satellite determined precipitation showed that at least 30 percent of the annual rainfall in the northern Philippines could be traced to tropical cyclones, while the southern islands receive less than 10 percent of their annual rainfall from tropical cyclones. The genesis and intensity of typhoons are also modulated by slow variation of the sea surface temperature and circulation features following a near-10-year frequency.
Paths
Most tropical cyclones form on the side of the subtropical ridge closer to the equator, then move poleward past the ridge axis before recurving north and northeast into the main belt of the westerlies. Most typhoons form in a region of the northwest Pacific known as typhoon alley, where the planet's most powerful tropical cyclones most frequently develop. When the subtropical ridge shifts due to El Niño, so do the preferred tropical cyclone tracks. Areas west of Japan and Korea tend to experience many fewer September–November tropical cyclone impacts during El Niño and neutral years. During El Niño years, the break in the subtropical ridge tends to lie near 130°E, which favors the Japanese archipelago. During La Niña years, the formation of tropical cyclones, along with the subtropical ridge position, shifts westward across the western Pacific Ocean, which increases the landfall threat to China and brings greater intensity to the Philippines. Those that form near the Marshall Islands find their way to Jeju Island, Korea. Typhoon paths follow three general directions.
Straight track (or straight runner). A general westward path affects the Philippines, southern China, Taiwan, and Vietnam.
A parabolic recurving track. Storms recurving affect the eastern Philippines, eastern China, Taiwan, Korea, Japan, and the Russian Far East.
Northward track. From point of origin, the storm follows a northerly direction, only affecting small islands.
A rare few storms, like Hurricane John, were redesignated as typhoons as they originated in the Eastern/Central Pacific and moved into the western Pacific.
Basin monitoring
Within the Western Pacific, RSMC Tokyo-Typhoon Center, part of the Japan Meteorological Agency, has had the official warning responsibility for the whole of the Western Pacific since 1989, and the naming responsibility for systems of tropical storm strength or greater since 2000. However each National Meteorological and Hydrological Service within the western Pacific has the responsibility for issuing warnings for land areas about tropical cyclones affecting their country, such as the Joint Typhoon Warning Center for United States agencies, the Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA) for interests in the island archipelago nation, and the Hong Kong Observatory for storms that come close enough to cause the issuance of warning signals.
Name sources and name list
The list of names consists of entries from 14 southeast and east Asian nations and regions and the United States, which have territories directly affected by typhoons. The submitted names are arranged into a list; the names are used from top to bottom and from left to right, and when all the names on the list have been used, usage starts again from the top-left corner. When a typhoon causes damage in a region, the affected region can request the retirement of the name at the next session of the ESCAP/WMO Typhoon Committee. A new name is then decided by the region whose name was retired.
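The rotation works like a simple circular queue: read the columns left to right, each column top to bottom, and wrap around when the list is exhausted. Below is a toy sketch with invented placeholder names; the real list is maintained by the Typhoon Committee, and retired names are replaced rather than reused.

```python
import itertools

# Invented placeholder names arranged in columns, as on the real list.
columns = [["Anvil", "Brook"], ["Cedar", "Dune"], ["Ember", "Frost"]]

# Read each column top to bottom, columns left to right, then wrap.
flattened = [name for column in columns for name in column]
name_cycle = itertools.cycle(flattened)

for _ in range(8):  # eight storms: the seventh reuses "Anvil"
    print(next(name_cycle))
```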
Unlike tropical cyclones in other parts of the world, typhoons are not generally named after people; instead, the names mostly refer to animals, flowers, astrological signs, and a few personal names. However, the Philippines (PAGASA) retains its own naming list, which consists of both human names and other objects. Japan and some other East Asian countries also assign numbers to typhoons.
Storms that cross the date line from the central Pacific retain their original name, but the designation of hurricane becomes typhoon.
Records
The most active Western Pacific typhoon season was in 1964, when 39 storms of tropical storm strength formed. Only 15 seasons had 30 or more storms developing since reliable records began. The least activity seen in the northwest Pacific Ocean was during the 2010 Pacific typhoon season, when only 14 tropical storms and seven typhoons formed. In the Philippines, the most active season since 1945 for tropical cyclone strikes was 1993, when nineteen tropical cyclones moved through the country. There was only one tropical cyclone that moved through the Philippines in 1958. The 2004 Pacific typhoon season was the busiest for Okinawa since 1957. Within Guangdong in southern China, during the past thousand years, the most active decades for typhoon strikes were the 1660s and 1670s.
The highest reliably estimated maximum sustained winds on record for a typhoon were those of Typhoon Haiyan, at 314 km/h (195 mph) shortly before its landfall in the central Philippines on November 8, 2013. The most intense storm based on minimum pressure was Typhoon Tip in the northwestern Pacific Ocean in 1979, which reached a minimum pressure of 870 hPa (mbar) and maximum sustained wind speeds of 165 knots (85 m/s, 190 mph, 310 km/h). The deadliest typhoon of the 20th century was Typhoon Nina, which killed nearly 100,000 in China in 1975 due to a flood that caused 12 reservoirs to fail. After Typhoon Morakot landed in Taiwan at midnight on August 8, 2009, almost the entire southern region of Taiwan (Chiayi County/Chiayi City, Tainan County/Tainan City (now merged as Tainan), Kaohsiung County/Kaohsiung City (now merged as Kaohsiung), and Pingtung County) and parts of Taitung County and Nantou County were flooded by record-breaking heavy rain. The rainfall in Pingtung County reached 2,327 millimeters (91.6 in), breaking all rainfall records of any single place in Taiwan induced by a single typhoon, and making the cyclone the wettest known typhoon.
Satellite galaxies of the Milky Way (https://en.wikipedia.org/wiki/Satellite%20galaxies%20of%20the%20Milky%20Way)
The Milky Way has several smaller galaxies gravitationally bound to it, as part of the Milky Way subgroup, which is part of the local galaxy cluster, the Local Group.
There are 61 small galaxies confirmed to be within 420 kiloparsecs (1.4 million light-years) of the Milky Way, but not all of them are necessarily in orbit, and some may themselves be in orbit of other satellite galaxies. The only ones visible to the naked eye are the Large and Small Magellanic Clouds, which have been observed since prehistory. Measurements with the Hubble Space Telescope in 2006 suggest the Magellanic Clouds may be moving too fast to be orbiting the Milky Way. Of the galaxies confirmed to be in orbit, the largest is the Sagittarius Dwarf Spheroidal Galaxy, which has a diameter roughly a twentieth that of the Milky Way.
Characteristics
Satellite galaxies that orbit between the edge of the disc of the Milky Way and the edge of the Milky Way's dark matter halo are generally depleted in hydrogen gas compared to those that orbit more distantly. This is because of their interactions with the dense hot gas halo of the Milky Way, which strips cold gas from the satellites. Satellites beyond that region still retain copious quantities of gas.
List
The Milky Way's satellite galaxies include the Large and Small Magellanic Clouds, the Sagittarius Dwarf Spheroidal Galaxy, and dozens of smaller confirmed companions.
Streams
The Sagittarius Dwarf Spheroidal Galaxy is currently in the process of being consumed by the Milky Way and is expected to pass through it within the next 100 million years. The Sagittarius Stream is a stream of stars in polar orbit around the Milky Way leeched from the Sagittarius Dwarf. The Virgo Stellar Stream is a stream of stars that is believed to have once been an orbiting dwarf galaxy that has been completely distended by the Milky Way's gravity.
Zika fever (https://en.wikipedia.org/wiki/Zika%20fever)
Zika fever, also known as Zika virus disease or simply Zika, is an infectious disease caused by the Zika virus. Most cases have no symptoms, but when present they are usually mild and can resemble dengue fever. Symptoms may include fever, red eyes, joint pain, headache, and a maculopapular rash. Symptoms generally last less than seven days. It has not caused any reported deaths during the initial infection. Mother-to-child transmission during pregnancy can cause microcephaly and other brain malformations in some babies. Infections in adults have been linked to Guillain–Barré syndrome (GBS).
Zika fever is mainly spread via the bite of mosquitoes of the Aedes type. It can also be sexually transmitted and potentially spread by blood transfusions. Infections in pregnant women can spread to the baby. Diagnosis is by testing the blood, urine, or saliva for the presence of the virus's RNA when the person is sick, or the blood for antibodies after symptoms have been present for more than a week.
Prevention involves decreasing mosquito bites in areas where the disease occurs and proper condom use. Efforts to prevent bites include the use of insect repellent, covering much of the body with clothing, mosquito nets, and getting rid of standing water where mosquitoes reproduce. There is no effective vaccine. Health officials recommended that women in areas affected by the 2015–16 Zika outbreak consider putting off pregnancy and that pregnant women not travel to these areas. While there is no specific treatment, paracetamol (acetaminophen) may help with the symptoms. Hospital admission is rarely necessary.
The virus that causes the disease was first isolated in Africa in 1947. The first documented outbreak among people occurred in 2007 in the Federated States of Micronesia. An outbreak started in Brazil in 2015, and spread to the Americas, Pacific, Asia, and Africa. This led the World Health Organization to declare it a Public Health Emergency of International Concern in February 2016. The emergency was lifted in November 2016, but 84 countries still reported cases as of March 2017. The last proven case of Zika spread in the Continental United States was in 2017.
Signs and symptoms
Most people who are infected have no or few symptoms. Otherwise the most common signs and symptoms of Zika fever are fever, rash, conjunctivitis (red eyes), muscle and joint pain, and headache, which are similar to signs and symptoms of dengue and chikungunya fever. The time from a mosquito bite to developing symptoms is not yet known, but is probably a few days to a week. The disease lasts for several days to a week. It is usually mild enough for people not to go to a hospital.
Because Zika virus is in the same family as the dengue virus, there has been concern that it could cause similar bleeding disorders. However, this has been documented in only one case, with blood seen in semen, also known as hematospermia.
Guillain–Barré syndrome
Zika virus infections have been strongly associated with Guillain–Barré syndrome (GBS), which is a rapid onset of muscle weakness caused by the immune system damaging the peripheral nervous system and which can progress to paralysis. While both GBS and Zika infection can occur simultaneously in the same individual, it is difficult to definitively identify Zika virus as the cause of GBS, though Zika virus has been shown to infect human Schwann cells. Several countries affected by Zika outbreaks have reported increases in the rate of new cases of GBS. During the 2013–2014 outbreak in French Polynesia, there were 42 reported cases of GBS over three months, compared to between 3 and 10 annually before the outbreak.
Pregnancy
The disease spreads from mother to child in the womb and can cause multiple problems, most notably microcephaly, in the baby. The full range of birth defects caused by infection during pregnancy is not known, but they appear to be common, with large-scale abnormalities seen in up to 42% of live births. The most commonly observed associations have been abnormalities of brain and eye development, such as microcephaly and chorioretinal scarring. Less commonly, there have been systemic abnormalities such as hydrops fetalis, in which fluid abnormally accumulates in the fetus. These abnormalities can lead to intellectual problems, seizures, vision problems, hearing problems, feeding problems, and slow development.
Whether the stage of pregnancy at which the mother becomes infected affects the risk to the fetus is not well understood, nor is whether other risk factors affect outcomes. One group has estimated the risk of a baby developing microcephaly at about 1% when the mother is infected during the first trimester, with the risk of developing microcephaly becoming uncertain beyond the first trimester. Affected babies might appear normal but actually have brain abnormalities; infection in newborns could also lead to brain damage.
Cause
Reservoir
Zika virus is a mosquito-borne flavivirus closely related to the dengue and yellow fever viruses. While mosquitoes are the vector, the main reservoir species remains unknown, though serological evidence has been found in both West African monkeys and rodents.
Transmission
Transmission is via the bite of mosquitoes from the genus Aedes, primarily Aedes aegypti in tropical regions. It has also been isolated from Ae. africanus, Ae. apicoargenteus, Ae. luteocephalus, Ae. albopictus, Ae. vittatus and Ae. furcifer. During the 2007 outbreak on Yap Island in the South Pacific, Aedes hensilli was the vector, while Aedes polynesiensis spread the virus in French Polynesia in 2013.
Zika virus can also spread by sexual transmission from infected men to their partners. Zika virus has been isolated from semen samples, with one person having 100,000 times more virus in semen than in blood or urine two weeks after being infected. It is unclear why levels in semen can be higher than in other body fluids, and it is also unclear how long infectious virus can remain in semen. There have also been cases of men with no symptoms of Zika virus infection transmitting the disease. The CDC has recommended that all men who have travelled to affected areas wait at least 6 months before attempting conception, regardless of whether they were ill. To date there have been no reported sexual transmissions from women to their sexual partners. Oral, anal, or vaginal sex can spread the disease.
Cases of vertical perinatal transmission have been reported. The CDC recommends that women with Zika fever should wait at least 8 weeks after they start having symptoms of the disease before attempting to conceive. There have been no reported cases of transmission from breastfeeding, but infectious virus has been found in breast milk.
Like other flaviviruses, it could potentially be transmitted by blood transfusion, and several affected countries have developed strategies to screen blood donors. The U.S. FDA has recommended universal screening of blood products for Zika. The virus was detected in 3% of asymptomatic blood donors in French Polynesia.
Pathophysiology
In fruit flies, microcephaly appears to be caused by the flavivirus protein NS4A, which can disrupt brain growth by hijacking a pathway that regulates the growth of new neurons.
Diagnosis
It is difficult to diagnose Zika virus infection based on clinical signs and symptoms alone due to overlaps with other arboviruses that are endemic to similar areas. The US Centers for Disease Control and Prevention (CDC) advises that "based on the typical clinical features, the differential diagnosis for Zika virus infection is broad. In addition to dengue, other considerations include leptospirosis, malaria, rickettsia, group A streptococcus, rubella, measles, and parvovirus, enterovirus, adenovirus, and alphavirus infections (e.g., chikungunya, Mayaro, Ross River, Barmah Forest, O'nyong'nyong, and Sindbis viruses)."
In small case series, routine chemistry and complete blood counts have been normal in most patients. A few have been reported to have mild leukopenia, thrombocytopenia, and elevated liver transaminases.
Zika virus can be identified by reverse transcriptase PCR (RT-PCR) in acutely ill patients. However, the period of viremia can be short and the World Health Organization (WHO) recommends RT-PCR testing be done on serum collected within 1 to 3 days of symptom onset or on saliva samples collected during the first 3 to 5 days. When evaluating paired samples, Zika virus was detected more frequently in saliva than serum. Urine samples can be collected and tested up to 14 days after the onset of symptoms, as the virus has been seen to survive longer in the urine than either saliva or serum. The longest period of having a detectable level of the virus has been 11 days and the Zika virus does not appear to establish latency.
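As an illustration only, the sample-type windows quoted above can be encoded directly. This is a hypothetical helper for clarity, not a clinical tool, and the day ranges are simply those stated in the text:

def rt_pcr_sample_types(days_since_onset):
    """Return the sample types still within the RT-PCR windows quoted above."""
    windows = {
        "serum": (1, 3),    # serum collected within 1 to 3 days of symptom onset
        "saliva": (1, 5),   # saliva during the first 3 to 5 days
        "urine": (1, 14),   # urine testable up to 14 days after onset
    }
    return [name for name, (lo, hi) in windows.items()
            if lo <= days_since_onset <= hi]

print(rt_pcr_sample_types(4))   # ['saliva', 'urine']
print(rt_pcr_sample_types(10))  # ['urine']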
Later on, serology for the detection of specific IgM and IgG antibodies to the Zika virus can be used. IgM antibodies can be detectable within 3 days of the onset of illness. Serological cross-reactions with closely related flaviviruses such as dengue and West Nile virus as well as vaccines to flaviviruses are possible. As of 2019, the FDA has authorized two tests to detect Zika virus antibodies.
Screening in pregnancy
The CDC recommends screening some pregnant women even if they do not have symptoms of infection. Pregnant women who have traveled to affected areas should be tested between two and twelve weeks after their return from travel. Due to the difficulties with ordering and interpreting tests for Zika virus, the CDC also recommends that healthcare providers contact their local health department for assistance. For women living in affected areas, the CDC has recommended testing at the first prenatal visit with a doctor as well as in the mid-second trimester, though this may be adjusted based on local resources and the local burden of Zika virus. Additional testing should be done for any signs of Zika virus disease. Women with positive test results for Zika virus infection should have their fetus monitored by ultrasound every three to four weeks to monitor fetal anatomy and growth.
Infant testing
For infants with suspected congenital Zika virus disease, the CDC recommends testing with both serologic and molecular assays such as RT-PCR, IgM ELISA, and the plaque reduction neutralization test (PRNT). RT-PCR of the infant's serum and urine should be performed in the first two days of life. Newborns with a mother who was potentially exposed and who have positive blood tests, microcephaly, or intracranial calcifications should have further testing, including a thorough physical investigation for neurologic abnormalities, dysmorphic features, splenomegaly, hepatomegaly, and rash or other skin lesions. Other recommended tests are cranial ultrasound, hearing evaluation, and eye examination. Testing should be done for any abnormalities encountered as well as for other congenital infections such as syphilis, toxoplasmosis, rubella, cytomegalovirus infection, lymphocytic choriomeningitis virus infection, and herpes simplex virus. Some tests should be repeated up to 6 months later, as there can be delayed effects, particularly with hearing.
Infant feeding in areas of Zika virus transmission
In response to the widespread transmission of the Zika virus during the 2016 outbreak, and to concerns about viral genetic material detected in breast milk, the World Health Organization (WHO) released a guideline for infant feeding in areas of Zika virus transmission, first in 2016 and updated in 2021. The evidence showed that, despite the detection of Zika virus in breast milk, evidence of transmission to the infant is unclear; considering also that Zika virus infection among infants is mild, the balance between desirable and undesirable effects favors breastfeeding over not breastfeeding. According to the 2021 WHO guidelines:
Infants born to mothers with suspected, probable, or confirmed Zika virus infection or who reside in or have traveled to areas of ongoing Zika virus transmission should be fed according to normal infant feeding guidelines. They should start breastfeeding within one hour of birth, be exclusively breastfed for six months, and have timely introduction of adequate, safe, and properly fed complementary foods while continuing breastfeeding up to two years of age or beyond.
Infants fed with expressed breast milk from mothers with suspected, probable, or confirmed Zika virus infection or who reside in or have traveled to areas of ongoing Zika virus transmission should be fed according to normal infant feeding guidelines (strong recommendation, very-low certainty of evidence).
Among infants (0–12 months) affected by complications associated with Zika virus infection, infant feeding practices should be modified (such as adjusting the environment, postural correction, or thickening feeds) to achieve and maintain optimal possible infant growth and development (strong recommendation, very-low certainty of evidence).
Mothers and caregivers of infants affected by complications associated with Zika virus (such as feeding difficulties) should receive skilled support from health-care workers to initiate and sustain optimal infant feeding practices.
Prevention
The virus is spread by mosquitoes, making mosquito avoidance an important element of disease control. The CDC recommends that individuals:
Cover exposed skin by wearing long-sleeved shirts and long pants treated with permethrin.
Use an insect repellent containing DEET, picaridin, oil of lemon eucalyptus (OLE), or ethyl butylacetylaminopropionate (IR3535).
Always follow product directions and reapply as directed.
If you are also using sunscreen, apply sunscreen first, let it dry, then apply insect repellent.
Follow package directions when applying repellent on children. Avoid applying repellent to their hands, eyes, or mouth.
Stay and sleep in screened-in or air-conditioned rooms.
Use a bed net if the area where you are sleeping is exposed to the outdoors.
Cover cribs, strollers, and carriers with mosquito netting for babies under 2 months old.
The CDC also recommends strategies for controlling mosquitoes such as eliminating standing water, repairing septic tanks, and using screens on doors and windows. Spraying insecticide is used to kill flying mosquitoes and larvicide can be used in water containers.
Because Zika virus can be sexually transmitted, men who have gone to an area where Zika fever is occurring should be counseled to either abstain from sex or use condoms for 6 months after travel if their partner is pregnant or could potentially become pregnant. Breastfeeding is still recommended by the WHO, even by women who have had Zika fever. There have been no recorded cases of Zika transmission to infants through breastfeeding, though the replicative virus has been detected in breast milk.
When returning from travel, with or without symptoms, it is suggested that prevention of mosquito bites continue for 3 weeks in order to reduce the risk of virus transmission to uninfected mosquitoes.
CDC travel alert
Because of the "growing evidence of a link between Zika and microcephaly", in January 2016, the CDC issued a travel alert advising pregnant women to consider postponing travel to countries and territories with ongoing local transmission of Zika virus. Later, the advice was updated to caution pregnant women to avoid these areas entirely if possible and, if travel is unavoidable, to protect themselves from mosquito bites. Male partners of pregnant women and couples contemplating pregnancy who must travel to areas where Zika is active are advised to use condoms or abstain from sex. The agency also suggested that women thinking about becoming pregnant should consult with their physicians before traveling.
In September 2016, the CDC travel advisories included:
Cape Verde
Many parts of the Caribbean: Anguilla, Antigua and Barbuda, Aruba, The Bahamas, Barbados, Bonaire, British Virgin Islands, Cayman Islands, Cuba, Curaçao, Dominica, Dominican Republic, Grenada, Guadeloupe, Haiti, Jamaica, Martinique, Puerto Rico, Saba, Saint Barthélemy, Saint Lucia, Saint Martin, Saint Vincent and the Grenadines, Sint Eustatius, Sint Maarten, Trinidad and Tobago, and the U.S. Virgin Islands
Central America: Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua, and Panama
Mexico
Most of South America: Argentina, Bolivia, Brazil, Colombia, Ecuador, French Guiana, Guyana, Paraguay, Peru, Suriname, and Venezuela
Several Pacific Islands: American Samoa, Fiji, Marshall Islands, Micronesia, New Caledonia, Papua New Guinea, Samoa, and Tonga
In Asia: Singapore, Malaysia, Brunei
In December 2020, no active Zika outbreaks were reported by the CDC.
WHO response
Both the regional Pan American Health Organization (PAHO) and the WHO have issued statements of concern about the widespread public health impact of the Zika virus and its links to GBS and microcephaly. The WHO Director-General, Margaret Chan, issued a statement in February 2016 "declaring that the recent cluster of microcephaly cases and other neurological disorders reported in Brazil, following a similar cluster in French Polynesia in 2014, constitutes a Public Health Emergency of International Concern." The declaration allowed the WHO to coordinate the international response to the virus and gave its guidance the force of international law under the International Health Regulations. The declaration was ended in November 2016.
Vaccine
As of 2016, there was no available vaccine. Development was a priority of the US National Institutes of Health (NIH), but officials stated that developing a vaccine could take years. To speed new drug development, regulatory strategies were proposed by the WHO and NIH. Animal and early human studies were underway as of September 2016. As of December 2019, there were several vaccine candidates in various stages of development.
Mosquito control
Disease control in the affected countries currently centers on mosquito control. Several approaches are available for the management of Aedes aegypti populations, including the destruction of larval breeding sites (the aquatic pools in which eggs are laid and larvae hatch before developing into flying adults) and insecticides targeting the larval stages, adult mosquitoes, or both. In addition, a range of novel technologies is under development for mosquito control, and the World Health Organization has lent its support to the accelerated development of modern methods such as the use of Wolbachia bacteria to render mosquitoes resistant to the virus and the release of sterilized male mosquitoes that breed with wild females to produce non-viable offspring (offspring that do not survive to the biting adult stage).
Oxitec's genetically modified OX513A mosquito was approved by Brazil's National Biosecurity Technical Commission (CTNBio) in April 2014 and it was being used to try to combat mosquitoes carrying the Zika virus in the town of Piracicaba, São Paulo in 2016.
In the 1940s and 1950s, the Aedes aegypti mosquito was eradicated on some Caribbean islands and in at least eighteen Latin American countries. Waning political will (and presumably funding), mosquito resistance to insecticide, and a pace of urbanization that outstripped eradication efforts led to this mosquito's comeback.
Treatment
There is currently no specific treatment for Zika virus infection. Care is supportive with the treatment of pain, fever, and itching. Some authorities have recommended against using aspirin and other NSAIDs as these have been associated with hemorrhagic syndrome when used for other flaviviruses. Additionally, aspirin use is generally avoided in children when possible due to the risk of Reye syndrome.
Zika virus was poorly studied until the major outbreak in 2015, and no specific antiviral treatments are available as yet. Advice to pregnant women is to avoid any risk of infection so far as possible, as once infected there is little that can be done beyond supportive treatment.
Outcomes
Most of the time, Zika fever resolves on its own in two to seven days, but rarely, some people develop Guillain–Barré syndrome. The fetus of a pregnant woman who has Zika fever may die or be born with congenital central nervous system malformations, like microcephaly.
Epidemiology
In April 1947, as part of studies sponsored by the Rockefeller Foundation into yellow fever, six caged rhesus monkeys were placed in the canopy of the Zika Forest of Uganda. On April 18, one of the monkeys (no. 776) developed a fever, and blood samples revealed the first known case of Zika fever. Population surveys at the time in Uganda found 6.1% of individuals to be seropositive for Zika. The first human cases were reported in Nigeria in 1954. A few outbreaks have been reported in tropical Africa and in some areas of Southeast Asia. There were no documented cases of Zika virus in the Indian subcontinent until 2017, when the first cases were reported from Gujarat and Tamil Nadu; further cases were reported in Rajasthan, involving an outbreak of 153 reported cases, and in a pregnant woman living in Kerala. A 1954 study assessing blood samples from people in several Indian states found antibodies to Zika in healthy individuals, which could indicate past exposure, though it could also be due to cross-reaction with other flaviviruses.
By phylogenetic analysis of Asian strains, it was estimated that Zika virus had moved to Southeast Asia by 1945. In 1977–1978, Zika virus infection was described as a cause of fever in Indonesia. Before 2007, there were only 13 reported natural infections with Zika virus, all with a mild, self-limited febrile illness. As of July 2019, evidence of local mosquito-to-human transmission had been reported in a total of 87 countries across four of the six WHO regions: African, Americas, South-East Asia, and Western Pacific.
Yap Islands
The first major outbreak, with 185 confirmed cases, was reported in 2007 in the Yap Islands of the Federated States of Micronesia. A total of 108 cases were confirmed by PCR or serology, and 72 additional cases were suspected. The most common symptoms were rash, fever, arthralgia, and conjunctivitis; no deaths were reported. The mosquito Aedes hensilli, the predominant species identified in Yap during the outbreak, was probably the main transmission vector. While the route by which the virus reached Yap Island remains uncertain, it likely arrived via infected mosquitoes or a human infected with a strain related to those in Southeast Asia. This was also the first time Zika fever had been reported outside Africa and Asia. Before the Yap Island outbreak, only 14 human cases had ever been reported.
Oceania
In 2013–2014, several outbreaks of Zika were reported in French Polynesia, New Caledonia, Easter Island and the Cook Islands. The source of the virus was thought to be an independent introduction of the virus from Southeast Asia, unrelated to the Yap Islands outbreak.
Americas
Genetic analyses of Zika virus strains suggest that Zika first entered the Americas between May and December 2013. It was first detected in the Western Hemisphere in February 2014, and rapidly spread throughout South and Central America, reaching Mexico in November 2015. In 2016 it established local transmission in Florida and Texas. The first death in the United States due to Zika occurred in February 2016.
In May 2015, Brazil officially reported its first 16 cases of the illness, although a case had already been reported in March 2015 in a returning traveller. According to the Brazilian Health Ministry, as of November 2015 there was no official count of the number of people infected with the virus in Brazil, since the disease is not subject to compulsory notification. Even so, cases were reported in 14 states of the country. Mosquito-borne Zika virus was suspected to be the cause of 2,400 possible cases of microcephaly and 29 infant deaths in Brazil in 2015 (of the roughly 2,400 cases notified in 2015, 2,165 were under investigation in December 2015, 134 were confirmed, and 102 were ruled out as microcephaly).
The Brazilian Health Ministry has reported at least 2,400 suspected cases of microcephaly in the country in 2015 as of 12 December, and 29 fatalities. Before the Zika outbreak, only an average of 150 to 200 cases per year were reported in Brazil. In the state of Pernambuco the reported rates of microcephaly in 2015 are 77 times higher than in the previous 5 years. A model using data from a Zika outbreak in French Polynesia estimated the risk of microcephaly in children born to mothers who acquired Zika virus in the first trimester to be 1%.
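Some quick arithmetic shows why a 1% first-trimester risk is so striking. The background rate assumed below (about 2 microcephaly cases per 10,000 births) is an illustrative value, not a figure from this article:

baseline = 2 / 10_000   # assumed background microcephaly rate per live birth
zika_risk = 0.01        # ~1% risk after first-trimester infection (model cited above)

print(f"relative risk: ~{zika_risk / baseline:.0f}x baseline")  # ~50x

A roughly fifty-fold excess over an assumed baseline is the same order of magnitude as the 77-fold increase reported for Pernambuco.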
On 24 January 2016, the WHO warned that the virus is likely to spread to nearly all countries of the Americas, since its vector, the mosquito Aedes aegypti, is found in all countries in the region, except for Canada and continental Chile. The mosquito and dengue fever have been detected in Chile's Easter Island, some away from its closest point in mainland Chile, since 2002.
In February 2016, WHO declared the outbreak a Public Health Emergency of International Concern as evidence grew that Zika is a cause of birth defects and neurological problems. In April 2016, WHO stated that there is a scientific consensus, based on preliminary evidence, that Zika is a cause of microcephaly in infants and Guillain–Barré syndrome in adults. Studies of this and prior outbreaks have found Zika infection during pregnancy to be associated with early pregnancy loss and other pregnancy problems. In the Americas, the number of cases peaked during the first half of 2016 and declined through 2017–2018, with a total of 31,587 suspected, probable, and confirmed cases of ZIKV disease reported in the Region of the Americas. Of these, 3,473 (11%) were laboratory-confirmed. Transmission persists at low levels in some areas and is not uniformly distributed within countries.
Asia
In 2016, imported or locally transmitted Zika was reported in all the countries of Asia except Brunei, Hong Kong, Myanmar, and Nepal. Serological surveys have indicated that Zika virus is endemic in most areas of Asia, though at a low level. While there was a sharp rise in the number of Zika cases detected in Singapore after the 2016 Summer Olympics in Brazil, genetic analysis revealed that the strains were more closely related to strains from Thailand than to those causing the epidemic in the Americas.
History
Origin of the name
It is named after the Zika Forest near Entebbe, Uganda, where the Zika virus was first identified.
Microcephaly and other infant disorders
Zika virus was first identified in the late 1940s in Uganda, but its association with microcephaly was first confirmed in Brazil. Since it was first identified, Zika has been found in more than 27 countries and territories. Following the initial Zika outbreak in Northeastern Brazil in May 2015, physicians observed a tremendous surge of reports of infants born with microcephaly, with 20 times the number of expected cases. Many of these cases have since been confirmed, leading WHO officials to project that approximately 2,500 infants will be found to have been born in Brazil with Zika-related microcephaly.
Proving that Zika causes these effects was difficult and complex for several reasons. For example, the effects on an infant might not be seen until months after the mother's initial infection, long after the time when Zika is easily detected in the body. In addition, research was needed to determine the mechanism by which Zika produced these effects.
Since the initial outbreak, studies that use several different methods found evidence of a link, leading public health officials to conclude that it appears increasingly likely the virus is linked to microcephaly and miscarriage. On 1 February 2016, the World Health Organization declared recently reported clusters of microcephaly and other neurological disorders a Public Health Emergency of International Concern (PHEIC). On 8 March 2016, the WHO Committee reconfirmed that the association between Zika and neurological disorders is of global concern.
The Zika virus was first linked with newborn microcephaly during the Brazil Zika virus outbreak. In 2015, there were 2,782 suspected cases of microcephaly compared with 147 in 2014 and 167 in 2013. Confirmation of many of the recent cases is pending, and it is difficult to estimate how many cases went unreported before the recent awareness of the risk of virus infections.
In November 2015, the Zika virus was isolated in a newborn baby from the northeastern state of Ceará, Brazil, with microcephaly and other congenital disorders. The Lancet medical journal reported in January 2016 that the Brazilian Ministry of Health had confirmed 134 cases of microcephaly "believed to be associated with Zika virus infection" with an additional 2,165 cases in 549 counties in 20 states remaining under investigation. An analysis of 574 cases of microcephaly in Brazil during 2015 and the first week of 2016, reported in March 2016, found an association with maternal illness involving rash and fever during the first trimester of pregnancy. During this period, 12 Brazilian states reported increases of at least 3 standard deviations (SDs) in cases of microcephaly compared with 2000–14, with the northeastern states of Bahia, Paraíba and Pernambuco reporting increases of more than 20 SDs.
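An increase "of at least 3 standard deviations" is a statement about how far the 2015 counts sit above the historical spread. The sketch below uses made-up yearly counts purely to demonstrate the calculation; the actual state-level data are not reproduced here:

from statistics import mean, stdev

baseline_counts = [11, 9, 13, 10, 12, 8, 11, 10, 12, 9, 11, 10, 13, 9, 12]  # hypothetical 2000-14 yearly counts
observed_2015 = 25                                                           # hypothetical 2015 count

mu, sigma = mean(baseline_counts), stdev(baseline_counts)
z = (observed_2015 - mu) / sigma
print(f"excess of {z:.0f} standard deviations above the 2000-14 mean")  # ~9 SDs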
In January 2016, a baby in Oahu, Hawaii, was born with microcephaly, the first case in the United States of brain damage linked to the virus. The baby and mother tested positive for a past Zika virus infection. The mother, who had probably acquired the virus while traveling in Brazil in May 2015 during the early stages of her pregnancy, had reported her bout of Zika. She recovered before relocating to Hawaii. Her pregnancy had progressed normally, and the baby's condition was not known until birth.
In February 2016, ocular disorders in newborns were linked to Zika virus infection. In one study in Pernambuco state in Brazil, about 40 percent of babies with Zika-related microcephaly also had scarring of the retina with spots, or pigment alteration. On 20 February 2016, Brazilian scientists announced that they had successfully sequenced the Zika virus genome and expressed hope that this would help in both developing a vaccine and determining the nature of any link to birth defects.
Also in February 2016, rumors that microcephaly is caused by the use of the larvicide pyriproxyfen in drinking water were refuted by scientists. "It's important to state that some localities that do not use pyriproxyfen also had reported cases of microcephaly", read a Brazilian government statement. The Brazilian government also refuted conspiracy theories that chickenpox and rubella vaccinations or genetically modified mosquitoes were causing increases in microcephaly.
Researchers also suspected that the Zika virus could be transmitted by a pregnant woman to her baby ("vertical transmission"). This remained unproven until February 2016, when a paper by Calvet et al. showed not only the Zika virus genome in the amniotic fluid but also IgM antibodies against the virus. This means that not only can the virus cross the placental barrier, but antibodies produced by the mother can also reach the fetus, suggesting that vertical transmission is plausible in these cases. Another study, published in March 2016 by Mlakar and colleagues, analyzed autopsy tissues from a fetus with microcephaly that was probably related to Zika virus; researchers found ZIKV in the brain tissue and suggested that the brain injuries were probably associated with the virus, which also shed light on the vertical transmission theory. Also in March 2016, the first solid evidence was reported on how the virus affects the development of the brain, indicating that it appears to preferentially kill developing brain cells.
The first cases of birth defects linked to Zika in Colombia and in Panama were reported in March 2016. In the same month, researchers published a prospective cohort study that found profound impacts in 29 percent of infants of mothers infected with Zika, some of whom were infected late in pregnancy. This study did not suffer from some of the difficulties of studying Zika: the study followed women who presented to a Rio de Janeiro clinic with fever and rash within the last five days. The women were then tested for Zika using PCR, then the progress of the pregnancies was followed using ultrasound.
Guillain–Barré syndrome
A high rate of the autoimmune disease Guillain–Barré syndrome (GBS), noted in the French Polynesia outbreak, has also been found in the outbreak that began in Brazil. Laboratory analysis found Zika infections in some patients with GBS in Brazil, El Salvador, Suriname, and Venezuela, and the WHO declared on 22 March 2016 that Zika appeared to be "implicated" in GBS and that, if the pattern were confirmed, it would represent a global public health crisis.
Research
Mechanism
Early in the 2015–16 Zika virus epidemic, research was begun to understand how the Zika virus causes microcephaly and other neurological disorders. However, with the 2019 election of Jair Bolsonaro in Brazil, who cut funding for research, and the emergence of the COVID-19 pandemic in early 2020, most Zika-related research projects were abandoned or reduced.
It may involve infection of the primary neural stem cells of the fetal brain, known as neural progenitor cells. The main roles of brain stem cells are to proliferate until the correct number is achieved, and then to produce neurons through the process of neurogenesis. Zika proteins NS4A and NS4B have also been shown to directly suppress neurogenesis. Infection of brain stem cells can cause cell death, which reduces the production of future neurons and leads to a smaller brain. Zika also appears to have an equal tropism for cells of the developing eye, leading to high rates of eye abnormalities as well.
In addition to inducing cell death, infection of neural progenitor cells may alter the process of cell proliferation, causing a depletion in the pool of progenitor cells. A large number of cases of microcephaly have been associated with inherited gene mutations, specifically mutations that lead to dysfunction of the mitotic spindle. There is some evidence that Zika virus may directly or indirectly interfere with mitotic function, which may play a role in altering cell proliferation.
Another line of research considers that Zika, unlike other flaviviruses, may target developing brain cells after it crosses the placenta, and considers the resulting damage likely to be the result of inflammation as a byproduct of the immune response to the infection of those cells.
Mosquito control
Some experimental prevention methods include breeding and releasing mosquitoes that have been genetically modified to prevent them from transmitting pathogens, or have been infected with the Wolbachia bacterium, believed to inhibit the spread of viruses. A strain of Wolbachia helped to reduce the vector competence of the Zika virus in infected Aedes aegypti released in Medellin, Colombia.
Gene drive is a technique for changing wild populations, for instance to alter insects so they cannot transmit diseases (in particular mosquitoes, in the cases of malaria and Zika). Another method that has been researched aims to render male mosquitoes infertile by nuclear radiation, in the hope of reducing populations; this is done with a cobalt-60 gamma cell irradiator. In 2016, the World Health Organization encouraged field trials of transgenic male Aedes aegypti mosquitoes developed by Oxitec to try to halt the spread of the Zika virus.
| Biology and health sciences | Viral diseases | Health |
11996133 | https://en.wikipedia.org/wiki/Intercontinental%20and%20transoceanic%20fixed%20links | Intercontinental and transoceanic fixed links | A fixed link or fixed crossing is a permanent, unbroken road or rail connection across water that uses some combination of bridges, tunnels, and causeways and does not involve intermittent connections such as drawbridges or ferries. A bridge–tunnel combination is commonly used for major fixed links.
This is a list of proposed and actual transport links between continents and to offshore islands. | Technology | Ground transportation networks | null |
11009033 | https://en.wikipedia.org/wiki/Type%20II%20supernova | Type II supernova | A Type II supernova or SNII (plural: supernovae) results from the rapid collapse and violent explosion of a massive star. A star must have at least eight times, but no more than 40 to 50 times, the mass of the Sun () to undergo this type of explosion. Type II supernovae are distinguished from other types of supernovae by the presence of hydrogen in their spectra. They are usually observed in the spiral arms of galaxies and in H II regions, but not in elliptical galaxies; those are generally composed of older, low-mass stars, with few of the young, very massive stars necessary to cause a supernova.
Stars generate energy by the nuclear fusion of elements. Unlike the Sun, massive stars possess the mass needed to fuse elements that have an atomic mass greater than hydrogen and helium, albeit at increasingly higher temperatures and pressures, causing correspondingly shorter stellar life spans. The degeneracy pressure of electrons and the energy generated by these fusion reactions are sufficient to counter the force of gravity and prevent the star from collapsing, maintaining stellar equilibrium. The star fuses increasingly higher mass elements, starting with hydrogen and then helium, progressing up through the periodic table until a core of iron and nickel is produced. Fusion of iron or nickel produces no net energy output, so no further fusion can take place, leaving the nickel–iron core inert. Due to the lack of energy output creating outward thermal pressure, the core contracts due to gravity until the overlying weight of the star can be supported largely by electron degeneracy pressure.
When the compacted mass of the inert core exceeds the Chandrasekhar limit of about , electron degeneracy is no longer sufficient to counter the gravitational compression. A cataclysmic implosion of the core takes place within seconds. Without the support of the now-imploded inner core, the outer core collapses inwards under gravity and reaches a velocity of up to 23% of the speed of light, and the sudden compression increases the temperature of the inner core to up to 100 billion kelvins. Neutrons and neutrinos are formed via reverse beta decay, releasing about 10⁴⁶ joules (100 foe) in a ten-second burst. The collapse of the inner core is halted by the repulsive nuclear force and neutron degeneracy, causing the implosion to rebound and bounce outward. The energy of this expanding shock wave is sufficient to disrupt the overlying stellar material and accelerate it to escape velocity, forming a supernova explosion. The shock wave and extremely high temperature and pressure rapidly dissipate but are present for long enough to allow for a brief period during which the
production of elements heavier than iron occurs. Depending on the initial mass of the star, the remnants of the core form a neutron star or a black hole. Because of the underlying mechanism, the resulting supernova is also described as a core-collapse supernova.
There exist several categories of Type II supernova explosions, which are categorized based on the resulting light curve—a graph of luminosity versus time—following the explosion. Type II-L supernovae show a steady (linear) decline of the light curve following the explosion, whereas Type II-P display a period of slower decline (a plateau) in their light curve followed by a normal decay. Type Ib and Ic supernovae are a type of core-collapse supernova for a massive star that has shed its outer envelope of hydrogen and (for Type Ic) helium. As a result, they appear to be lacking in these elements.
Formation
Stars far more massive than the sun evolve in complex ways. In the core of the star, hydrogen is fused into helium, releasing thermal energy that heats the star's core and provides outward pressure that supports the star's layers against collapse – a situation known as stellar or hydrostatic equilibrium. The helium produced in the core accumulates there. Temperatures in the core are not yet high enough to cause it to fuse. Eventually, as the hydrogen at the core is exhausted, fusion starts to slow down, and gravity causes the core to contract. This contraction raises the temperature high enough to allow a shorter phase of helium fusion, which produces carbon and oxygen, and accounts for less than 10% of the star's total lifetime.
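The balance described here is the standard hydrostatic equilibrium condition. In textbook notation (a general form, not specific to this article):

\frac{dP}{dr} = -\frac{G \, m(r) \, \rho(r)}{r^{2}}

Here P is the pressure, \rho the density, and m(r) the mass enclosed within radius r. When fusion wanes, the pressure gradient on the left can no longer balance the gravitational term on the right, and the core contracts until the next fusion stage ignites or degeneracy pressure takes over.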
In stars of less than eight solar masses, the carbon produced by helium fusion does not fuse, and the star gradually cools to become a white dwarf. If they accumulate more mass from another star, or some other source, they may become Type Ia supernovae. But a much larger star is massive enough to continue fusion beyond this point.
The cores of these massive stars directly create temperatures and pressures needed to cause the carbon in the core to begin to fuse when the star contracts at the end of the helium-burning stage. The core gradually becomes layered like an onion, as progressively heavier atomic nuclei build up at the center, with an outermost layer of hydrogen gas, surrounding a layer of hydrogen fusing into helium, surrounding a layer of helium fusing into carbon via the triple-alpha process, surrounding layers that fuse to progressively heavier elements. As a star this massive evolves, it undergoes repeated stages where fusion in the core stops, and the core collapses until the pressure and temperature are sufficient to begin the next stage of fusion, reigniting to halt collapse.
{| class="wikitable"
|+ Core-burning nuclear fusion stages for a 25-solar mass star
!rowspan="2"| Process
!rowspan="2"| Main fuel
!rowspan="2"| Main products
!colspan="3"| 25 solar-mass star
|-
!style="font-weight: normal"| Temperature (K)
!style="font-weight: normal"| Density (g/cm³)
!style="font-weight: normal"| Duration
|-
|| hydrogen burning
|| hydrogen
|| helium
| style="text-align:center;"|
| style="text-align:center;"| 10
| style="text-align:center;"|
|-
|| triple-alpha process
|| helium
|| carbon, oxygen
| style="text-align:center;"|
| style="text-align:center;"| 2000
| style="text-align:center;"|
|-
|| carbon-burning process
|| carbon
|| Ne, Na, Mg, Al
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 1000 years
|-
|| neon-burning process
|| neon
|| O, Mg
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 3 years
|-
|| oxygen-burning process
|| oxygen
|| Si, S, Ar, Ca
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 0.3 years
|-
|| silicon-burning process
|| silicon
|| nickel (decays into iron)
| style="text-align:center;"|
| style="text-align:center;"|
| style="text-align:center;"| 5 days
|}
Core collapse
The factor limiting this process is the amount of energy that is released through fusion, which is dependent on the binding energy that holds together these atomic nuclei. Each additional step produces progressively heavier nuclei, which release progressively less energy when fusing. In addition, from carbon-burning onwards, energy loss via neutrino production becomes significant, leading to a higher rate of reaction than would otherwise take place. This continues until nickel-56 is produced, which decays radioactively into cobalt-56 and then iron-56 over the course of a few months. As iron and nickel have the highest binding energy per nucleon of all the elements, energy cannot be produced at the core by fusion, and a nickel-iron core grows. This core is under huge gravitational pressure. As there is no fusion to further raise the star's temperature to support it against collapse, it is supported only by degeneracy pressure of electrons. In this state, matter is so dense that further compaction would require electrons to occupy the same energy states. However, this is forbidden for identical fermion particles, such as the electron – a phenomenon called the Pauli exclusion principle.
When the core's mass exceeds the Chandrasekhar limit of about , degeneracy pressure can no longer support it, and catastrophic collapse ensues. The outer part of the core reaches velocities of up to about 23% of the speed of light as it collapses toward the center of the star. The rapidly shrinking core heats up, producing high-energy gamma rays that decompose iron nuclei into helium nuclei and free neutrons via photodisintegration. As the core's density increases, it becomes energetically favorable for electrons and protons to merge via inverse beta decay, producing neutrons and elementary particles called neutrinos. Because neutrinos rarely interact with normal matter, they escape from the core, carrying away energy and further accelerating the collapse, which proceeds over a timescale of milliseconds. As the core detaches from the outer layers of the star, some of these neutrinos are absorbed by the star's outer layers, beginning the supernova explosion.
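The two reactions named above can be written out explicitly. These are standard textbook forms; the photodisintegration channel shown is one representative possibility:

\gamma + {}^{56}\mathrm{Fe} \rightarrow 13 \, {}^{4}\mathrm{He} + 4n

p + e^{-} \rightarrow n + \nu_{e}

The first reaction drains thermal energy that photons would otherwise contribute as pressure; the second removes the electrons whose degeneracy pressure had been supporting the core, while the escaping neutrinos carry energy away.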
For Type II supernovae, the collapse is eventually halted by short-range repulsive neutron-neutron interactions, mediated by the strong force, as well as by degeneracy pressure of neutrons, at a density comparable to that of an atomic nucleus. When the collapse stops, the infalling matter rebounds, producing a shock wave that propagates outward. The energy from this shock dissociates heavy elements within the core. This reduces the energy of the shock, which can stall the explosion within the outer core.
The core collapse phase is so dense and energetic that only neutrinos are able to escape. As the protons and electrons combine to form neutrons by means of electron capture, an electron neutrino is produced. In a typical Type II supernova, the newly formed neutron core has an initial temperature of about 100 billion kelvins, 10⁴ times the temperature of the Sun's core. Much of this thermal energy must be shed for a stable neutron star to form, otherwise the neutrons would "boil away". This is accomplished by a further release of neutrinos. These 'thermal' neutrinos form as neutrino-antineutrino pairs of all flavors, and total several times the number of electron-capture neutrinos. The two neutrino production mechanisms convert the gravitational potential energy of the collapse into a ten-second neutrino burst, releasing about 10⁴⁶ joules (100 foe).
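The 10⁴⁶ joule figure is essentially the gravitational binding energy released by collapsing to neutron-star dimensions. A rough estimate for a uniform-density remnant, taking M ≈ 1.4 solar masses and R ≈ 12 km as illustrative values:

E \approx \frac{3}{5} \frac{G M^{2}}{R} \approx \frac{3}{5} \cdot \frac{(6.7 \times 10^{-11})(2.8 \times 10^{30}\,\mathrm{kg})^{2}}{1.2 \times 10^{4}\,\mathrm{m}} \approx 3 \times 10^{46}\,\mathrm{J}

This matches the quoted 100 foe, nearly all of which is carried off by the neutrinos.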
Through a process that is not clearly understood, about 1%, or 10⁴⁴ joules (1 foe), of the energy released (in the form of neutrinos) is reabsorbed by the stalled shock, producing the supernova explosion. Neutrinos generated by a supernova were observed in the case of Supernova 1987A, leading astrophysicists to conclude that the core collapse picture is basically correct. The water-based Kamiokande II and IMB instruments detected antineutrinos of thermal origin, while the gallium-71-based Baksan instrument detected neutrinos (lepton number = 1) of either thermal or electron-capture origin.
When the progenitor star is below about – depending on the strength of the explosion and the amount of material that falls back – the degenerate remnant of a core collapse is a neutron star. Above this mass, the remnant collapses to form a black hole. The theoretical limiting mass for this type of core collapse scenario is about . Above that mass, a star is believed to collapse directly into a black hole without forming a supernova explosion, although uncertainties in models of supernova collapse make calculation of these limits uncertain.
Theoretical models
The Standard Model of particle physics is a theory which describes three of the four known fundamental interactions between the elementary particles that make up all matter. This theory allows predictions to be made about how particles will interact under many conditions. The energy per particle in a supernova is typically 1–150 picojoules (tens to hundreds of MeV). The per-particle energy involved in a supernova is small enough that the predictions gained from the Standard Model of particle physics are likely to be basically correct. But the high densities may require corrections to the Standard Model. In particular, Earth-based particle accelerators can produce particle interactions which are of much higher energy than are found in supernovae, but these experiments involve individual particles interacting with individual particles, and it is likely that the high densities within the supernova will produce novel effects. The interactions between neutrinos and the other particles in the supernova take place with the weak nuclear force, which is believed to be well understood. However, the interactions between the protons and neutrons involve the strong nuclear force, which is much less well understood.
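As a unit check on the quoted range, a trivial conversion between MeV and picojoules (included only for clarity):

MEV_IN_J = 1.602e-13  # one MeV expressed in joules

for mev in (10, 100, 1000):
    pj = mev * MEV_IN_J * 1e12  # joules -> picojoules
    print(f"{mev:>5} MeV = {pj:6.1f} pJ")
# 10 MeV is about 1.6 pJ and ~940 MeV is about 150 pJ,
# spanning the quoted 1-150 pJ range.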
The major unsolved problem with Type II supernovae is that it is not understood how the burst of neutrinos transfers its energy to the rest of the star, producing the shock wave that causes the star to explode. From the discussion above, only one percent of the energy needs to be transferred to produce an explosion, but explaining how that one percent is transferred has proven extremely difficult, even though the particle interactions involved are believed to be well understood. In the 1990s, one model for this involved convective overturn, which suggests that convection, either from neutrinos below or infalling matter above, completes the process of destroying the progenitor star. Elements heavier than iron are formed during this explosion by neutron capture and by the pressure of the neutrinos pressing into the boundary of the "neutrinosphere", seeding the surrounding space with a cloud of gas and dust that is richer in heavy elements than the material from which the star originally formed.
Neutrino physics, which is modeled by the Standard Model, is crucial to the understanding of this process. The other crucial area of investigation is the hydrodynamics of the plasma that makes up the dying star; how it behaves during the core collapse determines when and how the shockwave forms and when and how it stalls and is reenergized.
In fact, some theoretical models incorporate a hydrodynamical instability in the stalled shock known as the "Standing Accretion Shock Instability" (SASI). This instability arises from non-spherical perturbations that cause the stalled shock to oscillate, thereby deforming it. The SASI is often used in tandem with neutrino theories in computer simulations for re-energizing the stalled shock.
Computer models have been very successful at calculating the behavior of Type II supernovae when the shock has been formed. By ignoring the first second of the explosion, and assuming that an explosion is started, astrophysicists have been able to make detailed predictions about the elements produced by the supernova and of the expected light curve from the supernova.
Light curves for Type II-L and Type II-P supernovae
When the spectrum of a Type II supernova is examined, it normally displays Balmer absorption lines – reduced flux at the characteristic frequencies where hydrogen atoms absorb energy. The presence of these lines is used to distinguish this category of supernova from a Type I supernova.
When the luminosity of a Type II supernova is plotted over a period of time, it shows a characteristic rise to a peak brightness followed by a decline. These light curves have an average decay rate of 0.008 magnitudes per day, much lower than the decay rate for Type Ia supernovae. Type II is subdivided into two classes depending on the shape of the light curve. The light curve for a Type II-L supernova shows a steady (linear) decline following the peak brightness. By contrast, the light curve of a Type II-P supernova has a distinctive flat stretch (called a plateau) during the decline, representing a period in which the luminosity decays at a slower rate. The net luminosity decay rate is lower, at 0.0075 magnitudes per day for Type II-P, compared to 0.012 magnitudes per day for Type II-L.
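Because magnitudes are logarithmic, these decay rates translate into brightness ratios through the standard relation: a decline of Δm magnitudes corresponds to a flux drop by a factor of 10^(0.4 Δm). A short sketch using the rates quoted above:

rates = {"Type II-P": 0.0075, "Type II-L": 0.012}  # magnitudes per day, from the text
days = 100

for name, rate in rates.items():
    dm = rate * days            # total dimming in magnitudes
    factor = 10 ** (0.4 * dm)   # factor by which the flux has dropped
    print(f"{name}: {dm:.2f} mag over {days} d -> ~{factor:.1f}x fainter")

Over the same 100 days a Type II-L fades by roughly a factor of three while a Type II-P fades by only about a factor of two, which is why the plateau stands out so clearly in the light curve.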
The difference in the shape of the light curves is believed to be caused, in the case of Type II-L supernovae, by the expulsion of most of the hydrogen envelope of the progenitor star. The plateau phase in Type II-P supernovae is due to a change in the opacity of the exterior layer. The shock wave ionizes the hydrogen in the outer envelope – stripping the electron from the hydrogen atom – resulting in a significant increase in the opacity. This prevents photons from the inner parts of the explosion from escaping. When the hydrogen cools sufficiently to recombine, the outer layer becomes transparent.
Type IIn supernovae
The "n" denotes narrow, which indicates the presence of narrow or intermediate width hydrogen emission lines in the spectra. In the intermediate width case, the ejecta from the explosion may be interacting strongly with gas around the star – the circumstellar medium. The estimated circumstellar density required to explain the observational properties is much higher than that expected from the standard stellar evolution theory. It is generally assumed that the high circumstellar density is due to the high mass-loss rates of the Type IIn progenitors. The estimated mass-loss rates are typically higher than per year. There are indications that they originate as stars similar to luminous blue variables with large mass losses before exploding. SN 1998S and SN 2005gl are examples of Type IIn supernovae; SN 2006gy, an extremely energetic supernova, may be another example.
Some supernovae of Type IIn show interactions with the circumstellar medium, which leads to an increased temperature of the circumstellar dust. This warm dust can be observed as a brightening in mid-infrared light. If the circumstellar medium extends further from the supernova, the mid-infrared brightening can produce an infrared echo, causing the brightening to last more than 1000 days. These kinds of supernovae belong to the rare 2010jl-like supernovae, named after the archetypal SN 2010jl. Most 2010jl-like supernovae were discovered with the decommissioned Spitzer Space Telescope and the Wide-Field Infrared Survey Explorer (e.g. SN 2014ab, SN 2017hcc).
Type IIb supernovae
A Type IIb supernova has a weak hydrogen line in its initial spectrum, which is why it is classified as a Type II. However, later on the H emission becomes undetectable, and there is also a second peak in the light curve that has a spectrum which more closely resembles a Type Ib supernova. The progenitor could have been a massive star that expelled most of its outer layers, or one which lost most of its hydrogen envelope due to interactions with a companion in a binary system, leaving behind the core that consisted almost entirely of helium. As the ejecta of a Type IIb expands, the hydrogen layer quickly becomes more transparent and reveals the deeper layers.
The classic example of a Type IIb supernova is SN 1993J, while another example is Cassiopeia A. The IIb class was first introduced (as a theoretical concept) by Woosley et al. in 1987, and the class was soon applied to SN 1987K and SN 1993J.
| Physical sciences | Stellar astronomy | Astronomy |
11014498 | https://en.wikipedia.org/wiki/DVD | DVD | The DVD (common abbreviation for digital video disc or digital versatile disc) is a digital optical disc data storage format. It was invented and developed in 1995 and first released on November 1, 1996, in Japan. The medium can store any kind of digital data and has been widely used to store video programs (watched using DVD players), software and other computer files. DVDs offer significantly higher storage capacity than compact discs (CD) while having the same dimensions. A standard single-layer DVD can store up to 4.7 GB of data, a dual-layer DVD up to 8.5 GB. Variants can store up to a maximum of 17.08 GB.
Prerecorded DVDs are mass-produced using molding machines that physically stamp data onto the DVD. Such discs are a form of DVD-ROM because data can only be read and not written or erased. Blank recordable DVD discs (DVD-R and DVD+R) can be recorded once using a DVD recorder and then function as a DVD-ROM. Rewritable DVDs (DVD-RW, DVD+RW, and DVD-RAM) can be recorded and erased many times.
DVDs are used in DVD-Video consumer digital video format and less commonly in DVD-Audio consumer digital audio format, as well as for authoring DVD discs written in a special AVCHD format to hold high definition material (often in conjunction with AVCHD format camcorders). DVDs containing other types of information may be referred to as DVD data discs.
Etymology
The Oxford English Dictionary comments that, "In 1995, rival manufacturers of the product initially named digital video disc agreed that, in order to emphasize the flexibility of the format for multimedia applications, the preferred abbreviation DVD would be understood to denote digital versatile disc." The OED also states that in 1995, "The companies said the official name of the format will simply be DVD. Toshiba had been using the name 'digital video disc', but that was switched to 'digital versatile disc' after computer companies complained that it left out their applications."
"Digital versatile disc" is the explanation provided in a DVD Forum Primer from 2000 and in the DVD Forum's mission statement, which the purpose is to promote broad acceptance of DVD products on technology, across entertainment, and other industries.
Because DVDs became highly popular for the distribution of movies in the 2000s, the term DVD became popularly used in English as a noun to describe specifically a full-length movie released on the format; for example the sentence to "watch a DVD" describes watching a movie on DVD.
History
Development and launch
Released in 1987, CD Video used analog video encoding on optical discs matching the established standard size of audio CDs. Video CD (VCD), introduced in 1993, became one of the first formats for distributing digitally encoded films on such discs. In the same year, two new optical disc storage formats were being developed. One was the Multimedia Compact Disc (MMCD), backed by Philips and Sony (developers of the CD and CD-i), and the other was the Super Density (SD) disc, supported by Toshiba, Time Warner, Matsushita Electric, Hitachi, Mitsubishi Electric, Pioneer, Thomson, and JVC. By the time of the press launches for both formats in January 1995, the MMCD nomenclature had been dropped, and Philips and Sony were referring to their format as Digital Video Disc (DVD).
On May 3, 1995, an ad hoc industry technical group formed from five computer companies (IBM, Apple, Compaq, Hewlett-Packard, and Microsoft) issued a press release stating that they would only accept a single format. The group voted to boycott both formats unless the two camps agreed on a single, converged standard. They recruited Lou Gerstner, president of IBM, to pressure the executives of the warring factions. In one significant compromise, the MMCD and SD groups agreed to adopt proposal SD 9, which specified that both layers of the dual-layered disc be read from the same side, rather than proposal SD 10, which would have created a two-sided disc that users would have to turn over. Philips and Sony strongly insisted on the EFMPlus channel modulation code that Kees Schouhamer Immink had designed for the MMCD, because it made it possible to reuse the existing CD servo technology. Its drawback was a reduction in capacity from 5 GB to 4.7 GB.
As a result, the DVD specification provided a storage capacity of 4.7 GB (4.38 GiB) for a single-layered, single-sided disc and 8.5 GB (7.92 GiB) for a dual-layered, single-sided disc. The DVD specification ended up similar to Toshiba and Matsushita's Super Density Disc, except for the dual-layer option. MMCD was single-sided and optionally dual-layer, whereas SD was two half-thickness, single-layer discs which were pressed separately and then glued together to form a double-sided disc.
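The two figures quoted for each capacity reflect decimal (GB) versus binary (GiB) prefixes; a quick arithmetic check (illustrative Python, not part of any specification):

    # Convert nominal decimal capacities (GB) to binary units (GiB).
    def gb_to_gib(gb: float) -> float:
        return gb * 10**9 / 2**30

    print(round(gb_to_gib(4.7), 2))  # 4.38 GiB, single-layer single-sided
    print(round(gb_to_gib(8.5), 2))  # 7.92 GiB, dual-layer single-sided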
Philips and Sony decided that it was in their best interests to end the format war, and on September 15, 1995 agreed to unify with companies backing the Super Density Disc to release a single format, with technologies from both. After other compromises between MMCD and SD, the group of computer companies won the day, and a single format was agreed upon. The computer companies also collaborated with the Optical Storage Technology Association (OSTA) on the use of their implementation of the ISO-13346 file system (known as Universal Disk Format) for use on the new DVDs. The format's details were finalized on December 8, 1995.
In November 1995, Samsung announced it would start mass-producing DVDs by September 1996. The format launched on November 1, 1996, in Japan, mostly with music video releases. The first major releases from Warner Home Video arrived on December 20, 1996, with four titles available. The format's release in the U.S. was delayed multiple times, slipping from August 1996 to October 1996 and then November 1996, before finally settling on early 1997. Players began to be produced domestically that winter, with March 24, 1997, as the U.S. launch date of the format proper in seven test markets.
Approximately 32 titles were available on launch day, mainly from the Warner Bros., MGM, and New Line libraries, with the notable inclusion of the 1996 film Twister. However, the launch had been planned for the following day (March 25), and the early availability led retailers and studios to change distribution arrangements to prevent similar violations of the street date. The nationwide rollout of the format happened on August 22, 1997.
DTS announced in late 1997 that it would support the format. The sound system company revealed details in a November 1997 online interview and clarified that it would release discs in early 1998. However, this date was pushed back several times before DTS finally released its first titles at the 1999 Consumer Electronics Show.
In 2001, a blank recordable DVD cost the equivalent of US$27.34 in 2022 dollars.
Adoption
Movie and home entertainment distributors adopted the DVD format to replace the ubiquitous VHS tape as the primary consumer video distribution format.
Immediately following the formal adoption of a unified standard for DVD, two of the four leading video game console companies (Sega and The 3DO Company) said they already had plans to design a gaming console with DVDs as the source medium. Sony stated at the time that they had no plans to use DVD in their gaming systems, despite being one of the developers of the DVD format and eventually the first company to actually release a DVD-based console. Game consoles such as the PlayStation 2, Xbox, and Xbox 360 use DVDs as their source medium for games and other software. Contemporary games for Windows were also distributed on DVD. Early DVDs were mastered using DLT tape, but using DVD-R DL or +R DL eventually became common. TV DVD combos, combining a standard definition CRT TV or an HD flat panel TV with a DVD mechanism under the CRT or on the back of the flat panel, and VCR/DVD combos were also available for purchase.
For consumers, DVD soon replaced VHS as the favored choice for home movie releases.
In 2001, DVD players outsold VCRs for the first time in the United States. At that time, one in four American households owned a DVD player. By 2007, about 80% of Americans owned a DVD player, surpassing the ownership rates of VCRs, personal computers, and cable television.
Specifications
The DVD specifications created and updated by the DVD Forum are published as so-called DVD Books (e.g. DVD-ROM Book, DVD-Audio Book, DVD-Video Book, DVD-R Book, DVD-RW Book, DVD-RAM Book, DVD-AR (Audio Recording) Book, DVD-VR (Video Recording) Book, etc.). A DVD is made up of two 0.6 mm-thick half-discs glued together; normally one is a blank dummy and the other contains the data. The gluing process must be done carefully to make the disc as flat as possible, to avoid both birefringence and "disc tilt", a warping that prevents the disc from being read.
Some specifications for mechanical, physical and optical characteristics of DVD optical discs can be downloaded as freely available standards from the ISO website. There are also equivalent European Computer Manufacturers Association (Ecma) standards for some of these specifications, such as Ecma-267 for DVD-ROMs. Also, the DVD+RW Alliance publishes competing recordable DVD specifications such as DVD+R, DVD+R DL, DVD+RW or DVD+RW DL. These DVD formats are also ISO standards.
Some DVD specifications (e.g. for DVD-Video) are not publicly available and can be obtained only from the DVD Format/Logo Licensing Corporation (DVD FLLC) for a fee of US$5000. Every subscriber must sign a non-disclosure agreement as certain information on the DVD Books is proprietary and confidential.
Double-sided discs
Borrowing from the LaserDisc format, the DVD standard includes DVD-10 discs (Type B in ISO) with two recorded data layers, only one of which is accessible from each side of the disc. This doubles the total nominal capacity of a DVD-10 disc to 9.4 GB (8.75 GiB), but each side is limited to 4.7 GB. Like DVD-5 discs, DVD-10 discs are defined as single-layer (SL) discs.
Dual-layer discs
DVD hardware accesses the additional layer (layer 1) by refocusing the laser through an otherwise normally-placed, semitransparent first layer (layer 0). This laser refocus—and the subsequent time needed to reacquire laser tracking—can cause a noticeable pause in A/V playback on earlier DVD players, the length of which varies between hardware. A printed message explaining that the layer-transition pause was not a malfunction became standard on DVD keep cases. During mastering, a studio could make the transition less obvious by timing it to occur just before a camera angle change or other abrupt shift, an early example being the DVD release of Toy Story. Later in the format's life, larger data buffers and faster optical pickups in DVD players made layer transitions effectively invisible regardless of mastering.
Dual-layer DVDs are recorded using Opposite Track Path (OTP).
Combinations of the above
The DVD Book also permits an additional disc type called DVD-14: a hybrid double-sided disc with one dual-layer side and one single-layer side, giving a total nominal capacity of about 13.2 GB. DVD-14 has no counterpart in ISO.
These hybrid disc types are extremely rare because of their complicated and expensive manufacturing. For this reason, some DVDs that were initially issued as double-sided discs were later pressed as two-disc sets.
Note: The above sections regarding disc types pertain to 12 cm discs. The same disc types exist for 8 cm discs: ISO standards still regard these discs as Types A–D, while the DVD Book assigns them distinct disc types. DVD-14 has no analogous 8 cm type. The comparative data for 8 cm discs is provided further down.
DVD recordable and rewritable
HP initially developed recordable DVD media from the need to store data for backup and transport. DVD recordables are now also used for consumer audio and video recording. Three formats were developed: DVD-R/RW, DVD+R/RW (plus), and DVD-RAM. DVD-R is available in two formats, General (650 nm) and Authoring (635 nm), where Authoring discs may be recorded with CSS encrypted video content but General discs may not.
Dual-layer recording
Dual-layer recording (occasionally called double-layer recording) allows DVD-R and DVD+R discs to store nearly double the data of a single-layer disc (8.5 GB versus 4.7 GB). The additional capacity comes at a cost: DVD±R DL discs have slower write speeds than single-layer DVD±R discs. DVD-R DL was developed for the DVD Forum by Pioneer Corporation; DVD+R DL was developed for the DVD+RW Alliance by Mitsubishi Kagaku Media (MKM) and Philips.
Recordable DVD discs supporting dual-layer technology are backward-compatible with some hardware developed before the recordable medium.
Capacity
All units are expressed with SI/IEC prefixes (i.e., 1 Gigabyte = 1,000,000,000 bytes).
DVD drives and players
DVD drives are devices that read DVD discs on a computer. DVD players are standalone devices that do not require a computer and can play DVD-Video and DVD-Audio discs.
Transfer rates
Read and write speeds for the first DVD drives and players were 1,385 kB/s (1,353 KiB/s); this speed is usually called "1×". More recent models, at 18× or 20×, have 18 or 20 times that speed. For CD drives, 1× means 153.6 kB/s (150 KiB/s), about one-ninth the DVD rate.
DVDs can spin at much higher speeds than CDs: up to 32,000 rpm versus 23,000 rpm for CDs. In practice, optical drives spin them well below these limits to provide a safety margin. DVD drives limit reading speed to 16× (constant angular velocity), which corresponds to 9,280 rotations per minute; early-generation drives released before the mid-2000s have lower limits.
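As a rough illustration of how the "×" ratings translate into throughput, using the 1× rates quoted above (illustrative Python):

    DVD_1X_KBPS = 1385.0  # kB/s at DVD 1x
    CD_1X_KBPS = 153.6    # kB/s at CD 1x

    for n in (1, 16, 20):
        print(f"DVD {n}x = {n * DVD_1X_KBPS / 1000:.2f} MB/s")

    # CD 1x is roughly one-ninth of DVD 1x:
    print(round(DVD_1X_KBPS / CD_1X_KBPS, 2))  # 9.02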
DVD recordable and rewritable discs can be read and written using either constant angular velocity (CAV), constant linear velocity (CLV), Partial constant angular velocity (P-CAV) or Zoned Constant Linear Velocity (Z-CLV or ZCLV).
Because dual-layer DVDs have a slightly lower data density (4.25 GB instead of 4.7 GB per layer), reaching the same data rate requires roughly 10% faster rotation, so a given angular speed rating corresponds to about a 10% higher physical rotation speed. For that reason, reading speeds of dual-layer media have stagnated at 12× (constant angular velocity) for half-height optical drives released since around 2005, and slim-type optical drives can record dual-layer media only at 6× (constant angular velocity), although such drives still support reading speeds of 8×.
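The roughly 10% figure follows directly from the per-layer capacities (illustrative Python):

    # Same track geometry per layer, so data rate scales with areal density.
    print(round(4.7 / 4.25, 3))  # 1.106: ~10% faster rotation for the same data rate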
Disc quality measurements
The quality and data integrity of optical media are measurable, which means that future data losses caused by deteriorating media can be predicted well in advance by monitoring the rate of correctable data errors.
Support for disc quality measurement varies among optical drive vendors and models.
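Where a drive does expose such measurements, the prediction amounts to trending the error counts from periodic scans; a minimal sketch, assuming hypothetical scan data and an invented danger threshold (illustrative Python only):

    # Hypothetical correctable-error totals from periodic scans of one disc.
    scans = [(0, 120), (180, 150), (365, 240), (540, 460)]  # (age in days, errors)

    # Naive linear extrapolation from the last two scans.
    (d0, e0), (d1, e1) = scans[-2], scans[-1]
    rate = (e1 - e0) / (d1 - d0)  # errors gained per day
    DANGER = 2000                 # invented threshold, for illustration only
    if rate > 0:
        days_left = (DANGER - e1) / rate
        print(f"projected to reach the threshold in ~{days_left:.0f} days")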
DVD-Video
DVD-Video is a standard for distributing video/audio content on DVD media. The format went on sale in Japan on November 1, 1996; in the United States on March 24, 1997, to line up with the 69th Academy Awards held that day; in Canada, Central America, and Indonesia later in 1997; and in Europe, Australia, and Africa in 1998. DVD-Video became the dominant form of home video distribution in Japan almost as soon as it went on sale, but it shared the United States market with VHS for several years; not until June 15, 2003, did weekly DVD-Video rentals in the United States begin to outnumber weekly VHS cassette rentals.
DVD-Video is still the dominant form of home video distribution worldwide except in Japan, where it was surpassed by Blu-ray Disc after Blu-ray first went on sale there on March 31, 2006.
Security
The purpose of CSS is twofold:
CSS prevents byte-for-byte copies of an MPEG (digital video) stream from being playable since such copies do not include the keys that are hidden on the lead-in area of the restricted DVD.
CSS provides a reason for manufacturers to make their devices compliant with an industry-controlled standard, since CSS scrambled discs cannot in principle be played on noncompliant devices; anyone wishing to build compliant devices must obtain a license, which contains the requirement that the rest of the DRM system (region codes, Macrovision, and user operation prohibition) be implemented.
Successors and decline
In 2006, two new formats, HD DVD and Blu-ray Disc, were released as successors to DVD. HD DVD competed unsuccessfully with Blu-ray Disc in the format war of 2006–2008. A dual-layer HD DVD can store up to 30 GB and a dual-layer Blu-ray Disc up to 50 GB.
However, unlike previous format changes (e.g., vinyl to Compact Disc or VHS videotape to DVD), there was initially no indication that production of the standard DVD would gradually wind down. At the beginning of the 2010s, DVDs still dominated, with around 75% of video sales and approximately one billion DVD player sales worldwide as of April 2011. Experts claimed that the DVD would remain the dominant medium for at least another five years, as Blu-ray technology was still in its introductory phase, with poor write and read speeds and the necessary hardware expensive and not readily available.
Consumers were also initially slow to adopt Blu-ray because of its cost. By 2009, 85% of stores were selling Blu-ray Discs. A high-definition television and appropriate connection cables are also required to take full advantage of a Blu-ray disc. Some analysts suggested that the biggest obstacle to replacing DVD was its installed base: a large majority of consumers were satisfied with DVDs.
DVDs started to face competition from video on demand services around 2015. With increasing numbers of homes having high speed Internet connections, many people had the option to either rent or buy video from an online service, and view it by streaming it directly from that service's servers, meaning they no longer need any form of permanent storage media for video at all. By 2017, digital streaming services had overtaken the sales of DVDs and Blu-rays for the first time.
Until the end of the 2010s, manufacturers continued to release standard DVD titles, and the format remained the preferred one for the release of older television programs and films. Shows that were shot and edited entirely on film, such as Star Trek: The Original Series, could not be released in high definition without being re-scanned from the original film recordings. Shows that were made between the early 1980s and the early 2000s were generally shot on film, then transferred to video tape, and then edited natively in either NTSC or PAL; this makes high-definition transfers impossible, as these SD standards were baked into the final cuts of the episodes. Star Trek: The Next Generation was the only such show that had a Blu-ray release, as prints were re-scanned and edited from the ground up.
By the beginning of the 2020s, DVD sales had dropped 86% from their peak around 2005, while on-demand sales and, above all, subscription streaming of TV shows and movies grew by over 1,200%. At its peak, DVD represented almost two thirds of the video market in the US; approximately 15 years later, around 2020, it had fallen to only 10% of the market.
By 2022, there was increased demand for high-definition media, with the Ultra HD Blu-ray and regular Blu-ray formats making up almost half of the US market, while sales of physical media overall continued to shrink in favor of streaming services.
Longevity
Longevity of a storage medium is measured by how long the data remains readable, assuming compatible devices exist that can read it: that is, how long the disc can be stored until data is lost. Numerous factors affect longevity: composition and quality of the media (recording and substrate layers), humidity and light storage conditions, the quality of the initial recording (which is sometimes a matter of mutual compatibility of media and recorder), etc. According to NIST, "[a] temperature of 64.4 °F (18 °C) and 40% RH [Relative Humidity] would be considered suitable for long-term storage. A lower temperature and RH is recommended for extended-term storage."
As with CDs, the stored information and data will begin to degrade over time, with most standard DVDs lasting up to 30 years depending on the environment in which they are stored and whether they are full of data.
According to the Optical Storage Technology Association (OSTA), "Manufacturers claim lifespans ranging from 30 to 100 years for DVD, DVD-R and DVD+R discs and up to 30 years for DVD-RW, DVD+RW and DVD-RAM."
According to a NIST/LoC research project conducted in 2005–2007 using accelerated life testing, "There were fifteen DVD products tested, including five DVD-R, five DVD+R, two DVD-RW and three DVD+RW types. There were ninety samples tested for each product. ... Overall, seven of the products tested had estimated life expectancies in ambient conditions of more than 45 years. Four products had estimated life expectancies of 30–45 years in ambient storage conditions. Two products had an estimated life expectancy of 15–30 years and two products had estimated life expectancies of less than 15 years when stored in ambient conditions." The life expectancies for 95% survival estimated in this project by type of product are tabulated below:
| Technology | Non-volatile memory | null |
11015826 | https://en.wikipedia.org/wiki/Blu-ray | Blu-ray | Blu-ray (Blu-ray Disc or BD) is a digital optical disc data storage format designed to supersede the DVD format. It was invented and developed in 2005 and released worldwide on June 20, 2006, capable of storing several hours of high-definition video (HDTV 720p and 1080p). The main application of Blu-ray is as a medium for video material such as feature films and for the physical distribution of video games for the PlayStation 3, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X. The name refers to the blue laser (actually a violet laser) used to read the disc, which allows information to be stored at a greater density than is possible with the longer-wavelength red laser used for DVDs, resulting in an increased capacity.
The polycarbonate disc is 120 mm in diameter and 1.2 mm thick, the same size as DVDs and CDs. Conventional (or "pre-BD-XL") Blu-ray discs contain 25GB per layer, with dual-layer discs (50GB) being the industry standard for feature-length video discs. Triple-layer discs (100GB) and quadruple-layer discs (128GB) are available for BD-XL re-writer drives.
While the DVD-Video specification has a maximum resolution of 480p (NTSC, 720×480 pixels) or 576p (PAL, 720×576 pixels), the initial specification for storing movies on Blu-ray discs defined a maximum resolution of 1080p (1920×1080 pixels) at up to 24 progressive or 29.97 interlaced frames per second. Later revisions extended the specification, and Ultra HD Blu-ray players raised the maximum resolution to 4K (3840×2160 pixels) with progressive frame rates up to 60 frames per second. Aside from an 8K-resolution (7680×4320 pixels) Blu-ray format exclusive to Japan, videos with non-standard resolutions must use letterboxing to conform to a resolution supported by the Blu-ray specification. Besides these hardware specifications, Blu-ray is associated with a set of multimedia formats. Given that Blu-ray discs can contain ordinary computer files, there is no fixed limit on the resolution of video that can be stored when not conforming to the official specifications.
The BD format was developed by the Blu-ray Disc Association, a group representing makers of consumer electronics, computer hardware, and motion pictures. Sony unveiled the first Blu-ray Disc prototypes in October 2000, and the first prototype player was released in Japan in April 2003. Afterward, it continued to be developed until its official worldwide release on June 20, 2006, beginning the high-definition optical disc format war, where Blu-ray Disc competed with the HD DVD format. Toshiba, the main company supporting HD DVD, conceded in February 2008, and later released its own Blu-ray Disc player in late 2009. According to Media Research, high-definition software sales in the United States were slower in the first two years than DVD software sales. Blu-ray's competition includes video on demand (VOD) and DVD. In January 2016, 44% of U.S. broadband households had a Blu-ray player.
History
Early history
The information density of the DVD format was limited by the wavelength of the laser diodes used. Following protracted development, blue laser diodes operating at 405 nanometers became available on a production basis, allowing for the development of a denser storage format that could hold higher-definition media, with prototype discs made with diodes at a slightly longer wavelength of 407 nanometers in October 1998. Sony commenced two projects in collaboration with Panasonic, Philips, and TDK, applying the new diodes: UDO (Ultra Density Optical), and DVR Blue (together with Pioneer), a format of rewritable discs that would eventually become Blu-ray Disc (more specifically, BD-RE). The core technologies of the formats are similar. The first DVR Blue prototypes were unveiled by Sony at the CEATEC exhibition in October 2000. A trademark for the "Blue Disc" logo was filed on February 9, 2001. On February 19, 2002, the project was officially announced as Blu-ray Disc, and Blu-ray Disc Founders was founded by the nine initial members.
The first consumer device arrived in stores on April 10, 2003: the Sony BDZ-S77, a US$3,800 BD-RE recorder that was made available only in Japan. However, there was no standard for pre-recorded video, and no movies were released for this player. Hollywood studios insisted that players be equipped with digital rights management before they would release movies for the new format, and they wanted a new DRM system that would protect more against unauthorized copying than the failed Content Scramble System (CSS) used on DVDs. On October 4, 2004, the name Blu-ray Disc Founders was officially changed to the Blu-ray Disc Association (BDA), and 20th Century Fox joined the BDA's Board of Directors. The Blu-ray Disc physical specifications were completed in 2004.
The recording layer on which the data is stored lies under a protective layer and on top of a substrate made of polycarbonate plastic, whereas on DVDs the data layer is sandwiched in the middle between two substrates. Sony also announced in April 2004 a version using paper as the substrate, developed with Toppan Printing, with up to 25GB of storage.
In January 2005, TDK announced that it had developed an ultra-hard yet very thin polymer coating ("Durabis") for Blu-ray Discs; this was a significant technical advance because a far tougher protection was desired in the consumer market to protect bare discs against scratching and damage compared to DVD, given that Blu-ray Discs technically required a much thinner layer for the denser and higher-frequency blue laser. Cartridges, originally used for scratch protection, were no longer necessary and were scrapped. The BD-ROM specifications were finalized in early 2006.
Advanced Access Content System Licensing Administrator (AACS LA), a consortium founded in 2004, had been developing the DRM platform that could be used to distribute movies to consumers while preventing copying. However, the final AACS standard was delayed, and then delayed again when an important member of the Blu-ray Disc group voiced concerns. At the request of the initial hardware manufacturers, including Toshiba, Pioneer, and Samsung, an interim standard was published that did not include some features, such as managed copy, which would have let end users create copies limited to personal use.
Launch and sales developments
The first BD-ROM players (Samsung BD-P1000) were shipped in mid-June 2006, though HD DVD players beat them to market by a few months. The first Blu-ray Disc titles were released on June 20, 2006: 50 First Dates, The Fifth Element, Hitch, House of Flying Daggers, Underworld: Evolution, xXx (all from Sony), and MGM's The Terminator. The earliest releases used MPEG-2 video compression, the same method used on standard DVDs. The first releases using the newer VC-1 and AVC formats were introduced in September 2006. The first movies using 50GB dual-layer discs were introduced in October 2006. The first audio-only albums were released in May 2008.
By June 2008, over 2,500 Blu-ray Disc titles were available in Australia and the United Kingdom, with 3,500 in the United States and Canada. In Japan, over 3,300 titles had been released as of July 2010.
Competition from HD DVD
The DVD Forum, chaired by Toshiba, was split over whether to develop the more expensive blue laser technology. In March 2002 the forum approved a proposal, which was endorsed by Warner Bros. and other motion picture studios. The proposal involved compressing high-definition video onto dual-layer standard DVD-9 discs. In spite of this decision, however, the DVD Forum's Steering Committee announced in April that it was pursuing its own blue-laser high-definition video solution. In August, Toshiba and NEC announced their competing standard, the Advanced Optical Disc. It was finally adopted by the DVD Forum and renamed HD DVD the next year, after being voted down twice by DVD Forum members who were also Blu-ray Disc Association members—a situation that drew preliminary investigations by the U.S. Department of Justice.
HD DVD had a head start in the high-definition video market, as Blu-ray Disc sales were slow to gain market share. The first Blu-ray Disc player was perceived as expensive and buggy, and there were few titles available.
The Sony PlayStation 3, which contained a Blu-ray Disc player for primary storage, helped support Blu-ray. Sony also ran a more thorough and influential marketing campaign for the format. AVCHD camcorders were also introduced in 2006. These recordings can be played back on many Blu-ray Disc players without re-encoding but are not compatible with HD DVD players. By January 2007, Blu-ray Discs had outsold HD DVDs, and during the first three quarters of 2007, BD outsold HD DVD by about two to one. At CES 2007, Warner proposed Total Hi Def—a hybrid disc containing Blu-ray on one side and HD DVD on the other, but it was never released.
On June 28, 2007, 20th Century Fox cited Blu-ray Discs' adoption of the BD+ anticopying system as key to their decision to support the Blu-ray Disc format.
On January 4, 2008, a day before CES 2008, Warner Bros., the only major studio still releasing movies in both HD DVD and Blu-ray Disc format, announced that it would release only in Blu-ray after May 2008. This effectively included other studios that came under the Warner umbrella, such as New Line Cinema and HBO—though in Europe, HBO's distribution partner, the BBC, announced it would continue to release product on both formats while keeping an eye on market forces. This led to a chain reaction in the industry, with major American retailers such as Best Buy, Walmart, and Circuit City and Canadian chains such as Future Shop dropping HD DVD in their stores. Woolworths, then a major European retailer, dropped HD DVD from its inventory. Major DVD rental companies Netflix and Blockbuster said they would no longer carry HD DVD.
Following these new developments, on February 19, 2008, Toshiba announced it would end production of HD DVD devices, allowing Blu-ray Disc to become the industry standard for high-density optical discs. Universal Studios, the sole major studio to back HD DVD since its inception, said shortly after Toshiba's announcement: "While Universal values the close partnership we have shared with Toshiba, it is time to turn our focus to releasing new and catalog titles on Blu-ray Disc." Paramount Pictures, which started releasing movies only in HD DVD format during late 2007, also said it would start releasing on Blu-ray Disc. Both studios announced initial Blu-ray lineups in May 2008. With this, all major Hollywood studios supported Blu-ray.
Ongoing development
2005–2010
Although the Blu-ray Disc specification has been finalized, engineers continue to work on advancing the technology. By 2005, quad-layer (100GB) discs had been demonstrated both on a drive with modified optics and with standard unaltered optics. Hitachi stated that such a disc could be used to store 7 hours of HDTV video or 3 hours and 30 minutes of ultra-high-definition video. In April 2006, TDK canceled plans to produce 8-layer 200GB Blu-ray Discs. In August 2006, TDK announced that it had created a working experimental Blu-ray Disc capable of holding 200GB of data on a single side, using six 33GB data layers. In 2007, Hitachi was reported to have plans to produce 200GB discs by 2009.
Behind closed doors at CES 2007, Ritek revealed that it had successfully developed a high-definition optical disc process that extended the disc capacity to ten layers, increasing the capacity of the discs to 250GB. However, it noted the major obstacle was that current read/write technology did not allow additional layers. JVC developed a three-layer technology that allows putting both standard-definition DVD data and HD data on a BD/(standard) DVD combination. This would have enabled the consumer to purchase a disc that can be played on DVD players and can also reveal its HD version when played on a BD player. Japanese optical disc manufacturer Infinity announced the first "hybrid" Blu-ray Disc/(standard) DVD combo, to be released on February 18, 2009. This disc set of the TV series Code Blue featured four hybrid discs containing a single Blu-ray Disc layer (25GB) and two DVD layers (9GB) on the same side of the disc.
In January 2007, Hitachi showcased a 100GB Blu-ray Disc, consisting of four layers containing 25GB each. It claimed that, unlike TDK's and Panasonic's 100GB discs, this disc would be readable on standard Blu-ray Disc drives that were currently in circulation, and it was believed that a firmware update was the only requirement to make it readable by then-current players and drives. In October 2007, they revealed a 100GB Blu-ray Disc drive. In December 2008, Pioneer Corporation unveiled a 400GB Blu-ray Disc (containing 16 data layers, 25GB each) compatible with current players after a firmware update. Its planned launch was in the 2009–10 time frame for ROM and 2010–13 for rewritable discs. Ongoing development was underway to create a 1TB Blu-ray Disc. In October 2009, TDK demonstrated a 10-layer 320GB Blu-ray Disc.
At CES 2009, Panasonic unveiled the DMP-B15, the first portable Blu-ray Disc player, and Sharp introduced the LC-BD60U and LC-BD80U series, the first LCD HDTVs with integrated Blu-ray Disc players. Sharp also announced that it would sell HDTVs with integrated Blu-ray Disc recorders in the United States by the end of 2009. Set-top box recorders were not being sold in the U.S. for fear of unauthorized copying. However, personal computers with Blu-ray recorder drives were available.
On January 1, 2010, Sony, in association with Panasonic, announced plans to increase the storage capacity on their Blu-ray Discs from 25GB to 33.4GB via a technology called i-MLSE (maximum likelihood sequence estimation). The higher-capacity discs, according to Sony, would be readable on existing Blu-ray Disc players with a firmware upgrade. This technology was later used on BDXL discs.
On July 20, 2010, the research team of Sony and Japanese Tohoku University announced the joint development of a blue-violet laser, to help create Blu-ray Discs with a capacity of 1TB using only two layers (and potentially more than 1TB with additional layering). By comparison, the first blue laser was invented in 1996, with the first prototype discs coming four years later.
2011–2015
On January 7, 2013, Sony announced that it would release "Mastered in 4K" Blu-ray Disc titles sourced at 4K and encoded at 1080p. "Mastered in 4K" Blu-ray Disc titles can be played on existing Blu-ray Disc players and have a larger color space using xvYCC. On January 14, 2013, Blu-ray Disc Association president Andy Parsons stated that a task force was created three months prior to conduct a study concerning an extension to the Blu-ray Disc specification that would add the ability to contain 4K UHD video.
On August 5, 2015, the BDA announced it would commence licensing the Ultra HD Blu-ray video format starting on August 24, 2015. The Ultra HD Blu-ray format delivered support for high-dynamic-range video that significantly expanded the range between the brightest and darkest elements, an expanded color range, a high frame rate of up to 60 frames per second for smoother motion, an increase of the supported resolution to 3840×2160 pixels for a more detailed picture, object-based sound formats, and an optional "digital bridge" feature. New players were required to play this format, and they can also play DVDs and traditional Blu-ray Discs. Ultra HD Blu-ray Discs hold up to 66GB and 100GB of data on dual- and triple-layer discs, respectively.
Blu-ray's physical and file system specifications are publicly available on the BDA's website.
Future scope and market trends
According to Media Research, high-definition software sales in the United States were slower in the first two years than DVD software sales. 16.3 million DVD software units were sold in the first two years (1997–1998) compared to 8.3 million high-definition software units (2006–2007). One reason given for this difference was the smaller marketplace (26.5 million HDTVs in 2007 compared to 100 million SDTVs in 1998). Former HD DVD supporter Microsoft did not make a Blu-ray Disc drive for the Xbox 360. The 360's successor Xbox One features a Blu-ray drive, as does the PS4, with both supporting 3D Blu-ray after later firmware updates.
Shortly after the "format war" ended, Blu-ray Disc sales began to increase. A study by the NPD Group found that awareness of Blu-ray Disc had reached 60% of households in the United States. Nielsen VideoScan sales numbers showed that for some titles, such as 20th Century Fox's Hitman, up to 14% of total disc sales were from Blu-ray, although the average Blu-ray sales for the first half of the year were only around 5%. In December 2008, the Blu-ray Disc version of Warner Bros.' The Dark Knight sold 600,000 copies on the first day of its launch in the United States, Canada, and the United Kingdom. A week after the launch, The Dark Knight BD had sold over 1.7 million copies worldwide, making it the first Blu-ray Disc title to sell over a million copies in the first week of release.
According to Singulus Technologies AG, Blu-ray was adopted faster than the DVD format was at a similar period in its development. This conclusion was based on the fact that Singulus Technologies received orders for 21 Blu-ray dual-layer replication machines during the first quarter of 2008, while 17 DVD replication machines of this type were made in the same period in 1997. According to GfK Retail and Technology, in the first week of November 2008, sales of Blu-ray recorders surpassed DVD recorders in Japan. According to the Digital Entertainment Group, the number of Blu-ray Disc playback devices (both set-top box and game console) sold in the United States had reached 28.5 million by the end of 2010.
Blu-ray faces competition from video on demand and from new technologies that allow access to movies on any format or device, such as Digital Entertainment Content Ecosystem or Disney's Keychest. Some commentators suggested that renting Blu-ray would play a vital part in keeping the technology affordable while allowing it to move forward. In an effort to increase sales, studios began releasing films in combo packs with Blu-ray Discs and DVDs, as well as digital copies that can be played on computers and mobile devices. Some are released on "flipper" discs with Blu-ray on one side and DVD on the other. Other strategies are to release movies with the special features only on Blu-ray Discs and none on DVDs.
Blu-ray Discs cost no more to produce than DVD discs. However, reading and writing mechanisms are more complicated, making Blu-ray recorders, drives and players more expensive than their DVD counterparts. Adoption is also limited due to the widespread use of streaming media. Blu-ray Discs are used to distribute PlayStation 3, PlayStation 4, PlayStation 5, Xbox One and Xbox Series X games, and the aforementioned game consoles can play back regular Blu-ray Discs.
In the mid-2010s, the Ultra HD Blu-ray format, an enhanced variant of Blu-ray supporting 4K resolution, was released. Ultra HD Blu-ray discs and players became available in the first quarter of 2016, with a storage capacity of up to 100GB.
By December 2017, the specification for an 8K Blu-ray format was also completed. However, this specification was for Japan only so that it could be used by Japanese public broadcasters like NHK to broadcast in 8K resolution for the Tokyo 2020 Olympic Games in Japan.
Beyond Blu-ray
The Holographic Versatile Disc (HVD), described in the ECMA-377 standard, was in development by the Holography System Development (HSD) Forum using a green writing/reading laser (532nm) and a red positioning/addressing laser (650nm). It was to offer MPEG-2, MPEG-4 AVC (H.264), HEVC (H.265), and VC-1 encoding, supporting a maximum storage capacity of 6TB. No systems conforming to the Ecma International HVD standard have been released. The company responsible for HVD went bankrupt in 2010, making any releases unlikely.
Rise of boutique labels
A boutique Blu-ray label or specialty Blu-ray label is a home video distributor that releases films on Blu-ray or 4K Ultra HD Blu-ray format, characterized by a specific or niche target market and collectable features like "limited edition" or "special edition" releases, deluxe slipcases or packaging, and other materials. Examples of boutique Blu-ray labels include the American Genre Film Archive (AGFA), Arrow Films, Canadian International Pictures, The Criterion Collection, Kino Lorber, Severin Films, Shout! Factory, Twilight Time, Vinegar Syndrome, and the Warner Archive Collection.
Boutique Blu-ray labels, which are popular among collectors and enthusiasts of film and physical media, have been credited as a factor in a "Blu-ray renaissance" dating back to at least 2018, with some consumers choosing to purchase films on physical formats in an age of digital streaming. Reasons some consumers prefer Blu-rays to streaming include higher video quality, the tactile nature of owning a film physically, elaborate packaging, bonus features, and the desire to own or watch films that are not available in streaming services' libraries.
Physical media
Laser and optics
While a DVD uses a 650nm red laser, Blu-ray Disc uses a 405nm "blue" laser diode. Although the laser is called "blue", its color is actually in the violet range. The shorter wavelength can be focused to a smaller area, enabling it to read information recorded in pits less than half the size of those on a DVD; the pits can consequently be spaced more closely, resulting in a shorter track pitch and enabling a Blu-ray Disc to hold about five times the amount of information that can be stored on a DVD. The lasers are GaN (gallium nitride) laser diodes that produce 405nm light directly, that is, without frequency doubling or other nonlinear optical mechanisms. CDs use 780nm near-infrared lasers.
The minimum "spot size" on which a laser can be focused is limited by diffraction and depends on the wavelength of the light and the numerical aperture of the lens used to focus it. By decreasing the wavelength, increasing the numerical aperture from 0.60 to 0.85, and making the cover layer thinner to avoid unwanted optical effects, designers can cause the laser beam to focus on a smaller spot, which effectively allows more information to be stored in the same area. For a Blu-ray Disc, the spot size is 580nm. This allows a reduction of the pit size from 400nm for DVD to 150nm for Blu-ray Disc, and of the track pitch from 740nm to 320nm. See compact disc for information on optical discs' physical structure. In addition to the optical improvements, Blu-ray Discs feature improvements in data encoding that further increase the amount of content that can be stored.
Hard-coating technology
Given that the Blu-ray Disc data layer is closer to the surface of the disc compared to the DVD standard, it was found in early designs to be more vulnerable to scratches. The first discs were therefore housed in cartridges for protection, resembling Professional Discs introduced by Sony in 2003. Using a cartridge would increase the price of an already expensive medium, and would increase the size of Blu-ray Disc drives, so designers chose hard-coating of the pickup surface instead. TDK was the first company to develop a working scratch-protection coating for Blu-ray Discs, naming it Durabis. In addition, both Sony's and Panasonic's replication methods include proprietary hard-coat technologies. Sony's rewritable media are spin-coated, using a scratch-resistant acrylic and antistatic coating. Verbatim's recordable and rewritable Blu-ray Discs use their own proprietary technology, called Hard Coat. Colloidal silica-dispersed UV-curable resins are used for the hard coating, given that, according to the Blu-ray Disc Association, they offer the best tradeoff between scratch resistance, optical properties, and productivity.
The Blu-ray Disc specification requires the testing of resistance to scratches by mechanical abrasion. In contrast, DVD media are not required to be scratch-resistant, but since development of the technology, some companies, such as Verbatim, implemented hard-coating for more expensive lines of recordable DVDs.
Drive speeds
The table shows the speeds available. Even the lowest speed (1×) is sufficient to play and record real-time 1080p video; the higher speeds are relevant for general data storage and more sophisticated handling of video. BD discs are designed to cope with at least 5,000 rpm of rotational speed.
The usable data rate of a Blu-ray Disc drive can be limited by the capacity of the drive's data interface. With a USB 2.0 interface, the maximum exploitable drive speed is about 36 MB/s (288 Mbit/s, also called 8× speed). A USB 3.0 interface (with proper cabling) does not have this limitation, nor does even the oldest version of Serial ATA (SATA, 1.5 Gbit/s) or the latest Parallel ATA (133 MB/s) standard. Internal Blu-ray drives that are integrated into a computer (as opposed to physically separate and connected via a cable) typically have a SATA interface.
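The interface ceiling is easy to see from the nominal rates, since Blu-ray 1× corresponds to 36 Mbit/s (4.5 MB/s); a rough comparison (illustrative Python, using approximate real-world interface figures):

    BD_1X_MBPS = 4.5  # MB/s per Blu-ray speed unit (36 Mbit/s)

    for n in (1, 6, 8, 12, 16):
        print(f"{n}x = {n * BD_1X_MBPS:.1f} MB/s")
    # 8x = 36 MB/s is about what a USB 2.0 link sustains in practice, hence the
    # 8x ceiling noted above; SATA (150+ MB/s effective) has ample headroom.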
More recent half-height Blu-ray writers have reached writing speeds of up to 16× (constant angular velocity) on single-layer BD-R media, while the highest reading speeds are 12×, presumably to prevent repeated physical stress on the disc. Slim-type drives are limited to 6× speeds (constant angular velocity) due to spatial and power limitations.
The Blu-ray format has a write verification feature, similar to that of DVD-RAM, but brings this feature to a write-once disc for the first time. If activated, the correctness of the written data is verified immediately after being written, so unreadable data can be written again. In this case, the writing speed is halved because half of the disc rotations are spent verifying rather than writing. "Write verification" is not an official term for the feature, only a description of what it does. The feature may be activated by default, as is the case in the disc writing utility growisofs. Deactivating write verification may be desirable to save time when mass-producing physical copies of data, since errors are unlikely to occur on physically undamaged media.
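The speed cost is straightforward to estimate: with verification enabled, roughly half the rotations are spent reading back, halving throughput (illustrative Python; assumes a constant 6× rate, whereas real drives vary speed across the disc):

    BD_1X_MBPS = 4.5   # MB/s per Blu-ray speed unit
    DISC_MB = 25_000   # 25 GB single-layer disc

    for verify in (False, True):
        effective = 6 * BD_1X_MBPS / (2 if verify else 1)
        minutes = DISC_MB / effective / 60
        print(f"verify={verify}: ~{minutes:.0f} min")  # ~15 min vs ~31 min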
Media quality and data integrity
The quality and data integrity of optical media can be determined by measuring the rate of errors; higher rates may indicate deteriorating media, low-quality media, physical damage such as scratches or dust, or media written by a defective optical drive.
Errors on Blu-ray media are measured using the so-called LDC (Long Distance Code) and BIS (Burst Indication Subcode) error parameters, of which rates below 13 and 15, respectively, can be considered healthy.
Not all vendors and models of optical drives have error scanning functionality implemented.
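Where the functionality exists, a health check reduces to comparing scan results against the thresholds quoted above; a minimal sketch with hypothetical per-interval scan data (illustrative Python):

    # Hypothetical per-interval scan results as (LDC rate, BIS rate) pairs.
    intervals = [(5, 1), (9, 2), (12, 4), (16, 3)]

    LDC_MAX, BIS_MAX = 13, 15  # healthy thresholds quoted above

    suspect = [i for i, (ldc, bis) in enumerate(intervals)
               if ldc >= LDC_MAX or bis >= BIS_MAX]
    print("suspect intervals:", suspect)  # [3]: LDC of 16 exceeds the threshold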
Packaging
Pre-recorded Blu-ray Disc titles usually ship in packages similar to, but slightly smaller (18.5mm shorter and 2mm thinner: 135mm × 171.5mm × 13mm) and more rounded than, a standard DVD keep case, generally with the format prominently displayed in a horizontal stripe across the top of the case (translucent blue for Blu-ray video discs, clear for Blu-ray 3D video releases, red for PlayStation 3 Greatest Hits Games, transparent for regular PlayStation 3 games, transparent dark blue for PlayStation 4 and PlayStation 5 games, transparent green for Xbox One and Xbox Series X games and black for Ultra HD Blu-ray video releases). Warren Osborn and The Seastone Media Group, LLC created the package that was adopted worldwide following the Blu-ray versus HD DVD market adoption choice. Because Blu-ray cases are smaller than DVD cases, more Blu-rays than DVDs can fit on a shelf.
Types
BD-ROM
"Blu-ray Disc Read-Only Memory", or BD-ROM, is the technical term used for standard, factory-pressed Blu-ray discs. The content of these discs is written once and cannot be modified, and they can't be created by consumer optical disc recorders.
Mini Blu-ray Disc
The "Mini Blu-ray Disc" (also, "Mini-BD" and "Mini Blu-ray") is a compact variant of the Blu-ray Disc that can store 7.8GB of data in its single-layer configuration, or 15.6GB on a dual-layer disc. It is similar in concept to the MiniDVD and Mini CD. Recordable (BD-R) and rewritable (BD-RE) versions of Mini Blu-ray Disc have been developed specifically for compact camcorders and other compact recording devices.
Blu-ray Disc recordable
"Blu-ray Disc recordable" (BD-R) refers to two optical disc formats that can be recorded with an optical disc recorder. BD-Rs can be written to once, whereas Blu-ray Disc Recordable Erasable (BD-REs) can be erased and re-recorded multiple times. The current practical maximum speed for Blu-ray Discs is about 12× (). Higher speeds of rotation (5,000+ rpm) cause too much wobble for the discs to be written properly, as with the 24× () and 56× (, 11,200 rpm) maximum speeds, respectively, of standard DVDs and CDs. Since September 2007, BD-RE is also available in the smaller 8cm Mini Blu-ray Disc size.
On September 18, 2007, Pioneer and Mitsubishi codeveloped BD-R LTH ("Low to High" in groove recording), which features an organic dye recording layer that can be manufactured by modifying existing CD-R and DVD-R production equipment, significantly reducing manufacturing costs. In February 2008, Taiyo Yuden, Mitsubishi, and Maxell released the first BD-R LTH Discs, and in March 2008, Sony's PlayStation 3 officially gained the ability to use BD-R LTH Discs with the 2.20 firmware update. In May 2009 Verbatim/Mitsubishi announced the industry's first 6X BD-R LTH media, which allows recording a 25GB disc in about 16 minutes. Unlike with the previous releases of 120mm optical discs (i.e. CDs and standard DVDs), Blu-ray recorders hit the market almost simultaneously with Blu-ray's debut.
BD9 and BD5
The BD9 format was proposed to the Blu-ray Disc Association by Warner Home Video as a cost-effective alternative to the 25/50GB BD-ROM discs. The format was supposed to use the same codecs and program structure as Blu-ray Disc video but recorded onto less expensive 8.5GB dual-layer DVD. This red-laser media could be manufactured on existing DVD production lines with lower costs of production than the 25/50GB Blu-ray media.
Usage of BD9 for releasing content on "pressed" discs never caught on. With the end of the format war, manufacturers ramped up production of Blu-ray Discs and lowered prices to compete with DVDs. On the other hand, the idea of using inexpensive DVD media became popular among individual users. A lower-capacity version of this format that uses single-layer 4.7GB DVDs has been unofficially called BD5. Both formats have been used by individuals for recording high-definition content in Blu-ray format onto recordable DVD media. Although the BD9 format was adopted as part of the BD-ROM basic format, none of the existing Blu-ray player models explicitly claim to be able to read it. Consequently, discs recorded in the BD9 and BD5 formats are not guaranteed to play on standard Blu-ray Disc players. AVCHD and AVCREC also use inexpensive media like DVDs, but unlike BD9 and BD5 these formats have limited interactivity, codec types, and data rates. As of March 2011, BD9 was removed as an official BD-ROM disc type.
BDXL
The BDXL format for recordable Blu-ray discs allows 100GB and 128GB write-once discs, and 100GB rewritable discs, for commercial applications. The BDXL specification was finalised in June 2010. The BD-R 3.0 Format Specification (BDXL) defined a multi-layered write-once disc recordable in BDAV format at 2× and 4× speeds, with capacities of 100 and 128GB and the UDF 2.5/2.6 file systems. The BD-RE 4.0 Format Specification (BDXL) defined a multi-layered rewritable disc in BDAV format at 2× and 4× speeds, with a capacity of 100GB and UDF 2.5 as the file system.
Although the 66GB and 100GB BD-ROM discs used for Ultra HD Blu-ray use the same linear density as BDXL, the two formats are not compatible with each other. It is therefore not possible to burn a triple-layer BDXL disc as an Ultra HD Blu-ray Disc playable in an Ultra HD Blu-ray player, although standard 50GB dual-layer BD-R discs can be burned in the Ultra HD Blu-ray format.
IH-BD
The IH-BD (Intra-Hybrid Blu-ray) format includes a 25GB rewritable layer (BD-RE) and a 25GB write-once layer (BD-ROM), designed to work with existing Blu-ray Discs.
Data format standards
Filesystem
Blu-ray Disc specifies the use of Universal Disk Format (UDF) 2.50 as a convergent-friendly format for both PC and consumer electronics environments. It is used in the latest specifications of BD-ROM, BD-RE, and BD-R. In the first BD-RE specification (defined in 2002), the BDFS (Blu-ray Disc File System) was used. The BD-RE 1.0 specification was defined mainly for the digital recording of high-definition television (HDTV) broadcast television. The BDFS was replaced by UDF 2.50 in the second BD-RE specification in 2005, to enable interoperability among consumer electronics, Blu-ray recorders, and personal computer systems. These optical disc recording technologies enabled PC recording and playback of BD-RE. BD-R can use UDF 2.50/2.60.
The Blu-ray Disc application for recording of digital broadcasting has been developed as System Description Blu-ray Rewritable Disc Format Part 3 Audio Visual Basic Specifications (BDAV). The requirements related to the computer file system have been specified in System Description Blu-ray Rewritable Disc Format part 2 File System Specifications version 1.0 (BDFS). Initially, the BD-RE version 1.0 (BDFS) was specifically developed for recording of digital broadcasts using the Blu-ray Disc application (BDAV application). But these requirements are superseded by the Blu-ray Rewritable Disc File System Specifications version 2.0 (UDF) (a.k.a. RE 2.0) and Blu-ray Recordable Disc File System Specifications version 1.0 (UDF) (a.k.a. R 1.0). Additionally, a new application format, BDMV (System Description Blu-ray Disc Prerecorded Format part 3 Audio Visual Basic Specifications) for High Definition Content Distribution was developed for BD-ROM. The only file system developed for BDMV is the System Description Blu-ray Read-Only Disc Format part 2 File System Specifications version 1.0 (UDF) which defines the requirements for UDF 2.50. All BDMV application files are stored under a "BDMV" directory.
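In practice, software distinguishes these file systems by scanning the Volume Recognition Sequence, which begins at byte offset 32768 (sector 16 of a 2048-byte-sector disc) and holds descriptors whose standard identifiers ("BEA01", "NSR02"/"NSR03", "TEA01" for UDF; "CD001" for ISO 9660) announce the structures present. A minimal sketch (the image filename is hypothetical):

    # Scan a disc image's Volume Recognition Sequence for structure identifiers.
    SECTOR = 2048
    KNOWN = {"BEA01", "NSR02", "NSR03", "TEA01", "BOOT2", "CD001"}
    with open("disc_image.iso", "rb") as f:  # hypothetical image file
        f.seek(16 * SECTOR)                  # the sequence begins at sector 16
        for _ in range(8):
            descriptor = f.read(SECTOR)
            if len(descriptor) < 7:
                break
            ident = descriptor[1:6].decode("ascii", errors="replace")
            if ident in KNOWN:
                print(ident)  # NSR03 indicates ECMA-167 3rd edition (UDF 2.00+)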
Application format
BDAV or BD-AV (Blu-ray Disc Audio/Visual): a consumer-oriented Blu-ray video format used for audio/video recording (defined in 2002).
BDMV or BD-MV (Blu-ray Disc Movie): a Blu-ray video format with menu capability commonly used for movie releases.
BDMV Recording specification (defined in September 2006 for BD-RE and BD-R).
RREF (Realtime Recording and Editing Format): a subset of BDMV designed for real-time recording and editing applications.
HFPA (High Fidelity Pure Audio): A high definition audio disc using the Blu-ray format
Media format
Container format
Audio, video, and other streams are multiplexed and stored on Blu-ray Discs in a container format based on the MPEG transport stream. It is also known as the BDAV MPEG-2 transport stream and can use the filename extension .m2ts. Blu-ray Disc titles authored with menus use the BDMV (Blu-ray Disc Movie) format and contain audio, video, and other streams in the BDAV container. The BDAV (Blu-ray Disc Audio/Visual) format is the consumer-oriented alternative to BDMV and is used on BD-REs and BD-Rs for audio/video recording. The BDMV format was later defined for BD-RE and BD-R as well (in September 2006, in the third revision of the BD-RE specification and the second revision of the BD-R specification).
Blu-ray Disc employs the MPEG transport stream recording method, which enables transport streams of digital broadcasts to be recorded as they are broadcast, without altering the format. It also enables flexible editing of a digital broadcast that is recorded as is, where the data can be edited just by rewriting the playback stream, and a function for high-speed, easy-to-use retrieval is built in. Blu-ray Disc video uses MPEG transport streams, whereas DVD uses MPEG program streams. An MPEG transport stream can contain multiple MPEG program streams, which allows multiple video programs to be stored in the same file and played back simultaneously (e.g., with the "picture-in-picture" effect).
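At the byte level, each 188-byte transport stream packet (beginning with the sync byte 0x47) is prefixed in the BDAV container with a 4-byte arrival-timestamp header, yielding 192-byte source packets. A minimal sketch that checks this framing in an .m2ts file (the filename is hypothetical):

    # Verify BDAV (.m2ts) framing: 192-byte source packets, sync byte at offset 4.
    PACKET = 192
    with open("movie.m2ts", "rb") as f:  # hypothetical file
        for i in range(5):               # inspect the first few packets
            pkt = f.read(PACKET)
            if len(pkt) < PACKET:
                break
            sync = pkt[4]                # byte after the 4-byte timestamp header
            print(i, hex(sync), sync == 0x47)  # expect 0x47 for valid TS packets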
Codecs
The BD-ROM specification mandates certain codec compatibilities for both hardware decoders (players) and movie software (content). Windows Media Player does not come with all of the codecs required to play Blu-ray Discs.
Video
Originally, BD-ROMs stored video at up to 1920×1080 pixel resolution at up to 60 (59.94) fields per second. Currently, with UHD BD-ROM, video can be stored at up to 3840×2160 pixel resolution at up to 60 (59.94) frames per second, progressively scanned. While most current Blu-ray players and recorders can read and write video in the full 59.94p and 50p progressive formats, players built to the UHD specification can read video in either format.
For video, all players are required to process H.262/MPEG-2 Part 2, H.264/MPEG-4 Part 10: AVC, and SMPTE VC-1. BD-ROM titles with video must store video using one of the three mandatory formats; multiple formats on a single title are allowed. Blu-ray Disc allows video with a bit depth of 8 bits per color, YCbCr, with 4:2:0 chroma subsampling. The choice of format affects the producer's licensing/royalty costs as well as the title's maximum run time, due to differences in compression efficiency. Discs encoded in MPEG-2 video typically limit content producers to around two hours of high-definition content on a single-layer (25 GB) BD-ROM. The more advanced video formats (VC-1 and MPEG-4 AVC) typically achieve a video run time twice that of MPEG-2, with comparable quality. MPEG-2, however, has the advantage that it is available without licensing costs, as all MPEG-2 patents have expired.
MPEG-2 was used by many studios (including Paramount Pictures, which initially used the VC-1 format for HD DVD releases) for the first series of Blu-ray Discs, which were launched throughout 2006. Modern releases are now often encoded in either MPEG-4 AVC or VC-1, allowing film studios to place all content on one disc, reducing costs and improving ease of use. Using these formats also frees a lot of space for storage of bonus content in HD (1080i/p), as opposed to the SD (480i/p) typically used for most titles. Some studios, such as Warner Bros., have released bonus content on discs encoded in a different format than the main feature title. For example, the Blu-ray Disc release of Superman Returns uses VC-1 for the feature film and MPEG-2 for some of its bonus content. Today, Warner and other studios typically provide bonus content in the video format that matches the feature.
Audio
For audio, BD-ROM players are required to implement Dolby Digital (AC-3), DTS, and linear PCM. Players may optionally implement Dolby Digital Plus and DTS-HD High Resolution Audio as well as lossless 5.1 and 7.1 surround sound formats Dolby TrueHD and DTS-HD Master Audio. BD-ROM titles must use one of the mandatory schemes for the primary soundtrack. A secondary audiotrack, if present, may use any of the mandatory or optional codecs.
Bit rate
The Blu-ray specification defines a maximum data transfer rate of 54 Mbit/s, a maximum AV bitrate of 48 Mbit/s (for both audio and video data), and a maximum video bitrate of 40 Mbit/s. In contrast, the HD DVD standard has a maximum data transfer rate of 36.55 Mbit/s, a maximum AV bitrate of 30.24 Mbit/s, and a maximum video bitrate of 29.4 Mbit/s.
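As a rough sanity check on these figures, the following back-of-the-envelope sketch relates disc capacity and average bitrate to run time. The bitrates chosen are illustrative, not values mandated by the specification.

```python
# Back-of-the-envelope: how long can a single-layer 25 GB disc hold video
# at a given average bitrate?
def runtime_minutes(capacity_gb: float, avg_mbit_s: float) -> float:
    bits = capacity_gb * 8e9          # decimal gigabytes -> bits
    return bits / (avg_mbit_s * 1e6) / 60

print(f"{runtime_minutes(25, 40):.0f} min")  # ~83 min at the 40 Mbit/s video maximum
print(f"{runtime_minutes(25, 27):.0f} min")  # ~123 min: roughly the "two hours" MPEG-2 case
```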
Java software interface
At the 2005 JavaOne trade show, it was announced that Sun Microsystems' Java cross-platform software environment would be included in all Blu-ray Disc players as a mandatory part of the standard. Java is used to implement interactive menus on Blu-ray Discs, as opposed to the method used on DVD-video discs. DVDs use pre-rendered MPEG segments and selectable subtitle pictures, which are considerably more primitive and rarely seamless. At the conference, Java creator James Gosling suggested that the inclusion of a Java virtual machine, as well as network connectivity in some BD devices, will allow updates to Blu-ray Discs via the Internet, adding content such as additional subtitle languages and promotional features not included on the disc at pressing time. This Java version is called BD-J and is built on a profile of the Globally Executable MHP (GEM) standard; GEM is the worldwide version of the Multimedia Home Platform standard.
Player profiles
The BD-ROM specification defines four Blu-ray Disc player profiles, including an audio-only player profile (BD-Audio) that does not require video decoding or BD-J. All of the video-based player profiles (BD-Video) are required to have a full implementation of BD-J.
On November 2, 2007, the Grace Period Profile was superseded by Bonus View as the minimum profile for new BD-Video players released to the market. When Blu-ray Disc software authored with interactive features dependent on Bonus View or BD-Live hardware capabilities is played on Profile 1.0 players, the main feature of the disc will play, but some extra features may be unavailable or have limited capability.
BD-Live
The biggest difference between Bonus View and BD-Live is that BD-Live requires the Blu-ray Disc player to have an Internet connection to access Internet-based content. BD-Live features have included Internet chats, scheduled chats with the director, Internet games, downloadable featurettes, downloadable quizzes, and downloadable movie trailers. While some Bonus View players may have an Ethernet port, it is used for firmware updates and is not used for Internet-based content. In addition, Profile 2.0 also requires more local storage in order to handle this content.
Profile 1.0 players are not eligible for Bonus View or BD-Live compliant upgrades and lack the capability to access these features, with the exception of the latest players and the PlayStation 3. An Internet connection is required to obtain such upgrades.
Region codes
As with the implementation of region codes for DVDs, Blu-ray Disc players sold in a specific geographical region are designed to play only discs authorized by the content provider for that region. This is intended to permit content providers (motion picture studios, television production companies, etc.) to enact regional price discrimination and/or exclusive content licensing. According to the Blu-ray Disc Association, all Blu-ray Disc players and Blu-ray Disc-equipped computer systems are required to enforce regional coding. However, content providers need not use region playback codes. Some current estimates suggest 70% of available movie Blu-ray Discs from the major studios are region-free and can therefore be played on any Blu-ray Disc player in any region.
Movie distributors have different region-coding policies. Among major U.S. studios, Walt Disney Pictures, Warner Bros., Paramount Pictures, Universal Studios, and Sony Pictures have released most of their titles free of region-coding. MGM and Lionsgate have released a mix of region-free and region-coded titles. While 20th Century Fox initially released most of their titles region-coded, most of their post-Disney merger content is region-free. Vintage film restoration and distribution company The Criterion Collection uses US region-coding in all Blu-ray releases, with their releases in the UK market using UK region-coding.
The Blu-ray Disc region-coding scheme divides the world into three regions, labeled A, B, and C.
A new form of Blu-ray region-coding tests not only the region of the player/player software, but also its country code, repurposing a user setting intended for localization (PSR19) as a new form of regional lockout. This means, for example, while both the US and Japan are Region A, some American discs will not play on devices/software configured for Japan or vice versa, since the two countries have different country codes. (For example, the United States is "US" (21843 or hex 0x5553), Japan is "JP" (19024 or hex 0x4a50), and Canada is "CA" (17217 or hex 0x4341).) Although there are only three Blu-ray regions, the country code allows much more precise control of the regional distribution of Blu-ray Discs than the six (or eight) DVD regions. With Blu-ray Discs, there are no "special regions" such as the regions 7 and 8 for DVDs.
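The numeric values quoted above are simply the two ASCII letters of the ISO 3166-1 country code packed into a 16-bit integer. The following minimal sketch illustrates that encoding; it is an illustration of the arithmetic, not of any player's actual implementation.

```python
# The PSR19 country code packs the two ISO 3166-1 letters into 16 bits:
# high byte = first letter, low byte = second letter.
def country_code(cc: str) -> int:
    hi, lo = (ord(c) for c in cc.upper())
    return (hi << 8) | lo

for cc in ("US", "JP", "CA"):
    v = country_code(cc)
    print(f"{cc}: {v} (hex 0x{v:04x})")
# US: 21843 (hex 0x5553), JP: 19024 (hex 0x4a50), CA: 17217 (hex 0x4341)
```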
In circumvention of region-coding restrictions, stand-alone Blu-ray Disc players are sometimes modified by third parties to allow for playback of Blu-ray Discs (and DVDs) with any region code. Instructions ("hacks") describing how to reset the Blu-ray region counter of computer player applications to make them multi-region indefinitely are also regularly posted to video enthusiast websites and forums. Unlike DVD region codes, Blu-ray region codes are verified only by the player software, not by the optical drive's firmware.
The latest types of Blu-ray players, suitable for Ultra HD Blu-ray content, are not region-free, but Ultra HD Blu-ray disc manufacturers have not yet locked the discs to any region, and they work worldwide.
Digital rights management
The Blu-ray Disc format employs several layers of digital rights management (DRM) which restrict the usage of the discs. This has led to extensive criticism of the format by organizations opposed to DRM, such as the Free Software Foundation, as well as by consumers, because new releases require player firmware updates to allow disc playback.
High-bandwidth Digital Content Protection
Blu-ray equipment is required to implement the High-bandwidth Digital Content Protection (HDCP) system to encrypt the data sent by players to rendering devices through physical connections. This is aimed at preventing the copying of copyrighted content as it travels across cables. Through a protocol flag in the media stream called the Image Constraint Token (ICT), a Blu-ray Disc can enforce its reproduction in a lower resolution whenever a full HDCP-compliant link is not used. In order to ease the transition to high definition formats, the adoption of this protection method was postponed until 2011.
Advanced Access Content System
The Advanced Access Content System (AACS) is a standard for content distribution and digital rights management. It was developed by AACS Licensing Administrator, LLC (AACS LA), a consortium that includes Disney, Intel, Microsoft, Panasonic, Warner Bros., IBM, Toshiba, and Sony. Since the appearance of the format on devices in 2006, several successful attacks have been made on it. The first known attack relied on the trusted client problem. In addition, decryption keys have been extracted from a weakly protected player (WinDVD). Since keys can be revoked in newer releases, this is only a temporary attack, and new keys must continually be discovered in order to decrypt the latest discs.
BD+
BD+ was developed by Cryptography Research Inc. and is based on their concept of Self-Protecting Digital Content. BD+, effectively a small virtual machine embedded in authorized players, allows content providers to include executable programs on Blu-ray Discs. Such programs can:
Examine the host environment to see if the player has been tampered with. Every licensed playback device manufacturer must provide the BD+ licensing authority with memory footprints that identify their devices.
Verify that the player's keys have not been changed.
Execute native code, possibly to patch an otherwise insecure system.
Transform the audio and video output. Parts of the content will not be viewable without letting the BD+ program unscramble it.
If a playback device manufacturer finds that its devices have been hacked, it can potentially release BD+ code that detects and circumvents the vulnerability. These programs can then be included in all new content releases. The specifications of the BD+ virtual machine are available only to licensed device manufacturers. A list of licensed commercial adopters is available from the BD+ website.
The first titles using BD+ were released in October 2007. Since November 2007, versions of BD+ protection have been circumvented by various versions of the AnyDVD HD program. Other programs known to be capable of circumventing BD+ protection are DumpHD (versions 0.6 and above, along with some supporting software), MakeMKV, and two applications from DVDFab (Passkey and HD Decrypter).
BD-ROM Mark
ROM Mark is a small amount of cryptographic data that is stored separately from normal Blu-ray Disc data, aiming to prevent replication of the discs. The cryptographic data is needed to decrypt the copyrighted disc content protected by AACS. A specially licensed piece of hardware is required to insert the ROM Mark into the media during mastering. During replication, this ROM Mark is transferred together with the recorded data to the disc. In consequence, any copies of a disc made with a regular recorder will lack the ROM Mark data and will be unreadable on standard players.
Backward compatibility
The Blu-ray Disc Association recommends but does not require that Blu-ray Disc drives be capable of reading standard DVDs and CDs, for backward compatibility. Most Blu-ray Disc players are capable of reading both CDs and DVDs; however, a few of the early Blu-ray Disc players released in 2006, such as the Sony BDP-S1, could play DVDs but not CDs. In addition, with the exception of some early models from LG and Samsung, Blu-ray players cannot play HD DVDs, and HD DVD players cannot play Blu-ray Discs. Some Blu-ray players can also play Video CDs, Super Audio CDs, and/or DVD-Audio discs. All Ultra HD Blu-ray players can play regular Blu-ray Discs, and most can play DVDs and CDs. The PlayStation 4 and PlayStation 5 do not support CDs.
Variations
High Fidelity Pure Audio (BD-A)
High Fidelity Pure Audio (HFPA) is a marketing initiative, spearheaded by the Universal Music Group, for audio-only Blu-ray optical discs. Launched in 2013 as a potential successor to the compact disc, it has been compared with DVD-A and SACD, which had similar aims.
AVCHD
AVCHD was originally developed as a high-definition format for consumer tapeless camcorders. Derived from the Blu-ray Disc specification, AVCHD shares a similar random access directory structure but is restricted to lower audio and video bitrates, simpler interactivity, and the use of AVC-video and Dolby AC-3 (or linear PCM) audio. Being primarily an acquisition format, AVCHD playback is not universally recognized among devices that play Blu-ray Discs. Nevertheless, many such devices are capable of playing AVCHD recordings from removable media, such as DVDs, SD/SDHC memory cards, "Memory Stick" cards, and hard disk drives.
AVCREC
AVCREC uses a BDAV container to record high-definition content on conventional DVDs. Presently AVCREC is tightly integrated with the Japanese ISDB broadcast standard and is not marketed outside of Japan. AVCREC is used primarily in set-top digital video recorders and in this regard it is comparable to HD REC.
Blu-ray 3D
The Blu-ray Disc Association (BDA) created a task force made up of executives from the film industry and the consumer electronics and IT sectors to help define standards for putting 3D film and 3D television content on a Blu-ray Disc. On December 17, 2009, the BDA officially announced 3D specs for Blu-ray Disc, allowing backward compatibility with current 2D Blu-ray players, though compatibility is limited by the fact that the longer 3D discs are triple-layer, which normal (2D only) players cannot read. The BDA has said, "The Blu-ray 3D specification calls for encoding 3D video using the "Stereo High" profile defined by Multiview Video Coding (MVC), an extension to the ITU-T H.264 Advanced Video Coding (AVC) codec currently implemented by all Blu-ray Disc players. MPEG4-MVC compresses both left and right eye views with a typical 50% overhead compared to equivalent 2D content, and can provide full 1080p resolution backward compatibility with current 2D Blu-ray Disc players." This means the MVC (3D) stream is backward compatible with H.264/AVC (2D) stream, allowing older 2D devices and software to decode stereoscopic video streams, ignoring additional information for the second view. However, some 3D discs have a user limitation set preventing the disc from being viewed in 2D (though a 2D disc is often included in the packaging).
Sony added Blu-ray 3D support to its PlayStation 3 console via a firmware upgrade on September 21, 2010. The console had previously gained 3D gaming capability via an update on April 21, 2010. Since the version 3.70 software update on August 9, 2011, the PlayStation 3 can play DTS-HD Master Audio and DTS-HD High Resolution Audio while playing 3D Blu-ray. Dolby TrueHD is used on a small minority of Blu-ray 3D releases, with bitstreaming implemented only in slim PlayStation 3 models (original "fat" PS3 models decode internally and send audio as LPCM). The PlayStation VR can also be used to watch these movies in 3D on a PlayStation 4. Most major home entertainment studios, such as Walt Disney Studios, Sony Pictures, MGM, and Universal Pictures, have discontinued the Blu-ray 3D format in North America, but continued to produce and sell 3D discs in other regions such as South America, Europe, Asia, and Australia. Paramount Pictures has ceased sales and production of 3D Blu-ray Discs worldwide, its last 3D releases being Ghost in the Shell and Transformers: The Last Knight, while Warner Bros. continued to sell and produce 3D Blu-ray Discs in North America until 2022, with their last film released on the format being Dune.
Ultra HD Blu-ray
Ultra HD Blu-ray Discs are incompatible with existing standard Blu-ray players. They support 4K UHD (3840×2160 pixel resolution) video at frame rates up to 60 progressive frames per second, encoded using High Efficiency Video Coding. The discs support high dynamic range (HDR) by increasing the color depth to 10 bits per color, and a wider color gamut than conventional Blu-ray video by using the Rec. 2020 color space.
The specification for an 8K Blu-ray format was also completed by the Blu-ray Disc Association for use in Japan. More than two hours of 8K content can be recorded on BDXL discs.
| Technology | Non-volatile memory | null |
11016082 | https://en.wikipedia.org/wiki/Spine%20%28zoology%29 | Spine (zoology) | In a zoological context, spines are hard, needle-like anatomical structures found in both vertebrate and invertebrate species. The spines of most spiny mammals are modified hairs, with a spongy center covered in a thick, hard layer of keratin and a sharp, sometimes barbed tip.
Occurrence
Mammals
Spines in mammals include the prickles of hedgehogs, and among rodents, the quills of porcupines (of both the New World and the Old), as well as the prickly fur of spiny mice, spiny pocket mice, and of species of spiny rat. They are also found on afrotherian tenrecs of the family Tenrecinae (hedgehog and streaked tenrecs), marsupial spiny bandicoots, and on echidnas (a monotreme).
An ancient synapsid, Dimetrodon, had extremely long spines on its backbone that were joined together with a web of skin that formed a sail-like structure.
Many mammalian species, like cats and fossas, also have penile spines.
The Mesozoic eutriconodont mammal Spinolestes already displayed spines similar to those of modern spiny mice.
Fish
Spines are found in the fins of most bony fishes, particularly actinopterygians (ray-finned fishes), which have folding, fan-like fins made of spreading bony spines called lepidotrichia or "rays", covered by thin stretches of skin.
In the other bony fish clade, the sarcopterygians (lobe-finned fish), the fin spines (if any at all) are significantly shorter, and each fin is instead dominated by a muscular stalk ("lobe") with a jointed internal appendicular skeleton. The limbs of tetrapods, which descended from sarcopterygian ancestors, are homologous to the paired pectoral and pelvic fins.
Some fish, such as scorpion fish and lionfish, have prominent, sharp, venomous spines for anti-predator defense. The tail stinger on a stingray is also a type of barbed spine, modified from dermal denticles.
The acanthodians, an extinct class of ancient fish that are paraphyletic to the cartilaginous fishes, have prominent bony spines on the front (rostral) edges of all fins except the tail. The primary function of these rigid spines is generally presumed to be defense against predators, but other proposed roles are as cutwaters to reduce drag or as holdfasts against subsurface currents.
Invertebrates
Defensive spines are also found in invertebrate animals, such as sea urchins. They are a feature of the shell of several different species of gastropod and bivalve mollusks, including the venus clam Pitar lupanaria.
Many species of arthropods also have spine-like protrusions on their bodies for defensive purposes. For example, the rostra on many shrimp species form a sharp spine that can be used against predators. The urticating bristles or setae on many caterpillars and New World tarantulas are essentially tiny detachable spines that can cause severe irritation upon contact. Those on the Lonomia caterpillars are venomous and can cause lethal coagulopathy, hemolysis and kidney failure.
Spines are also found in internal organs in invertebrates, such as the copulatory spines in the male or female organs of certain flatworms.
Function
In many cases, spines are a defense mechanism that help protect the animal against potential predators. Because spines are sharp, they can puncture skin and inflict pain and damage which may cause the predator to avoid that species from that point on.
The spines of some animals are capable of injecting venom. In the case of some large species of stingray, a puncture with the barbed spine and the accompanying venom has occasionally been fatal to humans.
Animals such as porcupines are considered aposematic, because their spines warn predators that they are dangerous, and in some cases, potentially toxic. Porcupines rattle their quills as a warning to predators, much like rattlesnakes use their rattles.
Treating injuries caused by spines
Because many species of fish and invertebrates carry venom within their spines, a rule of thumb is to treat every injury as if it were a snake bite. Venom can cause intense pain, and can sometimes result in death if left untreated.
On the other hand, being pricked by a porcupine quill is not dangerous, and the quills are not poisonous. The quill can be removed by gently but firmly pulling it out of the skin. The barbed tip sometimes breaks off, but it works its way out through the skin over time.
Human uses
Common uses for animal spines include:
Jewelry
Bracelets, earrings, and necklaces made from these spines are very common
Tribes from around the world use porcupine quills as jewelry for body modification, e.g., worn through the nose
Pens
Some of the earliest pens were made from quills
Quillwork, a form of textile embellishment traditionally practiced by Indigenous peoples of North America that employs the quills of porcupines as an aesthetic element
Occasionally, quills may be made into brushes
| Biology and health sciences | Integumentary system | Biology |
11020101 | https://en.wikipedia.org/wiki/Populus%20tremula | Populus tremula | Populus tremula (commonly called aspen, common aspen, Eurasian aspen, European aspen, or quaking aspen) is a species of poplar native to cool temperate regions of the Old World.
Description
It is a substantial deciduous tree growing to 40 m tall by 10 m broad, with a trunk attaining over 1 m in diameter. The bark is pale greenish-grey and smooth on young trees with dark grey diamond-shaped lenticels, becoming dark grey and fissured on older trees.
The adult leaves, produced on branches of mature trees, are nearly round, slightly wider than long, 2–8 cm in diameter, with a coarsely toothed margin and a laterally flattened petiole 4–8 cm long. The flat petiole allows them to tremble in even slight breezes, and is the source of its scientific name, as well as of one of its vernacular names, "langues de femmes", attributed to Gerard's 17th-century Herball. The leaves on seedlings and fast-growing stems of suckers (root sprouts) are of a different shape, heart-shaped to nearly triangular; they are also often much larger, up to 20 cm long, and their petiole is less flattened.
The flowers are wind-pollinated catkins produced in early spring before the new leaves appear; they are dioecious, with male and female catkins on different trees. The male catkins are patterned green and brown, 5–10 cm long when shedding pollen; the female catkins are green, 2–4 cm long at pollination, maturing in early summer to bear 10–20 (rarely 50–80) capsules, each containing numerous tiny seeds embedded in downy fluff. The fluff assists wind dispersal of the seeds when the capsules split open at maturity.
It can be distinguished from the closely related North American Populus tremuloides, which is nearly identical, by the leaves being more coarsely toothed.
Like other aspens, it spreads extensively by suckers (root sprouts), which may be produced up to 40 m from the parent tree, forming extensive clonal colonies. This often makes the job of clearing unwanted trees from an area especially difficult, as new suckers will continue to sprout from the extensive root system for up to several years after all surface growth has been eliminated.
Distribution and habitat
The species is native to Europe and Asia, from Iceland and the British Isles east to Kamchatka, north to inside the Arctic Circle in Scandinavia and northern Russia, and south to central Spain, Turkey, the Tian Shan, North Korea, and northern Japan. It also occurs at one site in northwest Africa in Algeria. In the south of its range, it occurs at high altitudes in mountains.
Ecology
Eurasian aspen is a water and light demanding species that is able to vigorously colonize an open area after fire, clear cutting or other kinds of damage. After an individual has been damaged or destroyed, root suckers are produced abundantly on the shallow lateral roots. Fast growth continues until the age of about 20 years, when crown competition increases. After that, growth speed decreases and culminates at about 30 years of age. Aspen can reach an age of 200 years.
It is a very hardy species and tolerates long, cold winters and short summers.
Aspen is resistant to browsing pressure by fallow deer owing to its unpleasant taste.
This species is important for the hornet moth, which uses it as a host during the larval stage.
Fossil record
Fossils of Populus tremula have been described from the fossil flora of Kızılcahamam district in Turkey which is of early Pliocene age.
Cultivation
The aspen is found in cultivation in parks and large gardens. The fastigiate cultivar 'Erecta', with bright yellow autumn colouring, has gained the Royal Horticultural Society's Award of Garden Merit. The cultivar is colloquially known as "Swedish columnar" in Canada and the United States.
The hybrid with Populus alba (white poplar), known as grey poplar, Populus × canescens, is widely found in Europe and central Asia. Hybrids with several other aspens have also been bred at forestry research institutes in order to find trees with greater timber production and disease resistance (e.g. P. tremula × P. tremuloides, bred in Denmark).
Use
The wood of aspen is light and soft with very little shrinkage. It is used for lumber and matches but is also valued in the pulp and paper industry, being particularly useful for writing paper. In addition, it is used for plywood and different types of flake and particle boards. Given its hardiness and capacity for rapid growth and regeneration, it plays an important role in the production of wood for renewable energy. Ecologically, the species is important as many insect and fungi species benefit from it. The tree further provides habitat for several mammals and birds that require young forests.
| Biology and health sciences | Malpighiales | Plants |
9528025 | https://en.wikipedia.org/wiki/Greenhouse%20gas%20emissions | Greenhouse gas emissions | Greenhouse gas (GHG) emissions from human activities intensify the greenhouse effect. This contributes to climate change. Carbon dioxide (CO2), from burning fossil fuels such as coal, oil, and natural gas, is the main cause of climate change. The largest annual emissions are from China, followed by the United States; the United States has higher emissions per capita. The main producers fueling the emissions globally are large oil and gas companies. Emissions from human activities have increased atmospheric carbon dioxide by about 50% over pre-industrial levels. Growth rates have varied, but the increase has been consistent among all greenhouse gases. Emissions in the 2010s averaged 56 billion tonnes a year, higher than any decade before. Total cumulative emissions from 1870 to 2022 were 703 GtC (2,575 GtCO2), of which 484±20 GtC (1,773±73 GtCO2) came from fossil fuels and industry, and 219±60 GtC (802±220 GtCO2) from land-use change. Land-use change, such as deforestation, caused about 31% of cumulative emissions over 1870–2022; coal 32%, oil 24%, and gas 10%.
Carbon dioxide is the main greenhouse gas resulting from human activities. It accounts for more than half of warming. Methane (CH4) emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-gases) play a lesser role in comparison. Emissions of carbon dioxide, methane and nitrous oxide in 2023 were all higher than ever before.
Electricity generation, heat and transport are major emitters; overall energy is responsible for around 73% of emissions. Deforestation and other changes in land use also emit carbon dioxide and methane. The largest source of anthropogenic methane emissions is agriculture, closely followed by gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Agricultural soils emit nitrous oxide partly due to fertilizers. Similarly, fluorinated gases from refrigerants play an outsized role in total human emissions.
The current CO2-equivalent emission rates, averaging 6.6 tonnes per person per year, are well over twice the estimated rate of 2.3 tonnes needed by 2030 to stay within the Paris Agreement limit of 1.5 °C (2.7 °F) over pre-industrial levels. Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries.
The carbon footprint (or greenhouse gas footprint) serves as an indicator to compare the amount of greenhouse gases emitted over the entire life cycle from the production of a good or service along the supply chain to its final consumption. Carbon accounting (or greenhouse gas accounting) is a framework of methods to measure and track how much greenhouse gas an organization emits.
Relevance for greenhouse effect and global warming
Overview of main sources
Relevant greenhouse gases
The major anthropogenic (human-origin) sources of greenhouse gases are carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4), and the fluorinated gases: hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3). Though the greenhouse effect is heavily driven by water vapor, human emissions of water vapor are not a significant contributor to warming.
Although CFCs are greenhouse gases, they are regulated by the Montreal Protocol which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Ozone depletion has only a minor role in greenhouse warming, though the two processes are sometimes confused in the media. In 2016, negotiators from over 170 nations meeting at the summit of the United Nations Environment Programme reached a legally binding accord to phase out hydrofluorocarbons (HFCs) in the Kigali Amendment to the Montreal Protocol. The use of CFC-12 (except some essential uses) has been phased out due to its ozone depleting properties. The phasing-out of less active HCFC-compounds will be completed in 2030.
Human activities
Starting about 1750, industrial activity powered by fossil fuels began to significantly increase the concentration of carbon dioxide and other greenhouse gases. Emissions have grown rapidly since about 1950 with ongoing expansions in global population and economic activity following World War II. As of 2021, measured atmospheric concentrations of carbon dioxide were almost 50% higher than pre-industrial levels.
The main sources of greenhouse gases due to human activity (also called carbon sources) are:
Burning fossil fuels: Burning oil, coal and gas is estimated to have emitted 37.4 billion tonnes of CO2-eq in 2023. The largest single source is coal-fired power stations, with 20% of greenhouse gas (GHG) emissions as of 2021.
Land use change (mainly deforestation in the tropics) accounts for about a quarter of total anthropogenic GHG emissions.
Livestock enteric fermentation and manure management, paddy rice farming, land use and wetland changes, human-made lakes, pipeline losses, and covered vented landfill emissions, all of which lead to higher atmospheric methane concentrations. Many of the newer-style fully vented septic systems that enhance and target the fermentation process are also sources of atmospheric methane.
Use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
Agricultural soils emit nitrous oxide (N2O) partly due to application of fertilizers.
The largest source of anthropogenic methane emissions is agriculture, closely followed by gas venting and fugitive emissions from the fossil-fuel industry. The largest agricultural methane source is livestock. Cattle (raised for both beef and milk, as well as for inedible outputs like manure and draft power) are the animal species responsible for the most emissions, representing about 65% of the livestock sector's emissions.
Global estimates
Global greenhouse gas emissions are about 50 GtCO2-eq per year; for 2019 they were estimated at 57 GtCO2-eq, including 5 Gt due to land-use change. In 2019, approximately 34% [20 GtCO2-eq] of total net anthropogenic GHG emissions came from the energy supply sector, 24% [14 GtCO2-eq] from industry, 22% [13 GtCO2-eq] from agriculture, forestry and other land use (AFOLU), 15% [8.7 GtCO2-eq] from transport, and 6% [3.3 GtCO2-eq] from buildings.
The current CO2-equivalent emission rates, averaging 6.6 tonnes per person per year, are well over twice the estimated rate of 2.3 tonnes needed by 2030 to stay within the Paris Agreement limit of 1.5 °C (2.7 °F) over pre-industrial levels.
While cities are sometimes considered to be disproportionate contributors to emissions, per-capita emissions tend to be lower for cities than the averages in their countries.
A 2017 survey of corporations responsible for global emissions found that 100 companies were responsible for 71% of global direct and indirect emissions, and that state-owned companies were responsible for 59% of their emissions.
China is, by a significant margin, Asia's and the world's largest emitter: it emits nearly 10 billion tonnes of CO2 each year, more than one-quarter of global emissions. Other countries with fast-growing emissions are South Korea, Iran, and Australia (which, apart from the oil-rich Persian Gulf states, now has the highest per capita emission rate in the world). On the other hand, annual per capita emissions of the EU-15 and the US are gradually decreasing over time. Emissions in Russia and Ukraine have decreased fastest since 1990, due to economic restructuring in these countries.
2015 was the first year to see both total global economic growth and a reduction of carbon emissions.
High income countries compared to low income countries
Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries. Due to China's fast economic development, its annual per capita emissions are quickly approaching the levels of those in the Annex I group of the Kyoto Protocol (i.e., the developed countries excluding the US).
Africa and South America are both fairly small emitters, accounting for 3-4% of global emissions each. Both have emissions almost equal to international aviation and shipping.
Calculations and reporting
Variables
There are several ways of measuring greenhouse gas emissions. Some variables that have been reported include:
Definition of measurement boundaries: Emissions can be attributed geographically, to the area where they were emitted (the territory principle) or by the activity principle to the territory that produced the emissions. These two principles result in different totals when measuring, for example, electricity importation from one country to another, or emissions at an international airport.
Time horizon of different gases: The contribution of a given greenhouse gas is reported as a CO2 equivalent. The calculation to determine this takes into account how long that gas remains in the atmosphere. This is not always known accurately, and calculations must be regularly updated to reflect new information.
The measurement protocol itself: This may be via direct measurement or estimation. The four main methods are the emission factor-based method, mass balance method, predictive emissions monitoring systems, and continuous emissions monitoring systems. These methods differ in accuracy, cost, and usability. Public information from space-based measurements of carbon dioxide by Climate Trace is expected to reveal individual large plants before the 2021 United Nations Climate Change Conference.
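Of the four methods listed, the emission factor-based approach is the most common: multiply an activity measurement by a per-unit emission factor and sum the results. The sketch below illustrates the arithmetic only; the activity data and factors are hypothetical placeholders, not values from any published inventory.

```python
# Minimal sketch of the emission factor-based method:
# emissions = activity data x emission factor, summed over sources.
# All figures below are hypothetical placeholders.
ACTIVITY = {"coal_burned_t": 1_000.0, "diesel_l": 50_000.0}
FACTORS_KG_CO2 = {"coal_burned_t": 2_420.0, "diesel_l": 2.68}  # kg CO2 per unit

def inventory_kg_co2(activity, factors):
    return sum(amount * factors[key] for key, amount in activity.items())

print(f"{inventory_kg_co2(ACTIVITY, FACTORS_KG_CO2) / 1000:.1f} t CO2")
```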
These measures are sometimes used by countries to assert various policy/ethical positions on climate change. The use of different measures leads to a lack of comparability, which is problematic when monitoring progress towards targets. There are arguments for the adoption of a common measurement tool, or at least the development of communication between different tools.
Reporting
Emissions may be tracked over long time periods, known as historical or cumulative emissions measurements. Cumulative emissions provide some indicators of what is responsible for greenhouse gas atmospheric concentration build-up.
National accounts balance
The national accounts balance tracks emissions based on the difference between a country's exports and imports. For many richer nations, the balance is negative because more goods are imported than exported. This is mostly because it is cheaper to produce goods outside of developed countries, which has led developed countries to become increasingly dependent on services rather than goods. A positive account balance would mean that more production was occurring within the country, so more operating factories would increase domestic carbon emission levels.
Emissions may also be measured across shorter time periods. Emissions changes may, for example, be measured against the base year of 1990. 1990 was used in the United Nations Framework Convention on Climate Change (UNFCCC) as the base year for emissions, and is also used in the Kyoto Protocol (some gases are also measured from the year 1995). A country's emissions may also be reported as a proportion of global emissions for a particular year.
Another measurement is of per capita emissions. This divides a country's total annual emissions by its mid-year population. Per capita emissions may be based on historical or annual emissions.
Embedded emissions
One way of attributing greenhouse gas emissions is to measure the embedded emissions (also referred to as "embodied emissions") of goods that are being consumed. Emissions are usually measured according to production, rather than consumption. For example, in the main international treaty on climate change (the UNFCCC), countries report on emissions produced within their borders, e.g., the emissions produced from burning fossil fuels. Under a production-based accounting of emissions, embedded emissions on imported goods are attributed to the exporting, rather than the importing, country. Under a consumption-based accounting of emissions, embedded emissions on imported goods are attributed to the importing country, rather than the exporting country.
A substantial proportion of emissions is traded internationally. The net effect of trade was to export emissions from China and other emerging markets to consumers in the US, Japan, and Western Europe.
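A minimal sketch of the two accounting conventions, using invented figures: a consumption-based total re-attributes the emissions embedded in traded goods from the exporter to the importer.

```python
# Sketch of production- vs consumption-based attribution for one country,
# using made-up numbers (Mt CO2). Consumption-based = production
# - emissions embedded in exports + emissions embedded in imports.
production = 500.0           # emitted within the country's borders
embedded_in_exports = 120.0  # produced domestically for foreign consumers
embedded_in_imports = 180.0  # produced abroad for domestic consumers

consumption = production - embedded_in_exports + embedded_in_imports
print(f"production-based:  {production:.0f} Mt")
print(f"consumption-based: {consumption:.0f} Mt")  # 560 Mt: a net importer
```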
Carbon footprint
Emission intensity
Emission intensity is a ratio between greenhouse gas emissions and another metric, e.g., gross domestic product (GDP) or energy use. The terms "carbon intensity" and "emissions intensity" are also sometimes used. Emission intensities may be calculated using market exchange rates (MER) or purchasing power parity (PPP).
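These ratio metrics are simple divisions; the toy example below computes per capita emissions and emission intensity under MER and PPP valuations. All input figures are invented for illustration.

```python
# Toy comparison of two ratio metrics: per capita emissions and emission
# intensity per unit of GDP (MER vs PPP). All data are hypothetical.
emissions_mt = 450.0   # Mt CO2-eq per year (hypothetical country)
population_m = 60.0    # million people, mid-year
gdp_mer_bn = 1_200.0   # GDP in billions, market exchange rates
gdp_ppp_bn = 1_650.0   # GDP in billions, purchasing power parity

print(f"per capita: {emissions_mt / population_m:.1f} t/person")
print(f"intensity (MER): {emissions_mt * 1e3 / gdp_mer_bn:.0f} kt per $bn")
print(f"intensity (PPP): {emissions_mt * 1e3 / gdp_ppp_bn:.0f} kt per $bn")
```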
Example tools and websites
Carbon accounting (or greenhouse gas accounting) is a framework of methods to measure and track how much greenhouse gas an organization emits.
Climate TRACE
Historical trends
Cumulative and historical emissions
Cumulative anthropogenic (i.e., human-emitted) emissions of CO2 from fossil fuel use are a major cause of global warming, and give some indication of which countries have contributed most to human-induced climate change. In particular, CO2 stays in the atmosphere for at least 150 years and up to 1,000 years, whilst methane disappears within a decade or so and nitrous oxide lasts about 100 years. The graph gives some indication of which regions have contributed most to human-induced climate change. When cumulative emissions are calculated per capita, based on then-current population, the picture is even clearer. The ratio in per capita emissions between industrialized countries and developing countries was estimated at more than 10 to 1.
Non-OECD countries accounted for 42% of cumulative energy-related CO2 emissions between 1890 and 2007. Over this time period, the US accounted for 28% of emissions; the EU, 23%; Japan, 4%; other OECD countries, 5%; Russia, 11%; China, 9%; India, 3%; and the rest of the world, 18%. The European Commission has adopted a set of legislative proposals targeting a 55% reduction in emissions by 2030.
Overall, developed countries accounted for 83.8% of industrial emissions over this time period, and 67.8% of total emissions. Developing countries accounted for industrial emissions of 16.2% over this time period, and 32.2% of total emissions.
However, the countries with the highest emissions over history are not always the biggest emitters today. For example, in 2017, the UK accounted for just 1% of global emissions.
By comparison, humans have emitted more greenhouse gases than the Chicxulub meteorite impact event, which caused the extinction of the dinosaurs.
Transport, together with electricity generation, is the major source of greenhouse gas emissions in the EU. Greenhouse gas emissions from the transportation sector continue to rise, in contrast to power generation and nearly all other sectors. Since 1990, transportation emissions have increased by 30%. Road transport accounts for around 70% of these emissions, the majority caused by passenger vehicles and vans; road travel is thus the largest source of greenhouse gas emissions from transportation, followed by aviation and maritime transport. Waterborne transportation is still the least carbon-intensive mode of transportation on average, and it is an essential link in sustainable multimodal freight supply chains.
Buildings, like industry, are directly responsible for around one-fifth of greenhouse gas emissions, primarily from space heating and hot water consumption. When combined with power consumption within buildings, this figure climbs to more than one-third.
Within the EU, the agricultural sector presently accounts for roughly 10% of total greenhouse gas emissions, with methane from livestock accounting for slightly more than half of that share.
Estimates of total emissions do include biotic carbon emissions, mainly from deforestation. Including biotic emissions brings about the same controversy mentioned earlier regarding carbon sinks and land-use change. The actual calculation of net emissions is very complex, and is affected by how carbon sinks are allocated between regions and the dynamics of the climate system.
The graphic shows the logarithm of 1850–2019 fossil fuel CO2 emissions: natural log on the left axis, actual value in gigatonnes per year on the right. Although emissions increased during the 170-year period by about 3% per year overall, intervals of distinctly different growth rates (with breaks at 1913, 1945, and 1973) can be detected. The regression lines suggest that emissions can rapidly shift from one growth regime to another and then persist for long periods of time. The most recent drop in emissions growth – by almost 3 percentage points – came at about the time of the 1970s energy crisis. Percent changes per year were estimated by piecewise linear regression on the log data and are shown on the plot; the data are from the Integrated Carbon Observation System.
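The estimation method described above reduces to fitting straight lines to the log of annual emissions within each regime; the slope of each fit converts to a percent-per-year growth rate. The sketch below demonstrates the procedure on a synthetic series, since the actual Integrated Carbon Observation System data are not reproduced here.

```python
# Sketch of the growth-rate estimation: fit a straight line to the natural
# log of annual emissions within each regime, then convert the slope to a
# percent-per-year growth rate. Synthetic data stand in for the real series.
import numpy as np

years = np.arange(1850, 2020)
emissions = 0.2 * np.exp(0.03 * (years - 1850))  # fake ~3%/yr series, Gt/yr

breaks = [1850, 1913, 1945, 1973, 2019]           # regime boundaries from the text
for start, end in zip(breaks[:-1], breaks[1:]):
    mask = (years >= start) & (years <= end)
    slope, _ = np.polyfit(years[mask], np.log(emissions[mask]), 1)
    print(f"{start}-{end}: {100 * (np.exp(slope) - 1):.2f} %/yr")
```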
Changes since a particular base year
The sharp acceleration in CO2 emissions since 2000, to an increase of more than 3% per year (more than 2 ppm per year) from 1.1% per year during the 1990s, is attributable to the lapse of formerly declining trends in the carbon intensity of both developing and developed nations. China was responsible for most of the global growth in emissions during this period. Localised plummeting emissions associated with the collapse of the Soviet Union were followed by slow emissions growth in this region due to more efficient energy use, made necessary by the increasing proportion of energy that is exported. In comparison, methane has not increased appreciably, and N2O has grown by 0.25% per year.
Using different base years for measuring emissions has an effect on estimates of national contributions to global warming. This can be calculated by dividing a country's highest contribution to global warming starting from a particular base year, by that country's minimum contribution to global warming starting from a particular base year. Choosing between base years of 1750, 1900, 1950, and 1990 has a significant effect for most countries.
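The sensitivity calculation described above can be sketched as follows, with invented cumulative-emission blocks: the function computes a country's share of world emissions from each candidate base year, and the ratio of the largest to the smallest share measures how much the choice of base year matters.

```python
# Sketch of base-year sensitivity: ratio of a country's largest to smallest
# share of cumulative world emissions across candidate base years.
# The emission blocks below are invented for illustration.
def share_since(series, base_year, world_series):
    c = sum(v for y, v in series.items() if y >= base_year)
    w = sum(v for y, v in world_series.items() if y >= base_year)
    return c / w

country = {1750: 1.0, 1900: 5.0, 1950: 9.0, 1990: 6.0}      # toy cumulative blocks
world = {1750: 10.0, 1900: 60.0, 1950: 180.0, 1990: 220.0}

shares = [share_since(country, b, world) for b in (1750, 1900, 1950, 1990)]
print(f"sensitivity ratio: {max(shares) / min(shares):.2f}")
```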
Data from Global Carbon Project
The Global Carbon Project continuously releases data about emissions, budget and concentration.
Emissions by type of greenhouse gas
Carbon dioxide (CO2) is the dominant emitted greenhouse gas, while methane (CH4) emissions have almost the same short-term impact. Nitrous oxide (N2O) and fluorinated gases (F-gases) play a lesser role in comparison.
Greenhouse gas emissions are measured in CO2 equivalents determined by their global warming potential (GWP), which depends on their lifetime in the atmosphere. Estimations largely depend on the ability of oceans and land sinks to absorb these gases. Short-lived climate pollutants (SLCPs), including methane, hydrofluorocarbons (HFCs), tropospheric ozone and black carbon, persist in the atmosphere for periods ranging from days to 15 years, whereas carbon dioxide can remain in the atmosphere for millennia. Reducing SLCP emissions can cut the ongoing rate of global warming by almost half and reduce the projected Arctic warming by two-thirds.
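The conversion to CO2 equivalents is a straightforward weighted sum. The sketch below uses 100-year GWP values typical of IPCC AR5 (CH4 of roughly 28; N2O of 265, matching the figure quoted later in this article); treat the exact factors as assumptions, since different reports use different values.

```python
# Converting gas masses to CO2 equivalents with 100-year GWPs.
# The GWP factors are assumed AR5-style values, not authoritative.
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265}

def co2_equivalent(emissions_mt: dict) -> float:
    return sum(mass * GWP100[gas] for gas, mass in emissions_mt.items())

# e.g. the text's ~389 Mt of methane per year:
print(f"{co2_equivalent({'CH4': 389}) / 1000:.1f} Gt CO2-eq")  # ~10.9
```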
Greenhouse gas emissions in 2019 were estimated at 57.4 GtCO2e, while CO2 emissions alone made up 42.5 Gt including land-use change (LUC).
While mitigation measures for decarbonization are essential in the longer term, they could result in weak near-term warming because sources of carbon emissions often also co-emit air pollution. Hence, pairing measures that target carbon dioxide with measures targeting non-CO2 pollutants – short-lived climate pollutants, which have faster effects on the climate – is essential for climate goals.
Carbon dioxide ()
Fossil fuels (used for energy generation, transport, heating, and machinery in industrial plants): oil, gas and coal (89%) are the major driver of anthropogenic global warming, with annual CO2 emissions of 35.6 Gt in 2019.
Cement production (burning of fossil fuels) (4%) is estimated at 1.42 Gt CO2.
Land-use change (LUC) is the imbalance of deforestation and reforestation. Estimations are very uncertain, at 4.5 Gt CO2. Wildfires alone cause annual emissions of about 7 Gt CO2.
Non-energy use of fuels, carbon losses in coke ovens, and flaring in crude oil production.
Methane (CH4)
Methane has a high immediate impact, with a 5-year global warming potential of up to 100. Given this, the current 389 Mt of annual methane emissions has about the same short-term global warming effect as CO2 emissions, with a risk of triggering irreversible changes in climate and ecosystems. For methane, a reduction of about 30% below current emission levels would lead to a stabilization of its atmospheric concentration.
Fossil fuels (32%) (emissions due to losses during production and transport) account for most of the methane emissions including coal mining (12% of methane total), gas distribution and leakages (11%) as well as gas venting in oil production (9%).
Livestock (28%) with cattle (21%) as the dominant source, followed by buffalo (3%), sheep (2%), and goats (1.5%).
Human waste and wastewater (21%): When biomass waste in landfills and organic substances in domestic and industrial wastewater is decomposed by bacteria in anaerobic conditions, substantial amounts of methane are generated.
Rice cultivation (10%) on flooded rice fields is another agricultural source, where anaerobic decomposition of organic material produces methane.
Nitrous oxide ()
N2O has a high GWP and significant ozone-depleting potential. The global warming potential of N2O over 100 years is estimated to be 265 times that of CO2. For N2O, a reduction of more than 50% would be required for a stabilization of its atmospheric concentration.
Most emissions (56%) of nitrous oxide come from agriculture, especially meat production: cattle (droppings on pasture), fertilizers, and animal manure. Further contributions come from the combustion of fossil fuels (18%) and biofuels, as well as from the industrial production of adipic acid and nitric acid.
F-gases
Fluorinated gases include hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and nitrogen trifluoride (NF3). Sources include switchgear in the power sector, semiconductor manufacturing, and aluminum production, along with a largely unknown source of SF6. Continued phase-down of the manufacture and use of HFCs under the Kigali Amendment to the Montreal Protocol will help reduce HFC emissions and concurrently improve the energy efficiency of appliances that use HFCs, such as air conditioners, freezers and other refrigeration devices.
Hydrogen
Hydrogen leakages contribute to indirect global warming.
When hydrogen is oxidized in the atmosphere, the result is an increase in concentrations of greenhouse gases in both the troposphere and the stratosphere. Hydrogen can leak from hydrogen production facilities as well as any infrastructure in which hydrogen is transported, stored, or consumed.
Black carbon
Black carbon is formed through the incomplete combustion of fossil fuels, biofuel, and biomass. It is not a greenhouse gas but a climate forcing agent. Black carbon can absorb sunlight and reduce albedo when deposited on snow and ice. Indirect heating can be caused by the interaction with clouds. Black carbon stays in the atmosphere for only several days to weeks. Emissions may be mitigated by upgrading coke ovens, installing particulate filters on diesel-based engines, reducing routine flaring, and minimizing open burning of biomass.
Emissions by sector
Global greenhouse gas emissions can be attributed to different sectors of the economy. This provides a picture of the varying contributions of different types of economic activity to climate change, and helps in understanding the changes required to mitigate climate change.
Greenhouse gas emissions can be divided into those that arise from the combustion of fuels to produce energy, and those generated by other processes. Around two thirds of greenhouse gas emissions arise from the combustion of fuels.
Energy may be produced at the point of consumption, or by a generator for consumption by others. Thus emissions arising from energy production may be categorized according to where they are emitted, or where the resulting energy is consumed. If emissions are attributed at the point of production, then electricity generators contribute about 25% of global greenhouse gas emissions. If these emissions are attributed to the final consumer then 24% of total emissions arise from manufacturing and construction, 17% from transportation, 11% from domestic consumers, and 7% from commercial consumers. Around 4% of emissions arise from the energy consumed by the energy and fuel industry itself.
The remaining third of emissions arise from processes other than energy production. 12% of total emissions arise from agriculture, 7% from land use change and forestry, 6% from industrial processes, and 3% from waste.
Electricity generation
Coal-fired power stations are the single largest emitter, with over 20% of global greenhouse gas emissions in 2018. Although much less polluting than coal plants, natural gas-fired power plants are also major emitters, bringing electricity generation as a whole to over 25% of emissions in 2018. Notably, just 5% of the world's power plants account for almost three-quarters of carbon emissions from electricity generation, based on an inventory of more than 29,000 fossil-fuel power plants across 221 countries. In the 2022 IPCC report, it is noted that providing modern energy services universally would increase greenhouse gas emissions by only a few percent at most. This slight increase means that the additional energy demand needed to support decent living standards for all would be far lower than current average energy consumption.
In March 2024, the International Energy Agency (IEA) reported that in 2023, global energy-related CO2 emissions increased by 1.1%, rising by 410 million tonnes to a record 37.4 billion tonnes, primarily due to coal. Drought-related decreases in hydropower contributed a 170 million tonne rise in emissions, without which the electricity sector's emissions would have decreased. The implementation of clean energy technologies like solar, wind, nuclear, heat pumps, and electric vehicles since 2019 has significantly tempered emissions growth, which would have been threefold without these technologies.
Agriculture, forestry and land use
Agriculture
Deforestation
Deforestation is a major source of greenhouse gas emissions. A study shows that annual carbon emissions (carbon loss) from tropical deforestation have doubled during the last two decades and continue to increase, from 0.97±0.16 PgC per year in 2001–2005 to 1.99±0.13 PgC per year in 2015–2019.
Land-use change
Land-use change, e.g., the clearing of forests for agricultural use, can affect the concentration of greenhouse gases in the atmosphere by altering how much carbon flows out of the atmosphere into carbon sinks. Accounting for land-use change can be understood as an attempt to measure "net" emissions, i.e., gross emissions from all sources minus the removal of emissions from the atmosphere by carbon sinks.
There are substantial uncertainties in the measurement of net carbon emissions. Additionally, there is controversy over how carbon sinks should be allocated between different regions and over time. For instance, concentrating on more recent changes in carbon sinks is likely to favour those regions that have deforested earlier, e.g., Europe.
In 1997, human-caused Indonesian peat fires were estimated to have released between 13% and 40% of the average annual global carbon emissions caused by the burning of fossil fuels.
Transport of people and goods
Transportation accounts for 15% of emissions worldwide. Over a quarter of global transport emissions are from road freight, so many countries are further restricting truck emissions to help limit climate change.
Maritime transport accounts for 3.5% to 4% of all greenhouse gas emissions, primarily carbon dioxide. In 2022, the shipping industry's 3% of global greenhouse gas emissions made it "the sixth largest greenhouse gas emitter worldwide, ranking between Japan and Germany."
Aviation
Jet airliners contribute to climate change by emitting carbon dioxide (CO2), nitrogen oxides, contrails and particulates. In 2018, global commercial operations generated 2.4% of all CO2 emissions.
In 2020, approximately 3.5% of the overall human impact on climate was from the aviation sector. The sector's impact on climate doubled over the preceding 20 years, but its contribution relative to other sectors did not change, because other sectors grew as well.
Some representative figures for the average direct emissions of airliners (not accounting for high-altitude radiative effects), expressed as CO2 and CO2 equivalent per passenger-kilometre (a worked example follows the list):
Domestic, short distance, less than 463 km (288 mi): 257 g/km CO2 or 259 g/km (14.7 oz/mile) CO2e
Long-distance flights: 113 g/km CO2 or 114 g/km (6.5 oz/mile) CO2e
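Applying the per-kilometre figures above is simple multiplication. The sketch below estimates direct per-passenger CO2 for two illustrative flight lengths; the distances are arbitrary examples, not figures from the source.

```python
# Worked example using the per-kilometre figures above: approximate direct
# CO2 for one passenger (high-altitude radiative effects excluded).
def flight_co2_kg(distance_km: float, grams_per_km: float) -> float:
    return distance_km * grams_per_km / 1000

print(f"{flight_co2_kg(400, 257):.0f} kg")   # short domestic hop: ~103 kg
print(f"{flight_co2_kg(6000, 113):.0f} kg")  # long-haul leg: ~678 kg
```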
Buildings and construction
In 2018, manufacturing construction materials and maintaining buildings accounted for 39% of carbon dioxide emissions from energy and process-related emissions. Manufacture of glass, cement, and steel accounted for 11% of energy and process-related emissions. Because building construction is a significant investment, more than two-thirds of buildings in existence will still exist in 2050. Retrofitting existing buildings to become more efficient will be necessary to meet the targets of the Paris Agreement; it will be insufficient to only apply low-emission standards to new construction. Buildings that produce as much energy as they consume are called zero-energy buildings, while buildings that produce more than they consume are energy-plus. Low-energy buildings are designed to be highly efficient with low total energy consumption and carbon emissions—a popular type is the passive house.
The construction industry has seen marked advances in building performance and energy efficiency over recent decades. Green building practices that avoid emissions or capture carbon already present in the environment, for example the use of hempcrete, cellulose fiber insulation, and landscaping, allow the construction industry's footprint to be reduced.
In 2019, the building sector was responsible for 12 GtCO2-eq of emissions. More than 95% of these emissions were CO2; the remaining 5% were CH4, N2O, and halocarbons.
The largest contributor to building sector emissions (49% of total) is the production of electricity for use in buildings.
Of global building sector GHG emissions, 28% are produced during the manufacture of building materials such as steel, cement (a key component of concrete), and glass. Conventional production processes for steel and cement inherently emit large amounts of CO2; for example, steel production in 2018 was responsible for 7 to 9% of global CO2 emissions.
The remaining 23% of global building sector GHG emissions are produced directly on site during building operations.
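Taken together, the three shares quoted above cover the sector's 12 GtCO2-eq total; a minimal arithmetic sketch, with illustrative labels:

```python
TOTAL_GT_CO2EQ = 12.0  # 2019 building-sector emissions, GtCO2-eq
shares = {
    "electricity used in buildings": 0.49,
    "manufacture of building materials": 0.28,
    "direct on-site building operations": 0.23,
}
for source, share in shares.items():
    print(f"{source}: {share * TOTAL_GT_CO2EQ:.2f} GtCO2-eq")
assert abs(sum(shares.values()) - 1.0) < 1e-9  # the three shares sum to 100%
```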
Embodied carbon emissions in construction sector
Embodied carbon emissions, or upfront carbon emissions (UCE), are the result of creating and maintaining the materials that form a building. As of 2018, "Embodied carbon is responsible for 11% of global greenhouse gas emissions and 28% of global building sector emissions ... Embodied carbon will be responsible for almost half of total new construction emissions between now and 2050."
GHG emissions which are produced during the mining, processing, manufacturing, transportation and installation of building materials are referred to as the embodied carbon of a material. The embodied carbon of a construction project can be reduced by using low-carbon materials for building structures and finishes, reducing demolition, and reusing buildings and construction materials whenever possible.
Industrial processes
Secunda CTL is the world's largest single emitter of CO2, at 56.5 million tonnes a year.
Mining
Flaring and venting of natural gas in oil wells is a significant source of greenhouse gas emissions. Its contribution to greenhouse gases has declined by three-quarters in absolute terms since a peak in the 1970s of approximately 110 million metric tons/year, and in 2004 accounted for about half of one percent of all anthropogenic carbon dioxide emissions.
The World Bank estimates that 134 billion cubic meters of natural gas are flared or vented annually (2010 datum), an amount equivalent to the combined annual gas consumption of Germany and France or enough to supply the entire world with gas for 16 days. This flaring is highly concentrated: 10 countries account for 70% of emissions, and twenty for 85%.
Steel and aluminum
Steel and aluminum are key economic sectors where CO2 is produced. According to a 2013 study, "in 2004, the steel industry alone emitted about 590M tons of CO2, which accounts for 5.2% of the global anthropogenic GHG emissions. CO2 emitted from steel production primarily comes from energy consumption of fossil fuel as well as the use of limestone to purify iron oxides."
Plastics
Plastics are produced mainly from fossil fuels. An estimated 3% to 4% of global GHG emissions are associated with plastics' life cycles. The EPA estimates that as many as five mass units of carbon dioxide are emitted for each mass unit of polyethylene terephthalate (PET) produced, the type of plastic most commonly used for beverage bottles; transporting plastics also produces greenhouse gases. Plastic waste emits carbon dioxide when it degrades. In 2018, research claimed that some of the most common plastics in the environment release the greenhouse gases methane and ethylene when exposed to sunlight, in amounts that can affect the Earth's climate.
Because plastic is lighter than glass or metal, plastic packaging may reduce energy consumption. For example, packaging beverages in PET plastic rather than glass or metal is estimated to save 52% in transportation energy, provided the glass or metal package is single-use.
In 2019, a new report, "Plastic and Climate", was published. According to the report, the production and incineration of plastics would contribute the equivalent of 850 million tonnes of carbon dioxide (CO2) to the atmosphere in 2019. On the current trend, annual life-cycle greenhouse gas emissions from plastics will grow to 1.34 billion tonnes by 2030. By 2050, the life-cycle emissions of plastics could reach 56 billion tonnes, as much as 14 percent of the Earth's remaining carbon budget. The report says that only solutions which involve a reduction in consumption can solve the problem, while others, such as biodegradable plastic, ocean cleanup, and the use of renewable energy in the plastics industry, can do little and in some cases may even worsen it.
Pulp and paper
The global print and paper industry accounts for about 1% of global carbon dioxide emissions. Greenhouse gas emissions from the pulp and paper industry are generated from the combustion of fossil fuels required for raw material production and transportation, wastewater treatment facilities, purchased power, paper transportation, printed product transportation, disposal and recycling.
Various services
Digital services
In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. The digital sector produces between 2% and 4% of global GHG emissions, a large part of which comes from chipmaking. However, the sector also reduces emissions in other sectors that have a larger global share, such as the transport of people, and possibly buildings and industry.
Mining for proof-of-work cryptocurrencies requires enormous amounts of electricity and consequently comes with a large carbon footprint. Proof-of-work blockchains such as Bitcoin, Ethereum, Litecoin, and Monero were estimated to have added between 3 million and 15 million tonnes of carbon dioxide (CO2) to the atmosphere in the period from 1 January 2016 to 30 June 2017. By the end of 2021, Bitcoin was estimated to produce 65.4 million tonnes of CO2 annually, as much as Greece, and to consume between 91 and 177 terawatt-hours per year. Bitcoin is the least energy-efficient cryptocurrency, using 707.6 kilowatt-hours of electricity per transaction.
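A back-of-envelope sketch of what these 2021 figures imply for the average emissions intensity of the electricity used for mining; this is derived arithmetic, not a sourced result:

```python
EMISSIONS_MT_CO2 = 65.4       # estimated Mt CO2 per year (quoted above)
ENERGY_TWH_RANGE = (91, 177)  # estimated TWh per year (quoted above)

for twh in ENERGY_TWH_RANGE:
    # 1 Mt per TWh equals 1 kg per kWh, so the ratio is already in kg CO2/kWh.
    print(f"{twh} TWh/yr -> {EMISSIONS_MT_CO2 / twh:.2f} kg CO2 per kWh")
```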
A study in 2015 investigated the global electricity usage that can be ascribed to communication technology (CT) between 2010 and 2030. Electricity usage from CT was divided into four principal categories: (i) consumer devices, including personal computers, mobile phones, TVs and home entertainment systems; (ii) network infrastructure; (iii) data center computation and storage; and (iv) production of the above categories. The study estimated that, in the worst-case scenario, CT electricity usage could contribute up to 23% of globally released greenhouse gas emissions in 2030.
Health care
The healthcare sector produces 4.4–4.6% of global greenhouse gas emissions.
Based on the 2013 life cycle emissions in the health care sector, it is estimated that the GHG emissions associated with US health care activities may cause an additional 123,000 to 381,000 DALYs annually.
Water supply and sanitation
Tourism
According to UNEP, global tourism is a significant contributor to the increasing concentrations of greenhouse gases in the atmosphere.
Emissions by other characteristics
The responsibility for anthropogenic climate change differs substantially among individuals as well as between groups and cohorts.
By type of energy source
By socio-economic class and age
Fueled by consumption-heavy lifestyles, the wealthiest 5% of the global population has been responsible for 37% of the absolute increase in greenhouse gas emissions worldwide, and there is a strong relationship between income and per capita carbon dioxide emissions. Almost half of the increase in absolute global emissions has been caused by the richest 10% of the population. The 2022 IPCC report states that the lifestyle consumption of the poor and middle class in emerging economies produces approximately 5–50 times less emissions than that of the high class in developed high-income countries. Variations in regional and national per capita emissions partly reflect different development stages, but they also vary widely at similar income levels. The 10% of households with the highest per capita emissions contribute a disproportionately large share of global household greenhouse gas emissions.
Studies find that the most affluent citizens of the world are responsible for most environmental impacts, and robust action by them is necessary for prospects of moving towards safer environmental conditions.
According to a 2020 report by Oxfam and the Stockholm Environment Institute, the richest 1% of the global population caused twice as much carbon emissions as the poorest 50% over the 25 years from 1990 to 2015: 15% of cumulative emissions compared with 7%. The bottom half of the population is directly responsible for less than 20% of energy footprints and consumes less than the top 5% in terms of trade-corrected energy. The largest disproportionality was identified in the domain of transport, where, for example, the top 10% consume 56% of vehicle fuel and conduct 70% of vehicle purchases. Moreover, wealthy individuals are often shareholders with greater influence and, especially in the case of billionaires, may direct lobbying efforts and financial decisions, or control companies.
Based on a study in 32 developed countries, researchers found that "seniors in the United States and Australia have the highest per capita footprint, twice the Western average. The trend is mainly due to changes in expenditure patterns of seniors".
Methods for reducing greenhouse gas emissions
Governments have taken action to reduce greenhouse gas emissions to mitigate climate change. Countries and regions listed in Annex I of the United Nations Framework Convention on Climate Change (UNFCCC) (i.e., the OECD and former planned economies of the Soviet Union) are required to submit periodic assessments to the UNFCCC of actions they are taking to address climate change. Policies implemented by governments include, for example, national and regional targets to reduce emissions, the promotion of energy efficiency, and support for an energy transition.
Projections for future emissions
In October 2023, the US Energy Information Administration (EIA) released a series of projections out to 2050 based on currently ascertainable policy interventions. Unlike many integrated systems models in this field, emissions are allowed to float rather than being pinned to net zero in 2050. A sensitivity analysis varied key parameters, primarily future GDP growth (2.6% per annum as reference, with 1.8% and 3.4% variants) and secondarily technological learning rates, future crude oil prices, and similar exogenous inputs. The model results are far from encouraging: in no case did aggregate energy-related carbon emissions ever dip below 2022 levels (see figure 3). The IEO2023 exploration provides a benchmark and suggests that far stronger action is needed.
By country
List of countries
In 2019, China, the United States, India, the EU27+UK, Russia, and Japan, the world's largest emitters, together accounted for 51% of the population, 62.5% of global gross domestic product, and 62% of total global fossil fuel consumption, and emitted 67% of total global fossil CO2. Emissions from these five countries and the EU27+UK showed different changes in 2019 compared to 2018: the largest relative increase was in China (+3.4%), followed by India (+1.6%); by contrast, the EU27+UK (−3.8%), the United States (−2.6%), Japan (−2.1%) and Russia (−0.8%) reduced their fossil CO2 emissions.
United States
China
India
Society and culture
Impacts of the COVID-19 pandemic
In 2020, carbon dioxide emissions fell by 6.4% or 2.3 billion tonnes globally. In April 2020, emissions fell by up to 30%. In China, lockdowns and other measures resulted in a 26% decrease in coal consumption, and a 50% reduction in nitrogen oxide emissions. Greenhouse gas emissions rebounded later in the pandemic as many countries began lifting restrictions, with the direct impact of pandemic policies having a negligible long-term impact on climate change.
| Physical sciences | Climate change | Earth science |
9536676 | https://en.wikipedia.org/wiki/Kinematic%20pair | Kinematic pair | In classical mechanics, a kinematic pair is a connection between two physical objects that imposes constraints on their relative movement (kinematics). German engineer Franz Reuleaux introduced the kinematic pair as a new approach to the study of machines that provided an advance over the notion of elements consisting of simple machines.
Description
Kinematics is the branch of classical mechanics which describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion. Kinematics as a field of study is often referred to as the "geometry of motion". For further detail, see Kinematics.
Hartenberg & Denavit present the definition of a kinematic pair:
In the matter of connections between rigid bodies, Reuleaux recognized two kinds; he called them higher and lower pairs (of elements). With higher pairs, the two elements are in contact at a point or along a line, as in a ball bearing or disk cam and follower; the relative motions of coincident points are dissimilar. Lower pairs are those for which area contact may be visualized, as in pin connections, crossheads, ball-and-socket joints and some others; the relative motion of coincident points of the elements, and hence of their links, are similar, and an exchange of elements from one link to the other does not alter the relative motion of the parts as it would with higher pairs.
In kinematics, the two connected physical objects forming a kinematic pair are called 'rigid bodies'. In studies of mechanisms, manipulators or robots, the two objects are typically called 'links'.
Lower pair
A lower pair is an ideal joint that constrains contact between a surface in the moving body and a corresponding surface in the fixed body. A lower pair is one in which there is surface or area contact between two members, e.g. a nut and screw, or the universal joint used to connect two propeller shafts.
Cases of lower pairs:
A revolute R joint, or hinged joint, requires a line in the moving body to remain co-linear with a line in the fixed body, and a plane perpendicular to this line in the moving body maintain contact with a similar perpendicular plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom.
A prismatic P joint, or slider, requires that a line in the moving body remain co-linear with a line in the fixed body, and a plane parallel to this line in the moving body maintain contact with a similar parallel plane in the fixed body. This imposes five constraints on the relative movement of the links, which therefore has one degree of freedom.
A screw joint or helical H joint requires cut threads in two links, so that there is a turning as well as sliding motion between them. This joint has one degree of freedom.
A cylindrical C joint requires that a line in the moving body remain co-linear with a line in the fixed body. It is a combination of a revolute joint and a sliding joint. This joint has two degrees of freedom.
A universal U joint consists of two intersecting, mutually orthogonal revolute joints connecting rigid links whose axes are inclined to each other. This joint has two degrees of freedom.
A spherical S joint or ball and socket joint requires that a point in the moving body remain stationary in the fixed body. This joint has three degrees of freedom, corresponding to rotations around orthogonal axes.
A planar joint requires that a plane in the moving body maintain contact with a plane in the fixed body. This joint has three degrees of freedom. The moving plane can slide in two dimensions along the fixed plane, and it can rotate on an axis normal to the fixed plane.
A parallelogram Pa joint is composed of four links connected together by four revolute joints at the corners of a parallelogram. The degrees of freedom of these lower pairs are summarized in the sketch below.
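A minimal sketch tabulating these freedoms, using the single-letter joint abbreviations introduced later in this article; the letter E for the planar joint is an assumption, since the text assigns it no abbreviation:

```python
# Degrees of freedom (DOF) of the lower pairs described above. A free spatial
# rigid body has six freedoms, so each pair imposes 6 - DOF constraints.
LOWER_PAIR_DOF = {
    "R": 1,  # revolute (hinge)
    "P": 1,  # prismatic (slider)
    "H": 1,  # screw (helical)
    "C": 2,  # cylindrical
    "U": 2,  # universal (two intersecting revolutes)
    "S": 3,  # spherical (ball and socket)
    "E": 3,  # planar (assumed letter)
}

for joint, dof in LOWER_PAIR_DOF.items():
    print(f"{joint}: {dof} DOF, imposes {6 - dof} constraints")
```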
Higher pairs
Generally, a higher pair is a constraint that requires a curve or surface in the moving body to maintain contact with a curve or surface in the fixed body. For example, the contact between a cam and its follower is a higher pair called a cam joint. Similarly, the contact between the involute curves that form the meshing teeth of two gears are cam joints, as is a wheel rolling on a surface. It has a point or line contact.
Wrapping pair / higher pair
A wrapping/higher pair is a constraint that comprises belts, chains, and such other devices. A belt-driven pulley is an example of this pair.
This type is very similar to the higher pair (which has point or line contact), but with contact at multiple points.
Joint notation
Context
Mechanisms, manipulators or robots are typically composed of links connected together by joints. Serial manipulators, like the SCARA robot, connect a moving platform to a base through a single chain of links and joints. In robotics the moving platform is called the 'end effector'. Multiple serial chains connect the moving platform to the base of parallel manipulators, like the Gough-Stewart mechanism. The individual serial chains of parallel manipulators are called 'limbs' or 'legs'. Topology refers to the arrangement of links and joints forming a manipulator or robot. Joint notation is a convenient way of defining the joint topology of mechanisms, manipulators or robots.
Abbreviations
Joints are abbreviated as follows: prismatic P, revolute R, universal U, cylindrical C, spherical S, parallelogram Pa. Actuated or active joints are identified by underscores, i.e., P, R, U, C, S, Pa.
Notation
Joint notation specifies the type and order of the joints forming a mechanism. It identifies the sequence of joints, starting from the abbreviation of the first joint at the base to the last abbreviation at the moving platform. For example, the joint notation for the serial SCARA robot is RRP, indicating that it is composed of two active revolute joints RR followed by an active prismatic P joint. Repeated joints may be summarized by their number, so that the joint notation for the SCARA robot can also be written 2RP. Joint notation for the parallel Gough-Stewart mechanism is 6-UPS or 6(UPS), indicating that it is composed of six identical serial limbs, each one composed of a universal U, active prismatic P and spherical S joint. Parentheses () enclose the joints of individual serial limbs.
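A hedged sketch of reading such notation strings programmatically; the parsing rules are inferred from the examples above rather than from a formal specification, and the two-letter Pa pair, actuation marks, and parenthesized forms like 6(UPS) are not handled:

```python
# Expand an optional leading repeat count (e.g. "6-UPS") and sum the joint
# freedoms along one serial chain, using the lower-pair DOF values given earlier.
JOINT_DOF = {"R": 1, "P": 1, "H": 1, "C": 2, "U": 2, "S": 3}

def chain_freedoms(notation: str):
    """Return (number of limbs, DOF summed along one limb)."""
    count, _, chain = notation.partition("-")
    limbs = int(count) if chain else 1
    chain = chain if chain else notation
    return limbs, sum(JOINT_DOF[j] for j in chain)

print(chain_freedoms("RRP"))    # (1, 3): the serial SCARA chain
print(chain_freedoms("6-UPS"))  # (6, 6): six limbs, each with six freedoms
```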
| Technology | Mechanisms | null |
13549049 | https://en.wikipedia.org/wiki/Equinox%20%28celestial%20coordinates%29 | Equinox (celestial coordinates) | In astronomy, an equinox is either of two places on the celestial sphere at which the ecliptic intersects the celestial equator. Although there are two such intersections, the equinox associated with the Sun's ascending node is used as the conventional origin of celestial coordinate systems and referred to simply as "the equinox". In contrast to the common usage of spring/vernal and autumnal equinoxes, the celestial coordinate system equinox is a direction in space rather than a moment in time.
In a cycle of about 25,800 years, the equinox moves westward with respect to the celestial sphere because of perturbing forces; therefore, in order to define a coordinate system, it is necessary to specify the date for which the equinox is chosen. This date should not be confused with the epoch. Astronomical objects show real movements such as orbital and proper motions, and the epoch defines the date for which the position of an object applies. Therefore, a complete specification of the coordinates for an astronomical object requires both the date of the equinox and of the epoch.
The currently used standard equinox and epoch is J2000.0, which is January 1, 2000 at 12:00 TT. The prefix "J" indicates that it is a Julian epoch. The previous standard equinox and epoch was B1950.0, with the prefix "B" indicating it was a Besselian epoch. Before 1984 Besselian equinoxes and epochs were used. Since that time Julian equinoxes and epochs have been used.
Motion of the equinox
The equinox moves, in the sense that as time progresses it is in a different location with respect to the distant stars. Consequently, star catalogs over the years, even over the course of a few decades, will list different ephemerides. This is due to precession and nutation, both of which can be modeled, as well as other minor perturbing forces which can only be determined by observation and are thus tabulated in astronomical almanacs.
Precession
Precession of the equinox was first noted by Hipparchus in 129 BC, when he compared the location of Spica with respect to the equinox against the location observed by Timocharis in 273 BC. It is a long-term motion with a period of 25,800 years.
Nutation
Nutation is a small oscillation of the Earth's axis of rotation, and hence of the celestial equator. It was first observed by James Bradley as a variation in the declination of stars. Bradley published this discovery in 1748. Because he did not have an accurate enough clock, Bradley was unaware of the effect of nutation on the motion of the equinox along the celestial equator, although that is today the more significant aspect of nutation. The period of oscillation of the nutation is 18.6 years.
Equinoxes and epochs
Besselian equinoxes and epochs
A Besselian epoch, named after German mathematician and astronomer Friedrich Bessel (1784–1846), is an epoch that is based on a Besselian year of 365.242198781 days, which is a tropical year measured at the point where the Sun's longitude is exactly 280°. Since 1984, Besselian equinoxes and epochs have been superseded by Julian equinoxes and epochs. The current standard equinox and epoch is J2000.0, which is a Julian epoch.
Besselian epochs are calculated according to:
B = 1900.0 + (Julian date − 2415020.31352) / 365.242198781
The previous standard equinox and epoch were B1950.0, a Besselian epoch.
Since the right ascension and declination of stars are constantly changing due to precession, astronomers always specify these with reference to a particular equinox. Historically used Besselian equinoxes include B1875.0, B1900.0, B1925.0 and B1950.0. The official constellation boundaries were defined in 1930 using B1875.0.
Julian equinoxes and epochs
A Julian epoch is an epoch that is based on Julian years of exactly 365.25 days. Since 1984, Julian epochs are used in preference to the earlier Besselian epochs.
Julian epochs are calculated according to:
J = 2000.0 + (Julian date − 2451545.0)/365.25
The standard equinox and epoch currently in use are J2000.0, which corresponds to January 1, 2000, 12:00 Terrestrial Time.
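Both conversion formulas can be transcribed directly; a minimal sketch, assuming the input Julian date is on the Terrestrial Time scale:

```python
# Direct transcription of the Besselian and Julian epoch formulas above.
def besselian_epoch(jd: float) -> float:
    return 1900.0 + (jd - 2415020.31352) / 365.242198781

def julian_epoch(jd: float) -> float:
    return 2000.0 + (jd - 2451545.0) / 365.25

JD_J2000 = 2451545.0  # January 1, 2000, 12:00 TT
print(julian_epoch(JD_J2000))     # 2000.0 by construction
print(besselian_epoch(JD_J2000))  # ~2000.0013, the same instant in Besselian years
```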
J2000.0
The J2000.0 epoch is precisely Julian date 2451545.0 TT (Terrestrial Time), or January 1, 2000, noon TT. This is equivalent to January 1, 2000, 11:59:27.816 TAI or January 1, 2000, 11:58:55.816 UTC.
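A small sketch verifying these offsets: TT − TAI is 32.184 s by definition, and TAI − UTC was 32 s (accumulated leap seconds) in January 2000:

```python
from datetime import datetime, timedelta

tt_noon = datetime(2000, 1, 1, 12, 0, 0)   # J2000.0 on the TT scale
tai = tt_noon - timedelta(seconds=32.184)  # TT - TAI = 32.184 s
utc = tai - timedelta(seconds=32)          # TAI - UTC = 32 s in January 2000
print(tai.time())  # 11:59:27.816000
print(utc.time())  # 11:58:55.816000
```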
Since the right ascension and declination of stars are constantly changing due to precession, (and, for relatively nearby stars due to proper motion), astronomers always specify these with reference to a particular epoch. The earlier epoch that was in standard use was the B1950.0 epoch.
When the mean equator and equinox of J2000 are used to define a celestial reference frame, that frame may also be denoted J2000 coordinates or simply J2000. This is different from the International Celestial Reference System (ICRS): the mean equator and equinox at J2000.0 are distinct from and of lower precision than ICRS, but agree with ICRS to the limited precision of the former. Use of the "mean" locations means that nutation is averaged out or omitted. This means that the Earth's rotational North pole does not point quite at the J2000 celestial pole at the epoch J2000.0; the true pole of epoch nutates away from the mean one. The same differences pertain to the equinox.
The "J" in the prefix indicates that it is a Julian equinox or epoch rather than a Besselian equinox or epoch.
Equinox of Date
There is a special meaning of the expression "equinox (and ecliptic/equator) of date". This reference frame is defined by the positions of the ecliptic and the celestial equator as of the date/epoch on which the position of something else (typically a solar system object) is being specified.
Other equinoxes and their corresponding epochs
Other equinoxes and epochs that have been used include:
The Bonner Durchmusterung started by Friedrich Wilhelm August Argelander uses B1855.0
The Henry Draper Catalog uses B1900.0
Constellation boundaries were defined in 1930 along lines of right ascension and declination for the B1875.0 epoch.
Occasionally, non-standard equinoxes have been used, such as B1925.0 and B1970.0
The Hipparcos Catalog uses the International Celestial Reference System (ICRS) coordinate system (which is essentially equinox J2000.0) but uses an epoch of J1991.25. For objects with a significant proper motion, assuming that the epoch is J2000.0 leads to a large position error. Assuming that the equinox is J1991.25 leads to a large error for nearly all objects.
Epochs and equinoxes for orbital elements are usually given in Terrestrial Time, in several different formats, including:
Gregorian date with 24-hour time: 2000 January 1, 12:00 TT
Gregorian date with fractional day: 2000 January 1.5 TT
Julian day with fractional day: JDT 2451545.0
NASA/NORAD's Two-line elements format with fractional day: 00001.50000000
Sidereal time and the equation of the equinoxes
Sidereal time is the hour angle of the equinox. However, there are two types: if the mean equinox is used (that which only includes precession), it is called mean sidereal time; if the true equinox is used (the actual location of the equinox at a given instant), it is called apparent sidereal time. The difference between these two is known as the equation of the equinoxes, and is tabulated in the Astronomical Almanac.
A related concept is known as the equation of the origins, which is the arc length between the Celestial Intermediate Origin and the equinox. Alternatively, the equation of the origins is the difference between the Earth Rotation Angle and the apparent sidereal time at Greenwich.
Diminishing role of the equinox in astronomy
In modern astronomy the ecliptic and the equinox are diminishing in importance as required, or even convenient, reference concepts. (The equinox remains important in ordinary civil use, in defining the seasons, however.) This is for several reasons. One important reason is that it is difficult to specify precisely what the ecliptic is, and there is even some confusion in the literature about it: should it be centered on the Earth's center of mass, or on the Earth–Moon barycenter?
Also with the introduction of the International Celestial Reference Frame, all objects near and far are put fundamentally in relationship to a large frame based on very distant fixed radio sources, and the choice of the origin is arbitrary and defined for the convenience of the problem at hand. There are no significant problems in astronomy where the ecliptic and the equinox need to be defined.
| Physical sciences | Celestial sphere: General | Astronomy |
2244399 | https://en.wikipedia.org/wiki/Newton-second | Newton-second | The newton-second (also newton second; symbol: N⋅s or N s) is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second (kg⋅m/s). One newton-second corresponds to a one-newton force applied for one second.
It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval.
Definition
Momentum is given by the formula p = mv, where:
p is the momentum in newton-seconds (N⋅s) or "kilogram-metres per second" (kg⋅m/s)
m is the mass in kilograms (kg)
v is the velocity in metres per second (m/s)
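A numeric illustration of these definitions, with arbitrary example values:

```python
# A one-newton force applied for one second delivers one newton-second of
# impulse, which equals the change in momentum in kg.m/s; dividing by the
# mass gives the resulting speed of an initially resting body.
force_n = 1.0   # newtons
time_s = 1.0    # seconds
impulse_ns = force_n * time_s   # impulse, N.s

mass_kg = 0.5
delta_v = impulse_ns / mass_kg  # speed gained from rest, m/s
print(impulse_ns, delta_v)      # 1.0 N.s -> 2.0 m/s for a 0.5 kg mass
```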
Examples
This table gives the magnitudes of some momenta for various masses and speeds.
| Physical sciences | Momentum/impulse | Basics and measurement |
2245629 | https://en.wikipedia.org/wiki/Ichthyosporea | Ichthyosporea | The Ichthyosporea (or DRIP clade, or Mesomycetozoea) are a small group of Opisthokonta in Eukaryota (formerly protists), mostly parasites of fish and other animals.
Significance
They are not particularly distinctive morphologically, appearing in host tissues as enlarged spheres or ovals containing spores, and most were originally classified in various groups as fungi, protozoa, or colorless algae. However, they form a coherent group on molecular trees, closely related to both animals and fungi and so of interest to biologists studying their origins. In a 2008 study they emerge robustly as the sibling-group of the clade Filozoa, which includes the animals.
Huldtgren et al., following x-ray tomography of microfossils of the Ediacaran Doushantuo Formation, have interpreted them as mesomycetozoan spore capsules.
Terminology
The name DRIP is an acronym for the first protozoa identified as members of the group. Cavalier-Smith later treated them as the class Ichthyosporea, since they were all parasites of fish.
order Dermocystida
"D": Dermocystidium. One species, Rhinosporidium seeberi, infects birds and mammals, including humans.
"R": the "rosette agent", now known as Sphaerothecum destruens
order Ichthyophonida
"I": Ichthyophonus
"P": Psorospermium
Since other new members have been added (e.g. the former fungal orders Eccrinales and Amoebidiales), Mendoza et al. suggested changing the name to Mesomycetozoea, which refers to their evolutionary position: on the eukaryote tree, within the opisthokont clade, Mesomycetozoea lies in the middle ("Meso-") between the fungi ("-myceto-") and the animals ("-zoea"). The name Mesomycetozoa (without a third e) is also used to refer to this group, but Mendoza et al. use it as an alternate name for basal opisthokonts.
Phylogeny
Taxonomy
Class Ichthyosporea Cavalier-Smith 1998
Order Dermocystida Cavalier-Smith 1998
Family Rhinosporidiaceae Mendoza et al. 2001
Order Ichthyophonida Cavalier-Smith 1998
Suborder Sphaeroformina Cavalier-Smith 2012
Family Creolimacidae Cavalier-Smith 2012
Family Psorospermidae Cavalier-Smith 2012
Family Piridae Cavalier-Smith 2012
Suborder Trichomycina Cavalier-Smith 2012
Genus †Paleocadus Poinar 2016
Family Amoebidiidae Lichtenstein 1917 ex Kirk et al. 2001
Family Ichthyophonidae Cavalier-Smith 2012
Family Paramoebidiidae Reynolds et al. 2017
Family Parataeniellaceae Manier & Lichtward 1968
Family Eccrinaceae Leger & Duboscq 1929 [Palavasciaceae Manier & Lichtward 1968]
| Biology and health sciences | Eukaryotes | Plants |
2246132 | https://en.wikipedia.org/wiki/Onychomycosis | Onychomycosis | Onychomycosis, also known as tinea unguium, is a fungal infection of the nail. Symptoms may include white or yellow nail discoloration, thickening of the nail, and separation of the nail from the nail bed. Fingernails may be affected, but it is more common for toenails. Complications may include cellulitis of the lower leg.
A number of different types of fungus can cause onychomycosis, including dermatophytes and Fusarium. Risk factors include athlete's foot, other nail diseases, exposure to someone with the condition, peripheral vascular disease, and poor immune function. The diagnosis is generally suspected based on the appearance and confirmed by laboratory testing.
Onychomycosis does not necessarily require treatment. The antifungal medication terbinafine taken by mouth appears to be the most effective but is associated with liver problems. Trimming the affected nails when on treatment also appears useful.
There is a ciclopirox-containing nail polish, but there is no evidence that it works. The condition returns in up to half of cases following treatment. Not using old shoes after treatment may decrease the risk of recurrence.
Onychomycosis occurs in about 10 percent of the adult population, with older people more frequently affected. Males are affected more often than females. Onychomycosis represents about half of nail disease. It was first determined to be the result of a fungal infection in 1853 by Georg Meissner.
Signs and symptoms
The most common symptom of a fungal nail infection is the nail becoming thickened and discoloured: white, black, yellow or green. As the infection progresses the nail can become brittle, with pieces breaking off or coming away from the toe or finger completely. If left untreated, the skin underneath and around the nail can become inflamed and painful. There may also be white or yellow patches on the nailbed or scaly skin next to the nail, and a foul smell. There is usually no pain or other bodily symptoms, unless the disease is severe. People with onychomycosis may experience significant psychosocial problems due to the appearance of the nail, particularly when fingers – which are always visible – rather than toenails are affected.
Dermatophytids are fungus-free skin lesions that sometimes form as a result of a fungus infection in another part of the body. This could take the form of a rash or itch in an area of the body that is not infected with the fungus. Dermatophytids can be thought of as an allergic reaction to the fungus.
Causes
The causative pathogens of onychomycosis are all in the fungus kingdom and include dermatophytes, Candida (yeasts), and nondermatophytic molds. Dermatophytes are the fungi most commonly responsible for onychomycosis in temperate western countries, while Candida and nondermatophytic molds are more frequently involved in the tropics and subtropics with a hot and humid climate.
Dermatophytes
When onychomycosis is due to a dermatophyte infection, it is termed tinea unguium. Trichophyton rubrum is the most common dermatophyte involved in onychomycosis. Other dermatophytes that may be involved are T. interdigitale, Epidermophyton floccosum, Tricholosporum violaceum, Microsporum gypseum, T. tonsurans, and T. soudanense. A common outdated name that may still be reported by medical laboratories is Trichophyton mentagrophytes for T. interdigitale. The name T. mentagrophytes is now restricted to the agent of favus skin infection of the mouse; though this fungus may be transmitted from mice and their danders to humans, it generally infects skin and not nails.
Other
Other causative pathogens include Candida and nondermatophytic molds, in particular members of the mold genus Scytalidium (name recently changed to Neoscytalidium), Scopulariopsis, and Aspergillus.
Candida species mainly cause fingernail onychomycosis in people whose hands are often submerged in water. Scytalidium mainly affects people in the tropics, though it persists if they later move to areas of temperate climate.
Other molds more commonly affect people older than 60 years, and their presence in the nail reflects a slight weakening in the nail's ability to defend itself against fungal invasion.
Nail injury and nail psoriasis can cause damaged toenails to become thick, discolored, and brittle.
Risk factors
Advancing age (usually over the age of 60) is the most common risk factor for onychomycosis due to diminished blood circulation, longer exposure to fungi, nails which grow more slowly and thicken, and reduced immune function increasing susceptibility to infection. Nail fungus tends to affect men more often than women and is associated with a family history of this infection.
Other risk factors include perspiring heavily, being in a humid or moist environment, psoriasis, wearing socks and shoes that hinder ventilation and do not absorb perspiration, going barefoot in damp public places such as swimming pools, gyms and shower rooms, having athlete's foot (tinea pedis), minor skin or nail injury, damaged nail, or other infection, and having diabetes, circulation problems, which may also lead to lower peripheral temperatures on hands and feet, or a weakened immune system.
Diagnosis
The diagnosis is generally suspected based on the appearance and confirmed by laboratory testing. The four main tests are a potassium hydroxide smear, culture, histology examination, and polymerase chain reaction. The sample examined is generally nail scrapings or clippings, which should be taken from as far up the nail as possible.
Nail plate biopsy with periodic acid-Schiff stain appears more useful than culture or direct KOH examination. To reliably identify nondermatophyte molds, several samples may be necessary.
Classification
There are five classic types of onychomycosis:
Distal subungual onychomycosis is the most common form of tinea unguium and is usually caused by Trichophyton rubrum, which invades the nail bed and the underside of the nail plate.
White superficial onychomycosis (WSO) is caused by fungal invasion of the superficial layers of the nail plate to form "white islands" on the plate. It accounts for around 10 percent of onychomycosis cases. In some cases, WSO is a misdiagnosis of "keratin granulations", which are not a fungus but a reaction to nail polish that can cause the nails to have a chalky white appearance. A laboratory test should be performed to confirm.
Proximal subungual onychomycosis is fungal penetration of the newly formed nail plate through the proximal nail fold. It is the least common form of tinea unguium in healthy people, but is found more commonly when the patient is immunocompromised.
Endonyx onychomycosis is characterized by leukonychia along with a lack of onycholysis or subungual hyperkeratosis.
Candidal onychomycosis is Candida species invasion of the fingernails, usually occurring in persons who frequently immerse their hands in water. This normally requires the prior damage of the nail by infection or trauma.
Differential diagnosis
In many cases of suspected nail fungus there is actually no fungal infection, but only nail deformity.
To avoid misdiagnosis as nail psoriasis, lichen planus, contact dermatitis, nail bed tumors such as melanoma, trauma, or yellow nail syndrome, laboratory confirmation may be necessary.
Other conditions that may appear similar to onychomycosis include: psoriasis, normal aging, green nail syndrome, yellow nail syndrome, and chronic paronychia.
Treatment
Medications
Most treatments are with antifungal medications, either topically or by mouth. Avoiding use of oral antifungal therapy (e.g., terbinafine) in persons without a confirmed infection is recommended because of the possible side effects of that treatment. The first topical terbinafine medication (MOB-015) was launched in February 2024 in Sweden under the name Terclara; it recorded a 76% mycological cure rate in two phase 3 studies. Its topical formulation avoids the typical side effects of oral terbinafine, producing roughly 1,000 times lower terbinafine levels in plasma. Roll-out in other countries is expected to continue in the coming years.
Medications that may be taken by mouth include terbinafine (76% effective), itraconazole (60% effective), and fluconazole (48% effective). They share characteristics that enhance their effectiveness: prompt penetration of the nail and nail bed, and persistence in the nail for months after discontinuation of therapy. Ketoconazole by mouth is not recommended due to side effects. Oral terbinafine is better tolerated than itraconazole. For superficial white onychomycosis, systemic rather than topical antifungal therapy is advised.
Topical agents include ciclopirox nail paint, amorolfine, and efinaconazole. Some topical treatments need to be applied daily for prolonged periods (at least one year). Topical amorolfine is applied weekly.
Efinaconazole, a topical azole antifungal, led to cure rates two or three times better than the next-best topical treatment, ciclopirox. In trials, about 17% of people were cured using efinaconazole, as opposed to 4% of people using placebo.
Topical ciclopirox results in a cure in 6% to 9% of cases; ciclopirox used with terbinafine appears to be better than either agent alone. Although efinaconazole, P-3051 (ciclopirox 8% hydrolacquer), and tavaborole are effective at treating fungal infection of toenails, complete cure rates are low.
Other
Chemical (keratolytic) or surgical debridement of the affected nail appears to improve outcomes.
As of 2014, evidence for laser treatment is unclear as the evidence is of low quality and varies by type of laser.
Tea tree oil is not recommended as a treatment on present data. It was found to irritate the surrounding skin in some trial participants.
Cost
United States
According to a 2015 study, the cost in the United States of testing with the periodic acid–Schiff stain (PAS) was about $148. Even if the cheaper KOH test is used first and the PAS test is used only if the KOH test is negative, there is a good chance that the PAS will be done (because of either a true or a false negative with the KOH test). But the terbinafine treatment costs only $10 (plus an additional $43 for liver function tests). In conclusion the authors say that terbinafine has a relatively benign adverse effect profile, with liver damage very rare, so it makes more sense cost-wise for the dermatologist to prescribe the treatment without doing the PAS test. (Another option would be to prescribe the treatment only if the potassium hydroxide test is positive, but it gives a false negative in about 20% of cases of fungal infection.) On the other hand, as of 2015 the price of topical (non-oral) treatment with efinaconazole was $2307 per nail, so testing is recommended before prescribing it.
The cost of efinaconazole treatment can be reduced to $65 per 1-month dose using drug coupons, bringing the treatment cost to $715 per nail.
Canada
In 2019, a study by the Canadian Agency for Drugs and Technologies in Health found the cost for a 48-week efinaconazole course to be $178 for a big toe, and $89 for a different toe.
Prognosis
Recurrence may occur following treatment, with a 20-25% relapse rate within 2 years of successful treatment. Nail fungus can be painful and cause permanent damage to nails. It may lead to other serious infections if the immune system is suppressed due to medication, diabetes or other conditions. The risk is most serious for people with diabetes and with immune systems weakened by leukemia or AIDS, or medication after organ transplant. Diabetics have vascular and nerve impairment, and are at risk of cellulitis, a potentially serious bacterial infection; any relatively minor injury to feet, including a nail fungal infection, can lead to more serious complications. Infection of the bone is another rare complication.
Epidemiology
A 2003 survey of diseases of the foot in 16 European countries found onychomycosis to be the most frequent fungal foot infection and estimated its prevalence at 27%. Prevalence was observed to increase with age. In Canada, the prevalence was estimated to be 6.48%. Onychomycosis affects approximately one-third of diabetics and is 56% more frequent in people with psoriasis.
Etymology
The term is from Ancient Greek ónux ("nail"), múkēs ("fungus"), and the suffix -ōsis ("functional disease").
Research
Research suggests that fungi are sensitive to heat, typically 40–60 °C. The basis of laser treatment is to heat the nail bed to these temperatures in order to disrupt fungal growth. As of 2013, research into laser treatment seemed promising. There is also ongoing development in photodynamic therapy, which uses laser or LED light to activate photosensitisers that eradicate fungi.
| Biology and health sciences | Fungal infections | Health |
2248633 | https://en.wikipedia.org/wiki/Uintatherium | Uintatherium | Uintatherium ("Beast of the Uinta Mountains") is an extinct genus of herbivorous dinoceratan mammal that lived during the Eocene epoch. Two species are currently recognized: U. anceps from the United States during the Early to Middle Eocene (56–38 million years ago) and U. insperatus of Middle to Late Eocene (48–34 million years ago) China.
Description
Uintatherium was a large browsing animal. Weighing up to 2 tonnes, it was similar to today's rhinoceros in both size and shape, with a long skull. Its legs were robust to sustain the weight of the animal and were equipped with hooves. Moreover, Uintatherium's sternum was made up of horizontal segments, unlike those of today's rhinos, which have compressed vertical segments.
Skull
Its most unusual feature was the skull, which was large and strongly built, but simultaneously flat and concave: this feature is rare and is found in no other known mammal except some brontotheres. The cranial cavity was exceptionally small because the walls of the cranium were exceedingly thick. The cranium was lightened by several sinuses, like those in an elephant's skull.
The teeth were larger in males than in females. The upper canine teeth were large and may have been formidable defensive weapons; superficially, they resembled those of saber-toothed cats.
The front of the male's skull bore six knob-like ossicones, which projected upward. Their function is unknown. They may have been used in defense and/or sexual display.
Discovery and taxonomy
Fossils of Uintatherium were first discovered in the Bridger Basin near Fort Bridger by Lieutenant W. N. Wann in September 1870 and were later described as a new species of Titanotherium, Titanotherium anceps, by Othniel Marsh in 1871. The specimen (YPM 11030) consisted only of several skull pieces, including the right parietal horn, and fragmentary postcrania. The following year, Marsh and Joseph Leidy collected in the Eocene beds near Fort Bridger while Edward Cope, Marsh's competitor, excavated in the Washakie Basin. In August 1872, Leidy named Uintatherium robustum based on a posterior skull and partial mandibles (ANSP 12607). Another specimen discovered by Leidy's crews, consisting of a canine, was named Uintamastix atrox and was thought to have been saber-toothed and carnivorous.
Eighteen days after the description of Uintatherium, Cope and Marsh both named new genera of Uinta dinoceratans, Cope naming Loxolophodon in his "garbled" telegram and Marsh dubbing Tinoceras. Because Uintatherium was named first, Cope's and Marsh's genera are synonymous with Uintatherium. Cope described two genera in his telegram, Loxolophodon and Eobasileus; the latter is currently considered separate from Uintatherium. Tinoceras was a new genus erected by Marsh for Titanotherium anceps. Several days later, Marsh erected the genus Dinoceras. Dinoceras and Tinoceras would receive several additional species from Marsh throughout the 1870s and 1880s, many based on fragmentary material. Several complete skulls were found by Cope and Marsh crews, leading to theories like Cope's proboscidean assessment. Because of Cope and Marsh's rivalry, the two would often publish scathing criticisms of each other's work, each insisting that his own genera were valid. The three naturalists would name 25 species now considered synonymous with Marsh's original species, Titanotherium anceps, which was placed in Leidy's genus, Uintatherium.
Many additional discoveries of Uintatherium have since occurred, making Uintatherium one of the best-known and most popular American fossil mammals. Princeton University launched expeditions to the Eocene beds of Wyoming in the 1870s and 1880s, discovering several partial skulls and naming several species of uintatheres that are now considered synonyms of U. anceps. A major reassessment came in the 1960s from Walter Wheeler, who synonymized and re-described many of the Uintatherium fossils discovered during the 19th century. A cast of a Uintatherium skeleton is on display at the Utah Field House of Natural History State Park. The skeleton of Uintatherium is also on display at the Smithsonian National Museum of Natural History in Washington, DC. A new species, U. insperatus, was named on the basis of an almost intact skull found in the lower part of the Lushi Formation of the Lushi Basin in Henan Province, China.
| Biology and health sciences | Mammals: General | Animals |
14684531 | https://en.wikipedia.org/wiki/Graphometer | Graphometer | The graphometer, semicircle or semicircumferentor is a surveying instrument used for angle measurements. It consists of a semicircular limb divided into 180 degrees and sometimes subdivided into minutes. The limb is subtended by the diameter with two sights at its ends. In the middle of the diameter a "box and needle" (compass) is fixed. On the same middle the alidade with two other sights is fitted. The device is mounted on a staff via a ball and socket joint. In effect the device is a half-circumferentor. For convenience, sometimes another half-circle from 180 to 360 degrees may be graduated in another line on the limb.
The form was introduced in a treatise by Philippe Danfrie (Paris, 1597), and the term graphometer was popular with French geodesists. The preferred English-language terms were semicircle or semicircumferentor. Some 19th-century graphometers had telescopic rather than open sights.
Le Nôtre's treatise on 'The theory and practice of gardening', published in 1709, described the use of the graphometer in transferring geometric shapes from garden plans onto landscapes at a large scale.
Usage
To measure an angle, say, EKG, the diameter middle C is placed at the angle apex K using the plummet at point C of the instrument. The diameter is aligned with leg KE of the angle using the sights at the ends of the diameter. The alidade is aligned with the leg KG using another pair of sights, and the angle read off the limb as marked by the alidade. Further uses of the graphometer are the same as those of the circumferentor.
| Technology | Surveying tools | null |
14688273 | https://en.wikipedia.org/wiki/Allotropes%20of%20phosphorus | Allotropes of phosphorus | Elemental phosphorus can exist in several allotropes, the most common of which are white and red solids. Solid violet and black allotropes are also known. Gaseous phosphorus exists as diphosphorus and atomic phosphorus.
White phosphorus
White phosphorus, yellow phosphorus or simply tetraphosphorus () exists as molecules of four phosphorus atoms in a tetrahedral structure, joined by six phosphorus—phosphorus single bonds. The free P4 molecule in the gas phase has a P-P bond length of rg = 2.1994(3) Å as was determined by gas electron diffraction. Despite the tetrahedral arrangement the P4 molecules have no significant ring strain and a vapor of P4 molecules is stable. This is due to the nature of bonding in the P4 tetrahedron which can be described by spherical aromaticity or cluster bonding, that is the electrons are highly delocalized. This has been illustrated by calculations of the magnetically induced currents, which sum up to 29 nA/T, much more than in the archetypical aromatic molecule benzene (11 nA/T).
Molten and gaseous white phosphorus also retains the tetrahedral molecules until about 800 °C, when it starts decomposing into P2 molecules.
White phosphorus is a translucent waxy solid that quickly yellows in light, and impure white phosphorus is for this reason called yellow phosphorus. It is toxic, causing severe liver damage on ingestion and phossy jaw from chronic ingestion or inhalation.
It glows greenish in the dark (when exposed to oxygen). It ignites spontaneously in air at about 50 °C, and at much lower temperatures if finely divided (due to melting-point depression). Because of this property, white phosphorus is used as a weapon. Phosphorus reacts with oxygen, usually forming one of two oxides depending on the amount of available oxygen: P4O6 (phosphorus trioxide) when reacted with a limited supply of oxygen, and P4O10 when reacted with excess oxygen. On rare occasions, P4O7, P4O8, and P4O9 are also formed, but in small amounts. Combustion in excess oxygen gives phosphorus(V) oxide, which consists of tetrahedral P4O10, with oxygen inserted between the phosphorus atoms and at their vertices:
P4 + 5 O2 → P4O10
The combustion of this form has a characteristic garlic odour. White phosphorus is only slightly soluble in water and can be stored under water. Indeed, white phosphorus is safe from self-igniting when submerged in water; due to this, unreacted white phosphorus can prove hazardous to beachcombers who may collect washed-up samples while unaware of their true nature. P4 is soluble in benzene, oils, carbon disulfide, and disulfur dichloride.
The white allotrope can be produced using several methods. In the industrial process, phosphate rock is heated in an electric or fuel-fired furnace in the presence of carbon and silica. Elemental phosphorus is then liberated as a vapour and can be collected under phosphoric acid. An idealized equation for this carbothermal reaction is shown for calcium phosphate (although phosphate rock contains substantial amounts of fluoroapatite):
2 Ca3(PO4)2 + 6 SiO2 + 10 C → 6 CaSiO3 + 10 CO + P4
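As an arithmetic illustration of the idealized equation, the theoretical P4 yield from pure calcium phosphate follows from the molar masses; the function name and feed mass below are hypothetical:

```python
# 2 Ca3(PO4)2 + 6 SiO2 + 10 C -> 6 CaSiO3 + 10 CO + P4
M_CA3_PO4_2 = 310.18  # g/mol, calcium phosphate (approximate)
M_P4 = 123.90         # g/mol, white phosphorus (approximate)

def p4_yield_kg(phosphate_kg: float) -> float:
    """Theoretical P4 mass from the 2 Ca3(PO4)2 : 1 P4 mole ratio."""
    moles_phosphate = phosphate_kg * 1000.0 / M_CA3_PO4_2
    return moles_phosphate / 2.0 * M_P4 / 1000.0

print(p4_yield_kg(100.0))  # ~20 kg of P4 per 100 kg of pure calcium phosphate
```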
Other polyhedrane analogues
Although white phosphorus forms the P4 tetrahedron, the phosphorus analogue of the simplest possible Platonic hydrocarbon, no other polyhedral phosphorus clusters are known. White phosphorus converts to the thermodynamically more stable red allotrope, but that allotrope does not consist of isolated polyhedra.
A cubane-type cluster, in particular, is unlikely to form; the closest approach is the half-phosphorus cubane analogue P4(CR)4, produced from phosphaalkynes. Other clusters are more thermodynamically favorable, and some have been partially formed as components of larger polyelemental compounds.
Red phosphorus
Red phosphorus may be formed by heating white phosphorus to about 300 °C in the absence of air or by exposing white phosphorus to sunlight. Red phosphorus exists as an amorphous network. Upon further heating, the amorphous red phosphorus crystallizes. It has two crystalline forms: violet phosphorus and fibrous red phosphorus. Bulk red phosphorus does not ignite in air at temperatures below about 240 °C, whereas pieces of white phosphorus ignite at about 30 °C.
Under standard conditions red phosphorus is more stable than white phosphorus, but less stable than the thermodynamically stable black phosphorus; it is, however, the most kinetically stable allotrope. The standard enthalpy of formation of red phosphorus is −17.6 kJ/mol.
It was first presented by Anton von Schrötter before the Vienna Academy of Sciences on December 9, 1847, although others had doubtlessly had this substance in their hands before, such as Berzelius.
Applications
Red phosphorus can be used as a very effective flame retardant, especially in thermoplastics (e.g. polyamide) and thermosets (e.g. epoxy resins or polyurethanes). The flame-retarding effect is based on the formation of polyphosphoric acid. Together with the organic polymer material, these acids create a char that prevents the propagation of the flames. The safety risks associated with phosphine generation and the friction sensitivity of red phosphorus can be effectively minimized by stabilization and micro-encapsulation. For easier handling, red phosphorus is often used in the form of dispersions or masterbatches in various carrier systems. However, for electronic/electrical systems, red phosphorus flame retardant has been effectively banned by major OEMs due to its tendency to induce premature failures. One persistent problem is that red phosphorus in epoxy molding compounds induces elevated leakage current in semiconductor devices. Another problem was acceleration of hydrolysis reactions in PBT insulating material.
Red phosphorus can also be used in the illicit production of methamphetamine and Krokodil.
Red phosphorus can also be used as an elemental photocatalyst for hydrogen formation from water, displaying steady hydrogen evolution rates of 633 μmol/(h⋅g) through the formation of small-sized fibrous phosphorus.
Violet or Hittorf's phosphorus
Monoclinic phosphorus, violet phosphorus, or Hittorf's metallic phosphorus is a crystalline form of amorphous red phosphorus. In 1865, Johann Wilhelm Hittorf heated red phosphorus in a sealed tube at 530 °C, with the upper part of the tube kept at 444 °C; brilliant opaque monoclinic, or rhombohedral, crystals sublimed as a result. Violet phosphorus can also be prepared by dissolving white phosphorus in molten lead in a sealed tube at 500 °C for 18 hours; upon slow cooling, Hittorf's allotrope crystallises out, and the crystals can be revealed by dissolving the lead in dilute nitric acid followed by boiling in concentrated hydrochloric acid. In addition, a fibrous form exists with similar phosphorus cages. The lattice structure of violet phosphorus was presented by Thurn and Krebs in 1969, but imaginary frequencies, indicating structural instabilities, were later obtained for that reported 1969 structure. Single crystals of violet phosphorus have also been produced, and their lattice structure has been determined by single-crystal X-ray diffraction to be monoclinic with space group P2/n (13) (a = 9.210, b = 9.128, c = 21.893 Å, β = 97.776°, CSD-1935087). The optical band gap of violet phosphorus was measured by diffuse reflectance spectroscopy to be around 1.7 eV, and its thermal decomposition temperature is 52 °C higher than that of its black phosphorus counterpart. Violet phosphorene is readily obtained by both mechanical and solution exfoliation.
Reactions of violet phosphorus
Violet phosphorus does not ignite in air until heated to 300 °C and is insoluble in all solvents. It is not attacked by alkali and only slowly reacts with halogens. It can be oxidised by nitric acid to phosphoric acid. Violet phosphorus ignites upon impact in air.
If it is heated in an atmosphere of inert gas, for example nitrogen or carbon dioxide, it sublimes and the vapour condenses as white phosphorus. If it is heated in a vacuum and the vapour condensed rapidly, violet phosphorus is obtained. It would appear that violet phosphorus is a polymer of high relative molecular mass, which on heating breaks down into P2 molecules. On cooling, these would normally dimerize to give P4 molecules (i.e. white phosphorus) but, in a vacuum, they link up again to form the polymeric violet allotrope.
Black phosphorus
Black phosphorus is the thermodynamically stable form of phosphorus at room temperature and pressure, with a heat of formation of −39.3 kJ/mol (relative to white phosphorus, which is defined as the standard state). It was first synthesized by heating white phosphorus under high pressure (12,000 atmospheres) in 1914. In appearance, properties, and structure, black phosphorus is very much like graphite, with both being black and flaky, a conductor of electricity, and consisting of sheets of linked atoms (puckered, in the case of black phosphorus).
Black phosphorus has an orthorhombic pleated honeycomb structure and is the least reactive allotrope, a result of its lattice of interlinked six-membered rings where each atom is bonded to three other atoms. In this structure, each phosphorus atom has five outer shell electrons. Black and red phosphorus can also take a cubic crystal lattice structure. The first high-pressure synthesis of black phosphorus crystals was made by the Nobel prize winner Percy Williams Bridgman in 1914. Metal salts catalyze the synthesis of black phosphorus.
Black phosphorus-based sensors exhibit several qualities superior to the traditional materials used in piezoelectric or resistive sensors. Its puckered honeycomb lattice structure gives black phosphorus exceptional carrier mobility together with mechanical resilience, properties that underpin high sensitivity and make it an intriguing candidate for sensor technology.
Phosphorene
The similarities to graphite also include the possibility of scotch-tape delamination (exfoliation), resulting in phosphorene, a graphene-like 2D material with excellent charge transport, thermal transport, and optical properties. Distinguishing features of scientific interest include a thickness-dependent band gap, which is not found in graphene. This, combined with a high on/off ratio of ~10⁵, makes phosphorene a promising candidate for field-effect transistors (FETs). The tunable band gap also suggests promising applications in mid-infrared photodetectors and LEDs. Exfoliated black phosphorus sublimes at 400 °C in vacuum. It gradually oxidizes when exposed to water in the presence of oxygen, which is a concern when contemplating it as a material for the manufacture of transistors, for example. Exfoliated black phosphorus is also an emerging anode material in the battery community, showing high stability and lithium storage capacity.
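To see why a tunable band gap points at mid-infrared optoelectronics, a gap energy can be converted to its photon cutoff wavelength via λ ≈ 1239.84 nm·eV / E. The gap values below are typical literature figures for black phosphorus, assumed for illustration rather than taken from this article:

```python
# Photon cutoff wavelength from a band gap: lambda = hc/E ~ 1239.84/E[eV] nm.
# The ~0.3 eV bulk and ~1.8 eV monolayer gaps below are typical literature
# values, not figures from this article (assumption).
def cutoff_nm(gap_ev: float) -> float:
    return 1239.84 / gap_ev  # hc expressed in eV*nm

for label, gap in [("bulk (assumed ~0.3 eV)", 0.3),
                   ("monolayer (assumed ~1.8 eV)", 1.8)]:
    print(f"{label}: cutoff ~ {cutoff_nm(gap):.0f} nm")
# bulk -> ~4133 nm (mid-infrared), monolayer -> ~689 nm (visible/near-IR)
```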
Ring-shaped phosphorus
Ring-shaped phosphorus was theoretically predicted in 2007. It was subsequently self-assembled inside evacuated multi-walled carbon nanotubes with inner diameters of 5–8 nm using a vapor encapsulation method. A ring with a diameter of 5.30 nm, consisting of 23 P8 and 23 P2 units with a total of 230 P atoms, was observed at atomic scale inside a multi-walled carbon nanotube with an inner diameter of 5.90 nm. The distance between neighboring rings is 6.4 Å.
The ring-shaped molecule is not stable in isolation.
Blue phosphorus
Single-layer blue phosphorus was first produced in 2016 by the method of molecular beam epitaxy from black phosphorus as precursor.
Diphosphorus
The diphosphorus allotrope (P2) can normally be obtained only under extreme conditions (for example, from P4 at 1100 kelvin). In 2006, the diatomic molecule was generated in homogeneous solution under normal conditions with the use of transition metal complexes (for example, tungsten and niobium).
Diphosphorus is the gaseous form of phosphorus, and the thermodynamically stable form between 1200 °C and 2000 °C. The dissociation of tetraphosphorus (P4) begins at lower temperatures: the percentage of P2 at 800 °C is ≈ 1%. At temperatures above about 2000 °C, the diphosphorus molecule begins to dissociate into atomic phosphorus.
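The two dissociation steps described above can be summarized as:

$$\mathrm{P_4} \rightleftharpoons 2\,\mathrm{P_2} \quad (\approx 1\%\ \mathrm{P_2\ at\ 800\ ^\circ C}), \qquad \mathrm{P_2} \rightleftharpoons 2\,\mathrm{P} \quad (\mathrm{above\ about\ 2000\ ^\circ C})$$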
Phosphorus nanorods
Phosphorus nanorod polymers were isolated from CuI-P complexes using low-temperature treatment.
Red/brown phosphorus was shown to be stable in air for several weeks and to have properties distinct from those of red phosphorus. Electron microscopy showed that red/brown phosphorus forms long, parallel nanorods with a diameter between 3.4 Å and 4.7 Å.
Properties
| Physical sciences | Group 15 | Chemistry |
8857956 | https://en.wikipedia.org/wiki/Progymnosperm | Progymnosperm | The progymnosperms are an extinct group of woody, spore-bearing plants that is presumed to have evolved from the trimerophytes, and eventually gave rise to the spermatophytes, ancestral to both gymnosperms and angiosperms (flowering plants). They have been treated formally at the rank of division Progymnospermophyta or class Progymnospermopsida. The stratigraphically oldest known examples belong to the Middle Devonian order Aneurophytales, with forms such as Protopteridium, in which the vegetative organs consisted of relatively loose clusters of axes. Tetraxylopteris is another example of a genus lacking leaves. In more advanced aneurophytaleans such as Aneurophyton these vegetative organs started to look rather more like fronds, and eventually during Late Devonian times the aneurophytaleans are presumed to have given rise to the pteridosperm order Lyginopteridales. In Late Devonian times, another group of progymnosperms gave rise to the first really large trees, known as Archaeopteris. The latest surviving group of progymnosperms is the Noeggerathiales, which persisted until the end of the Permian.
Other characteristics:
Vascular cambium with unlimited growth potential is present as well as xylem and phloem.
Ancestors of the earliest seed plants as well as the first true trees.
Strong monopodial growth is exhibited.
Some were heterosporous but others were homosporous.
Phylogeny
Progymnosperms are a paraphyletic grade of plants.
| Biology and health sciences | Pteridophytes | Plants |
1595817 | https://en.wikipedia.org/wiki/Energy%20demand%20management | Energy demand management | Energy demand management, also known as demand-side management (DSM) or demand-side response (DSR), is the modification of consumer demand for energy through various methods such as financial incentives and behavioral change through education.
Usually, the goal of demand-side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends. Peak demand management does not necessarily decrease total energy consumption, but it can reduce the need for investments in networks and/or power plants to meet peak demands. An example is the use of energy storage units to store energy during off-peak hours and discharge it during peak hours.
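As a rough illustration of the storage example above, the sketch below estimates the bill impact of shifting peak load into off-peak hours; the tariff, load profile, and battery parameters are all illustrative assumptions:

```python
# Minimal sketch of peak shaving with storage: charge a battery during
# off-peak hours and discharge it during peak hours. Prices, the load
# profile, and the battery parameters are illustrative assumptions.
OFF_PEAK, PEAK = 0.10, 0.30   # $/kWh (assumed tariff)
load_kwh = [2] * 16 + [5] * 8  # 24 hourly loads: 16 off-peak h, then 8 peak h

battery_kwh, efficiency = 20, 0.9   # usable capacity and round-trip efficiency

cost_flat = sum(l * (PEAK if h >= 16 else OFF_PEAK)
                for h, l in enumerate(load_kwh))

# Shift as much peak energy as the battery can deliver.
shifted = min(battery_kwh * efficiency, sum(load_kwh[16:]))
cost_shifted = cost_flat - shifted * PEAK + (shifted / efficiency) * OFF_PEAK
print(f"without storage: ${cost_flat:.2f}, with storage: ${cost_shifted:.2f}")
```

The saving comes from the price spread between peak and off-peak hours, discounted by the round-trip losses of the battery.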
A newer application for DSM is to aid grid operators in balancing variable generation from wind and solar units, particularly when the timing and magnitude of energy demand does not coincide with the renewable generation. Generators brought on line during peak demand periods are often fossil fuel units. Minimizing their use reduces emissions of carbon dioxide and other pollutants.
The term DSM was coined following the 1973 and 1979 energy crises. Governments of many countries mandated various programs for demand management. An early example is the National Energy Conservation Policy Act of 1978 in the U.S., preceded by similar actions in California and Wisconsin. Demand-side management was introduced publicly by the Electric Power Research Institute (EPRI) in the 1980s. Nowadays, DSM technologies have become increasingly feasible due to the integration of information and communications technology with the power system, reflected in terms such as integrated demand-side management (IDSM) and smart grid.
Operation
The American electric power industry originally relied heavily on foreign energy imports, whether in the form of consumable electricity or fossil fuels that were then used to produce electricity. During the time of the energy crises in the 1970s, the federal government passed the Public Utility Regulatory Policies Act (PURPA), hoping to reduce dependence on foreign oil and to promote energy efficiency and alternative energy sources. This act forced utilities to obtain the cheapest possible power from independent power producers, which in turn promoted renewables and encouraged the utility to reduce the amount of power they need, hence pushing forward agendas for energy efficiency and demand management.
Electricity use can vary dramatically on short and medium time frames, depending on current weather patterns. Generally the wholesale electricity system adjusts to changing demand by dispatching additional or less generation. However, during peak periods, the additional generation is usually supplied by less efficient ("peaking") sources. Unfortunately, the instantaneous financial and environmental cost of using these "peaking" sources is not necessarily reflected in the retail pricing system. In addition, the ability or willingness of electricity consumers to adjust to price signals by altering demand (elasticity of demand) may be low, particularly over short time frames. In many markets, consumers (particularly retail customers) do not face real-time pricing at all, but pay rates based on average annual costs or other constructed prices.
Energy demand management activities attempt to bring electricity demand and supply closer to a perceived optimum, and to help give electricity end users benefits for reducing their demand. In the modern system, an integrated approach to demand-side management is becoming increasingly common. IDSM automatically sends signals to end-use systems to shed load depending on system conditions. This allows very precise tuning of demand to ensure that it matches supply at all times and reduces capital expenditures for the utility. Critical system conditions could be peak times or, in areas with high levels of variable renewable energy, times when demand must be adjusted upward to avoid over-generation or downward to help with ramping needs.
In general, adjustments to demand can occur in various ways: through responses to price signals, such as permanent differential rates for evening and day times or occasional highly priced usage days, behavioral changes achieved through home area networks, automated controls such as with remotely controlled air-conditioners, or with permanent load adjustments with energy efficient appliances.
Logical foundations
Demand for any commodity can be modified by the actions of market players and government (regulation and taxation). Energy demand management implies actions that influence demand for energy. DSM was originally adopted in electricity, but today it is applied widely to other utilities, including water and gas.
Reducing energy demand is contrary to what both energy suppliers and governments have been doing during most of the modern industrial history. Whereas real prices of various energy forms have been decreasing during most of the industrial era, due to economies of scale and technology, the expectation for the future is the opposite. Previously, it was not unreasonable to promote energy use as more copious and cheaper energy sources could be anticipated in the future or the supplier had installed excess capacity that would be made more profitable by increased consumption.
In centrally planned economies subsidizing energy was one of the main economic development tools. Subsidies to the energy supply industry are still common in some countries.
Contrary to the historical situation, energy prices and availability are expected to deteriorate. Governments and other public actors, if not the energy suppliers themselves, are tending to employ energy demand measures that will increase the efficiency of energy consumption.
Types
Energy efficiency: Using less power to perform the same tasks. This involves a permanent reduction of demand by using more efficient load-intensive appliances such as water heaters, refrigerators, or washing machines.
Demand response: Any reactive or preventative method to reduce, flatten or shift demand. Historically, demand response programs have focused on peak reduction to defer the high cost of constructing generation capacity. However, demand response programs are now also being looked to for help in changing the net load shape (load minus solar and wind generation) in order to support the integration of variable renewable energy. Demand response includes all intentional modifications to the consumption patterns of electricity of end-use customers that are intended to alter the timing, the level of instantaneous demand, or the total electricity consumption. It covers a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system (such as peak period network congestion or high prices), including the aforementioned IDSM.
Dynamic demand: Advance or delay appliance operating cycles by a few seconds to increase the diversity factor of the set of loads. The concept is that by monitoring the power factor of the power grid, as well as their own control parameters, individual, intermittent loads would switch on or off at optimal moments to balance the overall system load with generation, reducing critical power mismatches. As this switching would only advance or delay the appliance operating cycle by a few seconds, it would be unnoticeable to the end user. In the United States, in 1982, a (now-lapsed) patent for this idea was issued to power systems engineer Fred Schweppe. This type of dynamic demand control is frequently used for air-conditioners; one example is the SmartAC program in California. A minimal control sketch is given after this list.
Distributed energy resources: Distributed generation, also distributed energy, on-site generation (OSG) or district/decentralized energy is electrical generation and storage performed by a variety of small, grid-connected devices referred to as distributed energy resources (DER). Conventional power stations, such as coal-fired, gas and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular and more flexible technologies, that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance they are referred to as hybrid power systems. DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role for the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system, and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables collection of energy from many sources and may lower environmental impacts and improve security of supply.
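The dynamic demand item above refers to a minimal control sketch, given here. Implementations commonly monitor grid frequency as the stress signal; the dead band and behaviour below are illustrative assumptions, not a description of any deployed system:

```python
# Toy dynamic-demand rule: an intermittent load (e.g. a fridge or air
# conditioner compressor) defers switching on when grid frequency is low
# (generation scarce) and runs normally otherwise.
# Thresholds and hysteresis values are illustrative assumptions.
NOMINAL_HZ = 50.0
LOW_HZ, HIGH_HZ = 49.9, 50.1     # assumed dead band around nominal

def should_run(freq_hz: float, thermostat_calls: bool, urgent: bool) -> bool:
    """Decide whether the appliance should run this control interval."""
    if urgent:                    # never sacrifice the primary function
        return True
    if not thermostat_calls:      # appliance doesn't need to run anyway
        return False
    if freq_hz < LOW_HZ:          # grid stressed: delay the cycle briefly
        return False
    return True                   # normal or high frequency: run as usual

# Example: a stressed grid (49.85 Hz) defers a non-urgent cooling cycle.
print(should_run(49.85, thermostat_calls=True, urgent=False))  # False
print(should_run(50.05, thermostat_calls=True, urgent=False))  # True
```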
Scale
Broadly, demand side management can be classified into four categories: national scale, utility scale, community scale, and individual household scale.
National scale
Energy efficiency improvement is one of the most important demand side management strategies. Efficiency improvements can be implemented nationally through legislation and standards in housing, building, appliances, transport, machines, etc.
Utility scale
During peak demand times, utilities are able to control storage water heaters, pool pumps and air conditioners over large areas to reduce peak demand, e.g. in Australia and Switzerland. One of the common technologies is ripple control: a high-frequency signal (e.g. 1000 Hz) is superimposed on the normal mains waveform (50 or 60 Hz) to switch devices on or off.
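A minimal simulation of the ripple-control idea follows, assuming an illustrative 230 V / 50 Hz mains, a 5 V pilot tone, and a single-bin DFT detector (all parameter values are assumptions, not figures from the text):

```python
# Sketch of ripple-control detection: a 1000 Hz pilot tone superimposed
# on a 50 Hz mains waveform, recovered with a single-bin DFT (direct sum).
# Amplitudes and the on/off threshold are illustrative assumptions.
import math

FS = 10_000                       # sampling rate in Hz (assumption)
N = 1000                          # samples analysed (0.1 s window)

def mains(t, ripple_on):
    v = 230 * math.sqrt(2) * math.sin(2 * math.pi * 50 * t)   # 50 Hz mains
    if ripple_on:
        v += 5 * math.sin(2 * math.pi * 1000 * t)             # pilot tone
    return v

def tone_amplitude(samples, f, fs):
    """Amplitude of the component at frequency f (one DFT bin)."""
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / len(samples)

for ripple_on in (False, True):
    sig = [mains(i / FS, ripple_on) for i in range(N)]
    amp = tone_amplitude(sig, 1000, FS)
    print(f"ripple {'on ' if ripple_on else 'off'}: ~{amp:.2f} V at 1000 Hz")
# The receiver switches the load when the 1000 Hz amplitude crosses a threshold.
```

Because the 0.1 s analysis window holds an integer number of both 50 Hz and 1000 Hz cycles, the mains component contributes nothing to the 1000 Hz bin, so the receiver can switch the load on a simple amplitude threshold.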
In more service-based economies, such as Australia, electricity network peak demand often occurs in the late afternoon to early evening (4pm to 8pm). Residential and commercial demand is the most significant part of these types of peak demand. Therefore, it makes great sense for utilities (electricity network distributors) to manage residential storage water heaters, pool pumps, and air conditioners.
Community scale
Other names include neighborhood, precinct, or district scale. Community central heating systems have existed for many decades in regions with cold winters. Similarly, peak demand in summer-peaking regions needs to be managed, e.g. in Texas and Florida in the U.S. and in Queensland and New South Wales in Australia. Demand side management can be implemented at the community scale to reduce peak demand for heating or cooling. Another aspect is achieving net zero-energy buildings or communities.
Managing energy, peak demand, and bills at the community level may be more feasible and viable because of collective purchasing power, bargaining power, more options for energy efficiency or storage, and more flexibility and diversity in generating and consuming energy at different times, e.g. using PV to offset daytime consumption or to charge energy storage.
Household scale
In parts of Australia, more than 30% of households (as of 2016) have rooftop photovoltaic systems, allowing them to use free energy from the sun to reduce energy imports from the grid. Demand side management is most helpful when a systematic approach is taken, coordinating the operation of photovoltaics, air conditioners, battery energy storage systems, storage water heaters, building performance, and energy efficiency measures.
Examples
Queensland, Australia
The utility companies in the state of Queensland, Australia, have devices fitted onto certain household appliances such as air conditioners, or into household meters, to control water heaters, pool pumps, etc. These devices allow energy companies to remotely cycle the use of these items during peak hours. Their plan also includes improving the efficiency of energy-using items and giving financial incentives to consumers who use electricity during off-peak hours, when it is less expensive for energy companies to produce.
Another example is that, with demand side management, Southeast Queensland households can use electricity from rooftop photovoltaic systems to heat water.
Toronto, Canada
In 2008, Toronto Hydro, the monopoly electricity distributor for Toronto, Ontario, had over 40,000 people signed up to have remote devices attached to their air conditioners, which the utility uses to offset spikes in demand. Spokeswoman Tanya Bruckmueller says that this program can reduce demand by 40 megawatts during emergency situations.
Indiana, US
The Alcoa Warrick Operation is participating in MISO as a qualified demand response resource, which means it is providing demand response in terms of energy, spinning reserve, and regulation service.
Brazil
Demand-side management can apply to electricity systems based on thermal power plants, or to systems where renewable energy, such as hydroelectricity, is predominant but complemented by thermal generation, as in Brazil.
In Brazil's case, although hydroelectric generation corresponds to more than 80% of the total, achieving a practical balance in the generation system means that the energy generated by hydroelectric plants supplies consumption below the peak demand, while peak generation is supplied by fossil-fuel power plants. In 2008, Brazilian consumers paid more than US$1 billion for complementary thermoelectric generation that had not been previously programmed.
In Brazil, the consumer pays for all the investment needed to provide energy, even if a plant sits idle. For most fossil-fuel thermal plants, consumers pay for fuel and other operating costs only when these plants generate energy. Per unit generated, energy from thermal plants is more expensive than from hydroelectric plants. Only a few of Brazil's thermoelectric plants use natural gas, so they pollute significantly more than hydroelectric plants. The power generated to meet peak demand has higher investment and operating costs, and the associated pollution carries a significant environmental cost and, potentially, financial and social liability. Thus, the expansion and operation of the current system is not as efficient as it could be with demand side management. The consequence of this inefficiency is an increase in energy tariffs that is passed on to consumers.
Moreover, because electric energy is generated and consumed almost instantaneously, all facilities, such as transmission lines and distribution networks, are built for peak consumption. During non-peak periods their full capacity is not utilized.
Reducing peak consumption can benefit the efficiency of electric systems like Brazil's in various ways: deferring new investments in distribution and transmission networks, and reducing the need for complementary thermal power operation during peak periods, which diminishes both the payments for investment in new power plants that supply only the peak period and the environmental impact associated with greenhouse gas emissions.
Issues
Some people argue that demand-side management has been ineffective because it has often resulted in higher utility costs for consumers and less profit for utilities.
One of the main goals of demand side management is to be able to charge the consumer based on the true price of the utilities at that time. If consumers could be charged less for using electricity during off-peak hours, and more during peak hours, then supply and demand would theoretically encourage the consumer to use less electricity during peak hours, thus achieving the main goal of demand side management.
| Technology | Concepts | null |
1596221 | https://en.wikipedia.org/wiki/Chemical%20test | Chemical test | In chemistry, a chemical test is a qualitative or quantitative procedure designed to identify, quantify, or characterise a chemical compound or chemical group.
Purposes
Chemical testing might have a variety of purposes, such as to:
Determine if, or verify that, the requirements of a specification, regulation, or contract are met
Decide if a new product development program is on track: Demonstrate proof of concept
Demonstrate the utility of a proposed patent
Determine the interactions of a sample with other known substances
Determine the composition of a sample
Provide standard data for other scientific, medical, and Quality assurance functions
Validate suitability for end-use
Provide a basis for Technical communication
Provide a technical means of comparison of several options
Provide evidence in legal proceedings
Biochemical tests
Clinistrips quantitatively test for sugar in urine
The Kastle-Meyer test tests for the presence of hemoglobin
Salicylate testing is a category of drug testing that is focused on detecting salicylates such as acetylsalicylic acid for either biochemical or medical purposes.
The Phadebas test tests for the presence of saliva for forensic purposes
Iodine solution tests for starch
The Van Slyke determination tests for specific amino acids
The Zimmermann test tests for ketosteroids
Seliwanoff's test differentiates between aldose and ketose sugars
Test for lipids: add ethanol to sample, then shake; add water to the solution, and shake again. If fat is present, the product turns milky white.
The Sakaguchi test detects the presence of arginine in protein
The Hopkins–Cole reaction tests for the presence of tryptophan in proteins
The nitroprusside reaction tests for the presence of free thiol groups of cysteine in proteins
The Sullivan reaction tests for the presence of cysteine and cystine in proteins
The Acree–Rosenheim reaction tests for the presence of tryptophan in proteins
The Pauly reaction tests for the presence of tyrosine or histidine in proteins
Heller's test tests for the presence of albumin in urine
Gmelin's test tests for the presence of bile pigments in urine
Hay's test tests for the presence of bile pigments in urine
Reducing sugars
Barfoed's test tests for reducing monosaccharides, distinguishing them from disaccharides
Benedict's reagent tests for reducing sugars or aldehydes
Fehling's solution tests for reducing sugars or aldehydes, similar to Benedict's reagent
Molisch's test tests for carbohydrates
Nylander's test tests for reducing sugars
Rapid furfural test distinguishes between glucose and fructose
Proteins and polypeptides
The bicinchoninic acid assay tests for proteins
The Biuret test tests for proteins and polypeptides
Bradford protein assay measures protein quantitatively
The Phadebas amylase test determines alpha-amylase activity
Organic tests
The carbylamine reaction tests for primary amines
The esterification reaction tests for the presence of alcohol and/or carboxylic acids
The Griess test tests for organic nitrite compounds
The 2,4-dinitrophenylhydrazine tests for carbonyl compounds
The iodoform reaction tests for the presence of methyl ketones, or compounds which can be oxidized to methyl ketones
The Schiff test detects aldehydes
Tollens' reagent tests for aldehydes (known as the silver mirror test)
The Zeisel determination tests for the presence of esters or ethers
Lucas' reagent is used to distinguish between primary, secondary and tertiary alcohols.
The bromine test is used to test for the presence of unsaturation and phenols.
Inorganic tests
Barium chloride tests for sulfates
Acidified silver nitrate solution tests for halide ions
The Beilstein test tests for halides qualitatively
The bead test tests for certain metals
The Carius halogen method measures halides quantitatively.
Chemical tests for cyanide test for the presence of cyanide, CN−
Copper sulfate tests for the presence of water
Flame tests test for metals
The Gilman test tests for the presence of a Grignard reagent
The Kjeldahl method quantitatively determines the presence of nitrogen
Nessler's reagent tests for the presence of ammonia
Ninhydrin tests for ammonia or primary amines
Phosphate tests test for phosphate
The sodium fusion test tests for the presence of nitrogen, sulfur, and halides in a sample
The Zerewitinoff determination tests for any acidic hydrogen
The Oddy test tests for acid, aldehydes, and sulfides
Gunzberg's test tests for the presence of hydrochloric acid
Kelling's test tests for the presence of lactic acid
| Physical sciences | Chemical methods | Chemistry |
1596317 | https://en.wikipedia.org/wiki/Habitat | Habitat | In ecology, habitat refers to the array of resources, physical and biotic factors that are present in an area, such as to support the survival and reproduction of a particular species. A species habitat can be seen as the physical manifestation of its ecological niche. Thus "habitat" is a species-specific term, fundamentally different from concepts such as environment or vegetation assemblages, for which the term "habitat-type" is more appropriate.
The physical factors may include (for example): soil, moisture, range of temperature, and light intensity. Biotic factors include the availability of food and the presence or absence of predators. Every species has particular habitat requirements, habitat generalist species are able to thrive in a wide array of environmental conditions while habitat specialist species require a very limited set of factors to survive. The habitat of a species is not necessarily found in a geographical area, it can be the interior of a stem, a rotten log, a rock or a clump of moss; a parasitic organism has as its habitat the body of its host, part of the host's body (such as the digestive tract), or a single cell within the host's body.
Habitat types are environmental categorizations of different environments based on the characteristics of a given geographical area, particularly vegetation and climate. Thus habitat types do not refer to a single species but to multiple species living in the same area. For example, terrestrial habitat types include forest, steppe, grassland, semi-arid or desert. Fresh-water habitat types include marshes, streams, rivers, lakes, and ponds; marine habitat types include salt marshes, the coast, the intertidal zone, estuaries, reefs, bays, the open sea, the sea bed, deep water and submarine vents.
Habitat types may change over time. Causes of change may include a violent event (such as the eruption of a volcano, an earthquake, a tsunami, a wildfire or a change in oceanic currents); or change may occur more gradually over millennia with alterations in the climate, as ice sheets and glaciers advance and retreat, and as different weather patterns bring changes of precipitation and solar radiation. Other changes come as a direct result of human activities, such as deforestation, the plowing of ancient grasslands, the diversion and damming of rivers, the draining of marshland and the dredging of the seabed. The introduction of alien species can have a devastating effect on native wildlife – through increased predation, through competition for resources or through the introduction of pests and diseases to which the indigenous species have no immunity.
Definition and etymology
The word "habitat" has been in use since about 1755 and derives from the Latin habitāre, to inhabit, from habēre, to have or to hold. Habitat can be defined as the natural environment of an organism, the type of place in which it is natural for it to live and grow. It is similar in meaning to a biotope; an area of uniform environmental conditions associated with a particular community of plants and animals.
Environmental factors
The chief environmental factors affecting the distribution of living organisms are temperature, humidity, climate, soil and light intensity, and the presence or absence of all the requirements that the organism needs to sustain it. Generally speaking, animal communities are reliant on specific types of plant communities.
Some plants and animals have habitat requirements which are met in a wide range of locations. The small white butterfly Pieris rapae for example is found on all the continents of the world apart from Antarctica. Its larvae feed on a wide range of Brassicas and various other plant species, and it thrives in any open location with diverse plant associations. The large blue butterfly Phengaris arion is much more specific in its requirements; it is found only in chalk grassland areas, its larvae feed on Thymus species, and because of complex life cycle requirements it inhabits only areas in which Myrmica ants live.
Disturbance is important in the creation of biodiverse habitat types. In the absence of disturbance, a climax vegetation cover develops that prevents the establishment of other species. Wildflower meadows are sometimes created by conservationists but most of the flowering plants used are either annuals or biennials and disappear after a few years in the absence of patches of bare ground on which their seedlings can grow. Lightning strikes and toppled trees in tropical forests allow species richness to be maintained as pioneering species move in to fill the gaps created. Similarly, coastal habitat types can become dominated by kelp until the seabed is disturbed by a storm and the algae swept away, or shifting sediment exposes new areas for colonisation. Another cause of disturbance is when an area may be overwhelmed by an invasive introduced species which is not kept under control by natural enemies in its new habitat.
Types
Terrestrial
Terrestrial habitat types include forests, grasslands, wetlands and deserts. Within these broad biomes are more specific habitat types with varying climate types, temperature regimes, soils, altitudes and vegetation. Many of these habitat types grade into each other and each one has its own typical communities of plants and animals. A habitat-type may suit a particular species well, but its presence or absence at any particular location depends to some extent on chance, on its dispersal abilities and its efficiency as a colonizer.
Arid
Arid habitats are those where there is little available water. The most extreme arid habitats are deserts. Desert animals have a variety of adaptations to survive the dry conditions. Some frogs live in deserts, creating moist habitat types underground and hibernating while conditions are adverse. Couch's spadefoot toad (Scaphiopus couchii) emerges from its burrow when a downpour occurs and lays its eggs in the transient pools that form; the tadpoles develop with great rapidity, sometimes in as little as nine days, undergo metamorphosis, and feed voraciously before digging a burrow of their own.
List of arid habitat types
Desert
Fog desert
Polar desert
Steppe
Savanna
Wetland and riparian
Other organisms cope with the drying up of their aqueous habitat in other ways. Vernal pools are ephemeral ponds that form in the rainy season and dry up afterwards. They have their specially-adapted characteristic flora, mainly consisting of annuals, the seeds of which survive the drought, but also some uniquely adapted perennials. Animals adapted to these extreme habitat types also exist; fairy shrimps can lay "winter eggs" which are resistant to desiccation, sometimes being blown about with the dust, ending up in new depressions in the ground. These can survive in a dormant state for as long as fifteen years. Some killifish behave in a similar way; their eggs hatch and the juvenile fish grow with great rapidity when the conditions are right, but the whole population of fish may end up as eggs in diapause in the dried up mud that was once a pond.
Examples of wetland and riparian habitat types
Bog
Marsh
Fen
Flooded grasslands and savannas
Floodplain
Shrub swamp
Swamp
Vernal pool
Wet meadow
Forest
Examples of forest habitat types
Boreal forest
Cloud forest
Peat swamp forest
Temperate coniferous forest
Temperate deciduous forest
Temperate rain forest
Thorn forest
Tropical dry forest
Tropical moist forest
Tropical rain forest
Woodland
Freshwater
Freshwater habitat types include rivers, streams, lakes, ponds, marshes and bogs. They can be divided into running waters (rivers, streams) and standing waters (lakes, ponds, marshes, bogs). Although some organisms are found across most of these habitat types, the majority have more specific requirements. The water velocity, its temperature and oxygen saturation are important factors, but in river systems, there are fast and slow sections, pools, bayous and backwaters which provide a range of habitat types. Similarly, aquatic plants can be floating, semi-submerged, submerged or grow in permanently or temporarily saturated soils besides bodies of water. Marginal plants provide important habitat for both invertebrates and vertebrates, and submerged plants provide oxygenation of the water, absorb nutrients and play a part in the reduction of pollution.
Marine
Marine habitats include brackish water, estuaries, bays, the open sea, the intertidal zone, the sea bed, reefs and deep / shallow water zones. Further variations include rock pools, sand banks, mudflats, brackish lagoons, sandy and pebbly beaches, and seagrass beds, all supporting their own flora and fauna. The benthic zone or seabed provides a home for both static organisms, anchored to the substrate, and for a large range of organisms crawling on or burrowing into the surface. Some creatures float among the waves on the surface of the water, or raft on floating debris, others swim at a range of depths, including organisms in the demersal zone close to the seabed, and myriads of organisms drift with the currents and form the plankton.
List of marine habitat types
Abyssal plain
Aphotic zone
Benthic zone
Cold seep
Coral reef
Demersal zone
Estuary
Hydrothermal vent
Intertidal zone
Kelp forest
Littoral zone
Oceanic trench
Photic zone
Seagrass meadow
Mangrove swamp
Seamount
Tide pool
Urban
Many animals and plants have taken up residence in urban environments. They tend to be adaptable generalists and use the town's features to make their homes. Rats and mice have followed man around the globe, pigeons, peregrines, sparrows, swallows and house martins use the buildings for nesting, bats use roof space for roosting, foxes visit the garbage bins and squirrels, coyotes, raccoons and skunks roam the streets. About 2,000 coyotes are thought to live in and around Chicago. A survey of dwelling houses in northern European cities in the twentieth century found about 175 species of invertebrate inside them, including 53 species of beetle, 21 flies, 13 butterflies and moths, 13 mites, 9 lice, 7 bees, 5 wasps, 5 cockroaches, 5 spiders, 4 ants and a number of other groups. In warmer climates, termites are serious pests in the urban habitat; 183 species are known to affect buildings and 83 species cause serious structural damage.
Microhabitat types
A microhabitat is the small-scale physical requirements of a particular organism or population. Every habitat includes large numbers of microhabitat types with subtly different exposure to light, humidity, temperature, air movement, and other factors. The lichens that grow on the north face of a boulder are different from those that grow on the south face, from those on the level top, and those that grow on the ground nearby; the lichens growing in the grooves and on the raised surfaces are different from those growing on the veins of quartz. Lurking among these miniature "forests" are the microfauna, species of invertebrate, each with its own specific habitat requirements.
There are numerous different microhabitat types in a wood; coniferous forest, broad-leafed forest, open woodland, scattered trees, woodland verges, clearings, and glades; tree trunk, branch, twig, bud, leaf, flower, and fruit; rough bark, smooth bark, damaged bark, rotten wood, hollow, groove, and hole; canopy, shrub layer, plant layer, leaf litter, and soil; buttress root, stump, fallen log, stem base, grass tussock, fungus, fern, and moss. The greater the structural diversity in the wood, the greater the number of microhabitat types that will be present. A range of tree species with individual specimens of varying sizes and ages, and a range of features such as streams, level areas, slopes, tracks, clearings, and felled areas will provide suitable conditions for an enormous number of biodiverse plants and animals. For example, in Britain it has been estimated that various types of rotting wood are home to over 1700 species of invertebrate.
For a parasitic organism, its habitat is the particular part of the outside or inside of its host on or in which it is adapted to live. The life cycle of some parasites involves several different host species, as well as free-living life stages, sometimes within vastly different microhabitat types. One such organism is the trematode (flatworm) Microphallus turgidus, present in brackish water marshes in the southeastern United States. Its first intermediate host is a snail and the second, a glass shrimp. The final host is the waterfowl or mammal that consumes the shrimp.
Extreme habitat types
Although the vast majority of life on Earth lives in mesophyllic (moderate) environments, a few organisms, most of them microbes, have managed to colonise extreme environments that are unsuitable for more complex life forms. There are bacteria, for example, living in Lake Whillans, half a mile below the ice of Antarctica; in the absence of sunlight, they must rely on organic material from elsewhere, perhaps decaying matter from glacier melt water or minerals from the underlying rock. Other bacteria can be found in abundance in the Mariana Trench, the deepest place in the ocean and on Earth; marine snow drifts down from the surface layers of the sea and accumulates in this undersea valley, providing nourishment for an extensive community of bacteria.
Other microbes live in environments lacking in oxygen, and are dependent on chemical reactions other than photosynthesis. Boreholes drilled into the rocky seabed have found microbial communities apparently based on the products of reactions between water and the constituents of rocks. These communities have not been studied much, but may be an important part of the global carbon cycle. Rock in mines two miles deep also harbour microbes; these live on minute traces of hydrogen produced in slow oxidizing reactions inside the rock. These metabolic reactions allow life to exist in places with no oxygen or light, an environment that had previously been thought to be devoid of life.
The intertidal zone and the photic zone in the oceans are relatively familiar habitat types. However, the vast bulk of the ocean is inhospitable to air-breathing humans, with scuba divers limited to the upper 50 metres or so. The lower limit for photosynthesis is around 200 metres, and below that depth the prevailing conditions include total darkness, high pressure, little oxygen (in some places), scarce food resources and extreme cold. This habitat is very challenging to research, and as well as being little-studied, it is vast, with 79% of the Earth's biosphere being at depths greater than 1,000 metres. With no plant life, the animals in this zone are either detritivores, reliant on food drifting down from surface layers, or they are predators, feeding on each other. Some organisms are pelagic, swimming or drifting in mid-ocean, while others are benthic, living on or near the seabed. Their growth rates and metabolisms tend to be slow, their eyes may be very large to detect what little illumination there is, or they may be blind and rely on other sensory inputs. A number of deep sea creatures are bioluminescent; this serves a variety of functions including predation, protection and social recognition. In general, the bodies of animals living at great depths are adapted to high-pressure environments by having pressure-resistant biomolecules and small organic molecules in their cells, known as piezolytes, which give proteins the flexibility they need. There are also unsaturated fats in their membranes which prevent them from solidifying at low temperatures.
Hydrothermal vents were first discovered in the ocean depths in 1977. They result from seawater becoming heated after seeping through cracks to places where hot magma is close to the seabed. The underwater hot springs may gush forth at temperatures of over 340 °C and support unique communities of organisms in their immediate vicinity. The basis for this teeming life is chemosynthesis, a process by which microbes convert such substances as hydrogen sulfide or ammonia into organic molecules. These bacteria and archaea are the primary producers in these ecosystems and support a diverse array of life. About 350 species of organism, dominated by molluscs, polychaete worms and crustaceans, had been discovered around hydrothermal vents by the end of the twentieth century, most of them new to science and endemic to these habitat types.
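A representative overall equation for sulfide-driven chemosynthesis, in a commonly quoted textbook form (an illustration, not taken from this article), is:

$$\mathrm{CO_2 + 4\,H_2S + O_2 \longrightarrow CH_2O + 4\,S + 3\,H_2O}$$

where CH2O stands for the carbohydrate building blocks assembled by the microbes.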
Besides providing locomotion opportunities for winged animals and a conduit for the dispersal of pollen grains, spores and seeds, the atmosphere can be considered to be a habitat-type in its own right. There are metabolically active microbes present that actively reproduce and spend their whole existence airborne, with hundreds of thousands of individual organisms estimated to be present in a cubic meter of air. The airborne microbial community may be as diverse as that found in soil or other terrestrial environments, however, these organisms are not evenly distributed, their densities varying spatially with altitude and environmental conditions. Aerobiology has not been studied much, but there is evidence of nitrogen fixation in clouds, and less clear evidence of carbon cycling, both facilitated by microbial activity.
There are other examples of extreme habitat types where specially adapted lifeforms exist: tar pits teeming with microbial life; naturally occurring crude oil pools inhabited by the larvae of the petroleum fly; hot springs where the temperature may be as high as about 70 °C and cyanobacteria create microbial mats; cold seeps where methane and hydrogen sulfide issue from the ocean floor and support microbes and higher animals such as mussels which form symbiotic associations with these anaerobic organisms; salt pans that harbour salt-tolerant bacteria and archaea and also fungi such as the black yeast Hortaea werneckii and the basidiomycete Wallemia ichthyophaga; ice sheets in Antarctica which support fungi such as Thelebolus spp.; glacial ice with a variety of bacteria and fungi; and snowfields on which algae grow.
Habitat change
Whether from natural processes or the activities of man, landscapes and their associated habitat types change over time. There are the slow geomorphological changes associated with the geologic processes that cause tectonic uplift and subsidence, and the more rapid changes associated with earthquakes, landslides, storms, flooding, wildfires, coastal erosion, deforestation and changes in land use. Then there are the changes in habitat types brought on by alterations in farming practices, tourism, pollution, fragmentation and climate change.
Loss of habitat is the single greatest threat to any species. If an island on which an endemic organism lives becomes uninhabitable for some reason, the species will become extinct. Any type of habitat surrounded by a different habitat is in a similar situation to an island. If a forest is divided into parts by logging, with strips of cleared land separating woodland blocks, and the distances between the remaining fragments exceed the distance an individual animal is able to travel, that species becomes especially vulnerable. Small populations generally lack genetic diversity and may be threatened by increased predation, increased competition, disease and unexpected catastrophe. At the edge of each forest fragment, increased light encourages secondary growth of fast-growing species, and old growth trees are more vulnerable to logging as access is improved. The birds that nest in their crevices, the epiphytes that hang from their branches and the invertebrates in the leaf litter are all adversely affected and biodiversity is reduced. Habitat fragmentation can be ameliorated to some extent by the provision of wildlife corridors connecting the fragments. These can be a river, ditch, strip of trees, hedgerow or even an underpass to a highway. Without the corridors, seeds cannot disperse and animals, especially small ones, cannot travel through the hostile territory, putting populations at greater risk of local extinction.
Habitat disturbance can have long-lasting effects on the environment. Bromus tectorum is a vigorous grass from Europe which has been introduced to the United States where it has become invasive. It is highly adapted to fire, producing large amounts of flammable detritus and increasing the frequency and intensity of wildfires. In areas where it has become established, it has altered the local fire regime to such an extent that native plants cannot survive the frequent fires, allowing it to become even more dominant. A marine example is when sea urchin populations "explode" in coastal waters and destroy all the macroalgae present. What was previously a kelp forest becomes an urchin barren that may last for years and this can have a profound effect on the food chain. Removal of the sea urchins, by disease for example, can result in the seaweed returning, with an over-abundance of fast-growing kelp.
Habitat protection
The protection of habitat types is a necessary step in the maintenance of biodiversity because if habitat destruction occurs, the animals and plants reliant on that habitat suffer. Many countries have enacted legislation to protect their wildlife. This may take the form of the setting up of national parks, forest reserves and wildlife reserves, or it may restrict the activities of humans with the objective of benefiting wildlife. The laws may be designed to protect a particular species or group of species, or the legislation may prohibit such activities as the collecting of bird eggs, the hunting of animals or the removal of plants. A general law on the protection of habitat types may be more difficult to implement than a site specific requirement. A concept introduced in the United States in 1973 involves protecting the critical habitat of endangered species, and a similar concept has been incorporated into some Australian legislation.
International treaties may be necessary for such objectives as the setting up of marine reserves. Another international agreement, the Convention on the Conservation of Migratory Species of Wild Animals, protects animals that migrate across the globe and need protection in more than one country. Even where legislation protects the environment, a lack of enforcement often prevents effective protection. However, the protection of habitat types needs to take into account the needs of the local residents for food, fuel and other resources. Faced with hunger and destitution, a farmer is likely to plough up a level patch of ground despite it being the last suitable habitat for an endangered species such as the San Quintin kangaroo rat, and even kill the animal as a pest. In the interests of ecotourism it is desirable that local communities are educated on the uniqueness of their flora and fauna.
Monotypic habitat
A monotypic habitat type is a concept sometimes used in conservation biology, in which a single species of animal or plant is the only species of its type to be found in a specific habitat and forms a monoculture. Even though it might seem that such a habitat type is impoverished in biodiversity compared with polytypic habitat types, this is not necessarily the case. Monocultures of the exotic plant Hydrilla support a similarly rich fauna of invertebrates as a more varied habitat. The monotypic habitat occurs in both botanical and zoological contexts. Some invasive species may create monocultural stands that prevent other species from growing there. A dominant colonization can occur through exuded retardant chemicals, nutrient monopolization, or a lack of natural controls, such as herbivores or climate, that keep them in balance with their native habitat types. The yellow starthistle, Centaurea solstitialis, is a botanical example, currently dominating over 15,000,000 acres (61,000 km2) in California alone. The non-native freshwater zebra mussel, Dreissena polymorpha, that colonizes areas of the Great Lakes and the Mississippi River watershed, is a zoological example; the predators and parasites that control it in its home range in Russia are absent.
| Biology and health sciences | Ecology | null |
1596497 | https://en.wikipedia.org/wiki/Battle%20of%20the%20sexes%20%28game%20theory%29 | Battle of the sexes (game theory) | In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story.
Imagine that a man and a woman hope to meet this evening, but have a choice between two events to attend: a prize fight and a ballet. The man would prefer to go to the prize fight. The woman would prefer the ballet. Both would prefer to go to the same event rather than different ones. If they cannot communicate, where should they go?
The payoff matrix labeled "Battle of the Sexes (1)" shows the payoffs when the man chooses a row and the woman chooses a column. In each cell, the first number represents the man's payoff and the second number the woman's.
This standard representation does not account for the additional harm that might come from not only going to different locations, but going to the wrong one as well (e.g. the man goes to the ballet while the woman goes to the prize fight, satisfying neither). To account for this, the game would be represented in "Battle of the Sexes (2)", where in the top right box, the players each have a payoff of 1 because they at least get to attend their favored events.
Equilibrium analysis
This game has two pure strategy Nash equilibria, one where both players go to the prize fight, and another where both go to the ballet. There is also a mixed strategy Nash equilibrium, in which the players randomize using specific probabilities. For the payoffs listed in Battle of the Sexes (1), in the mixed strategy equilibrium the man goes to the prize fight with probability 3/5 and the woman to the ballet with probability 3/5, so they end up together at the prize fight with probability 6/25 = (3/5)(2/5) and together at the ballet with probability 6/25 = (2/5)(3/5). Because a pure strategy is a degenerate case of a mixed strategy, the two pure strategy Nash equilibria are also part of the set of mixed strategy Nash equilibria. As a result, there are a total of three mixed strategy Nash equilibria in the Battle of the Sexes.
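The equilibrium arithmetic above can be verified mechanically. The payoff matrices themselves are not reproduced in this text, so the sketch below assumes the canonical values consistent with the quoted probabilities: (fight, fight) → (3, 2), (ballet, ballet) → (2, 3), and (0, 0) for miscoordination:

```python
from fractions import Fraction as F

# Payoffs as (man, woman); "F" = prize fight, "B" = ballet.
# Matrix values are assumed, inferred from the probabilities in the text.
U = {("F", "F"): (3, 2), ("F", "B"): (0, 0),
     ("B", "F"): (0, 0), ("B", "B"): (2, 3)}

p, q = F(3, 5), F(2, 5)          # P(man -> F) and P(woman -> F)
pm = {"F": p, "B": 1 - p}        # man's mixed strategy
pw = {"F": q, "B": 1 - q}        # woman's mixed strategy (ballet with prob 3/5)

# Indifference: each pure strategy earns the same expected payoff
# against the opponent's mix, which is what makes the mix an equilibrium.
man_F = sum(pw[c] * U[("F", c)][0] for c in "FB")    # 6/5
man_B = sum(pw[c] * U[("B", c)][0] for c in "FB")    # 6/5
woman_F = sum(pm[r] * U[(r, "F")][1] for r in "FB")  # 6/5
woman_B = sum(pm[r] * U[(r, "B")][1] for r in "FB")  # 6/5
print(man_F, man_B, woman_F, woman_B)

coordinate = p * q + (1 - p) * (1 - q)               # 12/25
print(coordinate, 1 - coordinate)                    # miscoordination = 13/25
```

Each player's two pure strategies earn the same expected payoff of 6/5 against the other's mix, which is exactly the indifference condition that characterizes a mixed equilibrium.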
This presents an interesting case for game theory since each of the Nash equilibria is deficient in some way. The two pure strategy Nash equilibria are unfair; one player consistently does better than the other. The mixed strategy Nash equilibrium is inefficient: the players will miscoordinate with probability 13/25, leaving each player with an expected return of 6/5 (less than the payoff of 2 from each's less favored pure strategy equilibrium). It remains unclear how expectations would form that would result in a particular equilibrium being played out.
One possible resolution of the difficulty involves the use of a correlated equilibrium. In its simplest form, if the players of the game have access to a commonly observed randomizing device, then they might decide to correlate their strategies in the game based on the outcome of the device. For example, if the players could flip a coin before choosing their strategies, they might agree to correlate their strategies based on the coin flip by, say, choosing ballet in the event of heads and prize fight in the event of tails. Notice that once the results of the coin flip are revealed neither player has any incentives to alter their proposed actions if they believe the other will not. The result is that perfect coordination is always achieved and, prior to the coin flip, the expected payoffs for the players are exactly equal. It remains true, however, that even if there is a correlating device, the Nash equilibria in which the players ignore it will remain; correlated equilibria require both the existence of a correlating device and the expectation that both players will use it to make their decision.
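Under the same assumed payoffs, the coin-flip device described above can be checked in a few lines: heads sends both players to the ballet, tails sends both to the prize fight, and the expected payoffs come out equal:

```python
from fractions import Fraction as F

# Coin-flip correlated equilibrium under the same assumed payoffs:
# heads -> both attend the ballet, tails -> both attend the prize fight.
U = {("F", "F"): (3, 2), ("B", "B"): (2, 3)}

man = F(1, 2) * U[("F", "F")][0] + F(1, 2) * U[("B", "B")][0]
woman = F(1, 2) * U[("F", "F")][1] + F(1, 2) * U[("B", "B")][1]
print(man, woman)   # 5/2 each: coordination is certain, payoffs are equal,
                    # and both improve on the mixed equilibrium's 6/5.
```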
| Mathematics | Game theory | null |
1597976 | https://en.wikipedia.org/wiki/Kiang | Kiang | The kiang (Equus kiang) is the largest of the Asinus subgenus. It is native to the Tibetan Plateau in Ladakh India, northern Pakistan, Tajikistan, China and northern Nepal. It inhabits montane grasslands and shrublands. Other common names for this species include Tibetan wild ass, khyang and gorkhar.
Characteristics
The kiang is the largest of the wild asses, with an average height at the withers of about 140 cm. Kiangs show only slight sexual dimorphism: males weigh roughly 350 to 400 kg, while females weigh roughly 250 to 300 kg. They have a large head, with a blunt muzzle and a convex nose. The mane is upright and relatively short. The coat is a rich chestnut colour, darker brown in winter and a sleek reddish brown in late summer, when the animal moults its woolly fur. The summer coat is long and the winter coat is double that length. The legs, underparts, end of the muzzle, and the inside of the ears are all white. A broad, dark chocolate-coloured dorsal stripe extends from the mane to the end of the tail, which ends in a tuft of blackish brown hairs.
Evolution
The genus Equus, which includes all extant equines, is believed to have evolved from Dinohippus, via the intermediate form Plesippus. One of the oldest species is Equus simplicidens, described as zebra-like with a donkey-shaped head. The oldest fossil to date is ~3.5 million years old from Idaho, US. The genus appears to have spread quickly into the Old World, with the similarly aged Equus livenzovensis documented from western Europe and Russia.
Molecular phylogenies indicate the most recent common ancestor of all modern equids (members of the genus Equus) lived ~5.6 (3.9–7.8) mya. Direct paleogenomic sequencing of a 700,000-year-old middle Pleistocene horse metapodial bone from Canada implies a more recent 4.07 Myr before present date for the most recent common ancestor (MRCA) within the range of 4.0 to 4.5 Myr BP. The oldest divergences are the Asian hemiones (subgenus E. (Asinus), including the kulan, onager, and kiang), followed by the African zebras (subgenera E. (Dolichohippus), and E. (Hippotigris)). All other modern forms including the domesticated horse (and many fossil Pliocene and Pleistocene forms) belong to the subgenus E. (Equus) which diverged ~4.8 (3.2–6.5) million years ago.
Taxonomy
The kiang is closely related to the onager (Equus hemionus), and in some classifications it is considered a subspecies, E. hemionus kiang. Molecular studies, however, indicate that it is a distinct species. An even closer relative, however, may be the extinct Equus conversidens of Pleistocene America, to which it bears a number of striking similarities; however, such a relationship would require kiangs to have crossed Beringia during the Ice Age, for which little evidence exists. Kiangs can crossbreed with onagers, horses, donkeys, and Burchell's zebras in captivity, although, like mules, the resulting offspring are sterile. Kiangs have never been domesticated.
Three kiang subspecies are currently recognised:
E. k. kiang — western kiang in Tibet, Ladakh and southwestern Xinjiang
E. k. holdereri — eastern kiang in Qinghai and southeastern Xinjiang
E. k. polyodon — southern kiang in southern Tibet up to northern Nepal
The eastern kiang is the largest subspecies; the southern kiang is the smallest. The western kiang is slightly smaller than the eastern and also has a darker coat. However, no genetic information confirms the validity of the three subspecies, which may simply represent a cline, with gradual variation between the three forms.
Distribution and habitat
The kiang is distributed from the Kunlun Mountains in the north, across the Tibetan Plateau, to the Himalayas in the south. It occurs mostly in China, but about 2,500–3,000 kiangs are thought to inhabit the Ladakh, Himachal Pradesh, and Uttarakhand regions of India, with smaller numbers along the northern frontier of Nepal.
Kiang herds inhabit alpine meadows and steppe country between elevation. They prefer relatively flat plateaus, wide valleys, and low hills, dominated by grasses, sedges, and smaller amounts of other low-lying vegetation. This open terrain, in addition to supplying them with suitable forage absent in the more arid regions of central Asia, may make it easier for them to detect, and flee from, predators.
Behavior and ecology
The kiang is a herbivore, feeding on grasses and sedges, especially Stipa, but also on other plants such as bog sedges, true sedges, and meadow grasses. When little grass is available, such as during winter or in the more arid margins of their native habitat, kiangs have been observed eating shrubs, herbs, and even Oxytropis roots dug from the ground. Although they do sometimes drink from waterholes, such sources of water are rare on the Tibetan Plateau, and they likely obtain most of their water from the plants they eat, or possibly from snow in winter.
Kiangs sometimes gather in large herds, which may number several hundred individuals. However, these herds are not permanent groupings, but temporary aggregations, consisting either of young males only, or of mothers and their foals. Older males are typically solitary, defending a territory of about from rivals, and dominating any local groups of females. Territorial males sometimes become aggressive towards intruders, kicking and biting at them, but more commonly chase them away after a threat display that involves flattening the ears and braying.
Reproduction
Kiangs mate between late July and late August, when older males tend reproductive females by trotting around them, and then chasing them prior to mating. The length of gestation has been variously reported as seven to 12 months, and results in the birth of a single foal. Females are able to breed again almost immediately after birth, although births every other year are more common. Foals weigh up to at birth, and are able to walk within a few hours. The age of sexual maturity is unknown, although probably around three or four years, as it is in the closely related onager. Kiangs live for up to 20 years in the wild.
In culture
Natural historian Chris Lavers points to travellers' tales of the kiang as one source of inspiration for the unicorn, first described in Indika by the Ancient Greek physician Ctesias.
Ekai Kawaguchi, a Japanese monk who traveled in Tibet from July, 1900 to June 1902, reported:
Thubten Jigme Norbu, the elder brother of Tenzin Gyatso the 14th Dalai Lama, reporting on his trip from Kumbum Monastery in Amdo to Lhasa in 1950, wrote:
| Biology and health sciences | Equidae | Animals |
1597987 | https://en.wikipedia.org/wiki/Gr%C3%A9vy%27s%20zebra | Grévy's zebra | Grévy's zebra (Equus grevyi), also known as the imperial zebra, is the largest living wild equid and the most threatened of the three species of zebra, the other two being the plains zebra and the mountain zebra. Named after French president Jules Grévy, it is found in parts of Kenya and Ethiopia. Superficially, the Grévy's zebra's physical features can help to distinguish it from the other zebra species; its overall appearance is slightly closer to that of a mule, compared to the more "equine" (horse) appearance of the plains and mountain zebras. Compared to other zebra species, Grévy's are the tallest; they have mule-like, larger ears, and have the tightest stripes of all zebras. They have distinctively erect manes, and more slender snouts.
Grévy's zebra live in semi-arid savanna, where they feed on grasses, legumes, and browse, such as acacia; they can survive up to five days without water. They differ from the other zebra species in that they do not live in a harem, and they maintain few long-lasting social bonds. Stallion territoriality and mother–foal relationships form the basis of the social system of the Grévy's zebra. Despite a handful of zoos and animal parks around the world having had successful captive-breeding programs, in its native home this zebra is listed by the IUCN as endangered. Its population has declined from 15,000 to 2,000 since the 1970s. In 2016, the population was reported to be "stable"; however, as of 2020, the wild numbers are still estimated at only around 2,250 animals, in part due to anthrax outbreaks in eastern Africa.
Taxonomy and naming
The Grévy's zebra was first described by French naturalist Émile Oustalet in 1882. He named it after Jules Grévy, then president of France, who, in the 1880s, was given one by the government of Abyssinia. Traditionally, this species was classified in the subgenus Dolichohippus, with the plains zebra and mountain zebra placed in Hippotigris. Groves and Bell (2004) place all three species in the subgenus Hippotigris.
Fossils of zebra-like equids have been found throughout Africa and Asia in Pliocene and Pleistocene deposits. Notable examples include E. sanmeniensis from China, E. cautleyi from India, E. valeriani from central Asia and E. oldowayensis from East Africa. The latter, in particular, is very similar to the Grévy's zebra and may have been its ancestor.
The modern Grévy's zebra arose in the Middle Pleistocene. Zebras appear to be a monophyletic lineage and recent (2013) phylogenies have placed Grévy's zebra in a sister taxon with the plains zebra. In areas where Grévy's zebras are sympatric with plains zebras, the two may gather in the same herds and fertile hybrids do occur.
Description
Grévy's zebra is the largest of all wild equines. It is in head-body with a tail, and stands high at the withers. These zebras weigh . Grévy's zebra differs from the other two zebras in its more primitive characteristics. It is particularly mule-like in appearance; the head is large, long, and narrow with elongated nostril openings; the ears are very large, rounded, and conical and the neck is short but thick. The muzzle is ash-grey to black in colour, and the lips are whiskered. The mane is tall and erect; juveniles have a mane that extends to the length of the back and shortens as they reach adulthood.
As with all zebra species, Grévy's zebra's pelage has a black and white striping pattern. The stripes are narrow and close-set, broader on the neck, and extending to the hooves. The belly and the area around the base of the tail lack stripes and are just white in color, which is unique to the Grévy's zebra. Foals are born with brown and white striping, with the brown stripes darkening as they grow older.
Range and ecology
Grévy's zebra largely inhabits northern Kenya, with some isolated populations in Ethiopia. It was extirpated from Somalia and Djibouti and its status in South Sudan is uncertain. It lives in Acacia-Commiphora bushlands and barren plains. Ecologically, this species is intermediate between the arid-living African wild ass and the water-dependent plains zebra. Lactating mares and non-territorial stallions use areas with green, short grass and medium, dense bush more often than non-lactating mares and territorial stallions.
Grévy's zebras rely on grasses, legumes, and browse for nutrition. They commonly browse when grasses are not plentiful. Their hindgut fermentation digestive system allows them to subsist on diets of lower nutritional quality than that necessary for ruminant herbivores. Grévy's zebras can survive up to a week without water, but will drink daily when it is plentiful. They often migrate to better watered highlands during the dry season. Mares require significantly more water when they are lactating. During droughts, the zebras will dig water holes and defend them. The Grévy's zebra's main predator is the lion, but adults can be hunted by spotted hyenas. African hunting dogs, cheetahs and leopards almost never attack adults, even in desperate times, but sometimes prey on young animals, although mares are fiercely protective of their young. In addition, they are susceptible to various gastro-intestinal parasites, notably of the genus Trichostrongylus.
Behaviour and life history
Adult stallions mostly live in territories during the wet seasons but some may stay in them year round if there is enough water left. Stallions that are unable to establish territories are free-ranging and are known as bachelors. Mares, young and non-territorial stallions wander through large home ranges. The mares will wander from territory to territory preferring the ones with the highest-quality food and water sources. Up to nine stallions may compete for a mare outside of a territory. Territorial stallions will tolerate other stallions who wander in their territory. However, when an oestrous mare is present the territorial stallion keeps other stallions at bay. Non-territorial stallions might avoid territorial ones because of harassment. When mares are not around, a territorial stallion will seek the company of other stallions. The stallion shows his dominance with an arched neck and a high-stepping gait and the least dominant stallions submit by extending their tail, lowering their heads and nuzzling their superior's chest or groin.
Zebras produce numerous sounds and vocalisations. When alarmed, they produce deep, hoarse grunts. Whistles and squeals are also made when alarmed, during fights, when scared or in pain. Snorts may be produced when scared or as a warning. A stallion will bray in defense of his territory, when driving mares, or keeping other stallions at bay. Barks may be made during copulation and distressed foals will squeal. The call of the Grévy's zebra has been described as "something like a hippo's grunt combined with a donkey's wheeze". To get rid of flies or parasites, they roll in dust, water or mud or, in the case of flies, they twitch their skin. They also rub against trees, rocks and other objects to get rid of irritations such as itchy skin, hair or parasites. Although Grévy's zebras do not perform mutual grooming, they do sometimes rub against a conspecific.
Reproduction
Grévy's zebras can mate and give birth year round, but most mating takes place in the early rainy seasons and births mostly take place in August or September after the long rains. An oestrous mare may visit as many as four territories a day and will mate with the stallions in them. Among territorial stallions, the most dominant ones control territories near water sources, which mostly attract mares with dependent foals, while more subordinate stallions control territories away from water with greater amounts of vegetation, which mostly attract mares without dependent foals.
The resident stallions of territories will try to subdue the entering mares with dominance rituals and then continue with courtship and copulation. Grévy's zebra stallions have large testicles and can ejaculate a large amount of semen to replace the sperm of other males. This is a useful adaptation for a species whose mares mate polyandrously. Bachelors or outside territorial stallions sometimes "sneak" copulation of mares in another stallion's territory. While mare associations with individual stallions are brief and mating is promiscuous, mares who have just given birth will reside with one stallion for long periods and mate exclusively with that stallion. Lactating females are harassed by stallions more often than non-lactating ones and thus associating with one male and his territory provides an advantage as he will guard against other males.
Gestation of the Grévy's zebra normally lasts 390 days, with a single foal being born. A newborn zebra will follow anything that moves, so new mothers prevent other mares from approaching their foals while imprinting their own striping pattern, scent and vocalisation on them. Mares with young foals may gather into small groups. Mares may leave their foals in "kindergartens" while searching for water. The foals will not hide, so they can be vulnerable to predators. However, kindergartens tend to be protected by an adult, usually a territorial stallion. A mare with a foal stays with one dominant territorial stallion who has exclusive mating rights to her. While the foal may not be his, the stallion will look after it to ensure that the mare stays in his territory. To adapt to a semi-arid environment, Grévy's zebra foals have longer nursing intervals and wait until they are three months old before they start drinking water. Although offspring become less dependent on their mothers after half a year, associations with them continue for up to three years.
Relationship with humans
The Grévy's zebra was known to Europeans in antiquity and was used by the Romans in circuses. It was subsequently forgotten in the Western world for a thousand years. In the seventeenth century, the king of Shoa (now central Ethiopia) exported two zebras; one to the Sultan of Turkey and another to the Dutch governor of Jakarta. Two centuries later, in 1882, the government of Abyssinia sent one to French president Jules Grévy. It was at that time that the animal was recognised as its own species and named in Grévy's honour. Grévy's zebra appears on the Eritrean 25-cent coin.
Status and conservation
The Grévy's zebra is considered endangered. Its population was estimated to be 15,000 in the 1970s and by the early 21st century the population was lower than 3,500, a 75% decline. In 2008, it was estimated that fewer than 2,500 Grévy's zebras were still living in the wild, further declining to fewer than 2,000 mature individuals in 2016. Nonetheless, the Grévy's zebra population trend was considered stable as of 2016.
There are also an estimated 600 Grévy's zebras in captivity. Captive herds have been known to thrive, like at White Oak Conservation in Yulee, Florida, United States, where more than 70 foals have been born. There, research is underway in partnership with the Conservation Centers for Species Survival on semen collection and freezing and on artificial insemination.
The Grévy's zebra is legally protected in Ethiopia. In Kenya, it is protected by the hunting ban of 1977. In the past, Grévy's zebras were threatened mainly by hunting for their skins which fetched a high price on the world market. However, hunting has declined and the main threat to the zebra is habitat loss and competition with livestock. Cattle gather around watering holes and the Grévy's zebras are fenced from those areas. Community-based conservation efforts have shown to be the most effective in preserving Grévy's zebras and their habitat. Less than 0.5% of the range of the Grévy's zebra is in protected areas. In Ethiopia, the protected areas include Aledeghi Wildlife Reserve, Yabelo Wildlife Sanctuary, Borana National Park, and Chelbi Sanctuary. In Kenya, important protected areas include the Buffalo Springs, Samburu and Shaba National Reserves and the private and community land wildlife conservancies in Isiolo, Samburu and the Laikipia Plateau.
The mesquite plant was introduced into Ethiopia around 1997 and is endangering the zebra's food supply. An invasive species, it is replacing the two grass species, Cenchrus ciliaris and Chrysopogon plumulosus, which make up most of the zebras' diet.
| Biology and health sciences | Equidae | Animals |
1599216 | https://en.wikipedia.org/wiki/Biomagnification | Biomagnification | Biomagnification, also known as bioamplification or biological magnification, is the increase in concentration of a substance, e.g., a pesticide, in the tissues of organisms at successively higher levels in a food chain. This increase can occur as a result of:
Persistence – where the substance cannot be broken down by environmental processes.
Food chain energetics – where biomass shrinks at each successive trophic level while a retained substance is passed on, so its concentration increases progressively as it moves up the food chain.
Low or non-existent rate of internal degradation or excretion of the substance – mainly due to water-insolubility.
Biological magnification often refers to the process whereby substances such as pesticides or heavy metals work their way into lakes, rivers and the ocean, and then move up the food chain in progressively greater concentrations as they are incorporated into the diet of aquatic organisms such as zooplankton, which in turn are eaten perhaps by fish, which then may be eaten by bigger fish, large birds, animals, or humans. The substances become increasingly concentrated in tissues or internal organs as they move up the chain. Bioaccumulants are substances that increase in concentration in living organisms as they take in contaminated air, water, or food because the substances are very slowly metabolized or excreted.
Processes
Although sometimes used interchangeably with "bioaccumulation", an important distinction is drawn between the two, and with bioconcentration.
Bioaccumulation occurs within a trophic level, and is the increase in the concentration of a substance in certain tissues of organisms' bodies due to absorption from food and the environment.
Bioconcentration is defined as occurring when uptake from the water is greater than excretion.
Thus, bioconcentration and bioaccumulation occur within an organism, and biomagnification occurs across trophic (food chain) levels.
Biodilution is a process that occurs across all trophic levels in an aquatic environment; it is the opposite of biomagnification, occurring when a pollutant decreases in concentration as it progresses up a food web.
Many chemicals that bioaccumulate are highly soluble in fats (lipophilic) and insoluble in water (hydrophobic).
For example, though mercury is only present in small amounts in seawater, it is absorbed by algae (generally as methylmercury). Methylmercury is one of the most harmful mercury molecules. It is efficiently absorbed, but only very slowly excreted by organisms. Bioaccumulation and bioconcentration result in buildup in the adipose tissue of successive trophic levels: zooplankton, small nekton, larger fish, etc. Anything which eats these fish also consumes the higher level of mercury the fish have accumulated. This process explains why predatory fish such as swordfish and sharks or birds like osprey and eagles have higher concentrations of mercury in their tissue than could be accounted for by direct exposure alone. For example, herring contains mercury at approximately 0.01 parts per million (ppm) and shark contains mercury at greater than 1 ppm.
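To illustrate the arithmetic of this buildup, the sketch below applies a purely hypothetical, constant biomagnification factor of 10 per trophic step to the herring figure quoted above; real factors vary widely with the substance, the species and the ecosystem.

    # Hypothetical illustration only: a constant tenfold increase per trophic step.
    concentration_ppm = 0.01  # mercury in herring, from the text (ppm)
    bmf_per_step = 10         # assumed biomagnification factor per trophic level

    for level in ("mid-level predator", "top predator such as shark"):
        concentration_ppm *= bmf_per_step
        print(f"{level}: {concentration_ppm:.2f} ppm")
    # Two steps take 0.01 ppm to 1 ppm, the order of magnitude reported for shark.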
DDT is a pesticide known to biomagnify, which is one of the most significant reasons it was deemed harmful to the environment by the EPA and other organizations. DDT is one of the least soluble chemicals known and accumulates progressively in adipose tissue, and as the fat is consumed by predators, the amounts of DDT biomagnify. A well known example of the harmful effects of DDT biomagnification is the significant decline in North American populations of predatory birds such as bald eagles and peregrine falcons due to DDT caused eggshell thinning in the 1950s. DDT is now a banned substance in many parts of the world.
Current status
In a review of a large number of studies, Suedel et al. concluded that although biomagnification is probably more limited in occurrence than previously thought, there is good evidence that DDT, DDE, PCBs, toxaphene, and the organic forms of mercury and arsenic do biomagnify in nature. For other contaminants, bioconcentration and bioaccumulation account for their high concentrations in organism tissues. More recently, Gray reached a similar conclusion, with high tissue concentrations resulting from substances remaining in the organisms and not being diluted to non-threatening concentrations. The success of top predatory-bird recovery (bald eagles, peregrine falcons) in North America following the ban on DDT use in agriculture is testament to the importance of recognizing and responding to biomagnification.
Substances that biomagnify
Two common groups that are known to biomagnify are chlorinated hydrocarbons, also known as organochlorines, and metal compounds such as methylmercury and persistent heavy metals. Both are lipophilic and not easily degraded. Novel organic substances like organochlorines are not easily degraded because organisms lack previous exposure and have thus not evolved specific detoxification and excretion mechanisms, as there has been no prior selection pressure to develop them. These substances are consequently known as "persistent organic pollutants" or POPs.
Metals are not degradable because they are chemical elements. Organisms, particularly those subject to naturally high levels of exposure to metals, have mechanisms to sequester and excrete metals. Problems arise when organisms are exposed to higher concentrations than usual, which they cannot excrete rapidly enough to prevent damage. Persistent heavy metals, such as lead, cadmium, mercury, and arsenic, can have a wide variety of adverse health effects across species.
Novel organic substances
DDT (dichlorodiphenyltrichloroethane).
Hexachlorobenzene (HCB).
PCBs (polychlorinated biphenyls).
Toxaphene.
Monomethylmercury.
| Biology and health sciences | Ecology | Biology |
7366298 | https://en.wikipedia.org/wiki/Solid-state%20drive | Solid-state drive | A solid-state drive (SSD) is a type of solid-state storage device that uses integrated circuits to store data persistently. It is sometimes called a semiconductor storage device, a solid-state device, or a solid-state disk.
SSDs rely on non-volatile memory, typically NAND flash, to store data in memory cells. The performance and endurance of SSDs vary depending on the number of bits stored per cell, ranging from high-performing single-level cells (SLC) to more affordable but slower quad-level cells (QLC). In addition to flash-based SSDs, other technologies such as 3D XPoint offer faster speeds and higher endurance through different data storage mechanisms.
Unlike traditional hard disk drives (HDDs), SSDs have no moving parts, allowing them to deliver faster data access speeds, reduced latency, increased resistance to physical shock, lower power consumption, and silent operation.
Often interfaced to a system in the same way as HDDs, SSDs are used in a variety of devices, including personal computers, enterprise servers, and mobile devices. However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time. Despite these limitations, SSDs are increasingly replacing HDDs, especially in performance-critical applications and as primary storage in many consumer devices.
SSDs come in various form factors and interface types, including SATA, PCIe, and NVMe, each offering different levels of performance. Hybrid storage solutions, such as solid-state hybrid drives (SSHDs), combine SSD and HDD technologies to offer improved performance at a lower cost than pure SSDs.
Attributes
An SSD stores data in semiconductor cells, with its properties varying according to the number of bits stored in each cell (between 1 and 4). Single-level cells (SLC) store one bit of data per cell and provide higher performance and endurance. In contrast, multi-level cells (MLC), triple-level cells (TLC), and quad-level cells (QLC) store more data per cell but have lower performance and endurance. SSDs using 3D XPoint technology, such as Intel’s Optane, store data by changing electrical resistance instead of storing electrical charges in cells, which can provide faster speeds and longer data persistence compared to conventional flash memory. SSDs based on NAND flash slowly leak charge when not powered, while heavily used consumer drives may start losing data after one to two years in storage. SSDs have a limited lifetime number of writes, and also slow down as they reach their full storage capacity.
SSDs also have internal parallelism that allows them to manage multiple operations simultaneously, which enhances their performance.
Unlike HDDs and similar electromechanical magnetic storage, SSDs do not have moving mechanical parts, which provides advantages such as resistance to physical shock, quieter operation, and faster access times. Their lower latency results in higher input/output rates (IOPS) than HDDs.
Some SSDs are combined with traditional hard drives in hybrid configurations, such as Intel's Hystor and Apple's Fusion Drive. These drives use both flash memory and spinning magnetic disks in order to improve the performance of frequently-accessed data.
Traditional interfaces (e.g. SATA and SAS) and standard HDD form factors allow such SSDs to be used as drop-in replacements for HDDs in computers and other devices. Newer form factors such as mSATA, M.2, U.2, NF1/M.3/NGSFF, XFM Express (Crossover Flash Memory, form factor XT2) and EDSFF and higher speed interfaces such as NVM Express (NVMe) over PCI Express (PCIe) can further increase performance over HDD performance.
Comparison with other technologies
Hard disk drives
Traditional HDD benchmarks tend to focus on performance characteristics such as rotational latency and seek time. As SSDs do not need to spin or seek to locate data, they are vastly superior to HDDs in such tests. However, SSDs have challenges with mixed reads and writes, and their performance may degrade over time. Therefore, SSD testing typically starts from the full (in-use) drive, as a new and empty drive may show much better write performance than it would after only weeks of use.
The reliability of both HDDs and SSDs varies greatly among models. Some field failure rates indicate that SSDs are significantly more reliable than HDDs. However, SSDs are sensitive to sudden power interruption, sometimes resulting in aborted writes or even cases of the complete loss of the drive.
Most of the advantages of solid-state drives over traditional hard drives are due to their ability to access data completely electronically instead of electromechanically, resulting in superior transfer speeds and mechanical ruggedness. On the other hand, hard disk drives offer significantly higher capacity for their price.
In traditional HDDs, a rewritten file will generally occupy the same location on the disk surface as the original file, whereas in SSDs the new copy will often be written to different NAND cells for the purpose of wear leveling. The wear-leveling algorithms are complex and difficult to test exhaustively. As a result, one major cause of data loss in SSDs is firmware bugs.
Memory cards
While both memory cards and most SSDs use flash memory, they have very different characteristics, including power consumption, performance, size, and reliability. Originally, solid state drives were shaped and mounted in the computer like hard drives. In contrast, memory cards (such as Secure Digital (SD), CompactFlash (CF), and many others) were originally designed for digital cameras and later found their way into cell phones, gaming devices, GPS units, etc. Most memory cards are physically smaller than SSDs, and designed to be inserted and removed repeatedly.
Failure and recovery
SSDs have different failure modes from traditional magnetic hard drives. Because solid-state drives contain no moving parts, they are generally not subject to mechanical failures. However, other types of failures can occur. For example, incomplete or failed writes due to sudden power loss may be more problematic than with HDDs, and the failure of a single chip may result in the loss of all data stored on it. Nonetheless, studies indicate that SSDs are generally reliable, often exceeding their manufacturer-stated lifespan and having lower failure rates than HDDs. However, studies also note that SSDs experience higher rates of uncorrectable errors, which can lead to data loss, compared to HDDs.
The endurance of an SSD is typically listed on its datasheet in one of two forms:
either n DW/D (n drive writes per day)
or m TBW (maximum terabytes written).
For example, a Samsung 970 EVO NVMe M.2 SSD (2018) with 1 TB of capacity has an endurance rating of 600 TBW.
Recovering data from SSDs presents challenges due to the non-linear and complex nature of data storage in solid-state drives. The internal operations of SSDs vary by manufacturer, with commands (e.g., TRIM and ATA Secure Erase) and programs (e.g., hdparm) being able to erase and modify the bits of a deleted file.
Reliability metrics
The JEDEC Solid State Technology Association (JEDEC) has established standards for SSD reliability metrics, which include:
Unrecoverable Bit Error Ratio (UBER)
Terabytes Written (TBW) – the total number of terabytes that can be written to a drive within its warranty period
Drive Writes Per Day (DWPD) – the number of times the full capacity of the drive can be written to per day within its warranty period (the sketch below shows how TBW and DWPD convert into one another)
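The two figures are interconvertible given the drive's capacity and warranty period. A back-of-envelope conversion in Python, using the 1 TB, 600 TBW Samsung 970 EVO figure quoted earlier and assuming a five-year warranty period (an assumption, not a figure stated here):

    # DWPD = TBW / (capacity in TB * warranty period in days)
    capacity_tb = 1.0        # drive capacity from the example above
    tbw = 600.0              # rated terabytes written
    warranty_days = 5 * 365  # assumed five-year warranty period

    dwpd = tbw / (capacity_tb * warranty_days)
    print(f"{dwpd:.2f} drive writes per day")  # about 0.33 DWPD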
Applications
In a distributed computing environment, SSDs can be used as a distributed cache layer that temporarily absorbs the large volume of user requests to slower HDD-based backend storage systems. This layer provides much higher bandwidth and lower latency than the storage system would, and can be managed in a number of forms, such as a distributed key-value database and a distributed file system. On supercomputers, this layer is typically referred to as burst buffer.
Flash-based solid-state drives can be used to create network appliances from general-purpose personal computer hardware. A write-protected flash drive containing the operating system and application software can substitute for larger, less reliable disk drives or CD-ROMs. Appliances built this way can provide an inexpensive alternative to expensive router and firewall hardware.
SSDs based on an SD card with a live SD operating system are easily write-locked. Combined with a cloud computing environment or other writable medium, an OS booted from a write-locked SD card is reliable, persistent and impervious to permanent corruption.
Hard-drive cache
In 2011, Intel introduced a caching mechanism for their Z68 chipset (and mobile derivatives) called Smart Response Technology, which allows a SATA SSD to be used as a cache (configurable as write-through or write-back) for a conventional, magnetic hard disk drive. A similar technology is available on HighPoint's RocketHybrid PCIe card.
Solid-state hybrid drives (SSHDs) are based on the same principle, but integrate some amount of flash memory on board a conventional drive instead of using a separate SSD. The flash layer in these drives can be accessed independently from the magnetic storage by the host using ATA-8 commands, allowing the operating system to manage it. For example, Microsoft's ReadyDrive technology explicitly stores portions of the hibernation file in the cache of these drives when the system hibernates, making the subsequent resume faster.
Dual-drive hybrid systems are combining the usage of separate SSD and HDD devices installed in the same computer, with overall performance optimization managed by the computer user, or by the computer's operating system software. Examples of this type of system are bcache and dm-cache on Linux, and Apple's Fusion Drive.
Architecture and function
The primary components of an SSD are the controller and the memory used to store data. Early SSDs used volatile DRAM for storage, but since 2009 most SSDs have used non-volatile NAND flash memory, which retains data even when powered off. Flash memory SSDs store data in metal–oxide–semiconductor (MOS) integrated circuit chips, using non-volatile floating-gate memory cells.
Controller
Every SSD includes a controller, which manages the data flow between the NAND memory and the host computer. The controller is an embedded processor that runs firmware to optimize performance, manage data placement, and ensure data integrity.
Some of the primary functions performed by the controller are:
Bad block mapping
Read and write caching
Encryption
Crypto-shredding
Error detection and correction using error-correcting code (ECC), such as BCH code
Garbage collection
Read scrubbing and management of read disturb
Wear leveling
The overall performance of an SSD can scale with the number of parallel NAND chips and the efficiency of the controller. For example, controllers that enable parallel processing of NAND flash chips can improve bandwidth and reduce latency.
Micron and Intel pioneered faster SSDs by implementing techniques such as data striping and interleaving to enhance read/write speeds. More recently, SandForce introduced controllers that incorporate data compression to reduce the amount of data written to the flash memory, potentially increasing both performance and endurance.
Wear leveling
Wear leveling is a technique used in SSDs to ensure that write and erase operations are distributed evenly across all blocks of the flash memory. Without this, specific blocks could wear out prematurely due to repeated use, reducing the overall lifespan of the SSD. The process moves data that is infrequently changed (cold data) from heavily used blocks, so that data that changes more frequently (hot data) can be written to those blocks. This helps distribute wear more evenly across the entire SSD. However, this process introduces additional writes, known as write amplification, which must be managed to balance performance and durability.
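To make the idea concrete, here is a deliberately minimal toy in Python, not any vendor's actual algorithm: every write is steered to the block with the lowest erase count, so wear spreads evenly across the flash.

    # Toy wear-leveling allocator: always write to the least-erased block.
    class ToyWearLeveler:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks

        def write(self, data):
            # Pick the block erased the fewest times (payload ignored in this toy).
            block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
            self.erase_counts[block] += 1  # erase-before-write consumes one cycle
            return block

    ftl = ToyWearLeveler(num_blocks=4)
    for i in range(8):
        ftl.write(f"page {i}")
    print(ftl.erase_counts)  # [2, 2, 2, 2]: wear spread evenly across blocks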
Memory
Flash memory
Most SSDs use non-volatile NAND flash memory for data storage, primarily due to its cost-effectiveness and ability to retain data without a constant power supply. NAND flash-based SSDs store data in semiconductor cells, with the specific architecture influencing performance, endurance, and cost.
There are various types of NAND flash memory, categorized by the number of bits stored in each cell (the sketch after this list shows how the bit count translates into voltage states):
Single-Level Cell (SLC): Stores 1 bit per cell. SLC provides the highest performance, reliability, and endurance but is more expensive.
Multi-Level Cell (MLC): Stores 2 bits per cell. MLC offers a balance between cost, performance, and endurance.
Triple-Level Cell (TLC): Stores 3 bits per cell. TLC is less expensive but slower and less durable than SLC and MLC.
Quad-Level Cell (QLC): Stores 4 bits per cell. QLC is the most affordable option but has the lowest performance and endurance.
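These performance and endurance differences follow from how many distinct charge levels each cell must hold: n bits per cell require 2^n distinguishable voltage states, shrinking the margin between adjacent states. A minimal illustration:

    # Each extra bit per cell doubles the number of voltage states to distinguish.
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} states")
    # SLC needs only 2 states; QLC needs 16, hence slower sensing and lower endurance.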
Over time, SSD controllers have improved the efficiency of NAND flash, incorporating techniques such as interleaved memory, advanced error correction, and wear leveling to optimize performance and extend the lifespan of the drive. Lower-end SSDs often use QLC or TLC memory, while higher-end drives for enterprise or performance-critical applications may use MLC or SLC.
In addition to the flat (planar) NAND structure, many SSDs now use 3D NAND (or V-NAND), where memory cells are stacked vertically, increasing storage density while improving performance and reducing costs.
DRAM and DIMM
Some SSDs use volatile DRAM instead of NAND flash, offering very high-speed data access but requiring a constant power supply to retain data. DRAM-based SSDs are typically used in specialized applications where performance is prioritized over cost or non-volatility. Many SSDs, such as NVDIMM devices, are equipped with backup power sources such as internal batteries or external AC/DC adapters. These power sources ensure data is transferred to a backup system (usually NAND flash or another storage medium) in the event of power loss, preventing data corruption or loss. Similarly, ULLtraDIMM devices use components designed for DIMM modules, but only use flash memory, similar to a DRAM SSD.
DRAM-based SSDs are often used for tasks where data must be accessed at high speeds with low latency, such as in high-performance computing or certain server environments.
3D XPoint
3D XPoint is a type of non-volatile memory technology developed by Intel and Micron, announced in 2015. It operates by changing the electrical resistance of materials in its cells, offering much faster access times than NAND flash. 3D XPoint-based SSDs, such as Intel’s Optane drives, provide lower latency and higher endurance than NAND-based drives, although they are more expensive per gigabyte.
Other
Drives known as hybrid drives or solid-state hybrid drives (SSHDs) use a hybrid of spinning disks and flash memory. Some SSDs use magnetoresistive random-access memory (MRAM) for storing data.
Cache and buffer
Many flash-based SSDs include a small amount of volatile DRAM as a cache, similar to the buffers in hard disk drives. This cache can temporarily hold data while it is being written to the flash memory, and it also stores metadata such as the mapping of logical blocks to physical locations on the SSD.
Some SSD controllers, like those from SandForce, achieve high performance without using an external DRAM cache. These designs rely on other mechanisms, such as on-chip SRAM, to manage data and minimize power consumption.
Additionally, some SSDs use an SLC cache mechanism to temporarily store data in single-level cell (SLC) mode, even on multi-level cell (MLC) or triple-level cell (TLC) SSDs. This improves write performance by allowing data to be written to faster SLC storage before being moved to slower, higher-capacity MLC or TLC storage.
On NVMe SSDs, Host Memory Buffer (HMB) technology allows the SSD to use a portion of the system’s DRAM instead of relying on a built-in DRAM cache, reducing costs while maintaining a high level of performance.
In certain high-end consumer and enterprise SSDs, larger amounts of DRAM are included to cache both file table mappings and written data, reducing write amplification and enhancing overall performance.
Battery and supercapacitor
Higher-performing SSDs may include a capacitor or battery, which helps preserve data integrity in the event of an unexpected power loss. The capacitor or battery provides enough power to allow the data in the cache to be written to the non-volatile memory, ensuring no data is lost.
In some SSDs that use multi-level cell (MLC) flash memory, a potential issue known as "lower page corruption" can occur if power is lost while programming an upper page. This can result in previously written data becoming corrupted. To address this, some high-end SSDs incorporate supercapacitors to ensure all data can be safely written during a sudden power loss.
Some consumer SSDs have built-in capacitors to save critical data such as the Flash Translation Layer (FTL) mapping table. Examples include the Crucial M500 and Intel 320 series. Enterprise-class SSDs, such as the Intel DC S3700 series, often come with more robust power-loss protection mechanisms like supercapacitors or batteries.
Host interface
The host interface of an SSD refers to the physical connector and the signaling methods used to communicate between the SSD and the host system. This interface is managed by the SSD's controller and is often similar to those found in traditional hard disk drives (HDDs). Common interfaces include:
Serial ATA: One of the most widely used interfaces in consumer SSDs. SATA 3.0 supports transfer speeds up to 6.0 Gbit/s.
Serial attached SCSI: Primarily used in enterprise environments, SAS interfaces are faster and more robust than SATA. SAS 3.0 offers speeds of up to 12.0 Gbit/s.
PCI Express (PCIe): A high-speed interface used in high-performance SSDs. PCIe 3.0 x4 supports transfer speeds of up to 31.5 Gbit/s.
M.2: A newer interface designed for SSDs that is more compact than SATA or PCIe, often found in laptops and high-end desktops. M.2 supports both SATA (up to 6.0 Gbit/s) and PCIe (up to 31.5 Gbit/s) interfaces.
U.2: Another interface used for enterprise-grade SSDs, providing PCIe 3.0 x4 speeds but with a more robust connector suitable for server environments.
Fibre Channel: Typically used in enterprise systems, Fibre Channel interfaces offer high data transfer speeds, with modern versions supporting up to 128 Gbit/s.
USB: Many external SSDs use the Universal Serial Bus interface, with modern versions like USB 3.1 Gen 2 supporting speeds of up to 10 Gbit/s.
Thunderbolt: Some high-end external SSDs use the Thunderbolt interface.
Parallel ATA (PATA): An older interface used in early SSDs, with speeds up to 1064 Mbit/s. PATA has largely been replaced by SATA due to higher data transfer rates and greater reliability.
Parallel SCSI: An interface primarily used in servers, with speeds ranging from 40 Mbit/s to 2560 Mbit/s. It has mostly been replaced by Serial Attached SCSI. The last SCSI-based SSD was introduced in 2004.
SSDs may support various logical interfaces, which define the command sets used by operating systems to communicate with the SSD. Two common logical interfaces include:
Advanced Host Controller Interface (AHCI): Initially designed for HDDs, AHCI is commonly used with SATA SSDs but is less efficient for modern SSDs due to its overhead.
NVM Express (NVMe): A modern interface designed specifically for SSDs, NVMe takes full advantage of the parallelism in SSDs, providing significantly lower latency and higher throughput than AHCI (a Linux detection sketch follows this list).
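As a hedged, Linux-specific illustration (it assumes the /sys/block sysfs layout), the sketch below lists block devices, flags NVMe-attached ones by their device name, and reads the rotational attribute to separate solid-state from rotating media:

    import os

    # NVMe block devices appear as nvme*, and /sys/block/<dev>/queue/rotational
    # reads "0" for non-rotational (solid-state) media.
    for dev in sorted(os.listdir("/sys/block")):
        try:
            with open(f"/sys/block/{dev}/queue/rotational") as f:
                rotational = f.read().strip() == "1"
        except OSError:
            continue  # virtual or removed device without a queue directory
        interface = "NVMe" if dev.startswith("nvme") else "SATA/SAS or other"
        media = "rotational (HDD)" if rotational else "solid-state"
        print(f"{dev}: {media}, {interface}")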
Configurations
The size and shape of any device are largely driven by the size and shape of the components used to make that device. Traditional HDDs and optical drives are designed around the rotating platter(s) or optical disc along with the spindle motor inside. Since an SSD is made up of various interconnected integrated circuits (ICs) and an interface connector, its shape is no longer limited to the shape of rotating media drives. Some solid-state storage solutions come in a larger chassis that may even be a rack-mount form factor with numerous SSDs inside. They would all connect to a common bus inside the chassis and connect outside the box with a single connector.
For general computer use, the 2.5-inch form factor (typically found in laptops and used for most SATA SSDs) is the most popular, in three thicknesses (7.0mm, 9.5mm, 14.8 or 15.0mm; with 12.0mm also available for some models). For desktop computers with 3.5-inch hard disk drive slots, a simple adapter plate can be used to make such a drive fit. Other types of form factors are more common in enterprise applications. An SSD can also be completely integrated in the other circuitry of the device, as in the Apple MacBook Air (starting with the fall 2010 model). More recently, the mSATA and M.2 form factors have also gained popularity, primarily in laptops.
Standard HDD form factors
The benefit of using a current HDD form factor would be to take advantage of the extensive infrastructure already in place to mount and connect the drives to the host system. These traditional form factors are known by the size of the rotating media (i.e., 5.25-inch, 3.5-inch, 2.5-inch or 1.8-inch) and not the dimensions of the drive casing.
Standard card form factors
For applications where space is at a premium, like for ultrabooks or tablet computers, a few compact form factors were standardized for flash-based SSDs.
There is the mSATA form factor, which uses the PCI Express Mini Card physical layout. It remains electrically compatible with the PCI Express Mini Card interface specification while requiring an additional connection to the SATA host controller through the same connector.
The M.2 form factor, formerly known as the Next Generation Form Factor (NGFF), is a natural transition from mSATA and the physical layout it used to a more usable and more advanced form factor. While mSATA took advantage of an existing form factor and connector, M.2 has been designed to maximize usage of the card space, while minimizing the footprint. The M.2 standard allows both SATA and PCI Express SSDs to be fitted onto M.2 modules.
Some high-performance, high-capacity drives use the standard PCI Express add-in card form factor to house additional memory chips, permit the use of higher power levels, and allow the use of a large heat sink. There are also adapter boards that convert other form factors, especially M.2 drives with a PCIe interface, into regular add-in cards.
Disk-on-a-module form factors
A disk-on-a-module (DOM) is a flash drive with either 40/44-pin Parallel ATA (PATA) or SATA interface, intended to be plugged directly into the motherboard and used as a computer hard disk drive (HDD). DOM devices emulate a traditional hard disk drive, resulting in no need for special drivers or other specific operating system support. DOMs are usually used in embedded systems, which are often deployed in harsh environments where mechanical HDDs would simply fail, or in thin clients because of small size, low power consumption, and silent operation.
Storage capacities range from 4 MB to 128 GB, with different variations in physical layout, including vertical or horizontal orientation.
Box form factors
Many of the DRAM-based solutions use a box that is often designed to fit in a rack-mount system. The number of DRAM components required to get sufficient capacity to store the data along with the backup power supplies requires a larger space than traditional HDD form factors.
Bare-board form factors
Form factors which were more common to memory modules are now being used by SSDs to take advantage of their flexibility in laying out the components. Some of these include PCIe, mini PCIe, mini-DIMM, MO-297, and many more. The SATADIMM from Viking Technology uses an empty DDR3 DIMM slot on the motherboard to provide power to the SSD with a separate SATA connector to provide the data connection back to the computer. The result is an easy-to-install SSD with a capacity equal to drives that typically take a full 2.5-inch drive bay. At least one manufacturer, Innodisk, has produced a drive that sits directly on the SATA connector (SATADOM) on the motherboard without any need for a power cable. Some SSDs are based on the PCIe form factor and connect both the data interface and power through the PCIe connector to the host. These drives can use either direct PCIe flash controllers or a PCIe-to-SATA bridge device which then connects to SATA flash controllers.
There are also SSDs in the form of PCIe cards; these are sometimes called HHHL (half height, half length) or AIC (add-in card) SSDs.
Ball grid array form factors
In the early 2000s, a few companies introduced SSDs in Ball Grid Array (BGA) form factors, such as M-Systems' (now SanDisk) DiskOnChip and Silicon Storage Technology's NANDrive (now produced by Greenliant Systems), and Memoright's M1000 for use in embedded systems. The main benefits of BGA SSDs are their low power consumption, small chip package size to fit into compact subsystems, and that they can be soldered directly onto a system motherboard to reduce adverse effects from vibration and shock.
Such embedded drives often adhere to the eMMC and eUFS standards.
Development and history
Early SSDs using RAM and similar technology
The first devices resembling solid-state drives (SSDs) used semiconductor technology, with an early example being the 1978 StorageTek STC 4305. This device was a plug-compatible replacement for the IBM 2305 hard drive, initially using charge-coupled devices for storage and later switching to dynamic random-access memory (DRAM). The STC 4305 was significantly faster than its mechanical counterparts and cost around $400,000 for a 45 MB capacity. Though early SSD-like devices existed, they were not widely used due to their high cost and small storage capacity.
In the late 1980s, companies like Zitel began selling DRAM-based SSD products under the name "RAMDisk." These devices were primarily used in specialized systems like those made by UNIVAC and Perkin-Elmer.
SSDs using Flash
Flash memory, a key component in modern SSDs, was invented in 1980 by Fujio Masuoka at Toshiba. Flash-based SSDs were patented in 1989 by the founders of SanDisk, which released its first product in 1991: a 20 MB SSD for IBM laptops. While the storage capacity was limited and the price high (around $1,000), this marked the beginning of a transition to flash memory as an alternative to traditional hard drives.
In the 1990s, new manufacturers of flash memory drives emerged, including STEC, Inc., M-Systems, and BiTMICRO.
As the technology advanced, SSDs saw dramatic improvements in capacity, speed, and affordability. By 2016, commercially available SSDs had more capacity than the largest available HDDs. By 2018, flash-based SSDs had reached capacities of up to 100 TB in enterprise products, with consumer SSDs offering up to 16 TB. These advancements were accompanied by significant increases in read and write speeds, with some high-end consumer models reaching speeds of up to 14.5 GB/s.
In 2021, NVMe 2.0 with Zoned Namespaces (ZNS) was announced. ZNS allows data to be mapped directly to its physical location in memory, providing direct access on an SSD without a flash translation layer. In 2024, Samsung announced what it called the world's first SSD with a hybrid PCIe interface, the Samsung 990 EVO. The hybrid interface runs in either the x4 PCIe 4.0 or x2 PCIe 5.0 modes, a first for an M.2 SSD.
SSD prices have also fallen dramatically, with the cost per gigabyte decreasing from around $50,000 in 1991 to less than $0.05 by 2020.
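Taking the two endpoint figures at face value, the implied average rate of price decline is roughly 38% per year; a quick check:

    # Compound annual rate of decline from $50,000/GB (1991) to $0.05/GB (2020).
    start_price, end_price = 50_000.0, 0.05
    years = 2020 - 1991
    annual_factor = (end_price / start_price) ** (1 / years)
    print(f"{1 - annual_factor:.1%} average annual price decline")  # ~37.9%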
Enterprise flash drives
Enterprise flash drives (EFDs) are designed for high-performance applications requiring fast input/output operations per second (IOPS), reliability, and energy efficiency. EFDs often have higher specifications than consumer SSDs, making them suitable for mission-critical applications. The term was first used by EMC in 2008 to describe SSDs built for enterprise environments.
One example of an EFD is the Intel DC S3700 series, launched in 2012. These drives were notable for their consistent performance, maintaining IOPS variation within a narrow range, which is crucial for enterprise environments.
Another significant product is the Toshiba PX02SS series, launched in 2016. Designed for write-intensive applications like online transaction processing, these drives achieved impressive read and write speeds and high endurance ratings.
Drives using other persistent memory technologies
In 2017, Intel introduced SSDs based on 3D XPoint technology under the Optane brand. Unlike NAND flash, 3D XPoint uses a different method to store data, offering higher IOPS performance, although sequential read and write speeds remain slower compared to traditional SSDs.
Consumer use
As SSD technology continues to improve, they are increasingly used in ultra-mobile PCs and lightweight laptop systems. The first flash-memory SSD-based PC to become available was the Sony Vaio UX90, announced for pre-order on 27 June 2006, which began shipping in Japan on 3 July 2006 with a 16 GB flash memory hard drive. Another of the first mainstream releases of SSD was the XO Laptop, built as part of the One Laptop Per Child project. Mass production of these computers, built for children in developing countries, began in December 2007. By 2009, Dell, Toshiba, Asus, Apple, and Lenovo had begun producing laptops with SSDs.
By 2010, Apple's MacBook Air line began using solid state drives as the default. In 2011, Intel's Ultrabooks became the first widely available consumer computers using SSDs aside from the MacBook Air. At present, SSD devices are widely used and distributed by a number of companies, with a small number of companies manufacturing the NAND flash devices within them.
Sales
SSD shipments were 11 million units in 2009, 17.3 million units in 2011 (for a total of US$5 billion), and 39 million units in 2012, and were expected to rise to 83 million units in 2013, 201.4 million units in 2016, and 227 million units in 2017.
Revenues for the SSD market worldwide totaled $585 million in 2008, rising over 100% from $259 million in 2007.
File-system support
The same file systems used on hard disk drives can typically also be used on solid state drives. File systems that support SSDs generally also support the TRIM command, which helps the SSD to recycle discarded data. The file system does not need to manage wear leveling or other flash memory characteristics, as they are handled internally by the SSD. Some log-structured file systems (e.g. F2FS, JFFS2) help to reduce write amplification on SSDs, especially in situations where only very small amounts of data are changed, such as when updating file-system metadata.
If an operating system does not support using TRIM on discrete swap partitions, it might be possible to use swap files inside an ordinary file system instead. For example, OS X does not support swap partitions; it only swaps to files within a file system, so it can use TRIM when, for example, swap files are deleted.
Linux
Since 2010, standard Linux drive utilities have taken care of appropriate partition alignment by default.
Kernel support for the TRIM operation was introduced in version 2.6.33 of the Linux kernel mainline, released on 24 February 2010. The ext4, Btrfs, XFS, JFS, and F2FS file systems include support for the discard (TRIM or UNMAP) function. To make use of TRIM, a file system must be mounted using the discard parameter. Linux swap partitions are by default performing discard operations when the underlying drive supports TRIM, with the possibility to turn them off. Support for queued TRIM, a SATA 3.1 feature that results in TRIM commands not disrupting the command queues, was introduced in Linux kernel 3.12, released on November 2, 2013.
An alternative to the kernel-level TRIM operation is to use a user-space utility called fstrim that goes through all of the unused blocks in a filesystem and dispatches TRIM commands for those areas. The fstrim utility is usually run by cron as a scheduled task.
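A hedged sketch of performing the same scheduled trim from Python rather than a cron entry, assuming the util-linux fstrim binary is installed and the script runs with root privileges:

    import subprocess

    # -a trims every mounted filesystem that supports discard;
    # -v reports how many bytes were trimmed on each one.
    result = subprocess.run(["fstrim", "-a", "-v"], capture_output=True, text=True)
    print(result.stdout or result.stderr)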
Linux performance considerations
During installation, Linux distributions usually do not configure the installed system to use TRIM and thus the /etc/fstab file requires manual modifications. This is because the current Linux TRIM command implementation might not be optimal. It has been proven to cause a performance degradation instead of a performance increase under certain circumstances. Linux sends an individual TRIM command to each sector, instead of a vectorized list defining a TRIM range as recommended by the TRIM specification.
For performance reasons, it is recommended to switch the I/O scheduler from the default CFQ (Completely Fair Queuing) to NOOP or Deadline. CFQ was designed for traditional magnetic media and seek optimization, thus many of those I/O scheduling efforts are wasted when used with SSDs. As part of their designs, SSDs offer much bigger levels of parallelism for I/O operations, so it is preferable to leave scheduling decisions to their internal logic, especially for high-end SSDs.
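A hedged sketch of switching the scheduler per device through sysfs follows; root privileges are required, the device name is an example, and the available names vary by kernel (newer blk-mq kernels expose "none" and "mq-deadline" rather than "noop" and "deadline"):

    DEVICE = "sda"  # example block device name; substitute your SSD's
    path = f"/sys/block/{DEVICE}/queue/scheduler"

    with open(path) as f:
        print("available schedulers:", f.read().strip())  # current one in [brackets]

    with open(path, "w") as f:
        f.write("noop")  # or "none" on blk-mq kernels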
A scalable block layer for high-performance SSD storage, known as blk-multiqueue or blk-mq and developed primarily by Fusion-io engineers, was merged into the Linux kernel mainline in kernel version 3.13, released on 19 January 2014. This leverages the performance offered by SSDs and NVMe by allowing much higher I/O submission rates. With this new design of the Linux kernel block layer, internal queues are split into two levels (per-CPU and hardware-submission queues), thus removing bottlenecks and allowing much higher levels of I/O parallelization. As of version 4.0 of the Linux kernel, released on 12 April 2015, VirtIO block driver, the SCSI layer (which is used by Serial ATA drivers), device mapper framework, loop device driver, unsorted block images (UBI) driver (which implements erase block management layer for flash memory devices) and RBD driver (which exports Ceph RADOS objects as block devices) have been modified to actually use this new interface; other drivers will be ported in the following releases.
macOS
Versions since Mac OS X 10.6.8 (Snow Leopard) support TRIM but only when used with an Apple-purchased SSD. TRIM is not automatically enabled for third-party drives, although it can be enabled by using third-party utilities such as Trim Enabler. The status of TRIM can be checked in the System Information application or in the system_profiler command-line tool.
Versions since OS X 10.10.4 (Yosemite) include sudo trimforce enable as a Terminal command that enables TRIM on non-Apple SSDs. There is also a technique to enable TRIM in versions earlier than Mac OS X 10.6.8, although it remains uncertain whether TRIM is actually utilized properly in those cases.
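As an illustration (the output is abbreviated and may vary by hardware and macOS version), enabling TRIM on a third-party SATA SSD and then verifying it from Terminal might look like:

    # prompts for confirmation, then reboots the machine
    sudo trimforce enable
    # afterwards, check the TRIM status reported by System Information
    system_profiler SPSerialATADataType | grep "TRIM Support"
    #       TRIM Support: Yes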
Microsoft Windows
Prior to version 7, Microsoft Windows did not take any specific measures to support solid state drives. From Windows 7, the standard NTFS file system provides support for the TRIM command.
By default, Windows 7 and newer versions execute TRIM commands automatically if the device is detected to be a solid-state drive. However, because TRIM irreversibly resets all freed space, it may be desirable to disable support where enabling data recovery is preferred over wear leveling. Windows implements TRIM for more than just file-delete operations. The TRIM operation is fully integrated with partition- and volume-level commands such as format and delete, with file-system commands relating to truncate and compression, and with the System Restore (also known as Volume Snapshot) feature.
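A sketch of how this setting can be inspected and overridden from an elevated Command Prompt (a DisableDeleteNotify value of 0 means TRIM is enabled; changing it is shown only to illustrate the trade-off described above):

    :: query the current NTFS TRIM setting
    fsutil behavior query DisableDeleteNotify
    :: disable TRIM, e.g. where data recovery is preferred over wear leveling
    fsutil behavior set DisableDeleteNotify 1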
Defragmentation should be disabled on solid-state drives because the location of the file components on an SSD does not significantly impact its performance, while moving files to make them contiguous using the Windows Defrag routine causes unnecessary writes that consume the SSD's limited number of write cycles. The SuperFetch feature likewise does not materially improve performance on an SSD and causes additional overhead in the system and SSD.
Windows Vista
Windows Vista generally expects hard disk drives rather than SSDs. Windows Vista includes ReadyBoost to exploit characteristics of USB-connected flash devices, but for SSDs it only improves the default partition alignment to prevent read-modify-write operations that reduce the speed of SSDs. Most SSDs are typically split into 4 KiB sectors, while earlier systems may be based on 512 byte sectors with their default partition setups unaligned to the 4 KiB boundaries. Windows Vista does not send the TRIM command to solid-state drives, but some third-party utilities such as SSD Doctor will periodically scan the drive and TRIM the appropriate entries.
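As an illustrative check on Windows (output formatting varies by system), partition alignment can be verified by confirming that each partition's starting offset is divisible by 4,096 bytes:

    wmic partition get Name, StartingOffset
    :: a StartingOffset such as 1048576 (divisible by 4096) indicates 4 KiB alignment;
    :: the 31.5 KiB (32256-byte) offset used by older tools is misaligned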
Windows 7
Windows 7 and later versions have native support for SSDs. The operating system detects the presence of an SSD and optimizes operation accordingly. For SSD devices, Windows 7 disables ReadyBoost and automatic defragmentation. However, despite an initial statement by Steven Sinofsky before the release of Windows 7, defragmentation is not disabled entirely, although its behavior on SSDs differs. One reason is the low performance of the Volume Shadow Copy Service on fragmented SSDs. The second reason is to avoid reaching the practical maximum number of file fragments that a volume can handle.
Windows 7 also includes support for the TRIM command to reduce garbage collection for data that the operating system has already determined is no longer valid.
Windows 8.1 and later
Windows 8.1 and later Windows systems also support automatic TRIM for PCI Express SSDs based on NVMe. For Windows 7, the KB2990941 update is required for this functionality, and it needs to be integrated into Windows Setup using DISM if Windows 7 is to be installed on an NVMe SSD. Windows 8/8.1 also supports the SCSI unmap command, an analog of SATA TRIM, for USB-attached SSDs and SATA-to-USB enclosures; unmap is also supported over the USB Attached SCSI Protocol (UASP).
While Windows 7 supported automatic TRIM for internal SATA SSDs, Windows 8.1 and Windows 10 support manual TRIM as well as automatic TRIM for SATA, NVMe and USB-attached SSDs. Disk Defragmenter in Windows 10 and 11 may execute TRIM to optimize an SSD.
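For example (the drive letter is a placeholder), a manual retrim can be issued from an elevated Command Prompt using the defrag utility's retrim flag:

    :: send TRIM hints for all free space on volume C:
    defrag C: /L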
ZFS
Solaris as of version 10 Update 6 (released in October 2008), and recent versions of OpenSolaris, Solaris Express Community Edition, Illumos, Linux with ZFS on Linux, and FreeBSD all can use SSDs as a performance booster for ZFS. A low-latency SSD can be used for the ZFS Intent Log (ZIL), where it is named the SLOG. An SSD may also be used for the level 2 Adaptive Replacement Cache (L2ARC), which is used to cache data for reading.
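A minimal sketch of both uses, assuming an existing pool named tank and SSD device names that are placeholders:

    # attach a low-latency SSD as a separate intent log (SLOG) for the ZIL
    zpool add tank log /dev/ada1
    # attach an SSD as an L2ARC read cache
    zpool add tank cache /dev/ada2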
FreeBSD
ZFS for FreeBSD introduced support for TRIM on September 23, 2012. The Unix File System also supports the TRIM command.
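As an illustration (the device name is a placeholder, and the file system must be unmounted first), TRIM can be switched on for a UFS file system on FreeBSD with tunefs:

    # enable the TRIM flag on an unmounted UFS file system
    tunefs -t enable /dev/ada0p2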
Standardization organizations
The following are noted standardization organizations and bodies that work to create standards for solid-state drives (and other computer storage devices). The table below also includes organizations which promote the use of solid-state drives. This is not necessarily an exhaustive list.
| Technology | Non-volatile memory | null |
11996885 | https://en.wikipedia.org/wiki/Electronic%20cigarette | Electronic cigarette | An electronic cigarette (e-cigarette), or vape, is a device that simulates smoking. It consists of an atomizer, a power source such as a battery, and a container such as a cartridge or tank. Instead of smoke, the user inhales vapor. As such, using an e-cigarette is often called "vaping".
The atomizer is a heating element that vaporizes a liquid solution called e-liquid that cools into an aerosol of tiny droplets, vapor and air. The vapor mainly comprises propylene glycol and/or glycerin, usually with nicotine and flavoring. Its exact composition varies, and depends on matters such as user behavior. E-cigarettes are activated by taking a puff or pressing a button. Some look like traditional cigarettes, and most kinds are reusable.
Nicotine is highly addictive. Users become physically and psychologically dependent. Although some evidence indicates that e-cigarettes are less addictive than smoking, with slower nicotine absorption rates, long-term e-cigarette safety remains uncertain. One issue is the need to separate the effects of vaping from the effects of smoking among users who both vape and smoke.
E-cigarettes containing nicotine are more effective than nicotine replacement therapy (NRT) for smoking cessation. Vaping is likely far less harmful than smoking, but still harmful. E-cigarette vapor contains far fewer toxins than cigarette smoke, and at much lower concentrations. The vapor typically contains traces of harmful substances not found in cigarette smoke. However, e-cigarettes have not been subject to the same rigorous testing that most nicotine replacement therapy products have, and health warnings may encourage a smoker to quit vaping.
Construction
An electronic cigarette consists of an atomizer, a power source such as a battery, and a container for e-liquid such as a cartridge or tank.
E-cigarettes have evolved over time, and the different designs are classified in generations. First-generation e-cigarettes tend to look like traditional cigarettes and are called "cigalikes". Second-generation devices are larger and look less like traditional cigarettes. Third-generation devices include mechanical mods and variable voltage devices. The fourth-generation includes sub-ohm tanks (meaning they have electrical resistance of less than 1 ohm) and temperature control. There are also pod mod devices that use protonated nicotine, rather than free-base nicotine found in earlier generations, providing higher nicotine yields.
E-liquid
The mixture used in vapor products such as e-cigarettes is called e-liquid. E-liquid formulations vary widely. A typical e-liquid is composed of propylene glycol and glycerin (95%) and a combination of flavorings, nicotine, and other additives (5%). The flavorings may be natural, artificial, or organic. Over 80 harmful chemicals such as formaldehyde and metallic nanoparticles have been found in e-liquids at trace quantities. There are many e-liquid manufacturers, and more than 15,000 flavors.
Many countries regulate what e-liquids can contain. In the US, there are Food and Drug Administration (FDA) compulsory manufacturing standards and American E-liquid Manufacturing Standards Association (AEMSA) recommended manufacturing standards. European Union standards are published in the EU Tobacco Products Directive.
Coils
In 2019, a study found that the metal coils of e-cigarettes can leach metal into the e-liquid, leading to permanent lung damage in some cases. Research has shown that higher voltages generate more heat and release more toxic substances into the e-liquid. Vaping cannabis usually involves higher temperatures than vaping nicotine.
Use
Popularity
Since e-cigarettes entered the market around 2003, their use has risen rapidly. In 2011 there were about 7 million adult e-cigarette users globally, increasing to 68 million in 2020, compared with 1.1 billion cigarette smokers. There was a further rise to 82 million e-cigarette users in 2021. This increase has been attributed to targeted marketing, lower cost compared to conventional cigarettes, and the better safety profile of e-cigarettes compared to tobacco. E-cigarette use is highest in China, the US, and Europe, with China having the most users.
Motivation
There are varied reasons for e-cigarette use. Most users are trying to quit smoking, but a large proportion of use is recreational or as an attempt to get around smoke-free laws. Many people vape because they believe vaping is safer than smoking. The wide choice of flavors and lower price compared to cigarettes are also important factors.
Other motivations include reduced odor and fewer stains. E-cigarettes also appeal to technophiles who enjoy customizing their devices.
Gateway theory
The gateway hypothesis is the idea that using less harmful drugs can lead to more harmful ones. Evidence shows that many users who begin by vaping will go on to also smoke traditional cigarettes. People with mental illnesses, who as a group are more susceptible to nicotine addiction, are at particularly high risk of dual use.
However, an association between vaping and subsequent smoking does not necessarily imply a causal gateway effect. Instead, people may have underlying characteristics that predispose them to both activities. There is a genetic association between smoking, vaping, gambling, promiscuity and other risk-taking behaviors. Young people with poor executive functioning use e-cigarettes, cigarettes, and alcohol at higher rates than their peers. E-cigarette users are also more likely to use both cannabis and unprescribed Adderall or Ritalin. Longitudinal studies of e-cigarettes and smoking have been criticized for failing to adequately control for these and other confounding factors.
Smoking rates have continually declined as e-cigarettes have grown in popularity, especially among young people, suggesting that there is little evidence for a gateway effect at the population level. This observation has been criticized, however, for ignoring the effect of anti-smoking interventions.
Young adult and teen use
Worldwide, increasing numbers of young people are vaping. With access to e-cigarettes, young people's tobacco use has dropped by about 75%.
Most young e-cigarette users have never smoked, but there is a substantial minority who both vape and smoke. Many young people who would not smoke are vaping. Young people who smoke tobacco or marijuana, or who drink alcohol, are much more likely to vape. Among young people who have tried vaping, most used a flavored product the first time.
Vaping correlates with smoking among young people, even in those who would otherwise be unlikely to smoke. Experimenting with vaping encourages young people to continue smoking. A 2015 study found minors had little resistance to buying e-cigarettes online. Teenagers may not admit to using e-cigarettes when they instead use, for instance, a hookah pen; as a result, self-reported use may be undercounted in surveys.
More recent studies show a trend of an increasing proportion of young people using e-cigarettes. In 2018, 20% of US high school students were using e-cigarettes; by 2020, 50% of high school students reported having used them. Similarly, in Canada, 29% of young people reported having used e-cigarettes in 2017, increasing to 37% in 2018.
Health effects
The health risks of e-cigarettes are not known for certain, but the risk of serious adverse events is thought to be low, and e-cigarettes are likely safer than combusted tobacco products. However, this does not mean that e-cigarettes are harmless. E-cigarette use is associated with increased risk of chronic obstructive pulmonary disease, asthma, chronic bronchitis, and emphysema. Those who use e-cigarettes daily have higher risk than those who use them occasionally. According to the National Academies of Sciences, Engineering, and Medicine, "Laboratory tests of e-cigarette ingredients, in vitro toxicological tests, and short-term human studies suggest that e-cigarettes are likely to be far less harmful than combustible tobacco cigarettes." Randomized controlled trials provide "high-certainty" evidence that e-cigarettes containing nicotine are more effective than nicotine replacement therapy for discontinuing tobacco smoking, and moderate‐certainty evidence that they are more effective than e-cigarettes free of nicotine.
Some of the most common but less serious adverse effects include abdominal pain, headache, blurry vision, throat and mouth irritation, vomiting, nausea, and coughing. Nicotine is addictive and harmful to fetuses, children, and young people. In 2019 and 2020, an outbreak of severe vaping lung illness in the US was strongly linked to vitamin E acetate by the CDC. While it is still widely debated which particular component of vape liquid is the cause of illness, vitamin E acetate, specifically, has been identified as a potential culprit in vape-related illnesses. There was likely more than one cause of the outbreak.
E-cigarettes produce similar levels of particulates to tobacco cigarettes. There is "only limited evidence showing adverse respiratory and cardiovascular effects in humans", with the authors of a 2020 review calling for more long-term studies on the subject. E-cigarettes increase the risk of asthma by 40% and chronic obstructive pulmonary disease by 50% compared to not using nicotine at all.
Pregnancy
The British Royal College of Midwives states: "While vaping devices such as electronic cigarettes (e-cigs) do contain some toxins, they are at far lower levels than found in tobacco smoke. If a pregnant woman who has been smoking chooses to use an e-cig and it helps her to quit smoking and stay smokefree, she should be supported to do so." Based on the available evidence on e-cigarette safety, there was also "no reason to believe that use of an e-cig has any adverse effect on breastfeeding." The statement went on to say, "vaping should continue, if it is helpful to quitting smoking and staying smokefree". The UK National Health Service says: "If using an e-cigarette helps you to stop smoking, it is much safer for you and your baby than continuing to smoke." Many women who vape continue to do so during pregnancy because of the perceived safety of e-cigarettes compared to tobacco.
United States
In one of the few studies identified, a 2015 survey of 316 pregnant women in a Maryland clinic found that the majority had heard of e-cigarettes, 13% had used them, and 0.6% were current daily users. These findings are of concern because the dose of nicotine delivered by e-cigarettes can be as high as or higher than that delivered by traditional cigarettes.
Data from two states in the Pregnancy Risk Assessment System (PRAMS) show that in 2015—roughly the mid-point of the study period—10.8% of the sample used e-cigarettes in the three months prior to the pregnancy while 7.0%, 5.8%, and 1.4% used these products at the time of the pregnancy, in the first trimester, and at birth respectively. According to National Health Interview Survey (NHIS) data from 2014 to 2017, 38.9% of pregnant smokers used e-cigarettes compared to only 13.5% of non-pregnant, reproductive age women smokers. A health economic study found that passing an e-cigarette minimum legal sale age law in the United States increased teenage prenatal smoking by 0.6 percentage points and had no effect on birth outcomes. Nevertheless, additional research needs to be done on the health effects of electronic cigarette use during pregnancy.
According to the CDC, e-cigarettes are not safe during pregnancy: "Although the aerosol of e-cigarettes generally has fewer harmful substances than cigarette smoke, e-cigarettes and other products containing nicotine are not safe to use during pregnancy. Nicotine is a health danger for pregnant women and developing babies and can damage a developing baby's brain and lungs. Also, some of the flavorings used in e-cigarettes may be harmful to a developing baby."
A popular vaporizer named Juul was widely used by American young people until 2022, when the FDA banned its products from sale. In a 2017 Truth Initiative study, close to 80% of respondents aged 15–24 who reported using Juul had also used the device in the last 30 days. In the 2010s, US teenagers used the verb "Juuling" to describe vaping, and Juuling was the subject of widespread memes on social media.
Harm reduction
Harm reduction refers to any reduction in harm from a prior level. Harm minimization strives to reduce harms to the lowest achievable level. When a person does not want to quit nicotine, harm minimization means striving to eliminate tobacco exposure by replacing it with vaping. E-cigarettes can reduce smokers' exposure to carcinogens and other toxic chemicals found in tobacco.
Tobacco harm reduction has been a controversial area of tobacco control. Health advocates have been slow to support a harm reduction approach out of concern that tobacco companies cannot be trusted to sell products that will lower the risks associated with tobacco use. A large number of smokers want to reduce harm from smoking by using e-cigarettes. The argument for harm reduction does not take into account the adverse effects of nicotine, and there is no defensible rationale for harm reduction in children who vape nicotine-containing liquids. Quitting smoking is the most effective strategy for tobacco harm reduction.
Tobacco smoke contains 100 known carcinogens and 900 potentially cancer-causing chemicals, and e-cigarette vapor contains fewer of these potential carcinogens than tobacco smoke. A 2015 study using a third-generation device found levels of formaldehyde greater than in cigarette smoke when adjusted to the maximum power setting. E-cigarettes cannot be considered safe because there is no safe level for carcinogens. Due to their similarity to traditional cigarettes, e-cigarettes could play a valuable role in tobacco harm reduction. The public health community remains divided concerning the appropriateness of endorsing a device whose safety and efficacy for smoking cessation remain unclear. Overall, the available evidence supports the cautious implementation of harm reduction interventions aimed at promoting e-cigarettes as attractive and competitive alternatives to cigarette smoking, while taking measures to protect vulnerable groups and individuals.
The core concern is that smokers who could have quit entirely will develop an alternative nicotine addiction. Dual use may pose an increased risk to a smoker who continues to use even a minimal amount of traditional cigarettes rather than quitting, and the convenience of e-cigarettes may further increase the risk of addiction. The promotion of vaping as a harm reduction aid is premature: although a 2011 review found that e-cigarettes appear to have the potential to lower tobacco-related death and disease, the evidence to substantiate that potential is lacking, and the health benefits of reducing cigarette use while vaping are unclear. E-cigarettes could have an influential role in tobacco harm reduction; the review's authors warned against the potential harm of excessive regulation and advised health professionals to consider advising smokers who are reluctant to quit by other methods to switch to e-cigarettes as a safer alternative to smoking.
A 2014 review recommended that regulations for e-cigarettes could be similar to those for dietary supplements or cosmetic products to not limit their potential for harm reduction. A 2012 review found e-cigarettes could considerably reduce traditional cigarettes use and they likely could be used as a lower risk replacement for traditional cigarettes, but there is not enough data on their safety and efficacy to draw definite conclusions. There is no research available on vaping for reducing harm in high-risk groups such as people with mental disorders.
A 2014 PHE report concluded that hazards associated with products currently on the market are probably low, and apparently much lower than smoking. However, harms could be reduced further through reasonable product standards. The British Medical Association encourages health professionals to recommend conventional nicotine replacement therapies, but for patients unwilling to use or continue using such methods, health professionals may present e-cigarettes as a lower-risk option than tobacco smoking.
The American Association of Public Health Physicians (AAPHP) suggests those who are unwilling to quit tobacco smoking or unable to quit with medical advice and pharmaceutical methods should consider other nicotine-containing products such as e-cigarettes and smokeless tobacco for long-term use instead of smoking. A 2014 WHO report concluded that some smokers will switch completely to e-cigarettes from traditional tobacco but a "sizeable" number will use both. This report found that such "dual-use" of e-cigarettes and tobacco "will have much smaller beneficial effects on overall survival compared with quitting smoking completely."
Smoking cessation
Whether e-cigarettes help people quit smoking is debated. Limited evidence suggests that e-cigarettes likely do help people to stop smoking when used in clinical settings. However, more smokers become dual users than succeed in complete abstinence. Outside clinical settings, vaping does not greatly change the odds of quitting smoking.
A small number of studies have looked at whether using e-cigarettes reduces the number of cigarettes smokers consume. E-cigarette use may decrease the number of cigarettes smoked, but smoking just one to four cigarettes daily greatly increases the risk of cardiovascular disease compared to not smoking. The extent to which decreasing cigarette smoking with vaping leads to quitting is unknown.
It is unclear whether e-cigarettes are only helpful for particular types of smokers. Vaping with nicotine may reduce tobacco use among daily smokers. Whether vaping is effective for quitting smoking may depend on whether it was used as part of an effort to quit.
One of the challenges in studying e-cigarettes is that there are hundreds of brands and models sold that vary in the design and operation of the devices and the composition of the liquid, and the technology continues to change. E-cigarettes have not been subjected to the same type of efficacy testing as nicotine replacement products. There are also social concerns: use of e-cigarettes may normalize tobacco use and prolong cigarette use for people who could have quit instead, or it could put extra pressure on smokers to stop cigarette smoking because e-cigarettes are a more socially acceptable alternative. The evidence indicates smokers are more frequently able to completely quit smoking using tank devices compared to cigalikes, which may be due to their more efficient nicotine delivery. One study supports the claim that smokers are more likely to use e-cigarettes as a nicotine replacement product to aid in smoking cessation than other pharmaceutical products.
There is low-quality evidence that vaping with nicotine assists smokers to quit smoking in the long term compared with nicotine-free vaping. Nicotine-containing e-cigarettes were associated with greater effectiveness for quitting smoking than e-cigarettes without nicotine. A 2013 study of smokers who were not trying to quit found that vaping, with or without nicotine, decreased the number of cigarettes consumed. E-cigarettes without nicotine may reduce tobacco cravings because of the physical stimuli associated with smoking.
A 2015 meta-analysis of clinical trials found that e-cigarettes containing nicotine are more effective than nicotine-free ones for quitting smoking. The authors compared their finding that nicotine-containing e-cigarettes helped 20% of people quit with results from other studies finding that nicotine replacement products help 10% of people quit. A 2016 review found low-quality evidence of a trend towards benefit of e-cigarettes with nicotine for smoking cessation. The evidence on whether flavored e-cigarettes assist in quitting smoking is inconclusive. Tentative evidence indicates that health warnings on vaping products may influence users to give up vaping.
As of 2020, the efficacy and safety of vaping for quitting smoking during pregnancy were unknown, and no research was available on the subject. There is robust evidence that vaping is not effective for quitting smoking among adolescents. In view of the shortage of evidence, vaping is not recommended for cancer patients, although for all patients vaping is likely less dangerous than smoking cigarettes. The effectiveness of vaping for quitting smoking among vulnerable groups is uncertain.
Safety
There is no consensus on the risks of e-cigarette use. There is little data about their safety, and a considerable variety of liquids are used as carriers, and thus are present in the aerosol delivered to the user. Reviews of the safety of e-cigarettes have reached quite different conclusions. A 2014 WHO report cautioned about potential risks of using e-cigarettes. Regulated US FDA products such as nicotine inhalers may be safer than e-cigarettes, but e-cigarettes are generally seen as safer than combusted tobacco products such as cigarettes and cigars.
The risk of early death is anticipated to be similar to that of smokeless tobacco. Since vapor does not contain tobacco and does not involve combustion, users may avoid several harmful constituents usually found in tobacco smoke, such as ash, tar, and carbon monoxide. However, e-cigarette use with or without nicotine cannot be considered risk-free because the long-term effects of e-cigarette use are unknown.
The cytotoxicity of e-liquids varies, and contamination with various chemicals has been detected in the liquid. Metal parts of e-cigarettes in contact with the e-liquid can contaminate it with metal particles. Many chemicals, including carbonyl compounds such as formaldehyde, can inadvertently be produced when the nichrome wire (heating element) that touches the e-liquid is heated and reacts chemically with the liquid. Normal usage of e-cigarettes and reduced-voltage (3.0 V) devices generates very low levels of formaldehyde.
The later-generation and "tank-style" e-cigarettes with a higher voltage (5.0 V) may generate equal or higher levels of formaldehyde compared to smoking. A 2015 report by Public Health England found that high levels of formaldehyde occurred only in overheated "dry-puffing"; because users detect the "dry puff" (also known as a "dry hit") and avoid it, the report concluded that "There is no indication that EC users are exposed to dangerous levels of aldehydes." However, e-cigarette users may "learn" to overcome the unpleasant taste caused by elevated aldehyde formation when the nicotine craving is strong enough.
Another common chemical found in e-cigarettes is ketene. When it enters the lungs after being inhaled, this chemical damages the cellular structure of lung tissue, causing the cells to function below maximum capacity and to absorb gases less readily. This can cause shortness of breath, which can lead to other health conditions such as tachycardia and respiratory failure. E-cigarette users who use devices that contain nicotine are exposed to its potentially harmful effects.
Nicotine is associated with cardiovascular disease, possible birth defects, and poisoning. In vitro studies of nicotine have associated it with cancer, but carcinogenicity has not been demonstrated in vivo. There is inadequate research to show that nicotine is associated with cancer in humans. The risk is probably low from the inhalation of propylene glycol and glycerin. No information is available on the long-term effects of the inhalation of flavors.
In October 2021, researchers at Johns Hopkins University reported over 2,000 unknown chemicals in the vape clouds that they tested from Vuse, Juul, Blu and Mi-Salt vape devices.
In 2019–2020, there was an outbreak of vaping-related lung illness in the US and Canada, primarily related to vaping THC with vitamin E acetate.
E-cigarettes create vapor that consists of fine and ultrafine particles of particulate matter, with the majority of particles in the ultrafine range. The vapor has been found to contain propylene glycol, glycerin, nicotine, flavors, small amounts of toxicants, carcinogens, heavy metals, metal nanoparticles, and other substances. Many carcinogenic compounds have been detected in e-cigarette vapor, such as N-Nitrosonornicotine (NNN) and N-Nitrosoanatabine (NAT), all of which have been shown to be harmful to human health. Exactly what the vapor consists of varies in composition and concentration across and within manufacturers, and depends on the contents of the liquid, the physical and electrical design of the device, and user behavior, among other factors.
E-cigarette vapor potentially contains harmful chemicals not found in tobacco smoke, but the majority of toxic chemicals found in cigarette smoke are absent in e-cigarette vapor, and those that are present occur at lower concentrations, mostly below 1% of the corresponding levels permissible by workplace safety standards. Workplace safety standards, however, do not account for the exposure of certain vulnerable groups, such as people with medical ailments, children, and infants, who may be exposed to second-hand vapor.
Concern exists that some of the mainstream vapor exhaled by e-cigarette users may be inhaled by bystanders, particularly indoors, although e-cigarette pollutant levels are much lower than for cigarettes and likely to pose a much lower risk, if any, compared to cigarettes. E-cigarette use by a parent might lead to inadvertent health risks to offspring. A 2014 review recommended that e-cigarettes should be regulated for consumer safety. There is limited information available on the environmental issues around production, use, and disposal of e-cigarettes that use cartridges. E-cigarettes that are not reusable may contribute to the problem of electronic waste.
Addiction
Nicotine, a key ingredient in most e-liquids, is well-recognized as one of the most addictive substances, as addictive as heroin and cocaine. Addiction is believed to be a disorder of experience-dependent brain plasticity. The reinforcing effects of nicotine play a significant role in the beginning and continuing use of the drug. First-time nicotine users develop a dependence about 32% of the time. Chronic nicotine use involves both psychological and physical dependence. Nicotine-containing e-cigarette vapor induces addiction-related neurochemical, physiological and behavioral changes. Nicotine affects neurological, neuromuscular, cardiovascular, respiratory, immunological and gastrointestinal systems.
Addiction to e-cigarettes appears lower than from smoking, with slower nicotine absorption rates.
Neuroplasticity within the brain's reward system occurs as a result of long-term nicotine use, leading to nicotine dependence. The neurophysiological activities that underlie nicotine dependence are intricate and include genetic components, age, gender, and the environment. Nicotine addiction is a disorder that alters different neural systems, such as the dopaminergic, glutamatergic, GABAergic, and serotoninergic systems, which take part in the response to nicotine. Long-term nicotine use affects a broad range of genes associated with neurotransmission, signal transduction, and synaptic architecture. The ability to quit smoking is affected by genetic factors, including genetically based differences in the way nicotine is metabolized.
Nicotine is a parasympathomimetic stimulant that binds to and activates nicotinic acetylcholine receptors in the brain, which subsequently causes the release of dopamine and other neurotransmitters, such as norepinephrine, acetylcholine, serotonin, gamma-aminobutyric acid, glutamate, endorphins, and several neuropeptides, including proopiomelanocortin-derived α-MSH and adrenocorticotropic hormone. Corticotropin-releasing factor, Neuropeptide Y, orexins, and norepinephrine are involved in nicotine addiction. Continuous exposure to nicotine can cause an increase in the number of nicotinic receptors, which is believed to be a result of receptor desensitization and subsequent receptor upregulation.
Long-term exposure to nicotine can also result in downregulation of glutamate transporter 1. Long-term nicotine exposure upregulates cortical nicotinic receptors, but it also lowers the activity of the nicotinic receptors in the cortical vasodilation region. These effects are not easily understood. With constant use of nicotine, tolerance occurs at least partially as a result of the development of new nicotinic acetylcholine receptors in the brain.
After several months of nicotine abstinence, the number of receptors goes back to normal. The extent to which alterations in the brain caused by nicotine use are reversible is not fully understood. Nicotine also stimulates nicotinic acetylcholine receptors in the adrenal medulla, resulting in increased levels of epinephrine and beta-endorphin. Its physiological effects stem from the stimulation of nicotinic acetylcholine receptors, which are located throughout the central and peripheral nervous systems.
When nicotine intake stops, the upregulated nicotinic acetylcholine receptors induce withdrawal symptoms. These symptoms can include cravings for nicotine, anger, irritability, anxiety, depression, impatience, trouble sleeping, restlessness, hunger, weight gain, and difficulty concentrating. When trying to quit smoking with vaping a base containing nicotine, symptoms of withdrawal can include irritability, restlessness, poor concentration, anxiety, depression, and hunger. The changes in the brain cause a nicotine user to feel abnormal when not using nicotine. In order to feel normal, the user has to keep his or her body supplied with nicotine. E-cigarettes may reduce cigarette craving and withdrawal symptoms.
It is not clear whether e-cigarette use will decrease or increase overall nicotine addiction, but the nicotine content in e-cigarettes is adequate to sustain nicotine dependence. Chronic nicotine use causes a broad range of neuroplastic adaptations, making quitting hard to accomplish. A 2015 study found that users vaping non-nicotine e-liquid exhibited signs of dependence. Experienced users tend to take longer puffs, which may result in higher nicotine intake. It is difficult to assess the impact of nicotine dependence from e-cigarette use because of the wide range of e-cigarette products. The addiction potential of e-cigarettes may have risen because, as they have progressed, they deliver nicotine more effectively.
A 2015 American Academy of Pediatrics (AAP) policy statement stressed "the potential for these products to addict a new generation of youth to nicotine and reverse more than 50 years of public health gains in tobacco control." The World Health Organization (WHO) is concerned about starting nicotine use among non-smokers, and the National Institute on Drug Abuse said e-cigarettes could maintain nicotine addiction in those who are attempting to quit. The limited available data suggests that the likelihood of excessive use of e-cigarettes is smaller than traditional cigarettes. No long-term studies have been done on the effectiveness of e-cigarettes in treating tobacco addiction, but some evidence suggests that dual use of e-cigarettes and traditional cigarettes may be associated with greater nicotine dependence.
There is concern that children may progress from vaping to smoking. Adolescents are likely to underestimate nicotine's addictiveness. Vulnerability to the brain-modifying effects of nicotine, along with youthful experimentation with e-cigarettes, could lead to a lifelong addiction. A long-term nicotine addiction from using a vape may result in using other tobacco products. The majority of addiction to nicotine starts during youth and young adulthood. Adolescents are more likely to become nicotine dependent than adults.
The adolescent brain seems to be particularly sensitive to neuroplasticity as a result of nicotine. Minimal exposure could be enough to produce neuroplastic alterations in the very sensitive adolescent brain. A 2014 review found that in studies up to a third of young people who have not tried a traditional cigarette have used e-cigarettes. The degree to which teens are using e-cigarettes in ways the manufacturers did not intend, such as increasing the nicotine delivery, is unknown, as is the extent to which e-cigarette use may lead to addiction or substance dependence in young people.
Positions
Because of overlap with tobacco laws and medical drug policies, e-cigarette legislation is being debated in many countries. The revised EU Tobacco Products Directive came into effect in May 2016, providing stricter regulations for e-cigarettes. In February 2010, a US District Court ruled against the FDA's seizure of e-cigarettes as a "drug-device", and in December 2010 the US Court of Appeals confirmed them to be tobacco products, which were by then subject to regulation under the 2009 FSPTC Act. In August 2016, the US FDA extended its regulatory power to include e-cigarettes, cigars, and "all other tobacco products". Large tobacco companies have greatly increased their marketing efforts.
The scientific communities in the US and Europe are primarily concerned with the possible effect of e-cigarettes on public health. There is concern among public health experts that e-cigarettes could renormalize smoking, weaken measures to control tobacco, and serve as a gateway to smoking among young people. The public health community is divided over whether to support e-cigarettes, because their safety and efficacy for quitting smoking are unclear. Many in the public health community acknowledge their potential benefits for quitting smoking and decreasing harm, but there remains concern over their long-term safety and the potential for a new era of users to get addicted to nicotine and then tobacco. There is concern among tobacco control academics and advocates that prevalent universal vaping "will bring its own distinct but as yet unknown health risks in the same way tobacco smoking did, as a result of chronic exposure", among other things.
Medical organizations differ in their views about the health implications of vaping. There is general agreement that e-cigarettes expose users to fewer toxicants than tobacco cigarettes. Some healthcare groups and policy makers have hesitated to recommend e-cigarettes for quitting smoking, because of limited evidence of effectiveness and safety. Some have advocated bans on e-cigarette sales and others have suggested that e-cigarettes may be regulated as tobacco products but with less nicotine content or be regulated as a medicinal product.
A 2019 World Health Organization (WHO) report found that the scientific evidence "does not support the tobacco industry's claim that these products are less harmful relative to conventional tobacco products" and that there is insufficient evidence to support vaping as a smoking cessation tool. Healthcare organizations in the UK (including the Royal College of Physicians and Public Health England) have encouraged smokers to switch to e-cigarettes or other nicotine replacements if they cannot quit, as this would potentially save millions of lives. The American Cancer Society, American Heart Association, and the surgeon general of the United States have cautioned that accumulating evidence indicates e-cigarettes may have negative effects on the heart and lungs and should not be used to quit smoking without sufficient evidence that they are safe and effective.
In 2016, the US Food and Drug Administration (US FDA) stated that "Although ENDS [electronic nicotine delivery systems] may potentially provide cessation benefits to individual smokers, no ENDS have been approved as effective cessation aids." In 2019 the European Respiratory Society stated that "The long-term effects of ECIG use are unknown, and there is therefore no evidence that ECIGs are safer than tobacco in the long term" and that "[t]he tobacco harm reduction strategy is based on well-meaning but incorrect or undocumented claims or assumptions." Following hundreds of possible cases of severe lung illness and five confirmed deaths associated with vaping in the US, the Centers for Disease Control and Prevention stated on 6 September 2019 that people should consider not using vaping products while their investigation is ongoing.
History
It is commonly stated that the modern e-cigarette was invented in 2003 by Chinese pharmacist Hon Lik, but tobacco companies had been developing nicotine aerosol generation devices since as early as 1963.
Early prototypes and barriers to entry: 1920s–1990s
In 1927, Joseph Robinson applied for a patent for an electronic vaporizer to be used with medicinal compounds. The patent was approved in 1930 but the device was never marketed. In 1930, the United States Patent and Trademark Office reported a patent stating, "for holding medicinal compounds which are electrically or otherwise heated to produce vapors for inhalation." In 1934 and 1936, further similar patents were applied for.
The earliest e-cigarette can be traced to American Herbert A. Gilbert. In 1963, Gilbert applied for a patent for "a smokeless non-tobacco cigarette" that involved "replacing burning tobacco and paper with heated, moist, flavored air". This device produced flavored steam without nicotine. The patent was granted in 1965. Gilbert's invention was ahead of its time. However, it received little attention and was never commercialized because smoking was still fashionable at that time. Gilbert said in 2013 that today's electric cigarettes follow the basic design set forth in his original patent.
The Favor cigarette, introduced in 1986 by public company Advanced Tobacco Products, was another early noncombustible product promoted as an alternative nicotine-containing tobacco product. Favor was conceptualized by Phil Ray, one of the founders of Datapoint Corporation and inventors of the microprocessor. Development started in 1979 by Phil Ray and Norman Jacobson. Favor was a "plastic, smoke-free product shaped and colored like a conventional cigarette that contained a filter paper soaked with liquid nicotine so users could draw a small dose by inhaling. There was no electricity, combustion, or smoke; it delivered only nicotine."
Favor cigarettes were sold in California and several Southwestern states, marketed as "an alternative to smokers, and only to smokers, to use where smoking is unacceptable or prohibited." In 1987, the FDA exercised jurisdiction over products analogous to e-cigarettes; Advanced Tobacco Products never challenged the resulting warning letter and ceased all distribution of Favor. Ray's wife Brenda Coffee coined the term vaping. Philip Morris's division NuMark launched the MarkTen e-cigarette in 2013, a product Philip Morris had been working on since 1990.
Modern electronic cigarette: 2000s
Despite these earlier efforts, Hon Lik, a Chinese pharmacist and inventor, who worked as a research pharmacist for a company producing ginseng products, is frequently credited with the invention of the modern e-cigarette. Hon quit smoking after his father, also a heavy smoker, died of lung cancer. In 2001, he thought of using a high frequency, piezoelectric ultrasound-emitting element to vaporize a pressurized jet of liquid containing nicotine. This design creates a smoke-like vapor. Hon said that using resistance heating obtained better results and the difficulty was to scale down the device to a small enough size. Hon's invention was intended to be an alternative to smoking. Hon Lik sees the e-cigarette as comparable to the "digital camera taking over from the analogue camera." Ultimately, Hon Lik did not quit smoking. He is now a dual user, both smoking and vaping.
Hon Lik registered a patent for the modern e-cigarette design in 2003. Hon is credited with developing the first commercially successful electronic cigarette. The e-cigarette was first introduced to the Chinese domestic market in 2004. Many versions made their way to the US, sold mostly over the Internet by small marketing firms. E-cigarettes entered the European market and the US market in 2006 and 2007. The company that Hon worked for, Golden Dragon Holdings, registered an international patent in November 2007. The company changed its name to Ruyan (如烟, literally "like smoke") later the same month, and started exporting its products.
Many US and Chinese e-cigarette makers copied his designs illegally, so Hon has not received much financial reward for his invention (although some US manufacturers have compensated him through out-of-court settlements). Ruyan later changed its company name to Dragonite International Limited. As of 2014, most e-cigarettes used a battery-powered heating element rather than the earlier ultrasonic technology design.
Initially, their performance did not meet the expectations of users. The e-cigarette continued to evolve from the first-generation three-part device. In 2007, British entrepreneurs Umer and Tariq Sheikh invented the cartomizer. This is a mechanism that integrates the heating coil into the liquid chamber. They launched this new device in the UK in 2008 under their Gamucci brand and the design is now widely adopted by most "cigalike" brands. Other users tinkered with various parts to produce more satisfactory homemade devices, and the hobby of "modding" was born. The first mod to replace the e-cigarette's case to accommodate a longer-lasting battery, dubbed the "screwdriver", was developed by Ted and Matt Rogers in 2008. Other enthusiasts built their own mods to improve functionality or aesthetics. When pictures of mods appeared at online vaping forums many people wanted them, so some mod makers produced more for sale.
The screwdriver device generated a great deal of interest at the time, as it let the user vape for hours at a stretch. The invention led to demand for customizable e-cigarettes, prompting manufacturers to produce devices with interchangeable components that could be selected by the user. In 2009, Joyetech developed the eGo series, which offered the power of the screwdriver model and a user-activated switch to a wide market. The clearomizer was invented in 2009. Originating from the cartomizer design, it contained the wicking material, an e-liquid chamber, and an atomizer coil within a single clear component. The clearomizer allows the user to monitor the liquid level in the device. Soon after the clearomizer reached the market, replaceable atomizer coils and variable-voltage batteries were introduced. Clearomizers and eGo batteries became the best-selling customizable e-cigarette components in early 2012.
International growth: 2010s
The market for electronic cigarettes rose rapidly during the early 2010s and it started gaining attention in mainstream media. In the United States, some of the most notable start-ups in the market were blu eCigs, NJOY, V2 Cigs, and Logic, as of 2013. International tobacco companies dismissed e-cigarettes as a fad at first. However, recognizing the development of a potential new market sector that could render traditional tobacco products obsolete, they began to produce and market their own brands of e-cigarettes and acquire existing e-cigarette companies.
The large tobacco companies bought some of the established e-cigarette companies. blu eCigs, a prominent US e-cigarette manufacturer, was acquired by Lorillard Inc. for $135 million in April 2012. Japan Tobacco invested in Ploom. British American Tobacco was the first tobacco business to sell e-cigarettes in the UK, launching the e-cigarette Vype in July 2013. Imperial Tobacco's Fontem Ventures acquired the intellectual property owned by Hon Lik through Dragonite International Limited for US$75 million in 2013 and launched Puritane in partnership with Boots UK. On 1 October 2013, Lorillard Inc. acquired another e-cigarette company, the UK-based SKYCIG, which was later rebranded as blu.
On 3 February 2014, Altria Group, Inc. acquired popular e-cigarette brand Green Smoke for $110 million. The deal was finalized in April 2014 for $110 million with $20 million in incentive payments. Altria also markets its own e-cigarette, the MarkTen, while Reynolds American has entered the sector with its Vuse product. Philip Morris, the world's largest tobacco company, purchased UK's Nicocigs in June 2014. On 30 April 2015, Japan Tobacco bought the US Logic e-cigarette brand. Japan Tobacco also bought the UK E-Lites brand in June 2014. On 15 July 2014, Lorillard sold blu to Imperial Tobacco as part of a deal for $7.1 billion.
Following these changes, the main players in the e-cigarette market (at least in the US) were as follows, as of the end of 2015:
Despite the acquisitions by big tobacco companies, some independent e-cigarette companies had more success, most notably Juul Labs, as of 2018.
Some 95% of e-cigarettes were made in China. Despite the international growth of e-cigarettes during the 2010s, not all regions around the world embraced them equally. In 2018, Indonesia became one of the first countries in Asia or the Global South to recognise e-cigarettes as a genuine alternative to smoking tobacco.
Established: 2020s
In the United States between 2020 and 2022, the number of e-cigarettes sold had climbed to 22.7 million units. Elf Bar/EBDESIGN, Vuse, JUUL, NJOY and Breeze Smoke were recognized as the five most popular brands as of December 2022. The surge was driven by non-tobacco flavors such as menthol (for prefilled cartridges) and fruit and candy (for disposables), according to the CDC's health economist Fatma Romeh Ali.
In the UK, where most vaping uses refillable devices and e-liquid, the National Health Service and other medical bodies now support the use of e-cigarettes as a viable way to quit smoking. This has contributed to record numbers of people vaping: an estimated 3.6 million in 2019, 3.2 million in 2020, and 3.6 million in 2021, with 2.2 million current vapers as of 2024.
Society and culture
Consumers have shown passionate support for e-cigarettes that other nicotine replacement products did not receive. They have a mass appeal that could challenge combustible tobacco's market position.
By 2013, a subculture had emerged calling itself "the vaping community". Members often see e-cigarettes as a safer alternative to smoking, and some view it as a hobby. The online forum E-Cig-Reviews.com was one of the first major communities. It and other online forums, such as UKVaper.org, were where the hobby of modding started. There are also groups on Facebook and Reddit. Online forums based around modding have grown in the vaping community.
Vapers embrace activities associated with e-cigarettes and sometimes evangelise for them. E-cigarette companies have a substantial online presence, and there are many individual vapers who blog and tweet about e-cigarette related products. A 2014 Postgraduate Medical Journal editorial said vapers "also engage in grossly offensive online attacks on anyone who has the temerity to suggest that ENDS are anything other than an innovation that can save thousands of lives with no risks".
Contempt for Big Tobacco is part of vaping culture. A 2014 review stated that tobacco and e-cigarette companies interact with consumers to advance their policy agenda. The companies use websites, social media, and marketing to get consumers involved in opposing bills that include e-cigarettes in smoke-free laws, an approach similar to tobacco industry activity going back to the 1980s. These approaches were used in Europe to minimize the EU Tobacco Products Directive in October 2013, and grassroots lobbying also influenced the Tobacco Products Directive decision. Tobacco companies have worked with organizations conceived to promote e-cigarette use, and these organizations have worked to hamper legislation aimed at restricting e-cigarette use.
Large gatherings of vapers, called vape meets, take place around the US. They focus on e-cigarette devices, accessories, and the lifestyle that accompanies them. Vapefest, which started in 2010, is an annual show hosted by different cities. People attending these meetings are usually enthusiasts that use specialized, community-made products not found in convenience stores or gas stations. These products are mostly available online or in dedicated "vape" storefronts where mainstream e-cigarettes brands from the tobacco industry and larger e-cig manufacturers are not as popular. Some vape shops have a vape bar where patrons can test out different e-liquids and socialize. The Electronic Cigarette Convention in North America which started in 2013, is an annual show where companies and consumers meet up.
A subclass of vapers configure their atomizers to produce large amounts of vapor by using low-resistance heating coils. This practice is called "cloud-chasing". By using a coil with very low resistance, the batteries are stressed to a potentially unsafe extent. This could present a risk of dangerous battery failures. As vaping comes under increased scrutiny, some members of the vaping community have voiced their concerns about cloud-chasing, stating the practice gives vapers a bad reputation when doing it in public. The Oxford Dictionaries' word of the year for 2014 was "vape".
Regulation
Regulation of e-cigarettes varies across countries and states, ranging from no regulation to banning them entirely. For instance, e-cigarettes containing nicotine are illegal in Japan, pushing the market toward heated tobacco products as cigarette alternatives. Others have introduced strict restrictions, and some, such as the UK, have licensed devices as medicines; however, no e-cigarette device with a medical license is commercially sold or available by prescription in the UK. Around two thirds of major nations have regulated e-cigarettes in some way.
The companies that make e-cigarettes have been pushing for laws that support their interests. In 2016, the US Department of Transportation banned the use of e-cigarettes on commercial flights; this regulation applies to all flights to and from the US. In 2018, the Royal College of Physicians asked that a balance be found in e-cigarette regulations that ensures product safety while encouraging smokers to use e-cigarettes instead of tobacco, and that regulators watch for any effects that run contrary to tobacco control efforts.
The legal status of e-cigarettes is currently pending in many countries. Many countries such as Brazil, Singapore, Uruguay, and India have banned e-cigarettes. In Canada in 2014, they were technically illegal to sell, as no nicotine-containing e-cigarette had been approved by Health Canada, but this was generally unenforced and they were commonly available for sale Canada-wide. In 2016, Health Canada announced plans to regulate vaping products. In the US and the UK, the use and sale to adults of e-cigarettes are legal. The revised EU Tobacco Products Directive came into effect in May 2016, providing stricter regulations for e-cigarettes. It limits e-cigarette advertising in print, on television and radio, along with reducing the level of nicotine in liquids and reducing the flavors used. It does not ban vaping in public places. It requires purchasers of e-cigarettes to be at least 18 and does not permit buying them for anyone under 18 years of age. The updated Tobacco Products Directive has been disputed by tobacco lobbyists whose businesses could be impacted by these revisions.
As of 8 August 2016, the US FDA extended its regulatory power to include e-cigarettes, e-liquid and all related products. Under this ruling the FDA will evaluate certain issues, including ingredients, product features and health risks, as well as their appeal to minors and non-users. The FDA rule also bans access to minors. A photo ID is now required to buy e-cigarettes, and their sale in all-ages vending machines is not permitted in the US. As of August 2017, regulatory compliance deadlines relating to premarket review requirements for most e-cigarette and e-liquid products have been extended from November 2017 to 8 August 2022, which attracted a lawsuit filed by the American Heart Association, American Academy of Pediatrics, the Campaign for Tobacco-Free Kids, and other plaintiffs.
In May 2016, the US FDA used its authority under the Family Smoking Prevention and Tobacco Control Act to deem e-cigarette devices and e-liquids to be tobacco products, which meant it intended to regulate the marketing, labelling, and manufacture of devices and liquids; vape shops that mix e-liquids or make or modify devices were considered manufacturing sites that needed to register with US FDA and comply with good manufacturing practice regulation. E-cigarette and tobacco companies have recruited lobbyists in an effort to prevent the US FDA from evaluating e-cigarette products or banning existing products already on the market.
In February 2014, the European Parliament passed regulations requiring standardization and quality control for liquids and vaporizers, disclosure of ingredients in liquids, and child-proofing and tamper-proofing for liquid packaging. In April 2014 the US FDA published proposed regulations for e-cigarettes. In the US some states tax e-cigarettes as tobacco products, and some state and regional governments have broadened their indoor smoking bans to include e-cigarettes. 12 US states and 615 localities had prohibited the use of e-cigarettes in venues in which traditional cigarette smoking was prohibited. In 2015, at least 48 states and 2 territories had banned e-cigarette sales to minors.
In November 2020, the New Zealand government passed a vaping regulation that requires vape stores to register as specialist vape retailers before they can sell e-cigarettes, the wider range of flavoured e-liquids, and other related vaping products. Vaping products must be notified to the government before they can be sold, to ensure that the products meet safety requirements and that liquid ingredients contain no prohibited substances.
E-cigarettes containing nicotine have been listed as drug delivery devices in a number of countries, and the marketing of such products has been restricted or put on hold until safety and efficacy clinical trials are conclusive. Since they do not contain tobacco, television advertising in the US is not restricted. Some countries have regulated e-cigarettes as a medical product even though they have not approved them as a smoking cessation aid. A 2014 review stated the emerging phenomenon of e-cigarettes has raised concerns in the health community, governments, and the general public and recommended that e-cigarettes should be regulated to protect consumers. It added, "heavy regulation by restricting access to e-cigarettes would just encourage continuing use of much unhealthier tobacco smoking." A 2014 review said regulation of the e-cigarette should be considered on the basis of reported adverse health effects.
Criticism of vaping bans
Critics of vaping bans state that vaping is a much safer alternative to smoking tobacco products and that vaping bans incentivize people to return to smoking cigarettes. For example, critics cite the British Journal of Family Medicine in August 2015, which stated, "E-cigarettes are 95% safer than traditional smoking." Additionally, San Francisco's chief economist, Ted Egan, when discussing the San Francisco vaping ban, stated that the city's ban on e-cigarette sales will increase smoking as vapers switch to combustible cigarettes. Critics of vaping bans also stress the absurdity of criminalizing the sale of a safer alternative to tobacco while tobacco continues to be legal. Prominent opponents of vaping bans are not in favor of criminalizing tobacco either, but rather of allowing consumers the choice of whatever products they desire.
In 2022, after two years of review, the Food and Drug Administration (FDA) denied Juul's application to keep its tobacco and menthol flavored vaping products on the market. Critics of this denial note that research published in Nicotine and Tobacco Research found that smokers who took up Juul in North America were significantly more likely to switch completely from smoking to vaping than smokers in the United Kingdom, who only had access to lower-strength nicotine products. This comes as the Biden administration seeks to mandate low-nicotine cigarettes even though nicotine, critics note, is not what makes cigarettes dangerous. They also note that vaping lacks many of the components that make smoking dangerous, such as the combustion process and certain chemicals that are present in cigarettes but not in vape products.
Product liability
Multiple reports from the U.S. Fire Administration conclude that electronic cigarettes have been combusting and injuring people and surrounding areas. The composition of an e-cigarette is the cause of this, as the cartridges that are meant to contain the liquid mixture are in such close proximity to the battery. A research report by the U.S. Fire Administration supports this, stating that, "Unlike mobile phones, some e-cigarette lithium-ion batteries within e-cigarettes offer no protection to stop the coil overheating".
In 2015, the U.S. Fire Administration noted in its report that electronic cigarettes are not created by Big Tobacco or other tobacco companies, but by independent factories that have little quality control. Because of this poor quality control, electronic cigarettes have caused incidents in which people are hurt, or in which the surrounding area is damaged.
Marketing
E-cigarettes are marketed to men, women, and children as being safer than traditional cigarettes. They are also marketed to non-smokers. E-cigarette marketing is common. There are growing concerns that e-cigarette advertising campaigns unjustifiably focus on young adults, adolescents, and women. Large tobacco companies have greatly increased their marketing efforts. This marketing trend may expand the use of e-cigarettes and contribute to re-glamorizing smoking. That some companies may use e-cigarette advertising to advocate smoking, deliberately or inadvertently, is an area of concern. A 2014 review said, "the e-cigarette companies have been rapidly expanding using aggressive marketing messages similar to those used to promote cigarettes in the 1950s and 1960s."
E-cigarette companies are using methods that were once used by the tobacco industry to persuade young people to start using cigarettes. E-cigarettes are promoted to a certain extent to forge a vaping culture that entices non-smokers. Themes in e-cigarette marketing, including sexual content and customer satisfaction, are parallel to themes and techniques that are appealing to young people and young adults in traditional cigarette advertising and promotion. A 2017 review found "The tobacco industry sees a future where ENDS accompany and perpetuate, rather than supplant, tobacco use, especially targeting the youth." E-cigarettes and nicotine are regularly promoted as safe and even healthy in the media and on brand websites, which is an area of concern.
While advertising of tobacco products is banned in most countries, television and radio e-cigarette advertising in several countries may be indirectly encouraging traditional cigarette use. E-cigarette advertisements are also in magazines, newspapers, online, and in retail stores. Between 2010 and 2014, e-cigarettes were second only to cigarettes as the top advertised product in magazines. As cigarette companies have acquired the largest e-cigarette brands, they currently benefit from a dual market of smokers and e-cigarette users while simultaneously presenting themselves as agents of harm reduction. This raises concerns about the appropriateness of endorsing a product that directly profits the tobacco industry. There is no evidence that the cigarette brands are selling e-cigarettes as part of a plan to phase out traditional cigarettes, despite some claiming a desire to cooperate in "harm reduction". Advertisements promoting e-cigarettes as a quitting tool have been seen in the US, UK, and China, although such claims have not been supported by regulatory bodies.
In the US, six large e-cigarette businesses spent $59.3 million on promoting e-cigarettes in 2013. In the US and Canada, over $2 million is spent yearly on promoting e-cigarettes online. E-cigarette websites often made unscientific health statements in 2012. The ease of getting past the age verification system at e-cigarette company websites allows underage individuals to access and be exposed to marketing. Around half of e-cigarette company websites have a minimum age notice that prohibits underage individuals from entering.
Celebrity endorsements are used to encourage e-cigarette use. A 2012 national US television advertising campaign for e-cigarettes starred Stephen Dorff exhaling a "thick flume" of what the advertisement describes as "vapor, not tobacco smoke", exhorting smokers with the message "We are all adults here, it's time to take our freedom back." Opponents of the tobacco industry state that the Blu advertisement, in a context of longstanding prohibition of tobacco advertising on television, seems to have resorted to advertising tactics that got former generations of people in the US addicted to traditional cigarettes. Cynthia Hallett of Americans for Non-Smokers' Rights described the US advertising campaign as attempting to "re-establish a norm that smoking is okay, that smoking is glamorous and acceptable".
University of Pennsylvania communications professor Joseph Cappella stated that the setting of the advertisement near an ocean was meant to suggest an association of clean air with the nicotine product. In 2012 and 2013, e-cigarette companies advertised to a large television audience in the US which included 24 million young people. The channels to which e-cigarette advertising reached the largest numbers of young people (ages 12–17) were AMC, Country Music Television, Comedy Central, WGN America, TV Land, and VH1.
Since at least 2007, e-cigarettes have been heavily promoted across media outlets globally. They are vigorously advertised, mostly through the Internet, as a safe substitute to traditional cigarettes, among other things. E-cigarette companies promote their e-cigarette products on Facebook, Instagram, YouTube, and Twitter. They are promoted on YouTube by movies with sexual material and music icons, who encourage minors to "take their freedom back." They have partnered with a number of sports and music icons to promote their products. Tobacco companies intensely market e-cigarettes to young people, with industry strategies including cartoon characters and candy flavors. Fruit flavored e-liquid is the most commonly marketed e-liquid flavor on social media.
E-cigarette companies commonly promote that their products contain only water, nicotine, glycerin, propylene glycol, and flavoring but this assertion is misleading as researchers have found differing amounts of heavy metals in the vapor, including chromium, nickel, tin, silver, cadmium, mercury, and aluminum. The widespread assertion that e-cigarettes emit "only water vapor" is not true because the evidence demonstrates e-cigarette vapor contains possibly harmful chemicals such as nicotine, carbonyls, metals, and volatile organic compounds, in addition to particulate matter. Massive advertising included the assertion that they would present little risk to non-users. However, "disadvantages and side effects have been reported in many articles, and the unfavorable effects of its secondhand vapor have been demonstrated in many studies", and evidence indicates that use of e-cigarettes degrades indoor air quality.
Many e-cigarette companies market their products as a smoking cessation aid without evidence of effectiveness. E-cigarette marketing has been found to make unsubstantiated health statements (e.g., that they help one quit smoking), including statements about improving psychiatric symptoms, which may be particularly appealing to smokers with mental illness. E-cigarette marketing advocates weight control and emphasizes the use of nicotine in many flavors. These marketing angles could particularly entice overweight people, young people, and vulnerable groups. Some e-cigarette companies state that their products are green, without supporting evidence, which may be purely to increase their sales.
Economics
The number of e-cigarettes sold increased every year from 2003 to 2014. In 2015 a slowdown in the growth in usage occurred in the US. As of January 2018, the growth in usage in the UK had slowed down since 2013. There were at least 466 e-cigarette brands. Worldwide e-cigarette sales in 2014 were around US$7 billion. Worldwide e-cigarette sales in 2019 were about $19.3 billion. E-cigarette sales could exceed traditional cigarette sales by 2023. Approximately 30–50% of total e-cigarette sales are handled on the internet. Established tobacco companies have a significant share of the e-cigarette market.
95% of e-cigarette devices were made in China, mainly in Shenzhen. Chinese companies' market share of e-liquid is low. In 2014, online and offline sales started to increase. Since combustible cigarettes are relatively inexpensive in China, a lower price may not be a large factor in marketing vaping products there.
In 2015, 80% of all e-cigarette sales in convenience stores in the US were products made by tobacco companies. According to Nielsen Holdings, convenience store e-cigarette sales in the US went down for the first time during the four-week period ending on 10 May 2014. Wells Fargo analyst Bonnie Herzog attributes this decline to a shift in consumers' behavior, buying more specialized devices or what she calls "vapors-tanks-mods (VTMs)" that are not tracked by Nielsen. Wells Fargo estimated that VTMs accounted for 57% of the 3.5 billion dollar market in the US for vapor products in 2015.
In 2014, dollar sales of customizable e-cigarettes and e-liquid surpassed sales of cigalikes in the US, even though, overall, customizables are a less expensive vaping option. In 2014, the Smoke-Free Alternatives Trade Association estimated that there were 35,000 vape shops in the US, more than triple the number a year earlier. However the 2015 slowdown in market growth affected VTMs as well.
Large tobacco retailers are leading the cigalike market. "We saw the market's sudden recognition that the cigarette industry seems to be in serious trouble, disrupted by the rise of vaping," Mad Money's Jim Cramer stated April 2018. "Over the course of three short days, the tobacco stocks were bent, they were spindled and they were mutilated by the realization that electronic cigarettes have become a serious threat to the old-school cigarette makers," he added. In 2019, a vaping industry organization released a report stating that a possible US ban on e-cigarette flavors could potentially affect more than 150,000 jobs around the US.
The leading seller in the e-cigarette market in the US is the Juul e-cigarette, which was introduced in June 2015. Juul accounts for over 72% of the US e-cigarette market monitored by Nielsen, and its closest competitor—RJ Reynolds' Vuse—makes up less than 10% of the market. Juul rose to popularity quickly, growing by 700% in 2016 alone. On 17 July 2018, Reynolds announced that it would debut a pod mod type device similar to Juul in August 2018. The popularity of the Juul pod system has led to a flood of other pod devices hitting the market.
In Canada, e-cigarettes had an estimated value of 140 million CAD in 2015. There are numerous e-cigarette retail shops in Canada. A 2014 audit of retailers in four Canadian cities found that 94% of grocery stores, convenience stores, and tobacconist shops which sold e-cigarettes sold nicotine-free varieties only, while all vape shops stocked at least one nicotine-containing product.
By 2015, the e-cigarette market had only reached a twentieth of the size of the tobacco market in the UK. In the UK in 2015 the "most prominent brands of cigalikes" were owned by tobacco companies, however, with the exception of one model, all the tank types came from "non-tobacco industry companies". Yet some tobacco industry products, while using prefilled cartridges, resemble tank models.
France's e-cigarette market was estimated by Groupe Xerfi to be €130 million in 2015. Additionally, France's e-liquid market was estimated at €265 million. In December 2015, there were 2,400 vape shops in France, 400 fewer than in March of the same year. Industry organization Fivape said the reduction was due to consolidation, not to reduced demand.
In Vietnam, the e-cigarette market is growing rapidly, with the use rate increasing 18 times from 2015 to 2020. The use rate of e-cigarettes in adolescents aged 13–15 is 3.5%, up 1.6% from 2019. According to estimates by the World Health Organization (WHO), the global economic losses caused by tobacco each year are $1.4 trillion. Economic losses caused by tobacco are estimated to account for 1% of GDP. The Vietnamese government is making efforts to control the e-cigarette market. However, there are still many challenges to be addressed, such as consumers' lack of understanding of the harm of e-cigarettes, unclear legal regulations, and fierce competition from imported e-cigarette products.
Environmental impact
Compared to traditional cigarettes, reusable e-cigarettes do not create waste and potential litter from every use in the form of discarded cigarette butts. Traditional cigarettes tend to end up in the ocean where they cause pollution, though once discarded they undergo biodegradation and photodegradation. Although some brands have begun recycling services for their e-cigarette cartridges and batteries, the prevalence of recycling is unknown.
E-cigarettes that are not reusable contribute to the problem of electronic waste, which can create a hazard for people and other organisms. If improperly disposed of, they can release heavy metals, nicotine, and other chemicals from batteries and unused e-liquid. A July 2018–April 2019 garbology study found e-cigarette products composed 19% of the waste from all traditional and electronic tobacco and cannabis products collected at 12 public high schools in Northern California.
Councils in England and Wales are pushing for a 2024 ban on single-use vapes due to environmental and health risks, as 1.3 million are thrown away weekly. Recycling challenges, waste issues, and fire hazards are cited. Concerns about youth vaping are also raised. The UK Vaping Industry Association defends disposables as quitting aids and warns of potential black market products if banned.
Related technologies
Other devices to deliver inhaled nicotine have been developed. They aim to mimic the ritual and behavioral aspects of traditional cigarettes.
British American Tobacco, through their subsidiary Nicoventures, licensed a nicotine delivery system based on existing asthma inhaler technology from UK-based healthcare company Kind Consumer. In September 2014 a product based on this named Voke obtained approval from the United Kingdom's Medicines and Healthcare Products Regulatory Agency.
In 2011, Philip Morris International bought the rights to a nicotine pyruvate technology developed by Jed Rose at Duke University. The technology is based on the chemical reaction between pyruvic acid and nicotine, which produces an inhalable nicotine pyruvate vapor. Philip Morris Products S.A. created a different kind of e-cigarette named P3L. The device is supplied with a cartridge that contains nicotine and lactic acid in different cavities. When turned on and heated, the nicotine salt called nicotine lactate forms an aerosol.
The IQOS is a heated tobacco product marketed by Philip Morris International. It heats tobacco at a lower temperature than traditional cigarettes; the tobacco sticks reach a temperature of up to 350 °C. It was first sold in Japan in November 2014. In December 2016, the United Tobacco Vapor Group (UTVG) stated that it had been granted a patent for its vaporizing component system. Its qmos device does not contain a wick or sponge, and its number of components is 5, compared to 20 for traditional e-cigarettes.
Pax Labs has developed vaporizers that heat the leaves of tobacco to deliver nicotine in a vapor. In June 2015, they introduced Juul, a type of e-cigarette which delivers 10 times as much nicotine as other e-cigarettes, equivalent to an actual cigarette puff. Juul was spun off from Pax Labs in June 2017 and is now sold by the independent company Juul Labs. The eTron 3T from Vapor Tobacco Manufacturing, launched in December 2014, employs a patented, aqueous system whereby the tobacco is extracted into water. The e-liquid contains organic tobacco, organic glycerin, and water.
In December 2013, Japan Tobacco launched Ploom in Japan. In January 2016, they launched Ploom TECH that produces a vapor from a heated liquid that moves through a capsule of granulated tobacco leaves. In 2016, British American Tobacco (BAT) released its own version of the heat but not burn technology called glo in Japan and Switzerland. It uses tobacco sticks rather than nicotine liquid, and does not directly heat or burn tobacco. Heated tobacco products were first introduced in 1988, but were not a commercial success.
In 2014, BLOW started selling e-hookahs, an electronic version of the hookah. The handle of each hose for the e-hookah contains a heating element and a liquid, which produces vapor. Gopal Bhatnagar, based in Toronto, Canada, invented a 3D-printed adapter to turn a traditional hookah into an e-hookah. It is used instead of the ceramic bowl that contains shisha tobacco. Rather than using tobacco, users can insert e-cigarettes.
Cannabis vaping
Some vape pens, generally not referred to as "e-cigarettes", contain cannabis derivatives instead of nicotine and tobacco derivatives. Some cannabis pens, known as "dab pens", contain cannabis extracted using butane as solvent ("butane hash oil"). Other vaporizers contain e-liquid made with pure THC, and they generally resemble conventional e-cigarettes. A 2020 study found that one third of teenagers engaged in conventional tobacco vaping also engage in THC vaping.
KanaVape is an e-cigarette containing cannabidiol (CBD) and no THC. Several companies, including Canada's Eagle Energy Vapor, are selling caffeine-based e-cigarettes that contain no nicotine.
| Biology and health sciences | Health and fitness: General | Health |
12007378 | https://en.wikipedia.org/wiki/Mehmed%20Pa%C5%A1a%20Sokolovi%C4%87%20Bridge | Mehmed Paša Sokolović Bridge | The Mehmed Paša Sokolović Bridge () is a historic bridge in Višegrad, over the Drina River in eastern Bosnia and Herzegovina, in the Republika Srpska entity. It was completed in 1577 by the Ottoman court architect Mimar Sinan on the order of the Grand Vizier Mehmed Paša Sokolović. In 2003 the bridge was added to the List of National Monuments of Bosnia and Herzegovina by KONS, and inclusion on the UNESCO World Heritage List followed in 2007.
Characteristics
It is characteristic of the apogee of Turkish monumental architecture and civil engineering. It numbers 11 masonry arches, with spans of 11 to 15 meters, and an access ramp at right angles with four arches on the left bank of the river.
The bridge is a representative masterpiece of Mimar Sinan, one of the greatest architects and engineers of the classical Ottoman period and a contemporary of the Italian Renaissance, with which his work can be compared. The UNESCO summary states: The unique elegance of proportion and monumental nobility of the property as a whole bear witness to the greatness of this style of architecture.
History
The Višegrad Bridge was commissioned by Grand Vizier Mehmed Pasha Sokolović, who exercised power over a long period at the summit of the Ottoman Empire during the reign of three sultans as a tribute to his native region and a symbol of trade and prosperity. Construction of the bridge took place between 1571 and 1577. Major renovations of the bridge have taken place in 1664, 1875, 1911, 1940 and 1950–52. Three of its 11 arches were destroyed during World War I and five were damaged during World War II but subsequently restored.
Renovation
The bridge received UNESCO World Heritage Listing in 2007.
The Turkish International Co-operation and Development Agency (TIKA) provided 3.5 million euros for the restoration of the Mehmed Paša Sokolović Bridge. Representatives of TIKA, the BiH Commission for Co-operation with UNESCO, the Republika Srpska Cultural Ministry and the Višegrad municipality signed an agreement to renovate the bridge on 19 April 2010.
In literature
The bridge is widely known because of the book The Bridge on the Drina (1945), written by the Nobel Prize–winning Yugoslav writer Ivo Andrić.
Gallery
| Technology | Bridges | null |
3094621 | https://en.wikipedia.org/wiki/Charge%20conservation | Charge conservation | In physics, charge conservation is the principle, of experimental nature, that the total electric charge in an isolated system never changes. The net quantity of electric charge, the amount of positive charge minus the amount of negative charge in the universe, is always conserved. Charge conservation, considered as a physical conservation law, implies that the change in the amount of electric charge in any volume of space is exactly equal to the amount of charge flowing into the volume minus the amount of charge flowing out of the volume. In essence, charge conservation is an accounting relationship between the amount of charge in a region and the flow of charge into and out of that region, given by a continuity equation between charge density $\rho$ and current density $\mathbf{J}$.
This does not mean that individual positive and negative charges cannot be created or destroyed. Electric charge is carried by subatomic particles such as electrons and protons. Charged particles can be created and destroyed in elementary particle reactions. In particle physics, charge conservation means that in reactions that create charged particles, equal numbers of positive and negative particles are always created, keeping the net amount of charge unchanged. Similarly, when particles are destroyed, equal numbers of positive and negative charges are destroyed. This property is supported without exception by all empirical observations so far.
Although conservation of charge requires that the total quantity of charge in the universe is constant, it leaves open the question of what that quantity is. Most evidence indicates that the net charge in the universe is zero; that is, there are equal quantities of positive and negative charge.
History
Charge conservation was first proposed by British scientist William Watson in 1746 and American statesman and scientist Benjamin Franklin in 1747, although the first convincing proof was given by Michael Faraday in 1843.
Formal statement of the law
Mathematically, we can state the law of charge conservation as a continuity equation:

$$\frac{dq(t)}{dt} = \dot{q}_{\rm in}(t) - \dot{q}_{\rm out}(t)$$

where $\frac{dq(t)}{dt}$ is the electric charge accumulation rate in a specific volume at time $t$, $\dot{q}_{\rm in}(t)$ is the amount of charge flowing into the volume and $\dot{q}_{\rm out}(t)$ is the amount of charge flowing out of the volume; both amounts are regarded as generic functions of time.

The integrated continuity equation between two time values reads:

$$q(t_2) = q(t_1) + \int_{t_1}^{t_2} \left[\dot{q}_{\rm in}(t) - \dot{q}_{\rm out}(t)\right] dt$$

The general solution is obtained by fixing the initial condition time $t_0$, leading to the integral equation:

$$q(t) = q(t_0) + \int_{t_0}^{t} \left[\dot{q}_{\rm in}(\tau) - \dot{q}_{\rm out}(\tau)\right] d\tau$$

The condition $q(t) = q(t_0)$ corresponds to the absence of charge quantity change in the control volume: the system has reached a steady state. From the above condition, the following must hold true:

$$\dot{q}_{\rm in}(t) = \dot{q}_{\rm out}(t)$$

therefore, $\dot{q}_{\rm in}(t)$ and $\dot{q}_{\rm out}(t)$ are equal (not necessarily constant) over time, then the overall charge inside the control volume does not change. This deduction could be derived directly from the continuity equation, since at steady state $\frac{\partial \rho}{\partial t} = 0$ holds, and implies $\nabla \cdot \mathbf{J} = 0$.

In electromagnetic field theory, vector calculus can be used to express the law in terms of charge density $\rho$ (in coulombs per cubic meter) and electric current density $\mathbf{J}$ (in amperes per square meter). This is called the charge density continuity equation

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0$$

The term on the left is the rate of change of the charge density $\rho$ at a point. The term on the right is the divergence of the current density $\mathbf{J}$ at the same point. The equation equates these two factors, which says that the only way for the charge density at a point to change is for a current of charge to flow into or out of the point. This statement is equivalent to a conservation of four-current.
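To illustrate the accounting interpretation numerically, here is a minimal Python sketch; the grid resolution, time step, and current profile are illustrative assumptions, not values from the source. It steps a discretized 1-D charge density forward using the continuity equation and checks that the change in total charge equals the net charge carried in through the outer boundary:

```python
import numpy as np

# Finite-volume check of the 1-D continuity equation  d(rho)/dt = -dJ/dx.
# Grid resolution, time step, and current profile are illustrative assumptions.
nx, dx, dt = 100, 0.01, 1e-4
x_faces = np.arange(nx + 1) * dx        # positions of the cell faces
J = np.sin(2.0 * np.pi * x_faces)       # current density on the faces (A/m^2)

rho = np.ones(nx)                       # charge density in each cell (C/m^3)
q_before = rho.sum() * dx               # total charge (per unit cross-section)

# Update each cell: d(rho)/dt = -(J_right - J_left) / dx.
rho -= dt * np.diff(J) / dx
q_after = rho.sum() * dx

# Net charge carried in through the two outer faces during dt:
net_inflow = (J[0] - J[-1]) * dt

assert np.isclose(q_after - q_before, net_inflow)
print(q_after - q_before, net_inflow)
```

Any current profile would give the same agreement: the interior face contributions cancel in the sum over cells, which is the discrete analogue of the divergence theorem used in the derivation below.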
Mathematical derivation
The net current into a volume is

$$I = -\oint_S \mathbf{J} \cdot d\mathbf{S}$$

where $S = \partial V$ is the boundary of $V$ oriented by outward-pointing normals, and $d\mathbf{S}$ is shorthand for $\mathbf{n}\,dA$, the outward-pointing normal of the boundary $\partial V$. Here $\mathbf{J}$ is the current density (charge per unit area per unit time) at the surface of the volume. The vector points in the direction of the current.

From the divergence theorem this can be written

$$I = -\int_V \left(\nabla \cdot \mathbf{J}\right) dV$$

Charge conservation requires that the net current into a volume must necessarily equal the net change in charge within the volume:

$$\frac{dq}{dt} = -\oint_S \mathbf{J} \cdot d\mathbf{S}$$

The total charge $q$ in volume $V$ is the integral (sum) of the charge density in $V$:

$$q = \int_V \rho \, dV$$

So, by the Leibniz integral rule,

$$\frac{dq}{dt} = \int_V \frac{\partial \rho}{\partial t} \, dV$$

Equating the two expressions for $\frac{dq}{dt}$ gives

$$\int_V \frac{\partial \rho}{\partial t} \, dV = -\int_V \left(\nabla \cdot \mathbf{J}\right) dV$$

Since this is true for every volume, we have in general

$$\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}$$
Derivation from Maxwell's Laws
The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the modified Ampère's law

$$\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives:

$$0 = \nabla \cdot \left(\mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right) = \mu_0 \left(\nabla \cdot \mathbf{J} + \varepsilon_0 \frac{\partial}{\partial t} \nabla \cdot \mathbf{E}\right) = \mu_0 \left(\nabla \cdot \mathbf{J} + \frac{\partial \rho}{\partial t}\right)$$

i.e.,

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0$$

By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary:

$$\frac{d}{dt} \int_V \rho \, dV = -\oint_S \mathbf{J} \cdot d\mathbf{S}$$

In particular, in an isolated system the total charge is conserved.
Connection to gauge invariance
Charge conservation can also be understood as a consequence of symmetry through Noether's theorem, a central result in theoretical physics that asserts that each conservation law is associated with a symmetry of the underlying physics. The symmetry that is associated with charge conservation is the global gauge invariance of the electromagnetic field. This is related to the fact that the electric and magnetic fields are not changed by different choices of the value representing the zero point of electrostatic potential $\varphi$. However the full symmetry is more complicated, and also involves the vector potential $\mathbf{A}$. The full statement of gauge invariance is that the physics of an electromagnetic field are unchanged when the scalar and vector potential are shifted by derivatives of an arbitrary scalar field $\chi$:

$$\varphi' = \varphi - \frac{\partial \chi}{\partial t}, \qquad \mathbf{A}' = \mathbf{A} + \nabla \chi$$

In quantum mechanics the scalar field is equivalent to a phase shift in the wavefunction of the charged particle:

$$\psi' = e^{iq\chi/\hbar}\,\psi$$

so gauge invariance is equivalent to the well-known fact that changes in the overall phase of a wavefunction are unobservable, and only changes in the magnitude of the wavefunction result in changes to the probability function $|\psi|^2$.
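As a one-line check of that last statement (standard quantum mechanics, not specific to this article's sources), the probability density is unaffected by the phase shift because the phase factor has unit modulus:

$$|\psi'|^2 = \left|e^{iq\chi/\hbar}\right|^2 |\psi|^2 = |\psi|^2$$

so no measurement of $|\psi|^2$ can distinguish the gauge-transformed wavefunction from the original.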
Gauge invariance is a very important, well established property of the electromagnetic field and has many testable consequences. The theoretical justification for charge conservation is greatly strengthened by being linked to this symmetry. For example, gauge invariance also requires that the photon be massless, so the good experimental evidence that the photon has zero mass is also strong evidence that charge is conserved. Gauge invariance also implies quantization of hypothetical magnetic charges.
Even if gauge symmetry is exact, however, there might be apparent electric charge non-conservation if charge could leak from our normal 3-dimensional space into hidden extra dimensions.
Experimental evidence
Simple arguments rule out some types of charge nonconservation. For example, the magnitude of the elementary charge on positive and negative particles must be extremely close to equal, differing by no more than a factor of 10⁻²¹ for the case of protons and electrons. Ordinary matter contains equal numbers of positive and negative particles, protons and electrons, in enormous quantities. If the elementary charge on the electron and proton were even slightly different, all matter would have a large electric charge and would be mutually repulsive.
The best experimental tests of electric charge conservation are searches for particle decays that would be allowed if electric charge is not always conserved. No such decays have ever been seen.
The best experimental test comes from searches for the energetic photon from an electron decaying into a neutrino and a single photon:

$$e^- \to \nu + \gamma$$
but there are theoretical arguments that such single-photon decays will never occur even if charge is not conserved.
Charge disappearance tests are sensitive to decays without energetic photons, to other unusual charge-violating processes such as an electron spontaneously changing into a positron, and to electric charge moving into other dimensions.
The best experimental bounds on charge disappearance come from these searches.
| Physical sciences | Particle physics: General | Physics |
3094833 | https://en.wikipedia.org/wiki/Vascular%20tissue | Vascular tissue | Vascular tissue is a complex conducting tissue, formed of more than one cell type, found in vascular plants. The primary components of vascular tissue are the xylem and phloem. These two tissues transport fluid and nutrients internally. There are also two meristems associated with vascular tissue: the vascular cambium and the cork cambium. All the vascular tissues within a particular plant together constitute the vascular tissue system of that plant.
The cells in vascular tissue are typically long and slender. Since the xylem and phloem function in the conduction of water, minerals, and nutrients throughout the plant, it is not surprising that their form should be similar to pipes. The individual cells of phloem are connected end-to-end, just as the sections of a pipe might be. As the plant grows, new vascular tissue differentiates in the growing tips of the plant. The new tissue is aligned with existing vascular tissue, maintaining its connection throughout the plant. The vascular tissue in plants is arranged in long, discrete strands called vascular bundles. These bundles include both xylem and phloem, as well as supporting and protective cells. In stems and roots, the xylem typically lies closer to the interior of the stem with phloem towards the exterior of the stem. In the stems of some Asterales dicots, there may be phloem located inwardly from the xylem as well.
Between the xylem and phloem is a meristem called the vascular cambium. This tissue divides off cells that will become additional xylem and phloem. This growth increases the girth of the plant, rather than its length. As long as the vascular cambium continues to produce new cells, the plant will continue to grow more stout. In trees and other plants that develop wood, the vascular cambium allows the expansion of vascular tissue that produces woody growth. Because this growth ruptures the epidermis of the stem, woody plants also have a cork cambium that develops among the phloem. The cork cambium gives rise to thickened cork cells to protect the surface of the plant and reduce water loss. Both the production of wood and the production of cork are forms of secondary growth.
In leaves, the vascular bundles are located among the spongy mesophyll. The xylem is oriented toward the adaxial surface of the leaf (usually the upper side), and phloem is oriented toward the abaxial surface of the leaf. This is why aphids are typically found on the undersides of the leaves rather than on the top, since the phloem transports sugars manufactured by the plant and they are closer to the lower surface.
| Biology and health sciences | Plant tissues | null |
3096971 | https://en.wikipedia.org/wiki/Phonophobia | Phonophobia | Phonophobia, also called ligyrophobia or sonophobia, is a fear of or aversion to loud sounds (for example firecrackers)—a type of specific phobia. It is a very rare phobia which is often a symptom of hyperacusis. Sonophobia can refer to the hypersensitivity of a patient to sound and can be part of the diagnosis of a migraine.
Occasionally it is called acousticophobia.
The term phonophobia comes from Greek φωνή - phōnē, "voice" or "sound" and φόβος - phobos, "fear".
Ligyrophobics may be fearful of devices that can suddenly emit loud sounds, such as computer speakers or fire alarms. When operating a device such as a home theater system, computer, television, or CD player, they may wish to have the volume turned down all the way before doing anything that would cause the speakers to emit sound, so that once the command to produce sound is given, the user can raise the volume of the speakers to a comfortable listening level. They may avoid parades and carnivals due to the loud instruments such as drums. As festive occasions are accompanied by music of over 120 decibels, many phobics develop agoraphobia. Other ligyrophobics also steer clear of any events in which firecrackers are to be let off.
Another example is watching someone blow up a balloon beyond its normal capacity. This is often an unsettling, even disturbing thing for a person with ligyrophobia to observe, as they anticipate a loud sound when the balloon pops. When balloons pop, common reactions include heavy breathing and panic attacks. The sufferer becomes anxious to get away from the source of the loud sound and may get headaches. It may also be related to, caused by, or confused with hyperacusis, extreme sensitivity to loud sounds. Phonophobia has also been proposed to refer to an extreme form of misophonia.
| Biology and health sciences | Symptoms and signs | Health |
3098635 | https://en.wikipedia.org/wiki/Shonisaurus | Shonisaurus | Shonisaurus is a genus of very large ichthyosaurs. At least 37 incomplete fossil specimens of the type species, Shonisaurus popularis, have been found in the Luning Formation of Nevada, USA. This formation dates to the late Carnian-early Norian age of the Late Triassic, around 227 million years ago. Other possible species of Shonisaurus have been discovered from the middle Norian deposits of Canada and Alaska.
Description
Shonisaurus lived during the late Carnian to Norian stages of the Late Triassic. S. popularis was a very large animal with a long skull, and a correspondingly great body length and mass; S. sikanniensis was one of the largest marine reptiles of all time.
Shonisaurus had a long snout, and its flippers were much longer and narrower than in other ichthyosaurs. While Shonisaurus was initially reported to have had socketed teeth (rather than teeth set in a groove as in more advanced forms), these were present only at the jaw tips, and only in the very smallest, juvenile specimens. All of these features suggest that Shonisaurus may be a relatively specialised offshoot of the main ichthyosaur evolutionary line. More recent finds, however, indicate that Shonisaurus possessed teeth in all ontogenetic stages. Robust sectorial teeth and gut contents indicate that Shonisaurus was a macrophagous raptorial predator which fed on vertebrates and shelled mollusks like cephalopods, possibly even large-bodied prey. Additionally, Shonisaurus was historically depicted with a rather rotund body, but studies of its body shape since the early 1990s have shown that the body was much more slender than traditionally thought, and had a relatively deep body compared with related marine reptiles.
History of discovery
Fossils of Shonisaurus were first found in a large deposit in Nevada in 1920. Thirty years later, they were excavated, uncovering the remains of 37 very large ichthyosaurs. These were named Shonisaurus, which means "lizard from the Shoshone Mountains", after the formation where the fossils were found.
S. popularis was adopted as the state fossil of Nevada in 1984. Excavations, begun in 1954 under the direction of Charles Camp and Samuel Welles of the University of California, Berkeley, were continued by Camp throughout the 1960s. The species was named by Charles Camp in 1976. The Nevada fossil sites can currently be viewed at the Berlin-Ichthyosaur State Park.
A second species from the Pardonet Formation of British Columbia was named Shonisaurus sikanniensis in 2004. However, a phylogenetic study by Sander and colleagues in 2011 later showed S. sikanniensis to be a species of Shastasaurus rather than Shonisaurus.
A subsequent study by Ji and colleagues published in 2013 reasserted the original classification, finding it more closely related to Shonisaurus than to Shastasaurus. Support for both hypotheses has been found in later studies, with some authors classifying the species in Shonisaurus and others in Shastasaurus.
Specimens belonging to S. sikanniensis have been found in the Pardonet Formation of British Columbia, dating to the middle Norian age. An isolated humerus from a smaller individual (TMP 94.381.4) and a postorbital region (TMP 98.75.9) from a juvenile were also reported from the same formation and were referred to as Shonisaurus sp. Other fossils from this formation include the ichthyosaurs Macgowania and Callawayia, coelacanths Whiteia banffensis and possibly Garnbergia, and various genera of molluscs including ammonites and bivalves. Large ichthyosaur remains found in Alaska have also been identified as Shonisaurus sp.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
5634358 | https://en.wikipedia.org/wiki/Ekman%20transport | Ekman transport | Ekman transport is part of Ekman motion theory, first investigated in 1902 by Vagn Walfrid Ekman. Winds are the main source of energy for ocean circulation, and Ekman transport is a component of wind-driven ocean current. Ekman transport occurs when ocean surface waters are influenced by the friction force acting on them via the wind. As the wind blows, it exerts a frictional force on the ocean surface that drags the upper 10–100 m of the water column with it. However, due to the influence of the Coriolis effect, the ocean water moves at a 90° angle from the direction of the surface wind. The direction of transport is dependent on the hemisphere: in the northern hemisphere, transport occurs at 90° clockwise from wind direction, while in the southern hemisphere it occurs at 90° anticlockwise. This phenomenon was first noted by Fridtjof Nansen, who recorded that ice transport appeared to occur at an angle to the wind direction during his Arctic expedition of the 1890s. Ekman transport has significant impacts on the biogeochemical properties of the world's oceans, because it leads to upwelling (Ekman suction) and downwelling (Ekman pumping) in order to obey mass conservation laws. Mass conservation, in reference to Ekman transfer, requires that any water displaced within an area must be replenished. This can be done by either Ekman suction or Ekman pumping, depending on wind patterns.
Theory
Ekman theory explains the theoretical state of circulation if water currents were driven only by the transfer of momentum from the wind. In the physical world, this is difficult to observe because of the influences of many simultaneous current driving forces (for example, pressure and density gradients). Though the following theory technically applies to the idealized situation involving only wind forces, Ekman motion describes the wind-driven portion of circulation seen in the surface layer.
Surface currents flow at a 45° angle to the wind due to a balance between the Coriolis force and the drag generated by the wind and the water. If the ocean is divided vertically into thin layers, the magnitude of the velocity (the speed) decreases from a maximum at the surface until it dissipates. The direction also shifts slightly across each subsequent layer (right in the northern hemisphere and left in the southern hemisphere). This is called the Ekman spiral. The layer of water from the surface to the point of dissipation of this spiral is known as the Ekman layer. If all flow in the Ekman layer is integrated, the net transport is at 90° to the right (left) of the surface wind in the northern (southern) hemisphere.
Mechanisms
There are three major wind patterns that lead to Ekman suction or pumping. The first are wind patterns that are parallel to the coastline. Due to the Coriolis effect, surface water moves at a 90° angle to the wind current. If the wind moves in a direction causing the water to be pulled away from the coast then Ekman suction will occur. On the other hand, if the wind is moving in such a way that surface waters move towards the shoreline then Ekman pumping will take place.
The second mechanism of wind currents resulting in Ekman transfer is the Trade Winds both north and south of the equator pulling surface waters towards the poles. There is a great deal of upwelling Ekman suction at the equator because water is being pulled northward north of the equator and southward south of the equator. This leads to a divergence in the water, resulting in Ekman suction, and therefore, upwelling.
The third wind pattern influencing Ekman transfer is large-scale wind patterns in the open ocean. Open ocean wind circulation can lead to gyre-like structures of piled up sea surface water resulting in horizontal gradients of sea surface height. This pile up of water causes the water to have a downward flow and suction, due to gravity and mass balance. Ekman pumping downward in the central ocean is a consequence of this convergence of water.
Ekman suction
Ekman suction is the component of Ekman transport that results in areas of upwelling due to the divergence of water. Returning to the concept of mass conservation, any water displaced by Ekman transport must be replenished. As the water diverges it creates space and acts as a suction in order to fill in the space by pulling up, or upwelling, deep sea water to the euphotic zone.
Ekman suction has major consequences for the biogeochemical processes in the area because it leads to upwelling. Upwelling carries nutrient rich, and cold deep-sea water to the euphotic zone, promoting phytoplankton blooms and kickstarting an extremely high-productive environment. Areas of upwelling lead to the promotion of fisheries, in fact nearly half of the world's fish catch comes from areas of upwelling.
Ekman suction occurs both along coastlines and in the open ocean, but also along the equator. Along the Pacific coastline of California, Central America, and Peru, as well as along the Atlantic coastline of Africa, there are areas of upwelling due to Ekman suction, as the currents move equatorwards. Due to the Coriolis effect, the surface water moves 90° to the left of the wind current (in the Southern Hemisphere, as it travels toward the equator), therefore causing the water to diverge from the coast boundary, leading to Ekman suction. Additionally, there are areas of upwelling as a consequence of Ekman suction where the Polar Easterlies meet the Westerlies in the subpolar regions north of the subtropics, as well as where the Northeast Trade Winds meet the Southeast Trade Winds along the Equator. Similarly, due to the Coriolis effect, the surface water moves 90° to the left (in the Southern Hemisphere) of the wind currents, and the surface water diverges along these boundaries, resulting in upwelling in order to conserve mass.
Ekman pumping
Ekman pumping is the component of Ekman transport that results in areas of downwelling due to the convergence of water. As discussed above, the concept of mass conservation requires that a pile up of surface water must be pushed downward. This pile up of warm, nutrient-poor surface water gets pumped vertically down the water column, resulting in areas of downwelling.
Ekman pumping has dramatic impacts on the surrounding environments. Downwelling, due to Ekman pumping, leads to nutrient-poor waters, therefore reducing the biological productivity of the area. Additionally, it transports heat and dissolved oxygen vertically down the water column as warm, oxygen-rich surface water is pumped towards the deep ocean water.
Ekman pumping can be found along the coasts as well as in the open ocean. Along the Pacific Coast in the Southern Hemisphere, northerly winds move parallel to the coastline. Due to the Coriolis effect, the surface water gets pulled 90° to the left of the wind current, therefore causing the water to converge along the coast boundary, leading to Ekman pumping. In the open ocean Ekman pumping occurs with gyres. Specifically, in the subtropics, between 20°N and 50°N, there is Ekman pumping as the trade winds shift to westerlies, causing a pile-up of surface water.
Mathematical derivation
Some assumptions of the fluid dynamics involved in the process must be made in order to simplify the process to a point where it is solvable. The assumptions made by Ekman were:
no boundaries;
infinitely deep water;
eddy viscosity, $A_z$, is constant (this is only true for laminar flow; in the turbulent atmospheric and oceanic boundary layer it is a strong function of depth);
the wind forcing is steady and has been blowing for a long time;
barotropic conditions with no geostrophic flow;
the Coriolis parameter, $f$, is kept constant.
The simplified equations for the Coriolis force in the x and y directions follow from these assumptions:

$$-fv = \frac{1}{\rho}\frac{\partial \tau_x}{\partial z}, \qquad fu = \frac{1}{\rho}\frac{\partial \tau_y}{\partial z}$$

where $\tau$ is the wind stress, $\rho$ is the density, $u$ is the east–west velocity, and $v$ is the north–south velocity.

Integrating each equation over the entire Ekman layer gives the transport relations

$$\tau_x = -f M_y, \qquad \tau_y = f M_x$$

where

$$M_x = \int \rho u \, dz, \qquad M_y = \int \rho v \, dz$$

Here $M_x$ and $M_y$ represent the zonal and meridional mass transport terms, with units of mass per unit time per unit length. Contrary to common intuition, north–south winds cause mass transport in the east–west direction.
In order to understand the vertical velocity structure of the water column, the momentum equations can be rewritten in terms of the vertical eddy viscosity term, using

$$\tau_x = \rho A_z \frac{\partial u}{\partial z}, \qquad \tau_y = \rho A_z \frac{\partial v}{\partial z}$$

where $A_z$ is the vertical eddy viscosity coefficient.

This gives a set of differential equations of the form

$$A_z \frac{\partial^2 u}{\partial z^2} = -fv, \qquad A_z \frac{\partial^2 v}{\partial z^2} = fu$$

In order to solve this system of two differential equations, two boundary conditions can be applied:

$(u, v) \to 0$ as $z \to -\infty$;

friction is equal to wind stress at the free surface ($z = 0$).
Things can be further simplified by considering wind blowing in the y-direction only. This means the results will be relative to a north–south wind (although these solutions could be produced relative to wind in any other direction):

$$u = \pm V_0 \cos\left(\frac{\pi}{4} + \frac{\pi}{D_E}z\right) \exp\left(\frac{\pi}{D_E}z\right)$$

$$v = V_0 \sin\left(\frac{\pi}{4} + \frac{\pi}{D_E}z\right) \exp\left(\frac{\pi}{D_E}z\right)$$

where

$u$ and $v$ are the components of the Ekman flow in the x and y directions;

in the equation for $u$, the plus sign applies to the northern hemisphere and the minus sign to the southern hemisphere;

$V_0 = \dfrac{\sqrt{2}\,\pi\,\tau_y}{D_E\,\rho\,|f|}$ is the surface current speed, with $\tau_y$ the wind stress on the sea surface;

$D_E = \pi\sqrt{2A_z/|f|}$ is the Ekman depth (depth of the Ekman layer).
By solving this at z=0, the surface current is found to be (as expected) 45 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere. This also gives the expected shape of the Ekman spiral, both in magnitude and direction. Integrating these equations over the Ekman layer shows that the net Ekman transport term is 90 degrees to the right (left) of the wind in the Northern (Southern) Hemisphere.
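To make the solution concrete, here is a short Python sketch that evaluates the spiral above for a northward wind stress in the Northern Hemisphere; the density, Coriolis parameter, eddy viscosity, and wind stress values are illustrative assumptions rather than values from the source. It checks numerically that the surface current sits 45° to the right of the wind and that the depth-integrated transport is 90° to the right, with magnitude $\tau_y/f$:

```python
import numpy as np

# Ekman spiral for a purely northward wind stress (Northern Hemisphere).
# All parameter values below are illustrative assumptions, not from the source.
rho = 1025.0     # seawater density (kg/m^3)
f = 1.0e-4       # mid-latitude Coriolis parameter (1/s)
A_z = 1.0e-2     # vertical eddy viscosity (m^2/s)
tau_y = 0.1      # northward wind stress (N/m^2)

D_E = np.pi * np.sqrt(2.0 * A_z / f)                   # Ekman depth (m)
V_0 = np.sqrt(2.0) * np.pi * tau_y / (D_E * rho * f)   # surface speed (m/s)

z = np.linspace(0.0, -5.0 * D_E, 5000)                 # depth axis, z <= 0
decay = np.exp(np.pi * z / D_E)
u = V_0 * np.cos(np.pi / 4.0 + np.pi * z / D_E) * decay  # eastward velocity
v = V_0 * np.sin(np.pi / 4.0 + np.pi * z / D_E) * decay  # northward velocity

# Surface current: 45 degrees to the right of the northward wind (i.e. NE).
print("surface angle from east: %.1f deg" % np.degrees(np.arctan2(v[0], u[0])))

# Depth-integrated mass transport; the minus sign accounts for z decreasing.
M_x = -rho * np.trapz(u, z)    # expected ~ tau_y / f (due east)
M_y = -rho * np.trapz(v, z)    # expected ~ 0
print("M_x = %.1f kg/m/s (tau_y/f = %.1f), M_y = %.2f" % (M_x, tau_y / f, M_y))
```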
Applications
Ekman transport leads to coastal upwelling, which provides the nutrient supply for some of the largest fishing markets on the planet and can impact the stability of the Antarctic Ice Sheet by pulling warm deep water onto the continental shelf. Wind in these regimes blows parallel to the coast (such as along the coast of Peru, where the wind blows out of the southeast, and also in California, where it blows out of the northwest). From Ekman transport, surface water has a net movement of 90° to right of wind direction in the northern hemisphere (left in the southern hemisphere). Because the surface water flows away from the coast, the water must be replaced with water from below. In shallow coastal waters, the Ekman spiral is normally not fully formed and the wind events that cause upwelling episodes are typically rather short. This leads to many variations in the extent of upwelling, but the ideas are still generally applicable.
Ekman transport is similarly at work in equatorial upwelling, where, in both hemispheres, a trade wind component towards the west causes a net transport of water towards the pole, and a trade wind component towards the east causes a net transport of water away from the pole.
On smaller scales, cyclonic winds induce Ekman transport which causes net divergence and upwelling, or Ekman suction, while anti-cyclonic winds cause net convergence and downwelling, or Ekman pumping.
Ekman transport is also a factor in the circulation of the ocean gyres and garbage patches. Ekman transport causes water to flow toward the center of the gyre in all locations, creating a sloped sea-surface, and initiating geostrophic flow (Colling p 65). Harald Sverdrup applied Ekman transport while including pressure gradient forces to develop a theory for this (see Sverdrup balance).
Exceptions
The Ekman theory describing wind-induced current on a rotating planet explains why surface currents in the Northern Hemisphere are generally deflected to the right of the wind direction, and in the Southern Hemisphere to the left. There are also solutions for opposite deflections at periods shorter than the local inertial period, which were not mentioned by Ekman and are seldom observed. A major example of this effect occurs in the Bay of Bengal, where surface flow is offset to the left of the wind direction despite being in the Northern Hemisphere. Ekman's theory can be refined to include this case.
| Physical sciences | Oceanography | Earth science |
11030631 | https://en.wikipedia.org/wiki/Banded%20knifefish | Banded knifefish | The banded knifefish (Gymnotus carapo) is a species of gymnotiform knifefish native to a wide range of freshwater habitats in South America. It is the most widespread species of Gymnotus, but it has frequently been confused with several relatives, including some found outside its range, like the Central American G. maculosus. The English name "banded knifefish" is sometimes used for the entire genus Gymnotus instead of only the species G. carapo.
Range and habitat
This South American fish is found in the Amazon, Orinoco and Río de la Plata basins, as well as rivers in the Guianas, northeastern Brazil (only those exiting along the country's northern coast, such as Parnaíba) and northern Argentina (south to the 36th parallel south), and in Trinidad. This makes it the most widespread species of Gymnotus.
G. carapo occurs in virtually any freshwater habitat in its range, such as rivers and streams (both slow- and fast-flowing), floodplains, estuaries, swamps and lakes. However, it is not known from deep river channels. It can survive in low-oxygen habitats by breathing air with a modified swim bladder, areas affected by pollution, and for a period on land if its aquatic habitat dries out.
Appearance
G. carapo reaches a considerable total length, though most individuals remain well below the recorded maximum, and the average size depends on the exact population. In a study that located two breeding males, the two differed in length. It is brown with an oblique banded pattern. The strength and details of this pattern vary, both individually and depending on region. There are also some morphometric variations depending on location. A review found that these were insufficient for recognizing the populations as separate species, but did recommend recognizing them as subspecies: G. c. carapo (French Guiana and Suriname), G. c. australis (Río de la Plata basin), G. c. caatingaensis (Parnaíba river basin), G. c. madeirensis (upper Madeira river basin), G. c. occidentalis (Western Amazon, and Rio Negro and Essequibo river basins), G. c. orientalis (Eastern Amazon) and G. c. septentrionalis (Orinoco river basin and Trinidad).
Behavior
This species, like all Gymnotiformes, is an electric fish capable of generating a weak electric field and sensing disturbances in that field. This system is used for navigation, finding prey and communicating with other G. carapo. They are highly territorial and will react aggressively on detecting the electric field of another individual of their species. However, they are not able to generate a strong electric discharge for incapacitating prey or enemies, as the related electric eel can.
G. carapo are nocturnal and eat benthos, such as worms, insects, crustaceans, small fish and plant material.
The male takes care of the young by mouthbrooding, and by making and watching over a "nest", a depression in the bottom where the female lays the eggs.
| Biology and health sciences | Gymnotiformes | Animals |
348300 | https://en.wikipedia.org/wiki/Digital%20video%20recorder | Digital video recorder | A digital video recorder (DVR), also referred to as a personal video recorder (PVR) particularly in Canadian and British English, is an electronic device that records video in a digital format to a disk drive, USB flash drive, SD memory card, SSD or other local or networked mass storage device. The term includes set-top boxes (STB) with direct to disk recording, portable media players and TV gateways with recording capability, and digital camcorders. Personal computers can be connected to video capture devices and used as DVRs; in such cases the application software used to record video is an integral part of the DVR. Many DVRs are classified as consumer electronic devices. Similar small devices with built-in (~5 inch diagonal) displays and SSD support may be used for professional film or video production, as these recorders often do not have the limitations that built-in recorders in cameras have, offering wider codec support, the removal of recording time limitations and higher bitrates.
History
Hard-disk-based digital video recorders
The first working DVR prototype was developed in 1998 in the Stanford University Computer Science department. The DVR design was a chapter of Edward Y. Chang's PhD dissertation, supervised by Professors Hector Garcia-Molina and Jennifer Widom. Two design papers were published, at the 1998 VLDB conference and the 1999 ICDE conference. The prototype was built in Pat Hanrahan's CS488 class, Experiments in Digital Television, and was demoed to industrial partners including Sony, Intel, and Apple.
Consumer digital video recorders ReplayTV and TiVo were launched at the 1999 Consumer Electronics Show in Las Vegas, Nevada. Microsoft also demonstrated a unit with DVR capability, but full DVR features did not become available until the end of 1999, in Dish Network's DISHplayer receivers. TiVo shipped their first units on March 31, 1999. ReplayTV won the "Best of Show" award in the video category, with Netscape co-founder Marc Andreessen as an early investor and board member, but TiVo was more successful commercially. Ad Age cited Forrester Research as saying that market penetration by the end of 1999 was "less than 100,000".
Legal action by media companies forced ReplayTV to remove many features such as automatic commercial skip and the sharing of recordings over the Internet, but newer devices have steadily regained these functions while adding complementary abilities, such as recording onto DVDs and programming and remote control facilities using PDAs, networked PCs, and Web browsers.
In contrast to VCRs, hard-disk based digital video recorders make "time shifting" more convenient and also allow for functions such as pausing live TV, instant replay, chasing playback (viewing a recording before it has been completed) and skipping over advertising during playback.
Many DVRs use the MPEG format for compressing the digital video. Video recording capabilities have become an essential part of the modern set-top box, as TV viewers have wanted to take control of their viewing experiences. As consumers have been able to converge increasing amounts of video content on their set-tops, delivered by traditional 'broadcast' cable, satellite and terrestrial as well as IP networks, the ability to capture programming and view it whenever they want has become a must-have function for many consumers.
Digital video recorders tied to a video service
At the 1999 CES, Dish Network demonstrated the hardware that would later have DVR capability with the assistance of Microsoft software, which also included access to the WebTV service. By the end of 1999 the Dishplayer had full DVR capabilities and within a year, over 200,000 units were sold.
In the UK, digital video recorders are often referred to as "plus boxes" (such as BSkyB's Sky+ and Virgin Media's V+, which integrates an HD capability, and the subscription-free Freesat+ and Freeview+). Freeview+ has been around in the UK since the late 2000s, although the platform's first DVR, the Pace Twin, dates to 2002. British Sky Broadcasting marketed a popular combined receiver and DVR as Sky+, now replaced by the Sky Q box. TiVo launched a UK model in 2000; it is no longer supported except through third-party services, although TiVo returned to the UK through Virgin Media in 2010. The South Africa-based satellite TV provider Multichoice launched a DVR which is available on their DStv platform. In addition to ReplayTV and TiVo, there are a number of other suppliers of digital terrestrial (DTT) DVRs, including Technicolor SA, Topfield, Fusion, Commscope, Humax, VBox Communications, AC Ryan Playon and Advanced Digital Broadcast (ADB).
Many satellite, cable and IPTV companies are incorporating digital video recording functions into their set-top box, such as with DirecTiVo, DISHPlayer/DishDVR, Scientific Atlanta Explorer 8xxx from Time Warner, Total Home DVR from AT&T U-verse, Motorola DCT6412 from Comcast and others, Moxi Media Center by Digeo (available through Charter, Adelphia, Sunflower, Bend Broadband, and soon Comcast and other cable companies), or Sky+. Astro introduced their DVR system, called Astro MAX, which was the first PVR in Malaysia but was phased out two years after its introduction.
In the case of digital television, there is no encoding necessary in the DVR since the signal is already a digitally encoded MPEG stream. The digital video recorder simply stores the digital stream directly to disk. Having the broadcaster involved with, and sometimes subsidizing, the design of the DVR can lead to features such as the ability to use interactive TV on recorded shows, pre-loading of programs, or directly recording encrypted digital streams. It can, however, also force the manufacturer to implement non-skippable advertisements and automatically expiring recordings.
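Because no transcoding is involved, the core recording loop of a digital DVR can be very simple. The following minimal Python sketch copies an already-tuned MPEG transport stream to disk; it assumes a Linux DVB tuner exposing the standard /dev/dvb/adapter0/dvr0 device node (tuning itself is out of scope), and the chunk size and output name are illustrative choices, not part of any particular product.

TS_PACKET = 188            # MPEG transport stream packets are 188 bytes
CHUNK = TS_PACKET * 512    # read many packets at a time for efficiency

def record(device="/dev/dvb/adapter0/dvr0", outfile="recording.ts",
           max_bytes=4_000_000_000):
    written = 0
    with open(device, "rb") as src, open(outfile, "wb") as dst:
        while written < max_bytes:
            chunk = src.read(CHUNK)
            if not chunk:
                break              # stream ended or tuner lost lock
            dst.write(chunk)       # stored as-is: no transcoding step
            written += len(chunk)
    return written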
In the United States, the FCC has ruled that starting on July 1, 2007, consumers will be able to purchase a set-top box from a third-party company, rather than being forced to purchase or rent the set-top box from their cable company. This ruling only applies to "navigation devices", otherwise known as a cable television set-top box, and not to the security functions that control the user's access to the content of the cable operator. The overall net effect on digital video recorders and related technology is unlikely to be substantial as standalone DVRs are currently readily available on the open market.
In Europe, free-to-air and pay-TV TV gateways with multiple tuners have whole-house recording capabilities, allowing recording of TV programs to network-attached storage or attached USB storage; recorded programs are then shared across the home network to tablets, smartphones, PCs, Macs and smart TVs.
Introduction of dual tuners
In 2003 many satellite and cable providers introduced dual-tuner digital video recorders; in the UK, BSkyB had introduced its first PVR, Sky+, with dual-tuner support in 2001. These machines have two independent tuners within the same receiver. The main use for this feature is the capability to record a live program while watching another live program simultaneously, or to record two programs at the same time, possibly while watching a previously recorded one. Kogan.com introduced a dual-tuner PVR in the Australian market allowing free-to-air television to be recorded on a removable hard drive. Some dual-tuner DVRs also have the ability to output to two separate television sets at the same time. The PVR manufactured by UEC (Durban, South Africa) and used by Multichoice, and the Scientific Atlanta 8300DVB PVR, can show two programs while recording a third, using a triple tuner.
Where several digital subchannels are transmitted on a single RF channel, some PVRs can record two channels and view a third, so long as all three subchannels are carried on no more than two RF channels.
In the United States, DVRs were used by 32 percent of all TV households in 2009, and 38 percent by 2010, with viewership among 18- to 40-year-olds 40 percent higher in homes that have them.
Types
Integrated television sets
DVRs are integrated into some television sets (TVs). These systems simplify wiring and operation because they employ a single power cable, require no interconnecting cables (e.g., HDMI), and share a common remote control.
VESA compatibility
VESA-compatible DVRs are designed to attach to the VESA mounting holes (100×100 mm) on the back of an LCD television set (TV), allowing users to combine the TV and DVR into an integrated unit.
Set-top boxes (STB)
Over-the-air DVRs are standalone receivers that record broadcast television programs. Several companies have launched over-the-air DVR products for the consumer market over the past few years.
Some pay-TV operators provide receivers that allow subscribers to attach their own network-attached storage (NAS) hard drives or solid-state or flash memory to record video and other media files (e.g., audio and photos).
PC-based
Software and hardware are available which can turn personal computers running Microsoft Windows, Linux, and Mac OS X into DVRs; this is a popular option for home-theater PC (HTPC) enthusiasts.
Linux
There are many free and open-source DVR applications available for Linux. A TV gateway, for example, interfaces to DVB tuners and provides network tuner and TV server functions, allowing live viewing and recording over IP networks. Other examples include MythTV, Video Disk Recorder (VDR), LinuxMCE, TiVo, VBox Home TV Gateway, and Kodi (formerly XBMC).
macOS
Geniatech makes a series of digital video recording devices called EyeTV. The software supplied with each device is also called EyeTV, and is available separately for use on compatible third-party tuners from manufacturers such as Pinnacle, TerraTec, and Hauppauge.
SageTV provided DVR software for the Mac but no longer sells it. Its previously sold software supports the Hauppauge HVR-950, myTV.PVR and HDHomeRun hardware. SageTV software also included the ability to watch YouTube and other online video with a remote control.
MythTV (see above) also runs under Mac OS X, but most recording devices are currently only supported under Linux. Precompiled binaries are available for the MythTV front-end, allowing a Mac to watch video from (and control) a MythTV server running under Linux.
Apple provides applications in the FireWire software developer kit which allow any Mac with a FireWire port to record the MPEG2 transport stream from a FireWire-equipped cable box (for example: Motorola DCT62xx, including HD streams). Applications can also change channels on the cable box via the FireWire interface. Only broadcast channels can be recorded, as the rest of the channels are encrypted. FireRecord (formerly iRecord) is a free scheduled-recording program derived from this SDK.
Windows
There are several free digital video recording applications available for Microsoft Windows including GB-PVR, MediaPortal, and Orb (web-based remote interface).
There are also several commercial applications available including CyberLink, SageTV (which is no longer available after Google acquired it in June 2011), Beyond TV (considered discontinued, without an official announcement from SnapStream, since the last update was October 2010 and the company is concentrating on its enterprise search products), DVBViewer, Showshifter, InterVideo WinDVR, the R5000-HD and Meedio (now a dead product: Yahoo! bought most of the company's technology, discontinued the Meedio line, and rebranded the software Yahoo! Go – TV, which is now a free product but only works in the U.S.). Most TV tuner cards come bundled with software which allows the PC to record television to hard disk. See TV tuner card. For example, Leadtek's WinFast DTV1000 digital TV card comes bundled with the WinFast PVR2 software, which can also record analog video from the card's composite video input socket.
Windows Media Center is a DVR software by Microsoft which was bundled with the Media Center edition of Windows XP, the Home Premium / Ultimate editions of Windows Vista, as well as most editions of Windows 7. When Windows 8 was released in 2012, Windows Media Center was not included with Windows 8 OEM or Retail installations, and was only available as a $15 add-on pack (including DVD Playback codecs) to Windows 8 Pro users.
The Windows Game Bar specifies all recordings made by it as being titled "Microsoft Game DVR" followed by the game or application's title.
Embeddable
An embeddable DVR is a standalone device that is designed to be easily integrated into more complex systems. It is typically supplied as a compact, bare circuit board that facilitates mounting it as a subsystem component within larger equipment. The control keypad is usually connected with a detachable cable, to allow it to be located on the system's exterior while the DVR circuitry resides inside the equipment.
Source video
Television and video are terms that are sometimes used interchangeably, but differ in their technical meaning. Video is the visual portion of television, whereas television is the combination of video and audio modulated onto a carrier frequency (i.e., a television channel) for delivery. Most DVRs can record both video and audio.
Analog sources
The first digital video recorders were designed to record analog television in NTSC, PAL or SECAM formats.
To record an analog signal a few steps are required. In the case of a television signal, a television tuner must first demodulate the radio frequency signal to produce baseband video. The video is then converted to digital form by a frame grabber, which converts each video image into a collection of numeric values that represent the pixels within the image. At the same time, the audio is also converted to digital form by an analog-to-digital converter running at a constant sampling rate. In many devices, the resulting digital video and audio are compressed before recording to reduce the amount of data that will be recorded, although some DVRs record uncompressed data. When compression is used, video is typically compressed using formats such as H.264 or MPEG-2, and audio is compressed using AAC or MP3.
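As a rough illustration of the analog-to-digital step alone, the following Python sketch samples a waveform at a fixed rate and quantizes each sample to a signed integer code, much as a DVR's analog-to-digital converter does in hardware; the 48 kHz rate and 16-bit depth are illustrative assumptions, not a broadcast specification.

import numpy as np

def digitize(analog_signal, duration_s, rate_hz=48_000, bits=16):
    # Sample the signal at fixed instants, then quantize each sample to a
    # signed integer code -- the two steps an ADC performs in hardware.
    t = np.arange(0, duration_s, 1.0 / rate_hz)
    samples = analog_signal(t)                    # values in [-1.0, 1.0]
    levels = 2 ** (bits - 1)
    codes = np.clip(np.round(samples * levels), -levels, levels - 1)
    return codes.astype(np.int16)                 # PCM stream for recording

# Example: one second of a 440 Hz tone at half amplitude.
pcm = digitize(lambda t: 0.5 * np.sin(2 * np.pi * 440 * t), duration_s=1.0)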
Analog broadcast copy protection
Many consumer DVRs implement a copy-protection system called Copy Generation Management System—Analog (CGMS-A), which specifies one of four possible copy permissions by means of two bits encoded in the vertical blanking interval:
Copying is freely allowed
Copying is prohibited
Only one copy of this material may be made
This is a copy of material for which only one copy was allowed to be made, so no further copies are allowed.
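A minimal sketch of how the two CGMS-A bits select one of the four states above; the bit assignments shown follow a commonly documented convention, but should be treated as an assumption rather than a normative reference.

# The two bits carried in the vertical blanking interval select one of
# the four copy states. Assignments follow a commonly documented
# convention (an assumption here, not a normative reference).
CGMS_A_STATES = {
    0b00: "copying freely allowed",
    0b01: "no further copies (this is a copy of copy-once material)",
    0b10: "one copy may be made",
    0b11: "copying prohibited",
}

def copy_permission(bits):
    return CGMS_A_STATES[bits & 0b11]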
CGMS-A information may be present in analog broadcast TV signals, and is preserved when the signal is recorded and played back by analog VCRs. VCRs do not understand the meanings of the bits but preserve them in case there is a subsequent attempt to copy the tape to a DVR.
DVRs such as TiVo also detect and act upon analog protection systems such as Macrovision and DCS Copy Protection which were originally designed to block copying on analog VCRs.
Digital sources
Recording digital signals is generally a straightforward capture of the binary MPEG data being received. No expensive hardware is required to quantize and compress the signal (as the television broadcaster has already done this in the studio).
DVD-based PVRs available on the market as of 2006 are not capable of capturing the full range of the visual signal available with high-definition television (HDTV). This is largely because HDTV standards were finalized at a later time than the standards for DVDs. However, DVD-based PVRs can still be used (albeit at reduced visual quality) with HDTV since currently available HDTV sets also have standard A/V connections.
ATSC broadcast
ATSC television broadcasting is primarily used in North America. The ATSC data stream can be directly recorded by a digital video recorder, though many DVRs record only a subset of this information (that can later be transferred to DVD). An ATSC DVR will also act as a set-top box, allowing older televisions or monitors to receive digital television.
Copy protection
The U.S. FCC attempted to limit the abilities of DVRs with its "broadcast flag" regulation. Digital video recorders that had not won prior approval from the FCC for implementing "effective" digital rights management would have been banned from interstate commerce from July 2005, but the regulation was struck down on May 6, 2005.
DVB
DVB digital television contains audio/visual signals that are broadcast over the air in a digital rather than analog format. The DVB data stream can be directly recorded by the DVR. Devices that can use external storage devices (such as hard disks, SSDs, or other flash storage) to store and recover data without the aid of another device are sometimes called telememory devices.
Digital cable and satellite television
Recording satellite television or digital cable signals on a digital video recorder can be more complex than recording analog signals or broadcast digital signals. There are several different transmission schemes, and the video streams may be encrypted to restrict access to subscribers only.
A satellite or cable set-top box both decrypts the signal if encrypted, and decodes the MPEG stream into an analog signal for viewing on the television. In order to record cable or satellite digital signals the signal must be captured after it has been decrypted but before it is decoded; this is how DVRs built into set-top boxes work.
Cable and satellite providers often offer their own digital video recorders along with a service plan. These DVRs have access to the encrypted video stream, and generally enforce the provider's restrictions on copying of material even after recording.
DVD
Many DVD-based DVRs have the capability to copy content from a source DVD (ripping). In the United States, this is prohibited under the Digital Millennium Copyright Act if the disc is encrypted. Most such DVRs will therefore not allow recording of video streams from encrypted movie discs.
Digital camcorders
A digital camcorder combines a camera and a digital video recorder.
Some DVD-based DVRs incorporate connectors that can be used to capture digital video from a camcorder. Some editing of the resulting DVD is usually possible, such as adding chapter points.
Some digital video recorders can now record to solid-state flash memory cards (called flash camcorders). They generally use Secure Digital cards, can include wireless connections (Bluetooth and Wi-Fi), and can play SWF files. Some digital video recorders write video and graphics in real time to the flash card, an approach called DTE or "direct to edit". These are used to speed up the editing workflow in video and television production, since linear videotapes do not then need to be transferred to the edit workstation (see non-linear editing system).
File formats, resolutions and file systems
DVRs can usually record and play H.264, MPEG-4 Part 2, MPEG-2 .mpg, MPEG-2 .TS, VOB and ISO image video, with MP3 and AC3 audio tracks. They can also display images (JPEG and PNG) and play music files (MP3 and Ogg).
Some devices can be updated to play and record in new formats. DVRs usually record in proprietary file systems for copy protection, although some can use FAT file systems. Recordings from standard-definition television are usually 480i/p or 576i/p, while HDTV recordings are usually 720p or 1080i.
Applications
Security
Digital video recorders configured for physical security applications record video signals from closed-circuit television cameras for detection and documentation purposes. Many are designed to record audio as well. DVRs have evolved into feature-rich devices that provide services well beyond the simple recording of video images previously done by VCRs. A DVR CCTV system provides a multitude of advanced functions over VCR technology, including video searches by event, time, date and camera. There is also much more control over quality and frame rate, allowing disk space usage to be optimized, and the DVR can be set to overwrite the oldest security footage should the disk become full. In some DVR security systems, remote access to security footage using a PC can be achieved by connecting the DVR to a LAN or the Internet.
Some of the latest professional digital video recorders include video analytics firmware, to enable functionality such as 'virtual tripwire' or even the detection of abandoned objects on the scene.
Security DVRs may be categorized as being either PC-based or embedded. A PC-based DVR's architecture is a classical personal computer with video capture cards designed to capture video images. An embedded type DVR is specifically designed as a digital video recorder with its operating system and application software contained in firmware or read-only memory.
Hardware features
Hardware features of security DVRs vary between manufacturers and may include but are not necessarily limited to:
Designed for rack mounting or desktop configurations.
Single or multiple video inputs with connector types consistent with the analogue or digital video provided such as coaxial cable, twisted pair or optical fiber cable. The most common number of inputs are 1, 2, 4, 8, 16 and 32. Systems may be configured with a very large number of inputs by networking or bussing individual DVRs together.
Looping video outputs for each input which duplicates the corresponding input video signal and connector type. These output signals are used by other video equipment such as matrix switchers, multiplexers, and video monitors.
Controlled outputs to external video display monitors.
Front panel switches and indicators that allow the various features of the machine to be controlled.
Network connections consistent with the network type and utilized to control features of the recorder and to send and/or receive video signals.
Connections to external control devices such as keyboards.
A connection to external pan-tilt-zoom drives that position cameras.
Internal CD, DVD, VCR devices typically for archiving video.
Connections to external storage media.
Alarm event inputs from external security detection devices, usually one per video input.
Alarm event outputs from internal detection features such as motion detection or loss of video.
Software features
Software features vary between manufacturers and may include but are not necessarily limited to:
User-selectable image capture rates, either on an all-input basis or an input-by-input basis. The capture rate feature may be programmed to automatically adjust the capture rate on the occurrence of an external alarm or an internal event.
Selectable image resolution either on an all input basis or input by input basis. The image resolution feature may be programmed to automatically adjust the image resolution on the occurrence of an external alarm or an internal event.
Compression methods determine quality of playback. H.264 hardware compression offers fast transfer rates over the Internet with high quality video.
Motion detection: Provided on an input-by-input basis, this feature detects motion in the total image or a user-definable portion of the image and usually provides sensitivity settings (a minimal sketch follows this list). Detection causes an internal event that may be output to external equipment and/or be used to trigger changes in other internal features.
Lack of motion detection. Provided on an input-by-input basis, this feature detects an object moving into the field of view and then remaining still for a user-definable time. Detection causes an internal event that may be output to external equipment and/or used to trigger changes in other internal features.
Direction of motion detection. Provided on an input by input basis, this feature detects the direction of motion in the image that has been determined by the user as an unacceptable occurrence. Detection causes an internal event that may be output to external equipment and/or be used to trigger changes in other internal features.
Routing of input video to video monitors based on user inputs or automatically on alarms or events.
Input, time and date stamping.
Alarm and event logging on appropriate video inputs.
Alarm and event search.
One or more sound recording channels.
Archival.
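As a sketch of the motion-detection feature described in the list above, the following Python function compares successive frames and flags motion when enough pixels change beyond a sensitivity threshold. Frames are assumed to arrive as 8-bit grayscale NumPy arrays; a real DVR would run this per video input and raise an internal event on detection.

import numpy as np

def motion_detected(prev_frame, frame, sensitivity=25, min_fraction=0.01,
                    mask=None):
    # Difference successive 8-bit grayscale frames; flag motion when the
    # fraction of pixels changing by more than `sensitivity` exceeds
    # `min_fraction`. `mask` (a 0/1 array) optionally restricts detection
    # to a user-defined portion of the image.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    if mask is not None:
        diff = diff * mask
    return (diff > sensitivity).mean() > min_fraction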
Privacy concerns
Some (very few), but certainly not all, digital video recorders that are designed to send information to a service provider over a telephone line or Internet connection can gather and send real-time data on users' viewing habits. This problem was noted as early as 2000 and was still considered a problem, specifically with TiVo, in 2015.
Television advertisements
Digital video recorders are also changing the way television programs advertise products. Watching pre-recorded programs allows users to fast-forward through commercials, and some technology allows users to remove commercials entirely. Half of viewers in the United States, for example, use DVRs to skip commercials entirely. This feature has been controversial for the last decade, with major television networks and movie studios claiming it violates copyright and should be banned.
In 1985, an employee of Honeywell's Physical Sciences Center, David Rafner, first described a drive-based DVR designed for home TV recording, time shifting, and commercial skipping. U.S. Patent 4,972,396 focused on a multi-channel design to allow simultaneous independent recording and playback. Broadly anticipating future DVR developments, it describes possible applications such as streaming compression, editing, captioning, multi-channel security monitoring, military sensor platforms, and remotely piloted vehicles.
In 1999, the first DVR which had a built-in commercial skipping feature was introduced by ReplayTV at the Consumer Electronics Show in Las Vegas. In 2002, five owners of the ReplayTV DVR sued the main television networks and movie studios, asking the federal judge to uphold consumers' rights to record TV shows and skip commercials, claiming that features such as commercial skipping help parents protect their kids from excessive consumerism. ReplayTV was purchased by SONICblue in 2001 and in March 2003, SONICblue filed for Chapter 11 bankruptcy after fighting a copyright infringement suit over the ReplayTV's ability to skip commercials. In 2007, DirecTV purchased the remaining assets of ReplayTV.
A third-party add-on for Windows Media Center called "DVRMSToolbox" has the ability to skip commercials.
There is a command-line program called Comskip that detects commercials in an MPEG-2 file and saves their positions to a text file. This file can then be fed to a program like MEncoder to actually remove the commercials.
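A hedged sketch of consuming such a cut list in Python: it assumes an EDL-style output with one "start stop action" triple per line (times in seconds, action 0 meaning "skip"), and converts the cuts into the segments of program material to keep. The file format details are an assumption for illustration, not a specification of Comskip's output.

def commercial_cuts(edl_path):
    # Parse "start stop action" triples (seconds); action 0 means "skip".
    cuts = []
    with open(edl_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and parts[2] == "0":
                cuts.append((float(parts[0]), float(parts[1])))
    return cuts

def segments_to_keep(cuts, total_seconds):
    # Invert the cut list into the program segments that remain.
    keep, pos = [], 0.0
    for start, stop in sorted(cuts):
        if start > pos:
            keep.append((pos, start))
        pos = max(pos, stop)
    if pos < total_seconds:
        keep.append((pos, total_seconds))
    return keep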
Many speculate that television advertisements will be eliminated altogether, replaced by advertising in the TV shows themselves. For example, Extreme Makeover: Home Edition advertises Sears, Kenmore, Kohler, and Home Depot by specifically using products from these companies, and some sports events like the Sprint Cup of NASCAR are named after sponsors.
Another type of advertisement shown more and more, mostly for advertising television shows on the same channel, is where the ad overlays the bottom of the television screen, blocking out some of the picture. "Banners", or "logo bugs", as they are called, are referred to by media companies as Secondary Events (2E). This is done in much the same way as severe weather warnings are done. Sometimes, these take up only 5–10% of the screen, but in the extreme, can take up as much as 25% of the viewing area. Some even make noise or move across the screen. One example of this is the 2E ads for Three Moons Over Milford in the months before its premiere. A video taking up approximately 25% of the bottom-left portion of the screen would show a comet impacting into the moon with an accompanying explosion, during another television program.
Because of this widely used new technology, advertisers are now looking at a new way to market their products on television. An excerpt from the magazine Advertising Age reads: "As advertisers lose the ability to invade the home, and consumer's minds, they will be forced to wait for an invitation. This means that they have to learn what kinds of advertising content customers will actually be willing to seek out and receive."
With ad skipping and the time-sensitive nature of certain ads, advertisers are wary of buying commercial time on shows that are heavily digitally video-recorded. However, technology today makes it possible for networks to insert ads dynamically on videos being played in DVRs. Advertisers could inject time-relevant ads to recorded programs when the program is viewed. This way the ads could be not just topical but also personalized to viewers interests. DirecTV in March 2011 signed an arrangement with NDS Group to enable the delivery of such addressable advertisement.
It is believed that viewers prefer to fast-forward through ads rather than switch the channel, since switching channels risks missing the beginning of their program, and the new channel may also be showing ads. Having the ability to pause, rewind, and fast-forward live TV gives users less reason to change the channel. Skipping ads can also have a later effect on the viewer: ads that catch the viewer's attention may prompt them to rewind and watch what was missed.
In January 2012, Dish Network announced Hopper service, costing $10 extra per month, which recorded prime-time programming from the four major broadcast networks. With the Auto Hop feature, viewers can watch the programs they choose without commercials, without making the effort to fast-forward. On May 24, 2012, Dish and the networks filed suit in federal court.
Patent and copyright litigation
On July 14, 2005, Forgent Networks filed suit against various companies alleging infringement of its patent entitled "Computer controlled video system allowing playback during recording". The listed companies included EchoStar, DirecTV, Charter Communications, Cox Communications, Comcast, Time Warner, and Cable One.
Scientific-Atlanta and Motorola, the manufacturers of the equipment sold by the above-mentioned companies, filed a counter-suit against Forgent Networks claiming that their products do not violate the patent, and that the patent is invalid. The two cases were combined into case 6:06-cv-208, filed in the United States District Court for the Eastern District of Texas, Tyler Division.
According to court documents, on June 20, 2006, Motorola requested that the United States Patent and Trademark Office reexamine the patent, which was first filed in 1991 but has been amended several times.
On March 23, 2007, Cablevision Systems Corp lost a legal battle against several Hollywood studios and television networks to introduce a network-based digital video recorder service to its subscribers. However, on August 4, 2008, Cablevision won its appeal. John M. Walker Jr., a Second Circuit judge, declared that the technology "would not directly infringe" on the media companies' rights. An appeal to the Supreme Court was rejected.
In court, the media companies argued that network digital video recorders were tantamount to video-on-demand, and that they should receive license fees for the recording. Cablevision and the appeals court disagreed. The company noted that each user would record programs on his or her own individual server space, making it a DVR that has a "very long cord".
In 2004, TiVo sued EchoStar Corp, a manufacturer of DVR units, for patent infringement. The parties reached a settlement in 2011 wherein EchoStar pays a one-time fee (in three structured payments) that, upon first payment, grants EchoStar full rights for life to the disputed TiVo patents (as opposed to indefinite and escalating license fees to be constantly renegotiated); EchoStar in turn granted TiVo full rights for life to certain EchoStar patents and dropped its counter-suit against TiVo.
In January 2012, AT&T settled a similar patent-infringement suit brought by TiVo (just as with EchoStar) in exchange for cash payments to TiVo totaling $215 million through June 2018 plus "incremental recurring per subscriber monthly license fees" through July 2018; unlike the EchoStar settlement, it grants no full lifetime rights.
In May 2012, Fox Broadcasting sued Dish Network, arguing that Dish's set-top box with DVR function, which allowed users to automatically record prime-time programs and skip commercials, constituted copyright infringement and breach of contract. In July 2013, the Ninth Circuit rejected Fox's claims.
| Technology | Media and communication: Basics | null |
348472 | https://en.wikipedia.org/wiki/Lady%20Amherst%27s%20pheasant | Lady Amherst's pheasant | Lady Amherst's pheasant (Chrysolophus amherstiae) is a bird of the order Galliformes and the family Phasianidae. The genus name is from Ancient Greek khrusolophos, "with golden crest". The English name and amherstiae commemorate Sarah Amherst, who was responsible for sending the first specimen of the bird to London in 1828. It is also sometimes referred to as the Chinese copper pheasant. Lady Amherst's pheasant is evaluated as Least Concern on the IUCN Red List of Threatened Species.
Distribution and habitat
The species is native to southwestern China and far northern Myanmar, but has been introduced elsewhere. Previously, a self-supporting feral population was established in England, the stronghold of which was in West Bedfordshire. Lady Amherst first introduced the ornamental pheasant on her estates, near the Duke of Bedford's Woburn Abbey, where the birds were also shot for game and interbred. Although the introduced British populations are believed to have been extinct since 2015, occasional sightings of the species have occurred in subsequent years; a Lady Amherst's pheasant was photographed in Staplegrove, Taunton in May 2020, and subsequently, in early March 2021, a Lady Amherst's pheasant was spotted in a garden in Scotland. In October 2022, a Lady Amherst's pheasant was spotted in a park in Bedford.
Description
The adult male is in length, its tail accounting for of the total length. It is unmistakable with its black-and-white nuchal cape and red crest. The long tail is greyish white with black bars and red streaks at the base; the chest and belly are white, the throat is scaled green, the back is dark green, the wings are blue and brown, and the rump is yellow. The "cape" can be raised in display. This species is closely related to the golden pheasant (C. pictus), but is slightly larger and has a yellow eye with blue-green bare skin around it. The bill is horn-coloured and the legs are blue-gray.
The female is much less showy, with a duller mottled brown plumage all over, similar to that of the female common pheasant (P. colchicus) but with finer barring. She is very like the female golden pheasant, but has a darker head and cleaner underparts than the hen of that species.
Despite the male's showy appearance, these birds are very difficult to see in their natural habitat, which is dense, dark forests with thick undergrowth. Consequently, little is known of their behaviour in the wild.
Diet and behaviour
They feed on the ground on grain, leaves and invertebrates, but roost in trees at night. Whilst they can fly, they prefer to run, but if startled they can suddenly burst upwards at great speed, with a distinctive wing sound. The male emits a metallic call in the breeding season.
| Biology and health sciences | Galliformes | Animals |
348898 | https://en.wikipedia.org/wiki/Fatigue%20%28material%29 | Fatigue (material) | In materials science, fatigue is the initiation and propagation of cracks in a material due to cyclic loading. Once a fatigue crack has initiated, it grows a small amount with each loading cycle, typically producing striations on some parts of the fracture surface. The crack will continue to grow until it reaches a critical size, which occurs when the stress intensity factor of the crack exceeds the fracture toughness of the material, producing rapid propagation and typically complete fracture of the structure.
Fatigue has traditionally been associated with the failure of metal components which led to the term metal fatigue. In the nineteenth century, the sudden failing of metal railway axles was thought to be caused by the metal crystallising because of the brittle appearance of the fracture surface, but this has since been disproved. Most materials, such as composites, plastics and ceramics, seem to experience some sort of fatigue-related failure.
To aid in predicting the fatigue life of a component, fatigue tests are carried out using coupons to measure the rate of crack growth by applying constant amplitude cyclic loading and averaging the measured growth of a crack over thousands of cycles. However, there are also a number of special cases that need to be considered where the rate of crack growth is significantly different compared to that obtained from constant amplitude testing, such as the reduced rate of growth that occurs for small loads near the threshold or after the application of an overload, and the increased rate of crack growth associated with short cracks or after the application of an underload.
If the loads are above a certain threshold, microscopic cracks will begin to initiate at stress concentrations such as holes, persistent slip bands (PSBs), composite interfaces or grain boundaries in metals. The stress values that cause fatigue damage are typically much less than the yield strength of the material.
Stages of fatigue
Historically, fatigue has been separated into regions of high cycle fatigue, which requires more than 10⁴ cycles to failure and where stress is low and deformation primarily elastic, and low cycle fatigue, where there is significant plasticity. Experiments have shown that low cycle fatigue is also crack growth.
Fatigue failures, both for high and low cycles, all follow the same basic steps: crack initiation, crack growth stages I and II, and finally ultimate failure. To begin the process, cracks must nucleate within a material. This process can occur either at stress risers in metallic samples or at areas with a high void density in polymer samples. These cracks propagate slowly at first during stage I crack growth along crystallographic planes, where shear stresses are highest. Once the cracks reach a critical size they propagate quickly during stage II crack growth in a direction perpendicular to the applied force. These cracks can eventually lead to the ultimate failure of the material, often in a brittle catastrophic fashion.
Crack initiation
The formation of initial cracks preceding fatigue failure is a separate process consisting of four discrete steps in metallic samples. The material will develop cell structures and harden in response to the applied load. This causes the amplitude of the applied stress to increase given the new restraints on strain. These newly formed cell structures will eventually break down with the formation of persistent slip bands (PSBs). Slip in the material is localized at these PSBs, and the exaggerated slip can now serve as a stress concentrator for a crack to form. Nucleation and growth of a crack to a detectable size accounts for most of the cracking process. It is for this reason that cyclic fatigue failures seem to occur so suddenly where the bulk of the changes in the material are not visible without destructive testing. Even in normally ductile materials, fatigue failures will resemble sudden brittle failures.
PSB-induced slip planes result in intrusions and extrusions along the surface of a material, often occurring in pairs. This slip is not a microstructural change within the material, but rather a propagation of dislocations within the material. Instead of a smooth interface, the intrusions and extrusions will cause the surface of the material to resemble the edge of a deck of cards, where not all cards are perfectly aligned. Slip-induced intrusions and extrusions create extremely fine surface structures on the material. With surface structure size inversely related to stress concentration factors, PSB-induced surface slip can cause fractures to initiate.
These steps can also be bypassed entirely if the cracks form at a pre-existing stress concentrator such as from an inclusion in the material or from a geometric stress concentrator caused by a sharp internal corner or fillet.
Crack growth
Most of the fatigue life is generally consumed in the crack growth phase. The rate of growth is primarily driven by the range of cyclic loading although additional factors such as mean stress, environment, overloads and underloads can also affect the rate of growth. Crack growth may stop if the loads are small enough to fall below a critical threshold.
Fatigue cracks can grow from material or manufacturing defects from as small as 10 μm.
When the rate of growth becomes large enough, fatigue striations can be seen on the fracture surface. Striations mark the position of the crack tip and the width of each striation represents the growth from one loading cycle. Striations are a result of plasticity at the crack tip.
When the stress intensity exceeds a critical value known as the fracture toughness, unsustainable fast fracture will occur, usually by a process of microvoid coalescence. Prior to final fracture, the fracture surface may contain a mixture of areas of fatigue and fast fracture.
Acceleration and retardation
The following effects change the rate of growth:
Mean stress effect: Higher mean stress increases the rate of crack growth.
Environment: Increased moisture increases the rate of crack growth. In the case of aluminium, cracks generally grow from the surface, where water vapour from the atmosphere is able to reach the tip of the crack and dissociate into atomic hydrogen which causes hydrogen embrittlement. Cracks growing internally are isolated from the atmosphere and grow in a vacuum where the rate of growth is typically an order of magnitude slower than a surface crack.
Short crack effect: In 1975, Pearson observed that short cracks grow faster than expected. Possible reasons for the short crack effect include the presence of the T-stress, the tri-axial stress state at the crack tip, the lack of crack closure associated with short cracks and the large plastic zone in comparison to the crack length. In addition, long cracks typically experience a threshold which short cracks do not have. There are a number of criteria for short cracks:
cracks are typically smaller than 1 mm,
cracks are smaller than the material microstructure size such as the grain size, or
crack length is small compared to the plastic zone.
Underloads: Small numbers of underloads increase the rate of growth and may counteract the effect of overloads.
Overloads: Initially, overloads (> 1.5 times the maximum load in a sequence) lead to a small increase in the rate of growth followed by a long reduction in the rate of growth.
Characteristics of fatigue
In metal alloys, and for the simplifying case when there are no macroscopic or microscopic discontinuities, the process starts with dislocation movements at the microscopic level, which eventually form persistent slip bands that become the nucleus of short cracks.
Macroscopic and microscopic discontinuities (at the crystalline grain scale) as well as component design features which cause stress concentrations (holes, keyways, sharp changes of load direction etc.) are common locations at which the fatigue process begins.
Fatigue is a process that has a degree of randomness (stochastic), often showing considerable scatter even in seemingly identical samples in well controlled environments.
Fatigue is usually associated with tensile stresses but fatigue cracks have been reported due to compressive loads.
The greater the applied stress range, the shorter the life.
Fatigue life scatter tends to increase for longer fatigue lives.
Damage is irreversible. Materials do not recover when rested.
Fatigue life is influenced by a variety of factors, such as temperature, surface finish, metallurgical microstructure, presence of oxidizing or inert chemicals, residual stresses, scuffing contact (fretting), etc.
Some materials (e.g., some steel and titanium alloys) exhibit a theoretical fatigue limit below which continued loading does not lead to fatigue failure.
High cycle fatigue strength (about 10⁴ to 10⁸ cycles) can be described by stress-based parameters. A load-controlled servo-hydraulic test rig is commonly used in these tests, with frequencies of around 20–50 Hz. Other sorts of machines—like resonant magnetic machines—can also be used, to achieve frequencies up to 250 Hz.
Low-cycle fatigue (loading that typically causes failure in less than 10⁴ cycles) is associated with localized plastic behavior in metals; thus, a strain-based parameter should be used for fatigue life prediction in metals. Testing is conducted with constant strain amplitudes, typically at 0.01–5 Hz.
Timeline of research history
1837: Wilhelm Albert publishes the first article on fatigue. He devised a test machine for conveyor chains used in the Clausthal mines.
1839: Jean-Victor Poncelet describes metals as being 'tired' in his lectures at the military school at Metz.
1842: William John Macquorn Rankine recognises the importance of stress concentrations in his investigation of railroad axle failures. The Versailles train wreck was caused by fatigue failure of a locomotive axle.
1843: Joseph Glynn reports on the fatigue of an axle on a locomotive tender. He identifies the keyway as the crack origin.
1848: The Railway Inspectorate reports one of the first tyre failures, probably from a rivet hole in the tread of a railway carriage wheel. It was likely a fatigue failure.
1849: Eaton Hodgkinson is granted a "small sum of money" to report to the UK Parliament on his work in "ascertaining by direct experiment, the effects of continued changes of load upon iron structures and to what extent they could be loaded without danger to their ultimate security".
1854: F. Braithwaite reports on common service fatigue failures and coins the term fatigue.
1860: Systematic fatigue testing undertaken by Sir William Fairbairn and August Wöhler.
1870: A. Wöhler summarises his work on railroad axles. He concludes that cyclic stress range is more important than peak stress and introduces the concept of endurance limit.
1903: Sir James Alfred Ewing demonstrates the origin of fatigue failure in microscopic cracks.
1910: O. H. Basquin proposes a log-log relationship for S-N curves, using Wöhler's test data.
1940: Sidney M. Cadwell publishes the first rigorous study of fatigue in rubber.
1945: A. M. Miner popularises Palmgren's (1924) linear damage hypothesis as a practical design tool.
1952: W. Weibull proposes an S-N curve model.
1954: The world's first commercial jetliner, the de Havilland Comet, suffers disaster as three planes break up in mid-air, causing de Havilland and all other manufacturers to redesign high altitude aircraft and in particular replace square apertures like windows with oval ones.
1954: L. F. Coffin and S. S. Manson explain fatigue crack-growth in terms of plastic strain in the tip of cracks.
1961: P. C. Paris proposes methods for predicting the rate of growth of individual fatigue cracks in the face of initial scepticism and popular defence of Miner's phenomenological approach.
1968: Tatsuo Endo and M. Matsuishi devise the rainflow-counting algorithm and enable the reliable application of Miner's rule to random loadings.
1970: Smith, Watson, and Topper develop a mean stress correction model, where the fatigue damage in a cycle is determined by the product of the maximum stress and strain amplitude.
1970: W. Elber elucidates the mechanisms and importance of crack closure in slowing the growth of a fatigue crack due to the wedging effect of plastic deformation left behind the tip of the crack.
1973: M. W. Brown and K. J. Miller observe that fatigue life under multiaxial conditions is governed by the experience of the plane receiving the most damage, and that both tension and shear loads on the critical plane must be considered.
Predicting fatigue life
The American Society for Testing and Materials defines fatigue life, Nf, as the number of stress cycles of a specified character that a specimen sustains before failure of a specified nature occurs. For some materials, notably steel and titanium, there is a theoretical value for stress amplitude below which the material will not fail for any number of cycles, called a fatigue limit or endurance limit. However, in practice, several bodies of work done at greater numbers of cycles suggest that fatigue limits do not exist for any metals.
Engineers have used a number of methods to determine the fatigue life of a material:
the stress-life method,
the strain-life method,
the crack growth method and
probabilistic methods, which can be based on either life or crack growth methods.
Whether using the stress/strain-life approach or the crack growth approach, complex or variable amplitude loading is reduced to a series of fatigue-equivalent simple cyclic loadings using a technique such as the rainflow-counting algorithm.
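A minimal Python sketch of this cycle-extraction step, using the widely published three-point rainflow procedure; half cycles are weighted 0.5. This is a simplified illustration rather than the full ASTM E1049 method.

def rainflow(series):
    # Reduce the load history to its turning points (local extrema).
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x                    # still rising/falling: extend
        elif x != tp[-1]:
            tp.append(x)                  # direction reversed
    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])   # current range
            y_rng = abs(stack[-2] - stack[-3])   # previous range
            if x_rng < y_rng:
                break
            if len(stack) == 3:
                cycles.append((y_rng, 0.5))      # range contains the start
                stack.pop(0)
            else:
                cycles.append((y_rng, 1.0))      # full cycle extracted
                del stack[-3:-1]
    for i in range(len(stack) - 1):              # leftovers: half cycles
        cycles.append((abs(stack[i + 1] - stack[i]), 0.5))
    return cycles                                # list of (range, count)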
Stress-life and strain-life methods
A mechanical part is often exposed to a complex, often random, sequence of loads, large and small. In order to assess the safe life of such a part using the fatigue damage or stress/strain-life methods the following series of steps is usually performed:
Complex loading is reduced to a series of simple cyclic loadings using a technique such as rainflow analysis;
A histogram of cyclic stress is created from the rainflow analysis to form a fatigue damage spectrum;
For each stress level, the degree of cumulative damage is calculated from the S-N curve; and
The effects of the individual contributions are combined using an algorithm such as Miner's rule.
Since S-N curves are typically generated for uniaxial loading, some equivalence rule is needed whenever the loading is multiaxial. For simple, proportional loading histories (lateral load in a constant ratio with the axial), Sines rule may be applied. For more complex situations, such as non-proportional loading, critical plane analysis must be applied.
Miner's rule
In 1945, Milton A. Miner popularised a rule that had first been proposed by Arvid Palmgren in 1924. The rule, variously called Miner's rule or the Palmgren–Miner linear damage hypothesis, states that where there are k different stress magnitudes in a spectrum, S_i (1 ≤ i ≤ k), each contributing n_i(S_i) cycles, then if N_i(S_i) is the number of cycles to failure of a constant stress reversal S_i (determined by uni-axial fatigue tests), failure occurs when:

$\sum_{i=1}^{k} \frac{n_i}{N_i} = C$
Usually, for design purposes, C is assumed to be 1. This can be thought of as assessing what proportion of life is consumed by a linear combination of stress reversals at varying magnitudes.
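A minimal sketch of the rule in Python, under an assumed Basquin-type S-N curve N(S) = (S/A)^(1/b); the constants A and b are illustrative fitting parameters, not material data.

def cycles_to_failure(S, A=900.0, b=-0.12):
    # Assumed Basquin-type S-N curve: S = A * N**b  =>  N = (S / A)**(1 / b)
    return (S / A) ** (1.0 / b)

def miner_damage(spectrum):
    # spectrum: iterable of (stress_amplitude, applied_cycles) pairs.
    # Failure is predicted when the damage sum reaches C, usually taken as 1.
    return sum(n / cycles_to_failure(S) for S, n in spectrum)

# e.g. miner_damage([(300.0, 1e3), (180.0, 1e5)]) -> fraction of life used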
Although Miner's rule may be a useful approximation in many circumstances, it has several major limitations:
It fails to recognize the probabilistic nature of fatigue and there is no simple way to relate life predicted by the rule with the characteristics of a probability distribution. Industry analysts often use design curves, adjusted to account for scatter, to calculate Ni(Si).
The sequence in which high vs. low stress cycles are applied to a sample in fact affects the fatigue life, for which Miner's rule does not account. In some circumstances, cycles of low stress followed by high stress cause more damage than the rule would predict. It does not consider the effect of an overload or high stress which may result in a compressive residual stress that may retard crack growth. High stress followed by low stress may cause less damage due to the presence of compressive residual stress (or localized plastic damage around the crack tip).
Stress-life (S-N) method
Materials fatigue performance is commonly characterized by an S-N curve, also known as a Wöhler curve. This is often plotted with the cyclic stress (S) against the cycles to failure (N) on a logarithmic scale. S-N curves are derived from tests on samples of the material to be characterized (often called coupons or specimens), where a regular sinusoidal stress is applied by a testing machine which also counts the number of cycles to failure. This process is sometimes known as coupon testing. For greater accuracy but lower generality, component testing is used. Each coupon or component test generates a point on the plot, though in some cases there is a runout where the time to failure exceeds that available for the test (see censoring). Analysis of fatigue data requires techniques from statistics, especially survival analysis and linear regression.
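For example, Basquin's law S = A·N^b can be fitted to coupon data by linear regression in log-log space. The data values below are invented for illustration, and runouts, which require survival-analysis methods, are ignored here.

import numpy as np

# Invented coupon results: cycles to failure and stress amplitude (MPa).
N = np.array([1e4, 5e4, 2e5, 1e6, 5e6])
S = np.array([420.0, 360.0, 310.0, 265.0, 230.0])

# Fit log10(S) = b * log10(N) + log10(A), i.e. Basquin's S = A * N**b.
b, logA = np.polyfit(np.log10(N), np.log10(S), 1)
A = 10.0 ** logA
print(f"fitted Woehler curve: S = {A:.0f} * N**({b:.3f})")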
The progression of the S-N curve can be influenced by many factors such as stress ratio (mean stress), loading frequency, temperature, corrosion, residual stresses, and the presence of notches. A constant fatigue life (CFL) diagram is useful for the study of stress ratio effect. The Goodman line is a method used to estimate the influence of the mean stress on the fatigue strength.
Also, in the presence of a steady stress superimposed on the cyclic loading, the Goodman relation can be used to estimate a failure condition. It plots stress amplitude against mean stress, with the fatigue limit and the ultimate tensile strength of the material as the two extremes. Alternative failure criteria include Soderberg and Gerber.
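A small sketch of applying the Goodman relation to convert an amplitude at non-zero mean stress into the equivalent fully reversed amplitude; the rearranged form in the comment is the standard straight-line relation, and the example values are illustrative.

def goodman_equivalent_amplitude(S_a, S_m, S_u):
    # Goodman line: S_a / S_ar + S_m / S_u = 1, solved for the equivalent
    # fully reversed amplitude S_ar; S_u is the ultimate tensile strength.
    if S_m >= S_u:
        raise ValueError("mean stress must be below the ultimate strength")
    return S_a / (1.0 - S_m / S_u)

# e.g. 200 MPa amplitude at 150 MPa mean with S_u = 600 MPa -> ~266.7 MPa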
As coupons sampled from a homogeneous frame will display a variation in their number of cycles to failure, the S-N curve should more properly be a Stress-Cycle-Probability (S-N-P) curve to capture the probability of failure after a given number of cycles of a certain stress.
With body-centered cubic materials (bcc), the Wöhler curve often becomes a horizontal line with decreasing stress amplitude, i.e. a true fatigue limit can be assigned to these materials. With face-centered cubic metals (fcc), the Wöhler curve generally drops continuously, so that only a fatigue strength at a given number of cycles can be assigned to these materials.
Strain-life (ε-N) method
When strains are no longer elastic, such as in the presence of stress concentrations, the total strain can be used instead of stress as a similitude parameter. This is known as the strain-life method. The total strain amplitude is the sum of the elastic strain amplitude and the plastic strain amplitude and is given by

$\frac{\Delta\varepsilon}{2} = \frac{\Delta\varepsilon_e}{2} + \frac{\Delta\varepsilon_p}{2}.$

Basquin's equation for the elastic strain amplitude is

$\frac{\Delta\varepsilon_e}{2} = \frac{\sigma_f'}{E}(2N_f)^b$

where $E$ is Young's modulus.

The relation for high cycle fatigue can be expressed using the elastic strain amplitude

$\frac{\Delta\varepsilon_e}{2} = \frac{\sigma_f'}{E}(2N_f)^b$

where $\sigma_f'$ is a parameter that scales with tensile strength, obtained by fitting experimental data, $N_f$ is the number of cycles to failure and $b$ is the slope of the log–log curve, again determined by curve fitting.

In 1954, Coffin and Manson proposed that the fatigue life of a component was related to the plastic strain amplitude using

$\frac{\Delta\varepsilon_p}{2} = \varepsilon_f'(2N_f)^c.$

Combining the elastic and plastic portions gives the total strain amplitude accounting for both low and high cycle fatigue

$\frac{\Delta\varepsilon}{2} = \frac{\sigma_f'}{E}(2N_f)^b + \varepsilon_f'(2N_f)^c$

where $\sigma_f'$ is the fatigue strength coefficient, $b$ is the fatigue strength exponent, $\varepsilon_f'$ is the fatigue ductility coefficient, $c$ is the fatigue ductility exponent, and $N_f$ is the number of cycles to failure ($2N_f$ being the number of reversals to failure).
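Because the combined strain-life relation has no closed-form inverse, the life is usually found numerically. The Python sketch below bisects in log space for the number of reversals 2N_f; the material constants are illustrative values for a generic steel, not handbook data.

def strain_amplitude(two_Nf, E=200e3, sf=900.0, b=-0.09, ef=0.6, c=-0.6):
    # Total strain amplitude from the combined Basquin/Coffin-Manson law.
    # Illustrative constants: E and sf in MPa for a generic steel.
    return (sf / E) * two_Nf ** b + ef * two_Nf ** c

def reversals_to_failure(target, lo=1.0, hi=1e12):
    # Amplitude falls monotonically with life, so bisect in log space.
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if strain_amplitude(mid) > target:
            lo = mid           # predicted life is longer than mid
        else:
            hi = mid
    return (lo * hi) ** 0.5    # 2*Nf; cycles to failure Nf is half of this

# e.g. reversals_to_failure(0.004) -> life at 0.4 % total strain amplitude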
Crack growth methods
An estimate of the fatigue life of a component can be made using a crack growth equation by summing up the width of each increment of crack growth for each loading cycle. Safety or scatter factors are applied to the calculated life to account for any uncertainty and variability associated with fatigue. The rate of growth used in crack growth predictions is typically measured by applying thousands of constant amplitude cycles to a coupon and measuring the rate of growth from the change in compliance of the coupon or by measuring the growth of the crack on the surface of the coupon. Standard methods for measuring the rate of growth have been developed by ASTM International.
Crack growth equations such as the Paris–Erdoğan equation are used to predict the life of a component. They can be used to predict the growth of a crack from 10 μm to failure. For normal manufacturing finishes this may cover most of the fatigue life of a component, where growth can start from the first cycle. The conditions at the crack tip of a component are usually related to the conditions of a test coupon using a characterising parameter such as the stress intensity, J-integral or crack tip opening displacement. All these techniques aim to match the crack tip conditions on the component to those of the test coupons, which give the rate of crack growth.
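A minimal sketch of the increment-summing approach described above, assuming Paris–Erdoğan growth $da/dN = C(\Delta K)^m$ with $\Delta K = Y\,\Delta\sigma\sqrt{\pi a}$; the constants C, m and Y below are illustrative placeholders, not values for any real material:

```python
import math

# Paris-Erdogan crack growth: da/dN = C * (delta_K)**m, with
# delta_K = Y * delta_sigma * sqrt(pi * a).  Units: metres, MPa.
C, m, Y = 1e-12, 3.0, 1.12   # illustrative constants
delta_sigma = 100.0           # applied stress range, MPa

def cycles_between(a_initial, a_final, da=1e-6):
    """Sum life increments dN = da / (C * delta_K**m) over small crack steps."""
    cycles, a = 0.0, a_initial
    while a < a_final:
        delta_K = Y * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * delta_K ** m)
        a += da
    return cycles

# Life to grow a crack from 10 um to 5 mm under the assumed loading.
print(f"{cycles_between(10e-6, 5e-3):.3g} cycles")
```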
Additional models may be necessary to include retardation and acceleration effects associated with overloads or underloads in the loading sequence. In addition, small crack growth data may be needed to match the increased rate of growth seen with small cracks.
Typically, a cycle counting technique such as rainflow-cycle counting is used to extract the cycles from a complex sequence. This technique, along with others, has been shown to work with crack growth methods.
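A simplified sketch of rainflow counting using the four-point rule, which extracts full cycles only; the residue handling required by standards such as ASTM E1049 is omitted for brevity:

```python
def rainflow_full_cycles(turning_points):
    """Four-point rainflow counting: whenever the inner range of four
    consecutive turning points is no larger than both adjacent ranges,
    extract the inner pair as one full cycle.  Returns (cycles, residue),
    where cycles is a list of (range, mean) pairs."""
    stack, cycles = [], []
    for point in turning_points:
        stack.append(point)
        while len(stack) >= 4:
            s1, s2, s3, s4 = stack[-4:]
            inner = abs(s3 - s2)
            if inner <= abs(s2 - s1) and inner <= abs(s4 - s3):
                cycles.append((inner, (s2 + s3) / 2.0))  # (range, mean)
                del stack[-3:-1]                          # drop s2 and s3
            else:
                break
    return cycles, stack  # stack holds the uncounted residue (half cycles)

load = [0, 5, -3, 4, -2, 6, -1, 3, 0]   # already reduced to turning points
full, residue = rainflow_full_cycles(load)
print(full, residue)
```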
Crack growth methods have the advantage that they can predict the intermediate size of cracks. This information can be used to schedule inspections on a structure to ensure safety whereas strain/life methods only give a life until failure.
Dealing with fatigue
Design
Dependable design against fatigue failure requires thorough education and supervised experience in structural engineering, mechanical engineering, or materials science. There are at least five principal approaches to life assurance for mechanical parts, which display increasing degrees of sophistication:
Design to keep stress below threshold of fatigue limit (infinite lifetime concept);
Fail-safe, graceful degradation, and fault-tolerant design: Instruct the user to replace parts when they fail. Design in such a way that there is no single point of failure, and so that when any one part completely fails, it does not lead to catastrophic failure of the entire system.
Safe-life design: Design (conservatively) for a fixed life after which the user is instructed to replace the part with a new one (a so-called lifed part, finite lifetime concept, or "safe-life" design practice); planned obsolescence and disposable product are variants that design for a fixed life after which the user is instructed to replace the entire device;
Damage tolerance: An approach that ensures aircraft safety by assuming the presence of cracks or defects even in new aircraft. Crack growth calculations, periodic inspections, and component repair or replacement can be used to ensure that critical components which may contain cracks remain safe. Inspections usually use nondestructive testing to limit or monitor the size of possible cracks and require an accurate prediction of the rate of crack growth between inspections. The designer sets an aircraft maintenance check schedule frequent enough that parts are replaced while the crack is still in the "slow growth" phase. This is often referred to as damage tolerant design or "retirement-for-cause".
Risk Management: Ensures the probability of failure remains below an acceptable level. This approach is typically used for aircraft where acceptable levels may be based on probability of failure during a single flight or taken over the lifetime of an aircraft. A component is assumed to have a crack with a probability distribution of crack sizes. This approach can consider variability in values such as crack growth rates, usage and critical crack size. It is also useful for considering damage at multiple locations that may interact to produce multi-site or widespread fatigue damage. Probability distributions that are common in data analysis and in design against fatigue include the log-normal distribution, extreme value distribution, Birnbaum–Saunders distribution, and Weibull distribution.
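A minimal Monte Carlo sketch of the risk-management idea just described: sample uncertain crack sizes, growth rates, and critical sizes from assumed distributions and estimate a probability of failure. Every distribution, parameter, and number here is hypothetical, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input uncertainty (all values illustrative):
n = 100_000
initial_size  = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)  # mm
growth_per_yr = rng.lognormal(mean=np.log(0.2), sigma=0.4, size=n)  # mm/yr
critical_size = rng.weibull(a=4.0, size=n) * 10.0                   # mm

# Probability that the crack exceeds the critical size within 5 years,
# using a crude linear growth model for illustration.
years = 5.0
final_size = initial_size + growth_per_yr * years
prob_failure = np.mean(final_size > critical_size)
print(f"Estimated probability of failure in {years:.0f} yr: {prob_failure:.4f}")
```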
Testing
Fatigue testing can be used on components such as a coupon or a full-scale test article to determine:
the rate of crack growth and fatigue life
location of critical regions
degree of fail-safety when part of the structure fails
the origin and cause of the crack initiating defect from fractographic examination of the crack.
These tests may form part of the certification process such as for airworthiness certification.
Repair
Stop drill. Fatigue cracks that have begun to propagate can sometimes be stopped by drilling holes, called drill stops, at the tip of the crack. The possibility remains of a new crack starting in the side of the hole.
Blend. Small cracks can be blended away and the surface cold worked or shot peened.
Oversize holes. Holes with cracks growing from them can be drilled out to a larger hole to remove cracking and bushed to restore the original hole. Bushes can be cold-shrunk interference fit bushes to induce beneficial compressive residual stresses. The oversized hole can also be cold worked by drawing an oversized mandrel through the hole.
Patch. Cracks may be repaired by installing a patch or repair fitting. Composite patches have been used to restore the strength of aircraft wings after cracks have been detected or to lower the stress prior to cracking in order to improve the fatigue life. Patches may restrict the ability to monitor fatigue cracks and may need to be removed and replaced for inspections.
Life improvement
Change material. Changes in the materials used in parts can also improve fatigue life. For example, parts can be made from better fatigue rated metals. Complete replacement and redesign of parts can also reduce if not eliminate fatigue problems. Thus helicopter rotor blades and propellers in metal are being replaced by composite equivalents. They are not only lighter, but also much more resistant to fatigue. They are more expensive, but the extra cost is amply repaid by their greater integrity, since loss of a rotor blade usually leads to total loss of the aircraft. A similar argument has been made for replacement of metal fuselages, wings and tails of aircraft.
Induce residual stresses. Peening a surface can reduce tensile stresses and create compressive residual stress, which prevents crack initiation. Forms of peening include: shot peening, using high-speed projectiles; high-frequency impact treatment (also called high-frequency mechanical impact), using a mechanical hammer; and laser peening, which uses high-energy laser pulses. Low plasticity burnishing can also be used to induce compressive stress in fillets, and cold work mandrels can be used for holes. Increases in fatigue life and strength are proportionally related to the depth of the compressive residual stresses imparted. Shot peening imparts compressive residual stresses approximately 0.005 inches (0.1 mm) deep, while laser peening can go 0.040 to 0.100 inches (1 to 2.5 mm) deep, or deeper.
Deep cryogenic treatment. The use of deep cryogenic treatment has been shown to increase resistance to fatigue failure. Springs used in industry, auto racing and firearms have been shown to last up to six times longer when treated. Heat checking, which is a form of thermal cyclic fatigue, has been greatly delayed.
Re-profiling. Changing the shape of a stress concentration such as a hole or cutout may be used to extend the life of a component. Shape optimisation using numerical optimisation algorithms has been used to lower the stress concentration in wings and increase their life.
Fatigue of composites
Composite materials can offer excellent resistance to fatigue loading. In general, composites exhibit good fracture toughness and, unlike metals, increase fracture toughness with increasing strength. The critical damage size in composites is also greater than that for metals.
The primary mode of damage in a metal structure is cracking. For metal, cracks propagate in a relatively well-defined manner with respect to the applied stress, and the critical crack size and rate of crack propagation can be related to specimen data through analytical fracture mechanics. However, with composite structures, there is no single damage mode which dominates. Matrix cracking, delamination, debonding, voids, fiber fracture, and composite cracking can all occur separately and in combination, and the predominance of one or more is highly dependent on the laminate orientations and loading conditions. In addition, the unique joints and attachments used for composite structures often introduce modes of failure different from those typified by the laminate itself.
The composite damage propagates in a less regular manner and damage modes can change. Experience with composites indicates that the rate of damage propagation does not exhibit the two distinct regions of initiation and propagation seen in metals: in metals the initiation and propagation stages differ significantly in rate, whereas with composites the difference is far less apparent. Fatigue cracks of composites may form in the matrix and propagate slowly, since the matrix carries such a small fraction of the applied stress, while the fibers in the wake of the crack experience fatigue damage. In many cases, the damage rate is accelerated by deleterious interactions with the environment, such as oxidation or corrosion of fibers.
Notable fatigue failures
Versailles train crash
Following King Louis-Philippe I's celebrations at the Palace of Versailles, a train returning to Paris crashed in May 1842 at Meudon after the leading locomotive broke an axle. The carriages behind piled into the wrecked engines and caught fire. At least 55 passengers trapped in the locked carriages were killed, including the explorer Jules Dumont d'Urville. This accident is known in France as the catastrophe ferroviaire de Meudon. The accident was witnessed by the British locomotive engineer Joseph Locke and widely reported in Britain. It was discussed extensively by engineers, who sought an explanation.
The derailment had been the result of a broken locomotive axle. Rankine's investigation of broken axles in Britain highlighted the importance of stress concentration, and the mechanism of crack growth with repeated loading. His and other papers suggesting a crack growth mechanism through repeated stressing, however, were ignored, and fatigue failures occurred at an ever-increasing rate on the expanding railway system. Other spurious theories seemed to be more acceptable, such as the idea that the metal had somehow "crystallized". The notion was based on the crystalline appearance of the fast fracture region of the crack surface, but ignored the fact that the metal was already highly crystalline.
de Havilland Comet
Two de Havilland Comet passenger jets broke up in mid-air and crashed within a few months of each other in 1954. As a result, systematic tests were conducted on a fuselage immersed and pressurised in a water tank. After the equivalent of 3,000 flights, investigators at the Royal Aircraft Establishment (RAE) were able to conclude that the crash had been due to failure of the pressure cabin at the forward Automatic Direction Finder window in the roof. This 'window' was in fact one of two apertures for the aerials of an electronic navigation system in which opaque fibreglass panels took the place of the window 'glass'. The failure was a result of metal fatigue caused by the repeated pressurisation and de-pressurisation of the aircraft cabin. Also, the supports around the windows were riveted, not bonded, as the original specifications for the aircraft had called for. The problem was exacerbated by the punch rivet construction technique employed. Unlike drill riveting, the imperfect nature of the hole created by punch riveting caused manufacturing defect cracks which may have caused the start of fatigue cracks around the rivet.
The Comet's pressure cabin had been designed to a safety factor comfortably in excess of that required by British Civil Airworthiness Requirements (2.5 times the cabin proof test pressure as opposed to the requirement of 1.33 times and an ultimate load of 2.0 times the cabin pressure) and the accident caused a revision in the estimates of the safe loading strength requirements of airliner pressure cabins.
In addition, it was discovered that the stresses around pressure cabin apertures were considerably higher than had been anticipated, especially around sharp-cornered cut-outs, such as windows. As a result, all future jet airliners would feature windows with rounded corners, greatly reducing the stress concentration. This was a noticeable distinguishing feature of all later models of the Comet. Investigators from the RAE told a public inquiry that the sharp corners near the Comets' window openings acted as initiation sites for cracks. The skin of the aircraft was also too thin, and cracks from manufacturing stresses were present at the corners.
Alexander L. Kielland oil platform capsizing
Alexander L. Kielland was a Norwegian semi-submersible drilling rig that capsized whilst working in the Ekofisk oil field in March 1980, killing 123 people. The capsizing was the worst disaster in Norwegian waters since World War II. The rig, located approximately 320 km east of Dundee, Scotland, was owned by the Stavanger Drilling Company of Norway and was on hire to the United States company Phillips Petroleum at the time of the disaster. In driving rain and mist, early in the evening of 27 March 1980 more than 200 men were off duty in the accommodation on Alexander L. Kielland. The wind was gusting to 40 knots with waves up to 12 m high. The rig had just been winched away from the Edda production platform. Minutes before 18:30 those on board felt a 'sharp crack' followed by 'some kind of trembling'. Suddenly the rig heeled over 30° and then stabilised. Five of the six anchor cables had broken, with one remaining cable preventing the rig from capsizing. The list continued to increase and at 18:53 the remaining anchor cable snapped and the rig turned upside down.
A year later in March 1981, the investigative report concluded that the rig collapsed owing to a fatigue crack in one of its six bracings (bracing D-6), which connected the collapsed D-leg to the rest of the rig. This was traced to a small 6 mm fillet weld which joined a non-load-bearing flange plate to this D-6 bracing. This flange plate held a sonar device used during drilling operations. The poor profile of the fillet weld contributed to a reduction in its fatigue strength. Further, the investigation found considerable amounts of lamellar tearing in the flange plate and cold cracks in the butt weld. Cold cracks in the welds, increased stress concentrations due to the weakened flange plate, the poor weld profile, and cyclical stresses (which would be common in the North Sea), seemed to collectively play a role in the rig's collapse.
Others
The 1862 Hartley Colliery Disaster was caused by the fracture of a steam engine beam and killed 204 people.
The 1919 Boston Great Molasses Flood has been attributed to a fatigue failure.
The 1948 Northwest Airlines Flight 421 crash due to fatigue failure in a wing spar root
The 1957 "Mt. Pinatubo", presidential plane of Philippine President Ramon Magsaysay, crashed due to engine failure caused by metal fatigue.
The 1965 capsize of the UK's first offshore oil platform, the Sea Gem, was due to fatigue in part of the suspension system linking the hull to the legs.
The 1968 Los Angeles Airways Flight 417 lost one of its main rotor blades due to fatigue failure.
The 1968 MacRobertson Miller Airlines Flight 1750 lost a wing due to improper maintenance leading to fatigue failure.
The 1969 F-111A crash due to a fatigue failure of the wing pivot fitting from a material defect resulted in the development of the damage-tolerant approach for fatigue design.
The 1977 Dan-Air Boeing 707 crash caused by fatigue failure resulting in the loss of the right horizontal stabilizer.
The 1979 American Airlines Flight 191 crashed after engine separation attributed to fatigue damage in the pylon structure holding the engine to the wing, caused by improper maintenance procedures.
The 1980 LOT Flight 7 crashed due to fatigue in an engine turbine shaft resulting in engine disintegration leading to loss of control.
The 1985 Japan Airlines Flight 123 crashed after the aircraft lost its vertical stabilizer due to faulty repairs on the rear bulkhead.
The 1988 Aloha Airlines Flight 243 suffered an explosive decompression after a fatigue failure.
The 1989 United Airlines Flight 232 lost its tail engine due to fatigue failure in a fan disk hub.
The 1992 El Al Flight 1862 lost both engines on its right wing due to fatigue failure in the pylon mounting of the #3 engine.
The 1998 Eschede train disaster was caused by fatigue failure of a single composite wheel.
The 2000 Hatfield rail crash was likely caused by rolling contact fatigue.
The 2000 recall of 6.5 million Firestone tires on Ford Explorers originated from fatigue crack growth leading to separation of the tread from the tire.
The 2002 China Airlines Flight 611 disintegrated in-flight due to fatigue failure.
The 2005 Chalk's Ocean Airways Flight 101 lost its right wing due to fatigue failure brought about by inadequate maintenance practices.
The 2009 Viareggio train derailment due to fatigue failure.
The 2009 Sayano–Shushenskaya power station accident due to metal fatigue of turbine mountings.
The 2017 Air France Flight 66 had in-flight engine failure due to cold dwell fatigue fracture in the fan hub.
The 2023 Titan submersible implosion is thought to have occurred due to fatigue delamination of the carbon-fiber material used for the hull.
| Physical sciences | Solid mechanics | null |
348973 | https://en.wikipedia.org/wiki/Antichain | Antichain | In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two distinct elements in the subset are incomparable.
The size of the largest antichain in a partially ordered set is known as its width. By Dilworth's theorem, this also equals the minimum number of chains (totally ordered subsets) into which the set can be partitioned. Dually, the height of the partially ordered set (the length of its longest chain) equals by Mirsky's theorem the minimum number of antichains into which the set can be partitioned.
The family of all antichains in a finite partially ordered set can be given join and meet operations, making them into a distributive lattice. For the partially ordered system of all subsets of a finite set, ordered by set inclusion, the antichains are called Sperner families
and their lattice is a free distributive lattice, with a Dedekind number of elements. More generally, counting the number of antichains of a finite partially ordered set is #P-complete.
Definitions
Let $P$ be a partially ordered set. Two elements $a$ and $b$ of a partially ordered set are called comparable if $a \leq b$ or $b \leq a$. If two elements are not comparable, they are called incomparable; that is, $a$ and $b$ are incomparable if neither $a \leq b$ nor $b \leq a$.
A chain in $P$ is a subset $C \subseteq P$ in which each pair of elements is comparable; that is, $C$ is totally ordered. An antichain in $P$ is a subset $A \subseteq P$ in which each pair of different elements is incomparable; that is, there is no order relation between any two different elements in $A$.
(However, some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than two distinct elements of the antichain.)
Height and width
A maximal antichain is an antichain that is not a proper subset of any other antichain. A maximum antichain is an antichain that has cardinality at least as large as every other antichain. The width of a partially ordered set is the cardinality of a maximum antichain. Any antichain can intersect any chain in at most one element, so, if we can partition the elements of an order into $k$ chains then the width of the order must be at most $k$ (if the antichain had more than $k$ elements, by the pigeonhole principle, there would be two of its elements belonging to the same chain, a contradiction). Dilworth's theorem states that this bound can always be reached: there always exists an antichain, and a partition of the elements into chains, such that the number of chains equals the number of elements in the antichain, which must therefore also equal the width. Similarly, one can define the height of a partial order to be the maximum cardinality of a chain. Mirsky's theorem states that in any partial order of finite height, the height equals the smallest number of antichains into which the order may be partitioned.
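Mirsky's theorem is constructive: grouping elements by the length of the longest chain ending at each one yields a partition into antichains whose count equals the height. A minimal Python sketch, where the poset, the `strictly_less` predicate, and the divisibility example are illustrative choices:

```python
def mirsky_antichain_partition(elements, strictly_less):
    """Partition a finite poset into antichains, one per 'level', where
    level(x) is the length of the longest chain ending at x.  The number
    of levels equals the height, matching Mirsky's theorem."""
    level = {}

    def compute(x):
        if x not in level:
            below = [y for y in elements if strictly_less(y, x)]
            level[x] = 1 + max((compute(y) for y in below), default=0)
        return level[x]

    for x in elements:
        compute(x)
    height = max(level.values())
    return [[x for x in elements if level[x] == k] for k in range(1, height + 1)]

# Divisibility order on {1,...,12}: x < y when x properly divides y.
divides = lambda x, y: x != y and y % x == 0
print(mirsky_antichain_partition(range(1, 13), divides))
# [[1], [2, 3, 5, 7, 11], [4, 6, 9, 10], [8, 12]]
```

Each level set is an antichain because two comparable elements necessarily lie on different levels.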
Sperner families
An antichain in the inclusion ordering of subsets of an $n$-element set is known as a Sperner family. The number of different Sperner families is counted by the Dedekind numbers, the first few of which are
2, 3, 6, 20, 168, 7581, 7828354, 2414682040998, 56130437228687557907788.
Even the empty set has two antichains in its power set: one containing a single set (the empty set itself) and one containing no sets.
Join and meet operations
Any antichain $A$ corresponds to a lower set
$$L_A = \{x : x \leq y \text{ for some } y \in A\}.$$
In a finite partial order (or more generally a partial order satisfying the ascending chain condition) all lower sets have this form. The union of any two lower sets is another lower set, and the union operation corresponds in this way to a join operation on antichains:
$$A \vee B = \{x \in A \cup B : \text{there is no } y \in A \cup B \text{ with } x < y\}.$$
Similarly, we can define a meet operation on antichains, corresponding to the intersection of lower sets:
$$A \wedge B = \{x \in L_A \cap L_B : \text{there is no } y \in L_A \cap L_B \text{ with } x < y\}.$$
The join and meet operations on all finite antichains of finite subsets of a set $X$ define a distributive lattice, the free distributive lattice generated by $X$. Birkhoff's representation theorem for distributive lattices states that every finite distributive lattice can be represented via join and meet operations on antichains of a finite partial order, or equivalently as union and intersection operations on the lower sets of the partial order.
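A small sketch of these operations for the subset-inclusion order, computing the join as the maximal elements of the union and the meet as the maximal elements of the intersection of the two lower sets. The brute-force lower-set enumeration is exponential and suitable only for tiny examples:

```python
def maximal(sets):
    """Maximal elements under inclusion; < on frozensets is proper subset."""
    return [s for s in sets if not any(s < t for t in sets)]

def join(A, B):
    return maximal(list({frozenset(s) for s in A} | {frozenset(s) for s in B}))

def lower_set(A):
    """All subsets of members of A (the lower set in the inclusion order)."""
    out = set()
    for s in A:
        items = list(s)
        for mask in range(1 << len(items)):
            out.add(frozenset(x for i, x in enumerate(items) if mask >> i & 1))
    return out

def meet(A, B):
    return maximal(list(lower_set(A) & lower_set(B)))

A = [frozenset({1, 2}), frozenset({3})]
B = [frozenset({2, 3})]
print(join(A, B))   # maximal elements of A union B: {1,2} and {2,3}
print(meet(A, B))   # {2} and {3}
```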
Computational complexity
A maximum antichain (and its size, the width of a given partially ordered set) can be found in polynomial time.
Counting the number of antichains in a given partially ordered set is #P-complete.
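Consistent with this hardness, the only fully general counting approach is enumeration, exponential in the poset size. A brute-force sketch, checked here against the Dedekind number M(2) = 6 quoted above:

```python
from itertools import combinations

def count_antichains(elements, comparable):
    """Brute-force count of antichains in a finite poset; exponential in the
    number of elements, consistent with the #P-completeness of the problem."""
    elements = list(elements)
    count = 0
    for r in range(len(elements) + 1):
        for subset in combinations(elements, r):
            if all(not comparable(a, b) for a, b in combinations(subset, 2)):
                count += 1
    return count

# Subsets of {1, 2} ordered by inclusion: counting its antichains (the
# Sperner families over a 2-element set) gives the Dedekind number 6.
powerset = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
comparable = lambda a, b: a <= b or b <= a
print(count_antichains(powerset, comparable))  # 6
```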
| Mathematics | Order theory | null |
349053 | https://en.wikipedia.org/wiki/Iterated%20integral | Iterated integral | In multivariable calculus, an iterated integral is the result of applying integrals to a function of more than one variable (for example $f(x,y)$ or $f(x,y,z)$) in such a way that each of the integrals considers some of the variables as given constants. For example, the function $f(x,y)$, if $y$ is considered a given parameter, can be integrated with respect to $x$, giving $\int f(x,y)\,dx$. The result is a function of $y$ and therefore its integral can be considered. If this is done, the result is the iterated integral
$$\int\left(\int f(x,y)\,dx\right)dy.$$
It is key for the notion of iterated integrals that this is different, in principle, from the multiple integral
$$\iint f(x,y)\,dx\,dy.$$
In general, although these two can be different, Fubini's theorem states that under specific conditions, they are equivalent.
The alternative notation for iterated integrals
$$\int dy \int f(x,y)\,dx$$
is also used.
In the notation that uses parentheses, iterated integrals are computed following the operational order indicated by the parentheses, starting from the innermost integral and working outward. In the alternative notation, writing $\int dy \int f(x,y)\,dx$, the innermost integrand is computed first.
Examples
A simple computation
For the iterated integral
$$\int\left(\int (x+y)\,dx\right)dy,$$
the integral
$$\int (x+y)\,dx = \frac{x^2}{2} + yx$$
is computed first and then the result is used to compute the integral with respect to $y$.
This example omits the constants of integration. After the first integration with respect to $x$, we would rigorously need to introduce a "constant" function of $y$; that is, $\int (x+y)\,dx = \frac{x^2}{2} + yx + g(y)$. If we were to differentiate this function with respect to $x$, any terms containing only $y$ would vanish, leaving the original integrand. Similarly for the second integral, we would introduce a "constant" function of $x$, because we have integrated with respect to $y$. In this way, indefinite integration does not make very much sense for functions of several variables.
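The computation can be checked symbolically. A sketch using SymPy (an assumed dependency), with definite bounds [0, 1] added as an illustrative choice since the example above is indefinite; for this continuous integrand both orders agree, as Fubini's theorem predicts:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x + y

# Inner integral first: integrate with respect to x, then y, over [0, 1].
inner = sp.integrate(f, (x, 0, 1))          # 1/2 + y
outer = sp.integrate(inner, (y, 0, 1))      # 1

# Reversed order of integration gives the same value for this integrand.
reversed_order = sp.integrate(sp.integrate(f, (y, 0, 1)), (x, 0, 1))
print(outer, reversed_order)                # 1 1
```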
The order is important
The order in which the integrals are computed is important in iterated integrals, particularly when the integrand is not continuous on the domain of integration. Examples in which the different orders lead to different results usually involve complicated functions, such as the one that follows.
Define the sequence $\{a_n\}$ such that $0 = a_0 < a_1 < a_2 < \cdots$ and $a_n \to 1$. Let $g_n$ be a sequence of continuous functions not vanishing in the interval $(a_n, a_{n+1})$ and zero elsewhere, such that $\int_0^1 g_n(t)\,dt = 1$ for every $n$. Define
$$f(x,y) = \sum_{n=0}^{\infty}\left(g_n(x) - g_{n+1}(x)\right)g_n(y).$$
In the previous sum, at each specific $(x,y)$, at most one term is different from zero.
For this function it happens that
$$\int_0^1\left(\int_0^1 f(x,y)\,dy\right)dx = 1 \neq 0 = \int_0^1\left(\int_0^1 f(x,y)\,dx\right)dy.$$
| Mathematics | Multivariable and vector calculus | null |
349319 | https://en.wikipedia.org/wiki/Healthcare%20industry | Healthcare industry | The healthcare industry (also called the medical industry or health economy) is an aggregation and integration of sectors within the economic system that provides goods and services to treat patients with curative, preventive, rehabilitative, and palliative care. It encompasses the creation and commercialization of products and services conducive to the preservation and restoration of well-being. The contemporary healthcare sector comprises three fundamental facets, namely services, products, and finance. It can be further subdivided into numerous sectors and categories and relies on interdisciplinary teams of highly skilled professionals and paraprofessionals to address the healthcare requirements of both individuals and communities.
The healthcare industry is one of the world's largest and fastest-growing industries. Consuming over 10 percent of gross domestic product (GDP) of most developed nations, health care can form an enormous part of a country's economy. U.S. healthcare spending grew 2.7 percent in 2021, reaching $4.3 trillion or $12,914 per person. As a share of the nation's Gross Domestic Product, health spending accounted for 18.3 percent. The per capita expenditure on health and pharmaceuticals in OECD countries has steadily grown from a few hundred US dollars in the 1970s to an average of US$4,000 per year in current purchasing power parities.
Backgrounds
For the purpose of finance and management, the healthcare industry is typically divided into several areas. As a basic framework for defining the sector, the United Nations International Standard Industrial Classification (ISIC) categorizes the healthcare industry as generally consisting of:
Hospital activities;
Medical and dental practice activities;
"Other human health activities".
This third class involves activities of or under the supervision of, nurses, midwives, physiotherapists, scientific or diagnostic laboratories, pathology clinics, residential health facilities, or other allied health professions, e.g. in the field of optometry, hydrotherapy, medical massage, yoga therapy, music therapy, occupational therapy, speech therapy, chiropody, homoeopathy, chiropractic, acupuncture, etc.
The Global Industry Classification Standard and the Industry Classification Benchmark further distinguish the industry into two main groups:
healthcare equipment and services; and
pharmaceuticals, biotechnology and related life sciences.
The healthcare equipment and services group consists of companies and entities that provide medical equipment, medical supplies, and healthcare services, such as hospitals, home healthcare providers, and nursing homes. The latter listed industry group includes companies that produce biotechnology, pharmaceuticals, and miscellaneous scientific services.
Other approaches to defining the scope of the healthcare industry tend to adopt a broader definition, also including other key actions related to health, such as education and training of health professionals, regulation and management of health services delivery, provision of traditional and complementary medicines, and administration of health insurance.
Providers and professionals
A healthcare provider is an institution (such as a hospital or clinic) or person (such as a physician, nurse, allied health professional or community health worker) that provides preventive, curative, promotional, rehabilitative or palliative care services in a systematic way to individuals, families or communities.
The World Health Organization estimates there are 9.2 million physicians, 19.4 million nurses and midwives, 1.9 million dentists and other dentistry personnel, 2.6 million pharmacists and other pharmaceutical personnel, and over 1.3 million community health workers worldwide, making the health care industry one of the largest segments of the workforce.
The medical industry is also supported by many professions that do not directly provide health care itself, but are part of the management and support of the health care system. The incomes of managers and administrators, underwriters, and medical malpractice attorneys, marketers, investors, and shareholders of for-profit services, all are attributable to health care costs.
In 2017, healthcare costs paid to hospitals, physicians, nursing homes, diagnostic laboratories, pharmacies, medical device manufacturers, and other components of the healthcare system consumed 17.9 percent of the gross domestic product (GDP) of the United States, the largest share of any country in the world. The health share of GDP is expected to continue its upward trend, reaching 19.9 percent of GDP by 2025. In 2001, the average for the OECD countries was 8.4 percent, with the United States (13.9%), Switzerland (10.9%), and Germany (10.7%) being the top three. US health care expenditures totaled US$2.2 trillion in 2006. According to Health Affairs, US$7,498 would be spent on every woman, man and child in the United States in 2007, accounting for 20 percent of all spending. Costs were projected to increase to $12,782 by 2016.
The government does not ensure comprehensive health care for all of its residents. However, certain publicly funded healthcare programs help to provide for some of those who are elderly, disabled, or poor, and federal law guarantees public access to emergency services regardless of ability to pay. Those without health insurance coverage are expected to pay privately for medical services. Health insurance is expensive, and medical bills are overwhelmingly the most common reason for personal bankruptcy in the United States.
Spending
OECD statistics break down each country's health spending into two categories, which together give the total:
"Government/compulsory": government spending and compulsory health insurance.
"Voluntary": voluntary health insurance and private funds such as households' out-of-pocket payments, NGOs and private corporations.
Health system
The delivery of healthcare services—from primary care to secondary and tertiary levels of care—is the most visible part of any healthcare system, both to users and the general public. There are many ways of providing healthcare in the modern world. The place of delivery may be in the home, the community, the workplace, or in health facilities. The most common way is face-to-face delivery, where care provider and patient see each other in person. This is what occurs in general medicine in most countries. However, with modern telecommunications technology, in absentia health care or tele-health is becoming more common. This could be when practitioner and patient communicate over the phone, video conferencing, the internet, email, text messages, or any other form of non-face-to-face communication. Practices like these are especially applicable to rural regions in developed nations. These services are typically implemented on a clinic-by-clinic basis.
Improving access, coverage and quality of health services depends on the ways services are organized and managed, and on the incentives influencing providers and users. In market-based health care systems, for example in the United States, such services are usually paid for by the patient or through the patient's health insurance company. Other mechanisms include government-financed systems (such as the National Health Service in the United Kingdom). In many poorer countries, development aid, as well as funding through charities or volunteers, help support the delivery and financing of health care services among large segments of the population.
The structure of healthcare charges can also vary dramatically among countries. For instance, Chinese hospital charges tend toward 50% for drugs, another major percentage for equipment, and a small percentage for healthcare professional fees. China has implemented a long-term transformation of its healthcare industry, beginning in the 1980s. Over the first twenty-five years of this transformation, government contributions to healthcare expenditures dropped from 36% to 15%, with the burden of managing this decrease falling largely on patients. Also over this period, a small proportion of state-owned hospitals have been privatized. As an incentive to privatization, foreign investment in hospitals of up to 70% ownership has been encouraged.
Healthcare systems dictate the means by which people and institutions pay for and receive health services. Models vary based on the country with the responsibility of payment ranging from the public (social insurance) and private health insurers to the consumer-driven by patients themselves. These systems finance and organize the services delivered by providers. A two-tier system of public and private is common.
The American Academy of Family Physicians defines four commonly utilized systems of payment:
Beveridge model
Named after British economist and social reformer William Beveridge, the Beveridge model sees healthcare financed and provided by a central government. The system was initially proposed in his 1942 report, Social Insurance and Allied Services—known as the Beveridge Report. The system is the guiding basis of the modern British healthcare model enacted post-World War II. It has been utilized in numerous countries, including the United Kingdom, Cuba, and New Zealand.
Under this system, all healthcare services are provided and financed solely by the government. This single payer system is financed through national taxation. Typically, the government owns and runs the clinics and hospitals, meaning that doctors are employees of the government. However, depending on the specific system, public providers can be accompanied by private doctors who collect fees from the government. The underlying principle of this system is that healthcare is a fundamental human right. Thus, the government provides universal coverage to all citizens. Generally, the Beveridge model yields a low cost per capita compared to other systems.
Bismarck model
The Bismarck system was first employed in 1883 by Prussian Chancellor Otto von Bismarck. In this system, insurance is mandated by the government and is typically sold on a non-profit basis. In many cases, employers and employees finance insurers through payroll deduction. In a pure Bismarck system, access to insurance is seen as a right solely predicated on labor status. The system attempts to cover all working citizens, meaning patients cannot be excluded from insurance due to pre-existing conditions. While care is privatized, it is closely regulated by the state through fixed procedure pricing. This means that most insurance claims are reimbursed without challenge, creating a low administrative burden. Archetypal implementation of the Bismarck system can be seen in Germany's nationalized healthcare. Similar systems can be found in France, Belgium, and Japan.
National health insurance model
The national insurance model shares and mixes elements from both the Bismarck and Beveridge models. The emergence of the National Health Insurance model is cited as a response to the challenges presented by the traditional Bismarck and Beveridge systems. For instance, it is difficult for Bismarck Systems to contend with aging populations, as these demographics are less economically active. Ultimately, this model has more flexibility than a traditional Bismarck or Beveridge model, as it can pull effective practices from both systems as needed.
This model maintains private providers, but payment comes directly from the government. Insurance plans control costs by paying for limited services. In some instances, citizens can opt out of public insurance for private insurance plans. However, large public insurance programs provide the government with bargaining power, allowing them to drive down prices for certain services and medication. In Canada, for instance, drug prices have been extensively lowered by the Patented Medicine Prices Review Board. Examples of this model can be found in Canada, Taiwan, and South Korea.
Out-of-pocket model
In areas with low levels of government stability or poverty, there is often no mechanism for ensuring that health costs are covered by a party other than the individual. In this case, patients must pay for services on their own. Payment methods can vary—ranging from physical currency, to trade for goods and services. Those that cannot afford treatment typically remain sick or die.
Inefficiencies
In countries where insurance is not mandated, there can be underinsurance—especially among disadvantaged and impoverished communities that can not afford private plans. The UK National Health System creates excellent patient outcomes and mandates universal coverage but also has large lag times for treatment. Critics argue that reforms brought about by the Health and Social Care Act 2012 only proved to fragment the system, leading to high regulatory burden and long treatment delays. In his review of NHS leadership in 2015, Sir Stuart Rose concluded that "the NHS is drowning in bureaucracy."
| Biology and health sciences | General concepts | Health |
349458 | https://en.wikipedia.org/wiki/Simplex%20algorithm | Simplex algorithm | In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.
The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
History
George Dantzig worked on planning methods for the US Army Air Force during World War II using a desk calculator. During 1946, his colleague challenged him to mechanize the planning process to distract him from taking another job. Dantzig formulated the problem as linear inequalities inspired by the work of Wassily Leontief, however, at that time he didn't include an objective as part of his formulation. Without an objective, a vast number of solutions can be feasible, and therefore to find the "best" feasible solution, military-specified "ground rules" must be used that describe how goals can be achieved as opposed to specifying a goal itself. Dantzig's core insight was to realize that most such ground rules can be translated into a linear objective function that needs to be maximized. Development of the simplex method was evolutionary and happened over a period of about a year.
After Dantzig included an objective function as part of his formulation during mid-1947, the problem was mathematically more tractable. Dantzig realized that one of the unsolved problems that he had mistaken as homework in his professor Jerzy Neyman's class (and actually later solved), was applicable to finding an algorithm for linear programs. This problem involved finding the existence of Lagrange multipliers for general linear programs over a continuum of variables, each bounded between zero and one, and satisfying linear constraints expressed in the form of Lebesgue integrals. Dantzig later published his "homework" as a thesis to earn his doctorate. The column geometry used in this thesis gave Dantzig insight that made him believe that the Simplex method would be very efficient.
Overview
The simplex algorithm operates on linear programs in the canonical form
maximize $\mathbf{c}^{T}\mathbf{x}$
subject to $A\mathbf{x} \leq \mathbf{b}$ and $\mathbf{x} \geq 0$
with $\mathbf{c}$ the coefficients of the objective function, $(\cdot)^{T}$ the matrix transpose, $\mathbf{x}$ the variables of the problem, $A$ a $p \times n$ matrix, and $\mathbf{b}$ the right-hand side. There is a straightforward process to convert any linear program into one in standard form, so using this form of linear programs results in no loss of generality.
In geometric terms, the feasible region defined by all values of $\mathbf{x}$ such that $A\mathbf{x} \leq \mathbf{b}$ and $\mathbf{x} \geq 0$ is a (possibly unbounded) convex polytope. An extreme point or vertex of this polytope is known as a basic feasible solution (BFS).
It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.
It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point. If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.
The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.
Standard form
The transformation of a linear program to one in standard form may be accomplished as follows. First, for each variable with a lower bound other than 0, a new variable is introduced representing the difference between the variable and bound. The original variable can then be eliminated by substitution. For example, given the constraint
$$x_1 \geq 5,$$
a new variable, $y_1$, is introduced with
$$y_1 = x_1 - 5, \qquad x_1 = y_1 + 5.$$
The second equation may be used to eliminate $x_1$ from the linear program. In this way, all lower bound constraints may be changed to non-negativity restrictions.
Second, for each remaining inequality constraint, a new variable, called a slack variable, is introduced to change the constraint to an equality constraint. This variable represents the difference between the two sides of the inequality and is assumed to be non-negative. For example, the inequalities
$$x_2 + 2x_3 \leq 3, \qquad -x_4 + 3x_5 \geq 2$$
are replaced with
$$x_2 + 2x_3 + s_1 = 3, \qquad -x_4 + 3x_5 - s_2 = 2, \qquad s_1, s_2 \geq 0.$$
It is much easier to perform algebraic manipulation on inequalities in this form. In inequalities where $\geq$ appears such as the second one, some authors refer to the variable introduced as a surplus variable.
Third, each unrestricted variable is eliminated from the linear program. This can be done in two ways, one is by solving for the variable in one of the equations in which it appears and then eliminating the variable by substitution. The other is to replace the variable with the difference of two restricted variables. For example, if $z_1$ is unrestricted then write
$$z_1 = z_1^{+} - z_1^{-}, \qquad z_1^{+} \geq 0, \; z_1^{-} \geq 0.$$
The equation may be used to eliminate $z_1$ from the linear program.
When this process is complete the feasible region will be in the form
$$A\mathbf{x} = \mathbf{b}, \qquad \mathbf{x} \geq 0.$$
It is also useful to assume that the rank of $A$ is the number of rows. This results in no loss of generality since otherwise either the system $A\mathbf{x} = \mathbf{b}$ has redundant equations which can be dropped, or the system is inconsistent and the linear program has no solution.
Simplex tableau
A linear program in standard form can be represented as a tableau of the form
$$\begin{bmatrix} 1 & -\mathbf{c}^{T} & 0 \\ \mathbf{0} & A & \mathbf{b} \end{bmatrix}$$
The first row defines the objective function and the remaining rows specify the constraints. The zero in the first column represents the zero vector of the same dimension as the vector $\mathbf{b}$ (different authors use different conventions as to the exact layout). If the columns of $A$ can be rearranged so that it contains the identity matrix of order $p$ (the number of rows in $A$) then the tableau is said to be in canonical form. The variables corresponding to the columns of the identity matrix are called basic variables while the remaining variables are called nonbasic or free variables. If the values of the nonbasic variables are set to 0, then the values of the basic variables are easily obtained as entries in $\mathbf{b}$ and this solution is a basic feasible solution. The algebraic interpretation here is that the coefficients of the linear equation represented by each row are either $0$, $1$, or some other number. Each row will have one column with value $1$, the other identity columns with coefficient $0$, and the remaining columns with some other coefficients (these other columns represent the non-basic variables). By setting the values of the non-basic variables to zero we ensure in each row that the value of the variable represented by the $1$ in its column is equal to the $\mathbf{b}$ value of that row.
Conversely, given a basic feasible solution, the columns corresponding to the nonzero variables can be expanded to a nonsingular matrix. If the corresponding tableau is multiplied by the inverse of this matrix then the result is a tableau in canonical form.
Let
$$\begin{bmatrix} 1 & -\mathbf{c}_B^{T} & -\mathbf{c}_D^{T} & 0 \\ \mathbf{0} & I & D & \mathbf{b} \end{bmatrix}$$
be a tableau in canonical form. Additional row-addition transformations can be applied to remove the coefficients $\mathbf{c}_B^{T}$ from the objective function. This process is called pricing out and results in a canonical tableau
$$\begin{bmatrix} 1 & \mathbf{0} & -\bar{\mathbf{c}}_D^{T} & z_B \\ \mathbf{0} & I & D & \mathbf{b} \end{bmatrix}$$
where $z_B$ is the value of the objective function at the corresponding basic feasible solution. The updated coefficients $\bar{\mathbf{c}}_D^{T}$, also known as relative cost coefficients, are the rates of change of the objective function with respect to the nonbasic variables.
Pivot operations
The geometrical operation of moving from a basic feasible solution to an adjacent basic feasible solution is implemented as a pivot operation. First, a nonzero pivot element is selected in a nonbasic column. The row containing this element is multiplied by its reciprocal to change this element to 1, and then multiples of the row are added to the other rows to change the other entries in the column to 0. The result is that, if the pivot element is in a row r, then the column becomes the r-th column of the identity matrix. The variable for this column is now a basic variable, replacing the variable which corresponded to the r-th column of the identity matrix before the operation. In effect, the variable corresponding to the pivot column enters the set of basic variables and is called the entering variable, and the variable being replaced leaves the set of basic variables and is called the leaving variable. The tableau is still in canonical form but with the set of basic variables changed by one element.
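A pivot is a handful of row operations. A minimal NumPy sketch, using the array layout of the tableaus above (objective in row 0, right-hand side in the last column); NumPy is an assumed dependency:

```python
import numpy as np

def pivot(tableau, row, col):
    """Pivot in place on entry (row, col): scale the pivot row so the pivot
    element becomes 1, then add multiples of it to zero out the rest of the
    column.  Expects a float NumPy array."""
    tableau[row] /= tableau[row, col]
    for r in range(tableau.shape[0]):
        if r != row:
            tableau[r] -= tableau[r, col] * tableau[row]
    return tableau
```

After the call, the pivot column has become the corresponding column of the identity matrix, matching the description above.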
Algorithm
Let a linear program be given by a canonical tableau. The simplex algorithm proceeds by performing successive pivot operations, each of which gives an improved basic feasible solution; the choice of pivot element at each step is largely determined by the requirement that this pivot improves the solution.
Entering variable selection
Since the entering variable will, in general, increase from 0 to a positive number, the value of the objective function will decrease if the derivative of the objective function with respect to this variable is negative. Equivalently, the value of the objective function is increased if the pivot column is selected so that the corresponding entry in the objective row of the tableau is positive.
If there is more than one column so that the entry in the objective row is positive then the choice of which one to add to the set of basic variables is somewhat arbitrary and several entering variable choice rules such as Devex algorithm have been developed.
If all the entries in the objective row are less than or equal to 0 then no choice of entering variable can be made and the solution is in fact optimal. It is easily seen to be optimal since the objective row now corresponds to an equation of the form
$$z(\mathbf{x}) = z_B + \bar{c}_{i_1} x_{i_1} + \cdots + \bar{c}_{i_k} x_{i_k},$$
in which every coefficient $\bar{c}_{i_j}$ is nonpositive and every nonbasic variable $x_{i_j}$ is nonnegative, so no feasible solution can improve on $z_B$.
By changing the entering variable choice rule so that it selects a column where the entry in the objective row is negative, the algorithm is changed so that it finds the minimum of the objective function rather than the maximum.
Leaving variable selection
Once the pivot column has been selected, the choice of pivot row is largely determined by the requirement that the resulting solution be feasible. First, only positive entries in the pivot column are considered since this guarantees that the value of the entering variable will be nonnegative. If there are no positive entries in the pivot column then the entering variable can take any non-negative value with the solution remaining feasible. In this case the objective function is unbounded below and there is no minimum.
Next, the pivot row must be selected so that all the other basic variables remain positive. A calculation shows that this occurs when the resulting value of the entering variable is at a minimum. In other words, if the pivot column is $c$, then the pivot row $r$ is chosen so that
$$b_r / a_{rc}$$
is the minimum over all $r$ with $a_{rc} > 0$. This is called the minimum ratio test. If there is more than one row for which the minimum is achieved then a dropping variable choice rule can be used to make the determination.
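Putting the entering and leaving rules together gives one iteration. The sketch below follows the conventions of this article's tableaus (a positive objective-row entry selects the entering column; the minimum ratio test selects the leaving row); it is a teaching sketch, not production code, and NumPy is an assumed dependency:

```python
import numpy as np

def simplex_step(tableau):
    """One simplex iteration on a canonical tableau (objective in row 0,
    right-hand side in the last column).  Returns False at optimality;
    raises if the linear program is unbounded."""
    obj = tableau[0, 1:-1]
    if np.all(obj <= 0):
        return False                              # optimal
    col = 1 + int(np.argmax(obj))                 # entering variable
    below = tableau[1:, col]
    if np.all(below <= 0):
        raise ValueError("objective unbounded along this edge")
    ratios = np.full(below.shape, np.inf)
    positive = below > 0
    ratios[positive] = tableau[1:, -1][positive] / below[positive]
    row = 1 + int(np.argmin(ratios))              # minimum ratio test
    tableau[row] /= tableau[row, col]             # pivot: scale pivot row
    for r in range(tableau.shape[0]):
        if r != row:
            tableau[r] -= tableau[r, col] * tableau[row]
    return True
```

Repeated calls on the tableau of the example below drive every objective-row entry nonpositive and leave the optimal value in the top-right entry.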
Example
Consider the linear program
Minimize
$$Z = -2x - 3y - 4z$$
Subject to
$$3x + 2y + z \leq 10, \qquad 2x + 5y + 3z \leq 15, \qquad x, y, z \geq 0.$$
With the addition of slack variables s and t, this is represented by the canonical tableau
$$\begin{bmatrix} 1 & 2 & 3 & 4 & 0 & 0 & 0 \\ 0 & 3 & 2 & 1 & 1 & 0 & 10 \\ 0 & 2 & 5 & 3 & 0 & 1 & 15 \end{bmatrix}$$
where columns 5 and 6 represent the basic variables s and t and the corresponding basic feasible solution is
$$x = y = z = 0, \qquad s = 10, \qquad t = 15.$$
Columns 2, 3, and 4 can be selected as pivot columns, for this example column 4 is selected. The values of z resulting from the choice of rows 2 and 3 as pivot rows are 10/1 = 10 and 15/3 = 5 respectively. Of these the minimum is 5, so row 3 must be the pivot row. Performing the pivot produces
$$\begin{bmatrix} 1 & -\tfrac{2}{3} & -\tfrac{11}{3} & 0 & 0 & -\tfrac{4}{3} & -20 \\ 0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 1 & -\tfrac{1}{3} & 5 \\ 0 & \tfrac{2}{3} & \tfrac{5}{3} & 1 & 0 & \tfrac{1}{3} & 5 \end{bmatrix}$$
Now columns 4 and 5 represent the basic variables z and s and the corresponding basic feasible solution is
$$x = y = t = 0, \qquad z = 5, \qquad s = 5.$$
For the next step, there are no positive entries in the objective row and in fact
$$Z = -20 + \tfrac{2}{3}x + \tfrac{11}{3}y + \tfrac{4}{3}t,$$
so the minimum value of Z is −20.
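The result can be cross-checked with an off-the-shelf solver. A sketch assuming SciPy is available, with the objective and constraints as reconstructed above:

```python
from scipy.optimize import linprog

result = linprog(c=[-2, -3, -4],          # minimize Z = -2x - 3y - 4z
                 A_ub=[[3, 2, 1],         # 3x + 2y +  z <= 10
                       [2, 5, 3]],        # 2x + 5y + 3z <= 15
                 b_ub=[10, 15],
                 bounds=[(0, None)] * 3)  # x, y, z >= 0
print(result.fun)  # -20.0
print(result.x)    # [0. 0. 5.]
```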
Finding an initial canonical tableau
In general, a linear program will not be given in the canonical form and an equivalent canonical tableau must be found before the simplex algorithm can start. This can be accomplished by the introduction of artificial variables. Columns of the identity matrix are added as column vectors for these variables. If the b value for a constraint equation is negative, the equation is negated before adding the identity matrix columns. This does not change the set of feasible solutions or the optimal solution, and it ensures that the artificial variables will constitute an initial feasible solution. The new tableau is in canonical form but it is not equivalent to the original problem. So a new objective function, equal to the sum of the artificial variables, is introduced and the simplex algorithm is applied to find the minimum; the modified linear program is called the Phase I problem.
The simplex algorithm applied to the Phase I problem must terminate with a minimum value for the new objective function since, being the sum of nonnegative variables, its value is bounded below by 0. If the minimum is 0 then the artificial variables can be eliminated from the resulting canonical tableau producing a canonical tableau equivalent to the original problem. The simplex algorithm can then be applied to find the solution; this step is called Phase II. If the minimum is positive then there is no feasible solution for the Phase I problem where the artificial variables are all zero. This implies that the feasible region for the original problem is empty, and so the original problem has no solution.
Example
Consider the linear program
Minimize
$$Z = -2x - 3y - 4z$$
Subject to
$$3x + 2y + z = 10, \qquad 2x + 5y + 3z = 15, \qquad x, y, z \geq 0.$$
This is represented by the (non-canonical) tableau
$$\begin{bmatrix} 1 & 2 & 3 & 4 & 0 \\ 0 & 3 & 2 & 1 & 10 \\ 0 & 2 & 5 & 3 & 15 \end{bmatrix}$$
Introduce artificial variables u and v and objective function W = u + v, giving a new tableau
$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\ 0 & 1 & 2 & 3 & 4 & 0 & 0 & 0 \\ 0 & 0 & 3 & 2 & 1 & 1 & 0 & 10 \\ 0 & 0 & 2 & 5 & 3 & 0 & 1 & 15 \end{bmatrix}$$
The equation defining the original objective function is retained in anticipation of Phase II.
By construction, u and v are both basic variables since they are part of the initial identity matrix. However, the objective function W currently assumes that u and v are both 0. In order to adjust the objective function to be the correct value where u = 10 and v = 15, add the third and fourth rows to the first row giving
$$\begin{bmatrix} 1 & 0 & 5 & 7 & 4 & 0 & 0 & 25 \\ 0 & 1 & 2 & 3 & 4 & 0 & 0 & 0 \\ 0 & 0 & 3 & 2 & 1 & 1 & 0 & 10 \\ 0 & 0 & 2 & 5 & 3 & 0 & 1 & 15 \end{bmatrix}$$
Select column 5 as a pivot column, so the pivot row must be row 4, and the updated tableau is
$$\begin{bmatrix} 1 & 0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 0 & -\tfrac{4}{3} & 5 \\ 0 & 1 & -\tfrac{2}{3} & -\tfrac{11}{3} & 0 & 0 & -\tfrac{4}{3} & -20 \\ 0 & 0 & \tfrac{7}{3} & \tfrac{1}{3} & 0 & 1 & -\tfrac{1}{3} & 5 \\ 0 & 0 & \tfrac{2}{3} & \tfrac{5}{3} & 1 & 0 & \tfrac{1}{3} & 5 \end{bmatrix}$$
Now select column 3 as a pivot column, for which row 3 must be the pivot row, to get
$$\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & -1 & -1 & 0 \\ 0 & 1 & 0 & -\tfrac{25}{7} & 0 & \tfrac{2}{7} & -\tfrac{10}{7} & -\tfrac{130}{7} \\ 0 & 0 & 1 & \tfrac{1}{7} & 0 & \tfrac{3}{7} & -\tfrac{1}{7} & \tfrac{15}{7} \\ 0 & 0 & 0 & \tfrac{11}{7} & 1 & -\tfrac{2}{7} & \tfrac{3}{7} & \tfrac{25}{7} \end{bmatrix}$$
The artificial variables are now 0 and they may be dropped giving a canonical tableau equivalent to the original problem:
$$\begin{bmatrix} 1 & 0 & -\tfrac{25}{7} & 0 & -\tfrac{130}{7} \\ 0 & 1 & \tfrac{1}{7} & 0 & \tfrac{15}{7} \\ 0 & 0 & \tfrac{11}{7} & 1 & \tfrac{25}{7} \end{bmatrix}$$
This is, fortuitously, already optimal and the optimum value for the original linear program is −130/7.
Advanced topics
Implementation
The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the tableau is maintained as a rectangular (m + 1)-by-(m + n + 1) array. It is straightforward to avoid storing the m explicit columns of the identity matrix that will occur within the tableau by virtue of B being a subset of the columns of [A, I]. This implementation is referred to as the "standard simplex algorithm". The storage and computation overhead is such that the standard simplex method is a prohibitively expensive approach to solving large linear programming problems.
In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the tableau corresponding to the entering variable and the right-hand-side. The latter can be updated using the pivotal column and the first row of the tableau can be updated using the (pivotal) row corresponding to the leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of linear systems of equations involving the matrix B and a matrix-vector product using A. These observations motivate the "revised simplex algorithm", for which implementations are distinguished by their invertible representation of B.
In large linear-programming problems A is typically a sparse matrix and, when the resulting sparsity of B is exploited when maintaining its invertible representation, the revised simplex algorithm is much more efficient than the standard simplex method. Commercial simplex solvers are based on the revised simplex algorithm.
Degeneracy: stalling and cycling
If the values of all basic variables are strictly positive, then a pivot must result in an improvement in the objective value. When this is always the case no set of basic variables occurs twice and the simplex algorithm must terminate after a finite number of steps. Basic feasible solutions where at least one of the basic variables is zero are called degenerate and may result in pivots for which there is no improvement in the objective value. In this case there is no actual change in the solution but only a change in the set of basic variables. When several such pivots occur in succession, there is no improvement; in large industrial applications, degeneracy is common and such "stalling" is notable.
Worse than stalling is the possibility that the same set of basic variables occurs twice, in which case the deterministic pivoting rules of the simplex algorithm will produce an infinite loop, or "cycle". While degeneracy is the rule in practice and stalling is common, cycling is rare in practice. A discussion of an example of practical cycling occurs in Padberg. Bland's rule prevents cycling and thus guarantees that the simplex algorithm always terminates. Another pivoting algorithm, the criss-cross algorithm, never cycles on linear programs.
History-based pivot rules such as Zadeh's rule and Cunningham's rule also try to circumvent the issue of stalling and cycling by keeping track of how often particular variables are being used, and then favoring the variables that have been used least often.
Efficiency in the worst case
The simplex method is remarkably efficient in practice and was a great improvement over earlier methods such as Fourier–Motzkin elimination. However, in 1972, Klee and Minty gave an example, the Klee–Minty cube, showing that the worst-case complexity of the simplex method as formulated by Dantzig is exponential time. Since then, for almost every variation on the method, it has been shown that there is a family of linear programs for which it performs badly. It is an open question whether there is a variation with polynomial time, although sub-exponential pivot rules are known.
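One common formulation of the Klee–Minty construction is easy to generate in code; the particular scaling by powers of 5 below follows one textbook variant and should be read as illustrative.

```python
def klee_minty(n):
    """Return (c, A, b) for the n-dimensional Klee-Minty LP:
        maximize   sum_j 2**(n-j) * x_j
        subject to 2 * sum_{j<i} 2**(i-j) * x_j + x_i <= 5**i,  x >= 0,
    on which Dantzig's most-negative-coefficient rule visits all 2**n vertices."""
    c = [2 ** (n - j) for j in range(1, n + 1)]
    A = [[2 * 2 ** (i - j) for j in range(1, i)] + [1] + [0] * (n - i)
         for i in range(1, n + 1)]
    b = [5 ** i for i in range(1, n + 1)]
    return c, A, b
```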
In 2014, it was proved that a particular variant of the simplex method is NP-mighty, i.e., it can be used to solve, with polynomial overhead, any problem in NP implicitly during the algorithm's execution. Moreover, deciding whether a given variable ever enters the basis during the algorithm's execution on a given input, and determining the number of iterations needed for solving a given problem, are both NP-hard problems. At about the same time it was shown that there exists an artificial pivot rule for which computing its output is PSPACE-complete. In 2015, this was strengthened to show that computing the output of Dantzig's pivot rule is PSPACE-complete.
Efficiency in practice
Analyzing and quantifying the observation that the simplex algorithm is efficient in practice despite its exponential worst-case complexity has led to the development of other measures of complexity. The simplex algorithm has polynomial-time average-case complexity under various probability distributions, with the precise average-case performance of the simplex algorithm depending on the choice of a probability distribution for the random matrices. Another approach to studying "typical phenomena" uses Baire category theory from general topology, showing that (topologically) "most" matrices can be solved by the simplex algorithm in a polynomial number of steps.
Another method to analyze the performance of the simplex algorithm studies the behavior of worst-case scenarios under small perturbation – are worst-case scenarios stable under a small change (in the sense of structural stability), or do they become tractable? This area of research, called smoothed analysis, was introduced specifically to study the simplex method. Indeed, the running time of the simplex method on input with noise is polynomial in the number of variables and the magnitude of the perturbations.
Other algorithms
Other algorithms for solving linear-programming problems are described in the linear-programming article. Another basis-exchange pivoting algorithm is the criss-cross algorithm. There are polynomial-time algorithms for linear programming that use interior point methods: these include Khachiyan's ellipsoidal algorithm, Karmarkar's projective algorithm, and path-following algorithms. The Big-M method is an alternative strategy for solving a linear program, using a single-phase simplex.
Linear-fractional programming
Linear–fractional programming (LFP) is a generalization of linear programming (LP). In LP the objective function is a linear function, while the objective function of a linear–fractional program is a ratio of two linear functions. In other words, a linear program is a linear–fractional program in which the denominator is the constant function having the value one everywhere. A linear–fractional program can be solved by a variant of the simplex algorithm or by the criss-cross algorithm.
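One standard way to make the reduction to LP concrete is the Charnes–Cooper transformation, sketched below under the assumption that the denominator is positive everywhere on the feasible region:

```latex
% Charnes--Cooper transformation, assuming d^T x + \beta > 0 on the feasible set;
% substitute y = x/(d^\top x+\beta) and t = 1/(d^\top x+\beta):
\max_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
\quad\text{s.t.}\quad Ax\le b
\qquad\Longrightarrow\qquad
\max_{y,\,t}\ c^{\top}y+\alpha t
\quad\text{s.t.}\quad Ay\le bt,\;\; d^{\top}y+\beta t=1,\;\; t\ge 0.
```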
| Mathematics | Optimization | null |
349627 | https://en.wikipedia.org/wiki/Calcium%20chloride | Calcium chloride | Calcium chloride is an inorganic compound, a salt with the chemical formula CaCl2. It is a white crystalline solid at room temperature, and it is highly soluble in water. It can be created by neutralising hydrochloric acid with calcium hydroxide.
Calcium chloride is commonly encountered as a hydrated solid with generic formula CaCl2·nH2O, where n = 0, 1, 2, 4, and 6. These compounds are mainly used for de-icing and dust control. Because the anhydrous salt is hygroscopic and deliquescent, it is used as a desiccant.
History
Calcium chloride was apparently discovered in the 15th century but was not studied properly until the 18th century. It was historically called "fixed sal ammoniac" because it was synthesized during the distillation of ammonium chloride with lime and was nonvolatile (while the former appeared to sublime); in more modern times (the 18th–19th centuries) it was called "muriate of lime".
Uses
De-icing and freezing-point depression
By depressing the freezing point of water, calcium chloride is used to prevent ice formation and to de-ice surfaces. This application consumes the greatest amount of calcium chloride. Calcium chloride is relatively harmless to plants and soil. As a de-icing agent, it is much more effective at lower temperatures than sodium chloride. When distributed for this use, it usually takes the form of small, white spheres a few millimeters in diameter, called prills. Solutions of calcium chloride can prevent freezing at temperatures as low as −52 °C (−62 °F), making it ideal for filling agricultural implement tires as a liquid ballast, aiding traction in cold climates.
It is also used in domestic and industrial chemical air dehumidifiers.
Road surfacing
The second largest application of calcium chloride exploits its hygroscopic nature and the tackiness of its hydrates; calcium chloride is highly hygroscopic and its hydration is an exothermic process. A concentrated solution keeps a liquid layer on the surface of dirt roads, which suppresses the formation of dust. It keeps the finer dust particles on the road, providing a cushioning layer. If these are allowed to blow away, the large aggregate begins to shift around and the road breaks down. Using calcium chloride reduces the need for grading by as much as 50% and the need for fill-in materials as much as 80%.
Food
In the food industry, calcium chloride is frequently employed as a firming agent in canned vegetables, particularly for canned tomatoes and cucumber pickles. It is also used in firming soybean curds into tofu and in producing a caviar substitute from vegetable or fruit juices. It is also used to enhance the texture of various other products, such as whole apples, whole hot peppers, whole and sliced strawberries, diced tomatoes, and whole peaches.
The firming effect of calcium chloride can be attributed to several mechanisms:
Complexation, since calcium ions form complexes with pectin, a polysaccharide found in the cell wall and middle lamella of plant tissues.
Membrane stabilization, since calcium ions contribute to the stabilization of the cell membrane.
Turgor pressure regulation, since calcium ions influence cell turgor pressure, which is the pressure exerted by the cell contents against the cell wall.
Calcium chloride's freezing-point depression properties are used to slow the freezing of the caramel in caramel-filled chocolate bars. Also, it is frequently added to sliced apples to maintain texture.
In brewing beer, calcium chloride is sometimes used to correct mineral deficiencies in the brewing water. It affects flavor and chemical reactions during the brewing process, and can also affect yeast function during fermentation.
In cheesemaking, calcium chloride is sometimes added to processed (pasteurized/homogenized) milk to restore the natural balance between calcium and protein in casein. It is added before the coagulant.
Calcium chloride is also commonly used as an "electrolyte" in sports drinks and other beverages; as a food additive used in conjunction with other inorganic salts, it adds taste to bottled water.
The average intake of calcium chloride as food additives has been estimated to be 160–345 mg/day. Calcium chloride is permitted as a food additive in the European Union for use as a sequestrant and firming agent with the E number E509. It is considered as generally recognized as safe (GRAS) by the U.S. Food and Drug Administration. Its use in organic crop production is generally prohibited under the US National Organic Program.
The elemental calcium content in calcium chloride hexahydrate (CaCl2·6H2O) is approximately 18.2%. This means that for every gram of calcium chloride hexahydrate, there are about 182 milligrams of elemental calcium.
For anhydrous calcium chloride (CaCl2), the elemental calcium content is slightly higher, around 36.1% (for every gram of anhydrous calcium chloride there are about 361 milligrams of elemental calcium).
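These percentages follow directly from the molar masses; a short check in Python (standard rounded atomic masses are assumed):

```python
CA, CL, H2O = 40.078, 35.453, 18.015      # molar masses in g/mol (rounded)

anhydrous = CA + 2 * CL                   # CaCl2: ~110.98 g/mol
hexahydrate = anhydrous + 6 * H2O         # CaCl2.6H2O: ~219.07 g/mol

print(f"anhydrous:   {CA / anhydrous:.1%} Ca")    # ~36.1%
print(f"hexahydrate: {CA / hexahydrate:.1%} Ca")  # ~18.3%, consistent with the
                                                  # "approximately 18.2%" figure
                                                  # above up to rounding
```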
Calcium chloride has a very salty taste and can cause mouth and throat irritation at high concentrations, so it is typically not the first choice for long-term oral supplementation (as a calcium supplement). Calcium chloride, characterized by its low molecular weight and high water solubility, readily breaks down into calcium and chloride ions when exposed to water. These ions are efficiently absorbed from the intestine. However, caution should be exercised when handling calcium chloride, because it releases heat upon dissolution in water. This release of heat can lead to trauma and burns in the mouth, throat, esophagus, and stomach. In fact, there have been reported cases of stomach necrosis resulting from burns caused by accidental ingestion of large amounts of undissolved calcium chloride.
The extremely salty taste of calcium chloride is used to flavor pickles without increasing the food's sodium content.
Calcium chloride is used to prevent cork spot and bitter pit on apples by spraying on the tree during the late growing season.
Laboratory and related drying operations
Drying tubes are frequently packed with calcium chloride. Kelp is dried with calcium chloride for use in producing sodium carbonate. Anhydrous calcium chloride has been approved by the FDA as a packaging aid to ensure dryness (CPG 7117.02).
The hydrated salt can be dried for re-use but will dissolve in its own water of hydration if heated quickly and form a hard amalgamated solid when cooled.
Metal reduction flux
Calcium chloride (CaCl2) is used as a flux and electrolyte in the FFC Cambridge electrolysis process for titanium production, where it ensures the proper exchange of calcium and oxygen ions between the electrodes.
Medical use
Calcium chloride infusions may be used as an intravenous therapy to prevent hypocalcemia.
Calcium chloride is a highly soluble calcium salt. The hexahydrate (CaCl2·6H2O) has a solubility in water of 811 g/L at 25 °C. When taken orally, calcium chloride dissociates completely into calcium (Ca2+) and chloride ions in the gastrointestinal tract, resulting in readily bioavailable calcium. The high concentration of calcium ions facilitates efficient absorption in the small intestine. However, the use of calcium chloride as an oral calcium source is less common than that of other calcium salts because of potential adverse effects such as gastrointestinal irritation and discomfort.
When tasted, calcium chloride exhibits a distinctive bitter flavor alongside its salty taste. The bitterness is attributable to the calcium ions and their interaction with human taste receptors: certain members of the TAS2R family of bitter taste receptors respond to calcium ions; the bitter perception of calcium is thought to be a protective mechanism to avoid ingestion of toxic substances, as many poisonous compounds taste bitter. While chloride ions (Cl⁻) primarily contribute to saltiness, at higher concentrations, they can enhance the bitter sensation. The combination of calcium and chloride ions intensifies the overall bitterness. At lower concentrations, calcium chloride may taste predominantly salty. The salty taste arises from the electrolyte nature of the compound, similar to sodium chloride (table salt). As the concentration increases, the bitter taste becomes more pronounced: the increased presence of calcium ions enhances the activation of bitterness receptors.
Other applications
Calcium chloride is used in concrete mixes to accelerate the initial setting, but chloride ions lead to corrosion of steel rebar, so it should not be used in reinforced concrete. The anhydrous form of calcium chloride may also be used for this purpose and can provide a measure of the moisture in concrete.
Calcium chloride is included as an additive in plastics and in fire extinguishers, in blast furnaces as an additive to control scaffolding (clumping and adhesion of materials that prevent the furnace charge from descending), and in fabric softener as a thinner.
The exothermic dissolution of calcium chloride is used in self-heating cans and heating pads.
Calcium chloride is used as a water hardener in the maintenance of hot tub water, as insufficiently hard water can lead to corrosion and foaming.
In the oil industry, calcium chloride is used to increase the density of solids-free brines. It is also used to provide inhibition of swelling clays in the water phase of invert emulsion drilling fluids.
Calcium chloride (CaCl2) acts as a flux material, decreasing the melting point, in the Davy process for the industrial production of sodium metal through the electrolysis of molten sodium chloride (NaCl).
Calcium chloride is also used in the production of activated charcoal.
Calcium chloride can be used to precipitate fluoride ions from water as insoluble calcium fluoride (CaF2).
Calcium chloride is also an ingredient used in ceramic slipware. It suspends clay particles so that they float within the solution, making it easier to use in a variety of slipcasting techniques.
For watering plants for use as a fertilizer, a moderate concentration of calcium chloride is used to avoid potential toxicity: 5 to 10 mM (millimolar) is generally effective and safe for most plants; that is, roughly 0.55 to 1.11 g of anhydrous calcium chloride (CaCl2) per liter of water, or roughly 1.10 to 2.19 g of calcium chloride hexahydrate (CaCl2·6H2O) per liter of water. Calcium chloride solution is used immediately after preparation to prevent potential alterations in its chemical composition. Besides that, calcium chloride is highly hygroscopic, meaning it readily absorbs moisture from the air. If the solution is left standing, it can absorb additional water vapor, leading to dilution and a decrease in the intended concentration. Prolonged standing may lead to the precipitation of calcium hydroxide or other insoluble calcium compounds, reducing the availability of calcium ions in the solution and reducing its effectiveness as a calcium source for plants. Nutrient solutions can become a medium for microbial growth if stored for extended periods. Microbial contamination may alter the composition of the solution and potentially introduce pathogens to the plants. When dissolved in water, calcium chloride can undergo hydrolysis, especially over time, which can lead to the formation of small amounts of hydrochloric acid and calcium hydroxide: \(\mathrm{CaCl_2} + 2\,\mathrm{H_2O} \rightleftharpoons \mathrm{Ca(OH)_2} + 2\,\mathrm{HCl}\). This reaction can lower the pH of the solution, making it more acidic. Acidic solutions may harm plant tissues and disrupt nutrient uptake.
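The gram quantities above are plain molarity conversions; a quick check in Python (the molar masses are assumed rounded standard values):

```python
M_CACL2, M_HEXA = 110.98, 219.08          # g/mol, anhydrous and hexahydrate

for mM in (5, 10):
    mol = mM / 1000                        # mol per litre
    print(f"{mM:>2} mM -> {mol * M_CACL2:.2f} g/L CaCl2 "
          f"or {mol * M_HEXA:.2f} g/L CaCl2.6H2O")
#  5 mM -> 0.55 g/L CaCl2 or 1.10 g/L CaCl2.6H2O
# 10 mM -> 1.11 g/L CaCl2 or 2.19 g/L CaCl2.6H2O
```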
Calcium chloride dihydrate (20 percent by weight) dissolved in ethanol (95 percent ABV) has been used as a sterilant for male animals. The solution is injected into the testes of the animal. Within one month, necrosis of testicular tissue results in sterilization.
Cocaine producers in Colombia import tons of calcium chloride to recover solvents that are on the INCB Red List and are more tightly controlled.
Hazards
Although the salt is non-toxic in small quantities when wet, the strongly hygroscopic properties of non-hydrated calcium chloride present some hazards. It can act as an irritant by desiccating moist skin. Solid calcium chloride dissolves exothermically, and burns can result in the mouth and esophagus if it is ingested. Ingestion of concentrated solutions or solid products may cause gastrointestinal irritation or ulceration.
Consumption of calcium chloride can lead to hypercalcemia.
Properties
Calcium chloride dissolves in water, producing chloride ions and the aquo complex [Ca(H2O)6]2+. In this way, these solutions are sources of "free" calcium and free chloride ions. This description is illustrated by the fact that these solutions react with phosphate sources to give a solid precipitate of calcium phosphate: \(3\,\mathrm{Ca^{2+}} + 2\,\mathrm{PO_4^{3-}} \rightarrow \mathrm{Ca_3(PO_4)_2}\).
Calcium chloride has a very high enthalpy change of solution, indicated by considerable temperature rise accompanying dissolution of the anhydrous salt in water. This property is the basis for its largest-scale application.
Aqueous solutions of calcium chloride tend to be slightly acidic due to the influence of the chloride ions on the hydrogen ion concentration in water. The slight acidity of calcium chloride solutions is primarily due to the increased ionic strength of the solution, which can influence the activity of hydrogen ions and lower the pH slightly.
Molten calcium chloride can be electrolysed to give calcium metal and chlorine gas: \(\mathrm{CaCl_2} \rightarrow \mathrm{Ca} + \mathrm{Cl_2}\).
Preparation
In much of the world, calcium chloride is derived from limestone as a by-product of the Solvay process, which follows the net reaction below: \(2\,\mathrm{NaCl} + \mathrm{CaCO_3} \rightarrow \mathrm{Na_2CO_3} + \mathrm{CaCl_2}\).
North American consumption in 2002 was 1,529,000 tonnes (3.37 billion pounds). In the US, most calcium chloride is obtained by purification from brine. As with most bulk commodity salt products, trace amounts of other cations from the alkali metals and alkaline earth metals (groups 1 and 2) and other anions from the halogens (group 17) typically occur.
Occurrence
Calcium chloride occurs as the rare evaporite minerals sinjarite (dihydrate) and antarcticite (hexahydrate). Another natural hydrate known is ghiaraite – a tetrahydrate. The related minerals chlorocalcite (potassium calcium chloride, ) and tachyhydrite (calcium magnesium chloride, ) are also very rare. The same is true for rorisite, CaClF (calcium chloride fluoride).
| Physical sciences | Salts | null |
349628 | https://en.wikipedia.org/wiki/Mediterranean%20climate | Mediterranean climate | A Mediterranean climate ( ), also called a dry summer climate, described by Köppen and Trewartha as Cs, is a temperate climate type that occurs in the lower mid-latitudes (normally 30 to 44° north and south latitude). Such climates typically have dry summers and wet winters, with summer conditions being hot and winter conditions typically being mild. These weather conditions are typically experienced in the majority of Mediterranean-climate regions and countries, but remain highly dependent on proximity to the ocean, altitude and geographical location.
The dry summer climate is found throughout the warmer middle latitudes, affecting almost exclusively the western portions of continents in relative proximity to the coast. The climate type's name is in reference to the coastal regions of the Mediterranean Sea, which mostly share this type of climate, but it can also be found in the Atlantic portions of Iberia and Northwest Africa, the Pacific portions of the United States and Chile, extreme west areas of Argentina, around Cape Town in South Africa, parts of Southwest and South Australia, and parts of Central Asia. They tend to be found in proximity (both poleward and near the coast) of desert and semi-arid climates, and equatorward of oceanic climates.
Mediterranean climate zones are typically located along the western coasts of landmasses, between roughly 30 and 45 degrees north or south of the equator. The main cause of Mediterranean, or dry summer, climate is the subtropical ridge, which extends towards the pole of the hemisphere in question during the summer and migrates towards the equator during the winter. This is due to the seasonal poleward-equatorward variations of temperatures.
The resulting vegetation of Mediterranean climates are the garrigue or maquis in the European Mediterranean Basin, the chaparral in California, the fynbos in South Africa, the mallee in Australia, and the matorral in Chile. Areas with this climate are also where the so-called "Mediterranean trinity" of major agricultural crops have traditionally been successfully grown (wheat, grapes and olives). As a result, these regions are notable for their high-quality wines, grapeseed/olive oils, and bread products.
Köppen climate classification
Under the Köppen climate classification, "hot dry-summer" climates (classified as Csa) and "cool dry-summer" climates (classified as Csb) are often referred to as just "Mediterranean". Under the Köppen climate system, the first letter indicates the climate group (in this case temperate climates). Temperate climates or "C" zones average above 0 °C (or −3 °C), but below 18 °C, in their coolest months. The second letter indicates the precipitation pattern ("s" represents dry summers). Köppen has defined a dry summer month as a month with less than 30 mm of precipitation falling within the high-sun months (April to September in the Northern Hemisphere, October to March in the Southern Hemisphere), and that month must also receive one-third or less of the precipitation of the wettest winter month. Some, however, use a 40 mm level. The third letter indicates the degree of summer heat: "a" represents an average temperature in the warmest month above 22 °C, while "b" indicates the average temperature in the warmest month below 22 °C. There is also a "c" subtype, with three or fewer months' average temperature above 10 °C, but this climate is rare and very isolated.
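The letter logic is mechanical enough to sketch in code. The function below assumes twelve monthly climate normals (January first), uses the 0 °C winter isotherm and the 30 mm dry-summer limit quoted above, and omits the precedence checks against arid (B) climates that a full Köppen classification requires:

```python
def koeppen_cs(temp_c, precip_mm, southern=False):
    """Return 'Csa', 'Csb', 'Csc' or None for 12 monthly normals (sketch)."""
    summer = range(3, 9) if not southern else (9, 10, 11, 0, 1, 2)
    winter = [m for m in range(12) if m not in summer]
    if not (0 <= min(temp_c) < 18):                  # temperate "C" group
        return None
    driest_summer = min(precip_mm[m] for m in summer)
    wettest_winter = max(precip_mm[m] for m in winter)
    if not (driest_summer < 30 and 3 * driest_summer <= wettest_winter):
        return None                                   # fails the "s" criterion
    if max(temp_c) >= 22:
        return "Csa"                                  # hot summer
    warm_months = sum(t >= 10 for t in temp_c)
    return "Csb" if warm_months >= 4 else "Csc"       # warm vs. cold summer
```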
Under the Köppen classification, dry-summer climates (Csa, Csb) usually occur on the western sides of continents. Csb zones in the Köppen system include areas normally not associated with Mediterranean climates but with Oceanic climates, such as much of the Pacific Northwest, much of southern Chile, parts of west-central Argentina, and parts of New Zealand. Additional highland areas in the subtropics also meet Cs requirements, though they, too, are not normally associated with Mediterranean climates. The same goes for a number of oceanic islands such as Madeira, the Juan Fernández Islands, the western part of the Canary Islands, and the eastern part of the Azores.
Under Trewartha's modified Köppen climate classification, the two major requirements for a Cs climate are revised. Under Trewartha's system, at least eight months must have average temperatures of 10 °C or higher (subtropical), and the average annual precipitation must not exceed 890 mm, as well as satisfying Köppen's precipitation requirements.
Precipitation
Poleward extension and expansion of the subtropical anticyclone over the oceans bring subsiding air to the region in summer, with clear skies and high temperatures. When the anticyclone moves Equator-ward in winter, it is replaced by traveling, frontal cyclones with their attendant precipitation.
During summer, regions of the Mediterranean climate are strongly influenced by the subtropical ridge which keeps atmospheric conditions very dry with minimal cloud coverage. In some areas, such as coastal California, the cold current has a stabilizing effect on the surrounding air, further reducing the chances for rain, but often causing thick layers of marine fog that usually evaporates by mid-day. Similar to desert climates, in many Mediterranean climates there is a strong diurnal character to daily temperatures in the warm summer months due to strong solar heating during the day from sunlight and rapid cooling at night.
In winter, the subtropical ridge migrates towards the equator and leaves the area, making rainfall much more likely. As a result, areas with this climate receive almost all of their precipitation during their winter and spring seasons, and may go anywhere from four to six months during the summer and early fall without having any significant precipitation. In the lower latitudes, precipitation usually decreases in both the winter and summer due to higher evapotranspiration. Toward the polar latitudes, total moisture usually increases; for instance, the Mediterranean climate in Southern Europe has more rain. The rainfall also tends to be more evenly distributed throughout the year in Southern Europe, while in places such as Southern California, the summer is nearly or completely dry. In places where evapotranspiration is higher, steppe climates tend to prevail, but still follow the basic pattern of the Mediterranean climates.
Irregularity of the rainfall, which can vary considerably from year to year, accentuates the droughts of the Mediterranean climate. Rain does not fall evenly, nor does the rain arrive at the same time or within the same intervals. In Gibraltar, for instance, rain starts falling nearly half a season earlier than at the Dead Sea. In the Holy Land no rain at all falls in summer but early rains may come in autumn.
Temperature
The majority of the regions with Mediterranean climates have relatively mild winters and very warm summers. However, winter and summer temperatures can vary greatly between different regions with a Mediterranean climate. For instance, in the case of winters, Funchal experiences mild to warm temperatures in the winter, with frost and snowfall almost unknown, whereas Tashkent has cold winters with annual frosts and snowfall; or, to consider summer, Seville experiences rather high temperatures in that season. In contrast, San Francisco has cool summers, with daily highs moderated by the continuous upwelling of cold subsurface waters along the coast.
Because most regions with a Mediterranean climate are near large bodies of water, temperatures are generally moderate, with a comparatively small range of temperatures between the winter low and summer high unlike (the relatively rare) dry-summer humid continental climates (although the daily diurnal range of temperature during the summer is large due to dry and clear conditions, except along the immediate coastlines). Temperatures during winter only occasionally fall below the freezing point and snow is generally seldom seen. Summer temperatures can be cool to very hot, depending on the distance from a large body of water, elevation, and latitude, among other factors. Strong winds from inland desert regions can sometimes boost summer temperatures, quickly increasing the risk of wildfires. Notable exceptions to the usual proximity to bodies of water, thus featuring extremely high summer temperatures and cooler winters, include south-eastern Turkey and northern Iraq (Urfa, Erbil), surrounded by hot deserts to the south and mountains to the north. Those places routinely experience very hot summer daily means and extreme daily highs, while receiving enough rainfall in winter not to fall into arid or semi-arid classifications.
As in every climatologic domain, the highland locations of the Mediterranean domain can present cooler temperatures in the summer and winter than the lowland areas, temperatures which can sometimes prohibit the growth of typical cold-sensitive Mediterranean plants. Some Spanish authors opt to use the term 'continental Mediterranean climate' () for some regions with lower temperatures in winter than the coastal areas, but Köppen's Cs zones show no distinction as long as winter temperature means stay above freezing.
Additionally, the temperature and rainfall pattern for a Csa or even a Csb climate can exist as a microclimate in some high-altitude locations adjacent to a rare tropical As (tropical savanna climate with dry summers, typically in a rainshadow region, as in Hawaii).
Such regions have a favourable climate, with mild wet winters and fairly warm, dry summers.
Mediterranean biome
The Mediterranean forests, woodlands, and scrub biome is closely associated with Mediterranean climate zones, as are unique freshwater communities, though vegetation native to the Mediterranean climate zone can also be found in the approximate nearby climate zones, which usually tend to be the humid subtropical, oceanic and/or semi-arid zones, depending on the region and location. Particularly distinctive of the climate are sclerophyll shrublands, called maquis in the Mediterranean Basin, chaparral in California, matorral in Chile, fynbos in South Africa, and mallee and kwongan shrublands in Australia.
Mediterranean vegetation shows a number of adaptations to drought, grazing, and frequent fire regimes. The small sclerophyllous leaves that characterize many of the perennial shrubs of this biome, help conserve water and prevent nutrient loss. The soils generally are of low fertility, and many plants have mutualistic relationships with nitrogen-fixing bacteria.
Aquatic communities in Mediterranean climate regions are adapted to a yearly cycle in which abiotic (environmental) controls of stream populations and community structure dominate during floods, biotic components (e.g. competition and predation) controls become increasingly important as the flood discharge declines, and environmental controls regain dominance as environmental conditions become very harsh (i.e. hot and dry); as a result, these communities are well suited to recover from droughts, floods, and fires. Aquatic organisms in these regions show distinct long-term patterns in their structure and function, and are also highly sensitive to the recent effects of climate change.
Natural vegetation
The native vegetation of Mediterranean climate lands must be adapted to survive long, hot droughts in summer and prolonged wet periods in winter. Mediterranean vegetation examples include the following:
Evergreen trees: eucalyptus, casuarina, melaleuca, pine, and cypress
Deciduous trees: sycamore and some types of oak
Fruit trees: olive, figs, walnuts and grapes
Shrubs: rosemary, Erica, Banksia, buckeyes, chamise, Bay laurel and some oaks.
Sub-shrubs: lavender, Halimium, grevillea and sagebrush
Grasses: grassland types, Themeda triandra, bunchgrasses; sedges, and rushes
Herbs: Achillea, Dietes, Helichrysum and Penstemon
Much of the native vegetation in Mediterranean-climate valleys has been cleared for agriculture and farming. In places such as the Sacramento Valley and Oxnard Plain in California, draining marshes and estuaries combined with supplemental irrigation has led to a century of intensive agriculture. Much of the Overberg in the southern Cape of South Africa, once covered with renosterveld, has likewise been largely converted to agriculture, mainly for wheat. In hillside and mountainous areas, away from the urban sprawls, ecosystems and habitats of native vegetation are more sustained and undisturbed.
The fynbos vegetation in the South-western Cape in South Africa is famed for its high floral diversity, and includes such plant types as members of the Restionaceae, Ericas (Heaths) and Proteas. Representatives of the Proteaceae also grow in Australia, such as Banksias. The palette of California native plants is also renowned for its species and cultivar diversity.
Hot-summer Mediterranean climate
This subtype of the Mediterranean climate (Csa) is the most common form of the Mediterranean climate, therefore it is also known as a "typical Mediterranean climate". As stated earlier, regions with this form of a Mediterranean climate experience average monthly temperatures in excess of 22 °C during the warmest month and an average in the coldest month between 18 and −3 °C or, in some applications, between 18 and 0 °C. Regions with this form of the Mediterranean climate typically experience hot, sometimes very hot and dry summers. Winters can be mild, cool or chilly, and some cities in this region receive somewhat regular snowfall (e.g. Kermanshah), while others do not receive any (e.g. Casablanca).
Csa climates are mainly found around the Mediterranean Sea, southern Australia, southwestern South Africa, sections of Central Asia, northern sections of Iran and Iraq, the interior of northern California west of the Sierra Nevada, along the Wasatch Front in Utah, and inland areas of southern Oregon west of the Cascade Mountains. Southern California's coasts also experience hot summers due to the shielding effect of the Channel Islands. However, unshielded areas of that coastline can have warm-summer Mediterranean climates with hot-summer areas just a few kilometres inland.
Warm-summer Mediterranean climate
Occasionally also termed the "cool-summer Mediterranean climate", this subtype of the Mediterranean climate (Csb) is less common and involves warm (but not hot) and dry summers, with no average monthly temperature above 22 °C during the warmest month, and as usual an average in the coldest month between 18 and −3 °C or, in some applications, between 18 and 0 °C.
Also, at least four months must average above 10 °C.
Cool ocean currents, upwelling and higher latitudes are often the reason for this cooler type of Mediterranean climate.
The other main reason for this cooler type is altitude. For instance, Menton on the French coast has a Csa climate, while Castellar, Alpes-Maritimes, the adjacent town just north of Menton and situated at higher altitude, has a Csb climate instead. The village of Siah Bisheh in Northern Iran also has a Csb climate because of its location inside the Alborz mountains.
Winters in this zone are rainy and can be mild to chilly. Some locales in this zone experience some amount of snowfall, while others do not.
Csb climates are found in northwestern Iberian Peninsula (namely Galicia and the Norte region and west coast of Portugal), in coastal Northern California, in the Pacific Northwest (namely western Washington, western Oregon and southern portions of Vancouver Island in British Columbia), in central Chile, in parts of southern Australia and in sections of southwestern South Africa. A few locations close to the south coast of England such as Weymouth and Portland just scrape into this climate classification due to very low rainfall in July. A trend towards slightly drier summers during the 1971–2000 climate average period, meant that this classification previously extended slightly further to include a few other weather stations in southern England, such as Bognor Regis and Teignmouth. Rarer instances of this climate can be found in relatively small and isolated high altitude areas of the Andes in Northern Ecuador, Peru, Colombia, and Western Venezuela.
Cold-summer Mediterranean climate
The cold-summer subtype of the Mediterranean climate (Csc) is rare and predominantly found at scattered high-altitude locations along the west coasts of North and South America having a similar climate. This type is characterized by cool, dry summers, with fewer than four months with a mean temperature at or above 10 °C, as well as by cool, wet winters, with no winter month having a mean temperature below 0 °C (or −3 °C, depending on the isotherm used). Regions in the Americas with this climate are influenced by the dry-summer trend (though briefly) that extends considerably poleward along the west coast, as well as the moderating influences of high altitude and relative proximity to the Pacific Ocean. These conditions maintain an unusually narrow temperature range through the year for climate zones at such distances from coasts.
In North America, areas with Csc climate can be found in the Olympic, Cascade, Klamath, and Sierra Nevada ranges in Washington, Oregon and California. These locations are found at high altitude nearby lower altitude regions characterized by a warm-summer Mediterranean climate (Csb) or hot-summer Mediterranean climate (Csa). A rare instance of this climate occurs in the tropics, on Haleakalā Summit in Hawaii.
In South America, Csc regions can be found along the Andes in Chile and Argentina. The town of Balmaceda, Chile is one of the few towns confirmed to have this climate.
Small areas with a Csc climate can be found at high elevations in Corsica.
In Norway, the small fishing village of Røstlandet, in Røst Municipality, has a climate bordering on Csc and is known as a "climatic anomaly" due to abnormally warm temperatures despite its location above the Arctic Circle, at more than 67°N.
| Physical sciences | Climates | Earth science |
349676 | https://en.wikipedia.org/wiki/Electrophoresis | Electrophoresis | Electrophoresis is the motion of charged dispersed particles or dissolved charged molecules relative to a fluid under the influence of a spatially uniform electric field. As a rule, these are zwitterions.
Electrophoresis is used in laboratories to separate macromolecules based on their charges. The technique normally uses a negatively charged electrode (the cathode) and a positively charged electrode (the anode), so that anionic molecules such as many proteins move towards the anode. Electrophoresis of positively charged particles or molecules (cations) is sometimes called cataphoresis, while electrophoresis of negatively charged particles or molecules (anions) is sometimes called anaphoresis.
Electrophoresis is the basis for analytical techniques used in biochemistry and bioinorganic chemistry to separate particles, molecules, or ions by size, charge, or binding affinity, either freely or through a supportive medium using a one-directional flow of electrical charge. It is used extensively in DNA, RNA and protein analysis.
Liquid "droplet electrophoresis" is significantly different from the classic "particle electrophoresis" because of droplet characteristics such as a mobile surface charge and the nonrigidity of the interface. Also, the liquid–liquid system, where there is an interplay between the hydrodynamic and electrokinetic forces in both phases, adds to the complexity of electrophoretic motion.
History
Theory
Suspended particles have an electric surface charge, strongly affected by surface adsorbed species, on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer which has direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called electrophoretic retardation force, or ERF in short.
When the electric field is applied and the charged particle to be analyzed is in steady movement through the diffuse layer, the total resulting force is zero: \(F_{\mathrm{el}} + F_{\mathrm{drag}} + F_{\mathrm{ret}} = 0.\)
Considering the drag on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the drift velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as: \(\mu_e = \frac{v}{E}.\)
The most well known and widely used theory of electrophoresis was developed in 1903 by Marian Smoluchowski:
\(\mu_e = \frac{\varepsilon_r \varepsilon_0 \zeta}{\eta},\)
where εr is the dielectric constant of the dispersion medium, ε0 is the permittivity of free space (C2 N−1 m−2), η is the dynamic viscosity of the dispersion medium (Pa s), and ζ is the zeta potential (i.e., the electrokinetic potential of the slipping plane in the double layer, units mV or V).
The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration. It has limitations on its validity: for instance, it does not include the Debye length κ−1 (units m). However, the Debye length must be important for electrophoresis, as the original figure ("Illustration of electrophoresis retardation") depicted: increasing the thickness of the double layer (DL) moves the point at which the retardation force acts further from the particle surface, so the thicker the DL, the smaller the retardation force must be.
Detailed theoretical analysis proved that the Smoluchowski theory is valid only for a sufficiently thin DL, when the particle radius a is much greater than the Debye length: \(\kappa a \gg 1\).
This model of the "thin double layer" offers tremendous simplifications, not only for electrophoresis theory but for many other electrokinetic theories. This model is valid for most aqueous systems, where the Debye length is usually only a few nanometers. It only breaks down for nano-colloids in a solution with ionic strength close to that of pure water.
The Smoluchowski theory also neglects the contributions from surface conductivity. This is expressed in modern theory as the condition of a small Dukhin number: \(Du \ll 1\).
In an effort to expand the range of validity of electrophoretic theories, the opposite asymptotic case was considered, when the Debye length is larger than the particle radius: \(\kappa a \ll 1\).
Under this condition of a "thick double layer", Erich Hückel predicted the following relation for electrophoretic mobility: \(\mu_e = \frac{2 \varepsilon_r \varepsilon_0 \zeta}{3 \eta}.\)
This model can be useful for some nanoparticles and non-polar fluids, where Debye length is much larger than in the usual cases.
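A numerical sketch of the two limiting formulas in SI units; the water properties plugged in below are typical room-temperature values chosen for illustration:

```python
EPS0 = 8.854e-12                          # vacuum permittivity, F/m

def mu_smoluchowski(zeta, eps_r, eta):
    """Thin-double-layer limit (kappa*a >> 1)."""
    return eps_r * EPS0 * zeta / eta

def mu_hueckel(zeta, eps_r, eta):
    """Thick-double-layer limit (kappa*a << 1): 2/3 of the Smoluchowski value."""
    return 2.0 * eps_r * EPS0 * zeta / (3.0 * eta)

# zeta = +50 mV in water near 25 degC: eps_r ~ 78.4, eta ~ 0.89 mPa*s
print(f"{mu_smoluchowski(0.050, 78.4, 8.9e-4):.2e} m^2/(V s)")  # ~3.9e-08
print(f"{mu_hueckel(0.050, 78.4, 8.9e-4):.2e} m^2/(V s)")       # ~2.6e-08
```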
There are several analytical theories that incorporate surface conductivity and eliminate the restriction of a small Dukhin number, pioneered by Theodoor Overbeek and F. Booth. Modern, rigorous theories valid for any zeta potential, and often for any \(\kappa a\), stem mostly from the Dukhin–Semenikhin theory.
In the thin double layer limit, these theories confirm the numerical solution to the problem provided by Richard W. O'Brien and Lee R. White.
For modeling more complex scenarios, these simplifications become inaccurate, and the electric field must be modeled spatially, tracking its magnitude and direction. Poisson's equation can be used to model this spatially varying electric field. Its influence on fluid flow can be modeled with the Stokes equations, while transport of the different ions can be modeled using the Nernst–Planck equation. This combined approach is referred to as the Poisson–Nernst–Planck–Stokes equations. It has been validated for the electrophoresis of particles.
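Written out, the coupled system takes roughly the following form, with \(c_i\), \(z_i\), \(D_i\) the concentration, valence and diffusivity of ion species i, \(\phi\) the electric potential, \(\mathbf{u}\) the fluid velocity and \(\rho_e\) the free charge density; signs and nondimensionalisations vary between references, so this is a sketch rather than a canonical statement:

```latex
\nabla\cdot(\varepsilon\nabla\phi) = -\rho_e,
\qquad \rho_e = F\sum_i z_i c_i \quad\text{(Poisson)}
\\[4pt]
\frac{\partial c_i}{\partial t}
  = \nabla\cdot\!\Big(D_i\nabla c_i
  + \frac{D_i z_i F}{RT}\,c_i\nabla\phi - c_i\,\mathbf{u}\Big)
  \quad\text{(Nernst--Planck)}
\\[4pt]
-\nabla p + \eta\,\nabla^{2}\mathbf{u} - \rho_e\nabla\phi = \mathbf{0},
\qquad \nabla\cdot\mathbf{u} = 0 \quad\text{(Stokes)}
```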
| Physical sciences | Electrical methods | Chemistry |
349735 | https://en.wikipedia.org/wiki/Baryon%20number | Baryon number | In particle physics, the baryon number is a strictly conserved additive quantum number of a system. It is defined as
\(B = \frac{1}{3}\left(n_q - n_{\bar q}\right),\)
where \(n_q\) is the number of quarks, and \(n_{\bar q}\) is the number of antiquarks. Baryons (three quarks) have a baryon number of +1, mesons (one quark, one antiquark) have a baryon number of 0, and antibaryons (three antiquarks) have a baryon number of −1. Exotic hadrons like pentaquarks (four quarks, one antiquark) and tetraquarks (two quarks, two antiquarks) are also classified as baryons and mesons depending on their baryon number.
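The bookkeeping can be stated as a one-line calculation; the sketch below keeps exact thirds with Python's Fraction, and the example hadrons are the ones named above:

```python
from fractions import Fraction

def baryon_number(n_quarks, n_antiquarks):
    """B = (n_q - n_qbar) / 3, kept as an exact fraction."""
    return Fraction(n_quarks - n_antiquarks, 3)

print(baryon_number(3, 0))   # baryon:      1
print(baryon_number(1, 1))   # meson:       0
print(baryon_number(0, 3))   # antibaryon: -1
print(baryon_number(4, 1))   # pentaquark:  1 (classified as a baryon)
print(baryon_number(2, 2))   # tetraquark:  0 (classified as a meson)
```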
Baryon number vs. quark number
Quarks carry not only electric charge, but also charges such as color charge and weak isospin. Because of a phenomenon known as color confinement, a hadron cannot have a net color charge; that is, the total color charge of a particle has to be zero ("white"). A quark can have one of three "colors", dubbed "red", "green", and "blue"; while an antiquark may be either "anti-red", "anti-green" or "anti-blue".
For normal hadrons, a white color can thus be achieved in one of three ways:
A quark of one color with an antiquark of the corresponding anticolor, giving a meson with baryon number 0,
Three quarks of different colors, giving a baryon with baryon number +1,
Three antiquarks of different anticolors, giving an antibaryon with baryon number −1.
The baryon number was defined long before the quark model was established, so rather than changing the definitions, particle physicists simply assigned quarks a baryon number of 1/3 each. Nowadays it might be more accurate to speak of the conservation of quark number.
In theory, exotic hadrons can be formed by adding pairs of quarks and antiquarks, provided that each pair has a matching color/anticolor. For example, a pentaquark (four quarks, one antiquark) could have the individual quark colors: red, green, blue, blue, and antiblue. In 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom Lambda baryons ().
Particles not formed of quarks
Particles without any quarks have a baryon number of zero. Such particles are
leptons – the electron, muon, tauon, and their corresponding neutrinos
vector bosons – the photon, W and Z bosons, gluons
scalar boson – the Higgs boson
second-order tensor boson – the hypothetical graviton
Conservation
The baryon number is conserved in all the interactions of the Standard Model, with one possible exception. The conservation is due to a global symmetry of the QCD Lagrangian. 'Conserved' means that the sum of the baryon number of all incoming particles is the same as the sum of the baryon numbers of all particles resulting from the reaction. The one exception is the hypothesized Adler–Bell–Jackiw anomaly in electroweak interactions; the associated sphaleron processes are exceedingly rare, occurring only at very high energy and temperature levels, and could explain electroweak baryogenesis and leptogenesis. Electroweak sphalerons can only change the baryon and/or lepton number by 3 or multiples of 3 (for example, the collision of three baryons into three leptons/antileptons and vice versa). No experimental evidence of sphalerons has yet been observed.
The hypothetical concepts of grand unified theory (GUT) models and supersymmetry allow for the changing of a baryon into leptons and antiquarks (see B − L), thus violating the conservation of both baryon and lepton numbers. Proton decay would be an example of such a process, but it has never been observed.
The conservation of baryon number is not consistent with the physics of black hole evaporation via Hawking radiation. It is expected in general that quantum gravitational effects violate the conservation of all charges associated to global symmetries. The violation of conservation of baryon number led John Archibald Wheeler to speculate on a principle of mutability for all physical properties.
| Physical sciences | Quantum numbers | Physics |
349819 | https://en.wikipedia.org/wiki/Pin%20tumbler%20lock | Pin tumbler lock | The pin tumbler lock, also known as the Yale lock after the inventor of the modern version, is a lock mechanism that uses pins of varying lengths to prevent the lock from opening without the correct key.
Pin tumblers are most commonly employed in cylinder locks, but may also be found in tubular pin tumbler locks (also known as radial locks or ace locks).
History
The first known example of a tumbler lock was found in the ruins of the Palace of Khorsabad built by king Sargon II (721–705 BC) in Iraq. Basic principles of the pin tumbler lock may date as far back as 2000 BC in Egypt; the lock consisted of a wooden post affixed to the door and a horizontal bolt that slid into the post. The bolt had vertical openings into which a set of pins fitted. These could be lifted, using a key, to a sufficient height to allow the bolt to move and unlock the door. This wooden lock was one of Egypt's major developments in domestic architecture during classical times.
Such a lock, however, may be defeated by lifting the pins uniformly beyond the unlatching point. In 1805, the earliest patent for a double-acting pin tumbler lock (one where lifting the pins too much or too little prevents opening) was granted to American physician Abraham O. Stansbury in England. It was based on earlier Egyptian locks and Joseph Bramah's tubular pin tumbler lock. Two years later, Stansbury was granted a patent in the United States for his lock.
In 1848, Linus Yale Sr. invented the modern pin-tumbler lock. In 1861, Linus Yale Jr., inspired by the original 1840s pin-tumbler lock designed by his father, invented and patented a smaller flat key with serrated edges, as well as pins of varying lengths within the lock itself, the same design of the pin-tumbler lock in use today.
Design
The pin tumbler is commonly used in cylinder locks. In this type of lock, an outer casing has a cylindrical hole in which the plug is housed. To open the lock, the plug must rotate.
The plug has a straight-shaped slot known as the keyway at one end to allow the key to enter the plug; the other end may have a cam or lever, which activates a mechanism to retract a locking bolt. The keyway often has protruding ledges that serve to prevent the key pins from falling into the plug, and to make the lock more resistant to picking. A series of holes, typically five or six of them, are drilled vertically into the plug. These holes contain key pins of various lengths, which are rounded to permit the key to slide over them easily.
Above each key pin is one or more spring-loaded driver pins. Simple locks typically have only one driver pin for each key pin, but locks requiring multi-keyed entry, such as a group of locks having a master key, may have extra driver pins known as spacer pins. The outer casing has several vertical shafts, which hold the spring-loaded pins.
When the plug and outer casing are assembled, the pins are pushed down into the plug by the springs. The point where the plug and cylinder meet is called the shear point. With a key properly cut and inserted into the groove on the end of the plug, the pins will rise causing them to align exactly at the shear point. This allows the plug to rotate, thus opening the lock. When the key is not in the lock, the pins straddle the shear point, preventing the plug from rotating.
Commonly pin tumbler locks are found in a cylinder that can be easily unscrewed by a locksmith to facilitate rekeying. The first main advantage to a cylinder lock, also known as a profile cylinder lock or euro, is that the cylinder can be changed without altering the boltwork hardware. Removing the cylinder typically requires only loosening a set screw, then sliding the cylinder from the boltwork. The second is that it is usually possible to obtain, from various lock manufacturers, cylinders in different formats that can all be used with the same type of key. This allows the user to have keyed-alike, and master-keyed systems that incorporate a wide variety of different types of lock, such as nightlatches, deadbolts and roller door locks.
Typically, commercial padlocks can also be included, although these rarely have removable cylinders. Standardised types of cylinder include:
Rim-mounted (also known as night latch) cylinders
Euro cylinders
Key-in-knobset cylinders
Ingersoll-format cylinders
American, and Scandinavian round mortise cylinders
Scandinavian oval cylinders
There are also standardised cross-sectional profiles for lock cylinders that may vary in length, for example to suit different door thicknesses. These profiles include the europrofile (or DIN standard), the British oval profile and the Swiss profile.
Other varieties
A tubular pin tumbler lock is a pin-tumbler lock with a round keyway.
A dimple lock is a pin tumbler lock where the bitting is located on the side of the key, rather than the top.
Master keying
A master-keyed lock is a variation of the pin tumbler lock that allows the lock to be opened with two (or more) different keys. This type is often used for doorlocks in commercial buildings with multiple tenants, such as office buildings, hotels, student accommodation and storage facilities. Each tenant is given a key that only unlocks their own door, called the change key, but the second key is the master key, which unlocks all the doors, and is usually kept by the building manager, so they can enter any room in the building.
In a master-keyed lock, some or all of the pin chambers in the lock have three pins in them instead of two. Between the driver pin and the key pin is a third pin called the spacer pin, also known as a master wafer. Thus each pin line has two shear points, one where the driver and spacer pins meet, and one where the spacer and key pins meet. So the lock will open with two keys; one aligns the first set of shear points and the other aligns the second set of shear points. The locks are manufactured so one set of shear points is unique to each lock, while the second set is identical in all the locks. A downside of a lock configured in this way is that it may be easier to pick, because a pin stack with more shear points offers more chances for a picking attack to succeed. A more secure type of mechanism has two separate tumblers, each opened by one key.
More complicated master-key lock systems are also made, with two or more levels of master keying, so there can be subordinate master keys that open only certain subsets of the locks, and a top-level master key that opens all the locks.
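The shear-point arithmetic behind master keying can be made concrete with a toy model; the cut depths below are invented for illustration, and real master-keying charts impose many further constraints (such as minimum differences between adjacent cuts):

```python
def opens(chamber_shear_points, key_cuts):
    """A key opens the toy lock if its cut depth in every chamber coincides
    with one of that chamber's shear points."""
    return all(cut in points
               for points, cut in zip(chamber_shear_points, key_cuts))

# Five chambers; mastered chambers carry a spacer pin, hence two shear points.
lock = [{3}, {1, 5}, {4}, {2, 6}, {5}]
print(opens(lock, [3, 1, 4, 2, 5]))   # True  (change key)
print(opens(lock, [3, 5, 4, 6, 5]))   # True  (master key)
print(opens(lock, [3, 1, 4, 6, 5]))   # True: an unintended "cross" key also
                                      # works, one way mastering weakens a lock
```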
Vulnerabilities
The basic pin tumbler lock alone is vulnerable to several lock picking methods. The most straightforward include lock bumping and snap guns. To combat this, many higher security cylinders incorporate the use of a variety of specialised pins, collectively known as security pins, that are designed to catch in the lock cylinder if a snap gun or bump key is used. Some types of security pin are spool pins that have a narrow machined waist, so called because they resemble a cotton spool, and serrated pins which are driver and/or key pins that have one or more narrow grooves cut into them, known as serrations. Both these pin modifications can give an inexperienced or opportunist lock picker the illusion of progress by causing the lock core to partially rotate or emit extra clicks. These may make the pin appear to be set when in fact it is still blocking the shear line and preventing the lock from opening. These and other security pin designs can add delay (increasing the chance of being apprehended) and, by adding complexity, may deter an attacker who does not know how to defeat these countermeasures. Generally speaking, an attack by a sufficiently experienced picker may eventually succeed. Some security pins as well as different spring designs can also make a bumping attack less likely to succeed, though this may depend on factors such as the degree of variation in bitting height between adjacent cuts in the operating key.
Lock snapping is a method of forced entry that certain types of cylinder locks are vulnerable to. Lock snapping involves applying a strong torque force to the lock cylinder, usually with a pair of locking pliers, thereby breaking the mechanism and allowing access to the latch. It can take between 50 seconds and 2 minutes to snap the lock and gain entry. Police in the UK have estimated that around 22 million doors throughout the country could be at risk from lock snapping.
Lock snapping is possible when the lock has a weakness where the retaining bolt passes through a thinner part of the lock. A recent development is to build a lock with a front section that snaps off the main body, leaving enough of the mechanism behind to prevent access to the operating latch. Some designs feature more than one sacrificial section which can stop the door from being opened from the attacked side (even with the key) while allowing the door to be opened from the other side.
Criminals may use a small blowtorch to attack the area of uPVC or composite material surrounding the euro lock and door handle, creating a hole deep enough to reach into the door around the euro cylinder. Once the hole is made, the goal is to reach with mole grips past any sacrificial lines of an inferior euro cylinder lock. The weak point of any euro lock is the central screw hole, which holds the lock in place; just above it sits the cylinder's cam, the component that locks and unlocks the door. Once past the sacrificial lines of the euro lock, the burglar applies pressure to the central screw hole area, which then breaks easily, as in the standard lock-snapping method.
Cylinders that meet either the Sold Secure SS312 Diamond or TS007 3-Star standard protect against drilling, picking, bumping, snapping, and plug-extraction methods of attack. When fitting uprated cylinder door locks, it is advisable to pair them with effective security door furniture (handles).
| Technology | Mechanisms | null |
350073 | https://en.wikipedia.org/wiki/Ruff%20%28bird%29 | Ruff (bird) | The ruff (Calidris pugnax) is a medium-sized wading bird that breeds in marshes and wet meadows across northern Eurasia. This highly gregarious sandpiper is migratory and sometimes forms huge flocks in its winter grounds, which include southern and western Europe, Africa, southern Asia and Australia.
The ruff is a long-necked, pot-bellied bird. This species shows marked sexual dimorphism; the male is much larger than the female (the reeve), and has a breeding plumage that includes brightly coloured head tufts, bare orange facial skin, extensive black on the breast, and the large collar of ornamental feathers that inspired this bird's English name. The female and the non-breeding male have grey-brown upperparts and mainly white underparts. Three differently plumaged types of male, including a rare form that mimics the female, use a variety of strategies to obtain mating opportunities at a lek, and the colourful head and neck feathers are erected as part of the elaborate main courting display. The female has one brood per year and lays four eggs in a well-hidden ground nest, incubating the eggs and rearing the chicks, which are mobile soon after hatching, on her own. Predators of wader chicks and eggs include mammals such as foxes, feral cats and stoats, and birds such as large gulls, corvids and skuas.
The ruff forages in wet grassland and soft mud, probing or searching by sight for edible items. It primarily feeds on insects, especially in the breeding season, but it will consume plant material, including rice and maize, on migration and in winter. The ruff is classified as a species of least concern under IUCN Red List criteria, and global conservation concern is relatively low because of the large numbers that breed in Scandinavia and the Arctic. However, its range in much of Europe is contracting because of land drainage, increased fertiliser use, the loss of mown or grazed breeding sites, and over-hunting. This decline has seen it listed in the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA).
Taxonomy and nomenclature
The ruff is a wader in the large family Scolopacidae, the typical shorebirds. Recent research suggests that its closest relatives are the broad-billed sandpiper, Calidris falcinellus, and the sharp-tailed sandpiper, Calidris acuminata. It has no recognised subspecies or geographical variants.
This species was first described by Carl Linnaeus in his Systema Naturae in 1758 as Tringa pugnax. It was moved to the monotypic genus Philomachus by German naturalist Blasius Merrem in 1804. More recent DNA research has shown it fits comfortably into the wader genus Calidris. The genus name comes from the Ancient Greek kalidris or skalidris, a term used by Aristotle for some grey-coloured waterside birds. The specific epithet refers to the aggressive behaviour of the bird at its mating arenas — pugnax from the Latin term for "combative".
The original English name for this bird, dating back to at least 1465, is the ree, perhaps derived from a dialectal term meaning "frenzied"; a later name, reeve, which is still used for the female, is of unknown origin, but may derive from the shire-reeve, a feudal officer, likening the male's flamboyant plumage to the official's robes. The current name was first recorded in 1634, and derives from the ruff, an exaggerated collar fashionable from the mid-sixteenth century to the mid-seventeenth century, since the male bird's ornamental neck feathers resemble the neck-wear.
Description
The ruff has a distinctive gravy boat appearance, with a small head, medium-length bill, longish neck and pot-bellied body. It has long legs that are variable in colour, being dark greenish in juveniles and pink to orange in adults, with some males having reddish-orange legs only during the breeding season. In flight, it has a deeper, slower wing stroke than other waders of a similar size, and displays a thin, indistinct white bar on the wing, and white ovals on the sides of the tail. The species shows sexual dimorphism. Although a small percentage of males resemble females, the typical male is much larger than the female and has an elaborate breeding plumage. The male is long with a wingspan, and weighs about . In the May-to-June breeding season, the male's legs, bill and warty bare facial skin are orange, and he has distinctive head tufts and a neck ruff. These ornaments vary on individual birds, being black, chestnut or white, with the colouring solid, barred or irregular. The grey-brown back has a scale-like pattern, often with black or chestnut feathers, and the underparts are white with extensive black on the breast. The extreme variability of the main breeding plumage is thought to have developed to aid individual recognition in a species that has communal breeding displays, but is usually mute.
Outside the breeding season, the male's head and neck decorations and the bare facial skin are lost and the legs and bill become duller. The upperparts are grey-brown, and the underparts are white with grey mottling on the breast and flanks.
The females, or "reeve", is long with a wingspan, and weighs about . In breeding plumage, they have grey-brown upperparts with white-fringed, dark-centred feathers. The breast and flanks are variably blotched with black. In winter, the females' plumage is similar to that of the male, but the sexes are distinguishable by their size. The plumage of the juvenile ruff resembles the non-breeding adult, but has upperparts with a neat, scale-like pattern with dark feather centres, and a strong buff tinge to the underparts.
Adult male ruffs start to moult into the main display plumage before their return to the breeding areas, and the proportion of birds with head and neck decorations gradually increases through the spring. Second-year birds lag behind full adults in developing breeding plumage. They have a lower body mass and a slower weight increase than full adults, and the demands made on their energy reserves during the migration flight are perhaps the main reason for the delayed moult.
Ruffs of both sexes have an additional moult stage between the winter and final summer plumages, a phenomenon also seen in the bar-tailed godwit. Before developing the full display finery with coloured ruff and tufts, the males replace part of their winter plumage with striped feathers. Females also develop a mix of winter and striped feathers before reaching their summer appearance. The final male breeding plumage results from the replacement of both winter and striped feathers, but the female retains the striped feathers and replaces only the winter feathers to reach her summer plumage. The striped prenuptial plumages may represent the original breeding appearance of this species, the male's showy nuptial feathers evolving later under strong sexual selection pressures.
Adult males and most adult females start their pre-winter moult before returning south, but complete most feather replacement on the wintering grounds. In Kenya, males moult 3–4 weeks ahead of the females, finishing before December, whereas females typically complete feather replacement during December and early January. Juveniles moult from their first summer body plumage into winter plumage during late September to November, and later undergo a pre-breeding moult similar in timing and duration to that of the adults, and often producing as brightly coloured an appearance.
Two other waders can be confused with the ruff. The juvenile sharp-tailed sandpiper is a little smaller than a juvenile female ruff and has a similar rich orange-buff breast, but the ruff is slimmer with a longer neck and legs, a rounder head, and a much plainer face. The buff-breasted sandpiper also resembles a small juvenile ruff, but even the female ruff is noticeably larger than the sandpiper, with a longer bill, more rotund body and scaly-patterned upperparts.
Distribution and habitat
The ruff is a migratory species, breeding in wetlands in colder regions of northern Eurasia and spending the northern winter in the tropics, mainly in Africa. Some Siberian breeders undertake an annual round trip of up to to the West African wintering grounds. There is a limited overlap of the summer and winter ranges in western Europe. The ruff breeds in extensive lowland freshwater marshes and damp grasslands. It avoids barren tundra and areas badly affected by severe weather, preferring hummocky marshes and deltas with shallow water. The wetter areas provide a source of food, the mounds and slopes may be used for leks, and dry areas with sedge or low scrub offer nesting sites. A Hungarian study found that moderately intensive grazing of grassland, with more than one cow per hectare (2.5 acres), attracted more nesting pairs. When not breeding, the birds use a wider range of shallow wetlands, such as irrigated fields, lake margins, and mining subsidence and other floodlands. Dry grassland, tidal mudflats and the seashore are less frequently used. The density can reach 129 individuals per square kilometre (334 per square mile), but is usually much lower.
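As a quick sanity check on the density figure above (the only assumption here is the standard conversion factor between square miles and square kilometres):

```python
# Unit check: 1 square mile = 2.589988 km^2, so a density of
# 129 birds/km^2 corresponds to roughly 334 birds per square mile.
KM2_PER_MI2 = 2.589988
print(round(129 * KM2_PER_MI2))  # -> 334
```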
The ruff breeds in Europe and Asia from Scandinavia and Great Britain almost to the Pacific. In Europe it is found in cool temperate areas, but over its Russian range it is an Arctic species, occurring mainly north of about 65°N. The largest numbers breed in Russia (more than 1 million pairs), Sweden (61,000 pairs), Finland (39,000 pairs) and Norway (14,000 pairs). Although it also breeds from Britain east through the Low Countries to Poland, Germany and Denmark, there are fewer than 2,000 pairs in these more southerly areas.
It is highly gregarious on migration, travelling in large flocks that can contain hundreds or thousands of individuals. Huge dense groups form on the wintering grounds; one flock in Senegal contained a million birds. A minority winter further east to Burma, south China, New Guinea and scattered parts of southern Australia, or on the Atlantic and Mediterranean coasts of Europe. In Great Britain and parts of coastal western Europe, where the breeding and wintering ranges overlap, birds may be present throughout the year. Non-breeding birds may also remain year round in the tropical wintering quarters. The ruff is an uncommon visitor to Alaska (where it has occasionally bred), Canada and the contiguous states of the US, and has wandered to Iceland, Middle America, northern South America, Madagascar and New Zealand. It has been recorded as breeding well south of its main range in northern Kazakhstan, a major migration stopover area.
The male, which plays no part in nesting or chick care, leaves the breeding grounds in late June or early July, followed later in July by the female and juveniles. Males typically make shorter flights and winter further north than females; for example, virtually all wintering ruffs in Britain are males, whereas in Kenya most are females. Many migratory species use this differential wintering strategy, since it reduces feeding competition between the sexes and enables territorial males to reach the breeding grounds as early as possible, improving their chances of successful mating. Male ruffs may also be able to tolerate colder winter conditions because they are larger than females.
Birds returning north in spring across the central Mediterranean appear to follow a well-defined route. Large concentrations of ruffs form every year at particular stopover sites to feed, and individuals marked with rings or dye reappear in subsequent years. The refuelling sites are closer together than the theoretical maximum travel distance calculated from the mean body mass, and provide evidence of a migration strategy using favoured intermediate sites. The ruff stores fat as a fuel, but unlike mammals, uses lipids as the main energy source for exercise (including migration) and, when necessary, keeps warm by shivering; however, little research has been conducted on the mechanisms by which they oxidise lipids.
Behaviour
Mating
Males display during the breeding season at a lek in a traditional open grassy arena. The ruff is one of the few lekking species in which the display is primarily directed at other males rather than at the females, and it is among the small percentage of birds in which the males have well-marked and inherited variations in plumage and mating behaviour. There are three male forms: the typical territorial males, satellite males which have a white neck ruff, and a very rare variant with female-like plumage. The behaviour and appearance of an individual male remain constant through its adult life, and are determined by its genes (see §Genetics of variation among males).
The territorial males, about 84% of the total, have strongly coloured black or chestnut ruffs and stake out and occupy small mating territories in the lek. They actively court females and display a high degree of aggression towards other resident males; 5–20 territorial males each hold an area of the lek about across, usually with bare soil in the centre. They perform an elaborate display that includes wing fluttering, jumping, standing upright, crouching with ruff erect, or lunging at rivals. They are typically silent even when displaying, although a soft gue-gue-gue may occasionally be given.
Territorial males are very site-faithful; 90% return to the same lekking sites in subsequent seasons, the most dominant males being the most likely to reappear. Site-faithful males can acquire accurate information about the competitive abilities of other males, leading to well-developed dominance relationships. Such stable relationships reduce conflict and the risk of injury, and the consequent lower levels of male aggression are less likely to scare off females. Lower-ranked territorial males also benefit from site fidelity since they can remain on the leks while waiting for the top males eventually to drop out.
Satellite males, about 16% of the total number, have white or mottled ruffs and do not occupy territories; they enter leks and attempt to mate with the females visiting the territories occupied by the resident males. Resident males tolerate the satellite birds because, although they are competitors for mating with the females, the presence of both types of male on a territory attracts additional females. Females also prefer larger leks, and leks surrounded by taller plants, which give better nesting habitat.
Although satellite males are on average slightly smaller and lighter than residents, the nutrition of the chicks does not, as previously thought, influence mating strategy; rather, the inherited mating strategy influences body size. Resident-type chicks will, if provided with the same amount of food, grow heavier than satellite-type chicks. Satellite males do not have to expend energy to defend a territory, and can spend more time foraging, so they do not need to be as bulky as the residents; indeed, since they fly more, there would be a physiological cost to additional weight.
A third type of male was first described in 2006; this is a permanent female mimic, the first such reported for a bird. About 1% of males are small, intermediate in size between males and females, and do not grow the elaborate breeding plumage of the territorial and satellite males, although they have much larger internal testes than the ruffed males. Although the males of most lekking bird species have relatively small testes for their size, male ruffs have the most disproportionately large testes of any bird.
This cryptic male, or "faeder" (Old English "father") obtains access to mating territories together with the females, and "steals" matings when the females crouch to solicit copulation. The faeder moults into the prenuptial male plumage with striped feathers, but does not go on to develop the ornamental feathers of the more common males. As described above, this stage is thought to show the original male breeding plumage, before other male types evolved. A faeder can be distinguished in the hand by its wing length, which is intermediate between those of displaying males and females. Despite their feminine appearance, the faeders migrate with the larger lekking males and spend the winter with them.
The faeders are sometimes mounted by independent or satellite males, but are as often "on top" in homosexual mountings as the ruffed males, suggesting that their true identity is known by the other males. Females never mount males. Females often seem to prefer mating with faeders to copulation with the more common males, and those males also copulate with faeders (and vice versa) relatively more often than with females. The homosexual copulations may attract females to the lek, like the presence of satellite males.
Not all mating takes place at the lek, since only a minority of males attend an active lek at any one time. As alternative strategies, males can also directly pursue females ("following") or wait for them as they approach good feeding sites ("intercepting"). Males switch between the three tactics, being more likely to attend a lek when the previous day's copulation rate was high or when fewer females were available after nesting had started; lekking rates are low in cold weather early in the season, when off-lek males spend most of their time feeding.
The level of polyandry in the ruff is the highest known for any avian lekking species and for any shorebird. More than half of female ruffs mate with, and have clutches fertilised by, more than one male, and individual females mate with males of both main behavioural morphs more often than expected by chance. In lekking species, females can choose mates without risking the loss of support from males in nesting and rearing chicks, since the males take no part in raising the brood anyway. In the absence of this cost, if polyandry is advantageous, it would be expected to occur at a higher rate in lekking than among pair-bonded species.
Genetics of variation among males
As indicated above, the ruff has three male forms, which differ in mating behaviour and in appearance: the typical territorial males which have a dark neck ruff, satellite males which have a white neck ruff, and the very rare cryptic males known as "faeders" which have female-like plumage. The behaviour and appearance of each individual male remain constant through its adult life, and are determined by a simple genetic polymorphism. Territorial behaviour and appearance are recessive to satellite behaviour and appearance, while preliminary research results suggest that the faeder characteristics are genetically controlled by a single dominant gene. It was originally thought that the difference between territorial and satellite males was due to a sex-linked genetic factor, but in fact the genetic locus relevant for the mating strategy is located on an autosome, or non-sex chromosome; both sexes can therefore carry the two different forms of the gene, not just the males. The female does not normally show evidence of her genetic type, but when females are given testosterone implants, they display the male behaviour corresponding to their genotype. This testosterone-linked behaviour is unusual in birds, where external sexual characteristics are normally determined by the presence or absence of oestrogen.
In 2016, two studies further pinpointed the responsible region to a chromosomal rearrangement on chromosome 11 covering 4.5 Mb. The scientists were able to show that the first genetic change happened about 3.8 million years ago on the resident chromosome, when a part of it broke off and was reinserted in the reverse orientation. This inversion created the faeder allele. About 500,000 years ago, a rare recombination event between the faeder and resident alleles within the same inverted region produced the satellite allele. The 4.5 Mb inversion covers about 90 genes, one of which is the centromere protein gene CENPN, located exactly at one of the inversion breakpoints. Inactivation of this gene has severe deleterious effects, and pedigree data from a captive ruff colony suggest that the inversion is homozygous lethal. Over the past 3.8 million years, further mutations have accumulated within the inversion, including three deletions ranging from 3.3 to 17.6 kb. Two of these deletions remove evolutionarily highly conserved elements close to two genes, HSD17B2 and SDR42E1, both of which play important roles in the metabolism of steroid hormones. Hormone measurements around mating time showed that whereas residents have a sharp increase in testosterone, faeders and satellites experience only higher levels of androstenedione, an intermediate in testosterone biosynthesis. The authors conclude that one or more of the deletions act as cis-acting regulatory mutations that alter the expression of one or both genes and eventually contribute to the different male phenotypes and behaviours.
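As a rough illustration of what such an inversion does (a toy sketch, not the actual ruff data: the sequence, coordinates, and gene labels below are invented), a segment that breaks off and is reattached in the wrong direction simply reverses the order of everything it contains:

```python
# Toy model of a chromosomal inversion: the broken-off segment is
# reattached in the reverse orientation, flipping its gene order.
def invert_segment(chromosome: str, start: int, end: int) -> str:
    """Return the chromosome with the segment [start, end) reversed."""
    return chromosome[:start] + chromosome[start:end][::-1] + chromosome[end:]

ancestral = "ABCDEFGHIJ"            # stand-in for gene order on chromosome 11
faeder_like = invert_segment(ancestral, 2, 8)
print(faeder_like)                  # -> ABHGFEDCIJ: genes C..H now run backwards
```

Because an inverted segment cannot pair properly with the ancestral arrangement during meiosis, recombination within it is largely suppressed, which is why the faeder and satellite haplotypes have remained intact as distinct variants rather than being shuffled back into the resident form.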
Nesting and survival
The nest is a shallow ground scrape lined with grass leaves and stems, and concealed in marsh plants or tall grass up to from the lek. Nesting is solitary, although several females may lay in the general vicinity of a lek. The eggs are slightly glossy, green or olive, and marked with dark blotches; they are laid from mid-March to early June depending on latitude.
The typical clutch is four eggs, each measuring and weighing , of which 5% is shell. Incubation is by the female alone, and the time to hatching is 20–23 days, with a further 25–28 days to fledging. The precocial chicks have buff and chestnut down, streaked and barred with black, and frosted with white; they feed themselves on a variety of small invertebrates, but are brooded by the female. One brood is raised each year.
Ruffs often show a pronounced inequality in the numbers of each sex. A study of juveniles in Finland found that only 34% were males and 1% were faeders. It appears that females produce a larger proportion of males at the egg stage when they are in poor physical condition. When females are in better condition, any bias in the sex ratio is smaller or absent.
Predators of waders breeding in wet grasslands include birds such as large gulls, common raven, carrion and hooded crows, and great and Arctic skuas; foxes occasionally take waders, and the impact of feral cats and stoats is unknown. Overgrazing can increase predation by making nests easier to find. In captivity, the main causes of chick mortality were stress-related sudden death and twisted neck syndrome. Adults seem to show little evidence of external parasites, but may have significant levels of disease on their tropical wintering grounds, including avian malaria in their inland freshwater habitats, and so they might be expected to invest strongly in their immune systems; however, a 2006 study that analysed the blood of migrating ruffs intercepted in Friesland showed that this bird actually has unexplained low levels of immune responses on at least one measure of resistance. The ruff can breed from its second year, and the average lifespan for birds that have passed the chick stage is about 4.4 years, although a Finnish bird lived to a record 13 years and 11 months.
Feeding
The ruff normally feeds using a steady walk and pecking action, selecting food items by sight, but it will also wade deeply and submerge its head. On saline lakes in East Africa it often swims like a phalarope, picking items off the surface. It will feed at night as well as during the day. It is thought that ruffs use both visual and auditory cues to find prey. When feeding, the ruff frequently raises its back feathers, producing a loose pointed peak on the back; this habit is shared only by the black-tailed godwit.
During the breeding season, the ruff's diet consists almost exclusively of the adults and larvae of terrestrial and aquatic insects such as beetles and flies. On migration and during the winter, the ruff eats insects (including caddis flies, water-beetles, mayflies and grasshoppers), crustaceans, spiders, molluscs, worms, frogs, small fish, and also the seeds of rice and other cereals, sedges, grasses and aquatic plants. Migrating birds in Italy varied their diet according to what was available at each stopover site. Green aquatic plant material, spilt rice and maize, flies and beetles were found, along with varying amounts of grit. On the main wintering grounds in West Africa, rice is a favoured food during the later part of the season as the ricefields dry out.
Just before migration, the ruff increases its body mass at a rate of about 1% a day, much slower than the bar-tailed godwits breeding in Alaska, which fatten at four times that rate. This is thought to be because the godwit cannot use refuelling areas to feed on its trans-Pacific flight, whereas the ruff is able to make regular stops and take in food during overland migration. For the same reason, the ruff does not physiologically shrink its digestive organs to reduce bodyweight before migrating, unlike the godwit.
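To put those fattening rates in perspective, here is a back-of-envelope calculation (the 50% mass-gain target is an assumption chosen only for illustration; the text gives just the daily rates):

```python
# Days needed to raise body mass by 50%, assuming daily compounding,
# at the ruff's ~1%/day versus the godwit's ~4%/day fattening rate.
import math

def days_to_gain(target_ratio: float, daily_rate: float) -> float:
    return math.log(target_ratio) / math.log(1.0 + daily_rate)

print(round(days_to_gain(1.5, 0.01)))  # ruff:   ~41 days at 1%/day
print(round(days_to_gain(1.5, 0.04)))  # godwit: ~10 days at 4%/day
```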
Relationship with humans
Ruffs were formerly trapped for food in England in large numbers; on one occasion, 2,400 were served at Archbishop Neville's enthronement banquet in 1465. The birds were netted while lekking, sometimes being fattened with bread, milk and seed in holding pens before preparation for the table.
...if expedition is required, sugar is added, which will make them in a fortnight's time a lump of fat: they then sell for two Shillings or half-a-crown a piece… The method of killing them is by cutting off their head with a pair of scissars, the quantity of blood that issues is very great, considering the size of the bird. They are dressed like the Woodcock, with their intestines; and, when killed at the critical time, say the Epicures, are reckoned the most delicious of all morsels.
The heavy toll on breeding birds, together with loss of habitat through drainage and collection by nineteenth-century trophy hunters and egg collectors, meant that the species became almost extinct in England by the 1880s, although recolonisation in small numbers has occurred since 1963. The draining of wetlands from the 1800s onwards in southern Sweden has resulted in the ruff's disappearance from many areas there, although it remains common in the north of the country. The use of insecticides and draining of wetlands has led to a decrease in the number of ruff in Denmark since the early 1900s. There are still areas where the ruff and other wetland birds are hunted legally or otherwise for food. A large-scale example is the capture of more than one million waterbirds (including ruffs) in a single year from Lake Chilwa in Malawi.
Although this bird eats rice on the wintering grounds, where it can make up nearly 40% of its diet, it takes mainly waste and residues from cropping and threshing, not harvestable grain. It has sometimes been viewed as a pest, but the deeper water and presence of invertebrate prey in the economically important early winter period means that the wader has little effect on crop yield.
Conservation status
The ruff has a large range, estimated at 1–10 million square kilometres (0.38–3.8 million square miles), and a population of at least 2,000,000 birds. The European population of 200,000–510,000 pairs, occupying more than half of the total breeding range, seems to have declined by up to 30% over ten years, but this may reflect geographical shifts in breeding populations. The ruff was last assessed globally in 2016, and is listed as a least-concern species. In 2021, the IUCN assessed the European population of the ruff as near-threatened, which could later lead to an uplisting of the species.
The most important breeding populations in Europe, those in Russia and Sweden, are stable, and the breeding range in Norway has expanded to the south, but populations have more than halved in Finland, Estonia, Poland, Latvia and the Netherlands. Although the small populations in these countries are of limited overall significance, the decline continues a trend of range contraction that has occurred over the last two centuries. The drop in numbers in Europe has been attributed to drainage, increased fertiliser use, the loss of formerly mown or grazed breeding sites and over-hunting.
Fossils from the Pleistocene suggest that this species bred further south in Europe in the cool periods between glaciations than it does now. Its sensitivity to changing climate as well as to water table levels and the speed of vegetation growth has led to suggestions that its range is affected by global warming, and the ruff might act as an indicator species for monitoring climate change. Potential threats to this species may also include outbreaks of diseases to which it is susceptible such as influenza, botulism and avian malaria.
The ruff is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies; it is allocated to category 2c, that is, populations in need of special attention because they show a "significant long-term decline" over much of their range. This commits signatories to regulate the taking of listed species or their eggs, to establish protected areas to conserve habitats for the listed species, to regulate hunting and to monitor the populations of the birds concerned.
| Biology and health sciences | Charadriiformes | Animals |
350206 | https://en.wikipedia.org/wiki/Sable | Sable | The sable (Martes zibellina) is a species of marten, a small omnivorous mammal primarily inhabiting the forest environments of Russia, from the Ural Mountains throughout Siberia, and northern Mongolia. Its habitat also borders eastern Kazakhstan, China, North Korea and Hokkaido, Japan.
The name "sable" originates from Slavic languages and entered Western European languages through the medieval fur trade. Sables are small, omnivorous mammals that inhabit dense forests in regions like Russia, Mongolia, and China. They are known for their luxurious fur, which ranges from light to dark brown and is softer and silkier than that of American martens. Sables resemble pine martens in size and appearance but have more elongated heads, longer ears, and shorter tails. They are skilled climbers and primarily hunt by sound and scent. Mating occurs between June and August, and litters typically have two or three offspring. Sable fur has been highly valued in the fur trade since the early Middle Ages, and its popularity has driven hunting and conservation efforts. Today, sable fur is often used to decorate clothing items, and the species has no special conservation status according to the IUCN Red List.
Etymology
The name sable appears to be of Slavic origin and entered most Western European languages via the early medieval fur trade. Thus the Russian () and Polish became the German , Dutch ; the French , Spanish , Finnish , Portuguese and Medieval Latin derive from the Italian form (). The English and Medieval Latin word comes from the Old French or .
The term has become a generic description for some black-furred animal breeds, such as sable cats or rabbits, and for the colour black in heraldry.
Description
Males measure in body length, with a tail measuring , and weigh . Females have a body length of , with a tail length of . The winter pelage is longer and more luxurious than the summer coat. Different subspecies display geographic variations of fur colour, which ranges from light to dark brown, with individual coloring being lighter ventrally and darker on the back and legs. Japanese sables (known locally as or ) in particular are marked with black on their legs and feet. Individuals also display a light patch of fur on their throat which may be gray, white, or pale yellow. The fur is softer and silkier than that of American martens. Sables greatly resemble pine martens in size and appearance, but have more elongated heads, longer ears and proportionately shorter tails. Their skulls are similar to those of pine martens, but larger and more robust with more arched zygomatic arches.
Behaviour
Sables inhabit dense forests dominated by spruce, pine, larch, Siberian cedar, and birch in both lowland and mountainous terrain. They defend home territories that may be anything from in size, depending on local terrain and food availability. However, when resources are scarce they may move considerable distances in search of food, with travel rates of per day having been recorded.
Sables live in burrows near riverbanks and in the thickest parts of woods. These burrows are commonly made more secure by being dug among tree roots. They are good climbers of cliffs and trees. They are primarily crepuscular, hunting during the hours of twilight, but become more active in the day during the mating season. Their dens are well hidden, and lined by grass and shed fur, but may be temporary, especially during the winter, when the animal travels more widely in search of prey.
Sables are omnivores, and their diet varies seasonally. In the summer, they eat large numbers of mountain hare and other small mammals. In winter, when they are confined to their retreats by frost and snow, they feed on wild berries, rodents, hares, and even small musk deer. They also hunt ermine, small weasels and birds. Sometimes, sables follow the tracks of wolves and bears and feed on the remains of their kills. They eat gastropods such as slugs, which they rub on the ground in order to remove the mucus. Sables also occasionally eat fish, which they catch with their front paws.
They hunt primarily by sound and scent, and they have an acute sense of hearing. Sables mark their territory with scent produced in glands on the abdomen. Predators of sable include a number of larger carnivores, such as wolves, foxes, wolverines, tigers, lynxes, eagles and large owls.
Reproduction
Mating generally occurs between June and 15 August, though the date varies geographically. When courting, sables run, jump and "rumble" like cats. Males dig metre-long shallow grooves in the snow, frequently accompanied by urination. Males fight each other violently for females. Females enter estrus in spring. Mating can last as long as eight hours. After insemination, the blastocyst does not implant into the uterine wall of the female; instead, implantation occurs eight months later. Although gestation lasts 245 to 298 days, embryonic development requires only 25–30 days. Sables give birth in tree hollows, where they build nests composed of moss, leaves, and dried grass. Litters number one to seven young, although litters of two or three are most common. Males assist females by defending their territories and providing food.
Sables are born with eyes closed and skin covered in a very thin layer of hair. Newborn cubs weigh between and average in length. They open their eyes between 30 and 36 days of age, and leave the nest shortly afterwards. At seven weeks, the young are weaned and given regurgitated food. They reach sexual maturity at the age of two years. They have been reported to live for up to twenty-two years on fur farms, and up to eighteen years in the wild.
Sables can interbreed with pine martens. This has been observed in the wild, where the two species overlap in the Ural Mountains, and is sometimes deliberately encouraged on fur farms. The resulting hybrid, referred to as a kidus, is slightly smaller than a pure sable, with coarser fur, but otherwise similar markings, and a long bushy tail. Kiduses are typically sterile, although there has been one recorded instance of a female kidus successfully breeding with a male pine marten.
Distribution
In Russia, the sable's distribution is largely the result of mass re-introductions involving 19,000 animals between 1940 and 1965. Their range extends northward to the tree line, and south to 55–60° latitude in western Siberia and 42° in the mountainous areas of eastern Asia. Their western distribution encompasses the Ural Mountains, where they are sympatric with the European pine marten. They are also found on Sakhalin.
In Mongolia, sables occur in the Altai Mountains and in the surrounding forests of Lake Hovsgol, the latter being contiguous with the Trans-Baikal boreal forest region from which the most valuable sable pelts come. In China, sables occur in a limited area of the Xinjiang Uygur Autonomous Region. In northeastern China, sables are now limited to the Greater Khingan Range. In eastern Heilongjiang, the persistence of sables is suspected in the Lesser Khingan Range. Sables also occur in Hokkaido and on the Korean peninsula.
Because of the variable appearance of the sable in different geographic localities, there has been some debate over the exact number of subspecies that can be clearly identified. Mammal Species of the World recognises seventeen different subspecies, but other recent scholarly sources have identified anything from seven to thirty.
History of fur use and status
Sable fur has been a highly valued item in the fur trade since the early Middle Ages, and is generally considered to have the most beautiful and richly tinted pelt among martens. Sable fur is unique in that it retains its smoothness in whichever direction it is stroked; the fur of other animals feels rough when stroked against the grain. A wealthy 17th-century Russian diplomat once described the sable as "A beast that the Ancient Greeks and Romans called the Golden Fleece." Russian sables would typically be skinned over the mouth, with no incision being made on the body; the feet would be retained, so as to keep as much fur as possible. Byzantine priests would wear sable for their rituals.
In England, sable fur was held in great esteem. Henry I was presented with a wreath of black sable by the Bishop of Lincoln, for no less than £100, a considerable sum at the time. Sable fur was a favourite of Henry VIII, who once received five sets of sable fur worth £400 from Emperor Charles V. Henry later decreed that sable fur was to be worn only by nobles exceeding the rank of viscount. The Russian conquest of Siberia was largely spurred by the availability of sables there. Ivan Grozny once demanded an annual tribute of 30,000 sable pelts from the newly conquered Kazan Tatars, though they never sent more than a thousand, as Russia at the time was unable to enforce the tribute due to wars with Sweden and Poland. The best skins were obtained in Irkutsk and Kamchatka.
According to the Secret History of the Mongols, when Genghis Khan married his first wife, Börte Ujin, his mother Hoelun received a coat of sable furs from the girl's parents. This was reportedly a very noble gift, serving not only an aesthetic need but also a practical one. Shortly after, when the young Shigi Qutuqu was found wandering a destroyed Tatar camp, he was recognised to be of noble descent because of his sable-lined silk jerkin.
According to Atkinson's Travels in Asiatic Russia, Barguzin, on Lake Baikal, was famed for its sables. The fur of this population is a deep jet black with white tipped hair. Eighty to ninety dollars were sometimes demanded by hunters for a single skin. In 1916, the first nature reserve in the Russian Empire was created—known as the Barguzin Nature Reserve—precisely to preserve and increase the numbers of Barguzin sable. Sable fur would continue to be the most favoured fur in Russia, until the discovery of sea otters in the Kamchatka peninsula, whose fur was considered even more valuable. Sable furs were coveted by the nobility of the Russian Empire, with very few skins ever being found outside the country during that period. Some, however, would be privately obtained by Jewish traders and brought annually to the Leipzig fair. Sometimes, sable hunting was a job given to convicts exiled to Siberia.
Imperial Russian fur companies produced 25,000 skins annually, with nearly ninety percent of the produce being exported to France and Germany. The civic robes of the Lord Mayor and Corporation of London, which were worn on State occasions, were trimmed with sable. As with minks and martens, sables were commonly caught in steel traps. Intensified hunting in Russia in the 19th and early 20th century caused a severe-enough decline in numbers that a five-year ban on hunting was instituted in 1935, followed by a winter-limited licensed hunt. These restrictions together with the development of sable farms have allowed the species to recolonize much of its former range and attain healthy numbers.
The Soviet Union allowed Old Believer communities to continue their traditional way of life on the condition that they hand over all sable skins they produced. The dissolution of the Soviet Union led to an increase of hunting and poaching in the 1990s, in part because wild caught Russian furs are considered the most luxurious and demand the highest prices on the international market. Currently, the species has no special conservation status according to the IUCN Red List, though the isolated Japanese subspecies M. zibellina brachyurus is listed as "data-deficient".
Because of its great expense, sable fur is typically integrated into various clothes fashions: to decorate collars, sleeves, hems and hats (see, for example the shtreimel). The so-called kolinsky sable-hair brushes used for watercolour or oil painting are not manufactured from sable hair, but from that of the Siberian weasel.
| Biology and health sciences | Mustelidae | Animals |
350239 | https://en.wikipedia.org/wiki/Basket | Basket | A basket is a container that is traditionally constructed from stiff fibers, and can be made from a range of materials, including wood splints, runners, and cane. While most baskets are made from plant materials, other materials such as horsehair, baleen, or metal wire can be used. Baskets are generally woven by hand. Some baskets are fitted with a lid, while others are left open on top.
Uses
Baskets serve utilitarian as well as aesthetic purposes. Some baskets are ceremonial, that is religious, in nature. While baskets are usually used for harvesting, storage and transport, specialized baskets are used as sieves for a variety of purposes, including cooking, processing seeds or grains, tossing gambling pieces, rattles, fans, fish traps, and laundry.
History
Prior to the invention of woven baskets, people used tree bark to make simple containers. These containers could be used to transport gathered food and other items, but crumbled after only a few uses. Weaving strips of bark or other plant material to support the bark containers would be the next step, followed by entirely woven baskets. The last innovation appears to be baskets so tightly woven that they could hold water.
Depending on soil conditions, baskets may or may not be preserved in the archaeological record. Sites in the Middle East show that weaving techniques were used to make mats, and possibly also baskets, circa 8000 BCE. Twined baskets date back to 7000 BCE in Oasisamerica. Baskets made with interwoven techniques were common by 3000 BCE.
Baskets were originally designed as multi-purpose vessels to carry and store materials and to keep stray items about the home. The plant life available in a region affects the choice of material, which in turn influences the weaving technique. Rattan and other members of the Arecaceae or palm tree family, the thin grasses of temperate regions, and broad-leaved tropical bromeliads each require a different method of twisting and braiding to be made into a basket. The practice of basket making has evolved into an art. Artistic freedom allows basket makers a wide choice of colors, materials, sizes, patterns, and details.
The carrying of a basket on the head, particularly by rural women, has long been practiced. Representations of this in Ancient Greek art are called Canephorae.
Figurative and literary usage
The phrase "to hell in a handbasket" means to deteriorate rapidly. The origin of this use is unclear. "Basket" is sometimes used as an adjective for a person who is born out of wedlock. This occurs more commonly in British English. "Basket" also refers to a bulge in a man's crotch. The word “basket” is frequently used in the colloquial “don’t put all your eggs in one basket.” In this sense, the basket is a metaphor for a chance at success.
Materials
Basket makers use a wide range of materials, including:
Bamboo
Carbon fiber
Metal
Palm
Plastic
Straw
Wicker (traditionally made of willow, rattan, reed, and bamboo)
Image gallery
| Technology | Containers | null |
350375 | https://en.wikipedia.org/wiki/Spinning%20jenny | Spinning jenny | The spinning jenny is a multi-spindle spinning frame, and was one of the key developments in the industrialisation of textile manufacturing during the early Industrial Revolution. It was invented in 1764–1765 by James Hargreaves in Stanhill, Oswaldtwistle, Lancashire in England.
The device reduced the amount of work needed to produce cloth, with one worker able to operate eight or more spools at once; this grew to 120 as the technology advanced. The yarn produced by the jenny was not very strong; this changed only when Richard Arkwright invented the water-powered water frame. The spinning jenny helped to start the factory system of cotton manufacturing.
History
The spinning jenny was invented by James Hargreaves. He was born in Oswaldtwistle, near Blackburn, around 1720. Blackburn was a town with a population of about 5,000, known for the production of "Blackburn greys," cloths of linen warp and cotton weft initially imported from India. They were usually sent to London to be printed.
At the time, cotton yarn production could not keep up with demand of the textile industry, and Hargreaves spent some time considering how to improve the process. The flying shuttle (John Kay 1733) had increased yarn demand by the weavers by doubling their productivity, and now the spinning jenny could supply that demand by increasing the spinners' productivity even more. The machine produced coarse thread.
Components
The idea was developed by Hargreaves as a metal frame with eight wooden spindles at one end. A set of eight rovings was attached to a beam on that frame. The rovings when extended passed through two horizontal bars of wood that could be clasped together. These bars could be drawn along the top of the frame by the spinner's left hand thus extending the thread. The spinner used his right hand to rapidly turn a wheel which caused all the spindles to revolve, and the thread to be spun. When the bars were returned, the thread wound onto the spindle. A pressing wire (faller) was used to guide the threads onto the right place on the spindle.
The politics of cotton
In the 17th century, England was famous for its woollen and worsted cloth. That industry was centred in the east and south in towns such as Norwich which jealously protected their product. Cotton processing was tiny: in 1701 only of cotton-wool was imported into England, and by 1730 this had fallen to . This was due to commercial legislation (Calico Acts) to protect the woollen industry. Cheap calico prints, imported by the East India Company from Hindustan (as India was then called), became popular. In 1700 an Act of Parliament was passed to prevent the importation of dyed or printed calicoes from India, China or Persia. This caused grey cloth (calico that had not been finished – dyed or printed) to be imported instead, and these were printed in southern England with popular patterns. Lancashire businessmen produced grey cloth with linen warp and cotton weft, which they sent to London to be finished. Cotton-wool imports recovered and by 1720 were almost back to 1701 levels. Again the woollen manufacturers claimed this was taking jobs from workers in Coventry. Another law was passed, to fine anyone caught wearing printed or stained calico; muslins, neckcloths and fustians were exempted. It was this exemption that the Lancashire manufacturers exploited.
The use of coloured cotton weft, with linen warp was permitted in the 1736 Manchester Act. There now was an artificial demand for woven cloth. In 1764, of cotton-wool was imported.
The economics of Northern England in 1750
In England, before canals, railways, and before the turnpikes, the only way to transport goods such as calicos, broadcloth or cotton-wool was by packhorse. Strings of packhorses travelled along a network of bridle paths. A merchant would be away from home most of the year, carrying his takings in cash in his saddlebag. Later a series of chapmen would work for the merchant, taking wares to wholesalers and clients in other towns, and with them would go sample books.
Before 1720, the handloom weaver spent part of each day visiting neighbours buying any weft they had. Carding and spinning might be the only income for that household, or part of it. The family might farm a few acres and card, spin and weave wool and cotton. It took three carders to provide the roving for one spinner, and up to three spinners to provide the yarn for one weaver. The process was continuous, and done by both sexes, from the youngest to the oldest. The weaver would go once a week to the market with his wares and offer them for sale.
A change came about 1740 when fustian masters gave out raw cotton and warps to the weavers and returned to collect the finished cloth (Putting-out system). The weaver organised the carding, spinning and weaving to the master's specification. The master then dyed or printed the grey cloth, and took it to shopkeepers. Ten years later this had changed and the fustian masters were middle men, who collected the grey cloth and took it to market in Manchester where it was sold to merchants who organised the finishing.
To hand weave a piece of eighteenpenny weft took 14 days and paid 36 shillings in all. Of this nine shillings was paid for spinning, and nine for carding. So by 1750, a rudimentary manufacturing system feeding into a marketing system emerged.
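The split implied by those figures (the weaving share is inferred as the remainder; the source states only the total and the spinning and carding payments) works out as follows:

```python
# Implied division of the 36 shillings paid for one 14-day piece.
total, spinning, carding = 36, 9, 9
weaving = total - spinning - carding       # 18 shillings left for weaving
print(weaving, round(weaving / 14, 2))     # -> 18 shillings, ~1.29 per day
```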
In 1733, John Kay started to improve the loom. He improved the reed, and invented the raceboard, the shuttleboxes and the picker, which together allowed one weaver to double his output. This invention is commonly called the flying shuttle. It met with violent opposition, and Kay fled from Lancashire to Leeds. Though the workers thought it a threat to their jobs, it was adopted, and the pressure was on to speed up carding and spinning.
The shortage of spinning capacity to feed the more efficient looms provided the motivation to develop more productive spinning techniques such as the spinning jenny, and triggered the start of the Industrial Revolution.
Success
Hargreaves kept the machine secret for some time, but produced a number for his own growing industry. The price of yarn fell, angering the large spinning community in Blackburn. Eventually they broke into his house and smashed his machines, forcing him to flee to Nottingham in 1768. This was a centre for the hosiery industry, and knitted silks, cottons and wool. There he set up shop producing jennies in secret for one Mr Shipley, with the assistance of a joiner named Thomas James. He and James set up a textile business in Mill Street. On 12 July 1770, he took out a patent (no. 962) on his invention, the Spinning Jenny—a machine for spinning, drawing and twisting cotton.
By this time a number of spinners in Lancashire were using copies of the machine, and Hargreaves sent notice that he was taking legal action against them. The manufacturers met, and offered Hargreaves £3,000. He at first demanded £7,000, and stood out for £4,000, but the case eventually fell apart when it was learned he had sold several in the past.
The spinning jenny succeeded because it held more than one ball of yarn, making more yarn in a shorter time and reducing the overall cost. The spinning jenny would not have been such a success if the flying shuttle had not been invented and installed in textile factories. Its success was constrained, however, in that it required the rovings to be prepared on a wheel, and this in turn was limited by the need to card by hand. It continued in common use in the cotton and fustian industry until about 1810, and was eventually superseded by the spinning mule. The jenny was adapted for the process of slubbing, being the basis of the Slubbing Billy.
Origin and myth
The most common story told about the invention of the device, and the origin of the jenny in the machine's name, is that one of Hargreaves's daughters (or his wife), named Jenny, knocked over one of their spinning wheels. The device kept working, with the spindle now pointing upright, and Hargreaves realised there was no particular reason the spindles had to be horizontal, as they always had been; he could place them vertically in a row.
The name is variously said to derive from this tale. The Registers of Church Kirk show that Hargreaves had several daughters, but none named Jenny (nor was his wife). A more likely explanation of the name is that "jenny" was an abbreviation of "engine".
Thomas Highs of Leigh has claimed to be the inventor and the story is repeated using his wife's name.
Another myth has Thomas Earnshaw inventing a spinning device of a similar description – but destroying it after fearing he might be taking bread out of the mouths of the poor.
| Technology | Spinning | null |
350708 | https://en.wikipedia.org/wiki/MultiMediaCard | MultiMediaCard | MultiMediaCard, officially abbreviated as MMC, is a memory card standard used for solid-state storage. Unveiled in 1997 by SanDisk and Siemens, MMC is based on a surface-contact low-pin-count serial interface using a single memory stack substrate assembly, and is therefore much smaller than earlier systems based on high-pin-count parallel interfaces using traditional surface-mount assembly such as CompactFlash. Both products were initially introduced using SanDisk NOR-based flash technology.
MMC is about the size of a postage stamp: 32 mm × 24 mm × 1.4 mm. MMC originally used a 1-bit serial interface, but newer versions of the specification allow transfers of 4 or 8 bits at a time. MMC can be used in many devices that can use Secure Digital (SD) cards. MMCs are available in sizes up to 16 gigabytes (GB).
They are used in almost every context in which memory cards are used, like cellular phones, digital audio players, digital cameras, and PDAs. Typically, an MMC operates as a storage medium for devices, in a form that can easily be removed for access by a PC via a connected MMC reader.
eMMC (embedded MMC) is a small MMC chip used as embedded non-volatile memory that is normally soldered on printed circuit boards, though pluggable eMMC modules are used on some devices (e.g. Orange Pi and ODROID).
History
The latest version of the eMMC standard (JESD84-B51) by JEDEC is version 5.1A, released January 2019, with speeds (250 MB/s read, 125 MB/s write) rivaling discrete SATA-based SSDs (500 MB/s).
On 23 September 2008, the MultiMediaCard Association (MMCA) turned over all MMC specifications to the JEDEC organization, including the embedded MMC (eMMC), SecureMMC, and miCARD assets. JEDEC is an organization devoted to standards for the solid-state industry.
The latest eMMC specifications can be requested from JEDEC, free of charge for JEDEC members. Older versions of the standard are freely available, but some optional enhancements to the standard, such as the MiCard and SecureMMC specifications, must be purchased separately.
While there is no royalty charged for devices to host an MMC or eMMC, a royalty may be necessary in order to manufacture the cards themselves.
A highly detailed datasheet that contains essential information for writing an MMC host driver is available online.
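For a flavour of what such a host driver does, the following is a minimal sketch of the card-identification sequence defined in the MMC specification (CMD0 through CMD3). The `send_cmd` function is a hypothetical stand-in for the host controller's command transport, not a real library call, and error handling is omitted:

```python
# Sketch of MMC card identification, assuming a hypothetical send_cmd(index,
# argument) transport that issues a command and returns the card's response.
GO_IDLE_STATE, SEND_OP_COND = 0, 1        # CMD0, CMD1
ALL_SEND_CID, SET_RELATIVE_ADDR = 2, 3    # CMD2, CMD3

def mmc_identify(send_cmd):
    send_cmd(GO_IDLE_STATE, 0)                     # reset the card
    while True:                                    # poll until power-up done
        ocr = send_cmd(SEND_OP_COND, 0x00FF8000)   # offer a 2.7-3.6 V window
        if ocr & 0x80000000:                       # OCR bit 31 set -> ready
            break
    cid = send_cmd(ALL_SEND_CID, 0)                # fetch the 128-bit card ID
    send_cmd(SET_RELATIVE_ADDR, 1 << 16)           # host assigns RCA = 1
    return cid
```

Unlike SD, where the card proposes its own relative address, the MMC host assigns the relative card address (RCA) itself via CMD3, which is one of the initialization differences mentioned later in this article.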
Variants
RS-MMC
In 2004, the Reduced-Size MultiMediaCard (RS-MMC) was introduced as a smaller form factor of the MMC, about half the size: 24 mm × 18 mm × 1.4 mm. The RS-MMC uses a simple mechanical adapter to elongate the card so it can be used in any MMC (or SD) slot. RS-MMCs are currently available in sizes up to and including 2 GB.
The modern continuation of an RS-MMC is commonly known as MiniDrive (MD-MMC). A MiniDrive is generally a microSD card adapter in the RS-MMC form factor. This allows a user to take advantage of the wider range of modern MMCs available to exceed the historic 2 GB limitations of older chip technology.
Implementations of RS-MMCs include Nokia and Siemens, who used RS-MMC in their Series 60 Symbian smartphones, the Nokia 770 Internet Tablet, and generations 65 and 75 (Siemens). However, since 2006, all of Nokia's new devices with card slots have used miniSD or microSD cards, with the company dropping support for the MMC standard in its products. While Siemens exited the mobile phone business completely in 2006, the company continues to use MMC for some PLC storage leveraging MD-MMC advances.
DV-MMC
The Dual-Voltage MultimediaCard (DV-MMC) was one of the first changes in MMC. These cards can operate at 1.8 V in addition to 3.3 V. Running at lower voltages reduces the card's energy consumption, which is important in mobile devices. However, simple dual-voltage parts quickly went out of production in favor of MMCplus and MMCmobile, which offer capabilities in addition to dual-voltage capability.
MMCplus and MMCmobile
Version 4.x of the MMC standard, introduced in 2005, brought two significant changes to compete with SD cards: (1) the ability to run at higher clock speeds (26 MHz and 52 MHz) than the original MMC (20 MHz) or SD (25 MHz, 50 MHz), and (2) a four- or eight-bit-wide data bus.
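The raw bus bandwidth implied by those clock rates and bus widths is easy to work out (real-world transfer rates are lower because of command and protocol overhead):

```python
# Peak bus throughput: clock rate x bus width / 8 bits per byte.
def peak_mb_per_s(clock_hz: float, bus_bits: int) -> float:
    return clock_hz * bus_bits / 8 / 1e6

print(peak_mb_per_s(20e6, 1))   # original MMC, 1-bit:  2.5 MB/s
print(peak_mb_per_s(52e6, 4))   # MMCplus, 4-bit bus:  26.0 MB/s
print(peak_mb_per_s(52e6, 8))   # MMCplus, 8-bit bus:  52.0 MB/s
```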
Version 4.x full-size cards and reduced-size cards can be marketed as MMCplus and MMCmobile, respectively.
Version 4.x cards are fully backward compatible with existing readers but require updated hardware and software to use their new capabilities. Even though the four-bit-wide bus and high-speed modes of operation are deliberately electrically compatible with SD, the initialization protocol is different, so firmware and software updates are required to use these features in an SD reader.
MMCmicro
MMCmicro is a smaller version of MMC. With dimensions of 14 mm × 12 mm × 1.1 mm, it is smaller and thinner than RS-MMC. Like MMCmobile, MMCmicro allows dual voltage, is backward compatible with MMC, and can be used in full-size MMC and SD slots with a mechanical adapter. MMCmicro cards have the high-speed and four-bit-bus features of the 4.x spec, but not the eight-bit bus, due to the absence of the extra pins.
This variant was formerly known as S-card when introduced by Samsung on 13 December 2004. It was later adapted and introduced in 2005 by the MultiMediaCard Association (MMCA) as the third form factor memory card in the MultiMediaCard family.
MMCmicro appears very similar to microSD, but the two formats are not physically compatible and have incompatible pinouts.
MiCard
Announced on 2 June 2007, the MiCard is a backward-compatible extension of the MMC standard with a theoretical maximum capacity of 2048 GB (2 TB). The card is composed of two detachable parts, much like a microSD card with an SD adapter. The small memory card fits directly into a USB port and has MMC-compatible electrical contacts. With an included electromechanical adapter, it also fits traditional MMC and SD card readers. To date, only one manufacturer (Pretec) has produced cards in this format.
The MiCard was developed by the Industrial Technology Research Institute in Taiwan. At the time of the announcement, twelve Taiwanese companies (including ADATA Technology, Asustek, BenQ, Carry Computer Eng. Co., C-One Technology, DBTel, Power Digital Card Co., and RiCHIP) had signed on to manufacture the new memory card. However, as of June 2011, none of the listed companies had released any such cards, nor had any further announcements been made about plans for the format.
The card was announced to become available in the third quarter of 2007. It was expected to save the 12 Taiwanese companies that planned to manufacture the product and related hardware up to US$40 million in licensing fees that would otherwise be paid to the owners of competing flash memory formats. The initial card was to have a capacity of 8 GB, while the standard would allow sizes up to 2048 GB. It was stated to have a data transfer speed of 480 Mbit/s (60 MB/s), with plans to increase speeds over time.
eMMC
The embedded MMC (eMMC) architecture puts the MMC components (flash memory, buffer, and controller) into a small ball grid array (BGA) IC package for use on circuit boards as an embedded non-volatile memory system. This differs noticeably from other versions of MMC: it is not a user-removable card but a permanent attachment to the printed circuit board (PCB), so in the event of a problem with either the memory or its controller, the eMMC must be repaired or replaced at the board level. In eMMC, the host system simply reads and writes data to and from logical block addresses; the eMMC's controller hardware and firmware relieve the host of the burden of managing the raw flash by performing error correction and data management internally. eMMC exists in 100-, 153-, and 169-ball packages and is based on an 8-bit parallel interface.
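A sketch of what reading from a logical block address looks like from the host's side, assuming a sector-addressed (larger than 2 GB) device and the same hypothetical HAL primitives as in the earlier sketches:

    /* Sketch: reading one 512-byte logical block from eMMC.
     * mmc_send_cmd() and mmc_read_fifo() are hypothetical HAL
     * primitives for issuing a command and draining read data. */
    #include <stdint.h>

    extern uint32_t mmc_send_cmd(uint8_t index, uint32_t arg);
    extern void     mmc_read_fifo(uint8_t *buf, uint32_t len);

    #define BLOCK_LEN 512u

    static void emmc_read_block(uint32_t lba, uint8_t buf[BLOCK_LEN])
    {
        mmc_send_cmd(16, BLOCK_LEN);   /* CMD16: SET_BLOCKLEN            */
        mmc_send_cmd(17, lba);         /* CMD17: READ_SINGLE_BLOCK; on a */
        mmc_read_fifo(buf, BLOCK_LEN); /* sector-addressed device the    */
    }                                  /* argument is the LBA itself     */

Everything below that flat block-address space, including error correction, wear management, and bad-block handling, happens inside the eMMC package.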
Almost all mobile phones and tablets used this form of flash for main storage until 2016, when Universal Flash Storage (UFS) began to take over the market. As of 2023, however, eMMC is still used in many consumer applications, including lower-end smartphones; Kioxia, for example, has introduced 64 GB and 128 GB eMMC 5.1 modules based on modern 3D NAND flash, scheduled for mass production in 2024.
eMMC uses NAND flash internally and, unlike removable MMC cards, does not support the SPI-bus protocol.
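For contrast, the SPI mode of removable MMC cards has a very simple byte-oriented handshake, which made those cards popular with microcontroller designers. A minimal sketch of that SPI-mode initialization, with spi_xfer() and spi_cs() as hypothetical one-byte-transfer and chip-select primitives:

    /* Sketch: SPI-mode initialization of a removable MMC card. */
    #include <stdint.h>

    extern uint8_t spi_xfer(uint8_t out);  /* hypothetical: clock 1 byte */
    extern void    spi_cs(int assert);     /* hypothetical: chip select  */

    static uint8_t mmc_spi_cmd(uint8_t index, uint32_t arg, uint8_t crc)
    {
        spi_xfer(0x40 | index);            /* start + command index      */
        for (int s = 24; s >= 0; s -= 8)
            spi_xfer((uint8_t)(arg >> s)); /* 32-bit argument, MSB first */
        spi_xfer(crc);                     /* only CMD0 needs a real CRC */

        uint8_t r1;                        /* R1 responses have bit 7    */
        do { r1 = spi_xfer(0xFF); } while (r1 & 0x80); /* clear          */
        return r1;
    }

    static void mmc_spi_init(void)
    {
        spi_cs(0);                              /* CS high: 74+ dummy clocks */
        for (int i = 0; i < 10; i++) spi_xfer(0xFF);

        spi_cs(1);
        mmc_spi_cmd(0, 0, 0x95);                /* CMD0 -> idle state        */
        while (mmc_spi_cmd(1, 0, 0x01) != 0x00) /* CMD1 until card is ready  */
            ;                                   /* (SD cards use ACMD41)     */
        spi_cs(0);
    }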
Uses
Modern computers, both laptops and desktops, often have SD slots that can additionally read MMCs if the operating system supports them. Since the introduction of SD cards, few companies have built MMC slots into their devices (an exception was some mobile devices, such as the 2004 Nokia 9300 communicator, where the smaller size of the MMC was a benefit), but the slightly thinner, pin-compatible MMCs can be used in almost any device that accepts SD cards, provided the device's software or firmware supports them.
While few companies build MMC slots into devices today, since SD cards dominate the memory card market, the embedded MMC (eMMC) is still widely used in consumer electronics as a primary means of integrated storage and as a boot device in portable devices. eMMC provides a low-cost flash-memory system with a built-in controller that can reside inside an Android or Windows phone or a low-cost PC, and it can appear to its host as a bootable device, in lieu of a more expensive form of solid-state storage such as a traditional solid-state drive.
Similar formats
In 2004, a group of companies—including Seagate and Hitachi—introduced an interface called CE-ATA for small form factor hard disk drives. This interface was electrically and physically compatible with the MMC specification. However, support for further development of the standard ended in 2008.
The game card format used on the PlayStation Vita was found to be based on the MMC standard, but with a different pinout and support for custom initialization commands as well as copy protection.
| Technology | Non-volatile memory | null |
350915 | https://en.wikipedia.org/wiki/Chromaticity | Chromaticity | Chromaticity is an objective specification of the quality of a color regardless of its luminance. Chromaticity consists of two independent parameters, often specified as hue (h) and colorfulness (s), where the latter is alternatively called saturation, chroma, intensity, or excitation purity. This number of parameters follows from the trichromacy of vision of most humans, which is assumed by most models in color science.
Quantitative description
In color science, the white point of an illuminant or of a display is a neutral reference characterized by a chromaticity; all other chromaticities may be defined in relation to this reference using polar coordinates. The hue is the angular component, and the purity is the radial component, normalized by the maximum radius for that hue.
Purity is roughly equivalent to the term saturation in the HSV color model. The property hue is used here as in general color theory and in specific color models such as the HSL and HSV color spaces, though it is more perceptually uniform in color models such as Munsell, CIELAB, or CIECAM02.
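Written out, and using illustrative notation rather than any one source's, these polar components can be defined relative to the white point W = (x_n, y_n) for a chromaticity C = (x, y):

    h = atan2(y − y_n, x − x_n)
    p = d(C, W) / d(B, W)

where B is the point at which the ray from W through C meets the boundary of the chromaticity diagram, and d(·, ·) is Euclidean distance; p so defined is the excitation purity.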
Some color spaces separate the three dimensions of color into one luminance dimension and a pair of chromaticity dimensions. For example, the white point of an sRGB display is an (x, y) chromaticity of (0.3127, 0.3290), where x and y are the coordinates used in the xyY space.
These pairs determine a chromaticity as affine coordinates on a triangle in a 2D space, which contains all possible chromaticities. These x and y coordinates are used because of simplicity of expression in CIE 1931 (see below) and have no inherent advantage. Other coordinate systems on the same X-Y-Z triangle, or other color triangles, can be used.
On the other hand, some color spaces such as RGB and XYZ do not separate out chromaticity; instead, chromaticity is defined by a mapping that normalizes out intensity, and its coordinates, such as r and g, or x and y, can be calculated through the division operation, such as x = X / (X + Y + Z), and so on.
The xyY space is a cross between the CIE XYZ and its normalized chromaticity coordinates xyz, such that the luminance Y is preserved and augmented with just the required two chromaticity dimensions.
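Concretely, using the standard CIE definitions, the normalization is

    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    z = Z / (X + Y + Z) = 1 − x − y

and, for y ≠ 0, the inverse mapping from xyY back to XYZ is

    X = (x / y) · Y
    Z = ((1 − x − y) / y) · Y

Because z is determined by x and y, the pair (x, y) together with the luminance Y fully specifies the color, which is exactly the xyY construction.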
| Physical sciences | Basics | Physics |