Bead
A bead is a small, decorative object that is formed in a variety of shapes and sizes of a material such as stone, bone, shell, glass, plastic, wood, or pearl and with a small hole for threading or stringing. Beads range in size from under 1 millimeter (0.039 in) to over 1 centimeter (0.39 in) in diameter.
Beads represent some of the earliest forms of jewellery, with a pair of beads made from Nassarius sea snail shells dating to approximately 100,000 years ago thought to be the earliest known example.[1][2] Beadwork is the art or craft of making things with beads. Beads can be woven together with specialized thread, strung onto thread or soft, flexible wire, or adhered to a surface (e.g. fabric, clay).
Types of beads
Beads can be divided into several types of overlapping categories based on different criteria such as the materials from which they are made, the process used in their manufacturing, the place or period of origin, the patterns on their surface, or their general shape. In some cases, such as millefiori and cloisonné beads, multiple categories may overlap in an interdependent fashion.
Components
Beads can be made of many different materials. The earliest beads were made of a variety of natural materials which, after they were gathered, could be readily drilled and shaped. As humans became capable of obtaining and working with more difficult materials, those materials were added to the range of available substances.
Beads have been part of many cultures throughout history, each making them from different materials and incorporating them into handmade objects. Beads have varied in color, shape, and form, in the materials used, and in whether they carried symbolic meaning.
In modern manufacturing, the most common bead materials are wood, plastic, glass, metal, and stone.
Natural materials
Beads are still made from many naturally occurring materials, both organic (i.e., of animal- or plant-based origin) and inorganic (purely mineral origin). However, some of these materials now routinely undergo some extra processing beyond mere shaping and drilling such as color enhancement via dyes or irradiation.
The natural organics include bone, coral, horn, ivory, seeds (such as tagua nuts), animal shells, and wood. For most of human history, pearls were the ultimate precious beads of natural origin because of their rarity; the modern pearl-culturing process has made them far more common. Amber and jet are also of natural organic origin although both are the result of partial fossilization.
The natural inorganics include various types of stones, ranging from gemstones to common minerals, and metals. Of the latter, only a few precious metals occur in pure forms, but other purified base metals may also be placed in this category, along with certain naturally occurring alloys such as electrum.
Synthetic materials
The oldest-surviving synthetic materials used for bead making have generally been ceramics: pottery and glass. Beads were also made from ancient alloys such as bronze and brass, but as those were more vulnerable to oxidation they have generally been less well-preserved at archaeological sites.
Many different subtypes of glass are now used for beadmaking, some of which have their own component-specific names. Lead crystal beads have a high percentage of lead oxide in the glass formula, increasing the refractive index. Most of the other named glass types have formulations and patterns inseparable from their manufacturing process.
Small, colorful, fusible plastic beads can be placed on a solid plastic-backed peg array to form designs and then melted together with a clothes iron; alternatively, they can be strung into necklaces and bracelets or woven into keychains. Fusible beads come in many colors and degrees of transparency/opacity, including varieties that glow in the dark or have internal glitter; peg boards come in various shapes and several geometric patterns.
Plastic toy beads, made by chopping plastic tubes into short pieces, were introduced in 1958 by Munkplast AB in Munka-Ljungby, Sweden. Known as Indian beads, they were originally sewn together to form ribbons. The pegboard for bead designs was invented in the early 1960s (patented 1962, patent granted 1967) by Gunnar Knutsson in Vällingby, Sweden, as a therapy for elderly homes; the pegboard later gained popularity as a toy for children.[1] The bead designs were glued to cardboard or Masonite boards and used as trivets. Later, when the beads were made of polyethylene, it became possible to fuse them with a flat iron.
Hama beads come in three sizes: mini (diameter 2 mm (0.079 in)), midi (5 mm (0.20 in)) and maxi (10 mm (0.39 in)).[2] Perler beads come in two sizes called classic (5 mm) and biggie (10 mm). Pyssla beads (by IKEA) come in only one size (5 mm).
Manufacturing
Modern mass-produced beads are generally shaped by carving or casting, depending on the material and desired effect. In some cases, more specialized metalworking or glassworking techniques may be employed, or a combination of multiple techniques and materials may be used such as in cloisonné.
Beads come in many shapes and sizes and are valued for qualities such as color, shine, pattern, or the exotic materials from which they are made. To be strung, a bead must have a hole through its center, which can be made using several different techniques. Archaeologists working at Blombos Cave in South Africa discovered forty-one marine shell (Nassarius kraussianus) beads estimated to have been made about seventy-five thousand years ago.
Glassworking
Most glass beads are pressed glass, mass-produced by preparing a molten batch of glass of the desired color and pouring it into molds to form the desired shape. This is also true of most plastic beads.
A smaller and more expensive subset of glass and lead crystal beads are cut into precise faceted shapes on an individual basis. This was once done by hand but has largely been taken over by precision machinery.
"Fire-polished" faceted beads are a less expensive alternative to hand-cut faceted glass or crystal. They derive their name from the second half of a two-part process: first, the glass batch is poured into round bead molds, then they are faceted with a grinding wheel. The faceted beads are then poured onto a tray and briefly reheated just long enough to melt the surface, "polishing" out any minor surface irregularities from the grinding wheel.
Specialized glass techniques and types
There are several specialized glassworking techniques that create a distinctive appearance throughout the body of the resulting beads, which are then primarily referred to by the glass type.
If the glass batch is used to create a large massive block instead of pre-shaping it as it cools, the result may then be carved into smaller items in the same manner as stone. Conversely, glass artisans may make beads by lampworking the glass on an individual basis; once formed, the beads undergo little or no further shaping after the layers have been properly annealed.
Most of these glass subtypes are some form of fused glass, although goldstone is created by controlling the reductive atmosphere and cooling conditions of the glass batch rather than by fusing separate components together.
Dichroic glass beads incorporate a semitransparent microlayer of metal between two or more layers of glass. Fibre optic glass beads have an eye-catching chatoyant effect across the grain.
There are also several ways to fuse many small glass canes together into a multicolored pattern, resulting in millefiori beads or chevron beads (sometimes called "trade beads"). "Furnace glass" beads encase a multicolored core in a transparent exterior layer which is then annealed in a furnace.
More economically, millefiori beads can also be made by limiting the patterning process to long, narrow canes or rods known as murrine. Thin cross-sections, or "decals", can then be cut from the murrine and fused into the surface of a plain glass bead.
Shapes
Beads can be made in a variety of shapes, including the following, as well as tubular and oval-shaped beads.
Round
Round beads are the most common shape strung on wire or thread to create necklaces and bracelets; they lie together well and are pleasing to the eye. Round beads can be made of glass, stone, ceramic, metal, or wood.
Square or cubed
Square beads can be used as spacers to enhance a necklace design, though a necklace can also be strung entirely with square beads. Necklaces with square beads are used as rosary or prayer necklaces, and wooden or shell versions are made for beachwear.
Hair pipe beads
Elk rib bones were the original material for the long, tubular hair pipe beads. Today these beads are commonly made of bison and water buffalo bones and are popular for breastplates and chokers among Plains Indians. Black variations of these beads are made from the animals' horns.
Seed beads
Seed beads are uniformly shaped spheroidal or tube-shaped beads ranging in size from under a millimetre to several millimetres. "Seed bead" is a generic term for any small bead. Usually rounded in shape, seed beads are most commonly used for loom and off-loom bead weaving.
Place or period of origin
African trade beads or slave beads may be antique beads that were manufactured in Europe and used for trade during the colonial period, such as chevron beads; or they may have been made in West Africa by and for Africans, such as Mauritanian Kiffa beads, Ghanaian and Nigerian powder glass beads, or African-made brass beads. Archaeologists have documented that as recently as the late-nineteenth century beads manufactured in Europe continued to accompany exploration of Africa using Indigenous routes into the interior.
Austrian crystal is a generic term for cut lead-crystal beads, based on the location and prestige of the Swarovski firm.
Czech glass beads are made in the Czech Republic, in particular in the area around Jablonec nad Nisou. Production of glass beads in the area dates back to the 14th century, though production was depressed under communist rule. Because of this long tradition, their workmanship and quality have an excellent reputation.
Islamic glass beads have been made across a wide geographical and historical range of Islamic cultures. Manufactured and used from medieval Spain and North Africa in the west to China in the east, they can be identified by recognizable features, including styles and techniques.
Vintage beads, in the collectibles and antiques market, are items that are at least 25 years old. Vintage beads are available in materials that include Lucite, plastic, crystal, metal and glass.
Because beads are so widespread and appealing, many people are drawn to them and want to learn more about them; archaeologists, beadmakers, collectors and others study and research beads.
Miscellaneous ethnic beads
Tibetan Dzi beads and Rudraksha beads are used to make Buddhist and Hindu rosaries (malas). Magatama are traditional Japanese beads, and cinnabar was often used for making beads in China. Wampum are cylindrical white or purple beads made from quahog or North Atlantic channeled whelk shells by northeastern Native American tribes, such as the Wampanoag and Shinnecock. Job's tears are seed beads popular among southeastern Native American tribes. Heishe are beads made of shells or stones by the Kewa Pueblo people of New Mexico.
Symbolic meaning of beads
In many parts of the world, beads are used for symbolic purposes, for example:
use for prayer or devotion - e.g. rosary beads for Roman Catholics and many other Christians, misbaha for Shia and many other Muslims, and japamala/nenju for Hindus, Buddhists, Jains, some Sikhs, Confucians, Taoists/Daoists, and Shinto practitioners.
use for anti-tension devices, e.g. Greek komboloi, or worry beads.
use as currency e.g. Aggrey beads from Ghana.
use for gaming e.g. oware beads for mancala.
Religions around the world, including Hinduism, Buddhism, Christianity, and many others, have made use of beads, which carried letters, patterns, and ideas and were used to count prayers. In medieval times, prayer beads were in common use across social classes, fashioned from materials such as stone, metal, and bone.
History
Beads are thought to be one of the earliest forms of trade between members of the human race. It is believed that bead trading was one of the reasons why humans developed language.[1] Beads are said to have been used and traded for most of human history. The oldest beads found to date were at Blombos Cave, about 72,000 years old, and at Ksar Akil[2] in Lebanon, about 40,000 years old.
Surface patterns
After shaping, glass and crystal beads can have their surface appearance enhanced by etching a translucent frosted layer, applying an additional color layer, or both. Aurora Borealis, or AB, is a surface coating that diffuses light into a rainbow. Other surface coatings are vitrail, moonlight, dorado, satin, star shine, and heliotrope.
Faux beads are beads that are made to look like a more expensive original material, especially in the case of fake pearls and simulated rocks, minerals and gemstones. Precious metals and ivory are also imitated.
Tagua nuts from South America are used as an ivory substitute since the natural ivory trade has been restricted worldwide.
See also
Fly tying (spherical brass, tungsten, and glass beads are often used in fly tying)
Glass beadmaking
Jewelry design
Mardi Gras beads
Murano beads
Pearl
Ultraviolet-sensitive bead
References
Further reading
Beck, Horace (1928). "Classification and Nomenclature of Beads and Pendants." Archaeologia 77. (Reprinted by Shumway Publishers, York, PA, 1981.)
Dubin, Lois Sherr. North American Indian Jewelry and Adornment: From Prehistory to the Present. New York: Harry N. Abrams, 1999: 170–171.
Dubin, Lois Sherr. The History of Beads: From 100,000 B.C. to the Present, Revised and Expanded Edition. New York: Harry N. Abrams, 2009.
Bird
Birds are a group of warm-blooded vertebrates constituting the class Aves, characterised by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton. Birds live worldwide and range in size from the bee hummingbird to the common ostrich. There are over 11,000 living species and they are split into 44 orders. More than half are passerine or "perching" birds. Birds have wings whose development varies according to species; the only known groups without wings are the extinct moa and elephant birds. Wings, which are modified forelimbs, gave birds the ability to fly, although further evolution has led to the loss of flight in some birds, including ratites, penguins, and diverse endemic island species. The digestive and respiratory systems of birds are also uniquely adapted for flight. Some bird species of aquatic environments, particularly seabirds and some waterbirds, have further evolved for swimming. The study of birds is called ornithology.
Birds are feathered theropod dinosaurs and constitute the only known living dinosaurs. Likewise, birds are considered reptiles in the modern cladistic sense of the term, and their closest living relatives are the crocodilians. Birds are descendants of the primitive avialans (whose members include Archaeopteryx) which first appeared during the Late Jurassic. According to some estimates, modern birds (Neornithes) evolved in the Late Cretaceous or between the Early and Late Cretaceous (100 Ma) and diversified dramatically around the time of the Cretaceous–Paleogene extinction event 66 million years ago, which killed off the pterosaurs and all non-ornithuran dinosaurs.
Many social species preserve knowledge across generations (culture). Birds are social, communicating with visual signals, calls, and songs, and participating in such behaviour as cooperative breeding and hunting, flocking, and mobbing of predators. The vast majority of bird species are socially (but not necessarily sexually) monogamous, usually for one breeding season at a time, sometimes for years, and rarely for life. Other species have breeding systems that are polygynous (one male with many females) or, rarely, polyandrous (one female with many males). Birds produce offspring by laying eggs which are fertilised through sexual reproduction. They are usually laid in a nest and incubated by the parents. Most birds have an extended period of parental care after hatching.
Many species of birds are economically important as food for human consumption and raw material in manufacturing, with domesticated and undomesticated birds being important sources of eggs, meat, and feathers. Songbirds, parrots, and other species are popular as pets. Guano (bird excrement) is harvested for use as a fertiliser. Birds figure throughout human culture. About 120 to 130 species have become extinct due to human activity since the 17th century, and hundreds more before then. Human activity threatens about 1,200 bird species with extinction, though efforts are underway to protect them. Recreational birdwatching is an important part of the ecotourism industry.
Evolution and classification
The first classification of birds was developed by Francis Willughby and John Ray in their 1676 volume Ornithologiae.
Carl Linnaeus modified that work in 1758 to devise the taxonomic classification system currently in use. Birds are categorised as the biological class Aves in Linnaean taxonomy. Phylogenetic taxonomy places Aves in the clade Theropoda as an infraclass or more recently a subclass.
Definition
Aves and a sister group, the order Crocodilia, contain the only living representatives of the reptile clade Archosauria. During the late 1990s, Aves was most commonly defined phylogenetically as all descendants of the most recent common ancestor of modern birds and Archaeopteryx lithographica. However, an earlier definition proposed by Jacques Gauthier gained wide currency in the 21st century, and is used by many scientists including adherents to the PhyloCode. Gauthier defined Aves to include only the crown group of the set of modern birds. This was done by excluding most groups known only from fossils, and assigning them, instead, to the broader group Avialae, on the principle that a clade based on extant species should be limited to those extant species and their closest extinct relatives.
Gauthier and de Queiroz identified four different definitions for the same biological name "Aves", which is a problem. The authors proposed to reserve the term Aves only for the crown group consisting of the last common ancestor of all living birds and all of its descendants, which corresponds to meaning number 4 below. They assigned other names to the other groups.
Aves can mean all archosaurs closer to birds than to crocodiles (alternately Avemetatarsalia)
Aves can mean those advanced archosaurs with feathers (alternately Avifilopluma)
Aves can mean those feathered dinosaurs that fly (alternately Avialae)
Aves can mean the last common ancestor of all the currently living birds and all of its descendants (a "crown group", in this sense synonymous with Neornithes)
Under the fourth definition Archaeopteryx, traditionally considered one of the earliest members of Aves, is removed from this group, becoming a non-avian dinosaur instead. These proposals have been adopted by many researchers in the field of palaeontology and bird evolution, though the exact definitions applied have been inconsistent. Avialae, initially proposed to replace the traditional fossil content of Aves, is often used synonymously with the vernacular term "bird" by these researchers.
Most researchers define Avialae as branch-based clade, though definitions vary. Many authors have used a definition similar to "all theropods closer to birds than to Deinonychus", with Troodon being sometimes added as a second external specifier in case it is closer to birds than to Deinonychus. Avialae is also occasionally defined as an apomorphy-based clade (that is, one based on physical characteristics). Jacques Gauthier, who named Avialae in 1986, re-defined it in 2001 as all dinosaurs that possessed feathered wings used in flapping flight, and the birds that descended from them.
Despite being currently one of the most widely used, the crown-group definition of Aves has been criticised by some researchers. Lee and Spencer (1997) argued that, contrary to what Gauthier defended, this definition would not increase the stability of the clade and the exact content of Aves will always be uncertain because any defined clade (either crown or not) will have few synapomorphies distinguishing it from its closest relatives. Their alternative definition is synonymous to Avifilopluma.
Dinosaurs and the origin of birds
Based on fossil and biological evidence, most scientists accept that birds are a specialised subgroup of theropod dinosaurs and, more specifically, members of Maniraptora, a group of theropods which includes dromaeosaurids and oviraptorosaurs, among others. As scientists have discovered more theropods closely related to birds, the previously clear distinction between non-birds and birds has become blurred. By the 2000s, discoveries in the Liaoning Province of northeast China, which demonstrated many small theropod feathered dinosaurs, contributed to this ambiguity.
The consensus view in contemporary palaeontology is that the flying theropods, or avialans, are the closest relatives of the deinonychosaurs, which include dromaeosaurids and troodontids. Together, these form a group called Paraves. Some basal members of Deinonychosauria, such as Microraptor, have features which may have enabled them to glide or fly. The most basal deinonychosaurs were very small. This evidence raises the possibility that the ancestor of all paravians may have been arboreal, have been able to glide, or both. Unlike Archaeopteryx and the non-avialan feathered dinosaurs, who primarily ate meat, studies suggest that the first avialans were omnivores.
The Late Jurassic Archaeopteryx is well known as one of the first transitional fossils to be found, and it provided support for the theory of evolution in the late 19th century. Archaeopteryx was the first fossil to display both clearly traditional reptilian characteristics—teeth, clawed fingers, and a long, lizard-like tail—as well as wings with flight feathers similar to those of modern birds. It is not considered a direct ancestor of birds, though it is possibly closely related to the true ancestor.
Early evolution
Over 40% of key traits found in modern birds evolved during the 60 million year transition from the earliest bird-line archosaurs to the first maniraptoromorphs, i.e. the first dinosaurs closer to living birds than to Tyrannosaurus rex. The loss of osteoderms otherwise common in archosaurs and acquisition of primitive feathers might have occurred early during this phase. After the appearance of Maniraptoromorpha, the next 40 million years marked a continuous reduction of body size and the accumulation of neotenic (juvenile-like) characteristics. Hypercarnivory became increasingly less common while braincases enlarged and forelimbs became longer. The integument evolved into complex, pennaceous feathers.
The oldest known paravian (and probably the earliest avialan) fossils come from the Tiaojishan Formation of China, which has been dated to the late Jurassic period (Oxfordian stage), about 160 million years ago. The avialan species from this time period include Anchiornis huxleyi, Xiaotingia zhengi, and Aurornis xui.
The well-known probable early avialan, Archaeopteryx, dates from slightly later Jurassic rocks (about 155 million years old) from Germany. Many of these early avialans shared unusual anatomical features that may be ancestral to modern birds but were later lost during bird evolution. These features include enlarged claws on the second toe which may have been held clear of the ground in life, and long feathers or "hind wings" covering the hind limbs and feet, which may have been used in aerial maneuvering.
Avialans diversified into a wide variety of forms during the Cretaceous period. Many groups retained primitive characteristics, such as clawed wings and teeth, though the latter were lost independently in a number of avialan groups, including modern birds (Aves). Increasingly stiff tails (especially the outermost half) can be seen in the evolution of maniraptoromorphs, and this process culminated in the appearance of the pygostyle, an ossification of fused tail vertebrae. In the late Cretaceous, about 100 million years ago, the ancestors of all modern birds evolved a more open pelvis, allowing them to lay larger eggs compared to body size. Around 95 million years ago, they evolved a better sense of smell.
A third stage of bird evolution starting with Ornithothoraces (the "bird-chested" avialans) can be associated with the refining of aerodynamics and flight capabilities, and the loss or co-ossification of several skeletal features. Particularly significant are the development of an enlarged, keeled sternum and the alula, and the loss of grasping hands.
Early diversity of bird ancestors
The first large, diverse lineage of short-tailed avialans to evolve were the Enantiornithes, or "opposite birds", so named because the construction of their shoulder bones was in reverse to that of modern birds. Enantiornithes occupied a wide array of ecological niches, from sand-probing shorebirds and fish-eaters to tree-dwelling forms and seed-eaters. While they were the dominant group of avialans during the Cretaceous period, Enantiornithes became extinct along with many other dinosaur groups at the end of the Mesozoic era.
Many species of the second major avialan lineage to diversify, the Euornithes (meaning "true birds", because they include the ancestors of modern birds), were semi-aquatic and specialised in eating fish and other small aquatic organisms. Unlike the Enantiornithes, which dominated land-based and arboreal habitats, most early euornithians lacked perching adaptations and likely included shorebird-like species, waders, and swimming and diving species.
The latter included the superficially gull-like Ichthyornis and the Hesperornithiformes, which became so well adapted to hunting fish in marine environments that they lost the ability to fly and became primarily aquatic. The early euornithians also saw the development of many traits associated with modern birds, such as strongly keeled breastbones and toothless, beaked portions of their jaws (though most non-avian euornithians retained teeth in other parts of the jaws). Euornithes also included the first avialans to develop a true pygostyle and a fully mobile fan of tail feathers, which may have replaced the "hind wing" as the primary mode of aerial maneuverability and braking in flight.
A study on mosaic evolution in the avian skull found that the last common ancestor of all Neornithes might have had a beak similar to that of the modern hook-billed vanga and a skull similar to that of the Eurasian golden oriole. As both species are small aerial and canopy foraging omnivores, a similar ecological niche was inferred for this hypothetical ancestor.
Diversification of modern birds
Most studies agree on a Cretaceous age for the most recent common ancestor of modern birds but estimates range from the Early Cretaceous to the latest Cretaceous. Similarly, there is no agreement on whether most of the early diversification of modern birds occurred in the Cretaceous and associated with breakup of the supercontinent Gondwana or occurred later and potentially as a consequence of the Cretaceous–Palaeogene extinction event. This disagreement is in part caused by a divergence in the evidence; most molecular dating studies suggests a Cretaceous evolutionary radiation, while fossil evidence points to a Cenozoic radiation (the so-called 'rocks' versus 'clocks' controversy).
The discovery in 2005 of Vegavis from the Maastrichtian, the last stage of the Late Cretaceous, proved that the diversification of modern birds started before the Cenozoic era. The affinities of an earlier fossil, the possible galliform Austinornis lentus, dated to about 85 million years ago, are still too controversial to provide fossil evidence of modern bird diversification. In 2020, Asteriornis from the Maastrichtian was described; it appears to be a close relative of Galloanserae, the earliest diverging lineage within Neognathae.
Attempts to reconcile molecular and fossil evidence using genomic-scale DNA data and comprehensive fossil information have not resolved the controversy. However, a 2015 estimate that used a new method for calibrating molecular clocks confirmed that while modern birds originated early in the Late Cretaceous, likely in Western Gondwana, a pulse of diversification in all major groups occurred around the Cretaceous–Palaeogene extinction event. Modern birds would have expanded from West Gondwana through two routes. One route was an Antarctic interchange in the Paleogene. The other route was probably via Paleocene land bridges between South America and North America, which allowed for the rapid expansion and diversification of Neornithes into the Holarctic and Paleotropics. On the other hand, the occurrence of Asteriornis in the Northern Hemisphere suggests that Neornithes dispersed out of East Gondwana before the Paleocene.
Classification of bird orders
All modern birds lie within the crown group Aves (alternately Neornithes), which has two subdivisions: the Palaeognathae, which includes the flightless ratites (such as the ostriches) and the weak-flying tinamous, and the extremely diverse Neognathae, containing all other birds. These two subdivisions have variously been given the rank of superorder, cohort, or infraclass. The number of known living bird species is around 11,000 although sources may differ in their precise numbers.
Cladogram of modern bird relationships based on Stiller et al. (2024), showing the 44 orders recognised by the IOC.
The classification of birds is a contentious issue. Sibley and Ahlquist's Phylogeny and Classification of Birds (1990) is a landmark work on the subject. Most evidence seems to suggest the assignment of orders is accurate, but scientists disagree about the relationships among the orders themselves; evidence from modern bird anatomy, fossils and DNA have all been brought to bear on the problem, but no strong consensus has emerged. Fossil and molecular evidence from the 2010s is providing an increasingly clear picture of the evolution of modern bird orders.
Genomics
In 2010, the genome had been sequenced for only two birds, the chicken and the zebra finch. Since then, the genomes of 542 species of birds have been completed. At least one genome has been sequenced from every order. These include at least one species in about 90% of extant avian families (218 out of 236 families recognised by the Howard and Moore Checklist).
Being able to sequence and compare whole genomes gives researchers many types of information, about genes, the DNA that regulates the genes, and their evolutionary history. This has led to reconsideration of some of the classifications that were based solely on the identification of protein-coding genes. Waterbirds such as pelicans and flamingos, for example, may have in common specific adaptations suited to their environment that were developed independently.
Distribution
Birds live and breed in most terrestrial habitats and on all seven continents, reaching their southern extreme in the snow petrel's breeding colonies far inland in Antarctica. The highest bird diversity occurs in tropical regions. It was earlier thought that this high diversity was the result of higher speciation rates in the tropics; however studies from the 2000s found higher speciation rates in the high latitudes that were offset by greater extinction rates than in the tropics. Many species migrate annually over great distances and across oceans; several families of birds have adapted to life both on the world's oceans and in them, and some seabird species come ashore only to breed, while some penguins have been recorded diving to great depths.
Many bird species have established breeding populations in areas to which they have been introduced by humans. Some of these introductions have been deliberate; the ring-necked pheasant, for example, has been introduced around the world as a game bird. Others have been accidental, such as the establishment of wild monk parakeets in several North American cities after their escape from captivity. Some species, including cattle egret, yellow-headed caracara and galah, have spread naturally far beyond their original ranges as agricultural expansion created alternative habitats although modern practices of intensive agriculture have negatively impacted farmland bird populations.
Anatomy and physiology
Compared with other vertebrates, birds have a body plan that shows many unusual adaptations, mostly to facilitate flight.
Skeletal system
The skeleton consists of very lightweight bones. They have large air-filled cavities (called pneumatic cavities) which connect with the respiratory system. The skull bones in adults are fused and do not show cranial sutures. The orbital cavities that house the eyeballs are large and separated from each other by a bony septum (partition). The spine has cervical, thoracic, lumbar and caudal regions with the number of cervical (neck) vertebrae highly variable and especially flexible, but movement is reduced in the anterior thoracic vertebrae and absent in the later vertebrae. The last few are fused with the pelvis to form the synsacrum. The ribs are flattened and the sternum is keeled for the attachment of flight muscles except in the flightless bird orders. The forelimbs are modified into wings. The wings are more or less developed depending on the species; the only known groups that lost their wings are the extinct moa and elephant birds.
Excretory system
Like the reptiles, birds are primarily uricotelic, that is, their kidneys extract nitrogenous waste from their bloodstream and excrete it as uric acid, instead of urea or ammonia, through the ureters into the intestine. Birds do not have a urinary bladder or external urethral opening and (with the exception of the ostrich) uric acid is excreted along with faeces as a semisolid waste. However, birds such as hummingbirds can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges from the bird's cloaca. The cloaca is a multi-purpose opening: waste is expelled through it, most birds mate by joining cloacae, and females lay eggs from it. In addition, many species of birds regurgitate pellets.
It is a common but not universal feature of altricial passerine nestlings (born helpless, under constant parental care) that instead of excreting directly into the nest, they produce a fecal sac. This is a mucus-covered pouch that allows parents to either dispose of the waste outside the nest or to recycle the waste through their own digestive system.
Reproductive system
Most male birds do not have intromittent penises. Males within Palaeognathae (with the exception of the kiwis), the Anseriformes (with the exception of screamers), and in rudimentary forms in Galliformes (but fully developed in Cracidae) possess a penis, which is never present in Neoaves. Its length is thought to be related to sperm competition and it fills with lymphatic fluid instead of blood when erect. When not copulating, it is hidden within the proctodeum compartment within the cloaca, just inside the vent. Female birds have sperm storage tubules that allow sperm to remain viable long after copulation, a hundred days in some species. Sperm from multiple males may compete through this mechanism. Most female birds have a single ovary and a single oviduct, both on the left side, but there are exceptions: species in at least 16 different orders of birds have two ovaries. Even these species, however, tend to have a single oviduct. It has been speculated that this might be an adaptation to flight, but males have two testes, and it is also observed that the gonads in both sexes decrease dramatically in size outside the breeding season. Also terrestrial birds generally have a single ovary, as does the platypus, an egg-laying mammal. A more likely explanation is that the egg develops a shell while passing through the oviduct over a period of about a day, so that if two eggs were to develop at the same time, there would be a risk to survival. Although rare and mostly abortive, parthenogenesis is not unknown in birds; such eggs can be diploid and automictic and result in male offspring.
Birds are solely gonochoric, meaning they have two sexes: either female or male. The sex of birds is determined by the Z and W sex chromosomes, rather than by the X and Y chromosomes present in mammals. Male birds have two Z chromosomes (ZZ), and female birds have a W chromosome and a Z chromosome (WZ). A complex system of disassortative mating with two morphs is involved in the white-throated sparrow Zonotrichia albicollis, where white- and tan-browed morphs of opposite sex pair, making it appear as if four sexes were involved since any individual is compatible with only a fourth of the population.
In nearly all species of birds, an individual's sex is determined at fertilisation. However, one 2007 study claimed to demonstrate temperature-dependent sex determination among the Australian brushturkey, for which higher temperatures during incubation resulted in a higher female-to-male sex ratio. This, however, was later proven to not be the case. These birds do not exhibit temperature-dependent sex determination, but temperature-dependent sex mortality.
Respiratory and circulatory systems
Birds have one of the most complex respiratory systems of all animal groups. Upon inhalation, 75% of the fresh air bypasses the lungs and flows directly into a posterior air sac which extends from the lungs and connects with air spaces in the bones and fills them with air. The other 25% of the air goes directly into the lungs. When the bird exhales, the used air flows out of the lungs and the stored fresh air from the posterior air sac is simultaneously forced into the lungs. Thus, a bird's lungs receive a constant supply of fresh air during both inhalation and exhalation. Sound production is achieved using the syrinx, a muscular chamber incorporating multiple tympanic membranes which diverges from the lower end of the trachea; the trachea being elongated in some species, increasing the volume of vocalisations and the perception of the bird's size.
In birds, the main arteries taking blood away from the heart originate from the right aortic arch (or pharyngeal arch), unlike in the mammals where the left aortic arch forms this part of the aorta. The postcava receives blood from the limbs via the renal portal system. Unlike in mammals, the circulating red blood cells in birds retain their nucleus.
Heart type and features
The avian circulatory system is driven by a four-chambered, myogenic heart contained in a fibrous pericardial sac. This pericardial sac is filled with a serous fluid for lubrication. The heart itself is divided into a right and left half, each with an atrium and ventricle. The atrium and ventricles of each side are separated by atrioventricular valves which prevent back flow from one chamber to the next during contraction. Being myogenic, the heart's pace is maintained by pacemaker cells found in the sinoatrial node, located on the right atrium.
The sinoatrial node uses calcium to cause a depolarising signal transduction pathway from the atrium through right and left atrioventricular bundle which communicates contraction to the ventricles. The avian heart also consists of muscular arches that are made up of thick bundles of muscular layers. Much like a mammalian heart, the avian heart is composed of endocardial, myocardial and epicardial layers. The atrium walls tend to be thinner than the ventricle walls, due to the intense ventricular contraction used to pump oxygenated blood throughout the body. Avian hearts are generally larger than mammalian hearts when compared to body mass. This adaptation allows more blood to be pumped to meet the high metabolic need associated with flight.
Organisation
Birds have a very efficient system for diffusing oxygen into the blood; birds have a ten times greater surface area to gas exchange volume than mammals. As a result, birds have more blood in their capillaries per unit of volume of lung than a mammal. The arteries are composed of thick elastic muscles to withstand the pressure of the ventricular contractions, and become more rigid as they move away from the heart. Blood moves through the arteries, which undergo vasoconstriction, and into arterioles which act as a transportation system to distribute primarily oxygen as well as nutrients to all tissues of the body. As the arterioles move away from the heart and into individual organs and tissues they are further divided to increase surface area and slow blood flow. Blood travels through the arterioles and moves into the capillaries where gas exchange can occur.
Capillaries are organised into capillary beds in tissues; it is here that blood exchanges oxygen for carbon dioxide waste. In the capillary beds, blood flow is slowed to allow maximum diffusion of oxygen into the tissues. Once the blood has become deoxygenated, it travels through venules then veins and back to the heart. Veins, unlike arteries, are thin and rigid as they do not need to withstand extreme pressure. As blood travels through the venules to the veins a funneling occurs called vasodilation bringing blood back to the heart. Once the blood reaches the heart, it moves first into the right atrium, then the right ventricle to be pumped through the lungs for further gas exchange of carbon dioxide waste for oxygen. Oxygenated blood then flows from the lungs through the left atrium to the left ventricle where it is pumped out to the body.
Nervous system
The nervous system is large relative to the bird's size. The most developed part of the brain of birds is the one that controls the flight-related functions, while the cerebellum coordinates movement and the cerebrum controls behaviour patterns, navigation, mating and nest building. Most birds have a poor sense of smell with notable exceptions including kiwis, New World vultures and tubenoses. The avian visual system is usually highly developed. Water birds have special flexible lenses, allowing accommodation for vision in air and water. Some species also have dual fovea. Birds are tetrachromatic, possessing ultraviolet (UV) sensitive cone cells in the eye as well as green, red and blue ones. They also have double cones, likely to mediate achromatic vision.
Many birds show plumage patterns in ultraviolet that are invisible to the human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed in courtship by posturing and raising of their nape feathers. Ultraviolet light is also used in foraging—kestrels have been shown to search for prey by detecting the UV reflective urine trail marks left on the ground by rodents. With the exception of pigeons and a few other species, the eyelids of birds are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third eyelid that moves horizontally. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds. The bird retina has a fan shaped blood supply system called the pecten.
Eyes of most birds are large, not very round and capable of only limited movement in the orbits, typically 10–20°. Birds with eyes on the sides of their heads have a wide visual field, while birds with eyes on the front of their heads, such as owls, have binocular vision and can estimate the depth of field. The avian ear lacks external pinnae but is covered by feathers, although in some birds, such as the Asio, Bubo and Otus owls, these feathers form tufts which resemble ears. The inner ear has a cochlea, but it is not a spiral as in mammals.
Defence and intraspecific combat
A few species are able to use chemical defences against predators; some Procellariiformes can eject an unpleasant stomach oil against an aggressor, and some species of pitohuis from New Guinea have a powerful neurotoxin in their skin and feathers.
A lack of field observations limits our knowledge, but intraspecific conflicts are known to sometimes result in injury or death. The screamers (Anhimidae), some jacanas (Jacana, Hydrophasianus), the spur-winged goose (Plectropterus), the torrent duck (Merganetta) and nine species of lapwing (Vanellus) use a sharp spur on the wing as a weapon. The steamer ducks (Tachyeres), geese and swans (Anserinae), the solitaire (Pezophaps), sheathbills (Chionis), some guans (Crax) and stone curlews (Burhinus) use a bony knob on the alular metacarpal to punch and hammer opponents. The jacanas Actophilornis and Irediparra have an expanded, blade-like radius. The extinct Xenicibis was unique in having an elongate forelimb and massive hand which likely functioned in combat or defence as a jointed club or flail. Swans, for instance, may strike with the bony spurs and bite when defending eggs or young.
Feathers, plumage, and scales
Feathers are a feature characteristic of birds (though also present in some dinosaurs not currently considered to be true birds). They facilitate flight, provide insulation that aids in thermoregulation, and are used in display, camouflage, and signalling. There are several types of feathers, each serving its own set of purposes. Feathers are epidermal growths attached to the skin and arise only in specific tracts of skin called pterylae. The distribution pattern of these feather tracts (pterylosis) is used in taxonomy and systematics. The arrangement and appearance of feathers on the body, called plumage, may vary within species by age, social status, and sex.
Plumage is regularly moulted; the standard plumage of a bird that has moulted after breeding is known as the "non-breeding" plumage, or—in the Humphrey–Parkes terminology—"basic" plumage; breeding plumages or variations of the basic plumage are known under the Humphrey–Parkes system as "alternate" plumages. Moulting is annual in most species, although some may have two moults a year, and large birds of prey may moult only once every few years. Moulting patterns vary across species. In passerines, flight feathers are replaced one at a time with the innermost primary being the first. When the fifth or sixth primary is replaced, the outermost tertiaries begin to drop. After the innermost tertiaries are moulted, the secondaries starting from the innermost begin to drop and this proceeds to the outer feathers (centrifugal moult). The greater primary coverts are moulted in synchrony with the primary that they overlap.
A small number of species, such as ducks and geese, lose all of their flight feathers at once, temporarily becoming flightless. As a general rule, the tail feathers are moulted and replaced starting with the innermost pair. Centripetal moults of tail feathers are however seen in the Phasianidae. The centrifugal moult is modified in the tail feathers of woodpeckers and treecreepers, in that it begins with the second innermost pair of feathers and finishes with the central pair of feathers so that the bird maintains a functional climbing tail. The general pattern seen in passerines is that the primaries are replaced outward, secondaries inward, and the tail from centre outward. Before nesting, the females of most bird species gain a bare brood patch by losing feathers close to the belly. The skin there is well supplied with blood vessels and helps the bird in incubation.
Feathers require maintenance and birds preen or groom them daily, spending an average of around 9% of their daily time on this. The bill is used to brush away foreign particles and to apply waxy secretions from the uropygial gland; these secretions protect the feathers' flexibility and act as an antimicrobial agent, inhibiting the growth of feather-degrading bacteria. This may be supplemented with the secretions of formic acid from ants, which birds receive through a behaviour known as anting, to remove feather parasites.
The scales of birds are composed of the same keratin as beaks, claws, and spurs. They are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. Most bird scales do not overlap significantly, except in the cases of kingfishers and woodpeckers.
The scales of birds are thought to be homologous to those of reptiles and mammals.
Flight
Most birds can fly, which distinguishes them from almost all other vertebrate classes. Flight is the primary means of locomotion for most bird species and is used for searching for food and for escaping from predators. Birds have various adaptations for flight, including a lightweight skeleton, two large flight muscles, the pectoralis (which accounts for 15% of the total mass of the bird) and the supracoracoideus, as well as a modified forelimb (wing) that serves as an aerofoil.
Wing shape and size generally determine a bird's flight style and performance; many birds combine powered, flapping flight with less energy-intensive soaring flight. About 60 extant bird species are flightless, as were many extinct birds. Flightlessness often arises in birds on isolated islands, most likely due to limited resources and the absence of mammalian land predators. Flightlessness is almost exclusively correlated with gigantism due to an island's inherent condition of isolation. Although flightless, penguins use similar musculature and movements to "fly" through the water, as do some flight-capable birds such as auks, shearwaters and dippers.
Behaviour
Most birds are diurnal, but some birds, such as many species of owls and nightjars, are nocturnal or crepuscular (active during twilight hours), and many coastal waders feed when the tides are appropriate, by day or night.
Diet and feeding
Birds' diets are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Some species such as pigeons and some psittacine species do not have a gallbladder. Most birds are highly adapted for rapid digestion to aid with flight. Some migratory birds have adapted to use protein stored in many parts of their bodies, including protein from the intestines, as additional energy during migration.
Birds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Avian foraging strategies can vary widely by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually.
Nectar feeders such as hummingbirds, sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. Divers, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese and dabbling ducks are primarily grazers.
Some species, including frigatebirds, gulls, and skuas, engage in kleptoparasitism, stealing food items from other birds. Kleptoparasitism is thought to be a supplement to food obtained by hunting, rather than a significant part of any species' diet; a study of great frigatebirds stealing from masked boobies estimated that the frigatebirds stole at most 40% of their food and on average stole only 5%. Other birds are scavengers; some of these, like vultures, are specialised carrion eaters, while others, like gulls, corvids, or other birds of prey, are opportunists.
Water and drinking
Water is needed by many birds although their mode of excretion and lack of sweat glands reduces the physiological demands. Some desert birds can obtain their water needs entirely from moisture in their food. Some have other adaptations such as allowing their body temperature to rise, saving on moisture loss from evaporative cooling or panting. Seabirds can drink seawater and have salt glands inside the head that eliminate excess salt out of the nostrils.
Most birds scoop water in their beaks and raise their head to let water run down the throat. Some species, especially of arid zones, belonging to the pigeon, finch, mousebird, button-quail and bustard families are capable of sucking up water without the need to tilt back their heads. Some desert birds depend on water sources and sandgrouse are particularly well known for congregating daily at waterholes. Nesting sandgrouse and many plovers carry water to their young by wetting their belly feathers. Some birds carry water for chicks at the nest in their crop or regurgitate it along with food. The pigeon family, flamingos and penguins have adaptations to produce a nutritive fluid called crop milk that they provide to their chicks.
Feather care
Feathers, being critical to the survival of a bird, require maintenance. Apart from physical wear and tear, feathers face the onslaught of fungi, ectoparasitic feather mites and bird lice. The physical condition of feathers is maintained by preening, often with the application of secretions from the uropygial gland. Birds also bathe in water or dust themselves. While some birds dip into shallow water, more aerial species may make aerial dips into water, and arboreal species often make use of dew or rain that collects on leaves. Birds of arid regions make use of loose soil to dust-bathe. A behaviour termed anting, in which the bird encourages ants to run through its plumage, is also thought to help reduce the ectoparasite load in feathers. Many species will spread out their wings and expose them to direct sunlight, and this too is thought to help in reducing fungal and ectoparasitic activity that may lead to feather damage.
Migration
Many bird species migrate to take advantage of global differences of seasonal temperatures, therefore optimising availability of food sources and breeding habitat. These migrations vary among the different groups. Many landbirds, shorebirds, and waterbirds undertake annual long-distance migrations, usually triggered by the length of daylight as well as weather conditions. These birds are characterised by a breeding season spent in the temperate or polar regions and a non-breeding season in the tropical regions or opposite hemisphere. Before migration, birds substantially increase body fats and reserves and reduce the size of some of their organs.
Migration is highly demanding energetically, particularly as birds need to cross deserts and oceans without refuelling. Landbirds have a more limited flight range than shorebirds, and the bar-tailed godwit is capable of exceptionally long non-stop flights. Some seabirds undertake long migrations, with the longest annual migrations including those of Arctic terns, which travel between their Arctic breeding grounds in Greenland and Iceland and their wintering grounds in Antarctica, and sooty shearwaters, which nest in New Zealand and Chile and make annual round trips to their summer feeding grounds in the North Pacific off Japan, Alaska and California.
Other seabirds disperse after breeding, travelling widely but having no set migration route. Albatrosses nesting in the Southern Ocean often undertake circumpolar trips between breeding seasons.
Some bird species undertake shorter migrations, travelling only as far as is required to avoid bad weather or obtain food. Irruptive species such as the boreal finches are one such group and can commonly be found at a location in one year and absent the next. This type of migration is normally associated with food availability. Species may also travel shorter distances over part of their range, with individuals from higher latitudes travelling into the existing range of conspecifics; others undertake partial migrations, where only a fraction of the population, usually females and subdominant males, migrates. Partial migration can form a large percentage of the migration behaviour of birds in some regions; in Australia, surveys found that 44% of non-passerine birds and 32% of passerines were partially migratory.
Altitudinal migration is a form of short-distance migration in which birds spend the breeding season at higher altitudes and move to lower ones during suboptimal conditions. It is most often triggered by temperature changes and usually occurs when the normal territories also become inhospitable due to lack of food. Some species may also be nomadic, holding no fixed territory and moving according to weather and food availability. Parrots as a family are overwhelmingly neither migratory nor sedentary but considered to either be dispersive, irruptive, nomadic or undertake small and irregular migrations.
The ability of birds to return to precise locations across vast distances has been known for some time; in an experiment conducted in the 1950s, a Manx shearwater released in Boston in the United States returned to its colony in Skomer, in Wales within 13 days, a distance of . Birds navigate during migration using a variety of methods. For diurnal migrants, the sun is used to navigate by day, and a stellar compass is used at night. Birds that use the sun compensate for the changing position of the sun during the day by the use of an internal clock. Orientation with the stellar compass depends on the position of the constellations surrounding Polaris. These are backed up in some species by their ability to sense the Earth's geomagnetism through specialised photoreceptors.
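The time-compensation described above can be illustrated with a toy calculation (a simplified sketch, not from the source; the 15 degrees per hour is simply the average rate at which the sun's azimuth drifts):

```python
# Toy model of a time-compensated sun compass: to hold a fixed compass
# bearing, a bird must keep changing its angle relative to the sun as its
# internal clock advances, because the sun's azimuth drifts on average
# by roughly 15 degrees per hour.
DESIRED_BEARING = 180.0  # due south; an arbitrary example heading

def sun_azimuth(hour: float) -> float:
    """Crude model: sun due south at 12:00, drifting ~15 deg/hour."""
    return (180.0 + (hour - 12.0) * 15.0) % 360.0

def angle_from_sun(hour: float) -> float:
    """Heading the bird holds, measured relative to the sun's azimuth."""
    return (DESIRED_BEARING - sun_azimuth(hour)) % 360.0

for h in (9, 12, 15, 18):
    print(f"{h:02d}:00  sun at {sun_azimuth(h):5.1f} deg, "
          f"hold {angle_from_sun(h):5.1f} deg from the sun")
```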
Communication
Birds communicate primarily using visual and auditory signals. Signals can be interspecific (between species) and intraspecific (within species).
Birds sometimes use plumage to assess and assert social dominance, to display breeding condition in sexually selected species, or to make threatening displays, as in the sunbittern's mimicry of a large predator to ward off hawks and protect young chicks.
Visual communication among birds may also involve ritualised displays, which have developed from non-signalling actions such as preening, the adjustments of feather position, pecking, or other behaviour. These displays may signal aggression or submission or may contribute to the formation of pair-bonds. The most elaborate displays occur during courtship, where "dances" are often formed from complex combinations of many possible component movements; males' breeding success may depend on the quality of such displays.
Bird calls and songs, which are produced in the syrinx, are the major means by which birds communicate with sound. This communication can be very complex; some species can operate the two sides of the syrinx independently, allowing the simultaneous production of two different songs.
Calls are used for a variety of purposes, including mate attraction, evaluation of potential mates, bond formation, the claiming and maintenance of territories, the identification of other individuals (such as when parents look for chicks in colonies or when mates reunite at the start of breeding season), and the warning of other birds of potential predators, sometimes with specific information about the nature of the threat. Some birds also use mechanical sounds for auditory communication. The Coenocorypha snipes of New Zealand drive air through their feathers, woodpeckers drum for long-distance communication, and palm cockatoos use tools to drum.
Flocking and other associations
While some birds are essentially territorial or live in small family groups, other birds may form large flocks. The principal benefits of flocking are safety in numbers and increased foraging efficiency. Defence against predators is particularly important in closed habitats like forests, where ambush predation is common and multiple eyes can provide a valuable early warning system. This has led to the development of many mixed-species feeding flocks, which are usually composed of small numbers of many species; these flocks provide safety in numbers but increase potential competition for resources. Costs of flocking include bullying of socially subordinate birds by more dominant birds and the reduction of feeding efficiency in certain cases. Some species have a mixed system with breeding pairs maintaining territories, while unmated or young birds live in flocks where they secure mates prior to finding territories.
Birds sometimes also form associations with non-avian species. Plunge-diving seabirds associate with dolphins and tuna, which push shoaling fish towards the surface. Some species of hornbills have a mutualistic relationship with dwarf mongooses, in which they forage together and warn each other of nearby birds of prey and other predators.
Resting and roosting
The high metabolic rates of birds during the active part of the day are supplemented by rest at other times. Sleeping birds often use a type of sleep known as vigilant sleep, where periods of rest are interspersed with quick eye-opening "peeks", allowing them to be sensitive to disturbances and enabling rapid escape from threats. Swifts are believed to be able to sleep in flight, and radar observations suggest that they orient themselves to face the wind in their roosting flight. It has been suggested that there may be certain kinds of sleep which are possible even when in flight.
Some birds have also demonstrated the capacity to fall into slow-wave sleep one hemisphere of the brain at a time. Birds tend to exercise this ability depending upon their position relative to the outside of the flock. This may allow the eye opposite the sleeping hemisphere to remain vigilant for predators by viewing the outer margins of the flock. This adaptation is also known from marine mammals. Communal roosting is common because it lowers the loss of body heat and decreases the risks associated with predators. Roosting sites are often chosen with regard to thermoregulation and safety. Unusual mobile roost sites include large herbivores on the African savanna, which are used by oxpeckers.
Many sleeping birds bend their heads over their backs and tuck their bills in their back feathers, although others place their beaks among their breast feathers. Many birds rest on one leg, while some may pull up their legs into their feathers, especially in cold weather. Perching birds have a tendon-locking mechanism that helps them hold on to the perch when they are asleep. Many ground birds, such as quails and pheasants, roost in trees. A few parrots of the genus Loriculus roost hanging upside down. Some hummingbirds go into a nightly state of torpor accompanied by a reduction of their metabolic rate. This physiological adaptation occurs in nearly a hundred other species, including owlet-nightjars, nightjars, and woodswallows. One species, the common poorwill, even enters a state of hibernation. Birds do not have sweat glands, but can lose water directly through the skin, and they may cool themselves by moving to shade, standing in water, panting, increasing their surface area, fluttering the throat, or using special behaviours like urohidrosis.
Breeding
Social systems
95 per cent of bird species are socially monogamous. These species pair for at least the length of the breeding season or—in some cases—for several years or until the death of one mate. Monogamy allows for both paternal care and biparental care, which is especially important for species in which care from both the female and the male parent is required in order to successfully rear a brood. Among many socially monogamous species, extra-pair copulation (infidelity) is common. Such behaviour typically occurs between dominant males and females paired with subordinate males, but may also be the result of forced copulation in ducks and other anatids.
For females, possible benefits of extra-pair copulation include getting better genes for her offspring and insuring against the possibility of infertility in her mate. Males of species that engage in extra-pair copulations will closely guard their mates to ensure the parentage of the offspring that they raise.
Other mating systems, including polygyny, polyandry, polygamy, polygynandry, and promiscuity, also occur. Polygamous breeding systems arise when females are able to raise broods without the help of males. Mating systems vary across bird families, but variations within species are thought to be driven by environmental conditions. A unique system is the formation of trios, in which a breeding pair temporarily allows a third individual into its territory to assist with rearing the brood, thereby leading to higher fitness.
Breeding usually involves some form of courtship display, typically performed by the male. Most displays are rather simple and involve some type of song. Some displays, however, are quite elaborate. Depending on the species, these may include wing or tail drumming, dancing, aerial flights, or communal lekking. Females are generally the ones that drive partner selection, although in the polyandrous phalaropes this is reversed: plainer males choose brightly coloured females. Courtship feeding, billing and allopreening are commonly performed between partners, generally after the birds have paired and mated.
Homosexual behaviour has been observed in males or females in numerous species of birds, including copulation, pair-bonding, and joint parenting of chicks. Over 130 avian species around the world engage in sexual interactions between the same sex or homosexual behaviours. "Same-sex courtship activities may involve elaborate displays, synchronised dances, gift-giving ceremonies, or behaviours at specific display areas including bowers, arenas, or leks."
Territories, nesting and incubation
Many birds actively defend a territory from others of the same species during the breeding season; maintenance of territories protects the food source for their chicks. Species that are unable to defend feeding territories, such as seabirds and swifts, often breed in colonies instead; this is thought to offer protection from predators. Colonial breeders defend small nesting sites, and competition between and within species for nesting sites can be intense.
All birds lay amniotic eggs with hard shells made mostly of calcium carbonate. Hole and burrow nesting species tend to lay white or pale eggs, while open nesters lay camouflaged eggs. There are many exceptions to this pattern, however; the ground-nesting nightjars have pale eggs, and camouflage is instead provided by their plumage. Species that are victims of brood parasites have varying egg colours to improve the chances of spotting a parasite's egg, which forces female parasites to match their eggs to those of their hosts.
Bird eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, mounds, or burrows. Some bird nests can be a simple scrape, with minimal or no lining; most seabird and wader nests are no more than a scrape on the ground. Most birds build nests in sheltered, hidden areas to avoid predation, but large or colonial birds—which are more capable of defence—may build more open nests. During nest construction, some species seek out plant matter from plants with parasite-reducing toxins to improve chick survival, and feathers are often used for nest insulation. Some bird species have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. The absence of nests is especially prevalent in open habitat ground-nesting species where any addition of nest material would make the nest more conspicuous. Many ground nesting birds lay a clutch of eggs that hatch synchronously, with precocial chicks led away from the nests (nidifugous) by their parents soon after hatching.
Incubation, which regulates temperature for chick development, usually begins after the last egg has been laid. In monogamous species incubation duties are often shared, whereas in polygamous species one parent is wholly responsible for incubation. Warmth from parents passes to the eggs through brood patches, areas of bare skin on the abdomen or breast of the incubating birds. Incubation can be an energetically demanding process; adult albatrosses, for instance, lose as much as of body weight per day of incubation. The warmth for the incubation of the eggs of megapodes comes from the sun, decaying vegetation or volcanic sources. Incubation periods range from 10 days (in woodpeckers, cuckoos and passerine birds) to over 80 days (in albatrosses and kiwis).
The diversity of characteristics of birds is great, sometimes even in closely related species.
Parental care and fledging
At the time of their hatching, chicks range in development from helpless to independent, depending on their species. Helpless chicks are termed altricial, and tend to be born small, blind, immobile and naked; chicks that are mobile and feathered upon hatching are termed precocial. Altricial chicks need help thermoregulating and must be brooded for longer than precocial chicks. The young of many bird species do not precisely fit into either the precocial or altricial category, having some aspects of each and thus falling somewhere on an "altricial-precocial spectrum". Chicks at neither extreme but favouring one or the other may be termed semi-precocial or semi-altricial.
The length and nature of parental care varies widely amongst different orders and species. At one extreme, parental care in megapodes ends at hatching; the newly hatched chick digs itself out of the nest mound without parental assistance and can fend for itself immediately. At the other extreme, many seabirds have extended periods of parental care, the longest being that of the great frigatebird, whose chicks take up to six months to fledge and are fed by the parents for up to an additional 14 months. The chick guard stage describes the period of breeding during which one of the adult birds is permanently present at the nest after chicks have hatched. The main purpose of the guard stage is to aid offspring to thermoregulate and protect them from predation.
In some species, both parents care for nestlings and fledglings; in others, such care is the responsibility of only one sex. In some species, other members of the same species—usually close relatives of the breeding pair, such as offspring from previous broods—will help with the raising of the young. Such alloparenting is particularly common among the Corvida, which includes such birds as the true crows, Australian magpie and fairy-wrens, but has been observed in species as different as the rifleman and red kite. Among most groups of animals, male parental care is rare. In birds, however, it is quite common—more so than in any other vertebrate class. Although territory and nest site defence, incubation, and chick feeding are often shared tasks, there is sometimes a division of labour in which one mate undertakes all or most of a particular duty.
The point at which chicks fledge varies dramatically. The chicks of the Synthliboramphus murrelets, like the ancient murrelet, leave the nest the night after they hatch, following their parents out to sea, where they are raised away from terrestrial predators. Some other species, such as ducks, move their chicks away from the nest at an early age. In most species, chicks leave the nest just before, or soon after, they are able to fly. The amount of parental care after fledging varies; albatross chicks leave the nest on their own and receive no further help, while other species continue some supplementary feeding after fledging. Chicks may also follow their parents during their first migration.
Brood parasites
Brood parasitism, in which an egg-layer leaves her eggs with another individual's brood, is more common among birds than any other type of organism. After a parasitic bird lays her eggs in another bird's nest, they are often accepted and raised by the host at the expense of the host's own brood. Brood parasites may be either obligate brood parasites, which must lay their eggs in the nests of other species because they are incapable of raising their own young, or non-obligate brood parasites, which sometimes lay eggs in the nests of conspecifics to increase their reproductive output even though they could have raised their own young. One hundred bird species, including honeyguides, icterids, and ducks, are obligate parasites, though the most famous are the cuckoos. Some brood parasites are adapted to hatch before their host's young, which allows them to destroy the host's eggs by pushing them out of the nest or to kill the host's chicks; this ensures that all food brought to the nest will be fed to the parasitic chicks.
Sexual selection
Birds have evolved a variety of mating behaviours, with the peacock tail being perhaps the most famous example of sexual selection and the Fisherian runaway. Commonly occurring sexual dimorphisms such as size and colour differences are energetically costly attributes that signal competitive breeding situations. Many types of avian sexual selection have been identified: intersexual selection, also known as female choice, and intrasexual competition, where individuals of the more abundant sex compete with each other for the privilege to mate. Sexually selected traits often evolve to become more pronounced in competitive breeding situations until the trait begins to limit the individual's fitness. Conflicts between an individual's fitness and signalling adaptations ensure that sexually selected ornaments such as plumage colouration and courtship behaviour are "honest" traits. Signals must be costly to ensure that only good-quality individuals can present these exaggerated sexual ornaments and behaviours.
Inbreeding depression
Inbreeding causes early death (inbreeding depression) in the zebra finch Taeniopygia guttata. Embryo survival (that is, hatching success of fertile eggs) was significantly lower for sib-sib mating pairs than for unrelated pairs.
Darwin's finch Geospiza scandens experiences inbreeding depression (reduced survival of offspring) and the magnitude of this effect is influenced by environmental conditions such as low food availability.
Inbreeding avoidance
Incestuous matings by the purple-crowned fairywren Malurus coronatus result in severe fitness costs due to inbreeding depression (greater than 30% reduction in hatchability of eggs). Females paired with related males may undertake extra-pair matings (reported in roughly 90% of avian species) that can reduce the negative effects of inbreeding. However, there are ecological and demographic constraints on extra-pair matings. Nevertheless, 43% of broods produced by incestuously paired females contained extra-pair young.
Inbreeding depression occurs in the great tit (Parus major) when the offspring produced as a result of a mating between close relatives show reduced fitness. In natural populations of Parus major, inbreeding is avoided by dispersal of individuals from their birthplace, which reduces the chance of mating with a close relative.
Southern pied babblers Turdoides bicolor appear to avoid inbreeding in two ways. The first is through dispersal, and the second is by avoiding familiar group members as mates.
Cooperative breeding in birds typically occurs when offspring, usually males, delay dispersal from their natal group in order to remain with the family to help rear younger kin. Female offspring rarely stay at home, dispersing over distances that allow them to breed independently, or to join unrelated groups. In general, inbreeding is avoided because it leads to a reduction in progeny fitness (inbreeding depression) due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilisation between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny.
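The mechanism invoked above (homozygous expression of deleterious recessives) can be quantified with a standard population-genetics relation; this is a textbook formula, not taken from this article. For a recessive allele at frequency $q$ and an inbreeding coefficient $F$, the expected frequency of homozygotes among offspring is

$$P(aa) = q^{2} + F\,q(1-q).$$

Unrelated parents give $F = 0$, while full-sibling matings give $F = 1/4$, so for a rare deleterious allele with $q = 0.01$ the homozygote frequency rises from $0.0001$ to about $0.0026$, roughly a 26-fold increase. This is the sense in which inbreeding exposes recessive defects that outcrossing would mask.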
Ecology
Birds occupy a wide range of ecological positions. While some birds are generalists, others are highly specialised in their habitat or food requirements. Even within a single habitat, such as a forest, the niches occupied by different species of birds vary, with some species feeding in the forest canopy, others beneath the canopy, and still others on the forest floor. Forest birds may be insectivores, frugivores, or nectarivores. Aquatic birds generally feed by fishing, plant eating, and piracy or kleptoparasitism. Many grassland birds are granivores. Birds of prey specialise in hunting mammals or other birds, while vultures are specialised scavengers. Birds are also preyed upon by a range of mammals including a few avivorous bats. A wide range of endo- and ectoparasites depend on birds and some parasites that are transmitted from parent to young have co-evolved and show host-specificity.
Some nectar-feeding birds are important pollinators, and many frugivores play a key role in seed dispersal. Plants and pollinating birds often coevolve, and in some cases a flower's primary pollinator is the only species capable of reaching its nectar.
Birds are often important to island ecology. Birds have frequently reached islands that mammals have not; on those islands, birds may fulfil ecological roles typically played by larger animals. For example, in New Zealand nine species of moa were important browsers, as are the kererū and kōkako today. The plants of New Zealand retain the defensive adaptations evolved to protect them from the extinct moa.
Many birds act as ecosystem engineers through the construction of nests, which provide important microhabitats and food for hundreds of species of invertebrates. Nesting seabirds may affect the ecology of islands and surrounding seas, principally through the concentration of large quantities of guano, which may enrich the local soil and the surrounding seas.
A wide variety of avian ecology field methods, including counts, nest monitoring, and capturing and marking, are used for researching avian ecology.
Relationship with humans
Since birds are highly visible and common animals, humans have had a relationship with them since the dawn of humanity. Sometimes, these relationships are mutualistic, like the cooperative honey-gathering among honeyguides and African peoples such as the Borana. Other times, they may be commensal, as when species such as the house sparrow have benefited from human activities. Several species have adapted to the practices of farmers engaged in traditional farming. Examples include the sarus crane, which begins nesting in India when farmers flood the fields in anticipation of rains, and the woolly-necked stork, which has taken to nesting on short trees grown for agroforestry beside fields and canals. Several bird species have become commercially significant agricultural pests, and some pose an aviation hazard. Human activities can also be detrimental and have threatened numerous bird species with extinction (hunting, avian lead poisoning, pesticides, roadkill, wind turbine kills and predation by pet cats and dogs are common causes of death for birds).
Birds can act as vectors for spreading diseases such as psittacosis, salmonellosis, campylobacteriosis, mycobacteriosis (avian tuberculosis), avian influenza (bird flu), giardiasis, and cryptosporidiosis over long distances. Some of these are zoonotic diseases that can also be transmitted to humans.
Economic importance
Domesticated birds raised for meat and eggs, called poultry, are the largest source of animal protein eaten by humans; in 2003, tons of poultry and tons of eggs were produced worldwide. Chickens account for much of human poultry consumption, though domesticated turkeys, ducks, and geese are also relatively common. Many species of birds are also hunted for meat. Bird hunting is primarily a recreational activity except in extremely undeveloped areas. The most important birds hunted in North and South America are waterfowl; other widely hunted birds include pheasants, wild turkeys, quail, doves, partridge, grouse, snipe, and woodcock. Muttonbirding is also popular in Australia and New Zealand. Although some hunting, such as that of muttonbirds, may be sustainable, hunting has led to the extinction or endangerment of dozens of species.
Other commercially valuable products from birds include feathers (especially the down of geese and ducks), which are used as insulation in clothing and bedding, and seabird faeces (guano), which is a valuable source of phosphorus and nitrogen. The War of the Pacific, sometimes called the Guano War, was fought in part over the control of guano deposits.
Birds have been domesticated by humans both as pets and for practical purposes. Colourful birds, such as parrots and mynas, are bred in captivity or kept as pets, a practice that has led to the illegal trafficking of some endangered species. Falcons and cormorants have long been used for hunting and fishing, respectively. Messenger pigeons, used since at least 1 AD, remained important as recently as World War II. Today, such activities are more common as hobbies or for entertainment and tourism.
Amateur bird enthusiasts (called birdwatchers, twitchers or, more commonly, birders) number in the millions. Many homeowners erect bird feeders near their homes to attract various species. Bird feeding has grown into a multimillion-dollar industry; for example, an estimated 75% of households in Britain provide food for birds at some point during the winter.
In religion and mythology
Birds play prominent and diverse roles in religion and mythology.
In religion, birds may serve as either messengers or priests and leaders for a deity, such as in the Cult of Makemake, in which the Tangata manu of Easter Island served as chiefs or as attendants, as in the case of Hugin and Munin, the two common ravens who whispered news into the ears of the Norse god Odin. In several civilisations of ancient Italy, particularly Etruscan and Roman religion, priests were involved in augury, or interpreting the words of birds while the "auspex" (from which the word "auspicious" is derived) watched their activities to foretell events.
They may also serve as religious symbols, as when Jonah (Hebrew for "dove") embodied the fright, passivity, mourning, and beauty traditionally associated with doves. Birds have themselves been deified, as in the case of the common peacock, which is perceived as Mother Earth by the people of southern India. In the ancient world, doves were used as symbols of the Mesopotamian goddess Inanna (later known as Ishtar), the Canaanite mother goddess Asherah, and the Greek goddess Aphrodite. In ancient Greece, Athena, the goddess of wisdom and patron deity of the city of Athens, had a little owl as her symbol. In religious images preserved from the Inca and Tiwanaku empires, birds are depicted in the process of transgressing boundaries between earthly and underground spiritual realms. Indigenous peoples of the central Andes maintain legends of birds passing to and from metaphysical worlds.
In culture and folklore
Birds have featured in culture and art since prehistoric times, when they were represented in early cave paintings and carvings. Some birds have been perceived as monsters, including the mythological roc and the Māori's legendary Pouākai, a giant bird capable of snatching humans. Birds were later used as symbols of power, as in the magnificent Peacock Throne of the Mughal and Persian emperors. With the advent of scientific interest in birds, many paintings of birds were commissioned for books.
Among the most famous of these bird artists was John James Audubon, whose paintings of North American birds were a great commercial success in Europe and who later lent his name to the National Audubon Society. Birds are also important figures in poetry; for example, Homer incorporated nightingales into his Odyssey, and Catullus used a sparrow as an erotic symbol in his Catullus 2. The relationship between an albatross and a sailor is the central theme of Samuel Taylor Coleridge's The Rime of the Ancient Mariner, which led to the use of the term as a metaphor for a 'burden'. Other English metaphors derive from birds; vulture funds and vulture investors, for instance, take their name from the scavenging vulture. Aircraft, particularly military aircraft, are frequently named after birds. The predatory nature of raptors makes them popular choices for fighter aircraft such as the F-16 Fighting Falcon and the Harrier Jump Jet, while the names of seabirds may be chosen for aircraft primarily used by naval forces, such as the HU-16 Albatross and the V-22 Osprey.
Perceptions of bird species vary across cultures. Owls are associated with bad luck, witchcraft, and death in parts of Africa, but are regarded as wise across much of Europe. Hoopoes were considered sacred in Ancient Egypt and symbols of virtue in Persia, but were thought of as thieves across much of Europe and harbingers of war in Scandinavia. In heraldry, birds, especially eagles, often appear in coats of arms. In vexillology, birds are a popular choice on flags. Birds feature in the flag designs of 17 countries and numerous subnational entities and territories. Birds are used by nations to symbolise a country's identity and heritage, with 91 countries officially recognising a national bird. Birds of prey are highly represented, though some nations have chosen other species of birds, with parrots being popular among smaller, tropical nations.
In music
In music, birdsong has influenced composers and musicians in several ways: they can be inspired by birdsong; they can intentionally imitate bird song in a composition, as Vivaldi, Messiaen, and Beethoven did, along with many later composers; they can incorporate recordings of birds into their works, as Ottorino Respighi first did; or like Beatrice Harrison and David Rothenberg, they can duet with birds.
A 2023 archaeological excavation of a 10,000-year-old site in Israel yielded hollow wing bones of coots and ducks with perforations made on the side that are thought to have allowed them to be used as flutes or whistles possibly used by Natufian people to lure birds of prey.
Threats and conservation
Human activities have caused population decreases or extinction in many bird species. Over a hundred bird species have gone extinct in historical times, although the most dramatic human-caused avian extinctions, eradicating an estimated 750–1800 species, occurred during the human colonisation of Melanesian, Polynesian, and Micronesian islands. Many bird populations are declining worldwide, with 1,227 species listed as threatened by BirdLife International and the IUCN in 2009. There have been long-term declines in North American bird populations, with an estimated loss of 2.9 billion breeding adults, about 30% of the total, since 1970.
The most commonly cited human threat to birds is habitat loss. Other threats include overhunting, accidental mortality due to collisions with buildings or vehicles, long-line fishing bycatch, pollution (including oil spills and pesticide use), competition and predation from nonnative invasive species, and climate change.
Governments and conservation groups work to protect birds, either by passing laws that preserve and restore bird habitat or by establishing captive populations for reintroductions. Such projects have produced some successes; one study estimated that conservation efforts saved 16 species of bird that would otherwise have gone extinct between 1994 and 2004, including the California condor and Norfolk parakeet.
Human activities have allowed the expansion of a few temperate area species, such as the barn swallow and European starling. In the tropics and sub-tropics, relatively more species are expanding due to human activities, particularly the spread of crops such as rice, whose expansion in south Asia has benefitted at least 64 bird species, though it may have harmed many more.
See also
Biodiversity loss
Climate change and birds
Glossary of bird terms
List of individual birds
Ornithology
List of bird genera
References
Further reading
All the Birds of the World, Lynx Edicions, 2020.
Del Hoyo, Josep; Elliott, Andrew; Sargatal, Jordi (eds.). Handbook of the Birds of the World (17-volume encyclopaedia), Lynx Edicions, Barcelona, 1992–2010. (Vol. 1: Ostrich to Ducks, etc.).
Lederer, Roger; Carol Burr (2014). Latein für Vogelbeobachter [Latin for birdwatchers: over 3,000 ornithological terms explained and explored], translated from the English by Susanne Kuhlmannn-Krieg, DuMont, Cologne.
National Geographic Field Guide to Birds of North America, National Geographic, 7th edition, 2017.
National Audubon Society Field Guide to North American Birds: Eastern Region, National Audubon Society, Knopf.
National Audubon Society Field Guide to North American Birds: Western Region, National Audubon Society, Knopf.
Svensson, Lars (2010). Birds of Europe, Princeton University Press, second edition.
Svensson, Lars (2010). Collins Bird Guide: The Most Complete Guide to the Birds of Britain and Europe, Collins, 2nd edition.
External links
Birdlife International – Dedicated to bird conservation worldwide; has a database with about 250,000 records on endangered bird species.
Bird biogeography
Birds and Science from the National Audubon Society
Cornell Lab of Ornithology
Essays on bird biology
North American Birds for Kids
Ornithology
Sora – Searchable online research archive; archives of the following ornithological journals: The Auk, Condor, Journal of Field Ornithology, North American Bird Bander, Studies in Avian Biology, Pacific Coast Avifauna, and the Wilson Bulletin.
The Internet Bird Collection – A free library of videos of the world's birds
The Institute for Bird Populations, California
List of field guides to birds, from the International Field Guides database
RSPB bird identifier – Interactive identification of all UK birds
Are Birds Really Dinosaurs? — University of California Museum of Paleontology.
Animal classes
Ornithurae
Extant Late Cretaceous first appearances
Feathered dinosaurs
Extant Maastrichtian first appearances
Taxa named by Carl Linnaeus | Bird | ["Biology"] | 16,106 | ["Birds", "Animals"] |
3,419 | https://en.wikipedia.org/wiki/Bay%20leaf | The bay leaf is an aromatic leaf commonly used as a herb in cooking. It can be used whole, either dried or fresh, in which case it is removed from the dish before consumption, or less commonly used in ground form. The flavor that a bay leaf imparts to a dish has not been universally agreed upon, but many agree it is a subtle addition.
Bay leaves come from various plants and are used for their distinctive flavor and fragrance. The most common source is the bay laurel (Laurus nobilis). Other types include California bay laurel, Indian bay leaf, West Indian bay laurel, and Mexican bay laurel. Bay leaves contain essential oils, such as eucalyptol, terpenes, and methyleugenol, which contribute to their taste and aroma.
Bay leaves are used in cuisines including Indian, Filipino, European, and Caribbean. They are typically used in soups, stews, meat, seafood, and vegetable dishes. The leaves should be removed from the cooked food before eating as they can be abrasive in the digestive tract.
Bay leaves are used as an insect repellent in pantries and as an active ingredient in killing jars for entomology. In Eastern Orthodoxy liturgy, they are used to symbolize Jesus' destruction of Hades and freeing of the dead.
While some visually similar plants have poisonous leaves, bay leaves are not toxic. However, they remain stiff even after cooking and may pose a choking hazard or cause harm to the digestive tract if swallowed whole or in large pieces. Canadian food and drug regulations set specific standards for bay leaves, including limits on ash content, moisture levels, and essential oil content.
Sources
Bay leaves come from several plants, such as:
Bay laurel (Laurus nobilis, Lauraceae). Fresh or dried bay leaves are used in cooking for their distinctive flavour and fragrance. The leaves should be removed from the cooked food before eating (see safety section below). The leaves are often used to flavour soups, stews, braises and pâtés in many countries. The fresh leaves are very mild and do not develop their full flavour until several weeks after picking and drying.
California bay leaf. The leaf of the California bay tree (Umbellularia californica, Lauraceae), also known as California laurel, Oregon myrtle, and pepperwood, is similar to the Mediterranean bay laurel but contains the toxin umbellulone, which can cause methemoglobinemia.
Indian bay leaf or malabathrum (Cinnamomum tamala, Lauraceae) differs from bay laurel leaves, which are shorter and light- to medium-green in colour, with one large vein down the length of the leaf. Indian bay leaves are about twice as long and wider, usually olive green in colour, and have three veins running the length of the leaf. Culinarily, Indian bay leaves are quite different, having a fragrance and taste similar to cinnamon (cassia) bark, but milder.
Indonesian bay leaf or Indonesian laurel (salam leaf, Syzygium polyanthum, Myrtaceae) is not commonly found outside Indonesia; this herb is applied to meat and, less often, to rice and to vegetables.
West Indian bay leaf, the leaf of the West Indian bay tree (Pimenta racemosa, Myrtaceae) is used culinarily (especially in Caribbean cuisine) and to produce the cologne called bay rum.
Mexican bay leaf (Litsea glaucescens, Lauraceae).
Chemical constituents
The leaves of the European / Mediterranean plant Laurus nobilis contain about 1.3% essential oils (ol. lauri folii), consisting of 45% eucalyptol, 12% other terpenes, 8–12% terpinyl acetate, 3–4% sesquiterpenes, 3% methyleugenol, and other α- and β-pinenes, phellandrene, linalool, geraniol, terpineol, and also contain lauric acid.
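As a rough, back-of-the-envelope illustration of what these percentages mean in absolute terms (the calculation below is a sketch derived from the figures above, not a value reported in the source):

```python
# Approximate eucalyptol yield from dried Laurus nobilis leaves, using the
# composition figures quoted above (~1.3% essential oil, ~45% of it eucalyptol).
leaf_mass_g = 1000.0            # 1 kg of dried leaves
essential_oil_fraction = 0.013
eucalyptol_fraction = 0.45

eucalyptol_g = leaf_mass_g * essential_oil_fraction * eucalyptol_fraction
print(f"~{eucalyptol_g:.1f} g of eucalyptol per kg of leaf")  # ~5.9 g
```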
Taste and aroma
If eaten whole, Laurus nobilis bay leaves are pungent and have a sharp, bitter taste. As with many spices and flavourings, the fragrance of the bay leaf is more noticeable than its taste. When the leaf is dried, the aroma is herbal, slightly floral, and somewhat similar to oregano and thyme. Myrcene, a component of many essential oils used in perfumery, can be extracted from this bay leaf. They also contain eugenol.
Uses
In Indian cuisine, bay laurel leaves are sometimes used in place of Indian bay leaf, although they have a different flavour. They are most often used in rice dishes like biryani and as an ingredient in garam masala. Bay leaves are called tej patta (तेज पत्ता) in Hindi, tejpātā (তেজপাতা) in Bengali and tej pat (তেজ পাত) in Assamese, and are usually rendered into English as tej patta.
In the Philippines, dried bay laurel leaves are used in several Filipino dishes, such as menudo, beef pares, and adobo. Bay leaves were used for flavouring by the ancient Greeks. They are a fixture in the cooking of many European cuisines (particularly those of the Mediterranean), as well as in the Americas. They are used in soups, stews, brines, meat, seafood, vegetable dishes, and sauces. The leaves also flavour many classic French and Italian dishes. The leaves are most often used whole (sometimes in a bouquet garni) and removed before serving (they can be abrasive in the digestive tract). Thai and Laotian cuisine employs bay leaf in a few Arab-influenced dishes, notably massaman curry.
Bay leaves can also be crushed or ground before cooking. Crushed bay leaves impart more fragrance than whole leaves, but are more difficult to remove and thus they are often used in a muslin bag or tea infuser. Ground bay laurel may be substituted for whole leaves and does not need to be removed, but it is much stronger.
To brew tea, bay leaves are best boiled for a brief period—typically 3 minutes—to prevent bitterness, as prolonged boiling may overpower the tea's flavor. Fresh bay leaves impart a stronger aroma, while dried leaves require longer steeping for a similar effect.
Bay leaves are also used in the making of jerk chicken in the Caribbean Islands. The bay leaves are soaked and placed on the cool side of the grill. Pimento sticks are placed on top of the leaves, and the chicken is placed on top and smoked. The leaves are also added whole to soups, stews, and other Caribbean dishes.
Bay leaves can also be used scattered in a pantry to repel meal moths, flies, and cockroaches. Mediouni-Ben Jemaa and Tersim 2011 find the essential oil to be usable as an insect repellent.
Bay leaves have been used in entomology as the active ingredient in killing jars. The crushed, fresh, young leaves are put into the jar under a layer of paper. The vapours they release kill insects slowly but effectively and keep the specimens relaxed and easy to mount. The leaves also discourage the growth of moulds. They are not effective for killing large beetles and similar specimens, but insects that have been killed in a cyanide killing jar can be transferred to a laurel jar to await mounting. There is confusion in the literature about whether Laurus nobilis is a source of cyanide to any practical extent, but there is no evidence that cyanide is relevant to its value in killing jars. It certainly is rich in various essential oil components that could incapacitate insects in high concentrations; such compounds include 1,8-cineole, alpha-terpinyl acetate, and methyl eugenol. It is also unclear to what extent the alleged effect of cyanide released by the crushed leaves has been mis-attributed to Laurus nobilis through confusion with the unrelated Prunus laurocerasus, the so-called cherry laurel, which certainly does contain dangerous concentrations of cyanogenic glycosides, together with the enzymes that generate hydrogen cyanide from the glycosides when the leaf is physically damaged.
Bay leaves are used in Eastern Orthodoxy liturgy. To mark Jesus' destruction of Hades and freeing of the dead, parishioners throw bay leaves and flowers into the air, letting them flutter to the ground.
Safety
Some members of the laurel family, as well as the unrelated but visually similar mountain laurel and cherry laurel, have leaves that are poisonous to humans and livestock. While these plants are not sold anywhere for culinary use, their visual similarity to bay leaves has led to the oft-repeated belief that bay leaves should be removed from food after cooking because they are poisonous. This is not true; bay leaves may be eaten without toxic effect. However, they remain unpleasantly stiff even after thorough cooking, and if swallowed whole or in large pieces they may pose a risk of harming the digestive tract or causing choking. Thus, most recipes that use bay leaves will recommend their removal after the cooking process has finished.
Canadian food and drug regulations
The Canadian government requires that ground bay leaves contain no more than 4.5% total ash and no more than 0.5% ash insoluble in hydrochloric acid. To be considered dried, they must contain 7% moisture or less. The oil content cannot be less than 1 millilitre per 100 grams of the spice.
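A minimal sketch of how these thresholds could be checked programmatically; the limits are those quoted above, but the helper function and the sample values are hypothetical:

```python
# Hypothetical compliance check against the Canadian limits described above.
LIMITS = {
    "total_ash_pct_max": 4.5,          # ground bay leaves
    "acid_insoluble_ash_pct_max": 0.5,
    "moisture_pct_max": 7.0,           # to be considered dried
    "oil_ml_per_100g_min": 1.0,
}

def is_compliant(sample: dict) -> bool:
    """Return True if a lab result satisfies every limit."""
    return (
        sample["total_ash_pct"] <= LIMITS["total_ash_pct_max"]
        and sample["acid_insoluble_ash_pct"] <= LIMITS["acid_insoluble_ash_pct_max"]
        and sample["moisture_pct"] <= LIMITS["moisture_pct_max"]
        and sample["oil_ml_per_100g"] >= LIMITS["oil_ml_per_100g_min"]
    )

# Made-up lab results for one lot of ground bay leaves.
sample = {"total_ash_pct": 4.1, "acid_insoluble_ash_pct": 0.3,
          "moisture_pct": 6.2, "oil_ml_per_100g": 1.2}
print(is_compliant(sample))  # True
```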
References
Herbs
Leaves
Non-timber forest products
Plant common names | Bay leaf | ["Biology"] | 1,964 | ["Plants", "Plant common names", "Common names of organisms"] |
3,430 | https://en.wikipedia.org/wiki/Bulletin%20board%20system | A bulletin board system (BBS), also called a computer bulletin board service (CBBS), is a computer server running software that allows users to connect to the system using a terminal program. Once logged in, the user performs functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users through public message boards and sometimes via direct chatting. In the early 1980s, message networks such as FidoNet were developed to provide services such as NetMail, which is similar to internet-based email.
Many BBSes also offered online games in which users could compete with each other. BBSes with multiple phone lines often provided chat rooms, allowing users to interact with each other. Bulletin board systems were in many ways a precursor to the modern form of the World Wide Web, social networks, and other aspects of the Internet. Low-cost, high-performance asynchronous modems drove the use of online services and BBSes through the early 1990s. InfoWorld estimated that there were 60,000 BBSes serving 17 million users in the United States alone in 1994, a collective market much larger than major online services such as CompuServe.
The introduction of inexpensive dial-up internet service and the Mosaic web browser offered ease of use and global access that BBS and online systems did not provide, and led to a rapid crash in the market starting in late 1994 to early 1995. Over the next year, many of the leading BBS software providers went bankrupt and tens of thousands of BBSes disappeared. Today, BBSing survives largely as a nostalgic hobby in most parts of the world, but it is still a popular form of communication for middle-aged Taiwanese users (see PTT Bulletin Board System). Most surviving BBSes are accessible over Telnet and typically offer free email accounts, FTP services, and IRC. Some offer access through packet-switched networks or packet radio connections.
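To make "accessible over Telnet" concrete, the sketch below opens a plain TCP connection to port 23 and prints the board's welcome screen; the hostname is a placeholder, not a real system named in this article:

```python
import socket

# Peek at a (hypothetical) telnet-accessible BBS: connect to TCP port 23
# and print the first chunk of the welcome screen.
HOST, PORT = "bbs.example.org", 23  # placeholder address

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    banner = sock.recv(4096)
    # A full telnet client would answer option-negotiation bytes (IAC, 0xFF);
    # here they may simply appear as stray characters in the output.
    print(banner.decode("cp437", errors="replace"))
```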
History
Precursors
A precursor to the public bulletin board system was Community Memory, which started in August 1973 in Berkeley, California. Microcomputers did not exist at that time, and modems were both expensive and slow. Community Memory ran on a mainframe computer and was accessed through terminals located in several San Francisco Bay Area neighborhoods. The poor quality of the original modem connecting the terminals to the mainframe prompted Community Memory hardware person, Lee Felsenstein, to invent the Pennywhistle modem, whose design was influential in the mid-1970s.
Community Memory allowed the user to type messages into a computer terminal after inserting a coin, and offered a "pure" bulletin board experience with public messages only (no email or other features). It did offer the ability to tag messages with keywords, which the user could use in searches. The system acted primarily in the form of a buy and sell system with the tags taking the place of the more traditional classifications. But users found ways to express themselves outside these bounds, and the system spontaneously created stories, poetry and other forms of communications. The system was expensive to operate, and when their host machine became unavailable and a new one could not be found, the system closed in January 1975.
Similar functionality was available to most mainframe users, which might be considered a sort of ultra-local BBS when used in this fashion. Commercial systems, expressly intended to offer these features to the public, became available in the late 1970s and formed the online service market that lasted into the 1990s. One particularly influential example was PLATO, which had thousands of users by the late 1970s, many of whom used the messaging and chat room features of the system in the same way that would later become common on BBSes.
The first BBSes
Early modems were generally either expensive or very simple devices using acoustic couplers to handle telephone operation. The user would pick up the phone, dial a number, then press the handset into rubber cups on the top of the modem. Disconnecting at the end of a call required the user to pick up the handset and return it to the phone. Examples of direct-connecting modems did exist, and these often allowed the host computer to send it commands to answer or hang up calls, but these were very expensive devices used by large banks and similar companies.
With the introduction of microcomputers with expansion slots, like the S-100 bus machines and Apple II, it became possible for the modem to communicate instructions and data on separate lines. These machines typically only supported asynchronous communications, and synchronous modems were much more expensive than asynchronous modems. A number of modems of this sort were available by the late 1970s. This made the BBS possible for the first time, as it allowed software on the computer to pick up an incoming call, communicate with the user, and then hang up the call when the user logged off.
The first public dial-up BBS was developed by Ward Christensen and Randy Suess, members of the Chicago Area Computer Hobbyists' Exchange (CACHE). According to an early interview, when Chicago was snowed under during the Great Blizzard of 1978, the two began preliminary work on the Computerized Bulletin Board System, or CBBS. The system came into existence largely through a fortuitous combination of Christensen having a spare S-100 bus computer and an early Hayes internal modem, and Suess's insistence that the machine be placed at his house in Chicago where it would be a local phone call for more users. Christensen patterned the system after the cork board his local computer club used to post information like "need a ride". CBBS officially went online on 16 February 1978. CBBS, which kept a count of callers, reportedly connected 253,301 callers before it was finally retired.
Smartmodem
A key innovation required for the popularization of the BBS was the Smartmodem manufactured by Hayes Microcomputer Products. Internal modems like the ones used by CBBS and similar early systems were usable, but generally expensive due to the manufacturer having to make a different modem for every computer platform they wanted to target. They were also limited to those computers with internal expansion, and could not be used with other useful platforms like video terminals. External modems were available for these platforms but required the phone to be dialed using a conventional handset. Internal modems could be software-controlled to perform outbound and inbound calls, but external modems had only the data pins to communicate with the host system.
Hayes' solution to the problem was to use a small microcontroller to implement a system that examined the data flowing into the modem from the host computer, watching for certain command strings. This allowed commands to be sent to and from the modem using the same data pins as all the rest of the data, meaning it would work on any system that could support even the most basic modems. The Smartmodem could pick up the phone, dial numbers, and hang up again, all without any operator intervention. The Smartmodem was not necessary for BBS use but made overall operation dramatically simpler. It also improved usability for the caller, as most terminal software allowed different phone numbers to be stored and dialed on command, allowing the user to easily connect to a series of systems.
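The command strings the Smartmodem watched for grew into the well-known Hayes "AT" command set. The sketch below shows how host software might drive such a modem from a serial port using the pyserial library; the device path and telephone number are placeholders:

```python
import time
import serial  # pyserial

# Drive a Hayes-compatible modem with AT commands sent over the same serial
# line as ordinary data, as described above (device and number are placeholders).
modem = serial.Serial("/dev/ttyUSB0", 2400, timeout=5)

def command(cmd: bytes) -> bytes:
    """Send one AT command and return the modem's first response line."""
    modem.write(cmd + b"\r")
    return modem.readline()          # e.g. b"OK\r\n" or b"CONNECT 2400\r\n"

print(command(b"AT"))                # sanity check; the modem should answer "OK"
print(command(b"ATDT5551234"))       # dial 555-1234 using tone dialling
time.sleep(60)                       # ...exchange data with the remote system...
modem.write(b"+++")                  # escape sequence: return to command mode
time.sleep(1)
print(command(b"ATH"))               # hang up
modem.close()
```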
The introduction of the Smartmodem led to the first real wave of BBS systems. Limited in speed and storage capacity, these systems were normally dedicated solely to messaging, private email and public forums. File transfers were extremely slow at these speeds, and file libraries were typically limited to text files containing lists of other BBS systems. These systems attracted a particular type of user who used the BBS as a unique type of communications medium, and when these local systems were crowded from the market in the 1990s, their loss was lamented for many years.
Higher speeds, commercialization
Speed improved with the introduction of 1200 bit/s asynchronous modems in the early 1980s, giving way to 2400 bit/s fairly rapidly. The improved performance led to a substantial increase in BBS popularity. Most of the information was displayed using ordinary ASCII text or ANSI art, but a number of systems attempted character-based graphical user interfaces (GUIs) which began to be practical at 2400 bit/s.
There was a lengthy delay before 9600 bit/s models began to appear on the market. 9600 bit/s was not even established as a strong standard before V.32bis at 14.4 kbit/s took over in the early 1990s. This period also saw a rapid rise in capacity and a dramatic drop in the price of hard drives. By the late 1980s, many BBS systems had significant file libraries, and this gave rise to leeching: users calling BBSes solely for their files. These users would tie up the modem for some time, leaving less time for other users, who got busy signals. The resulting upheaval eliminated many of the pioneering message-centric systems.
This also gave rise to a new class of BBS systems, dedicated solely to file upload and downloads. These systems charged for access, typically a flat monthly fee, compared to the per-hour fees charged by Event Horizons BBS and most online services. Many third-party services were developed to support these systems, offering simple credit card merchant account gateways for the payment of monthly fees, and entire file libraries on compact disk that made initial setup very easy. Early 1990s editions of Boardwatch were filled with ads for single-click install solutions dedicated to these new sysops. While this gave the market a bad reputation, it also led to its greatest success. During the early 1990s, there were a number of mid-sized software companies dedicated to BBS software, and the number of BBSes in service reached its peak.
Towards the early 1990s, the BBS became so popular that it spawned three monthly magazines, Boardwatch, BBS Magazine, and, in Asia and Australia, Chips 'n Bits Magazine, which devoted extensive coverage to the software and technology innovations and the people behind them, and carried listings of US and worldwide BBSes. In addition, in the US, a major monthly magazine, Computer Shopper, carried a list of BBSes along with a brief abstract of each of their offerings.
GUIs
Through the late 1980s and early 1990s, there was considerable experimentation with ways to develop user-friendly interfaces for BBSes. Almost every popular system used ANSI-based color menus to make reading easier on capable hardware and terminal emulators, and most also allowed cursor commands to offer command-line recall and similar features. Another common feature was the use of autocomplete to make menu navigation simpler, a feature that would not re-appear on the Web until decades later.
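The ANSI colour menus mentioned here were nothing more than escape sequences embedded in the character stream. A minimal sketch (the menu text is invented for illustration):

```python
# Build a tiny BBS-style menu from standard ANSI escape sequences.
ESC = "\x1b["
RED, CYAN, YELLOW, RESET = ESC + "31m", ESC + "36m", ESC + "33m", ESC + "0m"
CLEAR, HOME = ESC + "2J", ESC + "H"   # clear screen, cursor to top-left

menu = (
    CLEAR + HOME
    + CYAN + "=== Main Menu ===" + RESET + "\r\n"
    + YELLOW + "[M]" + RESET + " Message boards\r\n"
    + YELLOW + "[F]" + RESET + " File library\r\n"
    + YELLOW + "[C]" + RESET + " Chat with the sysop\r\n"
    + RED + "[G]" + RESET + " Goodbye (log off)\r\n"
)
print(menu, end="")
```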
A number of systems also made forays into GUI-based interfaces, either using character graphics sent from the host, or using custom GUI-based terminal systems. The latter initially appeared on the Macintosh platform, where TeleFinder and FirstClass became very popular. FirstClass offered a host of features that would be difficult or impossible under a terminal-based solution, including bi-directional information flow and non-blocking operation that allowed the user to exchange files in both directions while continuing to use the message system and chat, all in separate windows. Will Price's "Hermes", released in 1988, combined a familiar PC style with a Macintosh GUI interface. (Hermes was already "venerable" by 1994, although the Hermes II release remained popular.) On the Amiga, Skypix featured a complete markup language. It used a standardized set of icons to indicate mouse-driven commands available online and to recognize different filetypes present on BBS storage media. It was capable of transmitting data like images, audio files, and audio clips between users linked to the same BBS, or off-line if the BBS was in the circuit of the FidoNet organization.
On the PC, efforts were more oriented to extensions of the original terminal concept, with the GUI being described in the information on the host. One example was the Remote Imaging Protocol, essentially a picture description system, which remained relatively obscure. Probably the ultimate development of this style of operation was the dynamic page implementation of the University of Southern California BBS (USCBBS) by Susan Biddlecomb, which predated the implementation of the HTML Dynamic web page. A complete Dynamic web page implementation was accomplished using TBBS with a TDBS add-on presenting a complete menu system individually customized for each user.
Rise of the Internet and decline of BBS
The demand for complex ANSI and ASCII screens and larger file transfers taxed available channel capacity, which in turn increased demand for faster modems. 14.4 kbit/s modems were standard for a number of years while various companies attempted to introduce non-standard systems with higher performance, normally about 19.2 kbit/s. Another delay followed due to a long V.34 standards process before 28.8 kbit/s was released, only to be quickly replaced by 33.6 kbit/s, and then 56 kbit/s.
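As a rough illustration of why each speed step mattered (a back-of-the-envelope calculation, not figures reported at the time), transfer time scales inversely with modem speed. The short Python sketch below assumes roughly ten bits on the wire per byte of payload and ignores compression and protocol overhead:

```python
# Approximate transfer times for a 1 MB file at common modem speeds.
# Assumes ~10 bits on the wire per payload byte (start/stop bits and
# framing) and ignores compression, line noise, and protocol overhead.
FILE_BYTES = 1_000_000

for kbps in (1.2, 2.4, 9.6, 14.4, 28.8, 33.6, 56.0):
    seconds = FILE_BYTES * 10 / (kbps * 1000)
    print(f"{kbps:5.1f} kbit/s -> {seconds / 60:5.1f} minutes")
```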
These increasing speeds had the side effect of dramatically reducing the noticeable effects of channel efficiency. When modems were slow, considerable effort was put into developing the most efficient protocols and display systems possible. TCP/IP ran slowly over 1200 bit/s modems, but 56 kbit/s modems could carry the protocol suite at usable speeds. Dial-up Internet service became widely available to the general public outside of universities and research laboratories in the mid-1990s, and connectivity was included in most general-use operating systems by default as Internet access became popular.
These developments together resulted in the sudden obsolescence of bulletin board technology in 1995 and the collapse of its supporting market. Technically, Internet service offered an enormous advantage over BBS systems, as a single connection to the user's Internet service provider allowed them to contact services around the world. In comparison, BBS systems relied on a direct point-to-point connection, so even dialing multiple local systems required multiple phone calls. Internet protocols also allowed a single connection to be used to contact multiple services simultaneously; for example, downloading files from an FTP library while checking the weather on a local news website. Even with a shell account, it was possible to multitask using job control or a terminal multiplexer such as GNU Screen. In comparison, a connection to a BBS allowed access only to the information on that system.
Estimating numbers
According to the FidoNet Nodelist, BBSes reached their peak usage around 1996, which was the same period that the World Wide Web and AOL became mainstream. BBSes rapidly declined in popularity thereafter, and were replaced by systems using the Internet for connectivity. Some of the larger commercial BBSes, such as MaxMegabyte and ExecPC BBS, evolved into Internet service providers.
The website textfiles.com serves as an archive that documents the history of the BBS. The historical BBS list on textfiles.com contains over 105,000 BBSes that have existed over a span of 20 years in North America alone. The owner of textfiles.com, Jason Scott, also produced BBS: The Documentary, a DVD film that chronicles the history of the BBS and features interviews with well-known people (mostly from the United States) from the heyday BBS era.
In the 2000s, most traditional BBS systems migrated to the Internet using Telnet or SSH protocols. As of September 2022, between 900 and 1,000 BBSes are thought to be active via the Internet, with fewer than 30 of these being of the traditional "dial-up" (modem) variety.
Software and hardware
Unlike modern websites and online services that are typically hosted by third-party companies in commercial data centers, BBS computers (especially for smaller boards) were typically operated from the system operator's home. As such, access could be unreliable, and in many cases, only one user could be on the system at a time. Only larger BBSes with multiple phone lines using specialized hardware, multitasking software, or a LAN connecting multiple computers, could host multiple simultaneous users.
The first BBSes each used their own unique software, quite often written entirely or at least customized by the system operators themselves, running on early S-100 bus microcomputer systems such as the Altair 8800, IMSAI 8080 and Cromemco under the CP/M operating system. Soon after, BBS software was being written for all of the major home computer systems of the late 1970s era: the Apple II, Atari 8-bit computers, Commodore PET, TI-99/4A, and TRS-80 being some of the most popular.
In 1981, the IBM Personal Computer was introduced and MS-DOS soon became the operating system on which the majority of BBS programs were run. RBBS-PC, ported over from the CP/M world, and Fido BBS, developed by Tom Jennings (who later founded FidoNet) were the first notable MS-DOS BBS programs. Many successful commercial BBS programs were developed, such as PCBoard BBS, RemoteAccess BBS, Magpie and Wildcat! BBS. Popular freeware BBS programs included Telegard BBS and Renegade BBS, which both had early origins from leaked WWIV BBS source code.
BBS systems on other systems remained popular, especially home computers, largely because they catered to the audience of users running those machines. The ubiquitous Commodore 64 (introduced in 1982) was a common platform in the 1980s. Popular commercial BBS programs were Blue Board, Ivory BBS, Color64 and CNet 64. There was also a devoted contingent of BBS users on TI-99/4A computers, long after Texas Instruments had discontinued the computer in the aftermath of their price war with Commodore. Popular BBSes for the TI-99/4A included Techie, TIBBS (Texas Instruments Bulletin Board System), TI-COMM, and Zyolog. In the early 1990s, a small number of BBSes were also running on the Commodore Amiga. Popular BBS software for the Amiga were ABBS, Amiexpress, C-Net, StormforceBBS, Infinity and Tempest. There was also a small faction of devoted Atari BBSes that used the Atari 800, then the 800XL, and eventually the 1040ST. The earlier machines generally lacked hard drive capabilities, which limited them primarily to messaging.
MS-DOS continued to be the most popular operating system for BBS use up until the mid-1990s, and in the early years, most multi-node BBSes were running under a DOS-based multitasker such as DESQview or consisted of multiple computers connected via a LAN. In the late 1980s, a handful of BBS developers implemented multitasking communications routines inside their software, allowing multiple phone lines and users to connect to the same BBS computer. These included Galacticomm's MajorBBS (later WorldGroup), eSoft The Bread Board System (TBBS), and Falken. Other popular BBSes were Maximus and Opus, with some associated applications such as BinkleyTerm being based on characters from the Berkeley Breathed cartoon strip Bloom County. Though most BBS software had been written in BASIC or Pascal (with some low-level routines written in assembly language), the C language was starting to gain popularity.
By 1995, many of the DOS-based BBSes had begun switching to modern multitasking operating systems, such as OS/2, Windows 95, and Linux. One of the first graphics-based BBS applications was Excalibur BBS, whose low-bandwidth applications required its own client for efficiency. This led to one of the earliest implementations of electronic commerce in 1996, with replication of partner stores around the globe. TCP/IP networking allowed most of the remaining BBSes to evolve and include Internet hosting capabilities. Recent BBS software, such as Synchronet, Mystic BBS, EleBBS, DOC, Magpie or Wildcat! BBS, provides access using the Telnet protocol rather than dialup, or by using legacy DOS-based BBS software with a FOSSIL-to-Telnet redirector such as NetFoss.
Presentation
BBSes were generally text-based, rather than GUI-based, and early BBSes conversed using the simple ASCII character set. However, some home computer manufacturers extended the ASCII character set to take advantage of the advanced color and graphics capabilities of their systems. BBS software authors included these extended character sets in their software, and terminal program authors included the ability to display them when a compatible system was called. Atari's native character set was known as ATASCII, while most Commodore BBSes supported PETSCII. PETSCII was also supported by the nationwide online service Quantum Link.
The use of these custom character sets was generally incompatible between manufacturers. Unless a caller was using terminal emulation software written for, and running on, the same type of system as the BBS, the session would simply fall back to simple ASCII output. For example, a Commodore 64 user calling an Atari BBS would use ASCII rather than the native character set of either. As time progressed, most terminal programs began using the ASCII standard, but could use their native character set if it was available.
COCONET, a BBS system made by Coconut Computing, Inc., was released in 1988 and only supported a GUI (no text interface was initially available but eventually became available around 1990), and worked in EGA/VGA graphics mode, which made it stand out from text-based BBS systems. COCONET's bitmap and vector graphics and support for multiple type fonts were inspired by the PLATO system, and the graphics capabilities were based on what was available in the Borland Graphics Interface library. A competing approach called Remote Imaging Protocol (RIP) emerged and was promoted by Telegrafix in the early to mid-1990s but it never became widespread. A teletext technology called NAPLPS was also considered, and although it became the underlying graphics technology behind the Prodigy service, it never gained popularity in the BBS market. There were several GUI-based BBSes on the Apple Macintosh platform, including TeleFinder and FirstClass, but these were mostly confined to the Mac market.
In the UK, the BBC Micro based OBBS software, available from Pace for use with their modems, optionally allowed for color and graphics using the Teletext-based graphics mode available on that platform. Other systems used the Viewdata protocols made popular in the UK by British Telecom's Prestel service and by the on-line magazine Micronet 800, which was busy giving away modems with its subscriptions.
Over time, terminal manufacturers started to support ANSI X3.64 in addition to or instead of proprietary terminal control codes, e.g., color, cursor positioning.
The most popular form of online graphics was ANSI art, which combined the IBM Extended ASCII character set's blocks and symbols with ANSI escape sequences to allow changing colors on demand, provide cursor control and screen formatting, and even basic musical tones. During the late 1980s and early 1990s, most BBSes used ANSI to make elaborate welcome screens, and colorized menus, and thus, ANSI support was a sought-after feature in terminal client programs. The development of ANSI art became so popular that it spawned an entire BBS "artscene" subculture devoted to it.
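The colour changes and cursor control described above are carried by in-band escape sequences. The following minimal Python sketch (the colours, coordinates, and banner text are arbitrary examples, and a terminal that understands ANSI sequences is assumed) shows the kind of bytes a BBS would emit to build such a screen:

```python
ESC = "\x1b["  # ANSI Control Sequence Introducer

def color(code: int) -> str:
    """Select a display attribute; 30-37 are the basic foreground colours."""
    return f"{ESC}{code}m"

def goto(row: int, col: int) -> str:
    """Move the cursor to a 1-based row/column position."""
    return f"{ESC}{row};{col}H"

# Clear the screen, draw a banner in yellow at row 2, then reset attributes.
print(f"{ESC}2J", end="")
print(goto(2, 10) + color(33) + "*** WELCOME TO THE BOARD ***" + f"{ESC}0m")
```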
The Amiga Skyline BBS software in 1988 featured a script markup language communication protocol called Skypix which was capable of giving the user a complete graphical interface, featuring rich graphics, changeable fonts, mouse-controlled actions, animations and sound.
Today, most BBS software that is still actively supported, such as Worldgroup, Wildcat! BBS and Citadel/UX, is Web-enabled, and the traditional text interface has been replaced (or operates concurrently) with a Web-based user interface. For those more nostalgic for the true BBS experience, NetSerial (Windows) or DOSBox (Windows/*nix) can redirect DOS COM port software to telnet, allowing connections to Telnet BBSes using 1980s and 1990s era modem terminal emulation software, like Telix, Terminate, Qmodem and Procomm Plus. Modern 32-bit terminal emulators such as mTelnet and SyncTerm include native telnet support.
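At the network level, reaching one of these Telnet-accessible boards is simply an ordinary TCP session, usually on port 23. The sketch below is a minimal illustration only: the hostname is a placeholder rather than a real board, and Telnet option-negotiation bytes are left unanswered.

```python
import socket

HOST = "bbs.example.org"  # placeholder address, not a real board
PORT = 23                 # conventional Telnet port

# Open a TCP connection and show the board's welcome screen.
# A full client would also answer Telnet IAC option negotiation,
# which this sketch simply ignores.
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    banner = sock.recv(4096)
    # Classic boards send CP437 text with ANSI codes, so decode permissively.
    print(banner.decode("cp437", errors="replace"))
```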
Content and access
Since most early BBSes were run by computer hobbyists, content was largely technical, with user communities revolving around hardware and software discussions.
As the BBS phenomenon grew, so did the popularity of special interest boards. Bulletin Board Systems could be found for almost every hobby and interest. Popular interests included politics, religion, music, dating, and alternative lifestyles. Many system operators also adopted a theme in which they customized their entire BBS (welcome screens, prompts, menus, and so on) to reflect that theme. Common themes were based on fantasy, or were intended to give the user the illusion of being somewhere else, such as in a sanatorium, wizard's castle, or on a pirate ship.
In the early days, the file download library consisted of files that the system operators obtained themselves from other BBSes and friends. Many BBSes inspected every file uploaded to their public file download library to ensure that the material did not violate copyright law. As time went on, shareware CD-ROMs were sold with up to thousands of files on each CD-ROM. Small BBSes copied each file individually to their hard drive. Some systems used a CD-ROM drive to make the files available. Advanced BBSes used Multiple CD-ROM disc changer units that switched 6 CD-ROM disks on demand for the caller(s). Large systems used all 26 DOS drive letters with multi-disk changers housing tens of thousands of copyright-free shareware or freeware files available to all callers. These BBSes were generally more family-friendly, avoiding the seedier side of BBSes. Access to these systems varied from single to multiple modem lines with some requiring little or no confirmed registration.
Some BBSes, called elite, WaReZ, or pirate boards, were exclusively used for distributing cracked software, phreaking materials, and other questionable or unlawful content. These BBSes often had multiple modems and phone lines, allowing several users to upload and download files at once. Most elite BBSes used some form of new user verification, where new users would have to apply for membership and attempt to prove that they were not a law enforcement officer or a lamer. The largest elite boards accepted users by invitation only. Elite boards also spawned their own subculture and gave rise to the slang known today as leetspeak.
Another common type of board was the support BBS run by a manufacturer of computer products or software. These boards were dedicated to supporting users of the company's products with question and answer forums, news and updates, and downloads. Most of them were not a free call. Today, these services have moved to the Web.
Some general-purpose Bulletin Board Systems had special levels of access that were given to those who paid extra money, uploaded useful files or knew the system operator personally. These specialty and pay BBSes usually had something unique to offer their users, such as large file libraries, warez, pornography, chat rooms or Internet access.
Pay BBSes such as The WELL and Echo NYC (now Internet forums rather than dial-up), ExecPC, PsudNetwork and MindVox (which folded in 1996) were admired for their close, friendly communities and quality discussion forums. However, many free BBSes also maintained close communities, and some even had annual or bi-annual events where users would travel great distances to meet face-to-face with their on-line friends. These events were especially popular with BBSes that offered chat rooms.
Some of the BBSes that provided access to illegal content faced opposition. On July 12, 1985, in conjunction with a credit card fraud investigation, the Middlesex County, New Jersey Sheriff's department raided and seized The Private Sector BBS, which was the official BBS for grey hat hacker quarterly 2600 Magazine at the time. The notorious Rusty n Edie's BBS, in Boardman, Ohio, was raided by the FBI in January 1993 for trading unlicensed software, and later sued by Playboy for copyright infringement in November 1997. In Flint, Michigan, a 21-year-old man was charged with distributing child pornography through his BBS in March 1996.
Networks
Most early BBSes operated as individual systems. Information contained on that BBS never left the system, and users would only interact with the information and user community on that BBS alone. However, as BBSes became more widespread, there evolved a desire to connect systems together to share messages and files with distant systems and users. The largest such network was FidoNet.
As it was prohibitively expensive for the hobbyist system operator to have a dedicated connection to another system, FidoNet was developed as a store-and-forward network. Private email (Netmail), public message boards (Echomail) and eventually even file attachments on a FidoNet-capable BBS would be bundled into one or more archive files over a set time interval. These archive files were then compressed with ARC or ZIP and forwarded to (or polled by) another nearby node or hub via a dialup Xmodem session. Messages would be relayed around various FidoNet hubs until they were eventually delivered to their destination. The hierarchy of FidoNet BBS nodes, hubs, and zones was maintained in a routing table called a Nodelist. Some larger BBSes or regional FidoNet hubs would make several transfers per day, some even to multiple nodes or hubs, and as such, transfers usually occurred at night or in the early morning when toll rates were lowest. In Fido's heyday, sending a Netmail message to a user on a distant FidoNet node, or participating in an Echomail discussion, could take days, especially if any FidoNet nodes or hubs in the message's route only made one transfer call per day.
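The store-and-forward idea can be illustrated with a small sketch. The directory layout, file names, and use of ZIP below are illustrative assumptions only, not the actual FidoNet packet or bundle formats:

```python
import zipfile
from pathlib import Path

OUTBOUND = Path("outbound")      # hypothetical spool of queued netmail/echomail
BUNDLE = Path("0001to0002.zip")  # hypothetical bundle destined for the next hub

# Gather every queued message into one compressed archive; a front-end
# mailer would transfer this bundle to the hub during the nightly call.
packed = 0
with zipfile.ZipFile(BUNDLE, "w", zipfile.ZIP_DEFLATED) as bundle:
    for msg in sorted(OUTBOUND.glob("*.msg")):
        bundle.write(msg, arcname=msg.name)
        packed += 1
print(f"Bundled {packed} message(s) into {BUNDLE}")
```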
FidoNet was platform-independent and would work with any BBS that was written to use it. BBSes that did not have integrated FidoNet capability could usually add it using an external FidoNet front-end mailer such as SEAdog, FrontDoor, BinkleyTerm, InterMail or D'Bridge, and a mail processor such as FastEcho or Squish. The front-end mailer would conduct the periodic FidoNet transfers, while the mail processor would usually run just before and just after the mailer ran. This program would scan for and pack up new outgoing messages, and then unpack, sort and "toss" the incoming messages into a BBS user's local email box or into the BBS's local message bases reserved for Echomail. As such, these mail processors were commonly called "scanner/tosser/packers".
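A toy illustration of the "scan/toss" step follows; the one-line-per-message packet layout is invented for the example and bears no relation to the real FidoNet packet format:

```python
from collections import defaultdict

def toss(packet_lines):
    """Sort incoming messages into per-area message bases.

    Each line is assumed to look like 'AREA_TAG|from|to|subject|body'.
    """
    areas = defaultdict(list)
    for line in packet_lines:
        area, _, rest = line.partition("|")
        areas[area].append(rest)
    return areas

incoming = [
    "COMM|alice|all|Modems|Anyone running V.34 hardware yet?",
    "SYSOP|bob|carol|Nodelist|Updated routing table attached.",
]
for area, messages in toss(incoming).items():
    print(area, "->", len(messages), "message(s)")
```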
Many other BBS networks followed the example of FidoNet, using the same standards and the same software. These were called FidoNet Technology Networks (FTNs). They were usually smaller and targeted at selected audiences. Some networks used QWK doors, and others such as RelayNet (RIME) and WWIVnet used non-Fido software and standards.
Before commercial Internet access became common, these networks of BBSes provided regional and international e-mail and message bases. Some even provided gateways, such as UFGATE, by which members could send and receive e-mail to and from the Internet via UUCP, and many FidoNet discussion groups were shared via gateway to Usenet. Elaborate schemes allowed users to download binary files, search gopherspace, and interact with distant programs, all using plain-text e-mail.
As the volume of FidoNet Mail increased and newsgroups from the early days of the Internet became available, satellite data downstream services became viable for larger systems. The satellite service provided access to FidoNet and Usenet newsgroups in large volumes at a reasonable fee. By connecting a small dish and receiver, a constant downstream of thousands of FidoNet and Usenet newsgroups could be received. The local BBS only needed to upload new outgoing messages via the modem network back to the satellite service. This method drastically reduced phone data transfers while dramatically increasing the number of message forums.
FidoNet is still in use today, though in a much smaller form, and many Echomail groups are still shared with Usenet via FidoNet to Usenet gateways. Widespread abuse of Usenet with spam and pornography has led many of these FidoNet gateways to cease operation completely.
Shareware and freeware
Much of the shareware movement was started via user distribution of software through BBSes. A notable example was Phil Katz's PKARC (and later PKZIP, using the same ".zip" algorithm that WinZip and other popular archivers now use); also other concepts of software distribution like freeware, postcardware like JPEGview and donationware like Red Ryder for the Macintosh first appeared on BBS sites. Doom from id Software and nearly all Apogee Software games were distributed as shareware. The Internet has largely erased the distinction of shareware; most users now download the software directly from the developer's website rather than receiving it from another BBS user "sharing" it. Today, shareware often refers to electronically distributed software from a small developer.
Many commercial BBS software companies that continue to support their old BBS software products switched to the shareware model or made it entirely free. Some companies were able to make the move to the Internet and provide commercial products with BBS capabilities.
Features
A classic BBS had:
A computer
One or more modems
One or more phone lines, with more allowing for increased concurrent users
A BBS software package
A sysop – system operator
A user community
The BBS software usually provides:
Menu systems
One or more message bases
Uploading and downloading of message packets in QWK format using XMODEM, YMODEM or ZMODEM
File areas
Live viewing of all caller activity by the system operator
Voting – opinion booths
Statistics on message posters, top uploaders / downloaders
Online games (usually single player or only a single active player at a given time)
A doorway to third-party online games
Usage auditing capabilities
Multi-user chat (only possible on multi-line BBSes)
Internet email (more common in later Internet-connected BBSes)
Networked message boards
Most modern BBSes allow telnet access over the Internet using a telnet server and a virtual FOSSIL driver.
A "yell for SysOp" page caller side menu item that sounded an audible alarm to the system operator. If chosen, the system operator could then initiate a text-to-text chat with the caller.
Primitive social networking features, such as leaving messages on a user's profile
See also
ANSI art
Free-net
Imageboard
Internet forum
Internet Relay Chat
List of BBS software
List of bulletin board systems
Minitel
Online magazine
PODSnet
Shell account
Terminal emulator
Textboard
User-generated content
Usenet
Notes
References
Citations
Sources
External links
The BBS Corner
The BBS Documentary – (Video Collection)
The Telnet BBS Guide (BBSes available via the Internet)
Textfiles.com – Collection of historical BBS documents, files and history
The BBS organization (longest running bbs services site)
The Lost Civilization of Dial-Up Bulletin Board Systems (The Atlantic, 2016)
Color64 - official project website
Color64 documentation - OasisBBS
American inventions
Online chat
Pre–World Wide Web online services
Internet forums
Computer-mediated communication
Telephony
Telnet
Computer-related introductions in 1978 | Bulletin board system | ["Technology"] | 7,405 | ["Computer-mediated communication", "Information systems", "Computing and society"] |
3,722 | https://en.wikipedia.org/wiki/Berlin%20Wall | The Berlin Wall was a guarded concrete barrier that encircled West Berlin from 1961 to 1989, separating it from East Berlin and the German Democratic Republic (GDR; East Germany). Construction of the Berlin Wall was commenced by the government of the GDR on 13 August 1961. It included guard towers placed along large concrete walls, accompanied by a wide area (later known as the "death strip") that contained anti-vehicle trenches, beds of nails and other defenses. The primary intention for the Wall's construction was to prevent East German citizens from fleeing to the West.
The Soviet Bloc propaganda portrayed the Wall as protecting its population from "fascist elements conspiring to prevent the will of the people" from building a communist state in the GDR. The authorities officially referred to the Berlin Wall as the Anti-Fascist Protection Rampart (Antifaschistischer Schutzwall). Conversely, West Berlin's city government sometimes referred to it as the "Wall of Shame", a term coined by mayor Willy Brandt in reference to the Wall's restriction on freedom of movement. Along with the separate and much longer inner German border, which demarcated the border between East and West Germany, it came to symbolize physically the Iron Curtain that separated the Western Bloc and Soviet satellite states of the Eastern Bloc during the Cold War.
Before the Wall's erection, 3.5 million East Germans circumvented Eastern Bloc emigration restrictions and defected from the GDR, many by crossing over the border from East Berlin into West Berlin; from there they could then travel to West Germany and to other Western European countries. Between 1961 and 1989, the deadly force associated with the Wall prevented almost all such emigration. During this period, over 100,000 people attempted to escape, and over 5,000 people succeeded in escaping over the Wall, with an estimated death toll of those murdered by East German authorities ranging from 136 to more than 200 in and around Berlin.
In 1989, a series of revolutions in nearby Eastern Bloc countries (Poland and Hungary in particular) and the events of the "Pan-European Picnic" set in motion a peaceful development during which the Iron Curtain largely broke down and rulers in the East came under public pressure to cease their repressive policies. After several weeks of civil unrest, the East German government announced on 9 November 1989 that all GDR citizens could visit the FRG and West Berlin. Crowds of East Germans crossed and climbed onto the Wall, joined by West Germans on the other side, and souvenir hunters chipped away parts of the Wall over the next few weeks. The Brandenburg Gate, a few meters from the Berlin Wall, reopened on 22 December 1989, with demolition of the Wall beginning on 13 June 1990 and concluding in 1994. The fall of the Berlin Wall paved the way for German reunification, which formally took place on 3 October 1990.
Background
Post-war Germany
After the end of World War II in Europe, what remained of pre-war Germany west of the Oder-Neisse line was divided into four occupation zones (as per the Potsdam Agreement), each one controlled by one of the four occupying Allied powers: the United States, the United Kingdom, France and the Soviet Union. The capital, Berlin, as the seat of the Allied Control Council, was similarly subdivided into four sectors despite the city's location, which was fully within the Soviet zone.
Within two years, political divisions increased between the Soviets and the other occupying powers. These included the Soviets' refusal to agree to reconstruction plans making post-war Germany self-sufficient, and to a detailed accounting of industrial plants, goods and infrastructure—some of which had already been removed by the Soviets. France, the United Kingdom, the United States, and the Benelux countries later met to combine the non-Soviet zones of Germany into one zone for reconstruction, and to approve the extension of the Marshall Plan.
Eastern Bloc and the Berlin airlift
Following the defeat of Nazi Germany in World War II, the Soviet Union engineered the installation of communist regimes in most of the countries occupied by Soviet military forces at the end of the War, including Poland, Hungary, Czechoslovakia, Bulgaria, Romania, and the GDR, which together with Albania formed the Comecon in 1949 and later a military alliance, the Warsaw Pact. The beginning of the Cold War saw the Eastern Bloc of the Soviet Union confront the Western Bloc of the United States, with the latter grouping becoming largely united in 1949 under NATO and the former grouping becoming largely united in 1955 under the Warsaw Pact. As the Soviet Union already had an armed presence and political domination all over its eastern satellite states by 1955, the pact has been long considered "superfluous", and because of the rushed way in which it was conceived, NATO officials labeled it a "cardboard castle". There was no direct military confrontation between the two organizations; instead, the conflict was fought on an ideological basis and through proxy wars. Both NATO and the Warsaw Pact led to the expansion of military forces and their integration into the respective blocs. The Warsaw Pact's largest military engagement was the Warsaw Pact invasion of Czechoslovakia, its own member state, in August 1968.
Since the end of the War, the USSR installed a Soviet-style regime in the Soviet occupation zone of Germany and later founded the GDR, with the country's political system based on a centrally planned socialist economic model with nationalized means of production, and with repressive secret police institutions, under party dictatorship of the SED (Sozialistische Einheitspartei Deutschlands; Socialist Unity Party of Germany) similar to the party dictatorship of the Soviet Communist Party in the USSR.
At the same time, a parallel country was established under the control of the Western powers in the zones of post-war Germany occupied by them, culminating in the foundation of the Federal Republic of Germany in 1949, which initially claimed to be the sole legitimate power in all of Germany, East and West. The material standard of living in the Western zones of Berlin began to improve quickly, and residents of the Soviet zone soon began leaving for the West in large numbers, fleeing hunger, poverty and repression in the Soviet Zone for a better life in the West. Soon residents of other parts of the Soviet zone began to escape to the West through Berlin, and this migration, called in Germany "Republikflucht", deprived the Soviet zone not only of working forces desperately needed for post-war reconstruction but disproportionately of highly educated people, which came to be known as the "Brain Drain".
In 1948, in response to moves by the Western powers to establish a separate, federal system of government in the Western zones, and to extend the US Marshall Plan of economic assistance to Germany, the Soviets instituted the Berlin Blockade, preventing people, food, materials and supplies from arriving in West Berlin by land routes through the Soviet zone. The United States, the United Kingdom, France, Canada, Australia, New Zealand and several other countries began a massive "airlift", supplying West Berlin with food and other supplies. The Soviets mounted a public relations campaign against the Western policy change. Communists attempted to disrupt the elections of 1948, preceding large losses therein, while 300,000 Berliners demonstrated for the international airlift to continue. In May 1949, Stalin lifted the blockade, permitting the resumption of Western shipments to Berlin.
The German Democratic Republic (the "GDR"; East Germany) was declared on 7 October 1949. On that day, the USSR ended the Soviet military government which had governed the Soviet Occupation Zone (Sowjetische Besatzungszone) since the end of the War and handed over legal power to the Provisorische Volkskammer under the new Constitution of the GDR which came into force that day. However, until 1955, the Soviets maintained considerable legal control over the GDR state, including the regional governments, through the Sowjetische Kontrollkommission and maintained a presence in various East German administrative, military, and secret police structures. Even after legal sovereignty of the GDR was restored in 1955, the Soviet Union continued to maintain considerable influence over administration and lawmaking in the GDR through the Soviet embassy and through the implicit threat of force which could be exercised through the continuing large Soviet military presence in the GDR, which was used to bloodily suppress protests in East Germany in June 1953.
East Germany differed from West Germany (Federal Republic of Germany), which developed into a Western capitalist country with a social market economy and a democratic parliamentary government. Continual economic growth starting in the 1950s fueled a 20-year "economic miracle" (). As West Germany's economy grew, and its standard of living steadily improved, many East Germans wanted to move to West Germany.
Emigration westward in the early 1950s
After the Soviet occupation of Eastern Europe at the end of World War II, the majority of those living in the newly acquired areas of the Eastern Bloc aspired to independence and wanted the Soviets to leave. Taking advantage of the zonal border between occupied zones in Germany, the number of GDR citizens moving to West Germany totaled 187,000 in 1950; 165,000 in 1951; 182,000 in 1952; and 331,000 in 1953. One reason for the sharp 1953 increase was fear of potential further Sovietization, given the increasingly paranoid actions of Joseph Stalin in late 1952 and early 1953. In the first six months of 1953, 226,000 had fled.
Erection of the inner German border
By the early 1950s, the Soviet approach to controlling national movement, restricting emigration, was emulated by most of the rest of the Eastern Bloc, including East Germany. The restrictions presented a quandary for some Eastern Bloc states, which had been more economically advanced and open than the Soviet Union, such that crossing borders seemed more natural—especially where no prior border existed between East and West Germany.
Up until 1952, the demarcation lines between East Germany and the western occupied zones could be easily crossed in most places. On 1 April 1952, East German leaders met the Soviet leader Joseph Stalin in Moscow; during the discussions, Stalin's foreign minister Vyacheslav Molotov proposed that the East Germans should "introduce a system of passes for visits of West Berlin residents to the territory of East Berlin [so as to stop] free movement of Western agents" in the GDR. Stalin agreed, calling the situation "intolerable". He advised the East Germans to build up their border defenses, telling them that "The demarcation line between East and West Germany should be considered a border—and not just any border, but a dangerous one ... The Germans will guard the line of defence with their lives."
Consequently, the inner German border between the two German states was closed, and a barbed-wire fence erected. The border between the Western and Eastern sectors of Berlin, however, remained open, although traffic between the Soviet and the Western sectors was somewhat restricted. This resulted in Berlin becoming a magnet for East Germans desperate to escape life in the GDR, and also a flashpoint for tension between the United States and the Soviet Union.
In 1955, the Soviets gave East Germany authority over civilian movement in Berlin, passing control to a regime not recognized in the West. Initially, East Germany granted "visits" to allow its residents access to West Germany. However, following the defection of large numbers of East Germans (known as Republikflucht) under this regime, the new East German state legally restricted virtually all travel to the West in 1956. The Soviet ambassador to East Germany, Mikhail Pervukhin, observed that "the presence in Berlin of an open and essentially uncontrolled border between the socialist and capitalist worlds unwittingly prompts the population to make a comparison between both parts of the city, which unfortunately does not always turn out in favour of Democratic [East] Berlin."
Berlin emigration loophole
With the closing of the inner German border officially in 1952, the border in Berlin remained considerably more accessible because it was administered by all four occupying powers. Accordingly, Berlin became the main route by which East Germans left for the West. On 11 December 1957, East Germany introduced a new passport law that reduced the overall number of refugees leaving Eastern Germany.
It had the unintended result of drastically increasing the percentage of those leaving through West Berlin from 60% to well over 90% by the end of 1958. Those caught trying to leave East Berlin were subjected to heavy penalties, but with no physical barrier and subway train access still available to West Berlin, such measures were ineffective. The Berlin sector border was essentially a "loophole" through which Eastern Bloc citizens could still escape. The 3.5 million East Germans who had left by 1961 totalled approximately 20% of the entire East German population.
An important reason that passage between East Germany and West Berlin was not stopped earlier was that doing so would cut off much of the railway traffic in East Germany. Construction of a new railway bypassing West Berlin, the Berlin outer ring, commenced in 1951. Following the completion of the railway in 1961, closing the border became a more practical proposition.
Brain drain
The emigrants tended to be young and well-educated, leading to the "brain drain" feared by officials in East Germany. Yuri Andropov, then the CPSU Director on Relations with Communist and Workers' Parties of Socialist Countries, wrote an urgent letter on 28 August 1958 to the Central Committee about the significant 50% increase in the number of East German intelligentsia among the refugees. Andropov reported that, while the East German leadership stated that they were leaving for economic reasons, testimony from refugees indicated that the reasons were more political than material. He stated "the flight of the intelligentsia has reached a particularly critical phase."
By 1960, the combination of World War II and the massive emigration westward left East Germany with only 61% of its population of working age, compared to 70.5% before the war. The loss was disproportionately heavy among professionals: engineers, technicians, physicians, teachers, lawyers, and skilled workers. The direct cost of manpower losses to East Germany (and corresponding gain to the West) has been estimated at $7 billion to $9 billion, with East German party leader Walter Ulbricht later claiming that West Germany owed him $17 billion in compensation, including reparations as well as manpower losses. In addition, the drain of East Germany's young population potentially cost it over 22.5 billion marks in lost educational investment. The brain drain of professionals had become so damaging to the political credibility and economic viability of East Germany that the re-securing of the German communist frontier was imperative.
The exodus of emigrants from East Germany presented two minor potential benefits: an easy way to smuggle East German secret agents to West Germany, and a reduction in the number of citizens hostile to the communist regime. Neither of these advantages, however, proved particularly useful.
Start of the construction (1961)
On 15 June 1961, First Secretary of the Socialist Unity Party and GDR State Council chairman Walter Ulbricht stated in an international press conference, "Niemand hat die Absicht, eine Mauer zu errichten!" (No one has the intention of erecting a wall!). It was the first time the colloquial term Mauer (wall) had been used in this context.
The transcript of a telephone call between Nikita Khrushchev and Ulbricht, on 1 August in the same year, suggests that the initiative for the construction of the Wall came from Khrushchev. However, other sources suggest that Khrushchev had initially been wary about building a wall, fearing negative Western reaction. Nevertheless, Ulbricht had pushed for a border closure for some time, arguing that East Germany's existence was at stake.
Khrushchev had become emboldened upon seeing US president John F. Kennedy's youth and inexperience, which he considered a weakness. At the 1961 Vienna summit, Kennedy made the error of admitting that the US would not actively oppose the building of a barrier. A feeling of miscalculation and failure immediately afterwards was admitted by Kennedy in a candid interview with New York Times columnist James "Scotty" Reston. On Saturday, 12 August 1961, the leaders of the GDR attended a garden party at a government guesthouse in a wooded area to the north of East Berlin. There, Ulbricht signed the order to close the border and erect a wall.
At midnight, the police and units of the East German army began to close the border and, by Sunday morning, 13 August, the border with West Berlin was closed. East German troops and workers had begun to tear up streets running alongside the border to make them impassable to most vehicles and to install barbed wire entanglements and fences along the border around the three western sectors and along the boundary that divided West and East Berlin. The date of 13 August became commonly referred to as Barbed Wire Sunday in Germany.
The barrier was built inside East Berlin on East German territory to ensure that it did not encroach on West Berlin at any point. Generally, the Wall was only slightly inside East Berlin, but in a few places it was some distance from the legal border, most notably at Potsdamer Bahnhof and the Lenné Triangle that is now much of the Potsdamer Platz development.
Later, the initial barrier was built up into the Wall proper, the first concrete elements and large blocks being put in place on 17 August. During the construction of the Wall, National People's Army (NVA) and Combat Groups of the Working Class (KdA) soldiers stood in front of it with orders to shoot anyone who attempted to defect. Additionally, chain fences, walls, minefields and other obstacles were installed along the length of East Germany's western border with West Germany proper. A wide no man's land was cleared as well to provide a better overview and a clear line of fire at fleeing refugees.
Immediate effects
With the closing of the east–west sector boundary in Berlin, the vast majority of East Germans could no longer travel or emigrate to West Germany. Berlin soon went from being the easiest place to make an unauthorized crossing between East and West Germany to being the most difficult. Many families were split, while East Berliners employed in the West were cut off from their jobs. West Berlin became an isolated exclave in a hostile land. West Berliners demonstrated against the Wall, led by their Mayor () Willy Brandt, who criticized the United States for failing to respond and went so far as to suggest to Washington what to do next. Kennedy was furious. Allied intelligence agencies had hypothesized about a wall to stop the flood of refugees, but the main candidate for its location was around the perimeter of the city. In 1961, Secretary of State Dean Rusk proclaimed, "The Wall certainly ought not to be a permanent feature of the European landscape. I see no reason why the Soviet Union should think it is to their advantage in any way to leave there that monument to communist failure."
United States and UK sources had expected the Soviet sector to be sealed off from West Berlin but were surprised by how long the East Germans took for such a move. They considered the Wall as an end to concerns about a GDR/Soviet retaking or capture of the whole of Berlin; the Wall would presumably have been an unnecessary project if such plans were afloat. Thus, they concluded that the possibility of a Soviet military conflict over Berlin had decreased.
The East German government claimed that the Wall was an "anti-fascist protective rampart" (antifaschistischer Schutzwall) intended to dissuade aggression from the West. Another official justification was the activities of Western agents in Eastern Europe. The East German government also claimed that West Berliners were buying out state-subsidized goods in East Berlin. East Germans and others greeted such statements with skepticism, as most of the time, the border was only closed for citizens of East Germany traveling to the West, but not for residents of West Berlin travelling to the East. The construction of the Wall had caused considerable hardship to families divided by it. Most people believed that the Wall was mainly a means of preventing the citizens of East Germany from entering or fleeing to West Berlin.
Secondary response
The National Security Agency was the only American intelligence agency that was aware that East Germany was to take action to deal with the brain drain problem. On 9 August 1961, the NSA intercepted advance warning of the Socialist Unity Party's plan to close the intra-Berlin border between East and West Berlin completely to foot traffic. The interagency intelligence Berlin Watch Committee assessed that this intercept "might be the first step in a plan to close the border." This warning did not reach John F. Kennedy until noon on 13 August 1961, while he was vacationing on his yacht off the Kennedy Compound in Hyannis Port, Massachusetts. While Kennedy was angry that he had no advance warning, he was relieved that the East Germans and the Soviets had only divided Berlin without taking any action against West Berlin's access to the West. However, he denounced the Berlin Wall, whose erection worsened relations between the United States and the Soviet Union.
In response to the erection of the Berlin Wall, a retired general, Lucius D. Clay, was appointed by Kennedy as his special advisor with ambassadorial rank. Clay had been the Military Governor of the US Zone of Occupation in Germany during the period of the Berlin Blockade and had ordered the first measures in what became the Berlin Airlift. He was immensely popular with the residents of West Berlin, and his appointment was an unambiguous sign that Kennedy would not compromise on the status of West Berlin. As a symbolic gesture, Kennedy sent Clay and Vice President Lyndon B. Johnson to West Berlin. They landed at Tempelhof Airport on the afternoon of Saturday, 19 August 1961 and were greeted enthusiastically by the local population.
They arrived in a city defended by three Allied brigades—one each from the UK (Berlin Infantry Brigade), the US (Berlin Brigade), and France (Forces Françaises à Berlin). On 16 August, Kennedy had given the order for them to be reinforced. Early on 19 August, the 1st Battle Group, 18th Infantry Regiment (commanded by Colonel Glover S. Johns Jr.) was alerted.
On Sunday morning, U.S. troops marched from West Germany through East Germany, bound for West Berlin. Lead elements, arranged in a column of 491 vehicles and trailers carrying 1,500 men divided into five march units, left the Helmstedt-Marienborn checkpoint at 06:34. At Marienborn, the Soviet checkpoint next to Helmstedt on the West German-East German border, US personnel were counted by guards. The column was long, and covered the route from Marienborn to Berlin in full battle gear. East German police watched from beside trees next to the autobahn all the way along.
The front of the convoy arrived at the outskirts of Berlin just before noon, to be met by Clay and Johnson, before parading through the streets of Berlin in front of a large crowd. At 04:00 on 21 August, Lyndon Johnson left West Berlin in the hands of General Frederick O. Hartel and his brigade of 4,224 officers and men. "For the next three and a half years, American battalions would rotate into West Berlin, by autobahn, at three-month intervals to demonstrate Allied rights to the city".
The creation of the Wall had important implications for both German states. By stemming the exodus of people from East Germany, the East German government was able to reassert its control over the country: despite discontent with the Wall, economic problems caused by dual currency and the black market were largely eliminated. The economy in the GDR began to grow. However, the Wall proved a public relations disaster for the communist bloc as a whole. Western powers portrayed it as a symbol of communist tyranny, particularly after East German border guards shot and killed would-be defectors. Such fatalities were later treated as acts of murder by the reunified Germany.
Structure and adjacent areas
Layout and modifications
The Berlin Wall ran for well over 100 kilometres in total. In June 1962, a second, parallel fence, also known as a "hinterland" wall (inner wall), was built some distance farther into East German territory. The houses between the wall and the fences were razed and the inhabitants relocated, thus establishing what later became known as the death strip. The death strip was covered with raked sand or gravel, rendering footprints easy to notice, easing the detection of trespassers and also enabling officers to see which guards had neglected their task; it offered no cover; and, most importantly, it offered clear fields of fire for the Wall guards.
Through the years, the Berlin Wall evolved through four versions:
Wire fence and concrete block wall (1961)
Improved wire fence (1962–1965)
Improved concrete wall (1965–1975)
Grenzmauer 75 (Border Wall 75) (1975–1989)
The "fourth-generation Wall", known officially as "" (retaining wall element UL 12.11), was the final and most sophisticated version of the Wall. Begun in 1975 and completed about 1980, it was constructed from 45,000 separate sections of reinforced concrete, each high and wide, and cost DDM16,155,000 or about US$3,638,000. The concrete provisions added to this version of the Wall were done to prevent escapees from driving their cars through the barricades. At strategic points, the Wall was constructed to a somewhat weaker standard, so that East German and Soviet armored vehicles could easily break through in the event of war.
The top of the wall was lined with a smooth pipe, intended to make it more difficult to scale. The Wall was reinforced by mesh fencing, signal fencing, anti-vehicle trenches, barbed wire, dogs on long lines, "beds of nails" (also known as "Stalin's Carpet") under balconies hanging over the "death strip", over 116 watchtowers, and 20 bunkers with hundreds of guards. This version of the Wall is the one most commonly seen in photographs, and surviving fragments of the Wall in Berlin and elsewhere around the world are generally pieces of the fourth-generation Wall. The layout came to resemble the inner German border in most technical aspects, except that the Berlin Wall had no landmines nor spring-guns. Maintenance was performed on the outside of the wall by personnel who accessed the area outside it either via ladders or via hidden doors within the wall. These doors could not be opened by a single person, needing two separate keys in two separate keyholes to unlock.
As was the case with the inner German border, an unfortified strip of Eastern territory was left outside the wall. This outer strip was used by workers to paint over graffiti and perform other maintenance on the outside of the wall. Unlike the inner German border, however, the outer strip was usually no more than four meters wide, and, in photos from the era, the exact location of the actual border in many places appears not even to have been marked. Also in contrast with the inner German border, little interest was shown by East German law enforcement in keeping outsiders off the outer strip; sidewalks of West Berlin streets even ran inside it.
Despite the East German government's general policy of benign neglect, vandals were known to have been pursued in the outer strip, and even arrested. In 1986, defector and political activist Wolfram Hasch and four other defectors were standing inside the outer strip defacing the wall when East German personnel emerged from one of the hidden doors to apprehend them. All but Hasch escaped back into the western sector. Hasch himself was arrested, dragged through the door into the death strip, and later convicted of illegally crossing the de jure border outside the wall. Graffiti artist Thierry Noir has reported having often been pursued there by East German soldiers. While some graffiti artists were chased off the outer strip, others, such as Keith Haring, were seemingly tolerated.
Surrounding municipalities
Besides the sector-sector boundary within Berlin itself, the Wall also separated West Berlin from the present-day state of Brandenburg. The following present-day municipalities, listed in counter-clockwise direction, share a border with the former West Berlin:
Oberhavel: Mühlenbecker Land (partially), Glienicke/Nordbahn, Hohen Neuendorf, Hennigsdorf
Havelland: Schönwalde-Glien, Falkensee, Dallgow-Döberitz
Potsdam (urban district)
Potsdam-Mittelmark: Stahnsdorf, Kleinmachnow, Teltow
Teltow-Fläming: Großbeeren, Blankenfelde-Mahlow
Dahme-Spreewald: Schönefeld (partially)
Official crossings and usage
There were nine border crossings between East and West Berlin. These allowed visits by West Berliners, other West Germans, Western foreigners and Allied personnel into East Berlin, as well as visits by GDR citizens and citizens of other socialist countries into West Berlin, provided that they held the necessary permits. These crossings were restricted according to which nationality was allowed to use them (East Germans, West Germans, West Berliners, citizens of other countries). The best known was the vehicle and pedestrian checkpoint at the corner of Friedrichstraße and Zimmerstraße (Checkpoint Charlie), which was restricted to Allied personnel and foreigners.
Several other border crossings existed between West Berlin and surrounding East Germany. These could be used for transit between West Germany and West Berlin, for visits by West Berliners into East Germany, for transit into countries neighbouring East Germany (Poland, Czechoslovakia, and Denmark), and for visits by East Germans into West Berlin carrying a permit. After the 1972 agreements, new crossings were opened to allow West Berlin waste to be transported into East German dumps, as well as some crossings for access to West Berlin's exclaves (see Steinstücken).
Four autobahns connected West Berlin to West Germany, including the Berlin-Helmstedt autobahn, which entered East German territory between the towns of Helmstedt and Marienborn (Checkpoint Alpha), and which entered West Berlin at Dreilinden (Checkpoint Bravo for the Allied forces) in southwestern Berlin. Access to West Berlin was also possible by railway (four routes) and by boat for commercial shipping via canals and rivers.
Non-German Westerners could cross the border at Friedrichstraße station in East Berlin and at Checkpoint Charlie. When the Wall was erected, Berlin's complex public transit networks, the S-Bahn and U-Bahn, were divided with it. Some lines were cut in half; many stations were shut down. Three western lines traveled through brief sections of East Berlin territory, passing through eastern stations (called Geisterbahnhöfe, or ghost stations) without stopping. Both the eastern and western networks converged at Friedrichstraße station, which became a major crossing point for those (mostly Westerners) with permission to cross.
Crossing
West Germans and citizens of other Western countries could generally visit East Germany, often after applying for a visa at an East German embassy several weeks in advance. Visas for day trips restricted to East Berlin were issued without previous application in a simplified procedure at the border crossing. However, East German authorities could refuse entry permits without stating a reason. In the 1980s, visitors from the western part of the city who wanted to visit the eastern part had to exchange at least DM 25 into East German currency at the poor exchange rate of 1:1. It was forbidden to export East German currency from the East, but money not spent could be left at the border for possible future visits. Tourists crossing from the west had to also pay for a visa, which cost DM 5; West Berliners did not have to pay this fee.
West Berliners initially could not visit East Berlin or East Germany at all—all crossing points were closed to them between 26 August 1961 and 17 December 1963. In 1963, negotiations between East and West resulted in a limited possibility for visits during the Christmas season that year. Similar, very limited arrangements were made in 1964, 1965 and 1966.
In 1971, with the Four Power Agreement on Berlin, agreements were reached that allowed West Berliners to apply for visas to enter East Berlin and East Germany regularly, comparable to the regulations already in force for West Germans. However, East German authorities could still refuse entry permits.
East Berliners and East Germans could not, at first, travel to West Berlin or West Germany at all. This regulation remained in force essentially until the fall of the Wall, but over the years several exceptions to these rules were introduced, the most significant being:
Elderly pensioners could travel to the West starting in 1964
Visits of relatives for important family matters
People who had to travel to the West for professional reasons (for example, artists, truck drivers, musicians, writers, etc.)
For each of these exceptions, GDR citizens had to apply for individual approval, which was never guaranteed. In addition, even if travel was approved, GDR travellers could exchange only a very small number of East German Marks into Deutsche Marks (DM), thus limiting the financial resources available for them to travel to the West. This led to the West German practice of granting a small amount of DM annually (Begrüßungsgeld, or welcome money) to GDR citizens visiting West Germany and West Berlin to help alleviate this situation.
Citizens of other East European countries were in general subject to the same prohibition of visiting Western countries as East Germans, though the applicable exception (if any) varied from country to country.
Allied military personnel and civilian officials of the Allied forces could enter and exit East Berlin without submitting to East German passport controls, purchasing a visa or being required to exchange money. Likewise, Soviet military patrols could enter and exit West Berlin. This was a requirement of the post-war Four Powers Agreements. A particular area of concern for the Western Allies involved official dealings with East German authorities when crossing the border, since Allied policy did not recognize the authority of the GDR to regulate Allied military traffic to and from West Berlin, as well as the Allied presence within Greater Berlin, including entry into, exit from, and presence within East Berlin.
The Allies held that only the Soviet Union, and not the GDR, had the authority to regulate Allied personnel in such cases. For this reason, elaborate procedures were established to prevent inadvertent recognition of East German authority when engaged in travel through the GDR and when in East Berlin. Special rules applied to travel by Western Allied military personnel assigned to the military liaison missions accredited to the commander of Soviet forces in East Germany, located in Potsdam.
Allied personnel were restricted by policy when travelling by land to the following routes:
Transit between West Germany and West Berlin
Road: The Helmstedt–Berlin autobahn (A2) (checkpoints Alpha and Bravo respectively). Soviet military personnel manned these checkpoints and processed Allied personnel for travel between the two points.
Rail: Western Allied military personnel and civilian officials of the Allied forces were forbidden to use commercial train service between West Germany and West Berlin, because of GDR passport and customs controls when using them. Instead, the Allied forces operated a series of official (duty) trains that traveled between their respective duty stations in West Germany and West Berlin. When transiting the GDR, the trains would follow the route between Helmstedt and Griebnitzsee, just outside West Berlin. In addition to persons traveling on official business, authorized personnel could also use the duty trains for personal travel on a space-available basis. The trains traveled only at night, and as with transit by car, Soviet military personnel handled the processing of duty train travelers. (See History of the Berlin S-Bahn.)
Entry into and exit from East Berlin
Checkpoint Charlie (as a pedestrian or riding in a vehicle)
As with military personnel, special procedures applied to travel by diplomatic personnel of the Western Allies accredited to their respective embassies in the GDR. This was intended to prevent inadvertent recognition of East German authority when crossing between East and West Berlin, which could jeopardize the overall Allied position governing the freedom of movement by Allied forces personnel within all Berlin.
Ordinary citizens of the Western Allied powers, not formally affiliated with the Allied forces, were authorized to use all designated transit routes through East Germany to and from West Berlin. Regarding travel to East Berlin, such persons could also use the Friedrichstraße train station to enter and exit the city, in addition to Checkpoint Charlie. In these instances, such travelers, unlike Allied personnel, had to submit to East German border controls.
Defection attempts
During the years of the Wall, around 5,000 people successfully defected to West Berlin. The number of people who died trying to cross the Wall, or as a result of the Wall's existence, has been disputed. The most vocal claims, made by Alexandra Hildebrandt, director of the Checkpoint Charlie Museum and widow of the museum's founder, put the death toll at well above 200. A historical research group at the Centre for Contemporary History (ZZF) in Potsdam has confirmed at least 140 deaths. Prior official figures listed 98 as being killed.
The East German government issued shooting orders (Schießbefehl) to border guards dealing with defectors, though such orders are not the same as "shoot to kill" orders; GDR officials denied issuing the latter. In an October 1973 order later discovered by researchers, guards were instructed that people attempting to cross the Wall were criminals and needed to be shot.
Early successful escapes involved people jumping the initial barbed wire or leaping out of apartment windows along the line, but these ended as the Wall was fortified. East German authorities no longer permitted apartments near the Wall to be occupied, and any building near the Wall had its windows boarded and later bricked up. On 15 August 1961, Conrad Schumann was the first East German border guard to escape by jumping the barbed wire to West Berlin.
On 22 August 1961, Ida Siekmann was the first casualty at the Berlin Wall: she died after she jumped out of her third floor apartment at 48 Bernauer Strasse. The first person to be shot and killed while trying to cross to West Berlin was Günter Litfin, a twenty-four-year-old tailor. He attempted to swim across the Spree to West Berlin on 24 August 1961, the same day that East German police had received shoot-to-kill orders to prevent anyone from escaping.
Another dramatic escape was carried out in April 1963 by Wolfgang Engels, a 19-year-old civilian employee of the Nationale Volksarmee (NVA). Engels stole a Soviet armored personnel carrier from a base where he was deployed and drove it right into the Wall. He was fired at and seriously wounded by border guards. But a West German policeman intervened, firing his weapon at the East German border guards. The policeman removed Engels from the vehicle, which had become entangled in the barbed wire.
East Germans successfully defected by a variety of methods: digging long tunnels under the Wall, waiting for favorable winds and taking a hot air balloon, sliding along aerial wires, flying ultralights and, in one instance, simply driving a sports car at full speed through the basic, initial fortifications. When a metal beam was placed at checkpoints to prevent this kind of defection, up to four people (two in the front seats and possibly two in the boot) drove under the bar in a sports car that had been modified to allow the roof and windscreen to come away when it made contact with the beam. They lay flat and kept driving forward. The East Germans then built zig-zagging roads at checkpoints. The sewer system predated the Wall, and some people escaped through the sewers, in a number of cases with assistance from the Unternehmen Reisebüro. In September 1962, 29 people escaped through a tunnel to the west. At least 70 tunnels were dug under the wall; only 19 were successful in allowing fugitives—about 400 persons—to escape. The East German authorities eventually used seismographic and acoustic equipment to detect the practice. In 1962, they planned an attempt to use explosives to destroy one tunnel, but this was not carried out as it was apparently sabotaged by a member of the Stasi.
An airborne escape was made by Thomas Krüger, who landed a Zlin Z 42M light aircraft of the Gesellschaft für Sport und Technik, an East German youth military training organization, at RAF Gatow. His aircraft, registration DDR-WOH, was dismantled and returned to the East Germans by road, complete with humorous slogans painted on it by airmen of the Royal Air Force, such as "Wish you were here" and "Come back soon".
If an escapee was wounded in a crossing attempt and lay on the death strip, no matter how close they were to the Western wall, Westerners could not intervene for fear of drawing fire from the 'Grepos', the East Berlin border guards. The guards often let fugitives bleed to death in the middle of this ground, as in the most notorious failed attempt, that of Peter Fechter (aged 18) at a point near Zimmerstrasse in East Berlin. He was shot and bled to death, in full view of the Western media, on 17 August 1962. Fechter's death created negative publicity worldwide that led the leaders of East Berlin to place more restrictions on shooting in public places and to provide medical care for possible "would-be escapers". The last person to be shot and killed while trying to cross the border was Chris Gueffroy on 6 February 1989, while the final person to die in an escape attempt was Winfried Freudenberg, who was killed when his homemade natural gas-filled balloon crashed on 8 March 1989.
The Wall gave rise to a widespread sense of desperation and oppression in East Berlin, as expressed in the private thoughts of one resident, who confided to her diary "Our lives have lost their spirit... we can do nothing to stop them."
Concerts by Western artists and growing anti-Wall sentiment
David Bowie, 1987
On 6 June 1987, David Bowie, who had earlier lived and recorded in West Berlin for several years, played a concert close to the Wall. It was heard by thousands of Eastern concertgoers gathered across the Wall, and was followed by violent rioting in East Berlin. According to Tobias Rüther, these protests in East Berlin were the first in the sequence of riots that led to those of November 1989. Although other factors were probably more influential in the fall of the Wall, upon his death in 2016, the German Foreign Office tweeted "Good-bye, David Bowie. You are now among #Heroes. Thank you for helping to bring down the #wall."
Bruce Springsteen, 1988
On 19 July 1988, 16 months before the Wall came down, Bruce Springsteen and the E Street Band played Rocking the Wall, a live concert in East Berlin, which was attended by 300,000 in person and broadcast on television. Springsteen spoke to the crowd in German, saying: "I'm not here for or against any government. I've come to play rock 'n' roll for you in the hope that one day all the barriers will be torn down". East Germany and its FDJ youth organization were worried they were losing an entire generation. They hoped that by letting Springsteen in, they could improve sentiment toward the regime among East Germans. However, this strategy of "one step backwards, two steps forwards" backfired, and the concert only made East Germans hungrier for more of the freedoms that Springsteen epitomized.
David Hasselhoff, 1989
On 31 December 1989, American TV actor and pop music singer David Hasselhoff was the headlining performer for the Freedom Tour Live concert, which was attended by over 500,000 people on both sides of the Wall. The live concert footage was directed by music video director Thomas Mignone and aired on broadcast television station Zweites Deutsches Fernsehen ZDF throughout Europe. During shooting, film crew personnel pulled people up from both sides to stand and celebrate on top of the wall. Hasselhoff sang his number one hit song "Looking for Freedom" on a platform at the end of a twenty-meter steel crane that swung above and over the Wall adjacent to the Brandenburg Gate. A small museum was created in 2008 to celebrate Hasselhoff in the basement of the Circus Hostel.
Comments by politicians
On 26 June 1963, 22 months after the erection of the Berlin Wall, U.S. President John F. Kennedy visited West Berlin. Speaking from a platform erected on the steps of Rathaus Schöneberg for an audience of 450,000 and straying from the prepared script, he declared in his Ich bin ein Berliner speech the support of the United States for West Germany and the people of West Berlin in particular.
The message was aimed as much at the Soviets as it was at Berliners and was a clear statement of U.S. policy in the wake of the construction of the Berlin Wall. The speech is considered one of Kennedy's best, both a significant moment in the Cold War and a high point of the New Frontier. It was a great morale boost for West Berliners, who lived in an exclave deep inside East Germany and feared a possible East German occupation.
British prime minister Margaret Thatcher also commented on the Wall in 1982.
In a speech at the Brandenburg Gate commemorating the 750th anniversary of Berlin on 12 June 1987, U.S. President Ronald Reagan challenged Mikhail Gorbachev, then the General Secretary of the Communist Party of the Soviet Union, to tear down the Wall as a symbol of increasing freedom in the Eastern Bloc, famously demanding: "Mr. Gorbachev, tear down this wall!"
In January 1989, GDR leader Erich Honecker predicted that the Wall would stand for 50 or 100 more years if the conditions that had caused its construction did not change.
Fall
Due to the increasing economic problems in the Eastern Bloc and the failure of the USSR to intervene in the internal affairs of the individual communist states, Soviet control over the Eastern Bloc began to loosen from the end of the 1980s. One example was the fall of the communist government in neighboring Poland following the June 1989 Polish parliamentary election. Also in June 1989, the Hungarian government began dismantling the electrified fence along its border with Austria (with Western TV crews present), although the border was still very closely guarded and escape was almost impossible.
The opening of a border gate between Austria and Hungary at the Pan-European Picnic on 19 August 1989, based on an idea by Otto von Habsburg to test the reaction of Mikhail Gorbachev, triggered a peaceful chain reaction at the end of which the GDR no longer existed and the Eastern Bloc had disintegrated. Because neither the USSR nor the GDR reacted to the resulting mass exodus, Eastern Europeans following the media coverage could sense their governments' growing loss of power, and more and more East Germans now tried to flee via Hungary. Erich Honecker, commenting on the Pan-European Picnic to the Daily Mirror, inadvertently revealed his own inaction to his people: "Habsburg distributed leaflets far into Poland, on which the East German holidaymakers were invited to a picnic. When they came to the picnic, they were given gifts, food and Deutsche Mark, and then they were persuaded to come to the West." Then, in September, more than 13,000 East German tourists escaped through Hungary to Austria. This set up a chain of events. The Hungarians prevented many more East Germans from crossing the border and returned them to Budapest. These East Germans flooded the West German embassy and refused to return to East Germany.
The East German government responded by disallowing any further travel to Hungary but allowed those already there to return to East Germany. This triggered similar events in neighboring Czechoslovakia. This time, however, the East German authorities allowed people to leave, provided that they did so by train through East Germany. This was followed by mass demonstrations within East Germany itself. Protest demonstrations spread throughout East Germany in September 1989. Initially, protesters were mostly people wanting to leave to the West, chanting "Wir wollen raus!" ("We want out!"). Then protestors began to chant "Wir bleiben hier!" ("We are staying here!"). This was the start of what East Germans generally call the "Peaceful Revolution" of late 1989. The protest demonstrations grew considerably by early November. The movement neared its height on 4 November, when half a million people gathered to demand political change at the Alexanderplatz demonstration, held on East Berlin's large public square and transportation hub. Earlier, on 9 October 1989, police and army units had been given permission to use force against those assembled, but this did not deter the church service and march, which gathered 70,000 people, from taking place.
The longtime leader of East Germany, Erich Honecker, resigned on 18 October 1989 and was replaced by Egon Krenz that day.
The wave of refugees leaving East Germany for the West kept increasing. By early November refugees were finding their way to Hungary via Czechoslovakia, or via the West German Embassy in Prague. This was tolerated by the new Krenz government because of long-standing agreements with the communist Czechoslovak government allowing free travel across their common border. However, this movement of people grew so large that it caused difficulties for both countries. To ease the difficulties, the politburo led by Krenz decided on 9 November to allow refugees to exit directly through crossing points between East Germany and West Germany, including between East and West Berlin. Later the same day, the ministerial administration modified the proposal to include private, round-trip travel. The new regulations were to take effect the next day.
Günter Schabowski, the party boss in East Berlin and the spokesman for the SED Politburo, had the task of announcing the new regulations. However, he had not been involved in the discussions about the new regulations and had not been fully updated. Shortly before a press conference on 9 November, he was handed a note announcing the changes, but given no further instructions on how to handle the information. These regulations had only been completed a few hours earlier and were to take effect the following day, so as to allow time to inform the border guards. But this starting time delay was not communicated to Schabowski. At the end of the press conference, Schabowski read out loud the note he had been given. A reporter, ANSA's Riccardo Ehrman, asked when the regulations would take effect. After a few seconds' hesitation, Schabowski replied, "As far as I know, it takes effect immediately, without delay". After further questions from journalists, he confirmed that the regulations included the border crossings through the Wall into West Berlin, which he had not mentioned until then. He repeated that it was immediate in an interview with American journalist Tom Brokaw.
Excerpts from Schabowski's press conference were the lead story on West Germany's two main news programs that night—at 7:17 p.m. on ZDF's heute and at 8 p.m. on ARD's Tagesschau. As ARD and ZDF had broadcast to nearly all of East Germany since the late 1950s and had become accepted by the East German authorities, the news was broadcast there as well simultaneously. Later that night, on ARD's Tagesthemen, anchorman Hanns Joachim Friedrichs proclaimed, "This 9 November is a historic day. The GDR has announced that, starting immediately, its borders are open to everyone. The gates in the Wall stand open wide."
After hearing the broadcast, East Germans began gathering at the Wall, at the six checkpoints between East and West Berlin, demanding that border guards immediately open the gates. The surprised and overwhelmed guards made many hectic telephone calls to their superiors about the problem. At first, they were ordered to find the "more aggressive" people gathered at the gates and stamp their passports with a special stamp that barred them from returning to East Germany—in effect, revoking their citizenship. However, this still left thousands of people demanding to be let through "as Schabowski said we can". It soon became clear that no one among the East German authorities would take personal responsibility for issuing orders to use lethal force, so the vastly outnumbered soldiers had no way to hold back the huge crowd of East German citizens. Finally, at 10:45 p.m. on 9 November, Harald Jäger, the commander of the Bornholmer Straße border crossing yielded, allowing the guards to open the checkpoints and allowing people through with little or no identity checking. As the Ossis swarmed through, they were greeted by Wessis waiting with flowers and champagne amid wild rejoicing. Soon afterward, a crowd of West Berliners jumped on top of the Wall, and were soon joined by East German youngsters. The evening of 9 November 1989 is known as the night the Wall came down.
Another border crossing to the south may have been opened earlier. An account by Heinz Schäfer indicates that he also acted independently and ordered the opening of the gate at Waltersdorf-Rudow a couple of hours earlier. This may explain reports of East Berliners appearing in West Berlin earlier than the opening of the Bornholmer Straße border crossing.
Thirty years after the fall of the Berlin Wall, The Guardian collected short stories from 9 November 1989 by five German writers who reflect on the day. In this, Kathrin Schmidt remembers comically: "I downed almost an entire bottle of schnapps".
Legacy
Little is left of the Wall at its original site, which was destroyed almost in its entirety. Three long sections are still standing: a piece of the first (westernmost) wall at the Topography of Terror, site of the former Gestapo headquarters, halfway between Checkpoint Charlie and Potsdamer Platz; a longer section of the second (easternmost) wall along the Spree River near the Oberbaumbrücke, nicknamed the East Side Gallery; and a third section that is partly reconstructed, in the north at Bernauer Straße, which was turned into a memorial in 1998. Other isolated fragments, lampposts, other elements, and a few watchtowers also remain in various parts of the city. The remaining watchtowers include:
The former command-post watchtower in the Schlesischer Busch, in the vicinity of the Puschkinallee: the listed, twelve-meter-high watchtower stands in a preserved piece of the wall strip, which has been turned into a park, near the Lohmühleninsel.
The former "Kieler Eck" (Kiel Corner) on Kieler Strasse in Mitte, close to the Berlin-Spandau Schifffahrtskanal—the tower is protected as a historic monument and now surrounded on three sides by new buildings. It houses a memorial site named after the Wallopfer Günter Litfin, who was shot at Humboldthafen in August 1961. The memorial site, which is run by the initiative of his brother Jürgen Liftin, can be viewed after registration.
The former border watchtower at Nieder Neuendorf, in the district of Hennigsdorf of the same name: it houses a permanent exhibition on the history of the border installations between the two German states.
The former watchtower at Bergfelde, today part of the district of Hohen Neuendorf: the tower stands in a now-reforested area of the border strip and is used, together with the surrounding terrain, as a nature-protection tower by the Deutsche Waldjugend.
The only surviving example of the much slimmer observation towers (type BT-11), on Erna-Berger-Strasse, also in Mitte: it was moved a few meters during construction work and no longer stands in its original location; an exhibition about the Wall in the Potsdamer Platz area is in planning.
Nothing represents the Wall's original appearance more accurately than the very short stretch at Bernauer Straße associated with the Berlin Wall Documentation Center. Other remnants are badly damaged by souvenir seekers. Fragments of the Wall were taken, and some were sold around the world. Appearing both with and without certificates of authenticity, these fragments are now a staple on the online auction service eBay as well as in German souvenir shops. Today, the eastern side is covered in graffiti that did not exist while the Wall was guarded by the armed soldiers of East Germany. Previously, graffiti appeared only on the western side. Along some tourist areas of the city centre, the city government has marked the location of the former Wall with a row of cobblestones in the street. In most places only the "first" wall is marked, except near Potsdamer Platz, where the stretch of both walls is marked, giving visitors an impression of the dimensions of the barrier system.
After the fall of the Berlin Wall, initiatives were launched to preserve the death-strip walkways and redevelop them into a hiking and cycling route, known as the Berliner Mauerweg (Berlin Wall Trail). It has been part of an initiative by the Berlin Senate since 11 October 2001.
Cultural differences
For many years after reunification, people in Germany talked about cultural differences between East and West Germans (colloquially Ossis and Wessis), sometimes described as Mauer im Kopf (The wall in the head). A September 2004 poll found that 25 percent of West Germans and 12 percent of East Germans wished that East and West should be separated again by a "Wall". A poll taken in October 2009 on the occasion of the 20th anniversary of the fall of the Wall indicated, however, that only about a tenth of the population was still unhappy with the unification (8 percent in the East; 12 percent in the West). Although differences are still perceived between East and West, Germans make similar distinctions between North and South.
A 2009 poll conducted by Russia's VTsIOM, found that more than half of all Russians do not know who built the Berlin Wall. Ten percent of people surveyed thought Berlin residents built it themselves. Six percent said Western powers built it and four percent thought it was a "bilateral initiative" of the Soviet Union and the West. Fifty-eight percent said they did not know who built it, with just 24 percent correctly naming the Soviet Union and its then-communist ally East Germany.
Wall segments around the world
Not all segments of the Wall were ground up as the Wall was being torn down. Many segments have been given to various institutions around the world. They can be found, for instance, in presidential and historical museums, lobbies of hotels and corporations, at universities and government buildings, and in public spaces in different countries of the world.
50th anniversary commemoration
On 13 August 2011, Germany marked the 50th anniversary of East Germany beginning the erection of the Berlin Wall. Chancellor Angela Merkel joined President Christian Wulff and Berlin Mayor Klaus Wowereit at the Bernauer Straße memorial park to remember lives and liberty. Speeches extolled freedom, and a minute of silence at noon honored those who died trying to flee to the West. "It is our shared responsibility to keep the memory alive and to pass it on to the coming generations as a reminder to stand up for freedom and democracy to ensure that such injustice may never happen again," entreated Mayor Wowereit. "It has been shown once again: Freedom is invincible in the end. No wall can permanently withstand the desire for freedom," proclaimed President Wulff.
Polling
A small minority still support the wall or even favor rebuilding it. In 2008 a poll found that 11% of participants from the former West Berlin and 12% from the former East Berlin said it would be better if the wall were still in place.
A November 2009 poll found that 12% of Germans said the wall should be rebuilt. The poll also found that in the former West German states support was at 12% and in the former East German states it was 13%. A September 2009 poll found 15% of Germans supported a wall, while in the west it was 16% and in the east it was at 10%.
A 2010 poll from Emnid for Bild found that 24% of West Germans and 23% of East Germans wished the wall were still in place.
A 2019 poll from the Berliner Zeitung on the 30th anniversary found that 8% of Berliners would have preferred the wall to still be standing, while the overwhelming majority of Berliners, 87%, supported its fall. The poll also found that 28% of Alternative for Germany (AfD) supporters and 16% of Free Democratic Party (FDP) supporters supported bringing back the wall. A 2019 YouGov poll found that 13% of Germans wanted the wall back; in the West support was at 14% and in the East it was 13%.
A 2019 poll from Forsa found that 35% of Berliners thought the construction of the Wall had not been entirely wrong, with agreement at 74% among supporters of the Left Party.
Related media
Documentaries
Documentary films specifically about the Berlin Wall include:
The Tunnel (December 1962), an NBC News Special documentary film.
The Road to the Wall (1962), a documentary film.
Something to Do with the Wall (1991), a documentary about the fall of the Berlin Wall by Ross McElwee and Marilyn Levine, originally conceived as a commemoration of the 25th anniversary of its construction.
Rabbit à la Berlin (2009), a documentary film, directed by Bartek Konopka, told from the point of view of a group of wild rabbits that inhabited the zone between the two walls.
The American Sector (2020), a documentary by Courtney Stephens and Pacho Velez that tracks down the wall segments located in the U.S.
Feature films
Fictional films featuring the Berlin Wall have included:
Escape from East Berlin (1962), American-West German film inspired by the story of 29 East Germans who tunneled under the wall
The Spy Who Came in from the Cold (1965), a Cold War classic set on both sides of The Wall, from the eponymous book by John le Carré, directed by Martin Ritt.
The Boy and the Ball and the Hole in the Wall (1965), Spanish-Mexican co-production.
Funeral in Berlin (1966), a spy movie starring Michael Caine, directed by Guy Hamilton.
Casino Royale (1967), a film featuring a segment centred on a house apparently bisected by the Wall.
The Wicked Dreams of Paula Schultz (1968), a Cold War spy farce about an Olympic athlete who defects, directed by George Marshall.
Berlin Tunnel 21 (1981), a made-for-TV movie about a former American officer leading an attempt to build a tunnel underneath The Wall as a rescue route.
Night Crossing (1982), a British-American drama film starring John Hurt, Jane Alexander, and Beau Bridges, based on the true story of the Strelzyk and Wetzel families, who on 16 September 1979, attempted to escape from East Germany to West Germany in a homemade hot air balloon, during the days of the Inner German border-era.
The Innocent (1993), a film about the joint CIA/MI6 operation to build a tunnel under East Berlin in the 1950s, directed by John Schlesinger.
Sonnenallee (1999), a German comedy film about life in East Berlin in the late 1970s, directed by Leander Haußmann.
The Tunnel (2001), a dramatization of a collaborative tunnel under the Wall, filmed by Roland Suso Richter.
Good Bye Lenin! (2003), film set during German unification that depicts the fall of the Wall through archive footage
Open The Wall (2014), featuring a dramatized story of the East-German border guard who was the first to let East Berliners cross the border to West Berlin on 9 November 1989.
Bridge of Spies (2015), featuring a dramatized subplot about Frederic Pryor, in which an American economics graduate student visits his German girlfriend in East Berlin just as the Berlin Wall is being built. He tries to bring her back into West Berlin but is stopped by Stasi agents and arrested as a spy.
Literature
Some novels specifically about the Berlin Wall include:
John le Carré, The Spy Who Came in from the Cold (1963), classic Cold War spy fiction.
Len Deighton, Berlin Game (1983), classic Cold War spy fiction
T.H.E. Hill, The Day Before the Berlin Wall: Could We Have Stopped It? – An Alternate History of Cold War Espionage, 2010 – based on a legend told in Berlin in the 1970s.
John Marks' The Wall (1999) in which an American spy defects to the East just hours before the Wall falls.
Marcia Preston's West of the Wall (2007, published as Trudy's Promise in North America), in which the heroine, left behind in East Berlin, waits for news of her husband after he makes his escape over the Berlin Wall.
Peter Schneider's The Wall Jumper (1984; German: Der Mauerspringer, 1982); the Wall plays a central role in this novel set in the Berlin of the 1980s.
Music
Music related to the Berlin Wall includes:
Stationary Traveller (1984), a concept album by Camel that takes the theme of families and friends split up by the building of the Berlin Wall.
"West of the Wall", a 1962 top 40 hit by Toni Fisher, which tells the tale of two lovers separated by the newly built Berlin Wall.
"Holidays in the Sun", a song by the English punk rock band Sex Pistols which prominently mentions the Wall, specifically singer Johnny Rotten's fantasy of digging a tunnel under it.
David Bowie's "Heroes" (1977), inspired by the image of a couple kissing at the Berlin Wall (in reality, the couple was his producer Tony Visconti and backup singer Antonia Maaß). The song (which, along with the album of the same name, was recorded in Berlin), makes lyrical references to the kissing couple, and to the "Wall of Shame" ("the shame was on the other side"). Upon Bowie's death, the Federal Foreign Office paid homage to Bowie on Twitter:see also above
"" (1984), a song by the Dutch pop band , about the differences between East and West Berlin during the period of the Berlin Wall.
"Chippin' Away" (1990), a song by Tom Fedora, performed by Crosby, Stills & Nash on the Berlin Wall, which appeared on Graham Nash's solo album Innocent Eyes (1986).
"Berliners", a song by Roy Harper from his 1990 album Once (lyrics include "They built a wall, boys, it stayed up for thirty years"). The song uses a BBC news broadcast describing the fall of the wall.
"Hedwig and the Angry Inch," a rock opera whose genderqueer protagonist Hedwig Robinson was born in East Berlin and later, living in the United States, describes herself as "the new Berlin Wall" standing between "East and West, slavery and freedom, man and woman, top and bottom." As a result, she says, people are moved to "decorate" her with "blood, graffiti and spit." (1998)
The music video for Liza Fox's song "Free" (2013) contains video clips of the fall of the Berlin Wall.
Visual art
Artworks related to the Berlin Wall include:
In 1982, a West German artist created about 500 artworks along the former border strip around West Berlin as part of his work series Border Injuries. In one of these actions he tore down a large part of the Wall, installed a prepared 3 × 2 m foil in the gap, and finished the painting there before the border soldiers on patrol could detect him. The performance was recorded on video. His actions are well documented both in newspapers from that time and in recent scholarly publications.
The Day the Wall Came Down, 1996 and 1998 sculptures by Veryl Goodnight, which depict five horses leaping over actual pieces of the Berlin Wall.
Games
Video games related to the Berlin Wall include:
The Berlin Wall (1991), a video game.
Ostalgie: The Berlin Wall (2018), video game by Kremlingames, where the player, playing as the leader of the GDR from 1989 to 1991, can take down the Berlin Wall themselves or as a result of events in the game, or keep the wall intact as long as the country exists.
See also
Berlin Crisis of 1961 (October 1961)
Chapel of Reconciliation
Deborah Kennedy, American artist whose works were featured on the wall before its fall
Dissolution of the Soviet Union (1991)
East Germany balloon escape
Green Line (Lebanon)
Inner German border
Israeli annexation of East Jerusalem
Korean Demilitarized Zone
Military Demarcation Line
List of walls
Operation Gold
Peace lines
Removal of Hungary's border fence with Austria
The Wall – Live in Berlin (rock opera/concert, 21 July 1990)
World Freedom Day (United States)
Notes
References
Sources
Childs, David (2001). The Fall of the GDR: Germany's Road to Unity. Longman.
Childs, David (1986) [1983]. The GDR: Moscow's German Ally. George Allen & Unwin.
Childs, David (2000). The Two Red Flags: European Social Democracy and Soviet Communism Since 1945. Routledge.
Childs, David (1991). Germany in the Twentieth Century (From pre-1918 to the restoration of German unity). Batsford, third edition.
Childs, David (1987). East Germany to the 1990s: Can It Resist Glasnost?. The Economist Intelligence Unit.
Hoffmann, Hans Wolfgang; Meuser, Philipp (eds.) (2009). Luftbildatlas. Entlang der Berliner Mauer. Karten, Pläne und Fotos. Berlin.
Sarotte, Mary Elise (2014). Collapse: The Accidental Opening of the Berlin Wall. New York: Basic Books.
Sarotte, Mary Elise (2014). 1989: The Struggle to Create Post-Cold War Europe (2nd ed.). Princeton: Princeton University Press.
External links
Berlin Wall on OpenStreetMap
Other resources:
Berlin Wall image group on Flickr
Berlin Wall Online, historical chronicle
Border barriers
1961 establishments in East Germany
1961 establishments in West Germany
1961 in military history
1961 in politics
20th century in Berlin
Allied occupation of Germany
Articles containing video clips
Buildings and structures completed in 1961
Buildings and structures demolished in 1989
Buildings and structures demolished in 1990
Demolished buildings and structures in Germany
Eastern Bloc
Former buildings and structures in Germany
Inner German border
Separation barriers
City walls in Germany | Berlin Wall | [
"Engineering"
] | 14,390 | [
"Separation barriers",
"Border barriers"
] |
3,729 | https://en.wikipedia.org/wiki/Burning%20glass | A burning glass or burning lens is a large convex lens that can concentrate the Sun's rays onto a small area, heating up the area and thus resulting in ignition of the exposed surface. Burning mirrors achieve a similar effect by using reflecting surfaces to focus the light. They were used in 18th-century chemical studies for burning materials in closed glass vessels where the products of combustion could be trapped for analysis. The burning glass was a useful contrivance in the days before electrical ignition was easily achieved.
History
Burning glass technology has been known since antiquity, as described by Greek and Roman writers who recorded the use of lenses to start fires for various purposes. Pliny the Elder noted the use of glass vases filled with water to concentrate sunlight heat intensely enough to ignite clothing, as well as convex lenses that were used to cauterize wounds. Plutarch refers to a burning mirror made of joined triangular metal mirrors installed at the temple of the Vestal Virgins. Aristophanes mentions the burning lens in his play The Clouds (424 BC).
The Hellenistic Greek mathematician Archimedes was said to have used a burning glass as a weapon in 212 BC, when Syracuse was besieged by Marcus Claudius Marcellus of the Roman Republic. The Roman fleet was supposedly incinerated, though eventually the city was taken and Archimedes was slain.
The legend of Archimedes gave rise to a considerable amount of research on burning glasses and lenses until the late 17th century. Various researchers from medieval Christendom to the Islamic world worked with burning glasses, including Anthemius of Tralles (6th century AD), Proclus (6th century; who by this means purportedly destroyed the fleet of Vitalian besieging Constantinople), Ibn Sahl in his On Burning Mirrors and Lenses (10th century), Alhazen in his Book of Optics (1021), Roger Bacon (13th century), Giambattista della Porta and his friends (16th century), Athanasius Kircher and Gaspar Schott (17th century), and the Comte de Buffon in 1740 in Paris.
While the effects of camera obscura were mentioned by Greek philosopher Aristotle in the 4th century BC, contemporary Chinese Mohists of China's Warring States Period who compiled the Mozi described their experiments with burning mirrors and the pinhole camera. A few decades after Alhazen described camera obscura in Iraq, the Song dynasty Chinese statesman Shen Kuo was nevertheless the first to clearly describe the relationship of the focal point of a concave mirror, the burning point and the pinhole camera as separate radiation phenomena in his Dream Pool Essays (1088). By the late 15th century Leonardo da Vinci would be the first in Europe to make similar observations about the focal point and pinhole.
Burning lenses were used in the 18th century by both Joseph Priestley and Antoine Lavoisier in their experiments to obtain oxides contained in closed vessels under high temperatures. These included carbon dioxide by burning diamond, and mercuric oxide by heating mercury. This type of experiment contributed to the discovery of "dephlogisticated air" by Priestley, which became better known as oxygen, following Lavoisier's investigations.
Chapter 17 of William Bates' 1920 book Perfect Sight Without Glasses, in which the author argues that observation of the sun is beneficial to those with poor vision, includes a figure of somebody "Focussing the Rays of the Sun Upon the Eye of a Patient by Means of a Burning Glass."
The burning lens of the Grand Duke of Tuscany was used by Sir Humphry Davy and Michael Faraday to burn a diamond in oxygen on 27 March 1814.
Use
War: since the legend of Archimedes
The earliest story akin to that of a burning glass used in war is that of Archimedes in 212 BC. When Syracuse was besieged by Marcus Claudius Marcellus, the Roman fleet was supposedly incinerated, not by glass per se, but by a concave mirror made of brass that focused sunlight. Whether or not that actually happened, the city was eventually taken and Archimedes was slain.
In 1796, during the French Revolution and three years after the declaration of war between France and Great Britain, physicist Étienne-Gaspard Robert met with the French government and proposed the use of mirrors to burn the invading ships of the British Royal Navy. They decided not to take up his proposal.
Domestic use: primitive fire making
Burning glasses (often called fire lenses) are still used to light fires in outdoor and primitive settings. Large burning lenses sometimes take the form of Fresnel lenses, similar to lighthouse lenses, including those for use in solar furnaces. Solar furnaces are used in industry to produce extremely high temperatures without the need for fuel or large supplies of electricity. They sometimes employ a large parabolic array of mirrors (some facilities are several stories high) to focus light to a high intensity.
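A rough illustration of why even a modest lens can start a fire is sketched below in Python: it estimates the focal-spot size, concentration ratio and collected power of an idealized convex lens. The irradiance value, lens dimensions and loss-free assumption are illustrative choices made here, not figures taken from this article.

```python
import math

# Illustrative estimate (assumed values, not from the article): how strongly
# a simple convex lens concentrates sunlight.
SOLAR_IRRADIANCE = 1000.0      # W/m^2, typical clear-sky value at ground level
SUN_ANGULAR_DIAMETER = 0.0093  # radians (~0.53 degrees)

def burning_glass_estimate(lens_diameter_m, focal_length_m):
    """Estimate focal-spot size, concentration ratio and collected power."""
    lens_area = math.pi * (lens_diameter_m / 2) ** 2
    # The Sun is not a point source, so the focused image has a finite size
    # set by the focal length and the Sun's angular diameter.
    spot_diameter = focal_length_m * SUN_ANGULAR_DIAMETER
    spot_area = math.pi * (spot_diameter / 2) ** 2
    concentration = lens_area / spot_area          # ignores lens losses
    collected_power = SOLAR_IRRADIANCE * lens_area
    return spot_diameter, concentration, collected_power

# Example: a 10 cm magnifier with a 20 cm focal length.
spot, conc, power = burning_glass_estimate(0.10, 0.20)
print(f"spot ~{spot * 1000:.1f} mm, concentration ~{conc:.0f}x, power ~{power:.0f} W")
```

Even with these modest numbers, the sunlight falling on the lens is squeezed into a spot a few millimeters across, raising the local flux by roughly three orders of magnitude, which is ample to char tinder.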
Religion: sacred fire
In various religious settings, a burning glass is used to kindle a sacred fire.
From the 7th to the 16th centuries, a burning glass was used by Christians to set off the Easter Fire during the Easter vigil. Thus, Saint Boniface explained to Pope Zachary that he produced the new fire of Holy Saturday by means of a crystal lens concentrating the rays of the sun. This process was also mentioned in liturgical books until the Roman Pontifical of 1561.
In Cambodia, a burning glass has also been used since ancient times for the cremation of kings and most recently for the funeral of King Sihanouk. The crematorium of the king is traditionally prepared by the Bakus brahmin from the Royal Palace on the last day of the week-long funeral. Small pieces of fragrant agarwood are placed beneath the magnifying glass until it ignites. The incandescent wood is used to light candles and pass on the fire to the attendees, who usually take their lit candles home.
Sports: lighting the Olympic torch
The Olympic torch that is carried around the host country of the Olympic Games is lit by a burning glass, at the site of ancient Olympia in Greece.
Popular culture: verification attempts
There have been several real-world tests to evaluate the validity of the legend of Archimedes described above over the centuries, including a test by Comte de Buffon (circa 1747), documented in the paper titled "Invention De Miroirs Ardens, Pour Brusler a Une Grande Distance", and an experiment by John Scott, which was documented in an 1867 paper.
In 1973, Greek scientist Dr. Ioannis Sakkas, curious about whether Archimedes could really have used a "burning glass" to destroy the Roman fleet in 212 BC, lined up nearly 60 Greek sailors, each holding an oblong mirror tipped to catch the sun's rays and direct them at a wooden ship 160 feet away. The ship caught fire at once. Sakkas said after the experiment there was no doubt in his mind the great inventor could have used bronze mirrors to scuttle the Romans.
However, accounting for battle conditions makes such a weapon impractical, and modern tests have refuted such claims. An experiment was carried out by a group at the Massachusetts Institute of Technology in 2005. It concluded that although the theory was sound for stationary objects, the mirrors would not likely have been able to concentrate sufficient solar energy to set a ship on fire under battle conditions. Similar experiments were conducted on the popular science-based TV show MythBusters in 2004, 2006, and 2010, arriving at similarly negative results for the controversial myth.
However, an episode of Richard Hammond's Engineering Connections relating to the Keck Observatory (whose reflector glass is based on the Archimedes' Mirror) did successfully use a much smaller curved mirror to burn a wooden model, although the scaled-down model was not made of the same quality of materials as in the MythBusters effort.
See also
Diocles (mathematician)
Nimrud lens
Pyreliophorus
Visby lenses
Solar furnace
References
Further reading
Temple, Robert. The Crystal Sun.
External links
Magnifiers
Lenses
Ancient weapons | Burning glass | [
"Technology",
"Engineering"
] | 1,652 | [
"Magnifiers",
"Measuring instruments"
] |
3,742 | https://en.wikipedia.org/wiki/Bluetooth | Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to about 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402GHz to 2.48GHz. It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones, wireless speakers, hi-fi systems, car audio and wireless transmission between TVs and soundbars.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, which is licensed to individual qualifying devices. Around 4.7 billion Bluetooth integrated circuit chips are shipped annually. Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities.
Etymology
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.
According to Bluetooth's official website,
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.
The Bluetooth logo is a bind rune merging the Younger Futhark runes (ᚼ, Hagall) and (ᛒ, Bjarkan), Harald's initials.
History
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, and . Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing. Both were working for Ericsson in Lund. Principal design and development began in 1994 and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal.
Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM.
The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of Show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However, Ericsson had released the R520m in the first quarter of 2001, making the R520m the first commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001, which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.
Implementation
Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, including guard bands 2MHz wide at the bottom end and 3.5MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2MHz spacing, which accommodates 40 channels.
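The channel arithmetic just described can be written down in a few lines of Python. The hop sequence shown is only a toy stand-in: real devices derive the sequence from the master's address and clock, and AFH removes channels that suffer interference, so the random selection here is purely illustrative.

```python
import random

# Channel grids described above (values in MHz).
BR_EDR_CHANNELS = [2402 + k for k in range(79)]       # 79 channels, 1 MHz spacing
BLE_CHANNELS = [2402 + 2 * k for k in range(40)]      # 40 channels, 2 MHz spacing

HOPS_PER_SECOND = 1600
DWELL_TIME_US = 1_000_000 / HOPS_PER_SECOND           # 625 microseconds per hop

def toy_hop_sequence(seed, n_hops, channels=BR_EDR_CHANNELS):
    """Illustrative only: not the real Bluetooth hop-selection algorithm."""
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(n_hops)]

print(f"dwell time per channel: {DWELL_TIME_US:.0f} us")
print(toy_hop_sequence(seed=42, n_hops=5))
```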
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively.
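The BR and EDR bit rates follow directly from the fact that all three schemes keep a symbol rate of one megasymbol per second while packing 1, 2 or 3 bits into each symbol. The short sketch below is an illustration of that arithmetic rather than text from the specification.

```python
# Back-of-the-envelope check: bit rate = symbol rate x bits per symbol.
SYMBOL_RATE = 1_000_000  # symbols per second

BITS_PER_SYMBOL = {
    "GFSK (basic rate)": 1,   # 1 bit/symbol  -> 1 Mbit/s
    "pi/4-DQPSK (EDR2)": 2,   # 2 bits/symbol -> 2 Mbit/s
    "8-DPSK (EDR3)": 3,       # 3 bits/symbol -> 3 Mbit/s
}

for name, bits in BITS_PER_SYMBOL.items():
    print(f"{name}: {SYMBOL_RATE * bits / 1e6:.0f} Mbit/s instantaneous")
```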
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
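The slot bookkeeping above can be captured in a small sketch. It assumes single-slot packets and ignores guard times and the 3- and 5-slot cases, so it illustrates the even/odd alternation rather than modelling a real baseband.

```python
# Slot arithmetic for BR/EDR as described above (times in microseconds).
CLOCK_TICK_US = 312.5
SLOT_US = 2 * CLOCK_TICK_US      # 625 us per slot
SLOT_PAIR_US = 2 * SLOT_US       # 1250 us per slot pair

def who_transmits(slot_index):
    """Master transmits in even slots; the slave answers in odd slots."""
    return "master" if slot_index % 2 == 0 else "slave"

for slot in range(4):
    start_us = slot * SLOT_US
    print(f"slot {slot} starts at {start_us:.0f} us: {who_transmits(slot)} transmits")
```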
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.
Communication and connection
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.
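A toy data-structure sketch of the piconet rules just described, with one master, at most seven active slaves, and round-robin polling, is shown below. The class and method names are hypothetical and are not part of any Bluetooth API.

```python
from itertools import cycle

class Piconet:
    """Hypothetical model of a BR/EDR piconet, for illustration only."""
    MAX_ACTIVE_SLAVES = 7

    def __init__(self, master):
        self.master = master
        self.slaves = []

    def add_slave(self, device):
        if len(self.slaves) >= self.MAX_ACTIVE_SLAVES:
            raise RuntimeError("a piconet can have at most seven active slaves")
        self.slaves.append(device)

    def poll_order(self, rounds=1):
        """Master addresses one slave at a time, cycling through them."""
        schedule = cycle(self.slaves)
        return [next(schedule) for _ in range(rounds * len(self.slaves))]

net = Piconet("phone")
for name in ["headset", "watch", "keyboard"]:
    net.add_slave(name)
print(net.poll_order(rounds=2))
```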
Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi optical wireless path must be viable.
Bluetooth classes and power use
Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range. The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas.
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
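For a feel of how transmission power and receiver sensitivity translate into range, the following sketch inverts the standard free-space (Friis) path-loss formula. It assumes ideal free-space propagation and 0 dBi antennas, so the figures it prints are optimistic upper bounds that indoor links, attenuated by walls and fading, fall far short of; the example power and sensitivity values are assumptions, not figures from a specific product.

```python
import math

FREQ_HZ = 2.44e9   # middle of the 2.4 GHz ISM band
C = 3.0e8          # speed of light, m/s

def free_space_range_m(tx_power_dbm, rx_sensitivity_dbm,
                       tx_gain_dbi=0.0, rx_gain_dbi=0.0):
    """Distance at which the received signal just reaches the sensitivity floor."""
    allowed_path_loss_db = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sensitivity_dbm
    # Invert the free-space path-loss formula: FSPL(dB) = 20*log10(4*pi*d*f/c)
    return (C / (4 * math.pi * FREQ_HZ)) * 10 ** (allowed_path_loss_db / 20)

# Example: a +4 dBm (2.5 mW) transmitter talking to a -90 dBm receiver.
print(f"~{free_space_range_m(4, -90):.0f} m in ideal free space")
```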
Bluetooth profile
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles.
For example,
The Headset Profile (HSP) connects headphones and earbuds to a cell phone or laptop.
The Health Device Profile (HDP) can connect a cell phone to a digital thermometer or heart rate detector.
The Video Distribution Profile (VDP) sends a video stream from a video camera to a TV screen or a recording device.
Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.
List of applications
Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of audio and communication functions between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone).
Wireless communication between a smartphone and a smart lock for unlocking doors.
Wireless control of and communication with iOS and Android phones, tablets and portable wireless speakers.
Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC.
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP.
Triggering the camera shutter of a smartphone using a Bluetooth controlled selfie stick.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Game consoles have been using Bluetooth as a wireless communications protocol for peripherals since the seventh generation, including Nintendo's Wii and Sony's PlayStation 3 which use Bluetooth for their respective controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.
Wireless transmission of audio (a more reliable alternative to FM transmitters).
Live video streaming to a visual cortical implant device, demonstrated by Nabeel Fattah at Newcastle University in 2017.
Connection of motion controllers to a PC when using VR headsets.
Wireless connection between TVs and soundbars.
Devices
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definition headsets, modems, hearing aids and even watches. Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.
Computer requirements
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle".
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.
Operating system implementation
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).
The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.
There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.
NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well, however OpenBSD later removed it as unmaintained.
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 15 November 2014, and may require more work.
Specifications and features
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. In 2014 it had a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible with all earlier versions.
The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications:
The Bluetooth Core Specification, typically released every few years
Core Specification Addendum (CSA)
Core Specification Supplements (CSS), which can be released more frequently than Addenda
Errata, available with a Bluetooth SIG account (Errata login)
Bluetooth 1.0 and 1.0B
Products were not interoperable.
Anonymity was not possible, preventing certain services from using Bluetooth environments.
Bluetooth 1.1
Ratified as IEEE Standard 802.15.1–2002
Many errors found in the v1.0B specifications were fixed.
Added possibility of non-encrypted channels.
Received signal strength indicator (RSSI)
Bluetooth 1.2
Major enhancements include:
Faster connection and discovery
Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence
Higher transmission speeds in practice than in v1.1, up to 721 kbit/s
Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer
Host Controller Interface (HCI) operation with three-wire UART
Ratified as IEEE Standard 802.15.1–2005
Introduced flow control and retransmission modes for L2CAP.
Bluetooth 2.0 + EDR
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.
Bluetooth 2.1 + EDR
Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Bluetooth 3.0 + HS
Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0 or the earlier Core Specification Addendum 1.
L2CAP enhanced modes: Enhanced Retransmission Mode (ERTM) implements a reliable L2CAP channel, while Streaming Mode (SM) implements an unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1.
Alternative MAC/PHY: Enables the use of alternative MACs and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC/PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low-power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes.
Unicast Connectionless Data: Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.
Enhanced Power Control: Updates the power control feature to remove the open-loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed-loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link-loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.
Ultra-wideband
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap.
Bluetooth 4.0
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which has since been adopted. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation: dual-mode and single-mode, alongside enhanced versions of earlier designs. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant advance.
In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions.
In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. Several semiconductor companies, including Qualcomm Atheros, CSR, Broadcom and Texas Instruments, have announced the availability of chips meeting the standard. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality, resulting in a negligible cost increase compared to Classic Bluetooth.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
Bluetooth 4.1
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and improved bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.
New features of this specification include:
Mobile wireless service coexistence signaling
Train nudging and generalized interlaced scanning
Low Duty Cycle Directed Advertising
L2CAP connection-oriented and dedicated channels with credit-based flow control
Dual Mode and Topology
LE Link Layer Topology
802.11n PAL
Audio architecture updates for Wide Band Speech
Fast data advertising interval
Limited discovery time
Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Bluetooth 4.2
Released on 2 December 2014, it introduces features for the Internet of things.
The major areas of improvement are:
Bluetooth Low Energy Secure Connection with Data Packet Length Extension to improve the cryptographic protocol
Link Layer Privacy with Extended Scanner Filter Policies to improve data security
Internet Protocol Support Profile (IPSP) version 6 ready for Bluetooth smart devices to support the Internet of things and home automation
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.
Bluetooth 5
The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number, so it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change was made for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the data rate (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 also increases the capacity of connectionless services, such as location-relevant navigation, for low-energy Bluetooth connections.
The major areas of improvement are:
Slot Availability Mask (SAM)
2 Mbit/s PHY for LE
LE Long Range
High Duty Cycle Non-Connectable Advertising
LE Advertising Extensions
LE Channel Selection Algorithm #2
Features added in CSA5 – integrated in v5.0:
Higher Output Power
The following features were removed in this version of the specification:
Park State
Bluetooth 5.1
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.
The major areas of improvement are:
Angle of Arrival (AoA) and Angle of Departure (AoD), which are used for locating and tracking devices
Advertising Channel Index
GATT caching
Minor Enhancements batch 1:
HCI support for debug keys in LE Secure Connections
Sleep clock accuracy update mechanism
ADI field in scan response data
Interaction between QoS and Flow Specification
Block Host channel classification for secondary advertising
Allow the SID to appear in scan response reports
Specify the behavior when rules are violated
Periodic Advertising Sync Transfer
Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1:
Models
Mesh-based model hierarchy
The following features were removed in this version of the specification:
Unit keys
Bluetooth 5.2
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features:
Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT)
LE Power Control
LE Isochronous Channels
LE Audio, which is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE. Bluetooth LE Audio also allows one-to-many and many-to-one transmission, allowing multiple receivers from one source or one receiver for multiple sources, known as Auracast. It uses a new LC3 codec. BLE Audio also adds support for hearing aids. On 12 July 2022, the Bluetooth SIG announced the completion of Bluetooth LE Audio. The standard claims a lower minimum latency of 20–30 ms, versus 100–200 ms for Bluetooth Classic audio. At IFA in August 2023, Samsung announced support for Auracast through a software update for their Galaxy Buds2 Pro and two of their TVs. In October, users started getting updates for the earbuds.
Bluetooth 5.3
The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are:
Connection Subrating
Periodic Advertisement Interval
Channel Classification Enhancement
Encryption key size control enhancements
The following features were removed in this version of the specification:
Alternate MAC and PHY (AMP) Extension
Bluetooth 5.4
The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features:
Periodic Advertising with Responses (PAwR)
Encrypted Advertising Data
LE Security Levels Characteristic
Advertising Coding Selection
Bluetooth 6.0
The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024. This version adds the following features:
Bluetooth Channel Sounding
Decision-based advertising filtering
Monitoring advertisers
ISOAL enhancement
LL extended feature set
Frame space update
Technical information
Architecture
Software
To extend the compatibility of Bluetooth devices, devices that adhere to the standard use an interface called the Host Controller Interface (HCI) between the host and the controller.
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
Hardware
The hardware that makes up a Bluetooth device logically consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device; some functions may be delegated to hardware. The Link Controller is responsible for processing the baseband and managing the ARQ and physical-layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. the SBC codec) and data encryption. The CPU of the device is responsible for handling the Bluetooth-related instructions of the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which communicates with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
Bluetooth protocol stack
Bluetooth is defined as a layered protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate over Bluetooth almost universally use the HCI and RFCOMM protocols as well.
Link Manager
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other link managers and communicates with them via the Link Manager Protocol (LMP). To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
Transmission and reception of data.
Name request
Request of the link addresses.
Establishment of the connection.
Authentication.
Negotiation of link mode and connection establishment.
Host Controller Interface
The Host Controller Interface provides a command interface between the controller and the host.
Logical Link Control and Adaptation Protocol
The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU (a toy segmentation sketch appears at the end of this subsection).
In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.
Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower-layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
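The toy sketch below illustrates the segmentation and reassembly role described in this subsection, using the 672-byte default MTU of Basic mode. It is not a real L2CAP implementation and omits headers, channel identifiers and mode-specific behavior.

```python
# Toy illustration of L2CAP-style segmentation and reassembly (no headers,
# channel IDs, or retransmission logic), using the Basic mode default MTU.
DEFAULT_MTU = 672

def segment(payload: bytes, mtu: int = DEFAULT_MTU):
    """Split an upper-layer payload into MTU-sized fragments."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments):
    """Recombine fragments, in order, into the original payload."""
    return b"".join(fragments)

data = bytes(2000)
frags = segment(data)
assert reassemble(frags) == data
print(len(frags), [len(f) for f in frags])  # 3 fragments: 672, 672, 656 bytes
```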
Service Discovery Protocol
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally unique identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
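The short-form UUIDs mentioned above are expanded into full 128-bit UUIDs by inserting the assigned 16- or 32-bit value into the Bluetooth Base UUID. The sketch below shows that expansion; the example value 0x110B is the assigned identifier for the A2DP audio sink service class.

```python
# Expanding a short-form (16- or 32-bit) assigned UUID into its 128-bit form
# using the Bluetooth Base UUID 00000000-0000-1000-8000-00805F9B34FB.
import uuid

BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_uuid(short: int) -> uuid.UUID:
    """Place the short assigned value into the top 32 bits of the Base UUID."""
    return uuid.UUID(int=BASE_UUID.int | (short << 96))

# 0x110B is the assigned 16-bit UUID for the A2DP audio sink service class.
print(expand_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb
```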
Radio Frequency Communications
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
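Because RFCOMM behaves like a serial stream, applications can use it much as they would a TCP socket. As a hedged example, Linux builds of Python expose RFCOMM through the standard socket module; the device address and channel below are placeholders, not real endpoints.

```python
# Opening an RFCOMM "virtual serial port" with Python's standard socket
# module. AF_BLUETOOTH / BTPROTO_RFCOMM are available on Linux builds of
# Python; the address and channel here are placeholders only.
import socket

def open_rfcomm(addr: str = "00:11:22:33:44:55", channel: int = 1):
    """Connect an RFCOMM stream socket to a remote Bluetooth device."""
    s = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                      socket.BTPROTO_RFCOMM)
    s.connect((addr, channel))   # afterwards it behaves much like a TCP stream
    return s

# Example usage (requires a real, paired device at the given address):
# with open_rfcomm() as s:
#     s.sendall(b"AT\r")         # e.g. an AT command to a telephony profile
#     print(s.recv(1024))
```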
Bluetooth Network Encapsulation Protocol
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
Audio/Video Control Transport Protocol
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
Audio/Video Distribution Transport Protocol
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel; it is also intended for use by the Video Distribution Profile.
Telephony Control Protocol
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link.
TCP/IP/UDP Foundation Protocols for TCP/IP protocol suite
Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation
Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services.
Baseband error correction
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
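As a toy illustration of the 1/3 rate code mentioned above: each bit is transmitted three times and the receiver takes a majority vote, so any single bit error per triplet is corrected. The 2/3 rate code, a shortened Hamming code, is not shown here.

```python
# Toy 1/3 rate repetition FEC: each bit is sent three times; the receiver
# decodes by majority vote, correcting one flipped bit per triplet.
def fec13_encode(bits):
    return [b for b in bits for _ in range(3)]

def fec13_decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triplet = coded[i:i + 3]
        out.append(1 if sum(triplet) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
coded = fec13_encode(msg)
coded[4] ^= 1                      # simulate one corrupted received bit
assert fec13_decode(coded) == msg  # the error is corrected
```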
Setting up connections
Any Bluetooth device in discoverable mode transmits the following information on demand:
Device name
Device class
List of services
Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset)
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
Pairing and bonding
Motivation
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
Implementation
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
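The bookkeeping implied above can be pictured with a minimal, purely illustrative sketch: each device keeps a table of link keys indexed by peer address, and deleting an entry on one side removes the bond from that side only, so the other side may retain a stale key.

```python
# Illustrative bond bookkeeping (not a real pairing implementation):
# each side stores link keys by peer address; removing a key on one side
# leaves the other side holding a stale key for a device it is no longer
# paired with.
class BondStore:
    def __init__(self):
        self._keys = {}                    # peer address -> link key bytes

    def pair(self, peer_addr: str, link_key: bytes):
        self._keys[peer_addr] = link_key

    def is_bonded(self, peer_addr: str) -> bool:
        return peer_addr in self._keys

    def remove(self, peer_addr: str):
        self._keys.pop(peer_addr, None)

a, b = BondStore(), BondStore()
shared_key = bytes(16)                     # placeholder for a negotiated link key
a.pair("BB:BB:BB:BB:BB:BB", shared_key)
b.pair("AA:AA:AA:AA:AA:AA", shared_key)
a.remove("BB:BB:BB:BB:BB:BB")              # only B still holds a (stale) key
print(a.is_bonded("BB:BB:BB:BB:BB:BB"), b.is_bonded("AA:AA:AA:AA:AA:AA"))
```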
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes.
Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device.
Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length.
Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use.
Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man in the middle, or MITM attacks. SSP has the following authentication mechanisms:
Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.
Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly (a sketch of this computation appears after this list).
Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection.
Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.
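The following sketch is broadly modeled on the numeric comparison check described above: both devices hash the exchanged public keys and nonces and reduce the result to a six-digit number for the user to compare. The function and variable names are illustrative, and the inputs are random placeholders rather than a real pairing exchange.

```python
# Illustrative numeric-comparison confirmation value: hash the exchanged
# public keys and nonces, take the 32 least significant bits, and display
# six decimal digits. Modeled loosely on the SSP verification step; not a
# drop-in implementation of the specification.
import hashlib
import os

def compare_value(pk_a: bytes, pk_b: bytes, nonce_a: bytes, nonce_b: bytes) -> int:
    """Reduce a hash of the exchanged values to a six-digit confirmation number."""
    digest = hashlib.sha256(pk_a + pk_b + nonce_a + nonce_b).digest()
    return int.from_bytes(digest[-4:], "big") % 1_000_000

# Placeholder "public keys" and nonces; both sides would compute the same value.
pk_a, pk_b = os.urandom(32), os.urandom(32)
na, nb = os.urandom(16), os.urandom(16)
print(f"{compare_value(pk_a, pk_b, na, nb):06d}")
```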
SSP is considered simple for the following reasons:
In most cases, it does not require a user to generate a passkey.
For use cases not requiring MITM protection, user interaction can be eliminated.
For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.
Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.
Security concerns
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
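One way to see where a figure on the order of 23.5 hours comes from (an inference on the reader's part, not stated above) is that the piconet clock is a 28-bit counter ticking every 312.5 μs, so clock values repeat after roughly a day:

```python
# Back-of-envelope check: a 28-bit clock at 312.5 us per tick wraps after
# about a day, which is the order of magnitude of the key lifetime above.
TICK_US = 312.5
wrap_seconds = (2 ** 28) * TICK_US / 1e6
print(wrap_seconds / 3600)   # ~23.3 hours
```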
Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or a security attack.
Bluetooth v2.1 addresses this in the following ways:
Encryption is required for all non-SDP (Service Discovery Protocol) connections
A new Encryption Pause and Resume feature is used for all normal operations that require encryption to be disabled. This makes it easier to distinguish normal operation from security attacks.
The encryption key must be refreshed before it expires.
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Security
Overview
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.
The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities and exploits was published in 2007 by Andreas Becker.
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
Bluejacking
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device.
Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices.
History of security concerns
2001–2004
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.
The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended well beyond their nominal range with directional antennas and signal amplifiers.
This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.
2005
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.
In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.
2006
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.
In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.
2017
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.
2018
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology, identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.
2019
In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".
Google released an Android security patch on 5 August 2019, which removed this vulnerability.
2023
In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.
Health concerns
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM1800/1900 outputs 1000 mW, and GSM850/900 outputs 2000 mW.
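For reference, the class power limits above convert to dBm as follows; this is a routine unit conversion, not additional specification data.

```python
# Convert the Bluetooth class power limits from milliwatts to dBm:
# dBm = 10 * log10(P / 1 mW).
import math

for name, mw in [("Class 1", 100), ("Class 2", 2.5), ("Class 3", 1)]:
    print(name, f"{10 * math.log10(mw):.0f} dBm")
# Class 1 -> 20 dBm, Class 2 -> 4 dBm, Class 3 -> 0 dBm
```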
Award programs
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.
See also
ANT+
Bluetooth stack – building blocks that make up the various implementations of the Bluetooth protocol
List of Bluetooth profiles – features used within the Bluetooth stack
Bluesniping
BlueSoleil – proprietary Bluetooth driver
Bluetooth Low Energy beacons (AltBeacon, iBeacon, Eddystone)
Bluetooth mesh networking
Continua Health Alliance
DASH7
Audio headset
Wi-Fi hotspot
Java APIs for Bluetooth
Key finder
Li-Fi
List of Bluetooth protocols
MyriaNed
Near-field communication
NearLink
RuBee – secure wireless protocol alternative
Tethering
Thread (network protocol)
Wi-Fi HaLow
Zigbee – low-power lightweight wireless protocol in the ISM band based on IEEE 802.15.4
Notes
References
External links
Specifications at Bluetooth SIG
Bluetooth
Mobile computers
Networking standards
Wireless communication systems
Telecommunications-related introductions in 1989
Swedish inventions
Dutch inventions | Bluetooth | [
"Technology",
"Engineering"
] | 13,375 | [
"Computer standards",
"Wireless networking",
"Computer networks engineering",
"Wireless communication systems",
"Networking standards",
"Bluetooth"
] |
3,743 | https://en.wikipedia.org/wiki/Bluetooth%20Special%20Interest%20Group | The Bluetooth Special Interest Group (Bluetooth SIG) is the standards organization that oversees the development of Bluetooth standards and the licensing of the Bluetooth technologies and trademarks to manufacturers. The SIG is a not-for-profit, non-stock corporation founded in September 1998. The SIG is headquartered in Kirkland, Washington, US.
The SIG does not make, manufacture or sell Bluetooth-enabled products.
Introduction
Bluetooth technology provides a way to exchange information between wireless devices such as PDAs, laptops, computers, printers and digital cameras via a secure, low-cost, globally available short-range radio frequency band. Originally developed by Ericsson, Bluetooth technology is now used in many different products by many different manufacturers. These manufacturers must be either Associate or Promoter members of (see below) the Bluetooth SIG before they are granted early access to the Bluetooth specifications, but published Bluetooth specifications are available online via the Bluetooth SIG Website bluetooth.com.
The SIG owns the Bluetooth word mark, figure mark and combination mark. These trademarks are licensed out for use to companies that are incorporating Bluetooth wireless technology into their products. To become a licensee, a company must become a member of the Bluetooth SIG. The SIG also manages the Bluetooth SIG Qualification program, a certification process required for any product using Bluetooth wireless technology and a pre-condition of the intellectual property license for Bluetooth technology. The main tasks for the SIG are to publish the Bluetooth specifications, protect the Bluetooth trademarks and evangelize Bluetooth wireless technology. In 2016, the SIG introduced a new visual and creative identity to support Bluetooth technology as the catalyst for the Internet of Things (IoT). This change included an updated logo, a new tagline and deprecation of the Bluetooth Smart and Bluetooth Smart Ready logos.
At its inception in 1998, the Bluetooth SIG was primarily run by a staff effectively seconded from its member companies. In 2001 Tom Siep served as the group's managing director, and from 2002 to 2004 Mike McCamon led the group as its executive director. In 2004 he was replaced by Michael W. Foley (Mike). From 2012 to mid-2024, Mark Powell acted as the Bluetooth SIG's CEO/Executive Director. Effective 29 May 2024, Neville Meijers became the SIG's CEO. Beginning in 2002 a professional staff was hired, composed of operations, engineering and marketing specialists. From 2002 to 2004 the Bluetooth SIG was based in Overland Park, Kansas, US, and is now based in Kirkland, Washington. In addition to its professional staff, the SIG is supported by its more than 40,000 member companies who participate in the various working groups that produce the standardization documents and oversee the qualification process for new products and help to evangelize the technology.
Structure
The SIG members participate in study groups, expert groups, working groups along with committees.
Study groups
The study groups carry out research into their various areas which informs the development of the Bluetooth specifications. They may eventually become working groups in their own right.
Expert groups
The expert groups deal with issues of technical importance to all aspects of Bluetooth development. As with the Study Groups their work informs the working groups as well as the corporate groups.
Participation in the Expert Groups is restricted to Promoter members and Associate members.
Working groups
The working groups develop new Bluetooth specifications and enhance adopted specifications. They are responsible for the vast majority of published standards and specifications. Participation in the working groups is restricted to Promoter members and Associate members.
Committees
The committees of the SIG deal with the other aspects of licensing, marketing and review including developing and maintaining the Qualification Process, oversight of the Bluetooth specifications, and developing, improving and maintaining the test methodology and concepts as well as other strategic functions.
Membership
Any company incorporating Bluetooth wireless technology into products, using the technology to offer goods and services or simply re-branding a product with Bluetooth technology may become a member of the Bluetooth SIG. There are three levels of corporate membership totaling more than 20,000 members, and individuals from member companies may also participate.
Promoter members
These members are the most active in the SIG and have considerable influence over both the strategic and technological directions of Bluetooth as a whole.
Ericsson (founder member)
Intel (founder member)
Nokia (founder member)
Toshiba (founder member)
Microsoft (since 1999)
Lenovo (since 2005, replaced founder member IBM after the divestment of personal computing division)
Apple (since 2015)
Google (since 2017)
Telink Semiconductor (since 2019)
Each Promoter member has one seat (and one vote) on the board of directors and the Qualification Review Board (the body responsible for developing and maintaining the qualification process). They each may have multiple staff in the various working groups and committees that carry out the work of the SIG.
The SIG's website carries a full list of members.
Associate members
The Bluetooth SIG Associate membership fees have stayed the same since 2006. Associate membership is renewed annually, and the yearly fee depends on the individual company's revenue. Companies with annual revenue in excess of $100M US are considered Large Associates and pay annual membership fees of $42,000 US. Small Associates are those organizations with revenue of less than $100M US, and they join the SIG with an annual membership fee of $9,000 US. Associate members of the SIG get early access to draft specifications at versions 0.5 and 0.7 and are eligible to participate and gain a voting seat in working groups and committees, a key opportunity to work with other Associate and Promoter members on enhancing existing specifications. They are also eligible for enhanced marketing support and receive discounts on product qualification listings and on SIG events, including Bluetooth World and UnPlugFest (UPF) testing events.
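As an illustrative aside (not an official Bluetooth SIG tool or API), the two-tier fee rule described above can be sketched as a short Python helper; the function name is hypothetical, and the $100M revenue threshold and the $42,000 and $9,000 fees are simply the figures quoted in this section.

def associate_annual_fee_usd(annual_revenue_usd: float) -> int:
    """Return the yearly Associate membership fee implied by the tiers described above."""
    LARGE_ASSOCIATE_THRESHOLD_USD = 100_000_000  # revenue above $100M US makes a company a Large Associate
    return 42_000 if annual_revenue_usd > LARGE_ASSOCIATE_THRESHOLD_USD else 9_000

# Example usage: a $250M-revenue company pays the Large Associate fee, a $5M-revenue company the Small Associate fee.
print(associate_annual_fee_usd(250_000_000))  # 42000
print(associate_annual_fee_usd(5_000_000))    # 9000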
Adopter members
Adopter membership in the SIG is free and entitles members to use published Bluetooth wireless specifications and Bluetooth trademarks. Adopter members do not have early access to unpublished specifications and may not participate in working groups or committees to influence the development of the technology.
Individuals
Membership is not currently open to individuals.
Universities
Universities or other educational facilities are not accepted for membership.
Qualification
Next to the development of the technology itself, the qualification process is one of the most important aspects of Bluetooth technology, supporting interoperability, conformance to the Bluetooth specifications, and the strength of the Bluetooth brand.
Members of the Bluetooth SIG must complete the qualification and declaration process for their Bluetooth-enabled product(s) to demonstrate and declare compliance with the Bluetooth specifications.
The primary objective of the qualification process is for members to demonstrate their products' compliance with the adopted specifications through testing and documentation. After qualification is completed, members need to complete the declaration process, in which they declare compliance with both the Bluetooth Patent/Copyright License Agreement and the Bluetooth Trademark License Agreement ("BTLA").
An overview of both processes, including their steps, types and fees, is available on the Bluetooth SIG public portal.
Bluetooth Qualification Experts (BQEs) and Bluetooth Qualification Test Facilities (BQTFs) are available to support members through the processes. Members uncertain or unfamiliar with the qualification process are encouraged to consider using one or both of these service types.
See also
Wireless LAN
IEEE 802.15.1
References
External links
Bluetooth
Standards organizations in the United States
Organizations based in Washington (state)
Organizations established in 1998
Kirkland, Washington | Bluetooth Special Interest Group | [
"Technology"
] | 1,540 | [
"Wireless networking",
"Bluetooth"
] |
3,747 | https://en.wikipedia.org/wiki/Bill%20Gates | William Henry Gates III (born October 28, 1955) is an American businessman and philanthropist best known for co-founding the software company Microsoft with his childhood friend Paul Allen. He later held the positions of chairman, chief executive officer (CEO), president, and chief software architect of the company. Gates was also its largest individual shareholder until May 2014. He was a pioneer of the microcomputer revolution of the 1970s and 1980s.
In June 2008, Gates transitioned into a part-time role at Microsoft and full-time work at the Bill & Melinda Gates Foundation, the private charitable foundation he and his then-wife Melinda had established in 2000. He stepped down as chairman of the Microsoft board in February 2014 and assumed the role of technology adviser to support newly appointed CEO Satya Nadella. In March 2020, Gates left his board positions at Microsoft and Berkshire Hathaway to focus on his philanthropic efforts on climate change, global health and development, and education.
Gates was born and raised in Seattle, Washington. In 1975, he and Allen founded Microsoft in Albuquerque, New Mexico. Gates led the company as its chairman and chief executive officer until stepping down as CEO in January 2000, succeeded by Steve Ballmer, but he remained chairman of the board of directors and became chief software architect. During the late 1990s, he was criticized for his business tactics, which were considered anti-competitive.
Since 1987, Gates has been included in the Forbes list of the world's top billionaires. From 1995 to 2017, he held the title of the wealthiest person in the world every year except in 2008 and from 2010 to 2013. In 1999, he became the first ever centibillionaire when his net worth briefly surpassed US$100 billion. Since leaving day-to-day operations at Microsoft in 2008, Gates has pursued other business and philanthropic endeavors.
He is the founder and chairman of several companies, including BEN, Cascade Investment, TerraPower, Gates Ventures, and Breakthrough Energy. He has donated to various charitable organizations and scientific research programs through the Bill & Melinda Gates Foundation, reported to be the world's largest private charity. Through the foundation, he led an early 21st century vaccination campaign that significantly contributed to the eradication of the wild poliovirus in Africa. In 2010, Gates and Warren Buffett founded the Giving Pledge, whereby they and other billionaires pledge to give at least half of their wealth towards philanthropy.
Early life and education
William Henry Gates III was born on October 28, 1955, in Seattle, Washington, as the only son of William H. Gates Sr. (1925–2020) and his first wife, Mary Maxwell Gates (1929–1994). His ancestry includes English, German, and Irish/Scots-Irish. His father was a prominent lawyer, and his mother served on the board of directors of First Interstate BancSystem and United Way of America. Gates's maternal grandfather J. W. Maxwell was a national bank president. Gates also has an older sister Kristi (Kristianne) and a younger sister Libby. He is the fourth of his name in his family but is known as William Gates III or "Trey" (i.e., three) because his father had the "II" suffix. The family lived in the Sand Point area of Seattle in a home that was damaged by a rare tornado when Gates was 7.
When Gates was young his parents wanted him to pursue a career in law. During his childhood, his family regularly attended a church of the Congregational Christian Churches, a Protestant Reformed denomination.
Gates was small for his age and was bullied as a child. The family encouraged competition; one visitor reported that "it didn't matter whether it was hearts or pickleball or swimming to the dock; there was always a reward for winning and there was always a penalty for losing".
At age 13, he enrolled in the private Lakeside prep school. When he was in the eighth grade, the Mothers' Club at the school used proceeds from Lakeside School's rummage sale to buy a Teletype Model 33 ASR terminal and a block of computer time on a General Electric (GE) computer for the students. Gates took an interest in programming the GE system in BASIC, and he was excused from math classes to pursue his interest. He wrote his first computer program on this machine, an implementation of tic-tac-toe that allowed users to play games against the computer. Gates was fascinated by the machine and how it would always execute software code perfectly. After the Mothers' Club donation was exhausted, Gates and other students sought time on systems including DEC PDP minicomputers. One of these systems was a PDP-10 belonging to Computer Center Corporation (CCC), which banned Gates, Paul Allen, Ric Weiland, and Gates's best friend and first business partner Kent Evans for the summer after it caught them exploiting bugs in the operating system to obtain free computer time.
The four students formed the Lakeside Programmers Club to make money. At the end of the ban, they offered to find bugs in CCC's software in exchange for extra computer time. Rather than using the system remotely via Teletype, Gates went to CCC's offices and studied source code for various programs that ran on the system, including Fortran, Lisp, and machine language. The arrangement with CCC continued until 1970 when the company went out of business. The following year, a Lakeside teacher enlisted Gates and Evans to automate the school's class-scheduling system, providing them computer time and royalties in return. The duo worked diligently in order to have the program ready for their senior year. Towards the end of their junior year, Evans was killed in a mountain climbing accident, which Gates described as one of the saddest days of his life. He then turned to Allen who helped him finish the system for Lakeside.
At age 17, Gates formed a venture with Allen called Traf-O-Data to make traffic counters based on the Intel 8008 processor. In 1972, he served as a congressional page in the House of Representatives. He was a national merit scholar when he graduated from Lakeside School in 1973. He scored 1590 out of 1600 on the Scholastic Aptitude Tests (SAT) and enrolled at Harvard College in the autumn of 1973.
He did not stay at Harvard long enough to choose a concentration, but took mathematics (including Math 55) and graduate level computer science courses. While at Harvard, he met fellow student and future Microsoft CEO Steve Ballmer. Gates left Harvard after two years while Ballmer stayed and graduated magna cum laude. Years later, Ballmer succeeded Gates as Microsoft's CEO and maintained that position from 2000 until his resignation in 2014.
Gates devised an algorithm for pancake sorting, the problem of sorting a sequence using only prefix reversals ("flips"), as a solution to one of a series of unsolved problems presented in a combinatorics class by professor Harry Lewis. His bound on the number of flips required stood as the best known for over 30 years, and its successor improves on it by only about 2%. His solution was formalized and published in collaboration with Harvard computer scientist Christos Papadimitriou.
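To make the problem concrete, here is a minimal Python sketch of pancake sorting using the simple textbook strategy (repeatedly flip the largest unsorted element to the top and then down into place), which needs at most roughly 2n flips; it illustrates only the problem itself, not the Gates–Papadimitriou construction that improved the worst-case bound to about (5n+5)/3 flips. The function name and example values are illustrative.

def pancake_sort(stack):
    """Sort `stack` in place using only prefix reversals; return the flip sizes used."""
    flips = []
    for size in range(len(stack), 1, -1):
        # Locate the largest pancake among the first `size` elements.
        max_index = max(range(size), key=lambda i: stack[i])
        if max_index == size - 1:
            continue  # Already in its final position.
        if max_index != 0:
            # Flip it to the top of the stack...
            stack[:max_index + 1] = reversed(stack[:max_index + 1])
            flips.append(max_index + 1)
        # ...then flip it down into position size - 1.
        stack[:size] = reversed(stack[:size])
        flips.append(size)
    return flips

pancakes = [3, 6, 1, 5, 2, 4]
print(pancake_sort(pancakes))  # the sequence of flip sizes applied
print(pancakes)                # [1, 2, 3, 4, 5, 6]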
Gates remained in contact with Paul Allen and joined him at Honeywell during the summer of 1974. In 1975, the MITS Altair 8800 was released based on the Intel 8080 CPU, and Gates and Allen saw the opportunity to start their own computer software company. Gates dropped out of Harvard that same year. His parents were supportive of him after seeing how much he wanted to start his own company. He explained his decision to leave Harvard: "if things hadn't worked out, I could always go back to school. I was officially on leave."
Microsoft
BASIC
Gates read the January 1975 issue of Popular Electronics which demonstrated the Altair 8800, and contacted Micro Instrumentation and Telemetry Systems (MITS) to inform them that he and others were working on a BASIC interpreter for the platform. In reality, Gates and Allen did not have an Altair and had not written code for it; they merely wanted to gauge MITS's interest. MITS president Ed Roberts agreed to meet them for a demonstration, and over the course of a few weeks they developed an Altair emulator that ran on a minicomputer, and then the BASIC interpreter. The demonstration was held at MITS's offices in Albuquerque, New Mexico; it was a success and resulted in a deal with MITS to distribute the interpreter as Altair BASIC. MITS hired Allen, and Gates took a leave of absence from Harvard to work with him at MITS in November 1975. Allen named their partnership "Micro-Soft", a combination of "microcomputer" and "software", and their first office was in Albuquerque. The first employee Gates and Allen hired was their high school collaborator Ric Weiland. They dropped the hyphen within a year and officially registered the trade name "Microsoft" with the Secretary of the State of New Mexico on November 26, 1976. Gates never returned to Harvard to complete his studies.
Microsoft's Altair BASIC was popular with computer hobbyists, but Gates discovered that a pre-market copy had leaked out and was being widely copied and distributed. In February 1976, he wrote An Open Letter to Hobbyists in the MITS newsletter in which he asserted that more than 90% of the users of Microsoft Altair BASIC had not paid Microsoft for it and the Altair "hobby market" was in danger of eliminating the incentive for any professional developers to produce, distribute, and maintain high-quality software. This letter was unpopular with many computer hobbyists, but Gates persisted in his belief that software developers should be able to demand payment. Microsoft became independent of MITS in late 1976, and it continued to develop programming language software for various systems. The company moved from Albuquerque to Bellevue, Washington on January 1, 1979.
Gates said he personally reviewed and often rewrote every line of code that the company produced in its first five years. As the company grew, he transitioned into a manager role, then an executive. DONKEY.BAS is a computer game written in 1981 and included with early versions of the PC DOS operating system distributed with the original IBM PC. It is a driving game in which the player must avoid hitting donkeys. The game was written by Gates and Neil Konzen.
IBM partnership
IBM, the leading supplier of computer equipment to commercial enterprises at the time, approached Microsoft in July 1980 concerning software for its upcoming personal computer, the IBM PC, after Gates's mother mentioned Microsoft to John Opel, IBM's then CEO. IBM first proposed that Microsoft write the BASIC interpreter. IBM's representatives also mentioned that they needed an operating system, and Gates referred them to Digital Research (DRI), makers of the widely used CP/M operating system. IBM's discussions with Digital Research went poorly and they did not reach a licensing agreement. IBM representative Jack Sams mentioned the licensing difficulties during a subsequent meeting with Gates and asked if Microsoft could provide an operating system. A few weeks later, Gates and Allen proposed using 86-DOS, an operating system similar to CP/M, that Tim Paterson of Seattle Computer Products (SCP) had made for hardware similar to the PC. Microsoft made a deal with SCP to be the exclusive licensing agent of 86-DOS, and later the full owner. Microsoft employed Paterson to adapt the operating system for the PC and delivered it to IBM as PC DOS for a one-time fee of $50,000.
The contract itself only earned Microsoft a relatively small fee. It was the prestige brought to Microsoft by IBM's adoption of their operating system that would be the origin of Microsoft's transformation from a small business to the leading software company in the world. Gates had not offered to transfer the copyright on the operating system to IBM because he believed that other personal computer makers would clone IBM's PC hardware. They did, making the IBM-compatible PC, running DOS, a de facto standard. The sales of MS-DOS (the version of DOS sold to customers other than IBM) made Microsoft a major player in the industry. The press quickly identified Microsoft as being very influential on the IBM PC. PC Magazine asked if Gates was "the man behind the machine?".
Gates oversaw Microsoft's company restructuring on June 25, 1981, which re-incorporated the company in Washington state and made Gates the president and chairman of the board, with Paul Allen as vice president and vice chairman. In early 1983, Allen left the company after receiving a Hodgkin lymphoma diagnosis, effectively ending the formal business partnership between Gates and Allen, which had been strained months prior due to a contentious dispute over Microsoft equity. Later in the decade, Gates repaired his relationship with Allen and together the two donated millions to their childhood school Lakeside. They remained friends until Allen's death in October 2018.
Windows
Microsoft and Gates launched their first retail version of Microsoft Windows on November 20, 1985, in an attempt to fend off competition from Apple's Macintosh GUI, which had captivated consumers with its simplicity and ease of use. In August 1986, the company struck a deal with IBM to develop a separate operating system called OS/2. Although the two companies successfully developed the first version of the new system, the partnership deteriorated due to mounting creative differences. The operating system grew out of DOS in an organic fashion over a decade until Windows 95, which hid the DOS prompt by default. Windows XP was released one year after Gates stepped down as Microsoft CEO. Windows 8.1 was the last version of the OS released before Gates handed the chairmanship of the firm to John W. Thompson on February 5, 2014.
Management style
During Microsoft's early years, Gates was an active software developer, particularly in the company's programming language products, but his primary role in most of the company's history was as a manager and executive. He has not officially been on a development team since working on the TRS-80 Model 100, but he wrote code that shipped with the company's products as late as 1989. Jerry Pournelle wrote in 1985 when Gates announced Microsoft Excel: "Bill Gates likes the program, not because it's going to make him a lot of money (although I'm sure it will do that), but because it's a neat hack." During the late 1990s, he was criticized for his business tactics, which were considered anti-competitive. This opinion has been upheld by numerous court rulings.
In June 2006, Gates announced that he would transition out of his role at Microsoft to dedicate more time to philanthropy. He gradually divided his responsibilities between two successors when he placed Ray Ozzie in charge of management and Craig Mundie in charge of long-term product strategy. The process took two years to fully transfer his duties to Ozzie and Mundie, and was completed on June 27, 2008.
Post-Microsoft
Since leaving day-to-day operations at Microsoft, Gates has continued his philanthropy and works on other projects. He stepped down as chairman of Microsoft in February 2014 to become technology advisor at the firm to support newly appointed CEO Satya Nadella.
Gates provided his perspective on a range of issues in an interview published in the March 2014 issue of Rolling Stone magazine. He discussed climate change, his charitable activities, various tech companies and the people involved in them, and the state of America. In response to a question about his greatest fear when he looks 50 years into the future, Gates stated: "there'll be some really bad things that'll happen in the next 50 or 100 years, but hopefully none of them on the scale of, say, a million people that you didn't expect to die from a pandemic, or nuclear or bioterrorism." Gates also identified innovation as the "real driver of progress" and pronounced that "America's way better today than it's ever been." He has also often expressed concern about the potential harms of superintelligence, including in a Reddit "ask me anything" session.
In an interview that was held at the TED conference in March 2015, with Baidu co-founder and CEO, Robin Li, Gates said he would "highly recommend" Nick Bostrom's recent work, Superintelligence: Paths, Dangers, Strategies. During the conference, Gates warned that the world was not prepared for the next pandemic, a situation that would come to pass in late 2019 when the COVID-19 pandemic began. In March 2018, Gates met at his home in Seattle with Mohammed bin Salman, the crown prince and de facto ruler of Saudi Arabia to discuss investment opportunities for Saudi Vision 2030. In June 2019, Gates admitted that losing the mobile operating system race to Android was his biggest mistake. He stated that it was within their skill set of being the dominant player, but partially blames the antitrust litigation during the time. That same year, Gates became an advisory board member of the Bloomberg New Economy Forum.
In March 2020, Microsoft announced Gates would be leaving his board positions at Berkshire Hathaway and Microsoft to dedicate himself to philanthropic endeavors such as climate change, global health and development, and education. The Wall Street Journal reported in May 2021 that Gates stepped down before Microsoft's board finished its investigation into Gates's alleged inappropriate sexual relationship with a Microsoft employee, which an external law firm had begun probing in late 2019.
During the COVID-19 pandemic, Gates was widely treated by media outlets as an expert on the issue, despite not being a public official or having any prior medical training. His foundation did, however, establish the COVID-19 Therapeutics Accelerator in 2020 to hasten the development and evaluation of new and repurposed drugs and biologics to treat patients for COVID-19, and, as of February 2021, Gates said that he and Anthony Fauci frequently talked and collaborated on matters including vaccines and other medical innovations to fight the pandemic.
Business ventures and investments
Gates has a multi-billion dollar investment portfolio with stakes in companies in multiple sectors and has participated in several entrepreneurial ventures beyond Microsoft, including:
AutoNation, an automotive retailer which trades on the NYSE and in which Gates has a 16% stake.
bgC3 LLC, a think-tank and research company founded by Gates.
Canadian National Railway (CN), a Canadian Class I freight railway. As of 2019, Gates is the single largest shareholder of the company.
Cascade Investment LLC, a private investment and holding company incorporated in the United States, founded and controlled by Gates and headquartered in Kirkland, Washington.
Gates is the largest private owner of farmland in the United States with his landholdings owned through Cascade Investment totalling 242,000 acres across 19 states. He is the 49th largest private owner of land in the US.
Carbon Engineering, a for-profit venture founded by David Keith, which Gates helped fund. It is also supported by Chevron Corporation and Occidental Petroleum.
SCoPEx, Keith's academic venture in "sun-dimming" geoengineering, for which Gates provided most of the $12 million in funding.
Corbis (originally named Interactive Home Systems and now known as Branded Entertainment Network), a digital image licensing and rights services company founded and chaired by Gates.
EarthNow, a Seattle-based startup company aiming to blanket the Earth with live satellite video coverage. Gates is a large financial backer.
Eclipse Aviation, a defunct manufacturer of very light jets. Gates was a major stake-holder early on in the project.
Impossible Foods, a company that develops plant-based substitutes for meat products. Some of the $396 million that Patrick O. Brown raised for his business around 2014 to 2017 came from Gates.
Ecolab, a global provider of water, hygiene and energy technologies and services to the food, energy, healthcare, industrial and hospitality markets. Combined with the shares owned by the Foundation, Gates owns 11.6% of the company. A shareholder agreement in 2012 allowed him to own up to 25% of the company, but this agreement was removed.
ResearchGate, a social networking site for scientists. Gates participated in a $35 million round of financing along with other investors.
TerraPower, a nuclear reactor design company co-founded and chaired by Gates, which is developing next generation traveling-wave reactor nuclear power plants in an effort to tackle climate change.
Breakthrough Energy Ventures, a closed fund for wealthy individuals who seek ROI on a 20-year horizon (see next section), which "is funding green start-ups and a host of other low-carbon entrepreneurial projects, including everything from advanced nuclear technology to synthetic breast milk." It was founded by Gates in 2015.
Ginkgo Bioworks, a biotech startup that received $350 million in venture funding in 2019, in part from Gates's investment firm Cascade Investment.
Luminous Computing, a company that develops neuromorphic photonic integrated circuits for AI acceleration.
Mologic, a British diagnostic technology company that Gates purchased, along with the Soros Economic Development Fund, "which has developed 10-minute Covid lateral flow tests that it aims to make for as little as $1."
Climate change and energy
Gates considers climate change and global access to energy to be critical, interrelated issues. He has urged governments and the private sector to invest in research and development to make clean, reliable energy cheaper. Gates envisions that a breakthrough innovation in sustainable energy technology could drive down both greenhouse gas emissions and poverty, and bring economic benefits by stabilizing energy prices. In 2011, he said: "If you gave me the choice between picking the next 10 presidents or ensuring that energy is environmentally friendly and a quarter as costly, I'd pick the energy thing."
In 2015, he wrote about the challenge of transitioning the world's energy system from one based primarily on fossil fuels to one based on sustainable energy sources. Global energy transitions have historically taken decades. He wrote, "I believe we can make this transition faster, both because the pace of innovation is accelerating, and because we have never had such an urgent reason to move from one source of energy to another." This rapid transition, according to Gates, would depend on increased government funding for basic research and financially risky private-sector investment, to enable innovation in diverse areas such as nuclear energy, grid energy storage to facilitate greater use of solar and wind energy, and solar fuels.
Gates spearheaded two initiatives that he announced at the 2015 United Nations Climate Change Conference in Paris. One was Mission Innovation, in which 20 national governments pledged to double their spending on research and development for carbon-free energy over five years. Another initiative was Breakthrough Energy, a group of investors who agreed to fund high-risk startups in clean energy technologies. Gates, who had already invested $1 billion of his own money in innovative energy startups, committed a further $1 billion to Breakthrough Energy. In December 2020, he called for the U.S. federal government to create institutes for clean energy research, analogous to the National Institutes of Health. Gates has also urged rich nations to shift to 100% synthetic beef industries to reduce greenhouse gas emissions from food production.
Gates has been criticized for holding a large stake in Signature Aviation, a company that services emissions-intensive private jets. In 2019, he began to divest from fossil fuels. He does not expect divestment itself to have much practical impact, but says that if his efforts to provide alternatives were to fail, he would not want to personally benefit from an increase in fossil fuel stock prices. After he published his book How to Avoid a Climate Disaster, parts of the climate activist community criticized Gates's approach as technological solutionism. In 2022, the educational streaming service Wondrium produced the series "Solving for Zero: The Search for Climate Innovation", inspired by the book.
In June 2021, Gates's company TerraPower and Warren Buffett's PacifiCorp announced plans for the first sodium-cooled nuclear reactor in Wyoming. Wyoming Governor Mark Gordon hailed the project as a step toward carbon-negative nuclear power. Wyoming Senator John Barrasso also said that it could boost the state's once-active uranium mining industry.
Gates made significant efforts to help pass the Inflation Reduction Act of 2022 because of its importance for the climate. He had tried to convince Joe Manchin to support a climate bill since 2019, and especially in the months before the bill's adoption. The bill is expected to cut greenhouse gas emissions by an amount comparable to "eliminating the annual planet-warming pollution of France and Germany combined" and may help to limit the warming of the planet to 1.5 degrees, the target of the Paris Agreement. He thanked both Joe Manchin and Chuck Schumer for their efforts in a guest essay in The New York Times, where he wrote that the Inflation Reduction Act of 2022 "may be the single most important piece of climate legislation in American history" given its potential to spur the development of new technologies. Gates gave further insights on climate change in his commencement address at Northern Arizona University on May 6, 2023, where he was awarded an honorary doctorate.
Political positions
In October 2024, The New York Times reported Gates had recently donated $50 million to Future Forward USA Action, a 501(c)(4) organization supporting Kamala Harris's 2024 presidential campaign. In response to the report, he did not explicitly address the donation or endorse Harris, but said "this election is different."
Regulation of the software industry
In 1998, Gates rejected the need for regulation of the software industry in testimony before the United States Senate. During the Federal Trade Commission's (FTC) investigation of Microsoft in the 1990s, Gates was reportedly upset at then-Commissioner Dennis Yao for "float[ing] a line of hypothetical questions suggesting possible curbs on Microsoft's growing monopoly power".
Donald Trump Facebook ban
After Facebook and Twitter banned Donald Trump from their platforms as a result of the 2020 United States presidential election, which led to the January 6 United States Capitol attack, Gates said in February 2021 that a permanent ban of Trump "would be a shame" and would be an "extreme measure". He warned that it would cause "polarization" if users with different political views divided up among various social networks, and said: "I don't think banning somebody who actually did get a fair number of votes (in the presidential election) – well less than a majority – forever would be that good."
Patents for COVID-19 vaccines
In April 2021, during the COVID-19 pandemic, Gates was criticized for suggesting that pharmaceutical companies should hold onto patents for COVID-19 vaccines. The criticism came due to the possibility of this preventing poorer nations from obtaining adequate vaccines. Tara Van Ho of the University of Essex stated, "Gates speaks as if all the lives being lost in India are inevitable but eventually the West will help when in reality the US & UK are holding their feet on the neck of developing states by refusing to break [intellectual property rights] protections. It's disgusting."
Gates is opposed to the TRIPS waiver. Bloomberg News reported that he argued Oxford University should not give away the rights to its COVID-19 research, as it had announced it would, but should instead sell them to a single industry partner, as it ultimately did. His views on the value of legal monopolies in medicine have been linked to his views on legal monopolies in software.
Cryptocurrencies
Gates has been critical of cryptocurrencies like Bitcoin. According to him, cryptocurrencies provide no "valuable output", contribute nothing to society, and pose a danger especially for smaller investors who could not survive the potentially high losses. Gates also does not own any cryptocurrencies himself.
Philanthropy
Bill & Melinda Gates Foundation
Gates studied the work of Andrew Carnegie and John D. Rockefeller, and donated some of his Microsoft stock in 1994 to create the "William H. Gates Foundation". In 2000, Gates and his wife combined three family foundations and donated stock valued at $5 billion to create the Bill & Melinda Gates Foundation, which was identified by the Funds for NGOs company in 2013 as the world's largest charitable foundation, with assets reportedly valued at more than $34.6 billion. The foundation allows benefactors to access information that shows how its money is being spent, unlike other major charitable organizations such as the Wellcome Trust. Gates, through his foundation, also donated $20 million to Carnegie Mellon University for a new building named the Gates Center for Computer Science, which opened in 2009.
Gates has credited the generosity and extensive philanthropy of David Rockefeller as a major influence. He and his father met with Rockefeller several times, and their charity work is partly modeled on the Rockefeller family's philanthropic focus, whereby they are interested in tackling the global problems that are ignored by governments and other organizations.
The foundation is organized into several program areas, including the Global Development Division, Global Health Division, United States Division, and Global Policy & Advocacy Division. Among others, it supports a wide range of public health projects, granting aid to fight transmissible diseases such as AIDS, tuberculosis and malaria, as well as widespread vaccine programs to eradicate polio. It grants funds to learning institutes and libraries and supports scholarships at universities. The foundation established a water, sanitation and hygiene program to provide sustainable sanitation services in poor countries. Its agriculture division supports the International Rice Research Institute in developing Golden Rice, a genetically modified rice variant used to combat vitamin A deficiency. The foundation aims to provide women and girls in the developing world with information and support regarding contraception and, ultimately, universal access to consensual family planning. In 2007, the Los Angeles Times criticized the foundation for investing its assets in companies that have been accused of worsening poverty and pollution, and in pharmaceutical firms that do not sell to developing countries. Although the foundation announced a review of its investments to assess social responsibility, the review was subsequently canceled and the foundation upheld its policy of investing for maximum return, while using voting rights to influence company practices.
Gates delivered his thoughts in a fireside chat moderated by journalist and news anchor Shereen Bhan virtually at the Singapore FinTech Festival on December 8, 2020, on the topic, "Building Infrastructure for Resilience: What the COVID-19 Response Can Teach Us About How to Scale Financial Inclusion".
Gates favors the normalization of COVID-19 masks. In a November 2020 interview, he said: "What are these, like, nudists? I mean, you know, we ask you to wear pants, and no American says, or very few Americans say, that that's, like, some terrible thing."
Personal donations
Melinda Gates suggested that people should emulate the philanthropic efforts of the Salwen family, who sold their home and gave away half of its value, as detailed in their book, The Power of Half. Gates and his wife invited Joan Salwen to Seattle to speak about what the family had done, and on December 9, 2010, Bill and Melinda Gates and investor Warren Buffett signed a commitment they called the "Giving Pledge", under which all three promised to donate at least half of their wealth to charity over the course of time. The foundation has received criticism, particularly over its role in Common Core, with critics stating the support is "cronyist" in that it profits from "federal, state, and local contracts."
Gates has also provided personal donations to educational institutions. In 1999, Gates donated $20 million to the Massachusetts Institute of Technology for the construction of a computer laboratory named the "William H. Gates Building" that was designed by architect Frank Gehry. While Microsoft had previously given financial support to the institution, this was the first personal donation received from Gates.
The Maxwell Dworkin Laboratory of the Harvard John A. Paulson School of Engineering and Applied Sciences is named after the mothers of both Gates and Microsoft President Steven A. Ballmer. Both men studied at Harvard (Ballmer was a member of the graduating class of 1977, while Gates left his studies for Microsoft) and donated funds for the laboratory's construction. Gates also donated $6 million to the construction of the Gates Computer Science Building, completed in January 1996, on the campus of Stanford University. The building contains the Computer Science Department and the Computer Systems Laboratory (CSL) of Stanford's Engineering department.
Since 2005, Gates and his foundation have taken an interest in solving global sanitation problems. For example, they announced the "Reinvent the Toilet Challenge", which has received considerable media interest. To raise awareness for the topic of sanitation and possible solutions, Gates drank water that was "produced from human feces" in 2014 – it was produced by a sewage sludge treatment process called the Omni Processor. In early 2015, he also appeared with Jimmy Fallon on The Tonight Show and challenged him to see if he could taste the difference between this reclaimed water and bottled water.
In November 2017, Gates said he would give $50 million to the Dementia Discovery Fund, a venture capital fund that seeks treatment for Alzheimer's disease. He also pledged an additional $50 million to start-up ventures working in Alzheimer's research. Bill and Melinda Gates have said that they intend to leave their three children $10 million each as their inheritance. With only $30 million kept in the family, they are expected to give away about 99.96% of their wealth. On August 25, 2018, Gates distributed $600,000 through his foundation via UNICEF which is helping flood affected victims in Kerala, India.
In June 2018, Gates offered free ebooks to all new graduates of U.S. colleges and universities, and in 2021 he offered free ebooks to all college and university students around the world. The Bill & Melinda Gates Foundation partially funds OpenStax, which creates and provides free digital textbooks. In July 2022 he reiterated the commitment he had made by starting the Giving Pledge campaign, announcing on his Twitter channel that he planned to give 'virtually all' his wealth to charity and eventually 'move off of the list of the world's richest people.'
Charity sports events
In April 2017, Gates partnered with Swiss tennis player Roger Federer in playing in the Match for Africa 4, a noncompetitive tennis match at a sold-out Key Arena in Seattle. The event was in support of the Roger Federer Foundation's charity efforts in Africa. Federer and Gates played against John Isner, the top-ranked American player for much of the 2010s, and Mike McCready, the lead guitarist for Pearl Jam. The pair won the match 6 games to 4. Overall, they raised $2 million for children in Africa. The following year, Gates and Federer returned to play in the Match for Africa 5 on March 5, 2018, at San Jose's SAP Center. Their opponents were Jack Sock, one of the top American players and a grand slam winner in doubles, and Savannah Guthrie, a co-anchor for NBC's Today show. Gates and Federer recorded their second match victory together by a score of 6–3, and the event raised over $2.5 million.
Books
In 1989, Gates wrote the foreword to the Microsoft Press book Learn BASIC Now, by Michael Halvorson and David Rygmyr, reflecting on the growth of the BASIC language and its use in most of the era's personal computers. He also sketched out plans for BASIC's use as a universal language to embellish or alter the performance of a range of software applications.
Gates has authored several books. The Road Ahead, co-authored with Microsoft executive Nathan Myhrvold and journalist Peter Rinearson, was published in November 1995. It summarized the implications of the personal computing revolution and described a future profoundly changed by the arrival of a global information superhighway. His second book, Business @ the Speed of Thought, co-authored with Collins Hemingway, was published in 1999, and discusses how business and technology are integrated, and shows how digital infrastructures and information networks can help to get an edge on the competition. In 2021 he published How to Avoid a Climate Disaster, which presents what Gates learned in over a decade of studying climate change and investing in innovations to address climate problems. Following the COVID-19 pandemic Gates published How to Prevent the Next Pandemic in 2022 which proposes a "Global Epidemic Response and Mobilization" (GERM) team with annual funding of $1 billion, under the auspices of the WHO.
The first of Gates's planned three memoirs, Source Code, will be published in 2025.
Personal life
Gates is an avid reader, and the ceiling of his large home library is engraved with a quotation from The Great Gatsby. He also enjoys bridge, golf, and tennis. His days are planned for him on a minute-by-minute basis, similarly to the U.S. president's schedule. Despite his wealth and extensive business travel, Gates flew coach (economy class) in commercial aircraft until 1997, when he bought a private jet.
Gates purchased the Codex Leicester, a collection of scientific writings by Leonardo da Vinci, for US$30.8 million at an auction in 1994. In 1998, he reportedly paid $30 million for the original 1885 maritime painting Lost on the Grand Banks, at the time a record price for an American painting. In 2016, he revealed that he was color-blind. On May 10, 2022, Gates said that he tested positive for COVID-19 and was experiencing mild symptoms. Gates has received three doses of the COVID-19 vaccine.
Marriage, family and divorce
In 1987, at a trade fair in New York, Gates met Melinda French, a recent graduate of Duke University who had begun working at Microsoft around four months earlier. Gates and French became engaged in 1993 after dating for six years. They married on January 1, 1994, at the 12th hole of the Jack Nicklaus–designed Manele Golf Course on the Hawaiian Island of Lānaʻi. They had three children together: Jennifer Katherine Gates Nassar (born April 26, 1996), Rory John Gates (born May 23, 1999), and Phoebe Adele Gates (born September 14, 2002). The family resided in an earth-sheltered mansion in the side of a hill overlooking Lake Washington in Medina, Washington. In 2009, property taxes on the mansion were reported to be US$1.063 million, on a total assessed value of US$147.5 million. The estate has a swimming pool with an underwater music system, as well as a gym and a dining room.
On May 3, 2021, the couple announced their decision to divorce after more than 27 years of marriage. The Wall Street Journal reported that Melinda had begun meeting with divorce attorneys in 2019, citing interviews that suggested Gates's ties with Jeffrey Epstein were among her concerns. However, the couple delayed their divorce until their youngest daughter Phoebe had graduated from high school. The divorce was finalized on August 2, 2021, and the financial details have remained confidential. In February 2023, Gates confirmed that he was dating Paula Hurd, widow of former Oracle Corporation and Hewlett-Packard chief executive Mark Hurd. Gates became a grandfather in March 2023 when his eldest daughter Jennifer, who had married Olympic equestrian Nayel Nassar in October 2021, gave birth to a daughter. In October 2024, Jennifer gave birth to a second daughter.
Public image
Gates's public image has changed over the years. At first he was perceived as a brilliant but ruthless "robber baron", a "nerd-turned-tycoon". Starting in 2000 with the foundation of the Bill & Melinda Gates Foundation, and particularly after he stepped down as head of Microsoft, he turned his attention to philanthropy, spending more than $50 billion on causes like health, poverty, and education. His image morphed from "tyrannical technocrat to saintly savior" to a "huggable billionaire techno-philanthropist", celebrated on magazine covers and sought after for his opinions on major issues like global health and climate change. Still another shift in public opinion came in 2021 with the announcement that he and Melinda were divorcing. Coverage of that proceeding brought out information about romantic pursuits of women who worked for him, a long-term extra-marital affair, and a friendship with convicted sex offender Jeffrey Epstein. This information and his response to the COVID-19 pandemic resulted in some deterioration of his public image, going from "a lovable nerd who was out to save the world" to "a tech supervillain who wants to protect profits over public health."
Investigative journalist Tim Schwab has accused Gates of using his contributions to the media to shape their coverage of him in order to protect his public image. In September 2022, Politico published an exposé critical of NGO leadership at the helm of the worldwide COVID-19 pandemic response, written in cooperation with the German newspaper Die Welt. Criticisms included the interconnectivity of the non-profits with Gates, as well as his personal lack of formal credentials in medicine.
Gates and the projects of his foundation have been the subject of many conspiracy theories that proliferate on Facebook and elsewhere. He has been implausibly accused of attempting to depopulate the world, distributing harmful or unethical vaccines, and implanting people with privacy-violating microchips. These unfounded theories reached a new level of influence during the COVID-19 pandemic when, according to New York Times journalist Rory Smith, the uncertainties of pandemic life drove people to seek explanations from the Internet. When asked about the theories, Gates has remarked that some people are tempted by the "simple explanation" that an evil person rather than biological factors are to blame, and that he does not know for what purpose anyone believes he would want to track them with microchips.
Religious views
In an interview with Rolling Stone, Gates said in regard to his faith: "The moral systems of religion, I think, are super important. We've raised our kids in a religious way; they've gone to the Catholic church that Melinda goes to and I participate in. I've been very lucky, and therefore I owe it to try and reduce the inequity in the world. And that's kind of a religious belief. I mean, it's at least a moral belief." In the same 2014 interview he also said: "I agree with people like Richard Dawkins that mankind felt the need for creation myths. Before we really began to understand disease and the weather and things like that, we sought false explanations for them. Now science has filled in some of the realm – not all – that religion used to fill. But the mystery and the beauty of the world is overwhelmingly amazing, and there's no scientific explanation of how it came about. To say that it was generated by random numbers, that does seem, you know, sort of an uncharitable view [laughs]. I think it makes sense to believe in God, but exactly what decision in your life you make differently because of it, I don't know."
Wealth
In 1987, Gates was listed as a billionaire in Forbes magazine's first ever America's richest issue; he was the world's youngest-ever self-made billionaire, with a net worth of $1.25 billion. Since then, he has been featured on The World's Billionaires list and was ranked as the richest person in 1995, 1996, 1998–2007, and 2009, maintaining the position until 2018, when Jeff Bezos surpassed his wealth. Gates was ranked first on the Forbes 400 list of wealthiest Americans from 1993 to 2007, in 2009, and from 2014 to 2017. He has an estimated net worth of US$154 billion, making him the sixth-richest person in the world according to the Bloomberg Billionaires Index.
Gates's wealth briefly surpassed US$100 billion in 1999, making him the first person to reach this net worth. After 2000, the nominal value of his Microsoft holdings declined, partly because of the decline in Microsoft's stock price after the dot-com bubble burst, and partly because of the multi-billion dollar donations he had made to his charitable foundations. In May 2006, Gates remarked that he wished that he were not the richest man in the world, because he disliked the attention that it brought. In March 2010, Gates was the second wealthiest person after Carlos Slim, but regained the top position in 2013, according to the Bloomberg Billionaires Index. Slim regained the position again in June 2014 (but then lost the top position back to Gates). Between 2009 and 2014, his wealth doubled from US$40 billion to US$82 billion. In October 2017, Gates was surpassed by Amazon founder Jeff Bezos as the richest person in the world. In the Forbes 400 list of wealthiest Americans in 2023, he was ranked sixth with a wealth of $115.0 billion. He once again became the richest person in the world in November 2019 after a 48% increase in Microsoft shares, surpassing Bezos. Gates told the BBC, "I've paid more tax than any individual ever, and gladly so ... I've paid over $6 billion in taxes." He is a proponent of higher taxes, particularly for the rich.
Gates has several investments outside Microsoft, which in 2006 paid him a salary of US$616,667 and a bonus of US$350,000, for a total of US$966,667. In 1989, he founded Corbis, a digital imaging company. In 2004, he became a board member of Berkshire Hathaway, the investment company headed by long-time friend Warren Buffett.
Controversies
Antitrust litigation
During his tenure as CEO of Microsoft, Gates approved of many decisions that led to antitrust litigation over Microsoft's business practices. In the 1998 United States v. Microsoft case, Gates gave deposition testimony that several journalists characterized as evasive. He argued with examiner David Boies over the contextual meaning of words such as "compete", "concerned", and "we". Later in the year, when portions of the videotaped deposition were played back in court, the judge was seen laughing and shaking his head.
Gates later said that he had simply resisted attempts by Boies to mischaracterize his words and actions. "Did I fence with Boies? ... I plead guilty ... rudeness to Boies in the first degree." Despite Gates's denials, the judge ruled that Microsoft had committed monopolization, tying and blocking competition, each in violation of the Sherman Antitrust Act.
Treatment of colleagues and employees
Gates had primary responsibility for Microsoft's product strategy from the company's founding in 1975 until 2006. He gained a reputation for being distant from others; an industry executive complained in 1981 that "Gates is notorious for not being reachable by phone and for not returning phone calls." An Atari executive recalled that he showed Gates a game and defeated him 35 of 37 times. When they met again a month later, Gates "won or tied every game. He had studied the game until he solved it. That is a competitor".
In the early 1980s, while business partner Paul Allen was undergoing treatments for cancer, Gates—according to Allen—conspired to reduce Allen's share in Microsoft by issuing himself stock options. In his autobiography, Allen would later recall that Gates was "scheming to rip me off. It was mercenary opportunism plain and simple". Gates says he remembers the episode differently. Allen would also recall that Gates was prone to shouting episodes.
Gates has often been accused of bullying Microsoft employees. He met regularly with Microsoft's senior managers and program managers, and the managers described him as being verbally combative, berating them for perceived holes in their business strategies or proposals that placed the company's long-term interests at risk. Gates saw competition in personal terms; when Borland's Turbo Pascal performed better than Microsoft's own tools, he yelled at programming director Greg Whitten "for half an hour" because, Gates believed, Borland's Philippe Kahn had surpassed Gates. Gates interrupted presentations with such comments as "that's the stupidest thing I've ever heard" and "why don't you just give up your options and join the Peace Corps?" The target of his outburst would then have to defend the proposal in detail until Gates was fully convinced. Not all harsh language was criticism; a manager recalled that "You're full of shit. That's the stupidest fucking thing I've ever heard" meant that Gates was amazed. "In the lore of Microsoft, if Bill says that to you, you're made". When subordinates appeared to be procrastinating, he was known to remark sarcastically, "I'll do it over the weekend".
Relationship with Jeffrey Epstein
A 2019 New York Times article reported that Gates's relationship with financier Jeffrey Epstein started in 2011, just a few years after Epstein was convicted for procuring a child for prostitution, and continued for some years, including a visit to Epstein's house with his wife in the fall of 2013, despite her declared discomfort. Gates said in 2011 about Epstein: "His lifestyle is very different and kind of intriguing although it would not work for me".
The depth of the friendship between Gates and Epstein is unclear though Gates generally commented that "I met him. I didn't have any business relationship or friendship with him". However, Gates visited Epstein "many times, despite [Epstein's] past". It was reported that Epstein and Gates "discussed the Gates Foundation and philanthropy". However, in an interview in 2019 Gates completely denied any connection between Epstein and the Gates Foundation or his philanthropy generally. In August 2021, Gates said the reason he had meetings with Epstein was because Gates hoped Epstein could provide money for philanthropic work, though nothing came of the idea. Gates added, "It was a huge mistake to spend time with him, to give him the credibility of being there." Gates came under further scrutiny after it was revealed that he had travelled on Epstein's private jet, though further claims about Gates travelling to Little St James proved unsubstantiated.
It has also been reported that Epstein and Gates met with Nobel Committee chair Thorbjørn Jagland at his residence in Strasbourg, France, in March 2013 to discuss the Nobel Prize. Also in attendance were representatives of the International Peace Institute which has received millions in grants from the Gates Foundation, including a $2.5 million "community engagement" grant in October 2013. In 2023, it was reported that Epstein threatened to expose an alleged affair Gates had with a Russian bridge player.
Recognition
Time magazine listed Gates as one of the 100 most influential people of the 20th century in 1999, as well as one of the 100 most influential people of 2004, 2005, and 2006.
Time also collectively named Gates, his wife Melinda and U2's lead singer Bono as the 2005 Persons of the Year for their humanitarian efforts. In 2006, he was voted eighth in the list of "Heroes of our time" published by New Statesman.
Gates was listed in the Sunday Times power list in 1999, named CEO of the year by Chief Executive Officers magazine in 1994, ranked number one in the "Top 50 Cyber Elite" by Time in 1998, ranked number two in the Upside Elite 100 in 1999, and was also included in The Guardian as one of the "Top 100 influential people in media" in 2001.
Gates has received honorary doctorates from Nyenrode Business Universiteit (1996), KTH Royal Institute of Technology (2002), Waseda University (2005), Tsinghua University (2007), Harvard University (2007), the Karolinska Institute (2007), the University of Cambridge (2009), and Northern Arizona University (2023). He was also made an honorary trustee of Peking University in 2007.
In 1994, he was honored as the 20th Distinguished Fellow of the British Computer Society (DFBCS). In 1999, Gates received New York Institute of Technology's President's Medal.
Gates was elected a Member of the US National Academy of Engineering in 1996 "for contributions to the founding and development of personal computing".
Entomologists named Bill Gates' flower fly in his honor in 1997.
He was awarded American Library Association Honorary Membership in 1998.
In 2002, Bill and Melinda Gates received the Jefferson Award for Greatest Public Service Benefiting the Disadvantaged.
Gates was made an Honorary Knight Commander of the Order of the British Empire (KBE) by Queen Elizabeth II in 2005.
He was given the 2006 James C. Morgan Global Humanitarian Award from the Tech Awards.
In January 2006, he was awarded the Grand Cross of the Order of Prince Henry by the then President of Portugal Jorge Sampaio.
In November 2006, he was awarded the Placard of the Order of the Aztec Eagle, together with his wife Melinda, who was awarded the Insignia of the same order, both for their philanthropic work around the world in the areas of health and education, particularly in Mexico.
Gates received the 2010 Bower Award for Business Leadership from The Franklin Institute for his achievements at Microsoft and his philanthropic work.
Also in 2010, he was honored with the Silver Buffalo Award by the Boy Scouts of America, its highest award for adults, for his service to youth.
According to Forbes, Gates was ranked as the fourth most powerful person in the world in 2012, up from fifth in 2011.
In 2015, Gates and his wife Melinda received the Padma Bhushan, India's third-highest civilian award for their social work in the country.
In 2016, Barack Obama honored Bill and Melinda Gates with the Presidential Medal of Freedom for their philanthropic efforts.
In 2017, François Hollande awarded Bill and Melinda Gates with France's highest national order, as Commanders in the Legion of Honour, for their charity efforts.
He was elected a foreign member of the Chinese Academy of Engineering in 2017.
In 2019, Gates was awarded the Professor Hawking Fellowship of the Cambridge Union in the University of Cambridge.
In 2020, Gates received the Grand Cordon of the Order of the Rising Sun for his contributions to Japan and the world in regard to worldwide technological transformation and advancement of global health.
In 2021, Gates was nominated in the crossover category at the 11th annual Streamy Awards for his personal YouTube channel.
In 2022, Gates received the Hilal-e-Pakistan, the second-highest civilian award in Pakistan for his social work in the country.
Depiction in media
Documentary films about Gates
The Machine That Changed the World (1990)
Triumph of the Nerds (1996)
Nerds 2.0.1 (1998)
Waiting for "Superman" (2010)
The Virtual Revolution (2010)
Inside Bill's Brain: Decoding Bill Gates (2019)
Feature films
1999: Pirates of Silicon Valley, a film that chronicles the rise of Apple and Microsoft from the early 1970s to 1997. Gates is portrayed by Anthony Michael Hall.
2002: Nothing So Strange, a mockumentary featuring Gates as the subject of a modern assassination. Gates briefly appears at the start, played by Steve Sires.
2010: The Social Network, a film that chronicles the development of Facebook. Gates is portrayed by Steve Sires.
2015: Steve Jobs vs. Bill Gates: The Competition to Control the Personal Computer, 1974–1999: Original film from the National Geographic Channel for the American Genius series.
Video and film clips
1983: Steve Jobs hosts Gates and others in the "Macintosh dating game" at the Macintosh pre-launch event (a parody of the television game show The Dating Game)
1991: Gates spoke at the Berkeley Macintosh Users Group's lively weekly Thursday night meeting, taking questions and answers, in PSL Hall (renamed Pimentel Hall in 1994) at the University of California, Berkeley
2007: All Things Digital
Since 2009, Gates has given numerous TED talks on current concerns such as innovation, education and fighting global diseases
Radio
Gates was the guest on BBC Radio 4's Desert Island Discs on January 31, 2016, in which he talked about his relationships with his father and Steve Jobs, meeting Melinda Ann French, the start of Microsoft and some of his habits (for example reading The Economist "from cover to cover every week"). His choices of things to take to a desert island were, for music, "Blue Skies" by Willie Nelson; for a book, The Better Angels of Our Nature by Steven Pinker; and for a luxury item, a DVD collection of lectures from The Teaching Company.
Television
Gates starred as himself in a brief appearance on the Frasier episode "The Two Hundredth Episode". He also made a guest appearance as himself on the TV show The Big Bang Theory, in an episode titled "The Gates Excitation". He also appeared in a cameo role in 2019 on the series finale of Silicon Valley. Gates was parodied in The Simpsons episode "Das Bus". In 2023, Gates was the interviewee in an episode of the Amol Rajan Interviews series on BBC Two, and was the subject of an episode of the UK Channel 4 series The Billionaires Who Made Our World.
See also
Big History Project
List of richest Americans in history
Notes
References
Bibliography
Primary sources
Gates, Bill. "Bill Gates explains how AI will change our lives in 5 years". CNN (2024). online
Gates, Bill. "An exclusive interview with Bill Gates". Financial Times 1 (2013). online
Gates, Bill. "Remarks of Bill Gates, Harvard Commencement 2007". The Harvard Gazette 7 (2007). Online
Kinsley, Michael, and Conor Clarke, Eds. Creative Capitalism: A Conversation With Bill Gates, Warren Buffett, and Other Economic Leaders (Simon and Schuster, 2009).
National Museum of American History. Online
Further reading
External links
Forbes profile
1955 births
Living people
20th-century American businesspeople
21st-century American businesspeople
21st-century American philanthropists
American billionaires
American chairpersons of corporations
American computer businesspeople
American computer programmers
American financiers
Inventors from Washington (state)
American memoirists
American nonprofit chief executives
American people of English descent
American people of German descent
American people of Scotch-Irish descent
American people of Scottish descent
American technology chief executives
American technology company founders
American technology writers
American venture capitalists
Benjamin Franklin Medal (Franklin Institute) laureates
Big History
Bill & Melinda Gates Foundation people
Businesspeople from Seattle
Businesspeople in software
Commanders of the Legion of Honour
Cornell family
Directors of Berkshire Hathaway
Directors of Microsoft
Engineers from Washington (state)
Fellows of the British Computer Society
Foreign members of the Chinese Academy of Engineering
Gates family
Grand Cordons of the Order of the Rising Sun
Harvard College alumni
History of computing
History of Microsoft
Honorary Knights Commander of the Order of the British Empire
Lakeside School (Seattle) alumni
Members of the United States National Academy of Engineering
Microsoft employees
Microsoft Windows people
National Medal of Technology recipients
Nerd culture
People from Medina, Washington
Personal computing
Philanthropists from Washington (state)
Phillips family (New England)
Presidential Medal of Freedom recipients
Recipients of Hilal-i-Pakistan
Recipients of the Cross of Recognition
Recipients of the Padma Bhushan in social work
Wired (magazine) people
Writers from Seattle | Bill Gates | [
"Technology"
] | 12,117 | [
"Computing and society",
"Computers",
"History of computing",
"Personal computing"
] |
3,755 | https://en.wikipedia.org/wiki/Boron | Boron is a chemical element. It has the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride.
Boron is synthesized entirely by cosmic ray spallation and supernovas and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals.
Elemental boron is found in small amounts in meteoroids, but chemically uncombined boron is not otherwise found naturally on Earth.
Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (9.3 on the Mohs scale), and a poor electrical conductor at room temperature (1.5 × 10−6 Ω−1 cm−1 room temperature electrical conductivity). The primary use of the element itself is as boron filaments with applications similar to carbon fibers in some high-strength materials.
Boron is primarily used in chemical compounds. About half of all production consumed globally is an additive in fiberglass for insulation and structural materials. The next leading use is in polymers and ceramics in high-strength, lightweight structural and heat-resistant materials. Borosilicate glass is desired for its greater strength and thermal shock resistance than ordinary soda lime glass. As sodium perborate, it is used as a bleach. A small amount is used as a dopant in semiconductors, and reagent intermediates in the synthesis of organic fine chemicals. A few boron-containing organic pharmaceuticals are used or are in study. Natural boron is composed of two stable isotopes, one of which (boron-10) has a number of uses as a neutron-capturing agent.
Borates have low toxicity in mammals (similar to table salt) but are more toxic to arthropods and are occasionally used as insecticides. Boron-containing organic antibiotics are known. Although only traces are required, it is an essential plant nutrient.
History
The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which boron resembles chemically.
Borax in its mineral form (then known as tincal) first saw use as a glaze, beginning in China circa 300 AD. Some crude borax traveled westward, and was apparently mentioned by the alchemist Jabir ibn Hayyan around 700 AD. Marco Polo brought some glazes back to Italy in the 13th century. Georgius Agricola, in around 1600, reported the use of borax as a flux in metallurgy. In 1777, boric acid was recognized in the hot springs (soffioni) near Florence, Italy, at which point it became known as sal sedativum, with ostensible medical benefits. The mineral was named sassolite, after Sasso Pisano in Italy. Sasso was the main source of European borax from 1827 to 1872, when American sources replaced it. Boron compounds were rarely used until the late 1800s when Francis Marion Smith's Pacific Coast Borax Company first popularized and produced them in volume at low cost.
Boron was not recognized as an element until it was isolated by Sir Humphry Davy and by Joseph Louis Gay-Lussac and Louis Jacques Thénard. In 1808 Davy observed that electric current sent through a solution of borates produced a brown precipitate on one of the electrodes. In his subsequent experiments, he used potassium to reduce boric acid instead of electrolysis. He produced enough boron to confirm a new element and named it boracium. Gay-Lussac and Thénard used iron to reduce boric acid at high temperatures. By oxidizing boron with air, they showed that boric acid is its oxidation product. Jöns Jacob Berzelius identified it as an element in 1824. Pure boron was arguably first produced by the American chemist Ezekiel Weintraub in 1909.
Characteristics of the element
Isotopes
Boron has two naturally occurring and stable isotopes, 11B (80.1%) and 10B (19.9%). The mass difference results in a wide range of δ11B values, which are defined as a fractional difference between the 11B and 10B and traditionally expressed in parts per thousand, in natural waters ranging from −16 to +59. There are 13 known isotopes of boron; the shortest-lived isotope is 7B which decays through proton emission and alpha decay with a half-life of 3.5×10−22 s. Isotopic fractionation of boron is controlled by the exchange reactions of the boron species B(OH)3 and [B(OH)4]−. Boron isotopes are also fractionated during mineral crystallization, during H2O phase changes in hydrothermal systems, and during hydrothermal alteration of rock. The latter effect results in preferential removal of the [10B(OH)4]− ion onto clays. It results in solutions enriched in 11B(OH)3 and therefore may be responsible for the large 11B enrichment in seawater relative to both oceanic crust and continental crust; this difference may act as an isotopic signature.
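Expressed in terms of isotope ratios measured against a boric acid reference standard (typically NIST SRM 951), the δ11B notation used above is defined as:
δ11B = [ (11B/10B)sample / (11B/10B)standard − 1 ] × 1000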
The exotic 17B exhibits a nuclear halo, i.e. its radius is appreciably larger than that predicted by the liquid drop model.
NMR spectroscopy
Both 10B and 11B possess nuclear spin. The nuclear spin of 10B is 3 and that of 11B is 3/2. These isotopes are, therefore, of use in nuclear magnetic resonance spectroscopy; and spectrometers specially adapted to detecting the boron-11 nuclei are available commercially. The 10B and 11B nuclei also cause splitting in the resonances of attached nuclei.
Allotropes
Boron forms four major allotropes: α-rhombohedral and β-rhombohedral (α-R and β-R), γ-orthorhombic (γ) and β-tetragonal (β-T). All four phases are stable at ambient conditions, and β-rhombohedral is the most common and stable. An α-tetragonal phase also exists (α-T), but is very difficult to produce without significant contamination. Most of the phases are based on B12 icosahedra, but the γ phase can be described as a rocksalt-type arrangement of the icosahedra and B2 atomic pairs. It can be produced by compressing other boron phases to 12–20 GPa and heating to 1500–1800 °C; it remains stable once the pressure and temperature are released. The β-T phase is produced at similar pressures, but higher temperatures of 1800–2200 °C. The α-T and β-T phases might coexist at ambient conditions, with the β-T phase being the more stable. Compressing boron above 160 GPa produces a boron phase with an as yet unknown structure, and this phase is a superconductor at temperatures below 6–12 K.
Atomic structure
Atomic boron is the lightest element having an electron in a p-orbital in its ground state. Its first three ionization energies are higher than those for the heavier group III elements, reflecting its less electropositive, more nonmetallic character.
Chemistry of the element
Preparation
Elemental boron is rare and poorly studied because the pure material is extremely difficult to prepare. Most studies of "boron" involve samples that contain small amounts of carbon. Very pure boron is produced with difficulty because of contamination by carbon or other elements that resist removal.
Some early routes to elemental boron involved the reduction of boric oxide with metals such as magnesium or aluminium. However, the product was often contaminated with borides of those metals. Pure boron can be prepared by reducing volatile boron halides with hydrogen at high temperatures. Ultrapure boron for use in the semiconductor industry is produced by the decomposition of diborane at high temperatures and then further purified by the zone melting or Czochralski processes.
Reactions of the element
Crystalline boron is a hard, black material with a melting point above 2000 °C. It is chemically inert and resistant to attack by boiling hydrofluoric or hydrochloric acid. When finely divided, it is attacked slowly by hot concentrated hydrogen peroxide, hot concentrated nitric acid, hot sulfuric acid, or a hot mixture of sulfuric and chromic acids.
Since elemental boron is very rare, its chemical reactions are of little practical significance. The elemental form is not typically used as a precursor to compounds. Instead, the extensive inventory of boron compounds is produced from borates.
When exposed to air, under normal conditions, a protective oxide or hydroxide layer forms on the surface of boron, which prevents further corrosion. The rate of oxidation of boron depends on the crystallinity, particle size, purity and temperature. At higher temperatures boron burns to form boron trioxide:
4 B + 3 O2 → 2 B2O3
Chemical compounds
General trends
In some ways, boron is comparable to carbon in its capability to form stable covalently bonded molecular networks (even nominally disordered (amorphous) boron contains boron icosahedra, which are bonded randomly to each other without long-range order). In terms of chemical behavior, boron compounds resemble those of silicon. Aluminium, the heavier congener of boron, does not behave analogously to boron: it is far more electropositive, it is larger, and it tends not to form homoatomic Al-Al bonds. In the most familiar compounds, boron has the formal oxidation state III. These include the common oxides, sulfides, nitrides, and halides, as well as organic derivatives.
Boron compounds often violate the octet rule.
Halides
Boron forms the complete series of trihalides, i.e. BX3 (X = F, Cl, Br, I). The trifluoride is produced by treating borate salts with hydrogen fluoride, while the trichloride is produced by carbothermic reduction of boron oxides in the presence of chlorine gas:
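B2O3 + 6 HF → 2 BF3 + 3 H2O
B2O3 + 3 C + 3 Cl2 → 2 BCl3 + 3 CO
(idealized equations, shown here with boron trioxide as the boron source; industrial routes start from borate salts or boric acid)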
The trihalides adopt planar trigonal structures, in contrast to the behavior of aluminium trihalides. All charge-neutral boron halides violate the octet rule, and hence they typically are Lewis acidic. For example, boron trifluoride (BF3) combines eagerly with fluoride sources to give the tetrafluoroborate anion, BF4−. Boron trifluoride is used in the petrochemical industry as a catalyst. The halides react with water to form boric acid. Other boron halides include those with B-B bonding, such as B2F4 and B4Cl4.
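The Lewis acidity and hydrolysis just described can be summarized by idealized equations:
BF3 + F− → BF4−
BCl3 + 3 H2O → B(OH)3 + 3 HCl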
Oxide derivatives
Boron-containing minerals exclusively exist as oxides of B(III), often associated with other elements. More than one hundred borate minerals are known. These minerals resemble silicates in some respect, although it is often found not only in a tetrahedral coordination with oxygen, but also in a trigonal planar configuration. The borates can be subdivided into two classes, anhydrous and the far more common hydrates. The hydrates contain B-OH groups and sometimes water of crystallization. A typical motif is exemplified by the tetraborate anions of the common mineral borax. The formal negative charge of the tetrahedral borate center is balanced by sodium (Na+). Some idea of the complexity of these materials is provided by the inventory of zinc borates, which are common wood preservatives and fire retardants: 4ZnO·B2O3·H2O, ZnO·B2O3·1.12H2O, ZnO·B2O3·2H2O, 6ZnO·5B2O3·3H2O, 2ZnO·3B2O3·7H2O, 2ZnO·3B2O3·3H2O, 3ZnO·5B2O3·14H2O, and ZnO·5B2O3·4.5H2O.
As illustrated by the preceding examples, borate anions tend to condense by formation of B-O-B bonds. Borosilicates, with B-O-Si, and borophosphates, with B-O-P linkages, are also well represented in both minerals and synthetic compounds.
Related to the oxides are the alkoxides and boronic acids, with the formulas B(OR)3 and RB(OH)2, respectively. Boron forms a wide variety of such metal-organic compounds, some of which are used in the synthesis of pharmaceuticals. These developments, especially the Suzuki reaction, were recognized with the 2010 Nobel Prize in Chemistry awarded to Akira Suzuki.
Hydrides
Boranes and borohydrides are neutral and anionic compounds of boron and hydrogen, respectively. Sodium borohydride is the progenitor of the boranes. Sodium borohydride is obtained by hydrogenation of trimethylborate:
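B(OCH3)3 + 4 NaH → NaBH4 + 3 NaOCH3
(an idealized equation; this reaction with sodium hydride is the basis of the industrial Brown–Schlesinger process)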
Sodium borohydride is a white, fairly air-stable salt.
Sodium borohydride converts to diborane by treatment with boron trifluoride:
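3 NaBH4 + 4 BF3 → 2 B2H6 + 3 NaBF4
(idealized equation)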
Diborane is the dimer of the elusive parent called borane, BH3. Having a formula akin to ethane's (C2H6), diborane adopts a very different structure, featuring a pair of bridging H atoms. This unusual structure, which was deduced only in the 1940s, was an early indication of the many surprises provided by boron chemistry.
Pyrolysis of diborane gives boron hydride clusters, such as pentaborane(9) (B5H9) and decaborane(14) (B10H14). A large number of anionic boron hydrides are also known, e.g. [B12H12]2−. In these cluster compounds, boron has a coordination number greater than four. The analysis of the bonding in these polyhedral clusters earned William N. Lipscomb the 1976 Nobel Prize in Chemistry for "studies on the structure of boranes illuminating problems of chemical bonding". Not only are their structures unusual, but many of the boranes are extremely reactive. For example, a widely used procedure for pentaborane states that it will "spontaneously inflame or explode in air".
Organoboron compounds
A large number of organoboron compounds, species with B-C bonds, are known. Many organoboron compounds are produced from hydroboration, the addition of B-H bonds to C=C bonds. Diborane is traditionally used for such reactions, as illustrated by the preparation of trioctylborane:
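B2H6 + 6 CH2=CH(CH2)5CH3 → 2 B[(CH2)7CH3]3
(an idealized equation, shown here with 1-octene as the alkene)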
This regiochemistry, i.e. the tendency of boron to attach to the terminal carbon, is explained by the polarization of the B-H bond, which is indicated as Bδ+-Hδ−.
Hydroboration opened the doors for many subsequent reactions, several of which are useful in the synthesis of complex organic compounds. The significance of these methods was recognized by the award of Nobel Prize in Chemistry to H. C. Brown in 1979. Even complicated boron hydrides, such as decaborane undergo hydroboration. Like the volatile boranes, the alkyl boranes ignite spontaneously in air.
In the 1950s, several studies examined the use of boranes as energy-increasing "Zip fuel" additives for jet fuel.
Triorganoboron(III) compounds are trigonal planar and exhibit weak Lewis acidity; the adducts they form with Lewis bases are tetrahedral at boron. This behavior contrasts with that of triorganoaluminium compounds (see trimethylaluminium), which are tetrahedral with bridging alkyl groups.
Nitrides
The boron-nitrides follow the pattern of avoiding B-B and N-N bonds: only B-N bonding is observed generally. The boron nitrides exhibit structures analogous to various allotropes of carbon, including graphite, diamond, and nanotubes. This similarity reflects the fact that B and N have eight valence electrons as does a pair of carbon atoms. In cubic boron nitride (tradename Borazon), boron and nitrogen atoms are tetrahedral, just like carbon in diamond. Cubic boron nitride, among other applications, is used as an abrasive, as its hardness is comparable with that of diamond. Hexagonal boron nitride (h-BN) is the BN analogue of graphite, consisting of sheets of alternating B and N atoms. These sheets stack with boron and nitrogen in registry between the sheets. Graphite and h-BN have very different properties, although both are lubricants, as these planes slip past each other easily. However, h-BN is a relatively poor electrical and thermal conductor in the planar directions. Molecular analogues of boron nitrides are represented by borazine, (BH)3(NH)3.
Carbides
Boron carbide is a ceramic material. It is obtained by carbothermal reduction of B2O3 in an electric furnace:
2 B2O3 + 7 C → B4C + 6 CO
Boron carbide's structure is only approximately reflected in its formula of B4C, and it shows a clear depletion of carbon from this suggested stoichiometric ratio. This is due to its very complex structure. The substance can be viewed as having the empirical formula B12C3 (i.e., with B12 icosahedra being a motif), but with less carbon, as the suggested C3 units are replaced with C-B-C chains, and some smaller (B6) octahedra are present as well (see the boron carbide article for structural analysis). The repeating polymer plus semi-crystalline structure of boron carbide gives it great structural strength per weight.
Borides
Binary metal-boron compounds, the metal borides, contain only boron and a metal. They are metallic, very hard, with high melting points. TiB2, ZrB2, and HfB2 have melting points above 3000 °C. Some metal borides find specialized applications as hard materials for cutting tools.
Occurrence
Boron is rare in the Universe and solar system. The amount of boron formed in the Big Bang is negligible. Boron is not generated in the normal course of stellar nucleosynthesis and is destroyed in stellar interiors.
In the high oxygen environment of the Earth's surface, boron is always found fully oxidized to borate. Boron does not appear on Earth in elemental form. Extremely small traces of elemental boron were detected in Lunar regolith.
Although boron is a relatively rare element in the Earth's crust, representing only 0.001% of the crust mass, it can be highly concentrated by the action of water, in which many borates are soluble. It is found naturally combined in compounds such as borax and boric acid (sometimes found in volcanic spring waters). About a hundred borate minerals are known.
Production
Economically important sources of boron are the minerals colemanite, rasorite (kernite), ulexite and tincal. Together these constitute 90% of mined boron-containing ore. The largest global borax deposits known, many still untapped, are in Central and Western Turkey, including the provinces of Eskişehir, Kütahya and Balıkesir. Global proven boron mineral mining reserves exceed one billion metric tonnes, against a yearly production of about four million tonnes.
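At those rates, the stated reserves correspond to roughly 250 years of supply (1,000 million tonnes ÷ 4 million tonnes per year).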
Turkey and the United States are the largest producers of boron products. Turkey produces about half of the global yearly demand, through Eti Mine Works, a Turkish state-owned mining and chemicals company focusing on boron products. It holds a government monopoly on the mining of borate minerals in Turkey, which possesses 72% of the world's known deposits. In 2012, it held a 47% share of production of global borate minerals, ahead of its main competitor, Rio Tinto Group.
Almost a quarter (23%) of global boron production comes from the Rio Tinto Borax Mine (also known as the U.S. Borax Boron Mine) near Boron, California.
Market trend
The average cost of crystalline elemental boron is US$5/g. Elemental boron is chiefly used in making boron fibers, where it is deposited by chemical vapor deposition on a tungsten core (see below). Boron fibers are used in lightweight composite applications, such as high strength tapes. This use is a very small fraction of total boron use. Boron is introduced into semiconductors as boron compounds, by ion implantation.
Estimated global consumption of boron (almost entirely as boron compounds) was about 4 million tonnes of B2O3 in 2012. As compounds such as borax and kernite its cost was US$377/tonne in 2019.
Increasing demand for boric acid has led a number of producers to invest in additional capacity. Turkey's state-owned Eti Mine Works opened a new boric acid plant with the production capacity of 100,000 tonnes per year at Emet in 2003. Rio Tinto Group increased the capacity of its boron plant from 260,000 tonnes per year in 2003 to 310,000 tonnes per year by May 2005, with plans to grow this to 366,000 tonnes per year in 2006. Chinese boron producers have been unable to meet rapidly growing demand for high quality borates. This has led to imports of sodium tetraborate (borax) growing by a hundredfold between 2000 and 2005 and boric acid imports increasing by 28% per year over the same period.
The rise in global demand has been driven by high growth rates in glass fiber, fiberglass and borosilicate glassware production. A rapid increase in the manufacture of reinforcement-grade boron-containing fiberglass in Asia has offset the development of boron-free reinforcement-grade fiberglass in Europe and the US. The recent rises in energy prices may lead to greater use of insulation-grade fiberglass, with consequent growth in boron consumption. Roskill Consulting Group forecasts that world demand for boron will grow by 3.4% per year to reach 21 million tonnes by 2010. The highest growth in demand is expected to be in Asia, where demand could rise by an average 5.7% per year.
Applications
Nearly all boron ore extracted from the Earth is refined as boric acid and sodium tetraborate pentahydrate. In the United States, 70% of the boron is used for the production of glass and ceramics.
The major global industrial-scale use of boron compounds (about 46% of end-use) is in production of glass fiber for boron-containing insulating and structural fiberglasses, especially in Asia. Boron is added to the glass as borax pentahydrate or boron oxide, to influence the strength or fluxing qualities of the glass fibers. Another 10% of global boron production is for borosilicate glass as used in high strength glassware. About 15% of global boron is used in boron ceramics, including super-hard materials discussed below. Agriculture consumes 11% of global boron production, and bleaches and detergents about 6%.
Boronated fiberglass
Fiberglass is a fiber-reinforced polymer whose glass sometimes contains borosilicate, borax, or boron oxide, added to increase the strength of the glass fibers. The highly boronated glasses used in E-glass (named for "electrical" use) are alumino-borosilicate glasses. Another common high-boron glass, C-glass, also has a high boron oxide content and is used for glass staple fibers and insulation. D-glass is a borosilicate glass named for its low dielectric constant.
Because of the ubiquitous use of fiberglass in construction and insulation, boron-containing fiberglasses consume over half the global production of boron, and are the single largest commercial boron market.
Borosilicate glass
Borosilicate glass, which is typically 12–15% B2O3, 80% SiO2, and 2% Al2O3, has a low coefficient of thermal expansion, giving it a good resistance to thermal shock. Schott AG's "Duran" and Owens-Corning's trademarked Pyrex are two major brand names for this glass, used both in laboratory glassware and in consumer cookware and bakeware, chiefly for this resistance.
Elemental boron fiber
Boron fibers (boron filaments) are high-strength, lightweight materials that are used chiefly for advanced aerospace structures as a component of composite materials, as well as limited production consumer and sporting goods such as golf clubs and fishing rods. The fibers can be produced by chemical vapor deposition of boron on a tungsten filament.
Boron fibers and sub-millimeter sized crystalline boron springs are produced by laser-assisted chemical vapor deposition. Translation of the focused laser beam allows production of even complex helical structures. Such structures show good mechanical properties (elastic modulus 450 GPa, fracture strain 3.7%, fracture stress 17 GPa) and can be applied as reinforcement of ceramics or in micromechanical systems.
Boron carbide ceramic
Boron carbide's ability to absorb neutrons without forming long-lived radionuclides (especially when doped with extra boron-10) makes the material attractive as an absorbent for neutron radiation arising in nuclear power plants. Nuclear applications of boron carbide include shielding, control rods and shut-down pellets. Within control rods, boron carbide is often powdered, to increase its surface area.
High-hardness and abrasive compounds
Boron carbide and cubic boron nitride powders are widely used as abrasives. Boron nitride is a material isoelectronic to carbon. Similar to carbon, it has both hexagonal (soft graphite-like h-BN) and cubic (hard, diamond-like c-BN) forms. h-BN is used as a high temperature component and lubricant. c-BN, also known under commercial name borazon, is a superior abrasive. Its hardness is only slightly smaller than, but its chemical stability is superior, to that of diamond. Heterodiamond (also called BCN) is another diamond-like boron compound.
Metallurgy
Boron is added to boron steels at the level of a few parts per million to increase hardenability. Higher percentages are added to steels used in the nuclear industry due to boron's neutron absorption ability.
Boron can also increase the surface hardness of steels and alloys through boriding. Additionally metal borides are used for coating tools through chemical vapor deposition or physical vapor deposition. Implantation of boron ions into metals and alloys, through ion implantation or ion beam deposition, results in a spectacular increase in surface resistance and microhardness. Laser alloying has also been successfully used for the same purpose. These borides are an alternative to diamond coated tools, and their (treated) surfaces have similar properties to those of the bulk boride.
For example, rhenium diboride can be produced at ambient pressures, but is rather expensive because of rhenium. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure. Its value is comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride. Similarly, AlMgB14 + TiB2 composites possess high hardness and wear resistance and are used in either bulk form or as coatings for components exposed to high temperatures and wear loads.
Detergent formulations and bleaching agents
Borax is used in various household laundry and cleaning products. It is also present in some tooth bleaching formulas.
Sodium perborate serves as a source of active oxygen in many detergents, laundry detergents, cleaning products, and laundry bleaches. However, despite its name, "Borateem" laundry bleach no longer contains any boron compounds, using sodium percarbonate instead as a bleaching agent.
Insecticides and antifungals
Zinc borates and boric acid, popularized as fire retardants, are widely used as wood preservatives and insecticides. Boric acid is also used as a domestic insecticide.
Semiconductors
Boron is a useful dopant for such semiconductors as silicon, germanium, and silicon carbide. Having one fewer valence electron than the host atom, it donates a hole, resulting in p-type conductivity. The traditional method of introducing boron into semiconductors is via its atomic diffusion at high temperatures. This process uses either solid (B2O3), liquid (BBr3), or gaseous boron sources (B2H6 or BF3). However, after the 1970s, it was mostly replaced by ion implantation, which relies mostly on BF3 as a boron source. Boron trichloride gas is also an important chemical in the semiconductor industry, however not for doping but rather for plasma etching of metals and their oxides. Triethylborane is also injected into vapor deposition reactors as a boron source. Examples are the plasma deposition of boron-containing hard carbon films, silicon nitride–boron nitride films, and the doping of diamond film with boron.
Magnets
Boron is a component of neodymium magnets (Nd2Fe14B), which are among the strongest type of permanent magnet. These magnets are found in a variety of electromechanical and electronic devices, such as magnetic resonance imaging (MRI) medical imaging systems, in compact and relatively small motors and actuators. As examples, computer HDDs (hard disk drives), CD (compact disk) and DVD (digital versatile disk) players rely on neodymium magnet motors to deliver intense rotary power in a remarkably compact package. In mobile phones 'Neo' magnets provide the magnetic field which allows tiny speakers to deliver appreciable audio power.
Shielding and neutron absorber in nuclear reactors
Boron shielding is used as a control for nuclear reactors, taking advantage of its high cross-section for neutron capture.
In pressurized water reactors a variable concentration of boric acid in the cooling water is used as a neutron poison to compensate for the variable reactivity of the fuel. When new fuel rods are inserted, the concentration of boric acid is maximal, and it is reduced over the lifetime of the fuel.
Other nonmedical uses
Because of its distinctive green flame, amorphous boron is used in pyrotechnic flares.
Some anti-corrosion systems contain borax.
Sodium borates are used as a flux for soldering silver and gold and with ammonium chloride for welding ferrous metals. They are also fire retarding additives to plastics and rubber articles.
Boric acid (also known as orthoboric acid) H3BO3 is used in the production of textile fiberglass and flat panel displays and in many PVAc- and PVOH-based adhesives.
Triethylborane is a substance which ignites the JP-7 fuel of the Pratt & Whitney J58 turbojet/ramjet engines powering the Lockheed SR-71 Blackbird. It was also used to ignite the F-1 engines on the Saturn V rocket utilized by NASA's Apollo and Skylab programs from 1967 until 1973. Today SpaceX uses it to ignite the engines on their Falcon 9 rocket. Triethylborane is suitable for this because of its pyrophoric properties, especially the fact that it burns with a very high temperature. Triethylborane is an industrial initiator in radical reactions, where it is effective even at low temperatures.
Borates are used as environmentally benign wood preservatives.
Pharmaceutical and biological applications
Boron plays a role in pharmaceutical and biological applications as it is found in various antibiotics produced by bacteria, such as boromycins, aplasmomycins, borophycins, and tartrolons. These antibiotics have shown inhibitory effects on the growth of certain bacteria, fungi, and protozoa. Boron is also being studied for its potential medicinal applications, including its incorporation into biologically active molecules for therapies like boron neutron capture therapy for brain tumors. Some boron-containing biomolecules may act as signaling molecules interacting with cell surfaces, suggesting a role in cellular communication.
Boric acid has antiseptic, antifungal, and antiviral properties and, for these reasons, is applied as a water clarifier in swimming pool water treatment. Mild solutions of boric acid have been used as eye antiseptics.
Bortezomib (marketed as Velcade and Cytomib) is an organic pharmaceutical in which boron appears as an active element; it belongs to a new class of drugs, the proteasome inhibitors, and is used for treating myeloma and one form of lymphoma (it is currently in experimental trials against other types of lymphoma). The boron atom in bortezomib binds the catalytic site of the 26S proteasome with high affinity and specificity.
A number of potential boronated pharmaceuticals using boron-10 have been prepared for use in boron neutron capture therapy (BNCT).
Some boron compounds show promise in treating arthritis, though none have as yet been generally approved for the purpose.
Tavaborole (marketed as Kerydin) is an aminoacyl-tRNA synthetase inhibitor used to treat toenail fungus. It gained FDA approval in July 2014.
Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows for positron emission tomography (PET) imaging of cancer and hemorrhages, respectively. A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human protein, PSMA, which is non-immunogenic, and a small molecule that is both positron-emitting (boron-bound 18F) and fluorescent, for dual-modality PET and fluorescence imaging of genome-modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans; it located primary and metastatic prostate cancer, enabled fluorescence-guided removal of cancer, and detected single cancer cells in tissue margins.
Research
MgB2
Magnesium diboride (MgB2) is a superconductor with the transition temperature of 39 K. MgB2 wires are produced with the powder-in-tube process and applied in superconducting magnets. A project at CERN to make MgB2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high current distribution applications, such as the contemplated high luminosity version of the Large Hadron Collider.
Commercial isotope enrichment
Because of its high neutron cross-section, boron-10 is often used to control fission in nuclear reactors as a neutron-capturing substance. Several industrial-scale enrichment processes have been developed; however, only the fractionated vacuum distillation of the dimethyl ether adduct of boron trifluoride (DME-BF3) and column chromatography of borates are being used.
Radiation-hardened semiconductors
Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in 10B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor "chip" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use depleted boron, which is greatly enriched in 11B and contains almost no 10B. This is useful because 11B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry (see above).
Proton-boron fusion
11B is also a candidate as a fuel for aneutronic fusion. When struck by a proton with energy of about 500 keV, it produces three alpha particles and 8.7 MeV of energy. Most other fusion reactions involving hydrogen and helium produce penetrating neutron radiation, which weakens reactor structures and induces long-term radioactivity, thereby endangering operating personnel. The alpha particles from 11B fusion can be turned directly into electric power, and all radiation stops as soon as the reactor is turned off.
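The underlying reaction can be written as:
p + 11B → 3 4He (+ about 8.7 MeV)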
Enriched boron (boron-10)
The 10B isotope is useful for capturing thermal neutrons (see neutron cross section#Typical cross sections). The nuclear industry enriches natural boron to nearly pure 10B. The less-valuable by-product, depleted boron, is nearly pure 11B.
Enriched boron or 10B is used in both radiation shielding and is the primary nuclide used in neutron capture therapy of cancer. In the latter ("boron neutron capture therapy" or BNCT), a compound containing 10B is incorporated into a pharmaceutical which is selectively taken up by a malignant tumor and tissues near it. The patient is then treated with a beam of low energy neutrons at a relatively low neutron radiation dose. The neutrons, however, trigger energetic and short-range secondary alpha particle and lithium-7 heavy ion radiation that are products of the boron-neutron nuclear reaction, and this ion radiation additionally bombards the tumor, especially from inside the tumor cells.
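The capture reaction at the heart of BNCT can be written as:
10B + n → 7Li + 4He
Roughly 2.3–2.8 MeV is released per capture, carried mostly as kinetic energy by the two fragments; in most captures a 0.48 MeV gamma ray is also emitted.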
In nuclear reactors, 10B is used for reactivity control and in emergency shutdown systems. It can serve either function in the form of borosilicate control rods or as boric acid. In pressurized water reactors, 10B boric acid is added to the reactor coolant after the plant is shut down for refueling. When the plant is started up again, the boric acid is slowly filtered out over many months as fissile material is used up and the fuel becomes less reactive.
Nuclear fusion
Boron has been investigated for possible applications in nuclear fusion research. It is commonly used for conditioning the walls in fusion reactors by depositing boron coatings on plasma-facing components and walls to reduce the release of hydrogen and impurities from the surfaces. It is also being used for the dissipation of energy in the fusion plasma boundary to suppress excessive energy bursts and heat fluxes to the walls.
Neutron capture therapy
In boron neutron capture therapy (BNCT) for malignant brain tumors, boron is being investigated as a means of selectively targeting and destroying tumor cells. The goal is to deliver higher concentrations of the non-radioactive boron isotope (10B) to the tumor cells than to the surrounding normal tissues. When these 10B-containing cells are irradiated with low-energy thermal neutrons, they undergo nuclear capture reactions, releasing high linear energy transfer (LET) particles such as α-particles and lithium-7 nuclei within a limited path length. These high-LET particles can destroy the adjacent tumor cells without causing significant harm to nearby normal cells. Boron acts as a selective agent due to its ability to absorb thermal neutrons and produce short-range physical effects primarily affecting the targeted tissue region. This binary approach allows for precise tumor cell killing while sparing healthy tissues. The effective delivery of boron involves administering boron compounds or carriers capable of accumulating selectively in tumor cells compared to surrounding tissue. Sodium borocaptate (BSH) and boronophenylalanine (BPA) have been used clinically, but research continues to identify more optimal carriers. Accelerator-based neutron sources have also been developed recently as an alternative to reactor-based sources, leading to improved efficiency and enhanced clinical outcomes in BNCT. By employing the properties of boron isotopes and targeted irradiation techniques, BNCT offers a potential approach to treating malignant brain tumors by selectively killing cancer cells while minimizing the damage caused by traditional radiation therapies.
BNCT has shown promising results in clinical trials for various other malignancies, including glioblastoma, head and neck cancer, cutaneous melanoma, hepatocellular carcinoma, lung cancer, and extramammary Paget's disease. The treatment involves a nuclear reaction between nonradioactive boron-10 isotope and low-energy thermal or high-energy epithermal neutrons to generate α particles and lithium nuclei that selectively destroy DNA in tumor cells. The primary challenge lies in developing efficient boron agents with higher content and specific targeting properties tailored for BNCT. Integration of tumor-targeting strategies with BNCT could potentially establish it as a practical personalized treatment option for different types of cancers. Ongoing research explores new boron compounds, optimization strategies, theranostic agents, and radiobiological advances to overcome limitations and cost-effectively improve patient outcomes.
Biological role
Boron is an essential plant nutrient, required primarily for maintaining the integrity of cell walls. However, high soil concentrations of greater than 1.0 ppm lead to marginal and tip necrosis in leaves as well as poor overall growth performance. Levels as low as 0.8 ppm produce these same symptoms in plants that are particularly sensitive to boron in the soil. Nearly all plants, even those somewhat tolerant of soil boron, will show at least some symptoms of boron toxicity when soil boron content is greater than 1.8 ppm. When this content exceeds 2.0 ppm, few plants will perform well and some may not survive.
Some boron-containing antibiotics exist in nature. The first one found was boromycin, isolated from streptomyces in the 1960s. Others are tartrolons, a group of antibiotics discovered in the 1990s from culture broth of the myxobacterium Sorangium cellulosum.
In 2013, chemist and synthetic biologist Steve Benner suggested that the conditions on Mars three billion years ago were much more favorable to the stability of RNA and formation of oxygen-containing boron and molybdenum catalysts found in life. According to Benner's theory, primitive life, which is widely believed to have originated from RNA, first formed on Mars before migrating to Earth.
In human health
It is thought that boron plays several essential roles in animals, including humans, but the exact physiological role is poorly understood. Boron deficiency has only been clearly established in livestock; in humans, boron deficiency may affect bone mineral density, though it has been noted that additional research on its effects on bone health is necessary.
Boron is not classified as an essential human nutrient because research has not established a clear biological function for it. The U.S. Food and Nutrition Board (FNB) found the existing data insufficient to derive a Recommended Dietary Allowance (RDA), Adequate Intake (AI), or Estimated Average Requirement (EAR) for boron and the U.S. Food and Drug Administration (FDA) has not established a daily value for boron for food and dietary supplement labeling purposes. While low boron status can be detrimental to health, probably increasing the risk of osteoporosis, poor immune function, and cognitive decline, high boron levels are associated with cell damage and toxicity.
Still, studies suggest that boron may exert beneficial effects on reproduction and development, calcium metabolism, bone formation, brain function, insulin and energy substrate metabolism, immunity, and steroid hormone (including estrogen) and vitamin D function, among other functions. A small human trial published in 1987 reported on postmenopausal women first made boron deficient and then repleted with 3 mg/day. Boron supplementation markedly reduced urinary calcium excretion and elevated the serum concentrations of 17 beta-estradiol and testosterone. Environmental boron appears to be inversely correlated with arthritis.
The exact mechanism by which boron exerts its physiological effects is not fully understood, but may involve interactions with adenosine monophosphate (AMP) and S-adenosyl methionine (SAM-e), two compounds involved in important cellular functions. Furthermore, boron appears to inhibit cyclic ADP-ribose, thereby affecting the release of calcium ions from the endoplasmic reticulum and affecting various biological processes. Some studies suggest that boron may reduce levels of inflammatory biomarkers. Congenital endothelial dystrophy type 2, a rare form of corneal dystrophy, is linked to mutations in the SLC4A11 gene that encodes a transporter reportedly regulating the intracellular concentration of boron.
In humans, boron is usually consumed with food that contains boron, such as fruits, leafy vegetables, and nuts. Foods that are particularly rich in boron include avocados, dried fruits such as raisins, peanuts, pecans, prune juice, grape juice, wine and chocolate powder. According to 2-day food records from the respondents to the Third National Health and Nutrition Examination Survey (NHANES III), adult dietary intake was recorded at 0.9 to 1.4 mg/day.
Health issues and toxicity
Elemental boron, boron oxide, boric acid, borates, and many organoboron compounds are relatively nontoxic to humans and animals (with toxicity similar to that of table salt). The LD50 (dose at which there is 50% mortality) for animals is about 6 g per kg of body weight. Substances with an LD50 above 2 g/kg are considered nontoxic. An intake of 4 g/day of boric acid was reported without incident, but more than this is considered toxic in more than a few doses. Intakes of more than 0.5 grams per day for 50 days cause minor digestive and other problems suggestive of toxicity.
Boric acid is more toxic to insects than to mammals, and is routinely used as an insecticide. However, it has been used in neutron capture therapy alongside other boron compounds such as sodium borocaptate and boronophenylalanine with reported low toxicity levels.
The boranes (boron hydrogen compounds) and similar gaseous compounds are quite poisonous. Boron is not an element that is intrinsically poisonous; rather, the toxicity of these compounds depends on structure (for another example of this phenomenon, see phosphine). The boranes are also highly flammable and require special care when handling; some combinations of boranes and other compounds are highly explosive. Sodium borohydride presents a fire hazard owing to its reducing nature and the liberation of hydrogen on contact with acid (see the idealized equation below). Boron halides are corrosive.
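The hydrogen release corresponds to the acid-accelerated hydrolysis, shown here in idealized form:
NaBH4 + 2 H2O → NaBO2 + 4 H2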
Boron is necessary for plant growth, but an excess of boron is toxic to plants, and occurs particularly in acidic soil. It presents as a yellowing from the tip inwards of the oldest leaves and black spots in barley leaves, but it can be confused with other stresses such as magnesium deficiency in other plants.
See also
Allotropes of boron
Boron deficiency
Boron oxide
Boron nitride
Boron neutron capture therapy
Boronic acid
Hydroboration-oxidation reaction
Suzuki coupling
Notes
References
External links
Boron at The Periodic Table of Videos (University of Nottingham)
J. B. Calvert: Boron, 2004, private website (archived version)
Chemical elements
Metalloids
Neutron poisons
Pyrotechnic fuels
Rocket fuels
Nuclear fusion fuels
Dietary minerals
Reducing agents
Articles containing video clips
Chemical elements with rhombohedral structure | Boron | [
"Physics",
"Chemistry"
] | 10,163 | [
"Chemical elements",
"Redox",
"Reducing agents",
"Atoms",
"Matter"
] |
3,756 | https://en.wikipedia.org/wiki/Bromine | Bromine is a chemical element; it has symbol Br and atomic number 35. It is a volatile red-brown liquid at room temperature that evaporates readily to form a similarly coloured vapour. Its properties are intermediate between those of chlorine and iodine. Isolated independently by two chemists, Carl Jacob Löwig (in 1825) and Antoine Jérôme Balard (in 1826), its name was derived , referring to its sharp and pungent smell.
Elemental bromine is very reactive and thus does not occur as a free element in nature. Instead, it can be isolated from colourless soluble crystalline mineral halide salts analogous to table salt, a property it shares with the other halogens. While it is rather rare in the Earth's crust, the high solubility of the bromide ion (Br−) has caused its accumulation in the oceans. Commercially the element is easily extracted from brine evaporation ponds, mostly in the United States and Israel. The mass of bromine in the oceans is about one three-hundredth that of chlorine.
At standard conditions for temperature and pressure it is a liquid; the only other element that is liquid under these conditions is mercury. At high temperatures, organobromine compounds readily dissociate to yield free bromine atoms, a process that stops free radical chemical chain reactions. This effect makes organobromine compounds useful as fire retardants, and more than half the bromine produced worldwide each year is put to this purpose. The same property causes ultraviolet sunlight to dissociate volatile organobromine compounds in the atmosphere to yield free bromine atoms, causing ozone depletion. As a result, many organobromine compounds—such as the pesticide methyl bromide—are no longer used. Bromine compounds are still used in well drilling fluids, in photographic film, and as an intermediate in the manufacture of organic chemicals.
Large amounts of bromide salts are toxic from the action of soluble bromide ions, causing bromism. However, bromine is beneficial for human eosinophils, and is an essential trace element for collagen development in all animals. Hundreds of known organobromine compounds are generated by terrestrial and marine plants and animals, and some serve important biological roles. As a pharmaceutical, the simple bromide ion (Br−) has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before replacement by shorter-acting drugs. They retain niche uses as antiepileptics.
History
Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively.
Löwig isolated bromine from a mineral water spring from his hometown Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample of his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first.
Balard found bromine chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine. The properties of the resulting substance were intermediate between those of chlorine and iodine; thus he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word muria ("brine").
After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard stated that he changed the name from muride to brôme on the proposal of M. Anglada. The name brôme (bromine) derives from the Greek βρῶμος (brômos, "stench"). Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1858, when the discovery of salt deposits in Stassfurt enabled its production as a by-product of potash.
Apart from some minor medical applications, the first commercial use was the daguerreotype. In 1840, bromine was discovered to have some advantages over the previously used iodine vapor to create the light sensitive silver halide layer in daguerreotypy.
By 1864, a 25% solution of liquid bromine in 0.75 molar aqueous potassium bromide was widely used to treat gangrene during the American Civil War, before the publications of Joseph Lister and Pasteur.
Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, but were gradually superseded by chloral hydrate and then by the barbiturates. In the early years of the First World War, bromine compounds such as xylyl bromide were used as poison gas.
Properties
Bromine is the third halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to those of fluorine, chlorine, and iodine, and tend to be intermediate between those of chlorine and iodine, the two neighbouring halogens. Bromine has the electron configuration [Ar]3d¹⁰4s²4p⁵, with the seven electrons in the fourth and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between chlorine and iodine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than chlorine and more reactive than iodine. It is also a weaker oxidising agent than chlorine, but a stronger one than iodine. Conversely, the bromide ion is a weaker reducing agent than iodide, but a stronger one than chloride. These similarities led to chlorine, bromine, and iodine together being classified as one of the original triads of Johann Wolfgang Döbereiner, whose work foreshadowed the periodic law for chemical elements. It is intermediate in atomic radius between chlorine and iodine, and this leads to many of its atomic properties being similarly intermediate in value between chlorine and iodine, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X2 molecule (X = Cl, Br, I), ionic radius, and X–X bond length. The volatility of bromine accentuates its very penetrating, choking, and unpleasant odour.
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of bromine are intermediate between those of chlorine and iodine. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of bromine are again intermediate between those of chlorine and iodine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: fluorine is a very pale yellow gas, chlorine is greenish-yellow, and bromine is a reddish-brown volatile liquid that freezes at −7.2 °C and boils at 58.8 °C. (Iodine is a shiny black solid.) This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as bromine, results from the electron transition between the highest occupied antibonding π molecular orbital and the lowest vacant antibonding σ molecular orbital. The colour fades at low temperatures so that solid bromine at −195 °C is pale yellow.
Liquid bromine is infrared-transparent.
Like solid chlorine and iodine, solid bromine crystallises in the orthorhombic crystal system, in a layered arrangement of Br2 molecules. The Br–Br distance is 227 pm (close to the gaseous Br–Br distance of 228 pm) and the Br···Br distance between molecules is 331 pm within a layer and 399 pm between layers (compare the van der Waals radius of bromine, 195 pm). This structure means that bromine is a very poor conductor of electricity, with a conductivity of around 5 × 10⁻¹³ Ω⁻¹ cm⁻¹ just below the melting point, although this is higher than the essentially undetectable conductivity of chlorine.
At a pressure of 55 GPa (roughly 540,000 times atmospheric pressure) bromine undergoes an insulator-to-metal transition. At 75 GPa it changes to a face-centered orthorhombic structure. At 100 GPa it changes to a body-centered orthorhombic monatomic form.
Isotopes
Bromine has two stable isotopes, 79Br and 81Br. These are its only two natural isotopes, with 79Br making up 51% of natural bromine and 81Br making up the remaining 49%. Both have nuclear spin 3/2− and thus may be used for nuclear magnetic resonance, although 81Br is more favourable. The roughly 1:1 distribution of the two isotopes in nature is helpful in identifying bromine-containing compounds by mass spectrometry. Other bromine isotopes are all radioactive, with half-lives too short to occur in nature. Of these, the most important are 80Br (t1/2 = 17.7 min), 80mBr (t1/2 = 4.421 h), and 82Br (t1/2 = 35.28 h), which may be produced from the neutron activation of natural bromine. The most stable bromine radioisotope is 77Br (t1/2 = 57.04 h). The primary decay mode of isotopes lighter than 79Br is electron capture to isotopes of selenium; that of isotopes heavier than 81Br is beta decay to isotopes of krypton; and 80Br may decay by either mode to stable 80Se or 80Kr. Bromine isotopes from 87Br and heavier undergo beta decay with neutron emission and are of practical importance because they are fission products.
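Because the two isotopes are so nearly equal in abundance, each bromine atom in a molecule contributes a characteristic set of isotope peaks spaced two mass units apart. The short sketch below is an illustrative calculation (not part of the original text) of the relative peak intensities expected from n bromine atoms, using the 51%/49% abundances quoted above.

```python
from math import comb

# Natural abundances quoted above (approximate)
P79, P81 = 0.51, 0.49

def bromine_isotope_pattern(n_br):
    """Relative intensities of the M, M+2, M+4, ... peaks contributed
    by n_br bromine atoms (binomial distribution over 79Br/81Br)."""
    return [comb(n_br, k) * P79 ** (n_br - k) * P81 ** k for k in range(n_br + 1)]

# A dibromide shows the characteristic ~1:2:1 triplet (M, M+2, M+4)
print(bromine_isotope_pattern(2))   # ≈ [0.26, 0.50, 0.24]
```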
Chemistry and compounds
Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements. Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X2/X− couples (F2, +2.866 V; Cl2, +1.395 V; Br2, +1.087 V; I2, +0.615 V; At2, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds.
Hydrogen bromide
The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory:
2 P + 6 H2O + 3 Br2 → 6 HBr + 2 H3PO3
H3PO3 + H2O + Br2 → 2 HBr + H3PO4
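If the intermediate phosphorous acid is brominated further, taking one equivalent of the first reaction together with two of the second gives an overall stoichiometry (a straightforward combination of the two equations above, not a statement from the source):

2 P + 8 H2O + 5 Br2 → 10 HBr + 2 H3PO4

so each mole of bromine ultimately furnishes two moles of hydrogen bromide.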
At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pKa = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/H2O system also involves many hydrates HBr·nH2O for n = 1, 2, 3, 4, and 6, which are essentially salts of bromide anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation.
Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into H2Br+ and HBr2− ions – the latter, in any case, are much less stable than the bifluoride ions (HF2−) due to the very weak hydrogen bonding between hydrogen and bromine, though salts of them with very large and weakly polarising cations such as Cs+ and NR4+ (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides.
Other binary bromides
Nearly all elements in the periodic table form binary bromides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.)
Bromination of metals with Br2 tends to yield lower oxidation states than chlorination with Cl2 when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example:
FeCl3 + BBr3 (excess) → FeBr3 + BCl3
When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows:
3 WBr5 + Al → 3 WBr4 + AlBr3
EuBr3 + 1/2 H2 → EuBr2 + HBr
2 TaBr4 → TaBr3 + TaBr5
Most metal bromides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Both ionic and covalent bromides are known for metals in oxidation state +3 (e.g. scandium bromide is mostly ionic, but aluminium bromide is not). Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine.
Bromine halides
The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY3, XY5, and XY7 (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives of these interhalogens are also characterised. Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN3).
The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride. Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, Bu, OMe, Br; meta-bromination occurs for the deactivating X = –CO2Et, –CHO, –NO2); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br+.
At room temperature, bromine trifluoride (BrF3) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride. It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF4 and BrF2SbF6 remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form BrF2+ and BrF4− and thus conducts electricity.
Bromine pentafluoride (BrF5) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger.
Polybromine compounds
Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (S2O6F2) can oxidise it to form the cherry-red Br2+ cation. A few other bromine cations are known, namely the brown Br3+ and dark brown Br5+. The tribromide anion, Br3−, has also been characterised; it is analogous to triiodide.
Bromine oxides and oxoacids
Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion.
So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide.
The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO2), and perbromic acid (HOBrO3), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur:
Br2 + H2O ⇌ HOBr + H+ + Br−   (K = 7.2 × 10⁻⁹ mol² l⁻²)
Br2 + 2 OH− ⇌ OBr− + H2O + Br−   (K = 2 × 10⁸ mol⁻¹ l)
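These constants imply that only a small fraction of dissolved bromine is hydrolysed in neutral water. The following is a minimal numerical sketch (assuming the first equilibrium constant above, an illustrative 0.1 M bromine solution, and ignoring activity corrections and further disproportionation):

```python
# Hydrolysis of bromine in water: Br2 + H2O ⇌ HOBr + H+ + Br−
K = 7.2e-9   # equilibrium constant quoted above, mol^2 l^-2
C = 0.10     # illustrative total bromine concentration, mol/l

def excess(x):
    # x = [HOBr] = [H+] = [Br−] at equilibrium; positive when x is too large
    return x ** 3 / (C - x) - K

lo, hi = 0.0, C - 1e-12   # bisection: excess(lo) < 0 < excess(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if excess(mid) > 0:
        hi = mid
    else:
        lo = mid

x = 0.5 * (lo + hi)
print(f"[HOBr] ≈ {x:.1e} M, about {100 * x / C:.1f}% of the bromine hydrolysed")
# ≈ 9e-4 M, i.e. roughly 1% — most of the bromine stays as molecular Br2
```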
Hypobromous acid is unstable to disproportionation. The hypobromite ions thus formed disproportionate readily to give bromide and bromate:
3 BrO− ⇌ 2 Br− + BrO3−   (K = 10¹⁵)
Bromous acids and bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and alkaline solutions. Bromic acid is a strong acid. Bromides and bromates may comproportionate to bromine as follows:
BrO3− + 5 Br− + 6 H+ → 3 Br2 + 3 H2O
There were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968 when the perbromate anion (BrO4−) was first synthesised from the radioactive beta decay of unstable 83SeO42−. Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in BrO4− is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals.
Organobromine compounds
Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference of electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles but is intermediate between the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost.
Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution.
Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br+, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually.
An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction.
Occurrence and production
Bromine is significantly less abundant in the crust than fluorine or chlorine, comprising only 2.5 parts per million of the Earth's crustal rocks, and then only as bromide salts. It is the 46th most abundant element in Earth's crust. It is significantly more abundant in the oceans, resulting from long-term leaching. There, it makes up 65 parts per million, corresponding to a ratio of about one bromine atom for every 660 chlorine atoms. Salt lakes and brine wells may have higher bromine concentrations: for example, the Dead Sea contains 0.4% bromide ions. It is from these sources that bromine extraction is mostly economically feasible. Bromine is the tenth most abundant element in seawater.
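The quoted atom ratio can be checked with simple arithmetic; the sketch below combines the 65 ppm bromine figure with an assumed standard-seawater chloride content of about 19,400 ppm by mass (a value not given in the text).

```python
# Check of "about one bromine atom for every 660 chlorine atoms" in seawater
br_ppm = 65.0        # mass fraction of bromine from the text
cl_ppm = 19_400.0    # assumed standard-seawater chloride mass fraction (not from the text)
m_br, m_cl = 79.904, 35.453   # relative atomic masses

mol_br = br_ppm / m_br
mol_cl = cl_ppm / m_cl
print(f"Cl : Br atom ratio ≈ {mol_cl / mol_br:.0f} : 1")   # ≈ 670 : 1, close to the quoted figure
```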
The main sources of bromine production are Israel and Jordan. The element is liberated by halogen exchange, using chlorine gas to oxidise Br− to Br2. This is then removed with a blast of steam or air, and is then condensed and purified. Today, bromine is transported in large-capacity metal drums or lead-lined tanks that can hold hundreds of kilograms or even tonnes of bromine. The bromine industry is about one-hundredth the size of the chlorine industry. Laboratory production is unnecessary because bromine is commercially available and has a long shelf life.
Applications
A wide variety of organobromine compounds are used in industry. Some are prepared from bromine and others are prepared from hydrogen bromide, which is obtained by burning hydrogen in bromine.
Flame retardants
Brominated flame retardants represent a commodity of growing importance, and make up the largest commercial use of bromine. When the brominated material burns, the flame retardant produces hydrobromic acid which interferes in the radical chain reaction of the oxidation reaction of the fire. The mechanism is that the highly reactive hydrogen radicals, oxygen radicals, and hydroxyl radicals react with hydrobromic acid to form less reactive bromine radicals (i.e., free bromine atoms). Bromine atoms may also react directly with other radicals to help terminate the free radical chain-reactions that characterise combustion.
To make brominated polymers and plastics, bromine-containing compounds can be incorporated into the polymer during polymerisation. One method is to include a relatively small amount of brominated monomer during the polymerisation process. For example, vinyl bromide can be used in the production of polyethylene, polyvinyl chloride or polypropylene. Specific highly brominated molecules can also be added that participate in the polymerisation process. For example, tetrabromobisphenol A can be added to polyesters or epoxy resins, where it becomes part of the polymer. Epoxies used in printed circuit boards are normally made from such flame retardant resins, indicated by the FR in the abbreviation of the products (FR-4 and FR-2). In some cases, the bromine-containing compound may be added after polymerisation. For example, decabromodiphenyl ether can be added to the final polymers.
A number of gaseous or highly volatile brominated halomethane compounds are non-toxic and make superior fire suppressant agents by this same mechanism, and are particularly effective in enclosed spaces such as submarines, airplanes, and spacecraft. However, they are expensive and their production and use has been greatly curtailed due to their effect as ozone-depleting agents. They are no longer used in routine fire extinguishers, but retain niche uses in aerospace and military automatic fire suppression applications. They include bromochloromethane (Halon 1011, CH2BrCl), bromochlorodifluoromethane (Halon 1211, CBrClF2), and bromotrifluoromethane (Halon 1301, CBrF3).
Other uses
Silver bromide is used, either alone or in combination with silver chloride and silver iodide, as the light sensitive constituent of photographic emulsions.
Ethylene bromide was an additive in gasolines containing lead anti-knock agents. It scavenges lead by forming volatile lead bromide, which is exhausted from the engine. This application accounted for 77% of the bromine use in the US in 1966. It has declined since the 1970s due to environmental regulations (see below).
Brominated vegetable oil (BVO), a complex mixture of plant-derived triglycerides that have been reacted to contain atoms of the element bromine bonded to the molecules, is used primarily to help emulsify citrus-flavored soft drinks, preventing them from separating during distribution.
Poisonous bromomethane was widely used as a pesticide to fumigate soil and to fumigate housing, by the tenting method. Ethylene bromide was similarly used. These volatile organobromine compounds are all now regulated as ozone depletion agents. The Montreal Protocol on Substances that Deplete the Ozone Layer scheduled the phase-out of the ozone-depleting chemical by 2005, and organobromide pesticides are no longer used (in housing fumigation they have been replaced by compounds such as sulfuryl fluoride, which contains neither the chlorine nor the bromine organics that harm ozone). Before the Montreal Protocol, in 1991 for example, an estimated 35,000 tonnes of the chemical were used to control nematodes, fungi, weeds and other soil-borne diseases.
In pharmacology, inorganic bromide compounds, especially potassium bromide, were frequently used as general sedatives in the 19th and early 20th century. Bromides in the form of simple salts are still used as anticonvulsants in both veterinary and human medicine, although the latter use varies from country to country. For example, the U.S. Food and Drug Administration (FDA) does not approve bromide for the treatment of any disease, and sodium bromide was removed from over-the-counter sedative products such as Bromo-Seltzer in 1975. Commercially available organobromine pharmaceuticals include the vasodilator nicergoline, the sedative brotizolam, the anticancer agent pipobroman, and the antiseptic merbromin. Otherwise, organobromine compounds are rarely pharmaceutically useful, in contrast to the situation for organofluorine compounds. Several drugs are produced as the bromide (or, equivalently, hydrobromide) salts, but in such cases bromide serves as an innocuous counterion of no biological significance.
Other uses of organobromine compounds include high-density drilling fluids, dyes (such as Tyrian purple and the indicator bromothymol blue), and pharmaceuticals. Bromine itself, as well as some of its compounds, is used in water treatment, and it is the precursor of a variety of inorganic compounds with an enormous number of applications (e.g. silver bromide for photography). Zinc–bromine batteries are hybrid flow batteries used for stationary electrical power backup and storage, from household to industrial scale.
Bromine is used in cooling towers (in place of chlorine) for controlling bacteria, algae, fungi, and zebra mussels.
Because it has similar antiseptic qualities to chlorine, bromine can be used in the same manner as chlorine as a disinfectant or antimicrobial in applications such as swimming pools. Bromine came into this use in the United States during World War II due to a predicted shortage of chlorine. However, bromine is usually not used outdoors for these applications because it is more expensive than chlorine and lacks a stabilizer to protect it from sunlight. For indoor pools, it can be a good option as it is effective at a wider pH range. It is also more stable in a heated pool or hot tub.
Biological role and toxicity
A 2014 study suggests that bromine (in the form of bromide ion) is a necessary cofactor in the biosynthesis of collagen IV, making the element essential to basement membrane architecture and tissue development in animals. Nevertheless, no clear deprivation symptoms or syndromes have been documented in mammals. In other biological functions, bromine may be non-essential but still beneficial when it takes the place of chlorine. For example, in the presence of hydrogen peroxide, H2O2, formed by the eosinophil, and either chloride, iodide, thiocyanate, or bromide ions, eosinophil peroxidase provides a potent mechanism by which eosinophils kill multicellular parasites (such as the nematode worms involved in filariasis) and some bacteria (such as tuberculosis bacteria). Eosinophil peroxidase is a haloperoxidase that preferentially uses bromide over chloride for this purpose, generating hypobromite (hypobromous acid), although the use of chloride is possible.
α-Haloesters are generally thought of as highly reactive and consequently toxic intermediates in organic synthesis. Nevertheless, mammals, including humans, cats, and rats, appear to biosynthesize traces of an α-bromoester, 2-octyl 4-bromo-3-oxobutanoate, which is found in their cerebrospinal fluid and appears to play a yet unclarified role in inducing REM sleep. Neutrophil myeloperoxidase can use H2O2 and Br− to brominate deoxycytidine, which could result in DNA mutations. Marine organisms are the main source of organobromine compounds, and it is in these organisms that bromine is more firmly shown to be essential. More than 1600 such organobromine compounds were identified by 1999. The most abundant is methyl bromide (CH3Br), of which an estimated 56,000 tonnes is produced by marine algae each year. The essential oil of the Hawaiian alga Asparagopsis taxiformis consists of 80% bromoform. Most of such organobromine compounds in the sea are made by the action of a unique algal enzyme, vanadium bromoperoxidase.
The bromide anion is not very toxic: a normal daily intake is 2 to 8 milligrams. However, high levels of bromide chronically impair the membrane of neurons, which progressively impairs neuronal transmission, leading to toxicity, known as bromism. Bromide has an elimination half-life of 9 to 12 days, which can lead to excessive accumulation. Doses of 0.5 to 1 gram per day of bromide can lead to bromism. Historically, the therapeutic dose of bromide is about 3 to 5 grams of bromide, thus explaining why chronic toxicity (bromism) was once so common. While significant and sometimes serious disturbances occur to neurologic, psychiatric, dermatological, and gastrointestinal functions, death from bromism is rare. Bromism is caused by a neurotoxic effect on the brain which results in somnolence, psychosis, seizures and delirium.
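The link between the long elimination half-life and chronic accumulation can be illustrated with a minimal one-compartment model. The sketch below assumes simple first-order elimination with the 12-day half-life and a 1 g/day intake taken from the figures above; it is an illustration only, not a pharmacokinetic reference.

```python
import math

# One-compartment sketch of bromide accumulation under repeated daily dosing.
# Assumptions: first-order elimination, 12-day half-life, 1 g/day intake,
# instantaneous absorption.
half_life_days = 12.0
daily_dose_g = 1.0

k = math.log(2) / half_life_days        # elimination rate constant, 1/day
burden = 0.0
for day in range(60):                    # two months of daily dosing
    burden = burden * math.exp(-k) + daily_dose_g

steady_state = daily_dose_g / (1.0 - math.exp(-k))
print(f"body burden after 60 days ≈ {burden:.1f} g (steady state ≈ {steady_state:.1f} g)")
# The multi-gram accumulation illustrates why chronic bromism was once so common.
```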
Elemental bromine (Br2) is toxic and causes chemical burns on human flesh. Inhaling bromine gas results in similar irritation of the respiratory tract, causing coughing, choking, shortness of breath, and death if inhaled in large enough amounts. Chronic exposure may lead to frequent bronchial infections and a general deterioration of health. As a strong oxidising agent, bromine is incompatible with most organic and inorganic compounds. Caution is required when transporting bromine; it is commonly carried in steel tanks lined with lead, supported by strong metal frames. The Occupational Safety and Health Administration (OSHA) of the United States has set a permissible exposure limit (PEL) for bromine at a time-weighted average (TWA) of 0.1 ppm. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 0.1 ppm and a short-term limit of 0.3 ppm. The exposure to bromine immediately dangerous to life and health (IDLH) is 3 ppm. Bromine is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
General and cited references
Chemical elements
Diatomic nonmetals
Gases with color
Halogens
Oxidizing agents
Reactive nonmetals | Bromine | [
"Physics",
"Chemistry",
"Materials_science"
] | 8,256 | [
"Chemical elements",
"Redox",
"Diatomic nonmetals",
"Nonmetals",
"Oxidizing agents",
"Reactive nonmetals",
"Atoms",
"Matter"
] |
3,757 | https://en.wikipedia.org/wiki/Barium | Barium is a chemical element; it has symbol Ba and atomic number 56. It is the fifth element in group 2 and is a soft, silvery alkaline earth metal. Because of its high chemical reactivity, barium is never found in nature as a free element.
The most common minerals of barium are barite (barium sulfate, BaSO4) and witherite (barium carbonate, BaCO3). The name barium originates from the alchemical derivative "baryta", from Greek (), meaning 'heavy'. Baric is the adjectival form of barium. Barium was identified as a new element in 1772, but not reduced to a metal until 1808 with the advent of electrolysis.
Barium has few industrial applications. Historically, it was used as a getter for vacuum tubes and in oxide form as the emissive coating on indirectly heated cathodes. It is a component of YBCO (high-temperature superconductors) and electroceramics, and is added to steel and cast iron to reduce the size of carbon grains within the microstructure. Barium compounds are added to fireworks to impart a green color. Barium sulfate is used as an insoluble additive to oil well drilling fluid. In a purer form it is used as an X-ray radiocontrast agent for imaging the human gastrointestinal tract. Water-soluble barium compounds are poisonous and have been used as rodenticides.
Characteristics
Physical properties
Barium is a soft, silvery-white metal, with a slight golden shade when ultrapure. The silvery-white color of barium metal rapidly vanishes upon oxidation in air yielding a dark gray layer containing the oxide. Barium has a medium specific weight and high electrical conductivity. Because barium is difficult to purify, many of its properties have not been accurately determined.
At room temperature and pressure, barium metal adopts a body-centered cubic structure, with a barium–barium distance of 503 picometers, expanding with heating at a rate of approximately 1.8 × 10⁻⁵/°C. It is a soft metal with a Mohs hardness of 1.25. Its melting temperature is intermediate between those of the lighter strontium and heavier radium; however, its boiling point exceeds that of strontium. The density (3.62 g/cm3) is again intermediate between those of strontium (2.36 g/cm3) and radium (≈5 g/cm3).
Chemical reactivity
Barium is chemically similar to magnesium, calcium, and strontium, but more reactive. Its compounds are almost invariably found in the +2 oxidation state. As expected for a highly electropositive metal, barium's reaction with chalcogens is highly exothermic (releasing energy). Barium reacts with atmospheric oxygen in air at room temperature. For this reason, metallic barium is often stored under oil or in an inert atmosphere. Reactions with other nonmetals, such as carbon, nitrogen, phosphorus, silicon, and hydrogen, proceed upon heating. Reactions with water and alcohols are also exothermic and release hydrogen gas:
Ba + 2 ROH → Ba(OR)2 + H2↑ (R is an alkyl group or a hydrogen atom)
Barium reacts with ammonia to form the electride [Ba(NH3)6](e−)2, which near room temperature gives the amide Ba(NH2)2.
The metal is readily attacked by acids. Sulfuric acid is a notable exception because passivation stops the reaction by forming the insoluble barium sulfate on the surface. Barium combines with several other metals, including aluminium, zinc, lead, and tin, forming intermetallic phases and alloys.
Compounds
Barium salts are typically white when solid and colorless when dissolved. They are denser than the strontium or calcium analogs, except for the halides (see table; zinc is given for comparison).
Barium hydroxide ("baryta") was known to alchemists, who produced it by heating barium carbonate. Unlike calcium hydroxide, it absorbs very little CO2 in aqueous solutions and is therefore insensitive to atmospheric fluctuations. This property is used in calibrating pH equipment.
Barium compounds burn with a green to pale green flame, which is an efficient test to detect a barium compound. The color results from spectral lines at 455.4, 493.4, 553.6, and 611.1 nm.
Organobarium compounds are a growing field of knowledge: recently discovered are dialkylbariums and alkylhalobariums.
Isotopes
Barium found in the Earth's crust is a mixture of seven primordial nuclides, barium-130, 132, and 134 through 138. Barium-130 undergoes very slow radioactive decay to xenon-130 by double beta plus decay, with a half-life of (0.5–2.7)×10²¹ years (about 10¹¹ times the age of the universe). Its abundance is ≈0.1% that of natural barium. Theoretically, barium-132 can similarly undergo double beta decay to xenon-132; this decay has not been detected. The radioactivity of these isotopes is so weak that they pose no danger to life.
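The parenthetical comparison follows directly from the quoted numbers: taking the upper half-life bound together with an assumed age of the universe of about 1.38 × 10¹⁰ years (a standard value not given in the text),

\[
\frac{2.7 \times 10^{21}\ \text{yr}}{1.38 \times 10^{10}\ \text{yr}} \approx 2 \times 10^{11}.
\]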
Of the stable isotopes, barium-138 composes 71.7% of all barium; other isotopes have decreasing abundance with decreasing mass number.
In total, barium has 40 known isotopes, ranging in mass between 114 and 153. The most stable artificial radioisotope is barium-133 with a half-life of approximately 10.51 years. Five other isotopes have half-lives longer than a day. Barium also has 10 meta states, of which barium-133m1 is the most stable with a half-life of about 39 hours.
History
Alchemists in the early Middle Ages knew about some barium minerals. Smooth pebble-like stones of mineral baryte were found in volcanic rock near Bologna, Italy, and so were called "Bologna stones". Alchemists were attracted to them because after exposure to light they would glow for years. The phosphorescent properties of baryte heated with organics were described by V. Casciorolus in 1602.
Carl Scheele determined that baryte contained a new element in 1772, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Oxidized barium was at first called "barote" by Guyton de Morveau, a name that was changed by Antoine Lavoisier to baryte (in French) or baryta (in Latin). Also in the 18th century, English mineralogist William Withering noted a heavy mineral in the lead mines of Cumberland, now known to be witherite. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. Davy, by analogy with calcium, named "barium" after baryta, with the "-ium" ending signifying a metallic element. Robert Bunsen and Augustus Matthiessen obtained pure barium by electrolysis of a molten mixture of barium chloride and ammonium chloride.
The production of pure oxygen in the Brin process was a large-scale application of barium peroxide in the 1880s, before it was replaced by electrolysis and fractional distillation of liquefied air in the early 1900s. In this process barium oxide reacts with air at moderately elevated temperature to form barium peroxide, which decomposes at higher temperature by releasing oxygen:
2 BaO + O2 ⇌ 2 BaO2
Barium sulfate was first applied as a radiocontrast agent in X-ray imaging of the digestive system in 1908.
Occurrence and production
The abundance of barium is 0.0425% in the Earth's crust and 13 μg/L in sea water. The primary commercial source of barium is baryte (also called barytes or heavy spar), a barium sulfate mineral, with deposits in many parts of the world. Another commercial source, far less important than baryte, is witherite, barium carbonate. The main deposits are located in Britain, Romania, and the former USSR.
The baryte reserves are estimated between 0.7 and 2 billion tonnes. The highest production, 8.3 million tonnes, was achieved in 1981, but only 7–8% was used for barium metal or compounds. Baryte production has risen since the second half of the 1990s from 5.6 million tonnes in 1996 to 7.6 in 2005 and 7.8 in 2011. China accounts for more than 50% of this output, followed by India (14% in 2011), Morocco (8.3%), US (8.2%), Iran and Kazakhstan (2.6% each) and Turkey (2.5%).
The mined ore is washed, crushed, classified, and separated from quartz. If the quartz penetrates too deeply into the ore, or the iron, zinc, or lead content is abnormally high, then froth flotation is used. The product is a 98% pure baryte (by mass); the purity should be no less than 95%, with a minimal content of iron and silicon dioxide. It is then reduced by carbon to barium sulfide:
BaSO4 + 2 C → BaS + 2 CO2
The water-soluble barium sulfide is the starting point for other compounds: treating BaS with oxygen produces the sulfate, with nitric acid the nitrate, with aqueous carbon dioxide the carbonate, and so on. The nitrate can be thermally decomposed to yield the oxide. Barium metal is produced by reduction with aluminium at high temperature. The intermetallic compound BaAl4 is produced first:
3 BaO + 14 Al → 3 BaAl4 + Al2O3
BaAl4 is an intermediate reacted with barium oxide to produce the metal. Note that not all barium is reduced.
8 BaO + BaAl4 → 7 Ba↓ + 2 BaAl2O4
The remaining barium oxide reacts with the formed aluminium oxide:
BaO + Al2O3 → BaAl2O4
and the overall reaction is
4 BaO + 2 Al → 3 Ba↓ + BaAl2O4
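As a quick consistency check (an illustration using standard atomic masses, not figures from the text), the sketch below verifies that the overall equation is balanced and estimates the aluminium consumed per kilogram of barium.

```python
from collections import Counter

# Element balance of the overall reaction: 4 BaO + 2 Al → 3 Ba + BaAl2O4
def atoms(counts, coeff):
    return Counter({element: n * coeff for element, n in counts.items()})

lhs = atoms({"Ba": 1, "O": 1}, 4) + atoms({"Al": 1}, 2)
rhs = atoms({"Ba": 1}, 3) + atoms({"Ba": 1, "Al": 2, "O": 4}, 1)
print("balanced:", lhs == rhs)          # True

# Theoretical aluminium requirement per kilogram of barium metal (losses ignored)
M_BA, M_AL = 137.33, 26.98              # standard atomic masses, g/mol
print(f"{(2 * M_AL) / (3 * M_BA):.3f} kg Al per kg Ba")   # ≈ 0.131
```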
Barium vapor is condensed and packed into molds in an atmosphere of argon. This method is used commercially, yielding ultrapure barium. Commonly sold barium is about 99% pure, with main impurities being strontium and calcium (up to 0.8% and 0.25%) and other contaminants contributing less than 0.1%.
A similar reaction with silicon at high temperature yields barium and barium metasilicate. Electrolysis is not used because barium readily dissolves in molten halides and the product is rather impure.
Gemstone
The barium mineral, benitoite (barium titanium silicate), occurs as a very rare blue fluorescent gemstone, and is the official state gem of California.
Barium in seawater
Barium exists in seawater as the Ba2+ ion with an average oceanic concentration of 109 nmol/kg. Barium also exists in the ocean as BaSO4, or barite. Barium has a nutrient-like profile with a residence time of 10,000 years.
Barium shows a relatively consistent concentration in upper ocean seawater, excepting regions of high river inputs and regions with strong upwelling. There is little depletion of barium concentrations in the upper ocean for an ion with a nutrient-like profile, thus lateral mixing is important. Barium isotopic values show basin-scale balances instead of local or short-term processes.
Applications
Metal and alloys
Barium, as a metal or when alloyed with aluminium, is used to remove unwanted gases (gettering) from vacuum tubes, such as TV picture tubes. Barium is suitable for this purpose because of its low vapor pressure and reactivity towards oxygen, nitrogen, carbon dioxide, and water; it can even partly remove noble gases by dissolving them in the crystal lattice. This application is gradually disappearing due to the rising popularity of the tubeless LCD, LED, and plasma sets.
Other uses of elemental barium are minor and include an additive to silumin (aluminium–silicon alloys) that refines their structure, as well as
bearing alloys;
lead–tin soldering alloys – to increase the creep resistance;
alloy with nickel for spark plugs;
additive to steel and cast iron as an inoculant;
alloys with calcium, manganese, silicon, and aluminium as high-grade steel deoxidizers.
Barium sulfate and baryte
Barium sulfate (the mineral baryte, BaSO4) is important to the petroleum industry as a drilling fluid in oil and gas wells. The precipitate of the compound (called "blanc fixe", from the French for "permanent white") is used in paints and varnishes; as a filler in printing ink, plastics, and rubbers; as a paper coating pigment; and in nanoparticles, to improve physical properties of some polymers, such as epoxies.
Barium sulfate has a low toxicity and relatively high density of ca. 4.5 g/cm3 (and thus opacity to X-rays). For this reason it is used as a radiocontrast agent in X-ray imaging of the digestive system ("barium meals" and "barium enemas"). Lithopone, a pigment that contains barium sulfate and zinc sulfide, is a permanent white with good covering power that does not darken when exposed to sulfides.
Other barium compounds
Other compounds of barium find only niche applications, limited by the toxicity of Ba2+ ions (barium carbonate is a rat poison), which is not a problem for the insoluble BaSO4.
Barium oxide coating on the electrodes of fluorescent lamps facilitates the release of electrons.
Because of its high atomic density, barium carbonate increases the refractive index and luster of glass and reduces leakage of X-rays from cathode-ray tube (CRT) TV sets.
Barium, typically as barium nitrate, imparts a yellow or "apple" green color to fireworks; for brilliant green, barium chloride is used.
Barium peroxide is a catalyst in the aluminothermic reaction (thermite) for welding rail tracks. It is also used as a green flare in tracer ammunition and as a bleaching agent.
Barium titanate is a promising electroceramic.
Barium fluoride is used for optics in infrared applications because of its wide transparency range of 0.15–12 micrometers.
YBCO was the first high-temperature superconductor cooled by liquid nitrogen, with a transition temperature of about 93 K, greater than the boiling point of nitrogen (77 K).
Ferrite, a type of sintered ceramic composed of iron oxide (Fe2O3) and barium oxide (BaO), is both electrically nonconductive and ferrimagnetic, and can be temporarily or permanently magnetized.
Palaeoceanography
The lateral mixing of barium is caused by water mass mixing and ocean circulation. Global ocean circulation reveals a strong correlation between dissolved barium and silicic acid. The large-scale ocean circulation combined with remineralization of barium show a similar correlation between dissolved barium and ocean alkalinity.
Dissolved barium's correlation with silicic acid can be seen both vertically and spatially. Particulate barium shows a strong correlation with particulate organic carbon or POC. Barium is becoming more popular as a base for palaeoceanographic proxies. With both dissolved and particulate barium's links with silicic acid and POC, it can be used to determine historical variations in the biological pump, carbon cycle, and global climate.
The barium particulate barite (BaSO4), as one of many proxies, can be used to provide a host of historical information on processes in different oceanic settings (water column, sediments, and hydrothermal sites). In each setting there are differences in isotopic and elemental composition of the barite particulate. Barite in the water column, known as marine or pelagic barite, reveals information on seawater chemistry variation over time. Barite in sediments, known as diagenetic or cold seeps barite, gives information about sedimentary redox processes. Barite formed via hydrothermal activity at hydrothermal vents, known as hydrothermal barite, reveals alterations in the condition of the earth's crust around those vents.
Toxicity
Soluble barium compounds have an LD50 near 10 mg/kg (oral, rats). Symptoms include "convulsions... paralysis of the peripheral nerve system ... severe inflammation of the gastrointestinal tract". The insoluble sulfate is nontoxic and is not classified as dangerous goods in transport regulations.
Little is known about the long term effects of barium exposure. The US EPA considers it unlikely that barium is carcinogenic when consumed orally. Inhaled dust containing insoluble barium compounds can accumulate in the lungs, causing a benign condition called baritosis.
See also
Han purple and Han blue – synthetic barium copper silicate pigments developed and used in ancient and imperial China
References
External links
Barium at The Periodic Table of Videos (University of Nottingham)
Elementymology & Elements Multidict
3-D Holographic Display Using Strontium Barium Niobate
Chemical elements
Alkaline earth metals
Toxicology
Reducing agents
Chemical elements with body-centered cubic structure | Barium | [
"Physics",
"Chemistry",
"Environmental_science"
] | 3,724 | [
"Toxicology",
"Chemical elements",
"Redox",
"Reducing agents",
"Atoms",
"Matter"
] |
3,758 | https://en.wikipedia.org/wiki/Berkelium | Berkelium is a synthetic chemical element; it has symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered after neptunium, plutonium, curium and americium.
The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles.
Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table.
Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles.
Characteristics
Physical
Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa (2 × 10¹⁰ Pa).
Berkelium(III) ions show two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red – near-infrared) due to internal transitions at the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in presence of berkelium oxide or halide.
Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (μB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 μB calculated within the simple atomic L-S coupling model. Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous Bk3+ ions is obtained as −601 kJ/mol. The standard electrode potential Bk3+/Bk is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV.
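The theoretical moment quoted above can be reproduced from Hund's rules for a 5f⁸ ion, the configuration expected for Bk3+ by analogy with its lanthanide homologue terbium(III); this is a standard L–S-coupling calculation, not a derivation taken from the source:

\[
S = 3, \qquad L = 3, \qquad J = L + S = 6
\]
\[
g_J = 1 + \frac{J(J+1) + S(S+1) - L(L+1)}{2J(J+1)} = 1 + \frac{42 + 12 - 12}{2 \times 42} = \frac{3}{2}
\]
\[
\mu_{\mathrm{eff}} = g_J \sqrt{J(J+1)}\;\mu_{\mathrm{B}} = \tfrac{3}{2}\sqrt{42}\;\mu_{\mathrm{B}} \approx 9.72\;\mu_{\mathrm{B}},
\]

in agreement with the measured 9.69 μB.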
Allotropes
At ambient conditions, berkelium assumes its most stable α form which has a hexagonal symmetry, space group P63/mmc, lattice parameters of 341 pm and 1107 pm. The crystal has a double-hexagonal close packing structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has a face-centered cubic (fcc) symmetry and space group Fm3m. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons at the 5f electron shell. No further phase transitions are observed up to 57 GPa.
Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fm3m and the lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point.
Chemical
Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the berkelium(III) state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5), and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride-strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of Bk3+ ions are green in most acids. The color of Bk4+ ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide surface layer. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds.
Isotopes
Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years), and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. This emits mostly soft β-particles which are inconvenient for detection. Its alpha radiation is rather weak (1.45 × 10⁻³% with respect to the β-radiation), but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes.
Occurrence
All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now.
On Earth, berkelium is mostly concentrated in certain areas, which were used for the atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, Three Mile Island accident and 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956.
Among the berkelium isotopes, nuclear reactors produce mostly berkelium-249. During storage and before fuel disposal, most of it beta decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products.
The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Berkelium is also one of the elements that have theoretically been detected in Przybylski's Star.
History
Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950.
The name choice for element 97 followed the previous tradition of the Californian group to draw an analogy between the newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium was named after a continent like its analogue europium, and curium honored the scientists Marie and Pierre Curie just as the lanthanide above it, gadolinium, was named after the explorer of the rare-earth elements Johan Gadolin. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the next discovered actinide, californium, was not named after its lanthanide analogue dysprosium but after its place of discovery.
The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, americium (241Am) nitrate solution was coated on a platinum foil, the solution was evaporated and the residue converted by annealing to americium dioxide (AmO2). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley. The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons:
^{241}_{95}Am + ^{4}_{2}He -> ^{243}_{97}Bk + 2 ^{1}_{0}n
After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifuged and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium persulfate and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (AmF3). This step yielded a mixture of the accompanying product curium and the expected element 97 in the form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid.
Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH≈3.5), using ion exchange at elevated temperature. The chromatographic separation behavior was unknown for the element 97 at the time, but was anticipated by analogy with terbium. The first results were disappointing because no alpha-particle emission signature could be detected from the elution product. With further analysis, searching for characteristic X-rays and conversion electron signals, a berkelium isotope was eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243.
Synthesis and extraction
Preparation of isotopes
Berkelium is produced by bombarding the lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In the more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron fusion) followed by beta decay:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu (the times are half-lives)
Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than that of a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, US. The higher flux promotes fusion reactions involving not one but several neutrons, converting 239Pu to 244Cm and then to 249Cm through a chain of successive neutron captures and beta decays.
Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta-decay into 249Bk:
^{249}_{96}Cm ->[{\beta^-}][64.15 \ \ce{min}] ^{249}_{97}Bk ->[\beta^-][330 \ \ce{d}] ^{249}_{98}Cf
The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf:
^{249}_{97}Bk ->[\ce{(n,\gamma)}] ^{250}_{97}Bk ->[\beta^-][3.212 \ \ce{h}] ^{250}_{98}Cf
Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which is still available only in small quantities (only 0.66 grams were produced in the US over the period 1967–1983) at a high price of the order of 185 USD per microgram. It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied.
The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of a new isotope was proven by the growth of the decay product 248Cf, which had been previously characterized. The half-life of 248Bk was initially estimated to be on the order of hours, though later work in 1965 gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha particles.
Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of several minutes. A search for an initially suspected isotope 241Bk was unsuccessful at the time; 241Bk has since been synthesized.
Separation
The fact that berkelium readily assumes the oxidation state +4 in solids, and is relatively stable in this state in liquids, greatly assists the separation of berkelium from many other actinides. These are inevitably produced in relatively large amounts during the nuclear synthesis and often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to the solutions to convert berkelium to the +4 state, such as bromates, bismuthates, chromates, silver(I) thiolate, lead(IV) oxide, ozone, or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator called 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. Berkelium(IV) is then extracted with ion exchange, extraction chromatography or liquid-liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains).
A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained berkelium(IV) to the +3 oxidation state yields a solution that is nearly free from other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment.
Bulk metal preparation
In order to characterize chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years. This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram or submicrogram-sized samples.
The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967.
The first berkelium metal sample, weighing 1.7 micrograms, was prepared in 1971 by the reduction of berkelium(III) fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method.
Similar results are obtained with berkelium(IV) fluoride. Berkelium metal can also be produced by the reduction of berkelium oxide with thorium or lanthanum.
Compounds
Oxides
Two oxides of berkelium are known, with the berkelium oxidation state of +3 (Bk2O3) and +4 (BkO2). Berkelium(IV) oxide is a brown solid, while berkelium(III) oxide is a yellow-green solid with a melting point of 1920 °C; the latter is formed from BkO2 by reduction with molecular hydrogen.
Upon heating to 1200 °C, berkelium(III) oxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. Berkelium(II) oxide, BkO, has been reported as a brittle gray solid, but its exact chemical composition remains uncertain.
Halides
In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while the tetravalent halides are known only in the solid phase. The coordination of the berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with a coordination number of 9. In the trivalent bromide, it is bicapped trigonal prismatic (coordination 8) or octahedral (coordination 6), and in the iodide it is octahedral.
Berkelium(IV) fluoride (BkF4) is a yellow-green ionic solid and is isotypic with uranium tetrafluoride or zirconium tetrafluoride. Berkelium(III) fluoride (BkF3) is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride.
Visible amounts of berkelium(III) chloride (BkCl3) were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature of about 500 °C. This green solid has a melting point of 600 °C, and is isotypic with uranium(III) chloride. Upon heating to nearly its melting point, BkCl3 converts into an orthorhombic phase.
Two forms of berkelium(III) bromide are known: one with berkelium having coordination 6, and one with coordination 8. The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta decayed to californium-249. No change in structure was observed upon the transformation of 249BkBr3 into 249CfBr3. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not; this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well as on samples containing both bromides. The intergrowth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Besides the chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect can, however, be avoided by performing measurements as a function of time and extrapolating the obtained results.
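The ingrowth rate quoted above follows directly from the 330-day half-life of berkelium-249. The following is a minimal sketch in C, assuming simple exponential decay; the function name is illustrative and not from any particular source.

#include <math.h>
#include <stdio.h>

/* Fraction of an initially pure berkelium-249 sample that has decayed to
   californium-249 after t days, using the 330-day half-life quoted above. */
double cf249_ingrowth(double t_days)
{
    const double half_life = 330.0;          /* days */
    double lambda = log(2.0) / half_life;    /* decay constant per day */
    return 1.0 - exp(-lambda * t_days);      /* fraction converted to 249Cf */
}

int main(void)
{
    /* roughly 0.2% after one day and about 53% after one year */
    printf("%.4f %.4f\n", cf249_ingrowth(1.0), cf249_ingrowth(365.0));
    return 0;
}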
Other inorganic compounds
The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either berkelium hydride or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum.
Berkelium(III) sulfide, Bk2S3, is prepared by either treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals.
Berkelium(III) and berkelium(IV) hydroxides are both stable in 1 molar solutions of sodium hydroxide. Berkelium(III) phosphate has been prepared as a solid, which shows strong fluorescence under excitation with green light. Berkelium hydrides are produced by reacting the metal with hydrogen gas at temperatures of about 250 °C. They are non-stoichiometric with the nominal formula BkH2+x (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide and hydrated nitrate, chloride, sulfate and oxalate salts. Thermal decomposition of the hydrated sulfate at about 600 °C in an argon atmosphere (to avoid oxidation to the dioxide) yields crystals of berkelium(III) oxysulfate. This compound is thermally stable to at least 1000 °C in an inert atmosphere.
Organoberkelium compounds
Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting berkelium(III) chloride with molten beryllocene (Be(C5H5)2) at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimates without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield the corresponding chloride complex. The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk.
Applications
There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide to prepare still heavier transuranium elements and superheavy elements, such as lawrencium, rutherfordium and bohrium. It is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR.
A 22 milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia-US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118 which was initiated in 1989.
Nuclear fuel cycle
The nuclear fission properties of berkelium are different from those of the neighboring actinides curium and californium, and they suggest that berkelium would perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons and a resonance integral of 1,200 barns, but a very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250, which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg; it can be reduced with a water or steel reflector but would still exceed the world production of this isotope.
Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor, however, its production is rather complex and thus the availability is much lower than its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thickness).
Health issues
Little is known about the effects of berkelium on the human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of the electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans compared with other actinides. However, berkelium-249 transforms with a half-life of only 330 days into the strong alpha emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory.
Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of berkelium ends up in the bloodstream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles or 0.01% to the ovaries, where berkelium stays indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms.
References
Bibliography
External links
Berkelium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Chemical elements with double hexagonal close-packed structure
Actinides
Synthetic elements | Berkelium | [
"Physics",
"Chemistry"
] | 6,514 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
3,821 | https://en.wikipedia.org/wiki/Binary-coded%20decimal | In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow).
In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3).
The ten states representing a BCD digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade) while the unused, don't care-states are named pseudo-tetrad(e)s, pseudo-decimals, or pseudo-decimal digits.
BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage.
BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors.
BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating point formats cannot be tolerated.
Background
BCD takes advantage of the fact that any one decimal numeral can be represented by a four-bit pattern. An obvious way of encoding digits is Natural BCD (NBCD), where each decimal digit is represented by its corresponding four-bit binary value, as shown in the following table. This is also called "8421" encoding.
This scheme can also be referred to as Simple Binary-Coded Decimal (SBCD) or BCD 8421, and is the most common encoding. Others include the so-called "4221" and "7421" encodings, named after the weighting used for the bits, and "Excess-3". For example, the BCD digit 6, 0110 in 8421 notation, is 1100 in 4221 (two encodings are possible), 0110 in 7421, while in Excess-3 it is 1001 (6 + 3 = 9).
The following table represents decimal digits from 0 to 9 in various BCD encoding systems. In the headers, the "8421" indicates the weight of each bit. In the fifth column ("BCD 84−2−1"), two of the weights are negative. Both ASCII and EBCDIC character codes for the digits, which are examples of zoned BCD, are also shown.
As most computers deal with data in 8-bit bytes, it is possible to use one of the following methods to encode a BCD number:
Unpacked: Each decimal digit is encoded into one byte, with four bits representing the number and the remaining bits having no significance.
Packed: Two decimal digits are encoded into a single byte, with one digit in the least significant nibble (bits 0 through 3) and the other numeral in the most significant nibble (bits 4 through 7).
As an example, encoding the decimal number 91 using unpacked BCD results in the following binary pattern of two bytes:
Decimal: 9 1
Binary : 0000 1001 0000 0001
In packed BCD, the same number would fit into a single byte:
Decimal: 9 1
Binary : 1001 0001
Hence the numerical range for one unpacked BCD byte is zero through nine inclusive, whereas the range for one packed BCD byte is zero through ninety-nine inclusive.
To represent numbers larger than the range of a single byte any number of contiguous bytes may be used. For example, to represent the decimal number 12345 in packed BCD, using big-endian format, a program would encode as follows:
Decimal: 0 1 2 3 4 5
Binary : 0000 0001 0010 0011 0100 0101
Here, the most significant nibble of the most significant byte has been encoded as zero, so the number is stored as 012345 (but formatting routines might replace or remove leading zeros). Packed BCD is more efficient in storage usage than unpacked BCD; encoding the same number (with the leading zero) in unpacked format would consume twice the storage.
Shifting and masking operations are used to pack or unpack a packed BCD digit. Other bitwise operations are used to convert a numeral to its equivalent bit pattern or reverse the process.
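As a minimal illustration of the shifting and masking described above (the function names are illustrative and not part of any standard library):

#include <stdint.h>

/* Pack two decimal digits (0-9) into one byte: 'hi' goes into the most
   significant nibble and 'lo' into the least significant nibble. */
uint8_t bcd_pack(uint8_t hi, uint8_t lo)
{
    return (uint8_t)((hi << 4) | (lo & 0x0F));
}

/* Unpack a packed BCD byte into its two decimal digits. */
void bcd_unpack(uint8_t packed, uint8_t *hi, uint8_t *lo)
{
    *hi = packed >> 4;   /* most significant nibble */
    *lo = packed & 0x0F; /* least significant nibble */
}

With these helpers, bcd_pack(9, 1) yields the byte 1001 0001 shown in the packed example above.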
Packed BCD
In packed BCD (or packed decimal), each nibble represents a decimal digit. Packed BCD has been in use since at least the 1960s and is implemented in all IBM mainframe hardware since then. Most implementations are big endian, i.e. with the more significant digit in the upper half of each byte, and with the leftmost byte (residing at the lowest memory address) containing the most significant digits of the packed decimal value. The lower nibble of the rightmost byte is usually used as the sign flag, although some unsigned representations lack a sign flag.
As an example, a 4-byte value consists of 8 nibbles, wherein the upper 7 nibbles store the digits of a 7-digit decimal value, and the lowest nibble indicates the sign of the decimal integer value. Standard sign values are 1100 (hex C) for positive (+) and 1101 (D) for negative (−). This convention comes from the zone field for EBCDIC characters and the signed overpunch representation.
Other allowed signs are 1010 (A) and 1110 (E) for positive and 1011 (B) for negative. IBM System/360 processors will use the 1010 (A) and 1011 (B) signs if the A bit is set in the PSW, for the ASCII-8 standard that never passed. Most implementations also provide unsigned BCD values with a sign nibble of 1111 (F). ILE RPG uses 1111 (F) for positive and 1101 (D) for negative. These match the EBCDIC zone for digits without a sign overpunch. In packed BCD, the number 127 is represented by 0001 0010 0111 1100 (127C) and −127 is represented by 0001 0010 0111 1101 (127D). Burroughs systems used 1101 (D) for negative, and any other value is considered a positive sign value (the processors will normalize a positive sign to 1100 (C)).
No matter how many bytes wide a word is, there is always an even number of nibbles because each byte has two of them. Therefore, a word of n bytes can contain up to (2n)−1 decimal digits, which is always an odd number of digits. A decimal number with d digits requires (d + 1)/2 bytes of storage space, the extra half-byte being taken up by the sign nibble.
For example, a 4-byte (32-bit) word can hold seven decimal digits plus a sign and can represent values ranging from −9,999,999 to +9,999,999. Thus the number −1,234,567 is 7 digits wide and is encoded as:
0001 0010 0011 0100 0101 0110 0111 1101
1 2 3 4 5 6 7 −
Like character strings, the first byte of the packed decimal (the one containing the most significant two digits) is usually stored in the lowest address in memory, independent of the endianness of the machine.
In contrast, a 4-byte binary two's complement integer can represent values from −2,147,483,648 to +2,147,483,647.
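A rough sketch of how the signed packed layout described above might be produced in C follows; the function name, buffer convention and error handling are illustrative assumptions, not part of any particular library.

#include <stdint.h>
#include <stddef.h>

/* Encode a signed integer as packed BCD with a trailing sign nibble
   (0xC for positive, 0xD for negative), most significant digits first.
   'out' receives 'nbytes' bytes, giving 2*nbytes - 1 digits plus the sign. */
int packed_bcd_encode(int32_t value, uint8_t *out, size_t nbytes)
{
    uint8_t digits[20] = {0};                 /* nibble values, sign at index 0 */
    uint8_t sign = (value < 0) ? 0xD : 0xC;
    uint32_t mag = (value < 0) ? (uint32_t)(-(int64_t)value) : (uint32_t)value;
    size_t nibbles = 2 * nbytes;

    if (nbytes == 0 || nbytes > 10)
        return -1;                            /* unsupported buffer size */
    for (size_t i = 1; i < nibbles; i++) {    /* least significant digit first */
        digits[i] = (uint8_t)(mag % 10);
        mag /= 10;
    }
    if (mag != 0)
        return -1;                            /* value does not fit */
    digits[0] = sign;                         /* sign occupies the lowest nibble */
    for (size_t b = 0; b < nbytes; b++) {     /* most significant digits first */
        out[b] = (uint8_t)((digits[nibbles - 1 - 2 * b] << 4) |
                            digits[nibbles - 2 - 2 * b]);
    }
    return 0;
}

Calling packed_bcd_encode(-1234567, buf, 4) fills buf with 0x12 0x34 0x56 0x7D, matching the layout shown above.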
While packed BCD does not make optimal use of storage (using about 20% more memory than binary notation to store the same numbers), conversion to ASCII, EBCDIC, or the various encodings of Unicode is made trivial, as no arithmetic operations are required. The extra storage requirements are usually offset by the need for the accuracy and compatibility with calculator or hand calculation that fixed-point decimal arithmetic provides. Denser packings of BCD exist which avoid the storage penalty and also need no arithmetic operations for common conversions.
Packed BCD is supported in the COBOL programming language as the "COMPUTATIONAL-3" (an IBM extension adopted by many other compiler vendors) or "PACKED-DECIMAL" (part of the 1985 COBOL standard) data type. It is supported in PL/I as "FIXED DECIMAL". Beside the IBM System/360 and later compatible mainframes, packed BCD is implemented in the native instruction set of the original VAX processors from Digital Equipment Corporation and some models of the SDS Sigma series mainframes, and is the native format for the Burroughs Medium Systems line of mainframes (descended from the 1950s Electrodata 200 series).
Ten's complement representations for negative numbers offer an alternative approach to encoding the sign of packed (and other) BCD numbers. In this case, positive numbers always have a most significant digit between 0 and 4 (inclusive), while negative numbers are represented by the 10's complement of the corresponding positive number.
As a result, this system allows for 32-bit packed BCD numbers to range from −50,000,000 to +49,999,999, and −1 is represented as 99999999. (As with two's complement binary numbers, the range is not symmetric about zero.)
Fixed-point packed decimal
Fixed-point decimal numbers are supported by some programming languages (such as COBOL and PL/I). These languages allow the programmer to specify an implicit decimal point in front of one of the digits.
For example, a packed decimal value encoded with the bytes 12 34 56 7C represents the fixed-point value +1,234.567 when the implied decimal point is located between the fourth and fifth digits:
12 34 56 7C
12 34.56 7+
The decimal point is not actually stored in memory, as the packed BCD storage format does not provide for it. Its location is simply known to the compiler, and the generated code acts accordingly for the various arithmetic operations.
Higher-density encodings
If a decimal digit requires four bits, then three decimal digits require 12 bits. However, since 2^10 (1,024) is greater than 10^3 (1,000), if three decimal digits are encoded together, only 10 bits are needed. Two such encodings are Chen–Ho encoding and densely packed decimal (DPD). The latter has the advantage that subsets of the encoding encode two digits in the optimal seven bits and one digit in four bits, as in regular BCD.
Zoned decimal
Some implementations, for example IBM mainframe systems, support zoned decimal numeric representations. Each decimal digit is stored in one byte, with the lower four bits encoding the digit in BCD form. The upper four bits, called the "zone" bits, are usually set to a fixed value so that the byte holds a character value corresponding to the digit. EBCDIC systems use a zone value of 1111 (hex F); this yields bytes in the range F0 to F9 (hex), which are the EBCDIC codes for the characters "0" through "9". Similarly, ASCII systems use a zone value of 0011 (hex 3), giving character codes 30 to 39 (hex).
For signed zoned decimal values, the rightmost (least significant) zone nibble holds the sign digit, which is the same set of values that are used for signed packed decimal numbers (see above). Thus a zoned decimal value encoded as the hex bytes F1 F2 D3 represents the signed decimal value −123:
F1 F2 D3
1 2 −3
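The zoned layout above can be sketched in C as follows, using the 0xF zone and the C/D sign convention described above; the function name is illustrative.

#include <stdint.h>
#include <stddef.h>

/* Encode a signed value as EBCDIC zoned decimal: one digit per byte with
   zone nibble 0xF, and the sign (0xC positive, 0xD negative) replacing the
   zone of the last, least significant digit. */
void zoned_encode(long value, uint8_t *out, size_t ndigits)
{
    uint8_t sign = (value < 0) ? 0xD : 0xC;
    unsigned long mag = (value < 0) ? 0UL - (unsigned long)value
                                    : (unsigned long)value;

    for (size_t i = ndigits; i-- > 0; ) {     /* fill from the rightmost digit */
        uint8_t zone = (i == ndigits - 1) ? sign : 0xF;
        out[i] = (uint8_t)((zone << 4) | (uint8_t)(mag % 10));
        mag /= 10;
    }
}

For example, zoned_encode(-123, buf, 3) produces the bytes F1 F2 D3 shown above.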
EBCDIC zoned decimal conversion table
(*) Note: These characters vary depending on the local character code page setting.
Fixed-point zoned decimal
Some languages (such as COBOL and PL/I) directly support fixed-point zoned decimal values, assigning an implicit decimal point at some location between the decimal digits of a number.
For example, given a six-byte signed zoned decimal value with an implied decimal point to the right of the fourth digit, the hex bytes F1 F2 F7 F9 F5 C0 represent the value +1,279.50:
F1 F2 F7 F9 F5 C0
1 2 7 9. 5 +0
Operations with BCD
Addition
It is possible to perform addition by first adding in binary, and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 − 10) when the five-bit result of adding a pair of digits has a value greater than 9. The reason for adding 6 is that there are 16 possible 4-bit BCD values (since 2^4 = 16), but only 10 values are valid (0000 through 1001). For example:
1001 + 1000 = 10001
9 + 8 = 17
10001 is the binary, not decimal, representation of the desired result, but the most significant 1 (the "carry") cannot fit in a 4-bit binary number. In BCD as in decimal, there cannot exist a value greater than 9 (1001) per digit. To correct this, 6 (0110) is added to the total, and then the result is treated as two nibbles:
10001 + 0110 = 00010111 => 0001 0111
17 + 6 = 23 1 7
The two nibbles of the result, 0001 and 0111, correspond to the digits "1" and "7". This yields "17" in BCD, which is the correct result.
This technique can be extended to adding multiple digits by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of each digit-pair sum to 9. Some CPUs provide a half-carry flag to facilitate BCD arithmetic adjustments following binary addition and subtraction operations. The Intel 8080, the Zilog Z80 and the CPUs of the x86 family provide the opcode DAA (Decimal Adjust Accumulator).
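A straightforward digit-at-a-time version of this adjustment can be sketched in C; it assumes unsigned packed BCD operands laid out big-endian (two digits per byte, no sign nibble), and the names are illustrative.

#include <stdint.h>
#include <stddef.h>

/* Add two unsigned packed BCD numbers of equal length, applying the +6
   adjustment whenever a digit sum exceeds 9, and propagating the decimal
   carry from right to left. Returns the carry out of the top digit. */
unsigned bcd_add(const uint8_t *a, const uint8_t *b, uint8_t *sum, size_t nbytes)
{
    unsigned carry = 0;
    for (size_t i = nbytes; i-- > 0; ) {        /* rightmost byte first */
        unsigned lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
        carry = lo > 9;
        if (carry)
            lo += 6;                            /* decimal adjust low digit */
        unsigned hi = (a[i] >> 4) + (b[i] >> 4) + carry;
        carry = hi > 9;
        if (carry)
            hi += 6;                            /* decimal adjust high digit */
        sum[i] = (uint8_t)(((hi & 0x0F) << 4) | (lo & 0x0F));
    }
    return carry;
}

For the single-pair example above, adding the bytes 0x09 and 0x08 produces 0x17 with no carry out of the byte, i.e. the BCD digits 1 and 7.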
Subtraction
Subtraction is done by adding the ten's complement of the subtrahend to the minuend. To represent the sign of a number in BCD, the number 0000 is used to represent a positive number, and 1001 is used to represent a negative number. The remaining 14 combinations are invalid signs. To illustrate signed BCD subtraction, consider the following problem: 357 − 432.
In signed BCD, 357 is 0000 0011 0101 0111. The ten's complement of 432 can be obtained by taking the nine's complement of 432, and then adding one. So, 999 − 432 = 567, and 567 + 1 = 568. By preceding 568 in BCD by the negative sign code, the number −432 can be represented. So, −432 in signed BCD is 1001 0101 0110 1000.
Now that both numbers are represented in signed BCD, they can be added together:
0000 0011 0101 0111
0 3 5 7
+ 1001 0101 0110 1000
9 5 6 8
= 1001 1000 1011 1111
9 8 11 15
Since BCD is a form of decimal representation, several of the digit sums above are invalid. In the event that an invalid entry (any BCD digit greater than 1001) exists, 6 is added to generate a carry bit and cause the sum to become a valid entry. So, adding 6 to the invalid entries results in the following:
1001 1000 1011 1111
9 8 11 15
+ 0000 0000 0110 0110
0 0 6 6
= 1001 1001 0010 0101
9 9 2 5
Thus the result of the subtraction is 1001 1001 0010 0101 (−925). To confirm the result, note that the first digit is 9, which means negative. This seems to be correct since 357 − 432 should result in a negative number. The remaining nibbles are BCD, so 1001 0010 0101 is 925. The ten's complement of 925 is 1000 − 925 = 75, so the calculated answer is −75.
If there are a different number of nibbles being added together (such as 1053 − 2), the number with the fewer digits must first be prefixed with zeros before taking the ten's complement or subtracting. So, with 1053 − 2, 2 would have to first be represented as 0002 in BCD, and the ten's complement of 0002 would have to be calculated.
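Under the same assumptions, the ten's complement step described above can be sketched by taking the nine's complement of every digit and then adding 1 with the bcd_add sketch shown earlier; the array size and names are illustrative.

#include <stdint.h>
#include <stddef.h>

unsigned bcd_add(const uint8_t *a, const uint8_t *b, uint8_t *sum, size_t nbytes);
                                               /* from the earlier sketch */

/* Ten's complement of an unsigned packed BCD number (big-endian, two digits
   per byte, 1 <= nbytes <= 16): nine's complement of each digit, plus 1. */
void bcd_tens_complement(const uint8_t *x, uint8_t *out, size_t nbytes)
{
    uint8_t nines[16];
    uint8_t one[16] = {0};

    for (size_t i = 0; i < nbytes; i++) {
        unsigned hi = 9 - (x[i] >> 4);         /* nine's complement, high digit */
        unsigned lo = 9 - (x[i] & 0x0F);       /* nine's complement, low digit */
        nines[i] = (uint8_t)((hi << 4) | lo);
    }
    one[nbytes - 1] = 0x01;                    /* packed BCD constant ...0001 */
    bcd_add(nines, one, out, nbytes);          /* nine's complement + 1 */
}

Applied to the four-digit value 0432 (bytes 0x04 0x32), this yields the bytes 0x95 0x68, i.e. 9568, whose digits match the 1001 0101 0110 1000 pattern used for −432 in the worked example above.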
BCD in computers
IBM
IBM used the terms Binary-Coded Decimal Interchange Code (BCDIC, sometimes just called BCD), for 6-bit alphanumeric codes that represented numbers, upper-case letters and special characters. Some variation of BCDIC alphamerics is used in most early IBM computers, including the IBM 1620 (introduced in 1959), IBM 1400 series, and non-decimal architecture members of the IBM 700/7000 series.
The IBM 1400 series are character-addressable machines, each location being six bits labeled B, A, 8, 4, 2 and 1, plus an odd parity check bit (C) and a word mark bit (M). For encoding digits 1 through 9, B and A are zero and the digit value is represented by standard 4-bit BCD in bits 8 through 1. For most other characters bits B and A are derived simply from the "12", "11", and "0" "zone punches" in the punched card character code, and bits 8 through 1 from the 1 through 9 punches. A "12 zone" punch set both B and A, an "11 zone" set B, and a "0 zone" (a 0 punch combined with any others) set A. Thus the letter A, which is (12,1) in the punched card format, is encoded (B,A,1). The currency symbol $, (11,8,3) in the punched card, was encoded in memory as (B,8,2,1). This allowed the circuitry converting between the punched card format and the internal storage format to be very simple, with only a few special cases. One important special case is digit 0, represented by a lone 0 punch in the card, and (8,2) in core memory.
The memory of the IBM 1620 is organized into 6-bit addressable digits, the usual 8, 4, 2, 1 plus F, used as a flag bit and C, an odd parity check bit. BCD alphamerics are encoded using digit pairs, with the "zone" in the even-addressed digit and the "digit" in the odd-addressed digit, the "zone" being related to the 12, 11, and 0 "zone punches" as in the 1400 series. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
In the decimal architecture IBM 7070, IBM 7072, and IBM 7074 alphamerics are encoded using digit pairs (using two-out-of-five code in the digits, not BCD) of the 10-digit word, with the "zone" in the left digit and the "digit" in the right digit. Input/output translation hardware converted between the internal digit pairs and the external standard 6-bit BCD codes.
With the introduction of System/360, IBM expanded 6-bit BCD alphamerics to 8-bit EBCDIC, allowing the addition of many more characters (e.g., lowercase letters). A variable length packed BCD numeric data type is also implemented, providing machine instructions that perform arithmetic directly on packed decimal data.
On the IBM 1130 and 1800, packed BCD is supported in software by IBM's Commercial Subroutine Package.
Today, BCD data is still heavily used in IBM databases such as IBM Db2 and processors such as z/Architecture and POWER6 and later Power ISA processors. In these products, the BCD is usually zoned BCD (as in EBCDIC or ASCII), packed BCD (two decimal digits per byte), or "pure" BCD encoding (one decimal digit stored as BCD in the low four bits of each byte). All of these are used within hardware registers and processing units, and in software.
Other computers
The Digital Equipment Corporation VAX series includes instructions that can perform arithmetic directly on packed BCD data and convert between packed BCD data and other integer representations. The VAX's packed BCD format is compatible with that on IBM System/360 and IBM's later compatible processors. The MicroVAX and later VAX implementations dropped this ability from the CPU but retained code compatibility with earlier machines by implementing the missing instructions in an operating system-supplied software library. This is invoked automatically via exception handling when the defunct instructions are encountered, so that programs using them can execute without modification on the newer machines.
Many processors have hardware support for BCD-encoded integer arithmetic. For example, the 6502, the Motorola 68000 series, and the x86 series. The Intel x86 architecture supports a unique 18-digit (ten-byte) BCD format that can be loaded into and stored from the floating point registers, from where computations can be performed.
In more recent computers such capabilities are almost always implemented in software rather than the CPU's instruction set, but BCD numeric data are still extremely common in commercial and financial applications.
There are tricks for implementing packed BCD and zoned decimal add–or–subtract operations using short but difficult to understand sequences of word-parallel logic and binary arithmetic operations. For example, the following code (written in C) computes an unsigned 8-digit packed BCD addition using 32-bit binary operations:
uint32_t BCDadd(uint32_t a, uint32_t b)
{
uint32_t t1, t2; // unsigned 32-bit intermediate values
t1 = a + 0x06666666;
t2 = t1 ^ b; // sum without carry propagation
t1 = t1 + b; // provisional sum
t2 = t1 ^ t2; // all the binary carry bits
t2 = ~t2 & 0x11111110; // just the BCD carry bits
t2 = (t2 >> 2) | (t2 >> 3); // correction
return t1 - t2; // corrected BCD sum
}
BCD in electronics
BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit.
This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing with such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to an overall simpler system than converting to and from binary. Most pocket calculators do all their calculations in BCD.
The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, representing numbers internally in BCD format results in smaller code, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature dedicated arithmetic modes, which assist when writing routines that manipulate BCD quantities.
Comparison with pure binary
Advantages
Scaling by a power of 10 is simple.
Rounding at a decimal digit boundary is simpler. Addition and subtraction in decimal do not require rounding.
The alignment of two decimal numbers (for example 1.3 + 27.08) is a simple, exact shift.
Conversion to a character form or for display (e.g., to a text-based format such as XML, or to drive signals for a seven-segment display) is a simple per-digit mapping, and can be done in linear (O(n)) time. Conversion from pure binary involves relatively complex logic that spans digits, and for large numbers, no linear-time conversion algorithm is known.
Many non-integral values, such as decimal 0.2, have an infinite place-value representation in binary (.001100110011...) but have a finite place-value in binary-coded decimal (0.0010). Consequently, a system based on binary-coded decimal representations of decimal fractions avoids errors representing and calculating such values. This is useful in financial calculations.
Disadvantages
Practical existing implementations of BCD are typically slower than operations on binary representations, especially on embedded systems, due to limited processor support for native BCD operations.
Some operations are more complex to implement. Adders require extra logic to cause them to wrap and generate a carry early. Also, 15 to 20 per cent more circuitry is needed for BCD add compared to pure binary. Multiplication requires the use of algorithms that are somewhat more complex than shift-mask-add (a binary multiplication, requiring binary shifts and adds or the equivalent, per-digit or group of digits is required).
Standard BCD requires four bits per digit, roughly 20 per cent more space than a binary encoding (the ratio of 4 bits to log2(10) ≈ 3.32 bits is 1.204). When packed so that three digits are encoded in ten bits, the storage overhead is greatly reduced, at the expense of an encoding that is unaligned with the 8-bit byte boundaries common on existing hardware, resulting in slower implementations on these systems.
Representational variations
Various BCD implementations exist that employ other representations for numbers. Programmable calculators manufactured by Texas Instruments, Hewlett-Packard, and others typically employ a floating-point BCD format, typically with two or three digits for the (decimal) exponent. The extra bits of the sign digit may be used to indicate special numeric values, such as infinity, underflow/overflow, and error (a blinking display).
Signed variations
Signed decimal values may be represented in several ways. The COBOL programming language, for example, supports five zoned decimal formats, with each one encoding the numeric sign in a different way.
Telephony binary-coded decimal (TBCD)
3GPP developed TBCD, an expansion to BCD in which the remaining (unused) bit combinations are used to add specific telephony characters, with digits similar to those found in the original design of telephone keypads.
The mentioned 3GPP document defines TBCD-STRING with swapped nibbles in each byte. Bits, octets and digits are indexed from 1: bits from the right, digits and octets from the left.
bits 8765 of octet n encoding digit 2n
bits 4321 of octet n encoding digit 2(n – 1) + 1
This means that the number 1234 would become 21 43 in TBCD.
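A small C sketch of this nibble-swapped layout follows; the function name is illustrative, and the 0xF filler used for an odd number of digits is a common convention that should be checked against the relevant 3GPP specification.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Encode a string of decimal digits as TBCD: the first digit of each pair
   goes into the low nibble of the octet and the second into the high
   nibble, as described above. Returns the number of octets written. */
size_t tbcd_encode(const char *digits, uint8_t *out)
{
    size_t len = strlen(digits);
    size_t octets = (len + 1) / 2;

    for (size_t n = 0; n < octets; n++) {
        uint8_t lo = (uint8_t)(digits[2 * n] - '0');
        uint8_t hi = (2 * n + 1 < len) ? (uint8_t)(digits[2 * n + 1] - '0')
                                       : 0xF;   /* filler for odd length */
        out[n] = (uint8_t)((hi << 4) | lo);
    }
    return octets;
}

tbcd_encode("1234", buf) writes the two octets 0x21 0x43, matching the example above.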
This format is used in modern mobile telephony to send dialed numbers, as well as the operator ID (the MCC/MNC tuple), IMEI, IMSI (SUPI), etc.
Alternative encodings
If errors in representation and computation are more important than the speed of conversion to and from display, a scaled binary representation may be used, which stores a decimal number as a binary-encoded integer and a binary-encoded signed decimal exponent. For example, 0.2 can be represented as the integer 2 with an exponent of −1 (i.e. 2 × 10^−1).
This representation allows rapid multiplication and division, but may require shifting by a power of 10 during addition and subtraction to align the decimal points. It is appropriate for applications with a fixed number of decimal places that do not then require this adjustment—particularly financial applications where 2 or 4 digits after the decimal point are usually enough. Indeed, this is almost a form of fixed point arithmetic since the position of the radix point is implied.
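A minimal sketch of such a scaled representation in C (type and function names are illustrative; overflow handling is omitted):

#include <stdint.h>

/* A scaled decimal value: a binary-encoded integer plus a signed decimal
   exponent, so that 0.2 is stored as mantissa 2 with exponent -1. */
typedef struct {
    int64_t mantissa;   /* binary-encoded integer */
    int32_t exponent;   /* power of ten */
} scaled_decimal;

/* Multiplication needs no decimal-point alignment: multiply the integers
   and add the exponents. */
scaled_decimal scaled_mul(scaled_decimal a, scaled_decimal b)
{
    scaled_decimal r = { a.mantissa * b.mantissa, a.exponent + b.exponent };
    return r;
}

Addition, by contrast, first requires rescaling one operand by a power of ten so that both exponents match, as noted above.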
The Hertz and Chen–Ho encodings provide Boolean transformations for converting groups of three BCD-encoded digits to and from 10-bit values that can be efficiently encoded in hardware with only 2 or 3 gate delays. Densely packed decimal (DPD) is a similar scheme that is used for most of the significand, except the lead digit, for one of the two alternative decimal encodings specified in the IEEE 754-2008 floating-point standard.
Application
The BIOS in many personal computers stores the date and time in BCD because the MC6818 real-time clock chip used in the original IBM PC AT motherboard provided the time encoded in BCD. This form is easily converted into ASCII for display.
The Atari 8-bit computers use a BCD format for floating point numbers. The MOS Technology 6502 processor has a BCD mode for the addition and subtraction instructions. The Psion Organiser 1 handheld computer's manufacturer-supplied software also uses BCD to implement floating point; later Psion models use binary exclusively.
Early models of the PlayStation 3 store the date and time in BCD. This led to a worldwide outage of the console on 1 March 2010. The last two digits of the year stored as BCD were misinterpreted as 16 causing an error in the unit's date, rendering most functions inoperable. This has been referred to as the Year 2010 problem.
Legal history
In the 1972 case Gottschalk v. Benson, the U.S. Supreme Court overturned a lower court's decision that had allowed a patent for converting BCD-encoded numbers to binary on a computer.
The decision noted that a patent "would wholly pre-empt the mathematical formula and in practical effect would be a patent on the algorithm itself". This was a landmark judgement that determined the patentability of software and algorithms.
See also
Bi-quinary coded decimal
Binary-coded ternary (BCT)
Binary integer decimal (BID)
Bitmask
Chen–Ho encoding
Decimal computer
Densely packed decimal (DPD)
Double dabble, an algorithm for converting binary numbers to BCD
Year 2000 problem
Notes
References
Further reading
External links
Convert BCD to decimal, binary and hexadecimal and vice versa
BCD for Java
Computer arithmetic
Numeral systems
Non-standard positional numeral systems
Binary arithmetic
Articles with example C code | Binary-coded decimal | [
"Mathematics"
] | 6,565 | [
"Mathematical objects",
"Computer arithmetic",
"Numeral systems",
"Arithmetic",
"Binary arithmetic",
"Numbers"
] |
3,832 | https://en.wikipedia.org/wiki/Bauhaus | The Staatliches Bauhaus, commonly known as the Bauhaus, was a German art school operational from 1919 to 1933 that combined crafts and the fine arts. The school became famous for its approach to design, which attempted to unify individual artistic vision with the principles of mass production and emphasis on function. Along with the doctrine of functionalism, the Bauhaus initiated the conceptual understanding of architecture and design.
The Bauhaus was founded by architect Walter Gropius in Weimar. It was grounded in the idea of creating a Gesamtkunstwerk ("comprehensive artwork") in which all the arts would eventually be brought together. The Bauhaus style later became one of the most influential currents in modern design, modernist architecture, and architectural education. The Bauhaus movement had a profound influence on subsequent developments in art, architecture, graphic design, interior design, industrial design, and typography. Staff at the Bauhaus included prominent artists such as Paul Klee, Wassily Kandinsky, Gunta Stölzl, and László Moholy-Nagy at various points.
The school existed in three German cities—Weimar, from 1919 to 1925; Dessau, from 1925 to 1932; and Berlin, from 1932 to 1933—under three different architect-directors: Walter Gropius from 1919 to 1928; Hannes Meyer from 1928 to 1930; and Ludwig Mies van der Rohe from 1930 until 1933, when the school was closed by its own leadership under pressure from the Nazi regime, having been painted as a centre of communist intellectualism. Internationally, former key figures of the Bauhaus were successful in the United States and became known as the avant-garde for the International Style. The White City of Tel Aviv, to which numerous Jewish Bauhaus architects emigrated, has the highest concentration of the Bauhaus' international architecture in the world.
The changes of venue and leadership resulted in a constant shifting of focus, technique, instructors, and politics. For example, the pottery shop was discontinued when the school moved from Weimar to Dessau, even though it had been an important revenue source; when Mies van der Rohe took over the school in 1930, he transformed it into a private school and would not allow any supporters of Hannes Meyer to attend it.
Terms and concepts
Several specific features are identified in the Bauhaus forms and shapes: simple geometric shapes like rectangles and spheres, without elaborate decorations. Buildings, furniture, and fonts often feature rounded corners, sometimes rounded walls, or curved chrome pipes. Some buildings are characterized by rectangular features, for example protruding balconies with flat, chunky railings facing the street, and long banks of windows. Some outlines can be defined as a tool for creating an ideal form, which is the basis of the architectural concept.
Bauhaus and German modernism
After Germany's defeat in World War I and the establishment of the Weimar Republic, a renewed liberal spirit allowed an upsurge of radical experimentation in all the arts, which had been suppressed by the old regime. Many Germans of left-wing views were influenced by the cultural experimentation that followed the Russian Revolution, such as constructivism. Such influences can be overstated: Gropius did not share these radical views, and said that Bauhaus was entirely apolitical. Just as important was the influence of the 19th-century English designer William Morris (1834–1896), who had argued that art should meet the needs of society and that there should be no distinction between form and function. Thus, the Bauhaus style, also known as the International Style, was marked by the absence of ornamentation and by harmony between the function of an object or a building and its design.
However, the most important influence on Bauhaus was modernism, a cultural movement whose origins lay as early as the 1880s, and which had already made its presence felt in Germany before the World War, despite the prevailing conservatism. The design innovations commonly associated with Gropius and the Bauhaus—the radically simplified forms, the rationality and functionality, and the idea that mass production was reconcilable with the individual artistic spirit—were already partly developed in Germany before the Bauhaus was founded. The German national designers' organization Deutscher Werkbund was formed in 1907 by Hermann Muthesius to harness the new potentials of mass production, with a mind towards preserving Germany's economic competitiveness with England. In its first seven years, the Werkbund came to be regarded as the authoritative body on questions of design in Germany, and was copied in other countries. Many fundamental questions of craftsmanship versus mass production, the relationship of usefulness and beauty, the practical purpose of formal beauty in a commonplace object, and whether or not a single proper form could exist, were argued out among its 1,870 members (by 1914).
German architectural modernism was known as Neues Bauen. Beginning in June 1907, Peter Behrens' pioneering industrial design work for the German electrical company AEG successfully integrated art and mass production on a large scale. He designed consumer products, standardized parts, created clean-lined designs for the company's graphics, developed a consistent corporate identity, built the modernist landmark AEG Turbine Factory, and made full use of newly developed materials such as poured concrete and exposed steel. Behrens was a founding member of the Werkbund, and both Walter Gropius and Adolf Meyer worked for him in this period.
The Bauhaus was founded at a time when the German zeitgeist had turned from emotional Expressionism to the matter-of-fact New Objectivity. An entire group of working architects, including Erich Mendelsohn, Bruno Taut and Hans Poelzig, turned away from fanciful experimentation and towards rational, functional, sometimes standardized building. Beyond the Bauhaus, many other significant German-speaking architects in the 1920s responded to the same aesthetic issues and material possibilities as the school. They also responded to the promise "to promote the object of assuring to every German a healthful habitation" written into the new Weimar Constitution (Article 155). Ernst May, Bruno Taut and Martin Wagner, among others, built large housing blocks in Frankfurt and Berlin. The acceptance of modernist design into everyday life was the subject of publicity campaigns, well-attended public exhibitions like the Weissenhof Estate, films, and sometimes fierce public debate.
Bauhaus and Vkhutemas
The Vkhutemas, the Russian state art and technical school founded in 1920 in Moscow, has been compared to Bauhaus. Founded a year after the Bauhaus school, Vkhutemas has close parallels to the German Bauhaus in its intent, organization and scope. The two schools were the first to train artist-designers in a modern manner. Both schools were state-sponsored initiatives to merge traditional craft with modern technology, with a basic course in aesthetic principles, courses in color theory, industrial design, and architecture. Vkhutemas was a larger school than the Bauhaus, but it was less publicised outside the Soviet Union and consequently, is less familiar in the West.
With the internationalism of modern architecture and design, there were many exchanges between the Vkhutemas and the Bauhaus. The second Bauhaus director Hannes Meyer attempted to organise an exchange between the two schools, while Hinnerk Scheper of the Bauhaus collaborated with various Vkhutein members on the use of colour in architecture. In addition, El Lissitzky's book Russia: an Architecture for World Revolution published in German in 1930 featured several illustrations of Vkhutemas/Vkhutein projects there.
History of the Bauhaus
Weimar
The school was founded by Walter Gropius in Weimar on 1 April 1919, as a merger of the Grand-Ducal Saxon Academy of Fine Art and the Grand Ducal Saxon School of Arts and Crafts for a newly affiliated architecture department. Its roots lay in the arts and crafts school founded by the Grand Duke of Saxe-Weimar-Eisenach in 1906, and directed by Belgian Art Nouveau architect Henry van de Velde. When van de Velde was forced to resign in 1915 because he was Belgian, he suggested Gropius, Hermann Obrist, and August Endell as possible successors. In 1919, after delays caused by World War I and a lengthy debate over who should head the institution and the socio-economic meanings of a reconciliation of the fine arts and the applied arts (an issue which remained a defining one throughout the school's existence), Gropius was made the director of a new institution integrating the two called the Bauhaus. In the pamphlet for an April 1919 exhibition entitled Exhibition of Unknown Architects, Gropius, still very much under the influence of William Morris and the British Arts and Crafts Movement, proclaimed his goal as being "to create a new guild of craftsmen, without the class distinctions which raise an arrogant barrier between craftsman and artist." Gropius's neologism Bauhaus references both building and the Bauhütte, a premodern guild of stonemasons. The early intention was for the Bauhaus to be a combined architecture school, crafts school, and academy of the arts. Swiss painter Johannes Itten, German-American painter Lyonel Feininger, and German sculptor Gerhard Marcks, along with Gropius, comprised the faculty of the Bauhaus in 1919. By the following year their ranks had grown to include German painter, sculptor, and designer Oskar Schlemmer who headed the theatre workshop, and Swiss painter Paul Klee, joined in 1922 by Russian painter Wassily Kandinsky. The first major joint project completed by the Bauhaus was the Sommerfeld House, which was built between 1920 and 1921. A tumultuous year at the Bauhaus, 1922 also saw the move of Dutch painter Theo van Doesburg to Weimar to promote De Stijl ("The Style"), and a visit to the Bauhaus by Russian Constructivist artist and architect El Lissitzky.
From 1919 to 1922 the school was shaped by the pedagogical and aesthetic ideas of Johannes Itten, who taught the Vorkurs or "preliminary course" that was the introduction to the ideas of the Bauhaus. Itten was heavily influenced in his teaching by the ideas of Franz Cižek and Friedrich Wilhelm August Fröbel. He was also influenced in respect to aesthetics by the work of the Der Blaue Reiter group in Munich, as well as the work of Austrian Expressionist Oskar Kokoschka. The influence of German Expressionism favoured by Itten was analogous in some ways to the fine arts side of the ongoing debate. This influence culminated with the addition of Der Blaue Reiter founding member Wassily Kandinsky to the faculty and ended when Itten resigned in late 1923. Itten was replaced by the Hungarian designer László Moholy-Nagy, who rewrote the Vorkurs with a leaning towards the New Objectivity favoured by Gropius, which was analogous in some ways to the applied arts side of the debate. Although this shift was an important one, it did not represent a radical break from the past so much as a small step in a broader, more gradual socio-economic movement that had been going on at least since 1907, when van de Velde had argued for a craft basis for design while Hermann Muthesius had begun implementing industrial prototypes.
Gropius was not necessarily against Expressionism, and in the same 1919 pamphlet proclaiming this "new guild of craftsmen, without the class snobbery", described "painting and sculpture rising to heaven out of the hands of a million craftsmen, the crystal symbol of the new faith of the future." By 1923, however, Gropius was no longer evoking images of soaring Romanesque cathedrals and the craft-driven aesthetic of the "Völkisch movement", instead declaring "we want an architecture adapted to our world of machines, radios and fast cars." Gropius argued that a new period of history had begun with the end of the war. He wanted to create a new architectural style to reflect this new era. His style in architecture and consumer goods was to be functional, cheap and consistent with mass production. To these ends, Gropius wanted to reunite art and craft to arrive at high-end functional products with artistic merit. The Bauhaus issued a magazine called Bauhaus and a series of books called "Bauhausbücher". Since the Weimar Republic lacked the number of raw materials available to the United States and Great Britain, it had to rely on the proficiency of a skilled labour force and an ability to export innovative and high-quality goods. Therefore, designers were needed and so was a new type of art education. The school's philosophy stated that the artist should be trained to work with the industry.
Weimar was in the German state of Thuringia, and the Bauhaus school received state support from the Social Democrat-controlled Thuringian state government. The school in Weimar experienced political pressure from conservative circles in Thuringian politics, increasingly so after 1923 as political tension rose. One condition placed on the Bauhaus in this new political environment was the exhibition of work undertaken at the school. This condition was met in 1923 with the Bauhaus' exhibition of the experimental Haus am Horn. The Ministry of Education placed the staff on six-month contracts and cut the school's funding in half. The Bauhaus issued a press release on 26 December 1924, setting the closure of the school for the end of March 1925. At this point it had already been looking for alternative sources of funding. After the Bauhaus moved to Dessau, a school of industrial design with teachers and staff less antagonistic to the conservative political regime remained in Weimar. This school was eventually known as the Technical University of Architecture and Civil Engineering, and in 1996 changed its name to Bauhaus-University Weimar.
Dessau
The Bauhaus moved to Dessau in 1925 and new facilities there were inaugurated in late 1926. Gropius's design for the Dessau facilities was a return to the futuristic Gropius of 1914 that had more in common with the International style lines of the Fagus Factory than the stripped down Neo-classical of the Werkbund pavilion or the Völkisch Sommerfeld House. During the Dessau years, there was a remarkable change in direction for the school. According to Elaine Hochman, Gropius had approached the Dutch architect Mart Stam to run the newly founded architecture program, and when Stam declined the position, Gropius turned to Stam's friend and colleague in the ABC group, Hannes Meyer.
Meyer became director when Gropius resigned in February 1928, and brought the Bauhaus its two most significant building commissions, both of which still exist: five apartment buildings in the city of Dessau, and the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer favoured measurements and calculations in his presentations to clients, along with the use of off-the-shelf architectural components to reduce costs. This approach proved attractive to potential clients. The school turned its first profit under his leadership in 1929.
But Meyer also generated a great deal of conflict. As a radical functionalist, he had no patience with the aesthetic program and forced the resignations of Herbert Bayer, Marcel Breuer, and other long-time instructors. Even though Meyer shifted the orientation of the school further to the left than it had been under Gropius, he didn't want the school to become a tool of left-wing party politics. He prevented the formation of a student communist cell, and in the increasingly dangerous political atmosphere, this became a threat to the existence of the Dessau school. Dessau mayor Fritz Hesse fired him in the summer of 1930. The Dessau city council attempted to convince Gropius to return as head of the school, but Gropius instead suggested Ludwig Mies van der Rohe. Mies was appointed in 1930 and immediately interviewed each student, dismissing those that he deemed uncommitted. He halted the school's manufacture of goods so that the school could focus on teaching, and appointed no new faculty other than his close confidant Lilly Reich. By 1931, the Nazi Party was becoming more influential in German politics. When it gained control of the Dessau city council, it moved to close the school.
Berlin
In late 1932, Mies rented a derelict factory in Berlin (Birkbusch Street 49) to use as the new Bauhaus with his own money. The students and faculty rehabilitated the building, painting the interior white. The school operated for ten months without further interference from the Nazi Party. In 1933, the Gestapo closed down the Berlin school. Mies protested the decision, eventually speaking to the head of the Gestapo, who agreed to allow the school to re-open. However, shortly after receiving a letter permitting the opening of the Bauhaus, Mies and the other faculty agreed to voluntarily shut down the school.
Although neither the Nazi Party nor Adolf Hitler had a cohesive architectural policy before they came to power in 1933, Nazi writers like Wilhelm Frick and Alfred Rosenberg had already labelled the Bauhaus "un-German" and criticized its modernist styles, deliberately generating public controversy over issues like flat roofs. Increasingly through the early 1930s, they characterized the Bauhaus as a front for communists and social liberals. Indeed, when Meyer was fired in 1930, a number of communist students loyal to him moved to the Soviet Union.
Even before the Nazis came to power, political pressure on Bauhaus had increased. The Nazi movement, from nearly the start, denounced the Bauhaus for its "degenerate art", and the Nazi regime was determined to crack down on what it saw as the foreign, probably Jewish, influences of "cosmopolitan modernism". Despite Gropius's protestations that as a war veteran and a patriot his work had no subversive political intent, the Berlin Bauhaus was pressured to close in April 1933. Emigrants did succeed, however, in spreading the concepts of the Bauhaus to other countries, including the "New Bauhaus" of Chicago: Mies decided to emigrate to the United States for the directorship of the School of Architecture at the Armour Institute (now Illinois Institute of Technology) in Chicago and to seek building commissions. The simple engineering-oriented functionalism of stripped-down modernism, however, did lead to some Bauhaus influences living on in Nazi Germany. When Hitler's chief engineer, Fritz Todt, began opening the new autobahns (highways) in 1935, many of the bridges and service stations were "bold examples of modernism", and among those submitting designs was Mies van der Rohe.
Architectural output
The paradox of the early Bauhaus was that, although its manifesto proclaimed that the aim of all creative activity was building, the school did not offer classes in architecture until 1927. During the years under Gropius (1919–1927), he and his partner Adolf Meyer observed no real distinction between the output of his architectural office and the school. The built output of Bauhaus architecture in these years is the output of Gropius: the Sommerfeld house in Berlin, the Otte house in Berlin, the Auerbach house in Jena, and the competition design for the Chicago Tribune Tower, which brought the school much attention. The definitive 1926 Bauhaus building in Dessau is also attributed to Gropius. Apart from contributions to the 1923 Haus am Horn, student architectural work amounted to un-built projects, interior finishes, and craft work like cabinets, chairs and pottery.
In the next two years under Meyer, the architectural focus shifted away from aesthetics and towards functionality. There were major commissions: one from the city of Dessau for five tightly designed "Laubenganghäuser" (apartment buildings with balcony access), which are still in use today, and another for the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer's approach was to research users' needs and scientifically develop the design solution. He intended to place emphasis on Gropius' objective analysis of the properties determining an object's use value, known as Wesensforschung. Gropius believed that it was possible to design exemplary products of universal validity that should be standardized.
Mies van der Rohe repudiated Meyer's politics, his supporters, and his architectural approach. As opposed to Gropius's "study of essentials", and Meyer's research into user requirements, Mies advocated a "spatial implementation of intellectual decisions", which effectively meant an adoption of his own aesthetics. Neither Mies van der Rohe nor his Bauhaus students saw any projects built during the 1930s.
The Bauhaus movement was not focused on developing worker housing. Only two projects, the apartment building project in Dessau and the Törten row housing fall into the worker housing category. It was the Bauhaus contemporaries Bruno Taut, Hans Poelzig and particularly Ernst May, as the city architects of Berlin, Dresden and Frankfurt respectively, who are rightfully credited with the thousands of socially progressive housing units built in Weimar Germany. The housing Taut built in south-west Berlin during the 1920s, close to the U-Bahn stop Onkel Toms Hütte, is still occupied.
Impact
The Bauhaus had a major impact on art and architecture trends in Western Europe, Canada, the United States and Israel in the decades following its demise, as many of the artists involved fled, or were exiled by the Nazi regime. In 1996, four of the major sites associated with Bauhaus in Germany were inscribed on the UNESCO World Heritage List (with two more added in 2017).
In 1928, the Hungarian painter Alexander Bortnyik founded a school of design in Budapest called Műhely, which means "the studio". Located on the seventh floor of a house on Nagymezo Street, it was meant to be the Hungarian equivalent to the Bauhaus. The literature sometimes refers to it—in an oversimplified manner—as "the Budapest Bauhaus". Bortnyik was a great admirer of László Moholy-Nagy and had met Walter Gropius in Weimar between 1923 and 1925. Moholy-Nagy himself taught at the Műhely. Victor Vasarely, a pioneer of op art, studied at this school before settling in Paris in 1930.
Walter Gropius, Marcel Breuer, and Moholy-Nagy re-assembled in Britain during the mid-1930s and lived and worked in the Isokon housing development in Lawn Road in London before the war caught up with them. Gropius and Breuer went on to teach at the Harvard Graduate School of Design and worked together before their professional split. Their collaboration produced, among other projects, the Aluminum City Terrace in New Kensington, Pennsylvania and the Alan I W Frank House in Pittsburgh. The Harvard School was enormously influential in America in the late 1940s and early 1950s, producing such students as Philip Johnson, I. M. Pei, Lawrence Halprin and Paul Rudolph, among many others.
In the late 1930s, Mies van der Rohe re-settled in Chicago, enjoyed the sponsorship of the influential Philip Johnson, and became one of the world's pre-eminent architects. Moholy-Nagy also went to Chicago and founded the New Bauhaus school under the sponsorship of industrialist and philanthropist Walter Paepcke. This school became the Institute of Design, part of the Illinois Institute of Technology. Printmaker and painter Werner Drewes was also largely responsible for bringing the Bauhaus aesthetic to America and taught at both Columbia University and Washington University in St. Louis. Herbert Bayer, sponsored by Paepcke, moved to Aspen, Colorado in support of Paepcke's Aspen projects at the Aspen Institute. In 1953, Max Bill, together with Inge Aicher-Scholl and Otl Aicher, founded the Ulm School of Design (German: Hochschule für Gestaltung – HfG Ulm) in Ulm, Germany, a design school in the tradition of the Bauhaus. The school is notable for its inclusion of semiotics as a field of study. The school closed in 1968, but the "Ulm Model" concept continues to influence international design education. Another series of projects at the school were the Bauhaus typefaces, mostly realized in the decades afterward.
The influence of the Bauhaus on design education was significant. One of the main objectives of the Bauhaus was to unify art, craft, and technology, and this approach was incorporated into the curriculum of the Bauhaus. The structure of the Bauhaus Vorkurs (preliminary course) reflected a pragmatic approach to integrating theory and application. In their first year, students learnt the basic elements and principles of design and colour theory, and experimented with a range of materials and processes. This approach to design education became a common feature of architectural and design schools in many countries. For example, the Shillito Design School in Sydney stands as a unique link between Australia and the Bauhaus. The colour and design syllabus of the Shillito Design School was firmly underpinned by the theories and ideologies of the Bauhaus. Its first-year foundational course mimicked the Vorkurs and focused on the elements and principles of design plus colour theory and application. The school's founder, Phyllis Shillito, who opened it in 1962 and closed it in 1980, firmly believed that "A student who has mastered the basic principles of design, can design anything from a dress to a kitchen stove". In Britain, largely under the influence of painter and teacher William Johnstone, Basic Design, a Bauhaus-influenced art foundation course, was introduced at Camberwell School of Art and the Central School of Art and Design, whence it spread to all art schools in the country, becoming universal by the early 1960s.
One of the most important contributions of the Bauhaus is in the field of modern furniture design. The characteristic Cantilever chair and Wassily Chair designed by Marcel Breuer are two examples. (Breuer eventually lost a legal battle in Germany with Dutch architect/designer Mart Stam over patent rights to the cantilever chair design. Although Stam had worked on the design of the Bauhaus's 1923 exhibit in Weimar, and guest-lectured at the Bauhaus later in the 1920s, he was not formally associated with the school, and he and Breuer had worked independently on the cantilever concept, leading to the patent dispute.) The most profitable product of the Bauhaus was its wallpaper.
The physical plant at Dessau survived World War II and was operated as a design school with some architectural facilities by the German Democratic Republic. This included live stage productions in the Bauhaus theater under the name of Bauhausbühne ("Bauhaus Stage"). After German reunification, a reorganized school continued in the same building, with no essential continuity with the Bauhaus under Gropius in the early 1920s. In 1979 Bauhaus-Dessau College started to organize postgraduate programs with participants from all over the world. This effort has been supported by the Bauhaus-Dessau Foundation which was founded in 1974 as a public institution.
Later evaluation of the Bauhaus design credo was critical of its flawed recognition of the human element, an acknowledgment of "the dated, unattractive aspects of the Bauhaus as a projection of utopia marked by mechanistic views of human nature…Home hygiene without home atmosphere."
Subsequent examples which have continued the philosophy of the Bauhaus include Black Mountain College, Hochschule für Gestaltung in Ulm and Domaine de Boisbuchet.
The White City
The White City (Hebrew: העיר הלבנה), refers to a collection of over 4,000 buildings built in the Bauhaus or International Style in Tel Aviv from the 1930s by German Jewish architects who emigrated to the British Mandate of Palestine after the rise of the Nazis. Tel Aviv has the largest number of buildings in the Bauhaus/International Style of any city in the world. Preservation, documentation, and exhibitions have brought attention to Tel Aviv's collection of 1930s architecture. In 2003, the United Nations Educational, Scientific and Cultural Organization (UNESCO) proclaimed Tel Aviv's White City a World Cultural Heritage site, as "an outstanding example of new town planning and architecture in the early 20th century." The citation recognized the unique adaptation of modern international architectural trends to the cultural, climatic, and local traditions of the city. Bauhaus Center Tel Aviv organizes regular architectural tours of the city, and the Bauhaus Foundation offers Bauhaus exhibits.
Centenary
To mark the centenary of the founding of the Bauhaus, several events, festivals, and exhibitions were held around the world in 2019. The international opening festival at the Berlin Academy of the Arts from 16 to 24 January concentrated on "the presentation and production of pieces by contemporary artists, in which the aesthetic issues and experimental configurations of the Bauhaus artists continue to be inspiringly contagious". Original Bauhaus, The Centenary Exhibition at the Berlinische Galerie (6 September 2019 to 27 January 2020) presented 1,000 original artefacts from the Bauhaus-Archive's collection and recounted the history behind the objects. The Bauhaus Museum Dessau also opened in September 2019, operated by the Bauhaus Dessau Foundation and funded by the State of Saxony-Anhalt and the German Federal government. It is set to be the permanent home of the second-largest Bauhaus collection, at 49,000 objects, while paying homage to the school's strong influence on the city after the Bauhaus arrived in 1925.
In 2024, the German far-right party Alternative for Germany (AfD) attacked celebrations of the Bauhaus on the grounds that, in its view, the Bauhaus did not follow tradition. The Bauhaus had also been suppressed by the Nazis before World War II, and according to political scientist Jan-Werner Mueller, the party's condemnation seeks to use it in a culture war of far-right provocation.
The New European Bauhaus
In September 2020, President of the European Commission Ursula von der Leyen introduced the New European Bauhaus (NEB) initiative during her State of the Union address. The NEB is a creative and interdisciplinary movement that connects the European Green Deal to everyday life. It is a platform for experimentation aiming to unite citizens, experts, businesses and institutions in imagining and designing a sustainable, aesthetic and inclusive future.
Sport and physical activity were an essential part of the original Bauhaus approach. Hannes Meyer, the second director of Bauhaus Dessau, ensured that one day a week was solely devoted to sport and gymnastics. In 1930, Meyer employed two physical education teachers. The Bauhaus school even applied for public funds to enhance its playing field. The inclusion of sport and physical activity in the Bauhaus curriculum had various purposes. First, as Meyer put it, sport combatted a “one-sided emphasis on brainwork.” In addition, Bauhaus instructors believed that students could better express themselves if they actively experienced the space, rhythms and movements of the body. The Bauhaus approach also considered physical activity an important contributor to wellbeing and community spirit. Sport and physical activity were essential to the interdisciplinary Bauhaus movement that developed revolutionary ideas and continues to shape our environments today.
Bauhaus staff and students
People who were educated, or who taught or worked in other capacities, at the Bauhaus.
Gallery
See also
Art Deco architecture
Bauhaus Archive
Bauhaus Center Tel Aviv
Bauhaus Dessau Foundation
Bauhaus Museum, Tel Aviv
Bauhaus Museum, Weimar
Bauhaus Museum, Dessau
Bauhaus Project (computing)
Bauhaus World Heritage Site
Constructivist architecture
Expressionist architecture
Form follows function
Haus am Horn
IIT Institute of Design
International style (architecture)
Lucia Moholy
Max-Liebling House, Tel Aviv
Modern architecture
Neues Sehen (New Vision)
New Objectivity (architecture)
Swiss Style (design)
Ulm School of Design
Vkhutemas
Women of the Bauhaus
Explanatory footnotes
The closure, and the response of Mies van der Rohe, is fully documented in Elaine Hochman's Architects of Fortune.
Google honored Bauhaus for its 100th anniversary on 12 April 2019 with a Google Doodle.
Citations
General and cited references
Olaf Thormann: Bauhaus Saxony. arnoldsche Art Publishers 2019, .
Further reading
External links
Bauhaus Everywhere — Google Arts & Culture
Collection: Artists of the Bauhaus from the University of Michigan Museum of Art
1919 establishments in Germany
1933 disestablishments in Germany
Architecture in Germany
Architecture schools
Art movements
Design schools in Germany
Expressionist architecture
German architectural styles
Graphic design
Industrial design
Modernist architecture
Bauhaus, Dessau
Visual arts education
Bauhaus
Weimar culture
World Heritage Sites in Germany | Bauhaus | [
"Engineering"
] | 6,842 | [
"Industrial design",
"Design engineering",
"Design"
] |
3,876 | https://en.wikipedia.org/wiki/Binomial%20distribution | In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.
Definitions
Probability mass function
If the random variable X follows the binomial distribution with parameters n and p, we write X ~ B(n, p). The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function:

f(k, n, p) = \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}

for k = 0, 1, 2, ..., n, where

\binom{n}{k} = \frac{n!}{k!\,(n-k)!}

is the binomial coefficient. The formula can be understood as follows: p^k (1-p)^{n-k} is the probability of obtaining the sequence of n independent Bernoulli trials in which k trials are "successes" and the remaining n − k trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of n trials with k successes (and n − k failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are \binom{n}{k} such sequences, since the binomial coefficient counts the number of ways to choose the positions of the k successes among the n trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them (p^k (1-p)^{n-k}) must be added \binom{n}{k} times, hence \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k}.
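For illustration, a minimal Python sketch of this probability mass function, built directly from the formula above (it assumes Python 3.8+ for math.comb; the function name is chosen here for clarity):

```python
# Minimal sketch of the binomial PMF: C(n, k) * p^k * (1 - p)^(n - k).
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)
```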
In creating reference tables for binomial distribution probability, usually the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated by its complement as

f(k, n, p) = f(n-k, n, 1-p).
Looking at the expression f(k, n, p) as a function of k, there is a value of k that maximizes it. This value can be found by calculating

\frac{f(k+1, n, p)}{f(k, n, p)} = \frac{(n-k)\,p}{(k+1)(1-p)}

and comparing it to 1. There is always an integer M that satisfies

(n+1)p - 1 \le M < (n+1)p.

f(k, n, p) is monotone increasing for k < M and monotone decreasing for k > M, with the exception of the case where (n+1)p is an integer. In this case, there are two values for which f is maximal: (n+1)p and (n+1)p − 1. M is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode.

Equivalently, M − p < np ≤ M + 1 − p. When (n+1)p is not an integer, taking the floor function gives M = ⌊(n+1)p⌋.
Example
Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is

f(4, 6, 0.3) = \binom{6}{4} (0.3)^4 (0.7)^2 \approx 0.0595.
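The same value can be checked numerically with the sketch above (the numbers are only those of this example):

```python
# Probability of exactly 4 heads in 6 tosses with P(heads) = 0.3.
from math import comb

p_4_of_6 = comb(6, 4) * 0.3**4 * 0.7**2
print(round(p_4_of_6, 6))  # 0.059535
```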
Cumulative distribution function
The cumulative distribution function can be expressed as:

F(k; n, p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1-p)^{n-i},

where ⌊k⌋ is the "floor" under k, i.e. the greatest integer less than or equal to k.
It can also be represented in terms of the regularized incomplete beta function, as follows:

F(k; n, p) = \Pr(X \le k) = I_{1-p}(n-k,\, k+1),

which is equivalent to the cumulative distribution functions of the beta distribution and of the F-distribution.
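As a sketch (assuming SciPy is available), the identity between the binomial CDF and the regularized incomplete beta function can be checked numerically:

```python
# Check F(k; n, p) = I_{1-p}(n - k, k + 1) for one illustrative choice of n, p, k.
from scipy.special import betainc
from scipy.stats import binom

n, p, k = 6, 0.3, 2
direct = binom.cdf(k, n, p)
via_beta = betainc(n - k, k + 1, 1 - p)
print(direct, via_beta)  # both approximately 0.7443
```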
Some closed-form bounds for the cumulative distribution function are given below.
Properties
Expected value and variance
If X ~ B(n, p), that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is:

\operatorname{E}[X] = np.

This follows from the linearity of the expected value along with the fact that X is the sum of n identical Bernoulli random variables, each with expected value p. In other words, if X_1, \ldots, X_n are identical (and independent) Bernoulli random variables with parameter p, then X = X_1 + \cdots + X_n and

\operatorname{E}[X] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.

The variance is:

\operatorname{Var}(X) = np(1-p).

This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.
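A small simulation sketch (assuming NumPy; parameter values are illustrative) shows the sample mean and variance settling near np and np(1 − p):

```python
# Empirically check E[X] = n*p and Var(X) = n*p*(1 - p) for X ~ B(n, p).
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 0.3
samples = rng.binomial(n, p, size=200_000)
print(samples.mean(), n * p)            # ~6.0 vs 6.0
print(samples.var(), n * p * (1 - p))   # ~4.2 vs 4.2
```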
Higher moments
The first 6 central moments, defined as , are given by
The non-central moments satisfy
and in general
where are the Stirling numbers of the second kind, and is the th falling power of .
A simple bound
follows by bounding the Binomial moments via the higher Poisson moments:
This shows that if , then is at most a constant factor away from
Mode
Usually the mode of a binomial B(n, p) distribution is equal to ⌊(n+1)p⌋, where ⌊·⌋ is the floor function. However, when (n+1)p is an integer and p is neither 0 nor 1, then the distribution has two modes: (n+1)p and (n+1)p − 1. When p is equal to 0 or 1, the mode will be 0 and n correspondingly. These cases can be summarized as follows:
Proof: Let
For only has a nonzero value with . For we find and for . This proves that the mode is 0 for and for .
Let . We find
.
From this follows
So when is an integer, then and is a mode. In the case that , then only is a mode.
Median
In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established:
If np is an integer, then the mean, median, and mode coincide and equal np.
Any median m must lie within the interval ⌊np⌋ ≤ m ≤ ⌈np⌉.
A median m cannot lie too far away from the mean: |m − np| ≤ min{ln 2, max{p, 1 − p}}.
The median is unique and equal to m = round(np) when |m − np| ≤ min{p, 1 − p} (except for the case when p = 1/2 and n is odd).
When p is a rational number (with the exception of p = 1/2 with n odd) the median is unique.
When p = 1/2 and n is odd, any number m in the interval (n − 1)/2 ≤ m ≤ (n + 1)/2 is a median of the binomial distribution. If p = 1/2 and n is even, then m = n/2 is the unique median.
Tail bounds
For k ≤ np, upper bounds can be derived for the lower tail of the cumulative distribution function F(k; n, p) = \Pr(X \le k), the probability that there are at most k successes. Since \Pr(X \ge k) = F(n-k; n, 1-p), these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for k ≥ np.
Hoeffding's inequality yields the simple bound

F(k; n, p) \le \exp\!\left(-2n\left(p - \tfrac{k}{n}\right)^{2}\right),
which is however not very tight. In particular, for , we have that (for fixed , with ), but Hoeffding's bound evaluates to a positive constant.
A sharper bound can be obtained from the Chernoff bound:

F(k; n, p) \le \exp\!\left(-n\, D\!\left(\tfrac{k}{n} \,\middle\|\, p\right)\right)

where D(a ∥ p) is the relative entropy (or Kullback-Leibler divergence) between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distribution):

D(a \,\|\, p) = a \ln\frac{a}{p} + (1-a) \ln\frac{1-a}{1-p}.
Asymptotically, this bound is reasonably tight; see for details.
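A brief sketch (assuming SciPy; the values of n, p, k are illustrative) comparing the exact lower tail with the Chernoff bound described above:

```python
# Chernoff bound on the lower tail: P(X <= k) <= exp(-n * D(k/n || p)) for k <= n*p.
import math
from scipy.stats import binom

def bernoulli_kl(a: float, p: float) -> float:
    """Kullback-Leibler divergence between Bernoulli(a) and Bernoulli(p), 0 < a, p < 1."""
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 35
exact = binom.cdf(k, n, p)
bound = math.exp(-n * bernoulli_kl(k / n, p))
print(exact, bound)  # the bound is larger than the exact tail probability
```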
One can also obtain lower bounds on the tail , known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that
which implies the simpler but looser bound
For and for even , it is possible to make the denominator constant:
Statistical inference
Estimation of parameters
When n is known, the parameter p can be estimated using the proportion of successes:

\hat{p} = \frac{x}{n}.

This estimator is found using maximum likelihood estimation and also the method of moments. This estimator is unbiased and uniformly minimum-variance, proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: x, the number of successes). It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of p(1-p)/n, a property which is used in various ways, such as in Wald's confidence intervals.
A closed form Bayes estimator for also exists when using the Beta distribution as a conjugate prior distribution. When using a general as a prior, the posterior mean estimator is:
The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. Using the Bayesian estimator with the Beta distribution can be used with Thompson sampling.
For the special case of using the standard uniform distribution as a non-informative prior, Beta(α = 1, β = 1), the posterior mean estimator becomes:

\hat{p}_b = \frac{x+1}{n+2}.
(A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace.
When relying on Jeffreys prior, the prior is Beta(α = 1/2, β = 1/2), which leads to the estimator:

\hat{p}_{\text{Jeffreys}} = \frac{x + \tfrac{1}{2}}{n + 1}.
When estimating p with very rare events and a small n (e.g.: if x = 0), then using the standard estimator leads to \hat{p} = 0, which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator with a uniform prior, leading to:

\hat{p}_b = \frac{1}{n+2}.
Another method is to use the upper bound of the confidence interval obtained using the rule of three:

\hat{p}_{\text{rule of 3}} = \frac{3}{n}.
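The alternative point estimates mentioned above are simple enough to compare side by side; a sketch for x observed successes in n trials (the numbers are illustrative):

```python
# Point estimates of p when x successes are seen in n trials.
x, n = 0, 25
mle      = x / n                 # 0.0 here, often unrealistic for rare events
laplace  = (x + 1) / (n + 2)     # posterior mean under a uniform prior (rule of succession)
jeffreys = (x + 0.5) / (n + 1)   # posterior mean under a Jeffreys prior
rule_of_three_upper = 3 / n      # approximate 95% upper bound when x = 0
print(mle, laplace, jeffreys, rule_of_three_upper)
```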
Confidence intervals for the parameter p
Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed.
In the equations for confidence intervals below, the variables have the following meaning:
n1 is the number of successes out of n, the total number of trials
\hat{p} = n_1/n is the proportion of successes
is the quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate α. For example, for a 95% confidence level the error α = 0.05, so 1 − α/2 = 0.975 and z = 1.96.
Wald method
A continuity correction of may be added.
Agresti–Coull method
Here the estimate of is modified to
This method works well for and . See here for . For use the Wilson (score) method below.
Arcsine method
Wilson (score) method
The notation in the formula below differs from the previous formulas in two respects:
Firstly, has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the th quantile of the standard normal distribution', rather than being a shorthand for 'the th quantile'.
Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use to get the lower bound, or use to get the upper bound. For example: for a 95% confidence level the error = 0.05, so one gets the lower bound by using , and one gets the upper bound by using .
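Several of these interval methods are implemented in the statsmodels package; a sketch (assuming statsmodels is installed; the counts are illustrative) comparing them:

```python
# Compare Wald ("normal"), Agresti-Coull, Wilson and Clopper-Pearson ("beta") intervals.
from statsmodels.stats.proportion import proportion_confint

x, n = 30, 100
for method in ("normal", "agresti_coull", "wilson", "beta"):
    low, high = proportion_confint(x, n, alpha=0.05, method=method)
    print(method, round(low, 4), round(high, 4))
```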
Comparison
The so-called "exact" (Clopper–Pearson) method is the most conservative. (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.)
The Wald method, although commonly recommended in textbooks, is the most biased.
Related distributions
Sums of binomials
If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z = X + Y ~ B(n + m, p).
A binomially distributed random variable X ~ B(n, p) can be considered as the sum of n Bernoulli distributed random variables. So the sum of two binomially distributed random variables X ~ B(n, p) and Y ~ B(m, p) is equivalent to the sum of n + m Bernoulli distributed random variables, which means Z = X + Y ~ B(n + m, p). This can also be proven directly using the addition rule.
However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as B(n + m, \bar{p}).
Poisson binomial distribution
The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of independent non-identical Bernoulli trials .
Ratio of two binomial distributions
This result was first derived by Katz and coauthors in 1978.
Let X ~ B(n, p_1) and Y ~ B(m, p_2) be independent. Let T = (X/n)/(Y/m).
Then log(T) is approximately normally distributed with mean log(p_1/p_2) and variance ((1/p_1) − 1)/n + ((1/p_2) − 1)/m.
Conditional binomials
If X ~ B(n, p) and Y | X ~ B(X, q) (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution Y ~ B(n, pq).
For example, imagine throwing n balls to a basket UX and taking the balls that hit and throwing them to another basket UY. If p is the probability to hit UX then X ~ B(n, p) is the number of balls that hit UX. If q is the probability to hit UY then the number of balls that hit UY is Y ~ B(X, q) and therefore Y ~ B(n, pq).
Since and , by the law of total probability,
Since the equation above can be expressed as
Factoring and pulling all the terms that don't depend on out of the sum now yields
After substituting in the expression above, we get
Notice that the sum (in the parentheses) above equals by the binomial theorem. Substituting this in finally yields
and thus as desired.
Bernoulli distribution
The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ Bernoulli(p). Conversely, any binomial distribution, B(n, p), is the distribution of the sum of n independent Bernoulli trials, Bernoulli(p), each with the same probability p.
Normal approximation
If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution

\mathcal{N}(np,\ np(1-p)),
and this basic approximation can be improved in a simple way by using a suitable continuity correction.
The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:
One rule is that for the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if
This can be made precise using the Berry–Esseen theorem.
A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if
This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.
The rule is totally equivalent to request that
Moving terms around yields:
Since , we can apply the square power and divide by the respective factors and , to obtain the desired conditions:
Notice that these conditions automatically imply that . On the other hand, apply again the square root and divide by 3,
Subtracting the second set of inequalities from the first one yields:
and so, the desired first rule is satisfied,
Another commonly used rule is that both values np and n(1 − p) must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.
Assume that both values and are greater than 9. Since , we easily have that
We only have to divide now by the respective factors and , to deduce the alternative form of the 3-standard-deviation rule:
The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
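A sketch of this comparison (assuming SciPy; the choice n = 20, p = 0.5 is purely illustrative, since the example above does not fix the parameters):

```python
# Exact P(X <= 8) for X ~ B(20, 0.5) versus the normal approximation,
# with and without the 0.5 continuity correction.
from scipy.stats import binom, norm

n, p, k = 20, 0.5, 8
mu, sigma = n * p, (n * p * (1 - p)) ** 0.5
exact       = binom.cdf(k, n, p)
uncorrected = norm.cdf((k - mu) / sigma)
corrected   = norm.cdf((k + 0.5 - mu) / sigma)
print(exact, uncorrected, corrected)  # the corrected value is much closer to the exact one
```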
This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using x/n, the sample proportion and estimator of p, in a common test statistic.
For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation

\sqrt{\frac{p(1-p)}{n}}.
Poisson approximation
The binomial distribution converges towards the Poisson distribution as the number of trials n goes to infinity while the product np converges to a finite limit. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to the B(n, p) binomial distribution if n is sufficiently large and p is sufficiently small. According to rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05 such that np ≤ 1, or if n > 50 and p < 0.1 such that np < 5, or if n ≥ 100 and np ≤ 10.
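A sketch comparing the two distributions in such a regime (assuming SciPy; the specific n and p here are illustrative):

```python
# Binomial vs. Poisson probabilities when n is large and p is small (lambda = n*p = 2).
from scipy.stats import binom, poisson

n, p = 1000, 0.002
lam = n * p
for k in range(5):
    print(k, round(binom.pmf(k, n, p), 5), round(poisson.pmf(k, lam), 5))
```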
Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.
Limiting distributions
Poisson limit theorem: As n approaches infinity and p approaches 0 with the product np held fixed, the distribution B(n, p) approaches the Poisson distribution with expected value λ = np.
de Moivre–Laplace theorem: As n approaches infinity while p remains fixed, the distribution of

\frac{X - np}{\sqrt{np(1-p)}}

approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of (X − np)/\sqrt{np(1-p)} is asymptotically normal with expected value 0 and variance 1. This result is a specific case of the central limit theorem.
Beta distribution
The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success.
Mathematically, when α = k + 1 and β = n − k + 1, the beta distribution and the binomial distribution are related by a factor of n + 1:

\operatorname{Beta}(p;\, k+1,\, n-k+1) = (n+1)\binom{n}{k} p^k (1-p)^{n-k}.
Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference:
Given a uniform prior, the posterior distribution for the probability of success p given n independent events with k observed successes is a beta distribution.
Computational methods
Random number generation
Methods for random number generation where the marginal distribution is a binomial distribution are well-established.
One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability Pr(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.
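A minimal sketch of this inversion algorithm (function and variable names are chosen here only for illustration):

```python
# Inversion sampling for B(n, p): precompute cumulative probabilities for k = 0..n,
# then map uniform variates onto discrete outcomes.
import random
from math import comb

def binomial_inversion_sample(n: int, p: float, size: int):
    cdf, total = [], 0.0
    for k in range(n + 1):
        total += comb(n, k) * p**k * (1 - p)**(n - k)
        cdf.append(total)
    samples = []
    for _ in range(size):
        u = random.random()
        # first k whose cumulative probability reaches u (fall back to n on rounding)
        samples.append(next((k for k, c in enumerate(cdf) if c >= u), n))
    return samples

print(binomial_inversion_sample(10, 0.3, 5))
```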
History
This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s), where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.
See also
Logistic regression
Multinomial distribution
Negative binomial distribution
Beta-binomial distribution
Binomial measure, an example of a multifractal measure.
Statistical mechanics
Piling-up lemma, the resulting probability when XOR-ing independent Boolean variables
References
Further reading
External links
Interactive graphic: Univariate Distribution Relationships
Binomial distribution formula calculator
Difference of two binomial variables: X-Y or |X-Y|
Querying the binomial probability distribution in WolframAlpha
Confidence (credible) intervals for binomial probability, p: online calculator available at causaScientia.org
Discrete distributions
Factorial and binomial topics
Conjugate prior distributions
Exponential family distributions | Binomial distribution | [
"Mathematics"
] | 4,007 | [
"Factorial and binomial topics",
"Combinatorics"
] |
3,878 | https://en.wikipedia.org/wiki/Biostatistics | Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
History
Biostatistics and genetics
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies, since its beginning, used statistical concepts to understand observed experimental results. Some genetics scientists even contributed with statistical advances with the development of methods and tools. Gregor Mendel started the genetics studies investigating genetics segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's Mendelian inheritance work, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model with fractions of the heredity coming from each ancestor, composing an infinite series. He called this the theory of "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance came exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.
Resolving these differences also made it possible to define the concept of population genetics and brought genetics and evolution together. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.
Ronald Fisher worked alongside statistician Betty Allan developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as Allan's scientific papers. Fisher went on to give many contributions to genetics and statistics. Some of them include the ANOVA, p-value concepts, Fisher's exact test and Fisher's equation for population dynamics. He is credited for the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability".
Sewall G. Wright developed F-statistics and methods of computing them and defined the inbreeding coefficient.
J. B. S. Haldane's book, The Causes of Evolution, reestablished natural selection as the premier mechanism of evolution by explaining it in terms of the mathematical consequences of Mendelian genetics. He also developed the theory of primordial soup.
These and other biostatisticians, mathematical biologists, and statistically inclined geneticists helped bring together evolutionary biology and genetics into a consistent, coherent whole that could begin to be quantitatively modeled.
In parallel to this overall development, the pioneering work of D'Arcy Thompson in On Growth and Form also helped to add quantitative discipline to biological study.
Despite the fundamental importance and frequent necessity of statistical reasoning, there may nonetheless have been a tendency among biologists to distrust or deprecate results which are not qualitatively apparent. One anecdote describes Thomas Hunt Morgan banning the Friden calculator from his department at Caltech, saying "Well, I am like a guy who is prospecting for gold along the banks of the Sacramento River in 1849. With a little intelligence, I can reach down and pick up big nuggets of gold. And as long as I can do that, I'm not going to let any people in my department waste scarce resources in placer mining."
Research planning
Any research in the life sciences is proposed to answer a scientific question we might have. To answer this question with high certainty, we need accurate results. The correct definition of the main hypothesis and the research plan will reduce errors when making decisions about a phenomenon. The research plan might include the research question, the hypothesis to be tested, the experimental design, data collection methods, data analysis perspectives and costs involved. It is essential to carry out the study based on the three basic principles of experimental statistics: randomization, replication, and local control.
Research question
The research question will define the objective of a study. The research will be guided by the question, so it needs to be concise; at the same time it should be focused on interesting and novel topics that may improve science and knowledge in that field. To define the way the scientific question should be asked, an exhaustive literature review might be necessary, so that the research can add value to the scientific community.
Hypothesis definition
Once the aim of the study is defined, the possible answers to the research question can be proposed, transforming this question into a hypothesis. The main proposition is called the null hypothesis (H0) and is usually based on permanent knowledge about the topic or an obvious occurrence of the phenomenon, sustained by a deep literature review. We can say it is the standard expected answer for the data under the situation in test. In general, H0 assumes no association between treatments. On the other hand, the alternative hypothesis is the denial of H0. It assumes some degree of association between the treatment and the outcome. Both hypotheses are grounded in the research question and its expected and unexpected answers.
As an example, consider groups of similar animals (mice, for example) under two different diet systems. The research question would be: what is the best diet? In this case, H0 would be that there is no difference between the two diets in mice metabolism (H0: μ1 = μ2) and the alternative hypothesis would be that the diets have different effects over animals metabolism (H1: μ1 ≠ μ2).
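A hedged sketch of how such a two-group comparison might be tested with a two-sample t-test (assuming SciPy; the weight-gain numbers below are made up purely for illustration):

```python
# Two-sample t-test for H0: mu1 = mu2 versus H1: mu1 != mu2.
from scipy import stats

diet_1 = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7]   # hypothetical measurements, group 1
diet_2 = [11.4, 12.0, 11.8, 12.3, 11.1, 11.9]  # hypothetical measurements, group 2
t_stat, p_value = stats.ttest_ind(diet_1, diet_2)
print(t_stat, p_value)  # reject H0 if p_value < the chosen alpha
```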
The hypothesis is defined by the researcher, according to his or her interests in answering the main question. Besides that, there can be more than one alternative hypothesis. It can assume not only differences across observed parameters, but also their degree of difference (i.e., higher or lower).
Sampling
Usually, a study aims to understand the effect of a phenomenon on a population. In biology, a population is defined as all the individuals of a given species, in a specific area at a given time. In biostatistics, this concept is extended to a variety of possible collections of study: a population is not only the individuals, but can be the total of one specific component of their organisms, such as the whole genome, or all the sperm cells, for animals, or the total leaf area, for a plant, for example.
It is not possible to take measurements from all the elements of a population. Because of that, the sampling process is very important for statistical inference. Sampling is defined as randomly obtaining a representative part of the entire population, in order to make posterior inferences about the population. So, the sample should capture as much of the variability of the population as possible. The sample size is determined by several things, ranging from the scope of the research to the resources available. In clinical research, the trial type, such as non-inferiority, equivalence, or superiority, is key in determining sample size.
Experimental design
Experimental designs sustain those basic principles of experimental statistics. There are three basic experimental designs to randomly allocate treatments in all plots of the experiment. They are completely randomized design, randomized block design, and factorial designs. Treatments can be arranged in many ways inside the experiment. In agriculture, the correct experimental design is the root of a good study and the arrangement of treatments within the study is essential because environment largely affects the plots (plants, livestock, microorganisms). These main arrangements can be found in the literature under the names of "lattices", "incomplete blocks", "split plot", "augmented blocks", and many others. All of the designs might include control plots, determined by the researcher, to provide an error estimation during inference.
In clinical studies, the samples are usually smaller than in other biological studies, and in most cases, the environment effect can be controlled or measured. It is common to use randomized controlled clinical trials, where results are usually compared with observational study designs such as case–control or cohort.
Data collection
Data collection methods must be considered in research planning, because it highly influences the sample size and experimental design.
Data collection varies according to type of data. For qualitative data, collection can be done with structured questionnaires or by observation, considering presence or intensity of disease, using score criterion to categorize levels of occurrence. For quantitative data, collection is done by measuring numerical information using instruments.
In agriculture and biology studies, yield data and its components can be obtained by metric measures. However, pest and disease injuries in plants are obtained by observation, considering score scales for levels of damage. Especially in genetic studies, modern methods for data collection in the field and laboratory should be considered, such as high-throughput platforms for phenotyping and genotyping. These tools allow bigger experiments, making it possible to evaluate many plots in less time than a human-based method for data collection.
Finally, all data collected of interest must be stored in an organized data frame for further analysis.
Analysis and data interpretation
Descriptive tools
Data can be represented through tables or graphical representation, such as line charts, bar charts, histograms, scatter plots. Also, measures of central tendency and variability can be very useful to describe an overview of the data. Some examples follow:
Frequency tables
One type of table is the frequency table, which consists of data arranged in rows and columns, where the frequency is the number of occurrences or repetitions of data. Frequency can be:
Absolute: represents the number of times that a determined value appears;
Relative: obtained by the division of the absolute frequency by the total number;
In the next example, we have the number of genes in ten operons of the same organism.
Line graph
Line graphs represent the variation of a value over another metric, such as time. In general, values are represented in the vertical axis, while the time variation is represented in the horizontal axis.
Bar chart
A bar chart is a graph that shows categorical data as bars presenting heights (vertical bar) or widths (horizontal bar) proportional to represent values. Bar charts provide an image that could also be represented in a tabular format.
In the bar chart example, we have the birth rate in Brazil for the December months from 2010 to 2016. The sharp fall in December 2016 reflects the impact of the Zika virus outbreak on the birth rate in Brazil.
Histograms
The histogram (or frequency distribution) is a graphical representation of a dataset tabulated and divided into uniform or non-uniform classes. It was first introduced by Karl Pearson.
Scatter plot
A scatter plot is a mathematical diagram that uses Cartesian coordinates to display values of a dataset. A scatter plot shows the data as a set of points, each one presenting the value of one variable determining the position on the horizontal axis and another variable on the vertical axis. They are also called scatter graph, scatter chart, scattergram, or scatter diagram.
Mean
The arithmetic mean is the sum of a collection of values (x_1, x_2, ..., x_n) divided by the number of items of this collection (n).
Median
The median is the value in the middle of a dataset.
Mode
The mode is the value of a set of data that appears most often.
Box plot
Box plot is a method for graphically depicting groups of numerical data. The maximum and minimum values are represented by the lines, and the interquartile range (IQR) represent 25–75% of the data. Outliers may be plotted as circles.
Correlation coefficients
Although correlations between two different kinds of data could be inferred by graphs, such as scatter plots, it is necessary to validate this through numerical information. For this reason, correlation coefficients are required. They provide a numerical value that reflects the strength of an association.
Pearson correlation coefficient
Pearson correlation coefficient is a measure of association between two variables, X and Y. This coefficient, usually represented by ρ (rho) for the population and r for the sample, assumes values between −1 and 1, where ρ = 1 represents a perfect positive correlation, ρ = −1 represents a perfect negative correlation, and ρ = 0 is no linear correlation.
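A sketch computing the sample coefficient r (assuming SciPy; the two short series are illustrative):

```python
# Pearson correlation coefficient between two samples.
from scipy.stats import pearsonr

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
r, p_value = pearsonr(x, y)
print(round(r, 3))  # close to 1, indicating a strong positive linear association
```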
Inferential statistics
It is used to make inferences about an unknown population, by estimation and/or hypothesis testing. In other words, it is desirable to obtain parameters to describe the population of interest, but since the data is limited, it is necessary to make use of a representative sample in order to estimate them. With that, it is possible to test previously defined hypotheses and apply the conclusions to the entire population. The standard error of the mean is a measure of variability that is crucial to do inferences.
Hypothesis testing
Hypothesis testing is essential to make inferences about populations aiming to answer research questions, as settled in "Research planning" section. Authors defined four steps to be set:
The hypothesis to be tested: as stated earlier, we have to work with the definition of a null hypothesis (H0), that is going to be tested, and an alternative hypothesis. But they must be defined before the experiment implementation.
Significance level and decision rule: A decision rule depends on the level of significance, or in other words, the acceptable error rate (α). It is easier to think that we define a critical value that determines the statistical significance when a test statistic is compared with it. So, α also has to be predefined before the experiment.
Experiment and statistical analysis: This is when the experiment is really implemented following the appropriate experimental design, data is collected and the more suitable statistical tests are evaluated.
Inference: Is made when the null hypothesis is rejected or not rejected, based on the evidence that the comparison of p-values and α brings. It should be noted that the failure to reject H0 just means that there is not enough evidence to support its rejection, but not that this hypothesis is true.
Confidence intervals
A confidence interval is a range of values that contains the true parameter value with a given level of confidence. The first step is to estimate the best unbiased estimate of the population parameter. The upper value of the interval is obtained by adding to this estimate the product of the standard error of the mean and the quantile corresponding to the confidence level. The calculation of the lower value is similar, but a subtraction must be applied instead of a sum.
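A minimal sketch of this construction for a population mean, using the large-sample normal quantile 1.96 for a 95% interval (assuming NumPy; the data are illustrative):

```python
# 95% confidence interval for a mean: estimate +/- 1.96 * standard error of the mean.
import numpy as np

data = np.array([4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.2, 4.4])
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(len(data))
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(round(lower, 3), round(upper, 3))
```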
Statistical considerations
Power and statistical error
When testing a hypothesis, there are two types of statistic errors possible: Type I error and Type II error.
The type I error or false positive is the incorrect rejection of a true null hypothesis
The type II error or false negative is the failure to reject a false null hypothesis.
The significance level denoted by α is the type I error rate and should be chosen before performing the test. The type II error rate is denoted by β and statistical power of the test is 1 − β.
p-value
The p-value is the probability of obtaining results as extreme as or more extreme than those observed, assuming the null hypothesis (H0) is true. It is also called the calculated probability. It is common to confuse the p-value with the significance level (α), but α is a predefined threshold for declaring results significant. If p is less than α, the null hypothesis (H0) is rejected.
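A minimal sketch of this comparison, using a two-sample t-test on invented data (SciPy assumed):

```python
from scipy import stats

control = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
treated = [5.8, 6.1, 5.7, 6.0, 5.9, 6.2]
alpha = 0.05  # significance level chosen before the experiment

t_stat, p_value = stats.ttest_ind(control, treated)
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")
```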
Multiple testing
In multiple tests of the same hypothesis, the probability of the occurrence of false positives (the familywise error rate) increases, and strategies are used to control this occurrence. This is commonly achieved by using a more stringent threshold to reject null hypotheses. The Bonferroni correction defines an acceptable global significance level, denoted by α*, and each test is individually compared with a value of α = α*/m. This ensures that the familywise error rate in all m tests is less than or equal to α*. When m is large, the Bonferroni correction may be overly conservative. An alternative to the Bonferroni correction is to control the false discovery rate (FDR). The FDR controls the expected proportion of the rejected null hypotheses (the so-called discoveries) that are false (incorrect rejections). This procedure ensures that, for independent tests, the false discovery rate is at most q*. Thus, the FDR is less conservative than the Bonferroni correction and has more power, at the cost of more false positives.
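A sketch of both strategies using the statsmodels multipletests helper (the p-values are invented for illustration):

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.008, 0.020, 0.041, 0.120, 0.490, 0.740]

# Bonferroni: equivalent to comparing each p-value with alpha*/m
reject_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")

# Benjamini-Hochberg: controls the false discovery rate
reject_fdr, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

print("Bonferroni rejections:", reject_bonf)
print("FDR (BH) rejections:  ", reject_fdr)
```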
Mis-specification and robustness checks
The main hypothesis being tested (e.g., no association between treatments and outcomes) is often accompanied by other technical assumptions (e.g., about the form of the probability distribution of the outcomes) that are also part of the null hypothesis. When the technical assumptions are violated in practice, then the null may be frequently rejected even if the main hypothesis is true. Such rejections are said to be due to model mis-specification. Verifying whether the outcome of a statistical test does not change when the technical assumptions are slightly altered (so-called robustness checks) is the main way of combating mis-specification.
Model selection criteria
Model selection criteria are used to select the model that best approximates the true model. Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are examples of asymptotically efficient criteria.
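For reference, the usual definitions, with k the number of estimated parameters, n the sample size, and L̂ the maximized likelihood, are

$$\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}.$$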
Developments and big data
Recent developments have made a large impact on biostatistics. Two important changes have been the ability to collect data on a high-throughput scale, and the ability to perform much more complex analysis using computational techniques. This comes from the development in areas as sequencing technologies, Bioinformatics and Machine learning (Machine learning in bioinformatics).
Use in high-throughput data
New biomedical technologies like microarrays, next-generation sequencers (for genomics) and mass spectrometry (for proteomics) generate enormous amounts of data, allowing many tests to be performed simultaneously. Careful analysis with biostatistical methods is required to separate the signal from the noise. For example, a microarray could be used to measure many thousands of genes simultaneously, determining which of them have different expression in diseased cells compared to normal cells. However, only a fraction of genes will be differentially expressed.
Multicollinearity often occurs in high-throughput biostatistical settings. Due to high intercorrelation between the predictors (such as gene expression levels), the information of one predictor might be contained in another one. It could be that only 5% of the predictors are responsible for 90% of the variability of the response. In such a case, one could apply the biostatistical technique of dimension reduction (for example via principal component analysis). Classical statistical techniques like linear or logistic regression and linear discriminant analysis do not work well for high dimensional data (i.e. when the number of observations n is smaller than the number of features or predictors p: n < p). As a matter of fact, one can get quite high R2-values despite very low predictive power of the statistical model. These classical statistical techniques (esp. least squares linear regression) were developed for low dimensional data (i.e. where the number of observations n is much larger than the number of predictors p: n >> p). In cases of high dimensionality, one should always consider an independent validation test set and the corresponding residual sum of squares (RSS) and R2 of the validation test set, not those of the training set.
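A minimal dimension-reduction sketch with scikit-learn (the matrix sizes are illustrative: far more predictors than samples, as is typical for expression data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2000))    # 50 samples, 2000 predictors (e.g., gene expression levels)

pca = PCA(n_components=10)         # keep the 10 leading principal components
X_reduced = pca.fit_transform(X)   # shape (50, 10)
print(pca.explained_variance_ratio_.sum())
```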
Often, it is useful to pool information from multiple predictors together. For example, Gene Set Enrichment Analysis (GSEA) considers the perturbation of whole (functionally related) gene sets rather than of single genes. These gene sets might be known biochemical pathways or otherwise functionally related genes. The advantage of this approach is that it is more robust: It is more likely that a single gene is found to be falsely perturbed than it is that a whole pathway is falsely perturbed. Furthermore, one can integrate the accumulated knowledge about biochemical pathways (like the JAK-STAT signaling pathway) using this approach.
Bioinformatics advances in databases, data mining, and biological interpretation
The development of biological databases enables storage and management of biological data with the possibility of ensuring access for users around the world. They are useful for researchers depositing data, retrieve information and files (raw or processed) originated from other experiments or indexing scientific articles, as PubMed. Another possibility is search for the desired term (a gene, a protein, a disease, an organism, and so on) and check all results related to this search. There are databases dedicated to SNPs (dbSNP), the knowledge on genes characterization and their pathways (KEGG) and the description of gene function classifying it by cellular component, molecular function and biological process (Gene Ontology). In addition to databases that contain specific molecular information, there are others that are ample in the sense that they store information about an organism or group of organisms. As an example of a database directed towards just one organism, but that contains much data about it, is the Arabidopsis thaliana genetic and molecular database – TAIR. Phytozome, in turn, stores the assemblies and annotation files of dozen of plant genomes, also containing visualization and analysis tools. Moreover, there is an interconnection between some databases in the information exchange/sharing and a major initiative was the International Nucleotide Sequence Database Collaboration (INSDC) which relates data from DDBJ, EMBL-EBI, and NCBI.
Nowadays, increase in size and complexity of molecular datasets leads to use of powerful statistical methods provided by computer science algorithms which are developed by machine learning area. Therefore, data mining and machine learning allow detection of patterns in data with a complex structure, as biological ones, by using methods of supervised and unsupervised learning, regression, detection of clusters and association rule mining, among others. To indicate some of them, self-organizing maps and k-means are examples of cluster algorithms; neural networks implementation and support vector machines models are examples of common machine learning algorithms.
Collaborative work among molecular biologists, bioinformaticians, statisticians and computer scientists is important to perform an experiment correctly, going from planning, passing through data generation and analysis, and ending with biological interpretation of the results.
Use of computationally intensive methods
On the other hand, the advent of modern computer technology and relatively cheap computing resources have enabled computer-intensive biostatistical methods like bootstrapping and re-sampling methods.
In recent times, random forests have gained popularity as a method for performing statistical classification. Random forest techniques generate a panel of decision trees. Decision trees have the advantage that you can draw them and interpret them (even with a basic understanding of mathematics and statistics). Random Forests have thus been used for clinical decision support systems.
Applications
Public health
Public health, including epidemiology, health services research, nutrition, environmental health and health care policy & management. In these medicine contents, it's important to consider the design and analysis of the clinical trials. As one example, there is the assessment of severity state of a patient with a prognosis of an outcome of a disease.
With new technologies and knowledge of genetics, biostatistics is now also used for systems medicine, which consists of a more personalized medicine. For this, data from different sources are integrated, including conventional patient data, clinico-pathological parameters, molecular and genetic data, as well as data generated by additional new omics technologies.
Quantitative genetics
The study of population genetics and statistical genetics in order to link variation in genotype with a variation in phenotype. In other words, it is desirable to discover the genetic basis of a measurable trait, a quantitative trait, that is under polygenic control. A genome region that is responsible for a continuous trait is called a quantitative trait locus (QTL). The study of QTLs become feasible by using molecular markers and measuring traits in populations, but their mapping needs the obtaining of a population from an experimental crossing, like an F2 or recombinant inbred strains/lines (RILs). To scan for QTLs regions in a genome, a gene map based on linkage have to be built. Some of the best-known QTL mapping algorithms are Interval Mapping, Composite Interval Mapping, and Multiple Interval Mapping.
However, QTL mapping resolution is impaired by the amount of recombination assayed, a problem for species in which it is difficult to obtain large offspring. Furthermore, allele diversity is restricted to individuals originated from contrasting parents, which limit studies of allele diversity when we have a panel of individuals representing a natural population. For this reason, the genome-wide association study was proposed in order to identify QTLs based on linkage disequilibrium, that is the non-random association between traits and molecular markers. It was leveraged by the development of high-throughput SNP genotyping.
In animal and plant breeding, the use of markers in selection aiming for breeding, mainly molecular markers, contributed to the development of marker-assisted selection. While QTL mapping is limited by resolution, GWAS does not have enough power when rare variants of small effect are also influenced by the environment. So, the concept of genomic selection (GS) arose in order to use all molecular markers in the selection and allow the prediction of the performance of candidates in this selection. The proposal is to genotype and phenotype a training population, and develop a model that can obtain the genomic estimated breeding values (GEBVs) of individuals belonging to a population that has been genotyped but not phenotyped, called the testing population. This kind of study can also include a validation population, following the concept of cross-validation, in which the real phenotype results measured in this population are compared with the phenotype results based on the prediction, which is used to check the accuracy of the model.
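As a sketch of the prediction step only (not any particular GS software), ridge regression on marker genotypes, which is closely related to GBLUP, is a common baseline; the names and dimensions below are hypothetical:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
markers_train = rng.integers(0, 3, size=(200, 5000))   # genotyped and phenotyped individuals
phenotype_train = rng.normal(size=200)                  # measured trait values
markers_test = rng.integers(0, 3, size=(50, 5000))      # genotyped-only candidates

model = Ridge(alpha=100.0).fit(markers_train, phenotype_train)
gebv = model.predict(markers_test)   # predictions playing the role of GEBVs
```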
As a summary, some points about the application of quantitative genetics are:
This has been used in agriculture to improve crops (Plant breeding) and livestock (Animal breeding).
In biomedical research, this work can assist in finding candidates gene alleles that can cause or influence predisposition to diseases in human genetics
Expression data
Studies of differential expression of genes from RNA-Seq data, as for RT-qPCR and microarrays, demand comparisons of conditions. The goal is to identify genes which have a significant change in abundance between different conditions. Experiments are then designed appropriately, with replicates for each condition/treatment, and with randomization and blocking when necessary. In RNA-Seq, the quantification of expression uses the information of mapped reads that are summarized in some genetic unit, such as exons that are part of a gene sequence. While microarray results can be approximated by a normal distribution, RNA-Seq count data are better explained by other distributions. The first distribution used was the Poisson, but it underestimates the sample error, leading to false positives. Currently, biological variation is considered by methods that estimate a dispersion parameter of a negative binomial distribution. Generalized linear models are used to perform the tests for statistical significance, and as the number of genes is high, multiple-testing correction has to be considered. Other examples of analyses of genomics data come from microarray or proteomics experiments, often concerning diseases or disease stages.
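A rough sketch of the modeling idea for a single gene (not a substitute for dedicated differential-expression packages; counts, design, and the dispersion value are invented, and library-size normalization is omitted):

```python
import numpy as np
import statsmodels.api as sm

counts = np.array([512, 498, 530, 950, 1010, 980])   # one gene, three replicates per condition
condition = np.array([0, 0, 0, 1, 1, 1])             # 0 = control, 1 = treatment
X = sm.add_constant(condition)

# alpha is a pre-estimated dispersion parameter of the negative binomial distribution
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.05))
result = model.fit()
print(result.pvalues[1])   # p-value for the condition effect; corrected afterwards for multiple testing
```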
Other studies
Ecology, ecological forecasting
Biological sequence analysis
Systems biology for gene network inference or pathways analysis.
Clinical research and pharmaceutical development
Population dynamics, especially in regards to fisheries science.
Phylogenetics and evolution
Pharmacodynamics
Pharmacokinetics
Neuroimaging
Tools
There are many tools that can be used for statistical analysis of biological data. Most of them are also useful in other areas of knowledge, covering a large number of applications. Here are brief descriptions of some of them, in alphabetical order:
ASReml: Another software developed by VSNi that can be used also in R environment as a package. It is developed to estimate variance components under a general linear mixed model using restricted maximum likelihood (REML). Models with fixed effects and random effects and nested or crossed ones are allowed. Gives the possibility to investigate different variance-covariance matrix structures.
CycDesigN: A computer package developed by VSNi that helps the researchers create experimental designs and analyze data coming from a design present in one of three classes handled by CycDesigN. These classes are resolvable, non-resolvable, partially replicated and crossover designs. It includes less used designs the Latinized ones, as t-Latinized design.
Orange: A programming interface for high-level data processing, data mining and data visualization. Include tools for gene expression and genomics.
R: An open source environment and programming language dedicated to statistical computing and graphics. It is an implementation of S language maintained by CRAN. In addition to its functions to read data tables, take descriptive statistics, develop and evaluate models, its repository contains packages developed by researchers around the world. This allows the development of functions written to deal with the statistical analysis of data that comes from specific applications. In the case of Bioinformatics, for example, there are packages located in the main repository (CRAN) and in others, as Bioconductor. It is also possible to use packages under development that are shared in hosting-services as GitHub.
SAS: A data analysis software widely used, going through universities, services and industry. Developed by a company with the same name (SAS Institute), it uses SAS language for programming.
PLA 3.0: Is a biostatistical analysis software for regulated environments (e.g. drug testing) which supports Quantitative Response Assays (Parallel-Line, Parallel-Logistics, Slope-Ratio) and Dichotomous Assays (Quantal Response, Binary Assays). It also supports weighting methods for combination calculations and the automatic data aggregation of independent assay data.
Weka: A Java software for machine learning and data mining, including tools and methods for visualization, clustering, regression, association rule, and classification. There are tools for cross-validation, bootstrapping and a module of algorithm comparison. Weka also can be run in other programming languages as Perl or R.
Python (programming language) image analysis, deep-learning, machine-learning
SQL databases
NoSQL
NumPy numerical python
SciPy
SageMath
LAPACK linear algebra
MATLAB
Apache Hadoop
Apache Spark
Amazon Web Services
Scope and training programs
Almost all educational programmes in biostatistics are at postgraduate level. They are most often found in schools of public health, affiliated with schools of medicine, forestry, or agriculture, or as a focus of application in departments of statistics.
In the United States, where several universities have dedicated biostatistics departments, many other top-tier universities integrate biostatistics faculty into statistics or other departments, such as epidemiology. Thus, departments carrying the name "biostatistics" may exist under quite different structures. For instance, relatively new biostatistics departments have been founded with a focus on bioinformatics and computational biology, whereas older departments, typically affiliated with schools of public health, will have more traditional lines of research involving epidemiological studies and clinical trials as well as bioinformatics. In larger universities around the world, where both a statistics and a biostatistics department exist, the degree of integration between the two departments may range from the bare minimum to very close collaboration. In general, the difference between a statistics program and a biostatistics program is twofold: (i) statistics departments will often host theoretical/methodological research which are less common in biostatistics programs and (ii) statistics departments have lines of research that may include biomedical applications but also other areas such as industry (quality control), business and economics and biological areas other than medicine.
Specialized journals
Biostatistics
International Journal of Biostatistics
Journal of Epidemiology and Biostatistics
Biostatistics and Public Health
Biometrics
Biometrika
Biometrical Journal
Communications in Biometry and Crop Science
Statistical Applications in Genetics and Molecular Biology
Statistical Methods in Medical Research
Pharmaceutical Statistics
Statistics in Medicine
See also
Bioinformatics
Epidemiological method
Epidemiology
Group size measures
Health indicator
Mathematical and theoretical biology
References
External links
The International Biometric Society
The Collection of Biostatistics Research Archive
Guide to Biostatistics (MedPageToday.com)
Biomedical Statistics
Bioinformatics | Biostatistics | [
"Engineering",
"Biology"
] | 6,812 | [
"Bioinformatics",
"Biological engineering"
] |
3,931 | https://en.wikipedia.org/wiki/Binary%20relation | In mathematics, a binary relation associates elements of one set called the domain with elements of another set called the codomain. Precisely, a binary relation over sets X and Y is a set of ordered pairs (x, y), where x is in X and y is in Y. It encodes the common concept of relation: an element x is related to an element y, if and only if the pair (x, y) belongs to the set of ordered pairs that defines the binary relation.
An example of a binary relation is the "divides" relation over the set of prime numbers P and the set of integers Z, in which each prime p is related to each integer z that is a multiple of p, but not to an integer that is not a multiple of p. In this relation, for instance, the prime number 2 is related to numbers such as −4, 0, 6, 10, but not to 1 or 9, just as the prime number 3 is related to 0, 6, and 9, but not to 4 or 13.
Binary relations, and especially homogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. These include, among others:
the "is greater than", "is equal to", and "divides" relations in arithmetic;
the "is congruent to" relation in geometry;
the "is adjacent to" relation in graph theory;
the "is orthogonal to" relation in linear algebra.
A function may be defined as a binary relation that meets additional constraints. Binary relations are also heavily used in computer science.
A binary relation over sets X and Y is an element of the power set of X × Y. Since the latter set is ordered by inclusion (⊆), each relation has a place in the lattice of subsets of X × Y. A binary relation is called a homogeneous relation when X = Y. A binary relation is also called a heterogeneous relation when it is not necessary that X = Y.
Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations, for which there are textbooks by Ernst Schröder, Clarence Lewis, and Gunther Schmidt. A deeper analysis of relations involves decomposing them into subsets called concepts, and placing them in a complete lattice.
In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox.
A binary relation is the most studied special case, n = 2, of an n-ary relation over sets X1, ..., Xn, which is a subset of the Cartesian product X1 × ... × Xn.
Definition
Given sets X and Y, the Cartesian product X × Y is defined as {(x, y) | x ∈ X and y ∈ Y}, and its elements are called ordered pairs.
A binary relation R over sets X and Y is a subset of X × Y. The set X is called the domain or set of departure of R, and the set Y the codomain or set of destination of R. In order to specify the choices of the sets X and Y, some authors define a binary relation or correspondence as an ordered triple (X, Y, G), where G is a subset of X × Y called the graph of the binary relation. The statement (x, y) ∈ R reads "x is R-related to y" and is denoted by xRy. The domain of definition or active domain of R is the set of all x such that xRy for at least one y. The codomain of definition, active codomain, image or range of R is the set of all y such that xRy for at least one x. The field of R is the union of its domain of definition and its codomain of definition.
When X = Y, a binary relation is called a homogeneous relation (or endorelation). To emphasize the fact that X and Y are allowed to be different, a binary relation is also called a heterogeneous relation. The prefix hetero is from the Greek ἕτερος (heteros, "other, another, different").
A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-like symmetry of a homogeneous relation on a set where X = Y. Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning as heterogeneous or rectangular, i.e. as relations where the normal case is that they are relations between different sets."
The terms correspondence, dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product X × Y without reference to X and Y, and reserve the term "correspondence" for a binary relation with reference to X and Y.
In a binary relation, the order of the elements is important; if x ≠ y, then xRy can be true or false independently of yRx. For example, 3 divides 9, but 9 does not divide 3.
Operations
Union
If R and S are binary relations over sets X and Y, then R ∪ S = {(x, y) | xRy or xSy} is the union relation of R and S over X and Y.
The identity element is the empty relation. For example, ≤ is the union of < and =, and ≥ is the union of > and =.
Intersection
If R and S are binary relations over sets X and Y, then R ∩ S = {(x, y) | xRy and xSy} is the intersection relation of R and S over X and Y.
The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2".
Composition
If R is a binary relation over sets X and Y, and S is a binary relation over sets Y and Z, then S ∘ R = {(x, z) | there exists y ∈ Y such that xRy and ySz} (also denoted by R; S) is the composition relation of R and S over X and Z.
The identity element is the identity relation. The order of R and S in the notation S ∘ R used here agrees with the standard notational order for composition of functions. For example, composing "is parent of" with "is mother of" (applying "is parent of" first) yields "is maternal grandparent of", while composing them in the other order (applying "is mother of" first) yields "is grandmother of". For the former case, if x is the parent of y and y is the mother of z, then x is the maternal grandparent of z.
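A small sketch treating relations as sets of ordered pairs in Python (the individuals are invented):

```python
# "is parent of" and "is mother of" as sets of pairs
parent_of = {("alice", "beth"), ("carol", "dana")}
mother_of = {("beth", "eve"), ("dana", "frank")}

def compose(R, S):
    """Pairs (x, z) such that xRy and ySz for some y (R applied first)."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

# alice is parent of beth, beth is mother of eve -> alice is maternal grandparent of eve
print(compose(parent_of, mother_of))   # {('alice', 'eve'), ('carol', 'frank')}
```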
Converse
If R is a binary relation over sets X and Y, then R^T = {(y, x) | xRy} is the converse relation, also called inverse relation, of R over Y and X.
For example, = is the converse of itself, as is ≠, and < and > are each other's converse, as are ≤ and ≥. A binary relation is equal to its converse if and only if it is symmetric.
Complement
If R is a binary relation over sets X and Y, then R̄ = {(x, y) | not xRy} (also denoted by ¬R) is the complementary relation of R over X and Y.
For example, = and ≠ are each other's complement, as are ⊆ and ⊈, ⊇ and ⊉, and ∈ and ∉, and, for total orders, also < and ≥, and > and ≤.
The complement of the converse relation is the converse of the complement:
If X = Y, the complement has the following properties:
If a relation is symmetric, then so is the complement.
The complement of a reflexive relation is irreflexive—and vice versa.
The complement of a strict weak order is a total preorder—and vice versa.
Restriction
If is a binary homogeneous relation over a set and is a subset of then is the of to over .
If is a binary relation over sets and and if is a subset of then is the of to over and .
If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions.
However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation " is parent of " to females yields the relation " is mother of the woman "; its transitive closure does not relate a woman with her paternal grandmother. On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother.
Also, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation is that every non-empty subset with an upper bound in has a least upper bound (also called supremum) in However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation to the rational numbers.
A binary relation R over sets X and Y is said to be contained in a relation S over X and Y, written R ⊆ S, if R is a subset of S, that is, for all x in X and y in Y, if xRy, then xSy. If R is contained in S and S is contained in R, then R and S are called equal, written R = S. If R is contained in S but S is not contained in R, then R is said to be smaller than S, written R ⊊ S. For example, on the rational numbers, the relation > is smaller than ≥, and equal to the composition > ∘ >.
Matrix representation
Binary relations over sets and can be represented algebraically by logical matrices indexed by and with entries in the Boolean semiring (addition corresponds to OR and multiplication to AND) where matrix addition corresponds to union of relations, matrix multiplication corresponds to composition of relations (of a relation over and and a relation over and ), the Hadamard product corresponds to intersection of relations, the zero matrix corresponds to the empty relation, and the matrix of ones corresponds to the universal relation. Homogeneous relations (when ) form a matrix semiring (indeed, a matrix semialgebra over the Boolean semiring) where the identity matrix corresponds to the identity relation.
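A brief sketch with NumPy Boolean matrices (the sizes are illustrative), where the converse is the transpose and composition is the Boolean matrix product:

```python
import numpy as np

# Logical matrix of R over X (3 elements) and Y (4 elements)
R = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=bool)
# Logical matrix of S over Y and Z (2 elements)
S = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 0]], dtype=bool)

converse_R = R.T                                    # converse relation
composition = (R.astype(int) @ S.astype(int)) > 0   # relation from X to Z
print(composition.astype(int))
```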
Examples
Types of binary relations
Some important types of binary relations over sets and are listed below.
Uniqueness properties:
Injective (also called left-unique): for all and all if and then . In other words, every element of the codomain has at most one preimage element. For such a relation, is called a primary key of . For example, the green and blue binary relations in the diagram are injective, but the red one is not (as it relates both and to ), nor the black one (as it relates both and to ).
Functional (also called right-unique or univalent): for all and all if and then . In other words, every element of the domain has at most one image element. Such a binary relation is called a or . For such a relation, is called of . For example, the red and green binary relations in the diagram are functional, but the blue one is not (as it relates to both and ), nor the black one (as it relates to both and ).
One-to-one: injective and functional. For example, the green binary relation in the diagram is one-to-one, but the red, blue and black ones are not.
One-to-many: injective and not functional. For example, the blue binary relation in the diagram is one-to-many, but the red, green and black ones are not.
Many-to-one: functional and not injective. For example, the red binary relation in the diagram is many-to-one, but the green, blue and black ones are not.
Many-to-many: not injective nor functional. For example, the black binary relation in the diagram is many-to-many, but the red, green and blue ones are not.
Totality properties (only definable if the domain and codomain are specified):
Total (also called left-total): for all there exists a such that . In other words, every element of the domain has at least one image element. In other words, the domain of definition of is equal to . This property, is different from the definition of (also called by some authors) in Properties. Such a binary relation is called a . For example, the red and green binary relations in the diagram are total, but the blue one is not (as it does not relate to any real number), nor the black one (as it does not relate to any real number). As another example, is a total relation over the integers. But it is not a total relation over the positive integers, because there is no in the positive integers such that . However, is a total relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is total: for a given , choose .
Surjective (also called right-total): for all , there exists an such that . In other words, every element of the codomain has at least one preimage element. In other words, the codomain of definition of is equal to . For example, the green and blue binary relations in the diagram are surjective, but the red one is not (as it does not relate any real number to ), nor the black one (as it does not relate any real number to ).
Uniqueness and totality properties (only definable if the domain and codomain are specified):
A function (also called mapping): a binary relation that is functional and total. In other words, every element of the domain has exactly one image element. For example, the red and green binary relations in the diagram are functions, but the blue and black ones are not.
An injection: a function that is injective. For example, the green relation in the diagram is an injection, but the red one is not; the black and the blue relation is not even a function.
A surjection: a function that is surjective. For example, the green relation in the diagram is a surjection, but the red one is not.
A bijection: a function that is injective and surjective. In other words, every element of the domain has exactly one image element and every element of the codomain has exactly one preimage element. For example, the green binary relation in the diagram is a bijection, but the red one is not.
If relations over proper classes are allowed:
Set-like (also called local): for all , the class of all such that , i.e. , is a set. For example, the relation is set-like, and every relation on two sets is set-like. The usual ordering < over the class of ordinal numbers is a set-like relation, while its inverse > is not.
Sets versus classes
Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of axiomatic set theory. For example, to model the general concept of "equality" as a binary relation , take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory.
In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" set , that contains all the objects of interest, and work with the restriction instead of . Similarly, the "subset of" relation needs to be restricted to have domain and codomain (the power set of a specific set ): the resulting set relation can be denoted by Also, the "member of" relation needs to be restricted to have domain and codomain to obtain a binary relation that is a set. Bertrand Russell has shown that assuming to be defined over all sets leads to a contradiction in naive set theory, see Russell's paradox.
Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple , as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.) With this definition one can for instance define a binary relation over every set and its power set.
Homogeneous relation
A homogeneous relation over a set X is a binary relation over X and itself, i.e. it is a subset of the Cartesian product X × X. It is also simply called a (binary) relation over X.
A homogeneous relation R over a set X may be identified with a directed simple graph permitting loops, where X is the vertex set and R is the edge set (there is an edge from a vertex x to a vertex y if and only if xRy).
The set of all homogeneous relations over a set X is the power set of X × X, which is a Boolean algebra augmented with the involution mapping a relation to its converse relation. Considering composition of relations as a binary operation on this set, it forms a semigroup with involution.
Some important properties that a homogeneous relation R over a set X may have are:
Reflexive: for all x in X, xRx. For example, ≥ is a reflexive relation but > is not.
Irreflexive: for all x in X, not xRx. For example, > is an irreflexive relation, but ≥ is not.
Symmetric: for all x, y in X, if xRy then yRx. For example, "is a blood relative of" is a symmetric relation.
Antisymmetric: for all x, y in X, if xRy and yRx then x = y. For example, ≥ is an antisymmetric relation.
Asymmetric: for all x, y in X, if xRy then not yRx. A relation is asymmetric if and only if it is both antisymmetric and irreflexive. For example, > is an asymmetric relation, but ≥ is not.
Transitive: for all x, y, z in X, if xRy and yRz then xRz. A transitive relation is irreflexive if and only if it is asymmetric. For example, "is ancestor of" is a transitive relation, while "is parent of" is not.
Connected: for all x, y in X, if x ≠ y then xRy or yRx.
Strongly connected: for all x, y in X, xRy or yRx.
Dense: for all x, y in X, if xRy then some z in X exists such that xRz and zRy.
A partial order is a relation that is reflexive, antisymmetric, and transitive. A strict partial order is a relation that is irreflexive, asymmetric, and transitive. A total order is a relation that is reflexive, antisymmetric, transitive and connected. A strict total order is a relation that is irreflexive, asymmetric, transitive and connected.
An equivalence relation is a relation that is reflexive, symmetric, and transitive.
For example, " divides " is a partial, but not a total order on natural numbers "" is a strict total order on and " is parallel to " is an equivalence relation on the set of all lines in the Euclidean plane.
All operations defined in the section Operations also apply to homogeneous relations.
Beyond that, a homogeneous relation over a set may be subjected to closure operations like:
the reflexive closure: the smallest reflexive relation over X containing R,
the transitive closure: the smallest transitive relation over X containing R,
the equivalence closure: the smallest equivalence relation over X containing R (a small computational sketch of the first two closures follows this list).
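A small computational sketch of the first two closures, with a relation stored as a set of pairs (the data are illustrative):

```python
def reflexive_closure(R, X):
    """Smallest reflexive relation over X containing R."""
    return set(R) | {(x, x) for x in X}

def transitive_closure(R):
    """Smallest transitive relation containing R (naive fixed-point iteration)."""
    closure = set(R)
    while True:
        new_pairs = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        if new_pairs <= closure:
            return closure
        closure |= new_pairs

R = {(1, 2), (2, 3)}
print(reflexive_closure(R, {1, 2, 3}))   # adds (1, 1), (2, 2), (3, 3)
print(transitive_closure(R))             # adds (1, 3)
```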
Calculus of relations
Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion meaning that implies , sets the scene in a lattice of relations. But since the inclusion symbol is superfluous. Nevertheless, composition of relations and manipulation of the operators according to Schröder rules, provides a calculus to work in the power set of
In contrast to homogeneous relations, the composition of relations operation is only a partial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The objects of the category Rel are sets, and the relation-morphisms compose as required in a category.
Induced concept lattice
Binary relations have been described through their induced concept lattices:
A concept satisfies two properties:
The logical matrix of is the outer product of logical vectors logical vectors.
is maximal, not contained in any other outer product. Thus is described as a non-enlargeable rectangle.
For a given relation the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion forming a preorder.
The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices". The decomposition is
, where and are functions, called or left-total, functional relations in this context. The "induced concept lattice is isomorphic to the cut completion of the partial order that belongs to the minimal decomposition of the relation ."
Particular cases are considered below: total order corresponds to Ferrers type, and identity corresponds to difunctional, a generalization of equivalence relation on a set.
Relations may be ranked by the Schein rank which counts the number of concepts necessary to cover a relation. Structural analysis of relations with concepts provides an approach for data mining.
Particular relations
Proposition: If is a surjective relation and is its transpose, then where is the identity relation.
Proposition: If is a serial relation, then where is the identity relation.
Difunctional
The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set of indicators. The partitioning relation is a composition of relations using relations Jacques Riguet named these relations difunctional since the composition involves functional relations, commonly called partial functions.
In 1950 Riguet showed that such relations satisfy the inclusion:
In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal. More formally, a relation on is difunctional if and only if it can be written as the union of Cartesian products , where the are a partition of a subset of and the likewise a partition of a subset of .
Using the notation , a difunctional relation can also be characterized as a relation such that wherever and have a non-empty intersection, then these two sets coincide; formally implies
In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management." Furthermore, difunctional relations are fundamental in the study of bisimulations.
In the context of homogeneous relations, a partial equivalence relation is difunctional.
Ferrers type
A strict order on a set is a homogeneous relation arising in order theory.
In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general.
The corresponding logical matrix of a general binary relation has rows which finish with a sequence of ones. Thus the dots of a Ferrer's diagram are changed to ones and aligned on the right in the matrix.
An algebraic statement required for a Ferrers type relation R is
If any one of the relations is of Ferrers type, then all of them are.
Contact
Suppose is the power set of , the set of all subsets of . Then a relation is a contact relation if it satisfies three properties:
The set membership relation, "is an element of", satisfies these properties so is a contact relation. The notion of a general contact relation was introduced by Georg Aumann in 1970.
In terms of the calculus of relations, sufficient conditions for a contact relation include
where is the converse of set membership ().
Preorder R\R
Every relation generates a preorder which is the left residual. In terms of converse and complements, Forming the diagonal of , the corresponding row of and column of will be of opposite logical values, so the diagonal is all zeros. Then
, so that is a reflexive relation.
To show transitivity, one requires that Recall that is the largest relation such that Then
(repeat)
(Schröder's rule)
(complementation)
(definition)
The inclusion relation Ω on the power set of can be obtained in this way from the membership relation on subsets of :
Fringe of a relation
Given a relation , its fringe is the sub-relation defined as
When is a partial identity relation, difunctional, or a block diagonal relation, then . Otherwise the operator selects a boundary sub-relation described in terms of its logical matrix: is the side diagonal if is an upper right triangular linear order or strict order. is the block fringe if is irreflexive () or upper right block triangular. is a sequence of boundary rectangles when is of Ferrers type.
On the other hand, when is a dense, linear, strict order.
Mathematical heaps
Given two sets and , the set of binary relations between them can be equipped with a ternary operation where denotes the converse relation of . In 1953 Viktor Wagner used properties of this ternary operation to define semiheaps, heaps, and generalized heaps. The contrast of heterogeneous and homogeneous relations is highlighted by these definitions:
See also
Abstract rewriting system
Additive relation, a many-valued homomorphism between modules
Allegory (category theory)
Category of relations, a category having sets as objects and binary relations as morphisms
Confluence (term rewriting), discusses several unusual but fundamental properties of binary relations
Correspondence (algebraic geometry), a binary relation defined by algebraic equations
Hasse diagram, a graphic means to display an order relation
Incidence structure, a heterogeneous relation between set of points and lines
Logic of relatives, a theory of relations by Charles Sanders Peirce
Order theory, investigates properties of order relations
Notes
References
Bibliography
Ernst Schröder (1895) Algebra der Logik, Band III, via Internet Archive
External links
Binary relations | Binary relation | [
"Mathematics"
] | 5,253 | [
"Mathematical relations",
"Binary relations"
] |
3,942 | https://en.wikipedia.org/wiki/Bijection | A bijection, bijective function, or one-to-one correspondence between two mathematical sets is a function such that each element of the second set (the codomain) is the image of exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set.
A function is bijective if and only if it is invertible; that is, a function f: X → Y is bijective if and only if there is a function g: Y → X, the inverse of f, such that each of the two ways for composing the two functions produces an identity function: g(f(x)) = x for each x in X and f(g(y)) = y for each y in Y.
For example, the multiplication by two defines a bijection from the integers to the even numbers, which has the division by two as its inverse function.
A function is bijective if and only if it is both injective (or one-to-one)—meaning that each element in the codomain is mapped from at most one element of the domain—and surjective (or onto)—meaning that each element of the codomain is mapped from at least one element of the domain. The term one-to-one correspondence must not be confused with one-to-one function, which means injective but not necessarily surjective.
The elementary operation of counting establishes a bijection from some finite set to the first natural numbers (1, 2, 3, ...), up to the number of elements in the counted set. It results that two finite sets have the same number of elements if and only if there exists a bijection between them. More generally, two sets are said to have the same cardinal number if there exists a bijection between them.
A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group.
Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature.
Definition
For a binary relation pairing elements of set X with elements of set Y to be a bijection, four properties must hold:
each element of X must be paired with at least one element of Y,
no element of X may be paired with more than one element of Y,
each element of Y must be paired with at least one element of X, and
no element of Y may be paired with more than one element of X.
Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions). With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto".
Examples
Batting line-up of a baseball or cricket team
Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The set X will be the players on the team (of size nine in the case of baseball) and the set Y will be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. Property (3) says that for each position in the order, there is some player batting in that position and property (4) states that two or more players are never batting in the same position in the list.
Seats and students of a classroom
In a classroom there are a certain number of seats. A group of students enter the room and the instructor asks them to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that:
Every student was in a seat (there was no one standing),
No student was in more than one seat,
Every seat had someone sitting there (there were no empty seats), and
No seat had more than one student in it.
The instructor was able to conclude that there were just as many seats as there were students, without having to count either set.
More mathematical examples
For any set X, the identity function 1X: X → X, 1X(x) = x is bijective.
The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero) is a bijection. Each real number y is obtained from (or paired with) the real number x = (y − b)/a.
The function f: R → (−π/2, π/2), given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x (that is, y = arctan(x)). If the codomain (−π/2, π/2) was made larger to include an integer multiple of π/2, then this function would no longer be onto (surjective), since there is no real number which could be paired with the multiple of π/2 by this arctan function.
The exponential function, g: R → R, g(x) = ex, is not bijective: for instance, there is no x in R such that g(x) = −1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers , then g would be bijective; its inverse (see below) is the natural logarithm function ln.
The function h: R → R+, h(x) = x2 is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not one-to-one (injective). However, if the domain is restricted to , then h would be bijective; its inverse is the positive square root function.
By Schröder–Bernstein theorem, given any two sets X and Y, and two injective functions f: X → Y and g: Y → X, there exists a bijective function h: X → Y.
Inverses
A bijection f with domain X (indicated by f: X → Y in functional notation) also defines a converse relation starting in Y and going to X (by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not, in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection.
Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition
for every y in Y there is a unique x in X with y = f(x).
Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position.
Composition
The composition of two bijections f: X → Y and g: Y → Z is a bijection, whose inverse is given by (g ∘ f)−1 = f−1 ∘ g−1.
Conversely, if the composition of two functions is bijective, it only follows that f is injective and g is surjective.
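A tiny sketch with finite sets, representing bijections as Python dictionaries (the values are arbitrary):

```python
f = {1: "a", 2: "b", 3: "c"}       # bijection from X to Y
g = {"a": 10, "b": 20, "c": 30}    # bijection from Y to Z

g_after_f = {x: g[f[x]] for x in f}             # composition g o f : X -> Z
inverse = {z: x for x, z in g_after_f.items()}  # its inverse, i.e. f^-1 o g^-1
print(g_after_f)   # {1: 10, 2: 20, 3: 30}
print(inverse)     # {10: 1, 20: 2, 30: 3}
```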
Cardinality
If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets.
Properties
A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once.
If X is a set, then the bijective functions from X to itself, together with the operation of functional composition (∘), form a group, the symmetric group of X, which is denoted variously by S(X), SX, or X! (X factorial).
Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the codomain with cardinality |B|, one has the following equalities:
|f(A)| = |A| and |f−1(B)| = |B|.
If X and Y are finite sets with the same cardinality, and f: X → Y, then the following are equivalent:
f is a bijection.
f is a surjection.
f is an injection.
For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of total orderings of that set—namely, n!.
Category theory
Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the category Grp of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms which are bijective homomorphisms.
Generalization to partial functions
The notion of one-to-one correspondence generalizes to partial functions, where they are called partial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup.
Another way of defining the same notion is to say that a partial bijection from A to B is any relation
R (which turns out to be a partial function) with the property that R is the graph of a bijection f:A′→B′, where A′ is a subset of A and B′ is a subset of B.
When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation. An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.
Gallery
See also
Ax–Grothendieck theorem
Bijection, injection and surjection
Bijective numeration
Bijective proof
Category theory
Multivalued function
Notes
References
This topic is a basic concept in set theory and can be found in any text which includes an introduction to set theory. Almost all texts that deal with an introduction to writing proofs will include a section on set theory, so the topic may be found in any of these:
External links
Earliest Uses of Some of the Words of Mathematics: entry on Injection, Surjection and Bijection has the history of Injection and related terms.
Functions and mappings
Basic concepts in set theory
Mathematical relations
Types of functions | Bijection | [
"Mathematics"
] | 2,739 | [
"Mathematical analysis",
"Functions and mappings",
"Predicate logic",
"Mathematical objects",
"Basic concepts in set theory",
"Mathematical relations",
"Types of functions"
] |
3,943 | https://en.wikipedia.org/wiki/Binary%20function | In mathematics, a binary function (also called bivariate function, or function of two variables) is a function that takes two inputs.
Precisely stated, a function f is binary if there exist sets X, Y, Z such that f : X × Y → Z, where X × Y is the Cartesian product of X and Y.
Alternative definitions
Set-theoretically, a binary function f can be represented as a subset of the Cartesian product X × Y × Z, where (x, y, z) belongs to the subset if and only if f(x, y) = z.
Conversely, a subset R defines a binary function if and only if for any x in X and y in Y, there exists a unique z in Z such that (x, y, z) belongs to R.
f(x, y) is then defined to be this z.
Alternatively, a binary function may be interpreted as simply a function from X × Y to Z.
Even when thought of this way, however, one generally writes f(x, y) instead of f((x, y)).
(That is, the same pair of parentheses is used to indicate both function application and the formation of an ordered pair.)
Examples
Division of whole numbers can be thought of as such a function. If Z is the set of integers, N+ is the set of natural numbers (except for zero), and Q is the set of rational numbers, then division is a binary function f : Z × N+ → Q.
In a vector space V over a field F, scalar multiplication is a binary function. A scalar a ∈ F is combined with a vector v ∈ V to produce a new vector av ∈ V.
Another example is that of inner products, or more generally functions of the form (x, y) ↦ x^T A y, where x, y are real-valued vectors of appropriate size and A is a matrix. If A is a positive definite matrix, this yields an inner product.
Functions of two real variables
Functions whose domain is a subset of R^2 are often also called functions of two variables even if their domain does not form a rectangle and thus is not the Cartesian product of two sets.
Restrictions to ordinary functions
In turn, one can also derive ordinary functions of one variable from a binary function.
Given any element x of X, there is a function f_x, or f(x, ·), from Y to Z, given by f_x(y) := f(x, y).
Similarly, given any element y of Y, there is a function f_y, or f(·, y), from X to Z, given by f_y(x) := f(x, y). In computer science, this identification between a function from X × Y to Z and a function from X to Z^Y, where Z^Y is the set of all functions from Y to Z, is called currying.
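A rough sketch of this identification in Python (illustrative names only): currying turns a binary function into a function that returns a one-variable function.

```python
def curry(f):
    """Identify a binary function f : X x Y -> Z with a function X -> (Y -> Z)."""
    return lambda x: (lambda y: f(x, y))

def power(x, y):
    return x ** y             # an ordinary binary function

power_of_2 = curry(power)(2)  # the one-variable restriction y -> power(2, y)
print(power_of_2(10))         # 1024
print(curry(power)(3)(4))     # 81, the same value as power(3, 4)
```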
Generalisations
The various concepts relating to functions can also be generalised to binary functions.
For example, the division example above is surjective (or onto) because every rational number may be expressed as a quotient of an integer and a natural number.
This example is injective in each input separately, because the functions f_x and f_y are always injective.
However, it's not injective in both variables simultaneously, because (for example) f (2,4) = f (1,2).
One can also consider partial binary functions, which may be defined only for certain values of the inputs.
For example, the division example above may also be interpreted as a partial binary function from Z and N to Q, where N is the set of all natural numbers, including zero.
But this function is undefined when the second input is zero.
A binary operation is a binary function where the sets X, Y, and Z are all equal; binary operations are often used to define algebraic structures.
In linear algebra, a bilinear transformation is a binary function where the sets X, Y, and Z are all vector spaces and the derived functions f_x and f_y are all linear transformations.
A bilinear transformation, like any binary function, can be interpreted as a function from X × Y to Z, but this function in general won't be linear.
However, the bilinear transformation can also be interpreted as a single linear transformation from the tensor product X ⊗ Y to Z.
Generalisations to ternary and other functions
The concept of binary function generalises to ternary (or 3-ary) function, quaternary (or 4-ary) function, or more generally to n-ary function for any natural number n.
A 0-ary function to Z is simply given by an element of Z.
One can also define an A-ary function where A is any set; there is one input for each element of A.
Category theory
In category theory, n-ary functions generalise to n-ary morphisms in a multicategory.
The interpretation of an n-ary morphism as an ordinary morphism whose domain is some sort of product of the domains of the original n-ary morphism will work in a monoidal category.
The construction of the derived morphisms of one variable will work in a closed monoidal category.
The category of sets is closed monoidal, but so is the category of vector spaces, giving the notion of bilinear transformation above.
See also
Arity
Unary operation
Unary function
Binary operation
Iterated binary operation
Ternary operation
References
Types of functions
2 (number) | Binary function | [
"Mathematics"
] | 984 | [
"Mathematical objects",
"Functions and mappings",
"Types of functions",
"Mathematical relations"
] |
3,948 | https://en.wikipedia.org/wiki/Binary%20operation | In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two.
More specifically, a binary operation on a set is a binary function whose two domains and the codomain are the same set. Examples include the familiar arithmetic operations of addition, subtraction, and multiplication. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups.
A binary function that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar.
Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.
Terminology
More precisely, a binary operation on a set S is a mapping of the elements of the Cartesian product S × S to S:
f : S × S → S.
The closure property of a binary operation expresses the existence of a result for the operation given any pair of operands.
If f is not a function but a partial function, then f is called a partial binary operation. For instance, division of real numbers is a partial binary operation, because one cannot divide by zero: a/0 is undefined for every real number a. In both model theory and classical universal algebra, binary operations are required to be defined on all elements of S × S. However, partial algebras generalize universal algebras to allow partial operations.
Sometimes, especially in computer science, the term binary operation is used for any binary function.
Properties and examples
Typical examples of binary operations are the addition (+) and multiplication (×) of numbers and matrices as well as composition of functions on a single set.
For instance,
On the set of real numbers R, f(a, b) = a + b is a binary operation since the sum of two real numbers is a real number.
On the set of natural numbers N, f(a, b) = a + b is a binary operation since the sum of two natural numbers is a natural number. This is a different binary operation than the previous one since the sets are different.
On the set of square matrices (of a fixed size) with real entries, f(A, B) = A + B is a binary operation since the sum of two such matrices is a matrix of the same size.
On the set of square matrices (of a fixed size) with real entries, f(A, B) = AB is a binary operation since the product of two such matrices is a matrix of the same size.
For a given set C, let S be the set of all functions h : C → C. Define f : S × S → S by f(g, h) := g ∘ h, the composition of the two functions g and h in S. Then f is a binary operation since the composition of two functions is again a function on the set C (that is, a member of S); a short sketch of this appears below.
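The composition example in the last item can be sketched directly in Python (the helper name compose is illustrative):

```python
def compose(g, h):
    """Binary operation on the set S of functions C -> C: (g, h) maps to g o h."""
    return lambda x: g(h(x))

increment = lambda x: x + 1
double = lambda x: 2 * x

print(compose(increment, double)(3))  # increment(double(3)) = 7
print(compose(double, increment)(3))  # double(increment(3)) = 8, so composition is not commutative
```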
Many binary operations of interest in both algebra and formal logic are commutative, satisfying f(a, b) = f(b, a) for all elements a and b in S, or associative, satisfying f(f(a, b), c) = f(a, f(b, c)) for all a, b, and c in S. Many also have identity elements and inverse elements.
The first three examples above are commutative and all of the above examples are associative.
On the set of real numbers R, subtraction, that is, f(a, b) = a − b, is a binary operation which is not commutative since, in general, a − b ≠ b − a. It is also not associative, since, in general, a − (b − c) ≠ (a − b) − c; for instance, 1 − (2 − 3) = 2 but (1 − 2) − 3 = −4.
On the set of natural numbers N, the binary operation exponentiation, f(a, b) = a^b, is not commutative since, in general, a^b ≠ b^a (cf. Equation xy = yx), and is also not associative since f(f(a, b), c) ≠ f(a, f(b, c)). For instance, with a = 2, b = 3, and c = 2, f(2, 3) = 8 and f(8, 2) = 64, but f(3, 2) = 9 and f(2, 9) = 512. By changing the set to the set of integers Z, this binary operation becomes a partial binary operation since it is now undefined when a = 0 and b is any negative integer. For either set, this operation has a right identity (which is 1) since f(a, 1) = a for all a in the set, which is not an identity (two sided identity) since f(1, b) ≠ b in general.
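These claims about exponentiation are easy to spot-check numerically; the snippet below (illustrative, using the same a = 2, b = 3, c = 2) confirms them.

```python
f = lambda a, b: a ** b  # exponentiation as a binary operation on the natural numbers

print(f(2, 3), f(3, 2))              # 8 9    -- not commutative
print(f(f(2, 3), 2), f(2, f(3, 2)))  # 64 512 -- not associative
print(f(5, 1), f(1, 5))              # 5 1    -- 1 is a right identity but not a left identity
```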
Division (), a partial binary operation on the set of real or rational numbers, is not commutative or associative. Tetration (), as a binary operation on the natural numbers, is not commutative or associative and has no identity element.
Notation
Binary operations are often written using infix notation such as , , or (by juxtaposition with no symbol) rather than by functional notation of the form . Powers are usually also written without operator, but with the second argument as superscript.
Binary operations are sometimes written using prefix or (more frequently) postfix notation, both of which dispense with parentheses. They are also called, respectively, Polish notation and reverse Polish notation .
Binary operations as ternary relations
A binary operation f on a set S may be viewed as a ternary relation on S, that is, the set of triples (a, b, f(a, b)) in S × S × S for all a and b in S.
Other binary operations
For example, scalar multiplication in linear algebra is a binary function that is not a binary operation in the strict sense. Here K is a field and V is a vector space over that field.
Also the dot product of two vectors maps V × V to K, where K is a field and V is a vector space over K. It depends on authors whether it is considered as a binary operation.
See also
:Category:Properties of binary operations
Notes
References
External links | Binary operation | [
"Mathematics"
] | 1,040 | [
"Binary operations",
"Mathematical relations",
"Binary relations"
] |
3,954 | https://en.wikipedia.org/wiki/Biochemistry | Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis that allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena.
Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of disease: the discipline of biotechnology.
History
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process alcoholic fermentation in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists.
The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister.
It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level.
Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression.
Starting materials: the chemical elements of life
Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but it is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts).
Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more.
Biomolecules
The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small macromolecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
Carbohydrates
Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications.
The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose.
In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring called heptoses are rare.
Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance.
When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant's cell walls and glycogen is used as a form of energy storage in animals.
Sugar can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2).
Lipids
Lipids comprise a diverse range of molecules and to some extent are a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid.
Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain).
Most lipids have some polar character in addition to being largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol).
In the case of phospholipids, the polar groups are considerably larger and more polar, as described below.
Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome).
Proteins
Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues.
Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain.
The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a factor of 10^11 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole.
The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The alpha chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit.
Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce it in sufficient amounts for young, growing animals, and so these are often considered essential amino acids.
If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein.
A similar process is used to break down proteins. It is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle.
In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function.
Nucleic acids
Nucleic acids, so-called because of their prevalence in cellular nuclei, are the generic name of the family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group.
The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine and uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. Pairings of adenine with thymine or uracil involve two hydrogen bonds, while the pairing between cytosine and guanine involves three hydrogen bonds.
Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA.
Metabolism
Carbohydrates as energy source
Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides.
Glycolysis (anaerobic)
Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents of converting NAD+ (nicotinamide adenine dinucleotide: oxidized form) to NADH (nicotinamide adenine dinucleotide: reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway.
Aerobic
In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as proton gradient and converted to ATP via ATP synthase. This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), totaling to 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen.
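The ATP arithmetic in this paragraph can be summarized as simple bookkeeping; the per-NADH and per-quinol yields below are inferred from the figures quoted above ("24 from the 8 NADH + 4 from the 2 quinols") and are an illustrative simplification, not universal constants.

```python
# ATP bookkeeping for the complete aerobic oxidation of one glucose molecule,
# using the yields quoted in the paragraph above.
atp_glycolysis    = 2   # net ATP from glycolysis
atp_citrate_cycle = 2   # ATP from two turns of the citric acid cycle
nadh              = 8   # NADH fed into the respiratory chain
quinols           = 2   # reduced quinones formed via FADH2
atp_per_nadh      = 3   # implied by "24 from the 8 NADH"
atp_per_quinol    = 2   # implied by "4 from the 2 quinols"

atp_oxidative = nadh * atp_per_nadh + quinols * atp_per_quinol  # 28
total_atp = atp_glycolysis + atp_citrate_cycle + atp_oxidative  # 32
print(atp_oxidative, total_atp)
```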
Gluconeogenesis
In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate.
Gluconeogenesis is the generation of glucose from non-carbohydrate sources, such as fat and proteins. This only happens when glycogen supplies in the liver are worn out. The pathway is essentially a reversal of glycolysis from pyruvate to glucose and can draw on many sources such as amino acids, glycerol and Krebs cycle intermediates. Large-scale protein and fat catabolism usually occurs during starvation or certain endocrine disorders. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathways of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle.
Relationship to other "molecular-scale" biological sciences
Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules, molecular biology studies their biological activity, genetics studies their heredity, which happens to be carried by their genome. This is shown in the following schematic that depicts one possible view of the relationships between the fields:
Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level.
Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene). The study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies.
Molecular biology is the study of molecular underpinnings of the biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA.
Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
See also
Lists
Important publications in biochemistry (chemistry)
List of biochemistry topics
List of biochemists
List of biomolecules
See also
Astrobiology
Biochemistry (journal)
Biological Chemistry (journal)
Biophysics
Chemical ecology
Computational biomodeling
Dedicated bio-based chemical
EC number
Hypothetical types of biochemistry
International Union of Biochemistry and Molecular Biology
Metabolome
Metabolomics
Molecular biology
Molecular medicine
Plant biochemistry
Proteolysis
Small molecule
Structural biology
TCA cycle
Notes
References
Cited literature
Further reading
Fruton, Joseph S. Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. Yale University Press: New Haven, 1999.
Keith Roberts, Martin Raff, Bruce Alberts, Peter Walter, Julian Lewis and Alexander Johnson, Molecular Biology of the Cell
4th Edition, Routledge, March, 2002, hardcover, 1616 pp.
3rd Edition, Garland, 1994,
2nd Edition, Garland, 1989,
Kohler, Robert. From Medical Chemistry to Biochemistry: The Making of a Biomedical Discipline. Cambridge University Press, 1982.
External links
The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Biochemistry, 5th ed. Full text of Berg, Tymoczko, and Stryer, courtesy of NCBI.
SystemsX.ch – The Swiss Initiative in Systems Biology
Full text of Biochemistry by Kevin and Indira, an introductory biochemistry textbook.
Biotechnology
Molecular biology | Biochemistry | [
"Chemistry",
"Biology"
] | 6,497 | [
"Biochemistry",
"Biotechnology",
"nan",
"Molecular biology"
] |
3,959 | https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29 | In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.
History
The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models.
Definition
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:
a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity)
a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity)
a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption)
a ∨ 0 = a and a ∧ 1 = a (identity)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity)
a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements)
Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties).
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that
a = b ∧ a if and only if a ∨ b = b.
The relation ≤ defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.
Examples
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the rules:
0 ∧ 0 = 0 ∧ 1 = 1 ∧ 0 = 0 and 1 ∧ 1 = 1,
0 ∨ 0 = 0 and 0 ∨ 1 = 1 ∨ 0 = 1 ∨ 1 = 1,
¬0 = 1 and ¬1 = 0.
It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent.
The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression.
The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables). This can for example be used to show that the following laws (consensus theorems) are generally valid in all Boolean algebras:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) = (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) = (a ∧ b) ∨ (¬a ∧ c)
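Such a brute-force check is easy to carry out in code. The sketch below (illustrative, not from the article) verifies the first consensus law over the two-element Boolean algebra {False, True}:

```python
from itertools import product

def consensus_holds(a, b, c):
    """First consensus law, evaluated over the two-element Boolean algebra."""
    lhs = (a or b) and (not a or c) and (b or c)
    rhs = (a or b) and (not a or c)
    return lhs == rhs

print(all(consensus_holds(a, b, c)
          for a, b, c in product([False, True], repeat=3)))  # True
```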
The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself.
After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms:
The set of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on the finite–cofinite algebra. However, the Fréchet filter is not an ultrafilter on the power set of S.
Starting with the propositional calculus with a set of sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on those sentence symbols as generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra.
Given any linearly ordered set L with a least element, the interval algebra is the smallest Boolean algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra.
For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and the top elements of this Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a. The meet and the join of a and b are given by the greatest common divisor (gcd) and the least common multiple (lcm) of a and b, respectively. The ring addition a + b is given by lcm(a, b)/gcd(a, b). The picture shows an example for n = 30. As a counter-example, considering the non-square-free n = 60, the greatest common divisor of 30 and its complement 2 would be 2, while it should be the bottom element 1.
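A short sketch of this example in Python (names are illustrative), checking the complement laws for the square-free case n = 30:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

n = 30                                  # square-free: 30 = 2 * 3 * 5
meet = gcd                              # meet = greatest common divisor
join = lambda a, b: a * b // gcd(a, b)  # join = least common multiple
complement = lambda a: n // a           # complement of a divisor a

# Check the complement laws: join(a, ¬a) = n (top) and meet(a, ¬a) = 1 (bottom).
print(all(join(a, complement(a)) == n and meet(a, complement(a)) == 1
          for a in divisors(n)))        # True
```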
Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X that are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection).
If R is an arbitrary ring then its set of central idempotents, which is the set
A = {e in R : e·e = e and ex = xe for all x in R},
becomes a Boolean algebra when its operations are defined by e ∨ f := e + f − ef and e ∧ f := ef.
Homomorphisms and isomorphisms
A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:
f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.
It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices.
An isomorphism between two Boolean algebras A and B is a homomorphism f : A → B with an inverse homomorphism, that is, a homomorphism g : B → A such that the composition g ∘ f is the identity function on A, and the composition f ∘ g is the identity function on B. A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective.
Boolean rings
Every Boolean algebra gives rise to a ring by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.
Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent; in fact the categories are isomorphic.
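For the power-set Boolean algebra this correspondence is easy to see directly; the Python sketch below (sets chosen arbitrarily) uses symmetric difference as ring addition and intersection as ring multiplication, and recovers the join from them.

```python
# Power-set Boolean algebra on {1, 2, 3}: join is union, meet is intersection.
a = {1, 2}
b = {2, 3}

ring_sum = a ^ b      # symmetric difference plays the role of ring addition: {1, 3}
ring_product = a & b  # intersection plays the role of ring multiplication: {2}

# Recover the join from the ring operations: a ∨ b = a + b + a·b.
print(((a ^ b) ^ (a & b)) == (a | b))                  # True
# Idempotent multiplication, the defining property of a Boolean ring:
print((ring_product & ring_product) == ring_product)   # True
```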
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring.
More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions.
Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving.
Ideals and filters
An ideal of the Boolean algebra A is a nonempty subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, for every a in A we have that a ∧ ¬a = 0 is in I, and then if I is prime we have a in I or ¬a in I for every a in A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a is not in I and ¬a is not in I, then I ∪ {a} or I ∪ {¬a} is contained in another proper ideal J. Hence, such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with ring theoretic ones of prime ideal and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a nonempty subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement every filter in a Boolean algebra can be extended to an ultrafilter is called the ultrafilter lemma and cannot be proven in Zermelo–Fraenkel set theory (ZF), if ZF is consistent. Within ZF, the ultrafilter lemma is strictly weaker than the axiom of choice.
The ultrafilter lemma has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
Representations
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two.
Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space.
Axiomatics
The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematician Alfred North Whitehead in 1898.
It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0.
In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨, ¬, even proving the associativity laws.
He also proved that these axioms are independent of each other.
In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws:
1. Commutativity: x + y = y + x.
2. Associativity: (x + y) + z = x + (y + z).
3. Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x.
Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit:
4. Robbins equation: n(n(x + y) + n(x + n(y))) = x,
do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer program EQP he designed. For a simplification of McCune's proof, see Dahn (1998).
Further work has been done for reducing the number of axioms; see Minimal axioms for Boolean algebra.
Generalizations
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice, if it has a smallest element 0 and for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a ∖ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B, ∧, ∨, ∖, 0) is a generalized Boolean algebra, while (B, ∨, 0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices.
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed linear subspaces for separable Hilbert spaces.
See also
Notes
References
Works cited
General references
. See Section 2.5.
. See Chapter 2.
. In 3 volumes. (Vol.1:, Vol.2:, Vol.3:)
. Reprinted by Dover Publications, 1979.
External links
Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra", by J. Donald Monk.
McCune W., 1997. Robbins Algebras Are Boolean JAR 19(3), 263–276
"Boolean Algebra" by Eric W. Weisstein, Wolfram Demonstrations Project, 2007.
Burris, Stanley N.; Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. .
Algebraic structures
Ockham algebras | Boolean algebra (structure) | [
"Mathematics"
] | 3,300 | [
"Boolean algebra",
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Fields of abstract algebra",
"Algebraic structures",
"Ockham algebras"
] |
3,967 | https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29 | Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in unit of hertz (symbol Hz).
It may refer more specifically to two subcategories: Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth is equal to the upper cutoff frequency of a low-pass filter or baseband signal, which includes a zero frequency.
Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel.
A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller.
Overview
Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing.
For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition will refer to the frequencies beyond which performance is degraded. In the case of frequency response, degradation could, for example, mean more than 3 dB below the maximum value or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes.
In the context of, for example, the sampling theorem and Nyquist sampling rate, bandwidth typically refers to baseband bandwidth. In the context of Nyquist symbol rate or Shannon-Hartley channel capacity for communication systems it refers to passband bandwidth.
The Rayleigh bandwidth of a simple radar pulse is defined as the inverse of its duration. For example, a one-microsecond pulse has a Rayleigh bandwidth of one megahertz.
The essential bandwidth is defined as the portion of a signal spectrum in the frequency domain which contains most of the energy of the signal.
x dB bandwidth
In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density (in W/Hz or V2/Hz) is nonzero or above a small threshold value. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is the point where the spectral density is half its maximum value (or the spectral amplitude is 70.7% of its maximum). This figure, with a lower threshold value, can be used in calculations of the lowest sampling rate that will satisfy the sampling theorem.
The bandwidth is also used to denote system bandwidth, for example in filter or communication channel systems. To say that a system has a certain bandwidth means that the system can process signals with that range of frequencies, or that the system reduces the bandwidth of a white noise input to that bandwidth.
The 3 dB bandwidth of an electronic filter or communication channel is the part of the system's frequency response that lies within 3 dB of the response at its peak, which, in the passband filter case, is typically at or near its center frequency, and in the low-pass filter is at or near its cutoff frequency. If the maximum gain is 0 dB, the 3 dB bandwidth is the frequency range where attenuation is less than 3 dB. 3 dB attenuation is also where power is half its maximum. This same half-power gain convention is also used in spectral width, and more generally for the extent of functions as full width at half maximum (FWHM).
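A minimal Python sketch of measuring a 3 dB bandwidth numerically (the Butterworth band-pass design and sampling rate below are assumed example values, not anything prescribed by the article): the frequency response is evaluated on a grid, and the band is read off where the magnitude stays within a factor 1/√2 (that is, within 3 dB) of its peak.

```python
import numpy as np
from scipy import signal

fs = 48_000                                    # sampling rate in Hz (example value)
# Example band-pass filter with a passband of roughly 1-2 kHz.
b, a = signal.butter(4, [1_000, 2_000], btype="bandpass", fs=fs)

f, h = signal.freqz(b, a, worN=8192, fs=fs)    # frequency response H(f)
mag = np.abs(h)

# Frequencies where the response is within 3 dB (factor 1/sqrt(2)) of the peak.
in_band = f[mag >= mag.max() / np.sqrt(2)]
f_low, f_high = in_band.min(), in_band.max()

print(f"3 dB band: {f_low:.1f} Hz to {f_high:.1f} Hz (bandwidth {f_high - f_low:.1f} Hz)")
```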
In electronic filter design, a filter specification may require that within the filter passband, the gain is nominally 0 dB with a small variation, for example within the ±1 dB interval. In the stopband(s), the required attenuation in decibels is above a certain level, for example >100 dB. In a transition band the gain is not specified. In this case, the filter bandwidth corresponds to the passband width, which in this example is the 1 dB-bandwidth. If the filter shows amplitude ripple within the passband, the x dB point refers to the point where the gain is x dB below the nominal passband gain rather than x dB below the maximum gain.
In signal processing and control theory the bandwidth is the frequency at which the closed-loop system gain drops 3 dB below peak.
In communication systems, in calculations of the Shannon–Hartley channel capacity, bandwidth refers to the 3 dB-bandwidth. In calculations of the maximum symbol rate, the Nyquist sampling rate, and maximum bit rate according to Hartley's law, the bandwidth refers to the frequency range within which the gain is non-zero.
The fact that in equivalent baseband models of communication systems, the signal spectrum consists of both negative and positive frequencies, can lead to confusion about bandwidth, since the spectra are sometimes referred to only by the positive half, and one will occasionally see expressions such as B = 2W, where B is the total bandwidth (i.e. the maximum passband bandwidth of the carrier-modulated RF signal and the minimum passband bandwidth of the physical passband channel), and W is the positive bandwidth (the baseband bandwidth of the equivalent channel model). For instance, the baseband model of the signal would require a low-pass filter with cutoff frequency of at least W to stay intact, and the physical passband channel would require a passband filter of at least B to stay intact.
Relative bandwidth
The absolute bandwidth is not always the most appropriate or useful measure of bandwidth. For instance, in the field of antennas, constructing an antenna to meet a specified absolute bandwidth is easier at a higher frequency than at a lower frequency. For this reason, bandwidth is often quoted relative to the frequency of operation, which gives a better indication of the structure and sophistication needed for the circuit or device under consideration.
There are two different measures of relative bandwidth in common use: fractional bandwidth (B_F) and ratio bandwidth (B_R). In the following, the absolute bandwidth is defined as Δf = f_H − f_L, where f_H and f_L are the upper and lower frequency limits respectively of the band in question.
Fractional bandwidth
Fractional bandwidth is defined as the absolute bandwidth divided by the center frequency (f_C): B_F = Δf / f_C.
The center frequency is usually defined as the arithmetic mean of the upper and lower frequencies, so that f_C = (f_H + f_L) / 2 and B_F = 2(f_H − f_L) / (f_H + f_L).
However, the center frequency is sometimes defined as the geometric mean of the upper and lower frequencies, f_C = √(f_H f_L), which gives B_F = (f_H − f_L) / √(f_H f_L).
While the geometric mean is more rarely used than the arithmetic mean (and the latter can be assumed if not stated explicitly) the former is considered more mathematically rigorous. It more properly reflects the logarithmic relationship of fractional bandwidth with increasing frequency. For narrowband applications, there is only marginal difference between the two definitions. The geometric mean version is inconsequentially larger. For wideband applications they diverge substantially with the arithmetic mean version approaching 2 in the limit and the geometric mean version approaching infinity.
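The following short Python sketch (with assumed example band edges) illustrates this behaviour: for a narrow band the two center-frequency definitions give nearly the same fractional bandwidth, while for a very wide band the arithmetic-mean version saturates near 2 and the geometric-mean version keeps growing.

```python
import math

def fractional_bw(f_low, f_high, center="arithmetic"):
    """Fractional bandwidth B_F using either center-frequency definition."""
    if center == "arithmetic":
        f_c = (f_high + f_low) / 2
    else:                                    # geometric mean
        f_c = math.sqrt(f_high * f_low)
    return (f_high - f_low) / f_c

# Narrowband case (99-101 MHz): the two definitions barely differ.
print(fractional_bw(99e6, 101e6), fractional_bw(99e6, 101e6, "geometric"))

# Wideband case (1 MHz - 1 GHz): arithmetic tends toward 2, geometric keeps growing.
print(fractional_bw(1e6, 1e9), fractional_bw(1e6, 1e9, "geometric"))
```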
Fractional bandwidth is sometimes expressed as a percentage of the center frequency (percent bandwidth), that is, 100 × Δf / f_C.
Ratio bandwidth
Ratio bandwidth is defined as the ratio of the upper and lower limits of the band: B_R = f_H / f_L.
Ratio bandwidth may be notated as B_R:1. The relationship between ratio bandwidth and fractional bandwidth is given by B_F = 2(B_R − 1) / (B_R + 1) and B_R = (2 + B_F) / (2 − B_F).
Percent bandwidth is a less meaningful measure in wideband applications. A percent bandwidth of 100% corresponds to a ratio bandwidth of 3:1. All higher ratios up to infinity are compressed into the range 100–200%.
Ratio bandwidth is often expressed in octaves (i.e., as a frequency level) for wideband applications. An octave is a frequency ratio of 2:1, leading to this expression for the number of octaves: log2(B_R).
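A small Python sketch of these relations (hypothetical values only): converting a fractional bandwidth to a ratio bandwidth with B_R = (2 + B_F) / (2 − B_F), and expressing a band in octaves with log2(f_H / f_L).

```python
import math

def ratio_from_fractional(b_f):
    """Ratio bandwidth B_R from fractional bandwidth B_F."""
    return (2 + b_f) / (2 - b_f)

def octaves(f_low, f_high):
    """Number of octaves spanned by a band from f_low to f_high."""
    return math.log2(f_high / f_low)

# A 100% fractional bandwidth corresponds to a 3:1 ratio bandwidth ...
print(ratio_from_fractional(1.0))     # -> 3.0
# ... which spans log2(3), roughly 1.58 octaves.
print(octaves(1.0, 3.0))              # -> 1.584...
```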
Noise equivalent bandwidth
The noise equivalent bandwidth (or equivalent noise bandwidth, ENBW) of a system with frequency response H(f) is the bandwidth of an ideal filter with rectangular frequency response, centered on the system's central frequency, that produces the same average output power when both systems are excited with a white noise source.
The value of the noise equivalent bandwidth depends on the ideal filter reference gain used. Typically, this gain equals |H(f)| at its center frequency, but it can also equal the peak value of |H(f)|.
The noise equivalent bandwidth B_N can be calculated in the frequency domain using H(f), or in the time domain by exploiting Parseval's theorem with the system impulse response h(t). If H(f) is a lowpass system with zero central frequency and the filter reference gain is referred to this frequency, then B_N = (1 / |H(0)|^2) × ∫ |H(f)|^2 df, with the integral taken over f from 0 to ∞.
The same expression can be applied to bandpass systems by substituting the equivalent baseband frequency response for H(f).
The noise equivalent bandwidth is widely used to simplify the analysis of telecommunication systems in the presence of noise.
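As a numerical illustration of the frequency-domain definition above (the single-pole low-pass response and cutoff value are assumed examples, not from the article), the sketch below approximates the integral of |H(f)|^2 on a finite grid and normalises it by |H(0)|^2; for this particular response the exact answer is (π/2)·f_c.

```python
import numpy as np

# Example low-pass response: single-pole filter with 3 dB cutoff f_c.
f_c = 1_000.0                                # Hz (assumed example value)
df = 1.0                                     # frequency step, Hz
f = np.arange(0.0, 1_000 * f_c, df)          # finite stand-in for 0..infinity
H_mag_sq = 1.0 / (1.0 + (f / f_c) ** 2)      # |H(f)|^2

# Noise equivalent bandwidth: integral of |H(f)|^2 normalised by |H(0)|^2.
enbw = np.sum(H_mag_sq) * df / H_mag_sq[0]

print(f"ENBW ~ {enbw:.1f} Hz (exact value (pi/2)*f_c = {np.pi / 2 * f_c:.1f} Hz)")
```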
Photonics
In photonics, the term bandwidth carries a variety of meanings:
the bandwidth of the output of some light source, e.g., an ASE source or a laser; the bandwidth of ultrashort optical pulses can be particularly large
the width of the frequency range that can be transmitted by some element, e.g. an optical fiber
the gain bandwidth of an optical amplifier
the width of the range of some other phenomenon, e.g., a reflection, the phase matching of a nonlinear process, or some resonance
the maximum modulation frequency (or range of modulation frequencies) of an optical modulator
the range of frequencies in which some measurement apparatus (e.g., a power meter) can operate
the data rate (e.g., in Gbit/s) achieved in an optical communication system; see bandwidth (computing).
A related concept is the spectral linewidth of the radiation emitted by excited atoms.
See also
Bandwidth extension
Broadband
Noise bandwidth
Rise time
Spectral efficiency
Spectral width
Notes
References
Signal processing
Telecommunication theory
Filter frequency response
Spectrum (physical sciences) | Bandwidth (signal processing) | [
"Physics",
"Technology",
"Engineering"
] | 2,117 | [
"Physical phenomena",
"Telecommunications engineering",
"Computer engineering",
"Spectrum (physical sciences)",
"Signal processing",
"Waves"
] |
3,973 | https://en.wikipedia.org/wiki/Bicycle | A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle, with two wheels attached to a frame, one behind the other. A person who rides a bicycle is called a cyclist, or bicyclist.
Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles. There are many more bicycles than cars. Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling.
The basic shape and configuration of a typical upright or "safety" bicycle, has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular.
The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels.
Etymology
The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time.
Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is 0x1F6B2. The HTML entity &#x1F6B2; produces 🚲.
Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power, whereas the term bike describes a two-wheeler using an internal combustion engine or electric motor as its source of motive power, i.e. a motorcycle or motorbike.
History
The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel.
The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings ().
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first in mass production. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley, used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory.
The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry is usually described as the first recognizably modern bicycle. Soon the seat tube was added which created the modern bike's double-triangle diamond frame.
Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders.
The Svea Velocipede with vertical pedal arrangement and locking hubs was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units.
In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles, such as the horse and buggy or the horsecar. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year.
Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has more than 100 million units made, while the most-produced car, the Toyota Corolla, has reached 44 million and counting.
Uses
Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries.
They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX.
Technical aspects
The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort.
Types
Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles.
Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles".
Dynamics
A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself.
The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle.
Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie.
Performance
The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation.
A human traveling on a bicycle at low to medium speeds of around uses only the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is .
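Because drag force grows with the square of speed, the power needed to overcome it grows roughly with the cube of speed. The sketch below uses round, assumed values for air density and the rider's drag area purely to illustrate that scaling; it is not a model taken from the article.

```python
RHO = 1.2     # air density in kg/m^3 (assumed)
CDA = 0.5     # drag coefficient x frontal area in m^2 (assumed upright rider)

def drag_power_watts(speed_ms):
    """Power needed to overcome aerodynamic drag at a given speed."""
    drag_force = 0.5 * RHO * CDA * speed_ms ** 2   # newtons, proportional to v^2
    return drag_force * speed_ms                   # watts, proportional to v^3

for kmh in (15, 30, 45):
    v = kmh / 3.6                                  # convert km/h to m/s
    print(f"{kmh} km/h -> roughly {drag_power_watts(v):.0f} W against drag")
```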
In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy efficient motorcars.
Parts
Frame
The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends.
Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames.
Step-throughs were popular partly for practical reasons and partly for social mores of the day. For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle.
Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle but this type was banned from competition in 1934 by the Union Cycliste Internationale.
Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength to weight ratio. A typical modern carbon fiber frame can weigh less than .
Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models.
Drivetrain and gearing
The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex.
Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 11 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer.
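A short Python sketch of the counting described above, using a hypothetical 2 × 11 drivetrain; each pairing's gear ratio is simply chainring teeth divided by sprocket teeth, and nearly coinciding ratios or avoided cross-chained combinations are what make the usable number lower than the theoretical one.

```python
# Hypothetical drivetrain: 2 chainrings x 11 sprockets (example tooth counts).
chainrings = [34, 50]                                       # front chainring teeth
sprockets = [11, 12, 13, 14, 16, 18, 20, 22, 25, 28, 32]    # rear cassette teeth

# The theoretical number of gears is simply front x back.
print("theoretical gears:", len(chainrings) * len(sprockets))

# Gear ratio for each pairing = chainring teeth / sprocket teeth.
for front in chainrings:
    ratios = [round(front / rear, 2) for rear in sprockets]
    print(f"{front}T chainring ratios:", ratios)
```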
An alternative to chaindrive is to use a synchronous belt. These are toothed and work much the same as a chain—popular with commuters and long distance cyclists they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear.
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals.
With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: two-speed hub gear integrated with chain ring, up to 3 chain rings, up to 12 sprockets, hub gear built into rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common).
Steering
The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of sweep backward and centimeters rise upwards, as well as wider widths which can provide better handling due to increased leverage against the wheel.
Seating
Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle.
A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering.
Brakes
Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub, or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity.
With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s.
Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake.
Suspension
Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort.
Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot.
Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension.
Wheels and tires
The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels.
Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between wide, and have treads for gripping in muddy conditions or metal studs for ice.
Groupset
Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars.
Accessories
Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both.
Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bell. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form fitting and high visibility clothing.
Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively.
Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing.
Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable.
Standards
A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety.
The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability".
The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry.
Maintenance and repair
Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools.
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket).
There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps.
Full service is available from bicycle mechanics at a local bike shop.
In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association.
Maintenance
The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of , whereas bicycle tires are normally in the range of .
Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease.
The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components.
Over the longer term, tires do wear out, after ; a rash of punctures is often the most visible sign of a worn tire.
Repair
Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice.
The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture or the tube can be replaced, though the latter solution comes at a greater cost and waste of material. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove.
Tools
There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughening the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer.
Social and historical aspects
The bicycle has had a considerable effect on human society, in both the cultural and industrial realms.
In daily life
Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast.
In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting.
In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that the parking capacity may be exceeded, while in some places such as Delft the capacity is usually exceeded. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front.
There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal.
In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheel chairs.
Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker".
Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries. They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used. They also offer a degree of exercise to keep individuals healthy.
Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world.
Poverty alleviation
Female emancipation
The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers.
The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action.
In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach.
Economic implications
Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft.
Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s.
They also served to teach the industrial models later adopted, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was by bicycle makers), lobbying for better roads (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful.
Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll.
Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap.
They also presaged a move away from public transit that would explode with the introduction of the automobile.
J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started from manufacturing bicycles, in Glasgow in March 1885.
In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.) In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames.
Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China.
In line with the European financial crisis of that time, in 2011 the number of bicycle sales in Italy (1.75 million) passed the number of new car sales.
Environmental impact
One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption. (Ballantine, 1972) The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of travel.
Manufacturing
The global bicycle market was worth $61 billion in 2011. Around 130 million bicycles were sold every year globally and 66% of them were made in China.
Legal requirements
Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists.
The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver. The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition.
In some countries, bicycles must have functioning front and rear lights when ridden after dark.
Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety in numbers effect).
Theft
Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify, as a large number of crimes are not reported. Around 50% of the participants in a Montreal survey published in the International Journal of Sustainable Transportation had experienced a bicycle theft in their lifetime as active cyclists. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
See also
Bicycle and motorcycle geometry
Bicycle drum brake
Bicycle fender
Bicycle lighting
Bicycle parking station
Bicycle-friendly
Bicycle-sharing system
Cyclability
Cycling advocacy
Cycling in the Netherlands
Danish bicycle VIN-system
List of bicycle types
List of films about bicycles and cycling
Outline of bicycles
Outline of cycling
rattleCAD (software for bicycle design)
Skirt guard
Twike
Velomobile
Wooden bicycle
World Bicycle Day
Notes
References
Citations
Sources
General
Further reading
External links
A History of Bicycles and Other Cycles at the Canada Science and Technology Museum
19th-century inventions
Appropriate technology
Articles containing video clips
German inventions
Sustainable technologies
Sustainable transport | Bicycle | [
"Physics"
] | 8,770 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
3,974 | https://en.wikipedia.org/wiki/Biopolymer | Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. Polynucleotides, such as RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs).
In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering.
Biopolymers versus synthetic polymers
A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or stochastic) structures. This fact leads to a molecular mass distribution that is missing in biopolymers. In fact, as their synthesis is controlled by a template-directed process in most in vivo systems, all biopolymers of a type (say one specific protein) are all alike: they all contain similar sequences and numbers of monomers and thus all have the same mass. This phenomenon is called monodispersity in contrast to the polydispersity encountered in synthetic polymers. As a result, biopolymers have a dispersity of 1.
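The dispersity referred to above is the ratio of the weight-average to the number-average molar mass (Đ = Mw / Mn). The toy Python calculation below, with made-up chain masses, shows that a set of identical chains gives a dispersity of exactly 1, while a mixture of chain lengths gives a value above 1.

```python
def dispersity(chain_masses):
    """Dispersity = Mw / Mn for a list of individual chain molar masses."""
    mn = sum(chain_masses) / len(chain_masses)                 # number average
    mw = sum(m * m for m in chain_masses) / sum(chain_masses)  # weight average
    return mw / mn

# Template-directed biopolymer: every chain has the same mass -> dispersity 1.
print(dispersity([50_000] * 5))                                # -> 1.0

# Synthetic polymer with a spread of chain lengths -> dispersity > 1.
print(dispersity([20_000, 40_000, 50_000, 60_000, 80_000]))    # -> 1.16
```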
Conventions and nomenclature
Polypeptides
The convention for a polypeptide is to list its constituent amino acid residues as they occur from the amino terminus to the carboxylic acid terminus. The amino acid residues are always joined by peptide bonds. Protein, though used colloquially to refer to any polypeptide, refers to larger or fully functional forms and can consist of several polypeptide chains as well as single chains. Proteins can also be modified to include non-peptide components, such as saccharide chains and lipids.
Nucleic acids
The convention for a nucleic acid sequence is to list the nucleotides as they occur from the 5' end to the 3' end of the polymer chain, where 5' and 3' refer to the numbering of carbons around the ribose ring which participate in forming the phosphate diester linkages of the chain. Such a sequence is called the primary structure of the biopolymer.
Polysaccharides
Polysaccharides (sugar polymers) can be linear or branched and are typically joined with glycosidic bonds. The exact placement of the linkage can vary, and the orientation of the linking functional groups is also important, resulting in α- and β-glycosidic bonds with numbering definitive of the linking carbons' location in the ring. In addition, many saccharide units can undergo various chemical modifications, such as amination, and can even form parts of other molecules, such as glycoproteins.
Structural characterization
There are a number of biophysical techniques for determining sequence information. Protein sequence can be determined by Edman degradation, in which the N-terminal residues are hydrolyzed from the chain one at a time, derivatized, and then identified. Mass spectrometer techniques can also be used. Nucleic acid sequence can be determined using gel electrophoresis and capillary electrophoresis. Lastly, mechanical properties of these biopolymers can often be measured using optical tweezers or atomic force microscopy. Dual-polarization interferometry can be used to measure the conformational changes or self-assembly of these materials when stimulated by pH, temperature, ionic strength or other binding partners.
Common biopolymers
Collagen: Collagen is the primary structural protein in vertebrates and is the most abundant protein in mammals. Because of this, collagen is one of the most easily attainable biopolymers and is used for many research purposes. Because of its mechanical structure, collagen has high tensile strength and is a non-toxic, easily absorbable, biodegradable, and biocompatible material. Therefore, it has been used for many medical applications such as in treatment for tissue infection, drug delivery systems, and gene therapy.
Silk fibroin: Silk fibroin (SF) is another protein-rich biopolymer that can be obtained from different silkworm species, such as the mulberry silkworm Bombyx mori. In contrast to collagen, SF has a lower tensile strength but strong adhesive properties owing to its insoluble, fibrous protein composition. In recent studies, silk fibroin has been found to possess anticoagulant properties and to support platelet adhesion. Silk fibroin has additionally been found to support stem cell proliferation in vitro.
Gelatin: Gelatin is obtained from type I collagen consisting of cysteine and is produced by the partial hydrolysis of collagen from the bones, tissues, and skin of animals. There are two types of gelatin, Type A and Type B. Type A gelatin is derived by acid hydrolysis of collagen and has 18.5% nitrogen. Type B is derived by alkaline hydrolysis and contains 18% nitrogen and no amide groups. Elevated temperatures cause gelatin to melt and exist as coils, whereas lower temperatures result in a coil-to-helix transformation. Gelatin contains many functional groups, such as NH2, SH, and COOH, which allow it to be modified using nanoparticles and biomolecules. Gelatin is an extracellular matrix protein, which allows it to be applied in applications such as wound dressings, drug delivery, and gene transfection.
Starch: Starch is an inexpensive, biodegradable biopolymer that is copious in supply. Nanofibers and microfibers can be added to the polymer matrix to improve the mechanical properties of starch, increasing its elasticity and strength. Without the fibers, starch has poor mechanical properties due to its sensitivity to moisture. Being biodegradable and renewable, starch is used for many applications, including plastics and pharmaceutical tablets.
Cellulose: Cellulose is very structured, with stacked chains that result in stability and strength. The strength and stability come from the straighter shape of cellulose, caused by glucose monomers joined by glycosidic bonds. The straight shape allows the molecules to pack closely. Cellulose is very common in applications because of its abundant supply, its biocompatibility, and its environmental friendliness. Cellulose is used extensively in the form of nanofibrils called nanocellulose. At low concentrations, nanocellulose produces a transparent gel. This material can be used for biodegradable, homogeneous, dense films that are very useful in the biomedical field.
Alginate: Alginate is the most abundant marine natural polymer, derived from brown seaweed. Alginate biopolymer applications range from the packaging, textile, and food industries to biomedical and chemical engineering. The first application of alginate was in the form of wound dressings, where its gel-like and absorbent properties were discovered. When applied to wounds, alginate produces a protective gel layer that is optimal for healing and tissue regeneration and maintains a stable temperature environment. Additionally, alginate has been developed as a drug delivery medium, as the drug release rate can easily be manipulated through the variety of available alginate densities and fibrous compositions.
Biopolymer applications
The applications of biopolymers can be categorized into two main fields: biomedical and industrial.
Biomedical
Because one of the main purposes of biomedical engineering is to mimic body parts to sustain normal body functions, biopolymers, with their biocompatible properties, are used extensively in tissue engineering, medical devices, and the pharmaceutical industry. Many biopolymers can be used for regenerative medicine, tissue engineering, drug delivery, and medical applications in general because of their mechanical properties. They provide characteristics such as wound healing, catalysis of bioactivity, and non-toxicity. Compared to synthetic polymers, which can present various disadvantages such as immunogenic rejection and toxicity after degradation, many biopolymers normally integrate better with the body, as they also possess more complex structures, similar to those of the human body.
More specifically, polypeptides such as collagen and silk are biocompatible materials that are being used in ground-breaking research, as they are inexpensive and easily attainable materials. Gelatin is often used in wound dressings, where it acts as an adhesive. Gelatin-based scaffolds and films can hold drugs and other nutrients that are supplied to a wound to promote healing.
As collagen is one of the more popular biopolymers used in biomedical science, here are some examples of its use:
Collagen-based drug delivery systems: Collagen films act like a barrier membrane and are used to treat tissue conditions such as infected corneal tissue or liver cancer. Collagen films have also been used as gene delivery carriers, which can promote bone formation.
Collagen sponges: Collagen sponges are used as a dressing to treat burn victims and other serious wounds. Collagen-based implants are used for cultured skin cells or as drug carriers for burn wounds and skin replacement.
Collagen as haemostat: When collagen interacts with platelets, it causes rapid coagulation of blood. This rapid coagulation produces a temporary framework so that the fibrous stroma can be regenerated by host cells. Collagen-based haemostats reduce blood loss in tissues and help manage bleeding in organs such as the liver and spleen.
Chitosan is another popular biopolymer in biomedical research. Chitosan is derived from chitin, the main component of the exoskeletons of crustaceans and insects and the second most abundant biopolymer in the world. Chitosan has many excellent characteristics for biomedical science: it is biocompatible; it is highly bioactive, meaning it stimulates a beneficial response from the body; it can biodegrade, which can eliminate the need for a second surgery in implant applications; it can form gels and films; and it is selectively permeable. These properties allow for various biomedical applications of chitosan.
Chitosan as drug delivery: Chitosan is used mainly for drug targeting because it has the potential to improve drug absorption and stability. In addition, chitosan conjugated with anticancer agents can produce better anticancer effects by causing the gradual release of free drug into cancerous tissue.
Chitosan as an anti-microbial agent: Chitosan is used to stop the growth of microorganisms. It exhibits antimicrobial activity against microorganisms such as algae, fungi, gram-positive bacteria, and various yeast species.
Chitosan composite for tissue engineering: Chitosan powder blended with alginate is used to form functional wound dressings. These dressings create a moist, biocompatible environment that aids the healing process. The dressings are also biodegradable and have a porous structure that allows cells to grow into them. Furthermore, thiolated chitosans (see thiomers) are used for tissue engineering and wound healing, as these biopolymers are able to crosslink via disulfide bonds, forming stable three-dimensional networks.
Industrial
Food: Biopolymers are being used in the food industry for packaging, edible encapsulation films, and the coating of foods. Polylactic acid (PLA) is very common in the food industry because of its clear color and resistance to water. However, most polymers are hydrophilic in nature and start deteriorating when exposed to moisture. Biopolymers are also being used as edible films that encapsulate foods. These films can carry antioxidants, enzymes, probiotics, minerals, and vitamins, and foods encapsulated with such a biopolymer film can supply these compounds to the body.
Packaging: The most common biopolymers used in packaging are polyhydroxyalkanoates (PHAs), polylactic acid (PLA), and starch. Starch and PLA are commercially available and biodegradable, making them a common choice for packaging. However, their barrier properties (both moisture-barrier and gas-barrier properties) and thermal properties are not ideal. Hydrophilic polymers are not water resistant and allow water to pass through the packaging, which can affect the contents of the package. Polyglycolic acid (PGA) is a biopolymer with excellent barrier characteristics and is now being used to overcome the barrier limitations of PLA and starch.
Water purification: Chitosan has been used for water purification. It is used as a flocculant that takes only a few weeks or months, rather than years, to degrade in the environment. Chitosan purifies water by chelation, the process in which binding sites along the polymer chain bind with metal ions in the water to form chelates. Chitosan has been shown to be an excellent candidate for use in storm- and wastewater treatment.
As materials
Some biopolymers, such as PLA, naturally occurring zein, and poly-3-hydroxybutyrate, can be used as plastics, replacing the need for polystyrene- or polyethylene-based plastics.
Some plastics are now referred to as being 'degradable', 'oxy-degradable' or 'UV-degradable'. This means that they break down when exposed to light or air, but these plastics are still primarily (as much as 98 per cent) oil-based and are not currently certified as 'biodegradable' under the European Union directive on Packaging and Packaging Waste (94/62/EC). Biopolymers will break down, and some are suitable for domestic composting.
Biopolymers (also called renewable polymers) are produced from biomass for use in the packaging industry. Biomass comes from crops such as sugar beet, potatoes, or wheat; when used to produce biopolymers, these are classified as non-food crops. They can be converted via the following pathways:
Sugar beet > Glycolic acid > Polyglycolic acid (PGA)
Starch > (fermentation) > Lactic acid > Polylactic acid (PLA)
Biomass > (fermentation) > Bioethanol > Ethene > Polyethylene
Many types of packaging can be made from biopolymers: food trays, blown starch pellets for shipping fragile goods, thin films for wrapping.
Environmental impacts
Biopolymers can be sustainable and carbon neutral and are always renewable, because they are made from plant or animal materials which can be grown indefinitely. Since these materials come from agricultural crops, their use could create a sustainable industry. In contrast, the feedstocks for polymers derived from petrochemicals will eventually deplete. In addition, biopolymers have the potential to cut carbon emissions and reduce CO2 quantities in the atmosphere: the CO2 released when they degrade can be reabsorbed by crops grown to replace them, which makes them close to carbon neutral.
Almost all biopolymers are biodegradable in the natural environment: they are broken down into CO2 and water by microorganisms. These biodegradable biopolymers are also compostable: they can be put into an industrial composting process and will break down by 90% within six months. Biopolymers that do this can be marked with a 'compostable' symbol, under European Standard EN 13432 (2000). Packaging marked with this symbol can be put into industrial composting processes and will break down within six months or less. An example of a compostable polymer is PLA film under 20μm thick: films which are thicker than that do not qualify as compostable, even though they are "biodegradable". In Europe there is a home composting standard and associated logo that enables consumers to identify and dispose of packaging in their compost heap.
See also
Biomaterials
Bioplastic
Biopolymers & Cell (journal)
Condensation polymers
Condensed tannins
DNA sequence
Melanin
Non food crops
Phosphoramidite
Polymer chemistry
Sequence-controlled polymers
Sequencing
Small molecules
Worm-like chain
References
External links
NNFCC: The UK's National Centre for Biorenewable Energy, Fuels and Materials
Bioplastics Magazine
Biopolymer group
What's Stopping Bioplastic?
Biomolecules
Polymers
Molecular biology
Molecular genetics
Biotechnology products
Bioplastics
Biomaterials | Biopolymer | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 3,754 | [
"Biomaterials",
"Natural products",
"Biotechnology products",
"Biochemistry",
"Organic compounds",
"Materials",
"Molecular genetics",
"Biomolecules",
"Polymer chemistry",
"Structural biology",
"Polymers",
"Medical technology",
"Matter",
"Molecular biology"
] |
3,986 | https://en.wikipedia.org/wiki/Benjamin%20Franklin | Benjamin Franklin (January 17, 1706 – April 17, 1790) was an American polymath: a writer, scientist, inventor, statesman, diplomat, printer, publisher and political philosopher. Among the most influential intellectuals of his time, Franklin was one of the Founding Fathers of the United States; a drafter and signer of the Declaration of Independence; and the first postmaster general.
Franklin became a successful newspaper editor and printer in Philadelphia, the leading city in the colonies, publishing The Pennsylvania Gazette at age 23. He became wealthy publishing this and Poor Richard's Almanack, which he wrote under the pseudonym "Richard Saunders". After 1767, he was associated with the Pennsylvania Chronicle, a newspaper known for its revolutionary sentiments and criticisms of the policies of the British Parliament and the Crown. He pioneered and was the first president of the Academy and College of Philadelphia, which opened in 1751 and later became the University of Pennsylvania. He organized and was the first secretary of the American Philosophical Society and was elected its president in 1769. He was appointed deputy postmaster-general for the British colonies in 1753, which enabled him to set up the first national communications network.
He was active in community affairs and colonial and state politics, as well as national and international affairs. Franklin became a hero in America when, as an agent in London for several colonies, he spearheaded the repeal of the unpopular Stamp Act by the British Parliament. An accomplished diplomat, he was widely admired as the first U.S. ambassador to France and was a major figure in the development of positive Franco-American relations. His efforts proved vital in securing French aid for the American Revolution. From 1785 to 1788, he served as President of Pennsylvania. At some points in his life, he owned slaves and ran "for sale" ads for slaves in his newspaper, but by the late 1750s, he began arguing against slavery, became an active abolitionist, and promoted the education and integration of African Americans into U.S. society.
As a scientist, his studies of electricity made him a major figure in the American Enlightenment and the history of physics. He also charted and named the Gulf Stream current. His numerous important inventions include the lightning rod, bifocals, glass harmonica and the Franklin stove. He founded many civic organizations, including the Library Company, Philadelphia's first fire department, and the University of Pennsylvania.
Franklin earned the title of "The First American" for his early and indefatigable campaigning for colonial unity. He was the only person to sign all four of the major founding documents of the United States: the Declaration of Independence, the Treaty of Alliance with France, the Treaty of Paris establishing peace with Britain, and the Constitution. Foundational in defining the American ethos, Franklin has been called "the most accomplished American of his age and the most influential in inventing the type of society America would become".
His life and legacy of scientific and political achievement, and his status as one of America's most influential Founding Fathers, have seen Franklin honored for more than two centuries after his death on the $100 bill and in the names of warships, many towns and counties, educational institutions and corporations, as well as in numerous cultural references and a portrait in the Oval Office. His more than 30,000 letters and documents have been collected in The Papers of Benjamin Franklin. Anne Robert Jacques Turgot said of him: "Eripuit fulmen cœlo, mox sceptra tyrannis" ("He snatched lightning from the sky and the scepter from tyrants").
Ancestry
Benjamin Franklin's father, Josiah Franklin, was a tallow chandler, soaper, and candlemaker. Josiah Franklin was born at Ecton, Northamptonshire, England, on December 23, 1657, the son of Thomas Franklin, a blacksmith and farmer, and his wife, Jane White. Benjamin's father and all four of his grandparents were born in England.
Josiah Franklin had a total of seventeen children with his two wives. He married his first wife, Anne Child, in about 1677 in Ecton and emigrated with her to Boston in 1683; they had three children before emigration and four after. Following her death, Josiah married Abiah Folger on July 9, 1689, in the Old South Meeting House by Reverend Samuel Willard, and had ten children with her. Benjamin, their eighth child, was Josiah Franklin's fifteenth child overall, and his tenth and final son.
Benjamin Franklin's mother, Abiah, was born in Nantucket, Massachusetts Bay Colony, on August 15, 1667, to Peter Folger, a miller and schoolteacher, and his wife, Mary Morrell Folger, a former indentured servant. Mary Folger came from a Puritan family that was among the first Pilgrims to flee to Massachusetts for religious freedom, sailing for Boston in 1635 after King Charles I of England had begun persecuting Puritans. Her father Peter was "the sort of rebel destined to transform colonial America." As clerk of the court, he was arrested on February 10, 1676, and jailed on February 19 for his inability to pay bail. He spent over a year and a half in jail.
Early life and education
Boston
Franklin was born on Milk Street in Boston, Province of Massachusetts Bay on January 17, 1706, and baptized at the Old South Meeting House in Boston. As a child growing up along the Charles River, Franklin recalled that he was "generally the leader among the boys."
Franklin's father wanted him to attend school with the clergy but only had enough money to send him to school for two years. He attended Boston Latin School but did not graduate; he continued his education through voracious reading. Although "his parents talked of the church as a career" for Franklin, his schooling ended when he was ten. He worked for his father for a time, and at 12 he became an apprentice to his brother James, a printer, who taught him the printing trade. When Benjamin was 15, James founded The New-England Courant, which was the third newspaper founded in Boston.
When denied the chance to write a letter to the paper for publication, Franklin adopted the pseudonym of "Silence Dogood," a middle-aged widow. Mrs. Dogood's letters were published and became a subject of conversation around town. Neither James nor the Courant readers were aware of the ruse, and James was unhappy with Benjamin when he discovered the popular correspondent was his younger brother. Franklin was an advocate of free speech from an early age. When his brother was jailed for three weeks in 1722 for publishing material unflattering to the governor, young Franklin took over the newspaper and had Mrs. Dogood proclaim, quoting Cato's Letters, "Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech." Franklin left his apprenticeship without his brother's permission, and in so doing became a fugitive.
Moves to Philadelphia and London
At age 17, Franklin ran away to Philadelphia, seeking a new start in a new city. When he first arrived, he worked in several printing shops there, but he was not satisfied by the immediate prospects in any of these jobs. After a few months, while working in one printing house, Pennsylvania governor Sir William Keith convinced him to go to London, ostensibly to acquire the equipment necessary for establishing another newspaper in Philadelphia. Discovering that Keith's promises of backing a newspaper were empty, he worked as a typesetter in a printer's shop in what is today the Lady Chapel of Church of St Bartholomew-the-Great in the Smithfield area of London, which had at that time been deconsecrated. He returned to Philadelphia in 1726 with the help of Thomas Denham, an English merchant who had emigrated but returned to England, and who employed Franklin as a clerk, shopkeeper, and bookkeeper in his business.
Junto and library
In 1727, at age 21, Franklin formed the Junto, a group of "like minded aspiring artisans and tradesmen who hoped to improve themselves while they improved their community." The Junto was a discussion group for issues of the day; it subsequently gave rise to many organizations in Philadelphia. The Junto was modeled after English coffeehouses that Franklin knew well and which had become the center of the spread of Enlightenment ideas in Britain.
Reading was a great pastime of the Junto, but books were rare and expensive. The members created a library, initially assembled from their own books, after Franklin wrote:
This did not suffice, however. Franklin conceived the idea of a subscription library, which would pool the funds of the members to buy books for all to read. This was the birth of the Library Company of Philadelphia, whose charter he composed in 1731.
Newspaperman
Upon Denham's death, Franklin returned to his former trade. In 1728, he set up a printing house in partnership with Hugh Meredith; the following year he became the publisher of The Pennsylvania Gazette, a newspaper in Philadelphia. The Gazette gave Franklin a forum for agitation about a variety of local reforms and initiatives through printed essays and observations. Over time, his commentary, and his adroit cultivation of a positive image as an industrious and intellectual young man, earned him a great deal of social respect. But even after he achieved fame as a scientist and statesman, he habitually signed his letters with the unpretentious 'B. Franklin, Printer.'
In 1732, he published the first German-language newspaper in America – Die Philadelphische Zeitung – although it failed after only one year because four other newly founded German papers quickly dominated the newspaper market. Franklin also printed Moravian religious books in German. He often visited Bethlehem, Pennsylvania, staying at the Moravian Sun Inn. In a 1751 pamphlet on demographic growth and its implications for the Thirteen Colonies, he called the Pennsylvania Germans "Palatine Boors" who could never acquire the "Complexion" of Anglo-American settlers and referred to "Blacks and Tawneys" as weakening the social structure of the colonies. Although he apparently reconsidered shortly thereafter, and the phrases were omitted from all later printings of the pamphlet, his views may have played a role in his political defeat in 1764.
According to Ralph Frasca, Franklin promoted the printing press as a device to instruct colonial Americans in moral virtue. Frasca argues he saw this as a service to God, because he understood moral virtue in terms of actions; thus, doing good provides a service to God. Despite his own moral lapses, Franklin saw himself as uniquely qualified to instruct Americans in morality. He tried to influence American moral life through the construction of a printing network based on a chain of partnerships from the Carolinas to New England. He thereby invented the first newspaper chain. It was more than a business venture, for like many publishers he believed that the press had a public-service duty.
When he established himself in Philadelphia, shortly before 1730, the town boasted two "wretched little" news sheets, Andrew Bradford's The American Weekly Mercury and Samuel Keimer's Universal Instructor in all Arts and Sciences, and Pennsylvania Gazette. This instruction in all arts and sciences consisted of weekly extracts from Chambers's Universal Dictionary. Franklin quickly did away with all of this when he took over the Instructor and made it The Pennsylvania Gazette. The Gazette soon became his characteristic organ, which he freely used for satire, for the play of his wit, even for sheer excess of mischief or of fun. From the first, he had a way of adapting his models to his own uses. The series of essays called "The Busy-Body," which he wrote for Bradford's American Mercury in 1729, followed the general Addisonian form, already modified to suit homelier conditions. The thrifty Patience, in her busy little shop, complaining of the useless visitors who waste her valuable time, is related to the women who address Mr. Spectator. The Busy-Body himself is a true Censor Morum, as Isaac Bickerstaff had been in the Tatler. And a number of the fictitious characters, Ridentius, Eugenius, Cato, and Cretico, represent traditional 18th-century classicism. Even this Franklin could use for contemporary satire, since Cretico, the "sowre Philosopher," is evidently a portrait of his rival, Samuel Keimer.
Franklin had mixed success in his plan to establish an inter-colonial network of newspapers that would produce a profit for him and disseminate virtue. Over the years he sponsored two dozen printers in Pennsylvania, South Carolina, New York, Connecticut, and even the Caribbean. By 1753, eight of the fifteen English language newspapers in the colonies were published by him or his partners. He began in Charleston, South Carolina, in 1731. After his second editor died, the widow, Elizabeth Timothy, took over and made it a success. She was one of the colonial era's first woman printers. For three decades Franklin maintained a close business relationship with her and her son Peter Timothy, who took over the South Carolina Gazette in 1746. The Gazette was impartial in political debates, while creating the opportunity for public debate, which encouraged others to challenge authority. Timothy avoided blandness and crude bias and, after 1765, increasingly took a patriotic stand in the growing crisis with Great Britain. Franklin's Connecticut Gazette (1755–68), however, proved unsuccessful. As the Revolution approached, political strife slowly tore his network apart.
Freemasonry
In 1730 or 1731, Franklin was initiated into the local Masonic lodge. He became a grand master in 1734, indicating his rapid rise to prominence in Pennsylvania. The same year, he edited and published the first Masonic book in the Americas, a reprint of James Anderson's Constitutions of the Free-Masons. He was the secretary of St. John's Lodge in Philadelphia from 1735 to 1738.
In January 1738, "Franklin appeared as a witness" in a manslaughter trial against two men who killed "a simple-minded apprentice" named Daniel Rees in a fake Masonic initiation gone wrong. One of the men "threw, or accidentally spilled, the burning spirits, and Daniel Rees died of his burns two days later." While Franklin did not directly participate in the hazing that led to Rees' death, he knew of the hazing before it turned fatal, and did nothing to stop it. He was criticized for his inaction in The American Weekly Mercury, by his publishing rival Andrew Bradford. Ultimately, "Franklin replied in his own defense in the Gazette."
Franklin remained a Freemason for the rest of his life.
Common-law marriage to Deborah Read
At age 17 in 1723, Franklin proposed to 15-year-old Deborah Read while a boarder in the Read home. At that time, Deborah's mother was wary of allowing her young daughter to marry Franklin, who was on his way to London at Governor Keith's request, and also because of his financial instability. Her own husband had recently died, and she declined Franklin's request to marry her daughter.
Franklin travelled to London, and after he failed to communicate as expected with Deborah and her family, they interpreted his long silence as a breaking of his promises. At the urging of her mother, Deborah married a potter named John Rogers on August 5, 1725. John soon fled to Barbados with her dowry in order to avoid debts and prosecution. Since Rogers' fate was unknown, bigamy laws prevented Deborah from remarrying.
Franklin returned in 1726 and resumed his courtship of Deborah. They established a common-law marriage on September 1, 1730. They took in his recently acknowledged illegitimate young son and raised him in their household. They had two children together. Their son, Francis Folger Franklin, was born in October 1732 and died of smallpox in 1736. Their daughter, Sarah "Sally" Franklin, was born in 1743 and eventually married Richard Bache.
Deborah's fear of the sea meant that she never accompanied Franklin on any of his extended trips to Europe; another possible reason why they spent much time apart is that he may have blamed her for possibly preventing their son Francis from being inoculated against the disease that subsequently killed him. Deborah wrote to him in November 1769, saying she was ill due to "dissatisfied distress" from his prolonged absence, but he did not return until his business was done. Deborah Read Franklin died of a stroke on December 14, 1774, while Franklin was on an extended mission to Great Britain; he returned in 1775.
William Franklin
In 1730, 24-year-old Franklin publicly acknowledged his illegitimate son William and raised him in his household. William was born on February 22, 1730, but his mother's identity is unknown. He was educated in Philadelphia and beginning at about age 30 studied law in London in the early 1760s. William himself fathered an illegitimate son, William Temple Franklin, born on the same day and month: February 22, 1760. The boy's mother was never identified, and he was placed in foster care. In 1762, the elder William Franklin married Elizabeth Downes, daughter of a planter from Barbados, in London. In 1763, he was appointed as the last royal governor of New Jersey.
A Loyalist to the king, William Franklin saw his relations with father Benjamin eventually break down over their differences about the American Revolutionary War, as Benjamin Franklin could never accept William's position. Deposed in 1776 by the revolutionary government of New Jersey, William was placed under house arrest at his home in Perth Amboy for six months. After the Declaration of Independence, he was formally taken into custody by order of the Provincial Congress of New Jersey, an entity which he refused to recognize, regarding it as an "illegal assembly." He was incarcerated in Connecticut for two years, in Wallingford and Middletown, and, after being caught surreptitiously engaging Americans into supporting the Loyalist cause, was held in solitary confinement at Litchfield for eight months. When finally released in a prisoner exchange in 1778, he moved to New York City, which was occupied by the British at the time.
While in New York City, he became leader of the Board of Associated Loyalists, a quasi-military organization chartered by King George III and headquartered in New York City. They initiated guerrilla forays into New Jersey, southern Connecticut, and New York counties north of the city. When British troops evacuated from New York, William Franklin left with them and sailed to England. He settled in London, never to return to North America. In the preliminary peace talks in 1782 with Britain, "... Benjamin Franklin insisted that loyalists who had borne arms against the United States would be excluded from this plea (that they be given a general pardon). He was undoubtedly thinking of William Franklin."
Success as an author
In 1732, Franklin began to publish the noted Poor Richard's Almanack (with content both original and borrowed) under the pseudonym Richard Saunders, on which much of his popular reputation is based. He frequently wrote under pseudonyms. The first issue published was for the upcoming year, 1733. He had developed a distinct, signature style that was plain, pragmatic and had a sly, soft but self-deprecating tone with declarative sentences. Although it was no secret that he was the author, his Richard Saunders character repeatedly denied it. "Poor Richard's Proverbs," adages from this almanac, such as "A penny saved is twopence dear" (often misquoted as "A penny saved is a penny earned") and "Fish and visitors stink in three days," remain common quotations in the modern world. Wisdom in folk society meant the ability to provide an apt adage for any occasion, and his readers became well prepared. He sold about ten thousand copies per year—it became an institution. In 1741, Franklin began publishing The General Magazine and Historical Chronicle for all the British Plantations in America. He used the heraldic badge of the Prince of Wales as the cover illustration.
Franklin wrote a letter, "Advice to a Friend on Choosing a Mistress," dated June 25, 1745, in which he gives advice to a young man about channeling sexual urges. Due to its licentious nature, it was not published in collections of his papers during the 19th century. Federal court rulings from the mid-to-late 20th century cited the document as a reason for overturning obscenity laws and against censorship.
Public life
Early steps in Pennsylvania
In 1736, Franklin created the Union Fire Company, one of the first volunteer firefighting companies in America. In the same year, he printed a new currency for New Jersey based on innovative anti-counterfeiting techniques he had devised. Throughout his career, he was an advocate for paper money, publishing A Modest Enquiry into the Nature and Necessity of a Paper Currency in 1729, and his printer printed money. He was influential in the more restrained and thus successful monetary experiments in the Middle Colonies, which stopped deflation without causing excessive inflation. In 1766, he made a case for paper money to the British House of Commons.
As he matured, Franklin began to concern himself more with public affairs. In 1743, he first devised a scheme for the Academy, Charity School, and College of Philadelphia; however, the person he had in mind to run the academy, Rev. Richard Peters, refused and Franklin put his ideas away until 1749 when he printed his own pamphlet, Proposals Relating to the Education of Youth in Pensilvania. He was appointed president of the Academy on November 13, 1749; the academy and the charity school opened in 1751.
In 1743, he founded the American Philosophical Society to help scientific men discuss their discoveries and theories. He began the electrical research that, along with other scientific inquiries, would occupy him for the rest of his life, in between bouts of politics and moneymaking.
During King George's War, Franklin raised a militia called the Association for General Defense because the legislators of the city had decided to take no action to defend Philadelphia "either by erecting fortifications or building Ships of War." He raised money to create earthwork defenses and buy artillery. The largest of these was the "Association Battery" or "Grand Battery" of 50 guns.
In 1747, Franklin (already a very wealthy man) retired from printing and went into other businesses. He formed a partnership with his foreman, David Hall, which provided Franklin with half of the shop's profits for 18 years. This lucrative business arrangement provided leisure time for study, and in a few years he had made many new discoveries.
Franklin became involved in Philadelphia politics and rapidly progressed. In October 1748, he was selected as a councilman; in June 1749, he became a justice of the peace for Philadelphia; and in 1751, he was elected to the Pennsylvania Assembly. On August 10, 1753, he was appointed deputy postmaster-general of British North America. His service in domestic politics included reforming the postal system, with mail sent out every week.
In 1751, Franklin and Thomas Bond obtained a charter from the Pennsylvania legislature to establish a hospital. Pennsylvania Hospital was the first hospital in the colonies. In 1752, Franklin organized the Philadelphia Contributionship, the Colonies' first homeowner's insurance company.
Between 1750 and 1753, the "educational triumvirate" of Franklin, Samuel Johnson of Stratford, Connecticut, and schoolteacher William Smith built on Franklin's initial scheme and created what Bishop James Madison, president of the College of William & Mary, called a "new-model" plan or style of American college. Franklin solicited, printed in 1752, and promoted an American textbook of moral philosophy by Samuel Johnson, titled Elementa Philosophica, to be taught in the new colleges. In June 1753, Johnson, Franklin, and Smith met in Stratford. They decided the new-model college would focus on the professions, with classes taught in English instead of Latin, have subject matter experts as professors instead of one tutor leading a class for four years, and there would be no religious test for admission. Johnson went on to found King's College (now Columbia University) in New York City in 1754, while Franklin hired Smith as provost of the College of Philadelphia, which opened in 1755. At its first commencement, on May 17, 1757, seven men graduated; six with a Bachelor of Arts and one with a Master of Arts. It was later merged with the University of the State of Pennsylvania to become the University of Pennsylvania. The college was to become influential in guiding the founding documents of the United States: in the Continental Congress, for example, over one-third of the college-affiliated men who contributed to the Declaration of Independence between September 4, 1774, and July 4, 1776, were affiliated with the college.
In 1754, he headed the Pennsylvania delegation to the Albany Congress. This meeting of several colonies had been requested by the Board of Trade in England to improve relations with the Indians and defense against the French. Franklin proposed a broad Plan of Union for the colonies. While the plan was not adopted, elements of it found their way into the Articles of Confederation and the Constitution.
In 1753, Harvard University and Yale awarded him honorary master of arts degrees. In 1756, he was awarded an honorary Master of Arts degree from the College of William & Mary. Later in 1756, Franklin organized the Pennsylvania Militia. He used Tun Tavern as a gathering place to recruit a regiment of soldiers to go into battle against the Native American uprisings that beset the American colonies.
Postmaster
Well known as a printer and publisher, Franklin was appointed postmaster of Philadelphia in 1737, holding the office until 1753, when he and publisher William Hunter were named deputy postmasters–general of British North America, the first to hold the office. (Joint appointments were standard at the time, for political reasons.) He was responsible for the British colonies from Pennsylvania north and east, as far as the island of Newfoundland. A post office for local and outgoing mail had been established in Halifax, Nova Scotia, by local stationer Benjamin Leigh, on April 23, 1754, but service was irregular. Franklin opened the first post office to offer regular, monthly mail in Halifax on December 9, 1755. Meantime, Hunter became postal administrator in Williamsburg, Virginia, and oversaw areas south of Annapolis, Maryland. Franklin reorganized the service's accounting system and improved speed of delivery between Philadelphia, New York, and Boston. By 1761, efficiencies led to the first profits for the colonial post office.
When the lands of New France were ceded to the British under the Treaty of Paris in 1763, the British province of Quebec was created among them, and Franklin saw mail service expanded between Montreal, Trois-Rivières, Quebec City, and New York. For the greater part of his appointment, he lived in England (from 1757 to 1762, and again from 1764 to 1774)—about three-quarters of his term. Eventually, his sympathies for the rebel cause in the American Revolution led to his dismissal on January 31, 1774.
On July 26, 1775, the Second Continental Congress established the United States Post Office and named Franklin as the first United States postmaster general. He had been a postmaster for decades and was a natural choice for the position. He had just returned from England and was appointed chairman of a Committee of Investigation to establish a postal system. The report of the committee, providing for the appointment of a postmaster general for the 13 American colonies, was considered by the Continental Congress on July 25 and 26. On July 26, 1775, Franklin was appointed postmaster general, the first appointed under the Continental Congress. His apprentice, William Goddard, felt that his ideas were mostly responsible for shaping the postal system and that the appointment should have gone to him, but he graciously conceded it to Franklin, 36 years his senior. Franklin, however, appointed Goddard as Surveyor of the Posts, issued him a signed pass, and directed him to investigate and inspect the various post offices and mail routes as he saw fit. The newly established postal system became the United States Post Office, a system that continues to operate today.
Political work
In 1757, he was sent to England by the Pennsylvania Assembly as a colonial agent to protest against the political influence of the Penn family, the proprietors of the colony. He remained there for five years, striving to end the proprietors' prerogative to overturn legislation from the elected Assembly and their exemption from paying taxes on their land. His lack of influential allies in Whitehall led to the failure of this mission.
At this time, many members of the Pennsylvania Assembly were feuding with William Penn's heirs, who controlled the colony as proprietors. After his return to the colony, Franklin led the "anti-proprietary party" in the struggle against the Penn family and was elected Speaker of the Pennsylvania House in May 1764. His call for a change from proprietary to royal government was a rare political miscalculation, however: Pennsylvanians worried that such a move would endanger their political and religious freedoms. Because of these fears and because of political attacks on his character, Franklin lost his seat in the October 1764 Assembly elections. The anti-proprietary party dispatched him to England again to continue the struggle against the Penn family proprietorship. During this trip, events drastically changed the nature of his mission.
In London, Franklin opposed the 1765 Stamp Act. Unable to prevent its passage, he made another political miscalculation and recommended a friend to the post of stamp distributor for Pennsylvania. Pennsylvanians were outraged, believing that he had supported the measure all along, and threatened to destroy his home in Philadelphia. Franklin soon learned of the extent of colonial resistance to the Stamp Act, and he testified during the House of Commons proceedings that led to its repeal. With this, Franklin suddenly emerged as the leading spokesman for American interests in England. He wrote popular essays on behalf of the colonies. Georgia, New Jersey, and Massachusetts also appointed him as their agent to the Crown.
During his lengthy missions to London between 1757 and 1775, Franklin lodged in a house on Craven Street, just off the Strand in central London. During his stays there, he developed a close friendship with his landlady, Margaret Stevenson, and her circle of friends and relations, in particular, her daughter Mary, who was more often known as Polly. The house is now a museum known as the Benjamin Franklin House. Whilst in London, Franklin became involved in radical politics. He belonged to a gentlemen's club (which he called "the honest Whigs"), which held stated meetings, and included members such as Richard Price, the minister of Newington Green Unitarian Church who ignited the Revolution controversy, and Andrew Kippis.
Scientific work
In 1756, Franklin had become a member of the Society for the Encouragement of Arts, Manufactures & Commerce (now the Royal Society of Arts), which had been founded in 1754. After his return to the United States in 1775, he became the Society's Corresponding Member, continuing a close connection. The Royal Society of Arts instituted a Benjamin Franklin Medal in 1956 to commemorate the 250th anniversary of his birth and the 200th anniversary of his membership of the RSA.
The study of natural philosophy (referred to today as science in general) drew him into overlapping circles of acquaintance. Franklin was, for example, a corresponding member of the Lunar Society of Birmingham. In 1759, the University of St Andrews awarded him an honorary doctorate in recognition of his accomplishments. In October 1759, he was granted Freedom of the Borough of St Andrews. He was also awarded an honorary doctorate by Oxford University in 1762. Because of these honors, he was often addressed as "Dr. Franklin."
While living in London in 1768, he developed a phonetic alphabet in A Scheme for a new Alphabet and a Reformed Mode of Spelling. This reformed alphabet discarded six letters he regarded as redundant (c, j, q, w, x, and y), and substituted six new letters for sounds he felt lacked letters of their own. This alphabet never caught on, and he eventually lost interest.
Return to London and Travels in Europe
From the mid-1750s to the mid-1770s, Franklin returned to England and spent much of his time in London, using the city as a base from which to travel. In 1771, he made short journeys through different parts of England, staying with Joseph Priestley at Leeds, Thomas Percival at Manchester and Erasmus Darwin at Lichfield. In Scotland, he spent five days with Lord Kames near Stirling and stayed for three weeks with David Hume in Edinburgh. In 1759, he visited Edinburgh with his son and later reported that he considered his six weeks in Scotland "six weeks of the densest happiness I have met with in any part of my life."
In Ireland, he stayed with Lord Hillsborough. Franklin noted of him that "all the plausible behaviour I have described is meant only, by patting and stroking the horse, to make him more patient, while the reins are drawn tighter, and the spurs set deeper into his sides." In Dublin, Franklin was invited to sit with the members of the Irish Parliament rather than in the gallery. He was the first American to receive this honor. While touring Ireland, he was deeply moved by the level of poverty he witnessed. The economy of the Kingdom of Ireland was affected by the same trade regulations and laws that governed the Thirteen Colonies. He feared that the American colonies could eventually come to the same level of poverty if the regulations and laws continued to apply to them.
Franklin spent two months in German lands in 1766, but his connections to the country stretched across a lifetime. He declared a debt of gratitude to German scientist Otto von Guericke for his early studies of electricity. Franklin also co-authored the first treaty of friendship between Prussia and America in 1785. In September 1767, he visited Paris with his usual traveling partner, Sir John Pringle, 1st Baronet. News of his electrical discoveries was widespread in France. His reputation meant that he was introduced to many influential scientists and politicians, and also to King Louis XV.
Defending the American cause
One line of argument in Parliament was that Americans should pay a share of the costs of the French and Indian War and therefore taxes should be levied on them. Franklin became the American spokesman in highly publicized testimony in Parliament in 1766. He stated that Americans already contributed heavily to the defense of the Empire. He said local governments had raised, outfitted and paid 25,000 soldiers to fight France—as many as Britain itself sent—and spent many millions from American treasuries doing so in the French and Indian War alone.
In 1772, Franklin obtained private letters of Thomas Hutchinson and Andrew Oliver, governor and lieutenant governor of the Province of Massachusetts Bay, proving that they had encouraged the Crown to crack down on Bostonians. Franklin sent them to America, where they escalated tensions. The letters were finally leaked to the public in the Boston Gazette in mid-June 1773, causing a political firestorm in Massachusetts and raising significant questions in England. The British began to regard him as the fomenter of serious trouble. Hopes for a peaceful solution ended as he was systematically ridiculed and humiliated by Solicitor-General Alexander Wedderburn, before the Privy Council on January 29, 1774. He returned to Philadelphia in March 1775, and abandoned his accommodationist stance.
In 1773, Franklin published two of his most celebrated pro-American satirical essays: "Rules by Which a Great Empire May Be Reduced to a Small One," and "An Edict by the King of Prussia."
Agent for British and Hellfire Club membership
Franklin is known to have occasionally attended the Hellfire Club's meetings during 1758 as a non-member during his time in England. However, some authors and historians would argue he was in fact a British spy. As there are no records left (having been burned in 1774), many of these members are just assumed or linked by letters sent to each other. One early proponent of the claim that Franklin was a member of the Hellfire Club and a double agent is the historian Donald McCormick, who has a history of making controversial claims.
Coming of revolution
In 1763, soon after Franklin returned to Pennsylvania from England for the first time, the western frontier was engulfed in a bitter war known as Pontiac's Rebellion. The Paxton Boys, a group of settlers convinced that the Pennsylvania government was not doing enough to protect them from American Indian raids, murdered a group of peaceful Susquehannock Indians and marched on Philadelphia. Franklin helped to organize a local militia to defend the capital against the mob. He met with the Paxton leaders and persuaded them to disperse. Franklin wrote a scathing attack against the racial prejudice of the Paxton Boys. "If an Indian injures me," he asked, "does it follow that I may revenge that injury on all Indians?"
He provided an early response to British surveillance through his own network of counter-surveillance and manipulation. "He waged a public relations campaign, secured secret aid, played a role in privateering expeditions, and churned out effective and inflammatory propaganda."
Declaration of Independence
By the time Franklin arrived in Philadelphia on May 5, 1775, after his second mission to Great Britain, the American Revolution had begun at the Battles of Lexington and Concord the previous month, on April 19, 1775. The New England militia had forced the main British army to remain inside Boston. The Pennsylvania Assembly unanimously chose Franklin as their delegate to the Second Continental Congress. In June 1776, he was appointed a member of the Committee of Five that drafted the Declaration of Independence. Although he was temporarily disabled by gout and unable to attend most meetings of the committee, he made several "small but important" changes to the draft sent to him by Thomas Jefferson.
At the signing, he is quoted as having replied to a comment by John Hancock that they must all hang together, saying, "Yes, we must, indeed, all hang together, or most assuredly we shall all hang separately."
Ambassador to France (1776–1785)
On October 26, 1776, Franklin was dispatched to France as commissioner for the United States. He took with him as secretary his 16-year-old grandson, William Temple Franklin. They lived in a home in the Parisian suburb of Passy, donated by Jacques-Donatien Le Ray de Chaumont, who supported the United States. Franklin remained in France until 1785. He conducted the affairs of his country toward the French nation with great success, which included securing a critical military alliance in 1778 and signing the 1783 Treaty of Paris.
Among his associates in France was Honoré Gabriel Riqueti, comte de Mirabeau—a French Revolutionary writer, orator and statesman who in 1791 was elected president of the National Assembly. In July 1784, Franklin met with Mirabeau and contributed anonymous materials that the Frenchman used in his first signed work: Considerations sur l'ordre de Cincinnatus. The publication was critical of the Society of the Cincinnati, established in the United States. Franklin and Mirabeau thought of it as a "noble order," inconsistent with the egalitarian ideals of the new republic.
During his stay in France, he was active as a Freemason, serving as venerable master of the lodge Les Neuf Sœurs from 1779 until 1781. In 1784, when Franz Mesmer began to publicize his theory of "animal magnetism," which was considered offensive by many, Louis XVI appointed a commission to investigate it. These included the chemist Antoine Lavoisier, the physician Joseph-Ignace Guillotin, the astronomer Jean Sylvain Bailly, and Franklin. The commission concluded, through blind trials, that mesmerism only seemed to work when the subjects expected it, which discredited mesmerism and became the first major demonstration of the placebo effect, which was described at that time as "imagination." In 1781, he was elected a fellow of the American Academy of Arts and Sciences.
Franklin's advocacy for religious tolerance in France contributed to arguments made by French philosophers and politicians that resulted in Louis XVI's signing of the Edict of Versailles in November 1787. This edict effectively nullified the Edict of Fontainebleau, which had denied non-Catholics civil status and the right to openly practice their faith.
Franklin also served as American minister to Sweden, although he never visited that country. He negotiated a treaty that was signed in April 1783. On August 27, 1783, in Paris, he witnessed the world's first hydrogen balloon flight. Le Globe, created by professor Jacques Charles and Les Frères Robert, was watched by a vast crowd as it rose from the Champ de Mars (now the site of the Eiffel Tower). Franklin became so enthusiastic that he subscribed financially to the next project to build a manned hydrogen balloon. On December 1, 1783, Franklin was seated in the special enclosure for honored guests when it took off from the Jardin des Tuileries, piloted by Charles and Nicolas-Louis Robert. Walter Isaacson describes a chess game between Franklin and the Duchess of Bourbon, "who made a move that inadvertently exposed her king. Ignoring the rules of the game, he promptly captured it. 'Ah,' said the duchess, 'we do not take Kings so.' Replied Franklin in a famous quip: 'We do in America.'"
Return to America
When he returned home in 1785, Franklin occupied a position second only to that of George Washington as the champion of American independence.
Le Ray honored him with a commissioned portrait painted by Joseph Duplessis, which now hangs in the National Portrait Gallery of the Smithsonian Institution in Washington, D.C. After his return, Franklin became an abolitionist and freed his two slaves. He eventually became president of the Pennsylvania Abolition Society.
President of Pennsylvania and Delegate to the Constitutional convention
Special balloting conducted October 18, 1785, unanimously elected him the sixth president of the Supreme Executive Council of Pennsylvania, replacing John Dickinson. The office was practically that of the governor. He held that office for slightly over three years, longer than any other, and served the constitutional limit of three full terms. Shortly after his initial election, he was re-elected to a full term on October 29, 1785, and again in the fall of 1786 and on October 31, 1787. In that capacity, he served as host to the Constitutional Convention of 1787 in Philadelphia.
He also served as a delegate to the Convention. It was primarily an honorary position and he seldom engaged in debate.
According to James McHenry, a Mrs. Powel asked Franklin what kind of government they had wrought. He replied: "A republic, madam, if you can keep it."
Death
Franklin suffered from obesity throughout his middle age and elder years, which resulted in multiple health problems, particularly gout, which worsened as he aged. In poor health during the signing of the U.S. Constitution in 1787, he was rarely seen in public from then until his death.
Franklin died from a pleuritic attack at his home in Philadelphia on April 17, 1790. He was aged 84 at the time of his death. His last words were reportedly, "a dying man can do nothing easy," to his daughter after she suggested that he change position in bed and lie on his side so he could breathe more easily. Franklin's death is described in the book The Life of Benjamin Franklin, quoting from the account of John Paul Jones:
Approximately 20,000 people attended Franklin's funeral after which he was interred in Christ Church Burial Ground in Philadelphia. Upon learning of his death, the Constitutional Assembly in Revolutionary France entered into a state of mourning for a period of three days, and memorial services were conducted in honor of Franklin throughout the country.
In 1728, aged 22, Franklin wrote what he hoped would be his own epitaph.
Franklin's actual grave, however, as he specified in his final will, simply reads "Benjamin and Deborah Franklin."
Inventions and scientific inquiries
Franklin was a prodigious inventor. Among his many creations were the lightning rod, Franklin stove, bifocal glasses and the flexible urinary catheter. He never patented his inventions; in his autobiography he wrote, "... as we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours; and this we should do freely and generously."
Electricity
Franklin started exploring the phenomenon of electricity in the 1740s, after he met the itinerant lecturer Archibald Spencer, who used static electricity in his demonstrations. He proposed that "vitreous" and "resinous" electricity were not different types of "electrical fluid" (as electricity was called then), but the same "fluid" under different pressures. (The same proposal was made independently that same year by William Watson.) He was the first to label them as positive and negative respectively, replacing the then-current distinction between "vitreous" and "resinous" electricity, and he was the first to discover the principle of conservation of charge. In 1748, he constructed a multiple-plate capacitor that he called an "electrical battery" (not a true battery like Volta's pile) by placing eleven panes of glass sandwiched between lead plates, suspended with silk cords and connected by wires.
In pursuit of more pragmatic uses for electricity, remarking in spring 1749 that he felt "chagrin'd a little" that his experiments had heretofore resulted in "Nothing in this Way of Use to Mankind," Franklin planned a practical demonstration. He proposed a dinner party where a turkey was to be killed via electric shock and roasted on an electrical spit. After having prepared several turkeys this way, he noted that "the birds kill'd in this manner eat uncommonly tender." Franklin recounted that in the process of one of these experiments, he was shocked by a pair of Leyden jars, resulting in numbness in his arms that persisted for one evening, noting "I am Ashamed to have been Guilty of so Notorious a Blunder."
Franklin briefly investigated electrotherapy, including the use of the electric bath. This work led to the field becoming widely known. In recognition of his work with electricity, he received the Royal Society's Copley Medal in 1753, and in 1756, he became one of the few 18th-century Americans elected a fellow of the Society. The CGS unit of electric charge has been named after him: one franklin (Fr) is equal to one statcoulomb.
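For reference, since one coulomb corresponds to approximately $2.998 \times 10^{9}$ statcoulombs, the franklin works out to
$$1\ \mathrm{Fr} = 1\ \mathrm{statC} \approx 3.34 \times 10^{-10}\ \mathrm{C}.$$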
Franklin advised Harvard University in its acquisition of new electrical laboratory apparatus after the complete loss of its original collection, in a fire that destroyed the original Harvard Hall in 1764. The collection he assembled later became part of the Harvard Collection of Historical Scientific Instruments, now on public display in its Science Center.
Kite experiment and lightning rod
Franklin published a proposal for an experiment to prove that lightning is electricity by flying a kite in a storm. On May 10, 1752, Thomas-François Dalibard of France conducted Franklin's experiment using an iron rod instead of a kite, and he extracted electrical sparks from a cloud. On June 15, 1752, Franklin may have conducted his well-known kite experiment in Philadelphia, successfully extracting sparks from a cloud. He described the experiment in his newspaper, The Pennsylvania Gazette, on October 19, 1752, without mentioning that he himself had performed it. This account was read to the Royal Society on December 21 and printed as such in the Philosophical Transactions. Joseph Priestley published an account with additional details in his 1767 History and Present Status of Electricity. Franklin was careful to stand on an insulator, keeping dry under a roof to avoid the danger of electric shock. Others, such as Georg Wilhelm Richmann in Russia, were indeed electrocuted in performing lightning experiments during the months immediately following his experiment.
In his writings, Franklin indicates that he was aware of the dangers and offered alternative ways to demonstrate that lightning was electrical, as shown by his use of the concept of electrical ground. He did not perform this experiment in the way that is often pictured in popular literature, flying the kite and waiting to be struck by lightning, as it would have been dangerous. Instead he used the kite to collect some electric charge from a storm cloud, showing that lightning was electrical. On October 19, 1752, he sent a letter to England with directions for repeating the experiment.
Franklin's electrical experiments led to his invention of the lightning rod. He said that conductors with a sharp rather than a smooth point could discharge silently and at a far greater distance. He surmised that this could help protect buildings from lightning by attaching "upright Rods of Iron, made sharp as a Needle and gilt to prevent Rusting, and from the Foot of those Rods a Wire down the outside of the Building into the Ground; ... Would not these pointed Rods probably draw the Electrical Fire silently out of a Cloud before it came nigh enough to strike, and thereby secure us from that most sudden and terrible Mischief!" Following a series of experiments on Franklin's own house, lightning rods were installed on the Academy of Philadelphia (later the University of Pennsylvania) and the Pennsylvania State House (later Independence Hall) in 1752.
Population studies
Franklin had a major influence on the emerging science of demography or population studies. In the 1730s and 1740s, he began taking notes on population growth, finding that the American population had the fastest growth rate on Earth. Emphasizing that population growth depended on food supplies, he emphasized the abundance of food and available farmland in America. He calculated that America's population was doubling every 20 years and would surpass that of England in a century. In 1751, he drafted Observations concerning the Increase of Mankind, Peopling of Countries, etc. Four years later, it was anonymously printed in Boston and was quickly reproduced in Britain, where it influenced the economist Adam Smith and later the demographer Thomas Malthus, who credited Franklin for discovering a rule of population growth. Franklin's predictions on how British mercantilism was unsustainable alarmed British leaders who did not want to be surpassed by the colonies, so they became more willing to impose restrictions on the colonial economy.
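Franklin's projection rests on simple compound doubling. The sketch below uses round, assumed starting figures (not Franklin's own data) to show how a population that doubles every 20 years grows 32-fold in a century:

```python
def doubling_projection(initial, years, doubling_time=20):
    """Project a population that doubles every `doubling_time` years."""
    return initial * 2 ** (years / doubling_time)

# Assumed round figure for illustration only (not Franklin's data):
# a colonial population of about 1 million with a 20-year doubling time.
for years in (0, 20, 40, 60, 80, 100):
    print(years, round(doubling_projection(1_000_000, years)))
# After 100 years the projection reaches 32,000,000, a 2**5 = 32-fold
# increase, which is how a small but fast-growing population can
# overtake a much larger, slowly growing one within a century.
```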
Kammen (1990) and Drake (2011) say Franklin's Observations concerning the Increase of Mankind (1755) stands alongside Ezra Stiles' "Discourse on Christian Union" (1760) as the leading works of 18th-century Anglo-American demography; Drake credits Franklin's "wide readership and prophetic insight." Franklin was also a pioneer in the study of slave demography, as shown in his 1755 essay. In his capacity as a farmer, he wrote at least one critique about the negative consequences of price controls, trade restrictions, and subsidy of the poor. This is succinctly preserved in his letter to the London Chronicle published November 29, 1766, titled "On the Price of Corn, and Management of the poor."
Oceanography
As deputy postmaster, Franklin became interested in North Atlantic Ocean circulation patterns. While in England in 1768, he heard a complaint from the Colonial Board of Customs: British packet ships carrying mail had taken several weeks longer to reach New York than an average merchant ship took to reach Newport, Rhode Island, even though the merchantmen had a longer and more complex voyage because they left from London, while the packets left from Falmouth in Cornwall. Franklin put the question to his cousin Timothy Folger, a Nantucket whaler captain, who told him that merchant ships routinely avoided a strong eastbound mid-ocean current, whereas the mail packet captains sailed dead into it, thus fighting an adverse current. Franklin worked with Folger and other experienced ship captains, learning enough to chart the current and name it the Gulf Stream, by which it is still known today.
Franklin published his Gulf Stream chart in 1770 in England, where it was ignored. Subsequent versions were printed in France in 1778 and the U.S. in 1786. The British original edition of the chart had been so thoroughly ignored that everyone assumed it was lost forever until Phil Richardson, a Woods Hole oceanographer and Gulf Stream expert, discovered it in the Bibliothèque Nationale in Paris in 1980. This find received front-page coverage in The New York Times. It took many years for British sea captains to adopt Franklin's advice on navigating the current; once they did, they were able to trim two weeks from their sailing time. In 1853, the oceanographer and cartographer Matthew Fontaine Maury noted that while Franklin charted and codified the Gulf Stream, he did not discover it.
An aging Franklin accumulated all his oceanographic findings in Maritime Observations, published by the Philosophical Society's transactions in 1786. It contained ideas for sea anchors, catamaran hulls, watertight compartments, shipboard lightning rods and a soup bowl designed to stay stable in stormy weather.
Theories and experiments
Franklin and his contemporary Leonhard Euler were the only major scientists who supported Christiaan Huygens's wave theory of light, which was largely ignored by the rest of the scientific community. In the 18th century, Isaac Newton's corpuscular theory was held to be true; it took Thomas Young's well-known slit experiment in 1803 to persuade most scientists to believe Huygens's theory.
On October 21, 1743, according to the popular myth, a storm moving from the southwest denied Franklin the opportunity of witnessing a lunar eclipse. He was said to have noted that the prevailing winds were actually from the northeast, contrary to what he had expected. In correspondence with his brother, he learned that the same storm had not reached Boston until after the eclipse, despite the fact that Boston is to the northeast of Philadelphia. He deduced that storms do not always travel in the direction of the prevailing wind, a concept that greatly influenced meteorology. After the Icelandic volcanic eruption of Laki in 1783, and the subsequent harsh European winter of 1784, Franklin made observations on the causal nature of these two seemingly separate events. He wrote about them in a lecture series.
Though Franklin is famously associated with kites from his lightning experiments, he has also been noted by many for using kites to pull humans and ships across waterways. George Pocock in the book A Treatise on The Aeropleustic Art, or Navigation in the Air, by means of Kites, or Buoyant Sails noted being inspired by Benjamin Franklin's traction of his body by kite power across a waterway.
Franklin noted a principle of refrigeration by observing that on a very hot day, he stayed cooler in a wet shirt in a breeze than he did in a dry one. To understand this phenomenon more clearly, he conducted experiments. On a warm day in Cambridge, England, in 1758, he and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually falling well below the freezing point of water, even though another thermometer showed that the room temperature remained constant. In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day."
In 1761, Franklin wrote a letter to Mary Stevenson describing his experiments on the relationship between color and heat absorption. He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes, an early demonstration of black-body thermal radiation. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. He waited some time and then observed that the black pieces had sunk furthest into the snow of all the colors, indicating that they got the hottest and melted the most snow.
According to Michael Faraday, Franklin's experiments on the non-conduction of ice are worth mentioning, although the law of the general effect of liquefaction on electrolytes is not attributed to Franklin. However, as reported in 1836 by Franklin's great-grandson Alexander Dallas Bache of the University of Pennsylvania, the law of the effect of heat on the conduction of bodies otherwise non-conductors, for example, glass, could be attributed to Franklin. Franklin wrote, "... A certain quantity of heat will make some bodies good conductors, that will not otherwise conduct ..." and again, "... And water, though naturally a good conductor, will not conduct well when frozen into ice."
While traveling on a ship, Franklin had observed that the wake of a ship was diminished when the cooks scuttled their greasy water. He studied the effects on a large pond in Clapham Common, London. "I fetched out a cruet of oil and dropt a little of it on the water ... though not more than a teaspoon full, produced an instant calm over a space of several yards square." He later used the trick to "calm the waters" by carrying "a little oil in the hollow joint of [his] cane."
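The observation also admits a rough back-of-the-envelope calculation: dividing the volume of oil by the area it calmed bounds the thickness of the film. The figures below are assumptions chosen only to illustrate the arithmetic (a teaspoon taken as 5 mL, and "several yards square" taken as a five-yard square); in Franklin's full account the calm eventually extended over a much larger part of the pond, and repeating the estimate with that larger area is often cited as giving a film on the order of nanometres, an early hint at the molecular scale.

```python
# Rough upper bound on the thickness of an oil film: volume / area.
# Assumed illustrative values, not measurements from Franklin's account.
teaspoon_m3 = 5e-6                # a teaspoon taken as 5 mL, in cubic metres
area_m2 = (5 * 0.9144) ** 2       # a 5-yard square, in square metres
thickness_m = teaspoon_m3 / area_m2
print(f"film thickness at most {thickness_m * 1e6:.2f} micrometres")
```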
Decision-making
In a 1772 letter to Joseph Priestley, Franklin laid out the earliest known description of the Pro & Con list, a common decision-making technique now sometimes called a decisional balance sheet.
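As a rough modern sketch of the idea: the items and weights below are invented placeholders, and summing weights is a simplification of Franklin's procedure, which struck out reasons of roughly equal weight on each side until one side prevailed.

```python
# A minimal sketch of a weighted pro/con list. The entries and weights
# are placeholders; Franklin's own procedure cancelled out reasons of
# roughly equal weight rather than summing them, but the outcome is
# the same: see which side the balance finally falls on.
pros = {"steady income": 3, "interesting work": 2}
cons = {"long commute": 2, "less time for study": 4}

balance = sum(pros.values()) - sum(cons.values())
if balance > 0:
    print(f"pros outweigh cons by {balance}")
elif balance < 0:
    print(f"cons outweigh pros by {-balance}")
else:
    print("the two sides balance")
```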
Views on religion, morality, and slavery
Like the other advocates of republicanism, Franklin emphasized that the new republic could survive only if the people were virtuous. All his life, he explored the role of civic and personal virtue, as expressed in Poor Richard's aphorisms. He felt that organized religion was necessary to keep men good to their fellow men, but rarely attended religious services himself. When he met Voltaire in Paris and asked his fellow member of the Enlightenment vanguard to bless his grandson, Voltaire said in English, "God and Liberty," and added, "this is the only appropriate benediction for the grandson of Monsieur Franklin."
Franklin's parents were both pious Puritans. The family attended the Old South Church, the most liberal Puritan congregation in Boston, where Benjamin Franklin was baptized in 1706. Franklin's father, a poor chandler, owned a copy of a book, Bonifacius: Essays to Do Good, by the Puritan preacher and family friend Cotton Mather, which Franklin often cited as a key influence on his life. "If I have been," Franklin wrote to Cotton Mather's son seventy years later, "a useful citizen, the public owes the advantage of it to that book." His first pen name, Silence Dogood, paid homage both to the book and to a widely known sermon by Mather. The book preached the importance of forming voluntary associations to benefit society. Franklin learned about forming do-good associations from Mather, but his organizational skills made him the most influential force in making voluntarism an enduring part of the American ethos.
Franklin formulated a presentation of his beliefs and published it in 1728. He no longer accepted the key Puritan ideas regarding salvation, the divinity of Jesus, or indeed much religious dogma. He classified himself as a deist in his 1771 autobiography, although he still considered himself a Christian. He retained a strong faith in a God as the wellspring of morality and goodness in man, and as a Providential actor in history responsible for American independence.
At a critical impasse during the Constitutional Convention in June 1787, he attempted to introduce the practice of daily common prayer.
The motion gained almost no support and was never brought to a vote.
Franklin was an enthusiastic admirer of the evangelical minister George Whitefield during the First Great Awakening. He did not himself subscribe to Whitefield's theology, but he admired Whitefield for exhorting people to worship God through good works. He published all of Whitefield's sermons and journals, thereby earning a lot of money and boosting the Great Awakening.
When he stopped attending church, Franklin explained his reasons in his autobiography.
Franklin retained a lifelong commitment to the non-religious Puritan virtues and political values he had grown up with, and through his civic work and publishing, he succeeded in passing these values into the American culture permanently. He had a "passion for virtue." These Puritan values included his devotion to egalitarianism, education, industry, thrift, honesty, temperance, charity and community spirit. Thomas Kidd states, "As an adult, Franklin touted ethical responsibility, industriousness, and benevolence, even as he jettisoned Christian orthodoxy."
The classical authors read in the Enlightenment period taught an abstract ideal of republican government based on hierarchical social orders of king, aristocracy and commoners. It was widely believed that English liberties relied on their balance of power, but also on hierarchical deference to the privileged class. "Puritanism ... and the epidemic evangelism of the mid-eighteenth century, had created challenges to the traditional notions of social stratification" by preaching that the Bible taught all men are equal, that the true value of a man lies in his moral behavior, not his class, and that all men can be saved. Franklin, steeped in Puritanism and an enthusiastic supporter of the evangelical movement, rejected the salvation dogma but embraced the radical notion of egalitarian democracy.
Franklin's commitment to teach these values was itself something he gained from his Puritan upbringing, with its stress on "inculcating virtue and character in themselves and their communities." These Puritan values, and the desire to pass them on, were one of his quintessentially American characteristics and helped shape the character of the nation. Max Weber considered Franklin's ethical writings a culmination of the Protestant ethic, which created the social conditions necessary for the birth of capitalism.
One of his characteristics was his respect, tolerance and promotion of all churches. Referring to his experience in Philadelphia, he wrote in his autobiography, "new Places of worship were continually wanted, and generally erected by voluntary Contribution, my Mite for such purpose, whatever might be the Sect, was never refused." "He helped create a new type of nation that would draw strength from its religious pluralism." The evangelical revivalists who were active mid-century, such as Whitefield, were the greatest advocates of religious freedom, "claiming liberty of conscience to be an 'inalienable right of every rational creature. Whitefield's supporters in Philadelphia, including Franklin, erected "a large, new hall, that ... could provide a pulpit to anyone of any belief." Franklin's rejection of dogma and doctrine and his stress on the God of ethics and morality and civic virtue made him the "prophet of tolerance." He composed "A Parable Against Persecution," an apocryphal 51st chapter of Genesis in which God teaches Abraham the duty of tolerance. While he was living in London in 1774, he was present at the birth of British Unitarianism, attending the inaugural session of the Essex Street Chapel, at which Theophilus Lindsey drew together the first avowedly Unitarian congregation in England; this was somewhat politically risky and pushed religious tolerance to new boundaries, as a denial of the doctrine of the Trinity was illegal until the 1813 Act.
Although his parents had intended for him a career in the church, Franklin as a young man adopted the Enlightenment religious belief in deism, that God's truths can be found entirely through nature and reason, declaring, "I soon became a thorough Deist." He rejected Christian dogma in a 1725 pamphlet A Dissertation on Liberty and Necessity, Pleasure and Pain, which he later saw as an embarrassment, while simultaneously asserting that God is "all wise, all good, all powerful." He defended his rejection of religious dogma with these words: "I think opinions should be judged by their influences and effects; and if a man holds none that tend to make him less virtuous or more vicious, it may be concluded that he holds none that are dangerous, which I hope is the case with me." After the disillusioning experience of seeing the decay in his own moral standards, and those of two friends in London whom he had converted to deism, Franklin decided that deism was true but it was not as useful in promoting personal morality as were the controls imposed by organized religion. Ralph Frasca contends that in his later life he can be considered a non-denominational Christian, although he did not believe Christ was divine.
In a major scholarly study of his religion, Thomas Kidd argues that Franklin believed that true religiosity was a matter of personal morality and civic virtue. Kidd says Franklin maintained his lifelong resistance to orthodox Christianity while arriving finally at a "doctrineless, moralized Christianity." According to David Morgan, Franklin was a proponent of "generic religion." He prayed to "Powerful Goodness" and referred to God as "the infinite." John Adams noted that he was a mirror in which people saw their own religion: "The Catholics thought him almost a Catholic. The Church of England claimed him as one of them. The Presbyterians thought him half a Presbyterian, and the Friends believed him a wet Quaker." Adams himself decided that Franklin best fit among the "Atheists, Deists, and Libertines." Whatever else Franklin was, concludes Morgan, "he was a true champion of generic religion." In a letter to Richard Price, Franklin states that he believes religion should support itself without help from the government, claiming, "When a Religion is good, I conceive that it will support itself; and, when it cannot support itself, and God does not take care to support, so that its Professors are oblig'd to call for the help of the Civil Power, it is a sign, I apprehend, of its being a bad one."
In 1790, just about a month before he died, Franklin wrote a letter to Ezra Stiles, president of Yale University, who had asked him his views on religion.
On July 4, 1776, Congress appointed a three-member committee composed of Franklin, Jefferson, and Adams to design the Great Seal of the United States. Franklin's proposal (which was not adopted) featured the motto: "Rebellion to Tyrants is Obedience to God" and a scene from the Book of Exodus he took from the frontispiece of the Geneva Bible, with Moses, the Israelites, the pillar of fire, and George III depicted as pharaoh.
The design that was produced was not acted upon by Congress, and the Great Seal's design was not finalized until a third committee was appointed in 1782.
Franklin strongly supported the right to freedom of speech.
Thirteen Virtues
Franklin sought to cultivate his character by a plan of 13 virtues, which he developed at age 20 (in 1726) and continued to practice in some form for the rest of his life. His autobiography lists his 13 virtues as:
Temperance. Eat not to dullness; drink not to elevation.
Silence. Speak not but what may benefit others or yourself; avoid trifling conversation.
Order. Let all your things have their places; let each part of your business have its time.
Resolution. Resolve to perform what you ought; perform without fail what you resolve.
Frugality. Make no expense but to do good to others or yourself; i.e., waste nothing.
Industry. Lose no time; be always employ'd in something useful; cut off all unnecessary actions.
Sincerity. Use no hurtful deceit; think innocently and justly, and, if you speak, speak accordingly.
Justice. Wrong none by doing injuries, or omitting the benefits that are your duty.
Moderation. Avoid extremes; forbear resenting injuries so much as you think they deserve.
Cleanliness. Tolerate no uncleanliness in body, clothes, or habitation.
Tranquility. Be not disturbed at trifles, or at accidents common or unavoidable.
Chastity. Rarely use venery but for health or offspring, never to dullness, weakness, or the injury of your own or another's peace or reputation.
Humility. Imitate Jesus and Socrates.
Franklin did not try to work on them all at once. Instead, he worked on only one each week "leaving all others to their ordinary chance." While he did not adhere completely to the enumerated virtues, and by his own admission he fell short of them many times, he believed the attempt made him a better man, contributing greatly to his success and happiness, which is why in his autobiography, he devoted more pages to this plan than to any other single point and wrote, "I hope, therefore, that some of my descendants may follow the example and reap the benefit."
Slavery
Franklin's views and practices concerning slavery evolved over the course of his life. In his early years, he owned seven slaves, including two men who worked in his household and his shop, but in his later years he became an adherent of abolition. A revenue stream for his newspaper was paid ads for the sale of slaves and for the capture of runaway slaves, and Franklin allowed the sale of slaves in his general store. He later became an outspoken critic of slavery. In 1758, he advocated the opening of a school for the education of black slaves in Philadelphia. He took two slaves to England with him, Peter and King. King escaped with a woman to live in the outskirts of London, and by 1758 he was working for a household in Suffolk. After returning from England in 1762, Franklin became more abolitionist in nature, attacking American slavery. In the wake of Somerset v Stewart, he voiced frustration at British abolitionists.
Franklin refused to publicly debate the issue of slavery at the 1787 Constitutional Convention.
At the time of the American founding, there were about half a million slaves in the United States, mostly in the five southernmost states, where they made up 40% of the population. Many of the leading American founders, such as Thomas Jefferson, George Washington, and James Madison, owned slaves, but many others did not. Benjamin Franklin thought that slavery was "an atrocious debasement of human nature" and "a source of serious evils." In 1787, Franklin and Benjamin Rush helped write a new constitution for the Pennsylvania Society for Promoting the Abolition of Slavery, and that same year Franklin became president of the organization. In 1790, Quakers from New York and Pennsylvania presented their petition for abolition to Congress. Their argument against slavery was backed by the Pennsylvania Abolitionist Society.
In his later years, as Congress was forced to deal with the issue of slavery, Franklin wrote several essays that stressed the importance of the abolition of slavery and of the integration of African Americans into American society. These writings included:
"An Address to the Public" (1789)
"A Plan for Improving the Condition of the Free Blacks" (1789)
"Sidi Mehemet Ibrahim on the Slave Trade" (1790)
Vegetarianism
Franklin became a vegetarian when he was a teenager apprenticing at a print shop, after coming upon a book by the early vegetarian advocate Thomas Tryon. In addition, he would have also been familiar with the moral arguments espoused by prominent vegetarian Quakers in the colonial-era Province of Pennsylvania, including Benjamin Lay and John Woolman. His reasons for vegetarianism were based on health, ethics, and economy.
Franklin also declared the consumption of fish to be "unprovoked murder." Despite his convictions, he began to eat fish after being tempted by fried cod on a boat sailing from Boston, justifying the eating of animals by observing that the fish's stomach contained other fish. Nonetheless, he recognized the faulty ethics in this argument and would continue to be a vegetarian on and off. He was "excited" by tofu, which he learned of from the writings of a Spanish missionary to Southeast Asia, Domingo Fernández Navarrete. Franklin sent a sample of soybeans to prominent American botanist John Bartram and had previously written to British diplomat and Chinese trade expert James Flint inquiring as to how tofu was made, with their correspondence believed to be the first documented use of the word "tofu" in the English language.
Franklin's "Second Reply to Vindex Patriae," a 1766 letter advocating self-sufficiency and less dependence on England, lists various examples of the bounty of American agricultural products, and does not mention meat. Detailing new American customs, he wrote that, "[t]hey resolved last spring to eat no more lamb; and not a joint of lamb has since been seen on any of their tables ... the sweet little creatures are all alive to this day, with the prettiest fleeces on their backs imaginable."
View on inoculation
The concept of preventing smallpox by variolation was introduced to colonial America by an African slave named Onesimus via his owner Cotton Mather in the early eighteenth century, but the procedure was not immediately accepted. James Franklin's newspaper carried articles in 1721 that vigorously denounced the concept.
However, by 1736 Benjamin Franklin was known as a supporter of the procedure. Therefore, when four-year-old "Franky" died of smallpox, opponents of the procedure circulated rumors that the child had been inoculated, and that this was the cause of his subsequent death. When Franklin became aware of this gossip, he placed a notice in the Pennsylvania Gazette, stating: "I do hereby sincerely declare, that he was not inoculated, but receiv'd the Distemper in the common Way of Infection ... I intended to have my Child inoculated." The child had a bad case of flux diarrhea, and his parents had waited for him to get well before having him inoculated. Franklin wrote in his Autobiography: "In 1736 I lost one of my sons, a fine boy of four years old, by the small-pox, taken in the common way. I long regretted bitterly, and still regret that I had not given it to him by inoculation. This I mention for the sake of parents who omit that operation, on the supposition that they should never forgive themselves if a child died under it; my example showing that the regret may be the same either way, and that, therefore, the safer should be chosen."
Interests and activities
Musical endeavors
Franklin is known to have played the violin, the harp, and the guitar. He also composed music, which included a string quartet in early classical style. While he was in London, he developed a much-improved version of the glass harmonica, in which the glasses rotate on a shaft, with the player's fingers held steady, instead of the other way around. He worked with the London glassblower Charles James to create it, and instruments based on his mechanical version soon found their way to other parts of Europe. Joseph Haydn, a fan of Franklin's enlightened ideas, had a glass harmonica in his instrument collection. Wolfgang Amadeus Mozart composed for Franklin's glass harmonica, as did Beethoven. Gaetano Donizetti used the instrument in the accompaniment to Amelia's aria "Par che mi dica ancora" in the tragic opera Il castello di Kenilworth (1821), as did Camille Saint-Saëns in his 1886 The Carnival of the Animals. Richard Strauss calls for the glass harmonica in his 1917 Die Frau ohne Schatten, and numerous other composers used Franklin's instrument as well.
Chess
Franklin was an avid chess player. He was playing chess by around 1733, making him the first chess player known by name in the American colonies. His essay on "The Morals of Chess" in Columbian Magazine in December 1786 is the second known writing on chess in America. This essay in praise of chess and prescribing a code of behavior for the game has been widely reprinted and translated. He and a friend used chess as a means of learning the Italian language, which both were studying; the winner of each game between them had the right to assign a task, such as parts of the Italian grammar to be learned by heart, to be performed by the loser before their next meeting.
Franklin was able to play chess more frequently against stronger opposition during his many years as a civil servant and diplomat in England, where the game was far better established than in America. He was able to improve his playing standard by facing more experienced players during this period. He regularly attended Old Slaughter's Coffee House in London for chess and socializing, making many important personal contacts. While in Paris, both as a visitor and later as ambassador, he visited the famous Café de la Régence, which France's strongest players made their regular meeting place. No records of his games have survived, so it is not possible to ascertain his playing strength in modern terms.
Franklin was inducted into the U.S. Chess Hall of Fame in 1999. The Franklin Mercantile Chess Club in Philadelphia, the second oldest chess club in the U.S., is named in his honor.
Legacy
Bequest
Franklin bequeathed £1,000 (about $4,400 at the time, or about $125,000 in 2021 dollars) each to the cities of Boston and Philadelphia, in trust to gather interest for 200 years. The trust began in 1785 when the French mathematician Charles-Joseph Mathon de la Cour, who admired Franklin greatly, wrote a friendly parody of Franklin's Poor Richard's Almanack called Fortunate Richard. The main character leaves a smallish amount of money in his will, five lots of 100 livres, to collect interest over one, two, three, four or five full centuries, with the resulting astronomical sums to be spent on impossibly elaborate utopian projects. Franklin, who was 79 years old at the time, wrote thanking him for a great idea and telling him that he had decided to leave a bequest of 1,000 pounds each to his native Boston and his adopted Philadelphia.
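The arithmetic behind such long-horizon trusts is ordinary compound interest. A minimal sketch follows, assuming a flat 5% annual return purely for illustration (the trusts' actual returns varied):

```python
# Compound growth of a 1,000-pound bequest; the 5% rate is an assumed,
# illustrative figure, not the trusts' documented return.
principal = 1_000
rate = 0.05
for years in (100, 200):
    value = principal * (1 + rate) ** years
    print(f"after {years} years: about {value:,.0f} pounds")
# At 5% compounded annually the principal grows roughly 130-fold in a
# century and more than 17,000-fold in two centuries, which is the
# effect Mathon de la Cour's parody pushed to absurd lengths.
```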
By 1990, more than $2,000,000 had accumulated in Franklin's Philadelphia trust, which had loaned the money to local residents. From 1940 to 1990, the money was used mostly for mortgage loans. When the trust came due, Philadelphia decided to spend it on scholarships for local high school students. Franklin's Boston trust fund accumulated almost $5,000,000 during that same time; at the end of its first 100 years a portion was allocated to help establish a trade school that became the Franklin Institute of Boston, and the entire fund was later dedicated to supporting this institute.
In 1787, a group of prominent ministers in Lancaster, Pennsylvania, proposed the foundation of a new college named in Franklin's honor. Franklin donated £200 towards the development of Franklin College (now called Franklin & Marshall College).
Likeness and image
As the only person to have signed the Declaration of Independence in 1776, the Treaty of Alliance with France in 1778, the Treaty of Paris in 1783, and the U.S. Constitution in 1787, Franklin is considered one of the leading Founding Fathers of the United States. His pervasive influence in the early history of the nation has led to his being jocularly called "the only president of the United States who was never president of the United States."
Franklin's likeness is ubiquitous. Since 1914, it has adorned American $100 bills. From 1948 to 1963, Franklin's portrait was on the half-dollar. He has appeared on a $50 bill and on several varieties of the $100 bill from 1914 and 1918. Franklin also appears on the $1,000 Series EE savings bond.
On April 12, 1976, as part of a bicentennial celebration, Congress dedicated a tall marble statue in Philadelphia's Franklin Institute as the Benjamin Franklin National Memorial. Vice President Nelson Rockefeller presided over the dedication ceremony. Many of Franklin's personal possessions are on display at the institute. In London, his house at 36 Craven Street, which is the only surviving former residence of Franklin, was first marked with a blue plaque and has since been opened to the public as the Benjamin Franklin House. In 1998, workmen restoring the building dug up the remains of six children and four adults hidden below the home. A total of 15 bodies have been recovered. The Friends of Benjamin Franklin House (the organization responsible for the restoration) note that the bones were likely placed there by William Hewson, who lived in the house for two years and who had built a small anatomy school at the back of the house. They note that while Franklin likely knew what Hewson was doing, he probably did not participate in any dissections because he was much more of a physicist than a medical man.
He has been honored on U.S. postage stamps many times. The image of Franklin, the first postmaster general of the United States, occurs on the face of U.S. postage more than any other American save that of George Washington. He appeared on the first U.S. postage stamp issued in 1847. From 1908 through 1923, the U.S. Post Office issued a series of postage stamps commonly referred to as the Washington–Franklin Issues, in which Washington and Franklin were depicted many times over a 14-year period, the longest run of any one series in U.S. postal history. However, he only appears on a few commemorative stamps. Some of the finest portrayals of Franklin on record can be found on the engravings inscribed on the face of U.S. postage.
See also
Benjamin Franklin in popular culture
Bibliography of early American publishers and printers
Founders Online, database of Franklin's papers
Franklin's electrostatic machine
Fugio Cent, 1787 coin designed by Franklin
List of early American publishers and printers
List of opponents of slavery
List of richest Americans in history
The Papers of Benjamin Franklin
Notes
References
Bibliography
Biographies
Becker, Carl Lotus. "Benjamin Franklin", Dictionary of American Biography (1931) – vol 3, with links online
Wood, Gordon. "Benjamin Franklin" Encyclopedia Britannica (2021) online
Scholarly studies
"Franklin as Politician and Diplomatist" in The Century (October 1899) v. 57 pp. 881–899. By Paul Leicester Ford.
"Franklin as Printer and Publisher" in The Century (April 1899) v. 57 pp. 803–818.
"Franklin as Scientist" in The Century (September 1899) v.57 pp. 750–763. By Paul Leicester Ford.
Frasca, Ralph. "Benjamin Franklin's Printing Network and the Stamp Act." Pennsylvania History 71.4 (2004): 403–419 online .
Frasca, Ralph. Benjamin Franklin's printing network: disseminating virtue in early America (U of Missouri Press, 2006) excerpt.
Kidd, Thomas S. Benjamin Franklin: The Religious Life of a Founding Father (Yale UP, 2017) excerpt
Walters, Kerry S. Benjamin Franklin and His Gods. (1999). 213 pp. Takes position midway between D H Lawrence's brutal 1930 denunciation of Franklin's religion as nothing more than a bourgeois commercialism tricked out in shallow utilitarian moralisms and Owen Aldridge's sympathetic 1967 treatment of the dynamism and protean character of Franklin's "polytheistic" religion.
Historiography
Waldstreicher, David, ed. A Companion to Benjamin Franklin (2011), 25 essays by scholars emphasizing how historians have handled Franklin. online edition
Primary sources
"A Dissertation on Liberty and Necessity, Pleasure and Pain."
"Experiments and Observations on Electricity." (1751)
"Fart Proudly: Writings of Benjamin Franklin You Never Read in School." Carl Japikse, Ed. Frog Ltd.; Reprint ed. 2003.
"Heroes of America Benjamin Franklin."
"On Marriage."
"Satires and Bagatelles."
Autobiography, Poor Richard, & Later Writings (J.A. Leo Lemay, ed.) (Library of America, 1987 one-volume, 2005 two-volume)
Benjamin Franklin Reader edited by Walter Isaacson (2003)
Benjamin Franklin's Autobiography edited by J.A. Leo Lemay and P.M. Zall, (Norton Critical Editions, 1986); 390 pp. text, contemporary documents and 20th century analysis
Houston, Alan, ed. Franklin: The Autobiography and other Writings on Politics, Economics, and Virtue. Cambridge University Press, 2004. 371 pp.
Ketcham, Ralph, ed. The Political Thought of Benjamin Franklin. (1965, reprinted 2003). 459 pp.
Lass, Hilda, ed. The Fabulous American: A Benjamin Franklin Almanac. (1964). 222 pp.
Woody, Thomas, ed. Educational views of Benjamin Franklin (1931)
Leonard Labaree and others, eds., The Papers of Benjamin Franklin, 39 vols. to date (1959–2008), definitive edition, through 1783. This massive collection of Franklin's writings, and letters to him, is available in large academic libraries. It is most useful for detailed research on specific topics. The complete text of all the documents is online and searchable.
Poor Richard Improved by Benjamin Franklin (1751)
Silence Dogood, The Busy-Body, & Early Writings (J.A. Leo Lemay, ed.) (Library of America, 1987 one-volume, 2005 two-volume)
The Papers of Benjamin Franklin online, Sponsored by The American Philosophical Society and Yale University
The Way to Wealth. Applewood Books; 1986.
Writings.
For young readers
Asimov, Isaac. The Kite That Won the Revolution, a biography for children that focuses on Franklin's scientific and diplomatic contributions.
Fleming, Candace. Ben Franklin's Almanac: Being a True Account of the Good Gentleman's Life. Atheneum/Anne Schwart, 2003, 128 pp.
Miller, Brandon. Benjamin Franklin, American Genius: His Life and Ideas with 21 Activities (For Kids series) 2009 Chicago Review Press
External links
Benjamin Franklin and Electrostatics experiments and Franklin's electrical writings from Wright Center for Science Education
Benjamin Franklin Papers, Kislak Center for Special Collections, Rare Books and Manuscripts, University of Pennsylvania.
Franklin's impact on medicine – talk by medical historian Dr. Jim Leavesley celebrating the 300th anniversary of Franklin's birth on Ockham's Razor, ABC Radio National – December 2006
Video with sheet music of Benjamin Franklin's string quartet
Biographical and guides
"Special Report: Citizen Ben's Greatest Virtues" Time
"Writings of Benjamin Franklin" from C-SPAN's American Writers: A Journey Through History
Afsai, Shai (2019). "Benjamin Franklin's Influence on Mussar Thought and Practice: a Chronicle of Misapprehension." Review of Rabbinic Judaism 22, 2: 228–276.
Benjamin Franklin: A Documentary History by J.A. Leo Lemay
Benjamin Franklin: An extraordinary life PBS
Benjamin Franklin: First American Diplomat, 1776–1785 US State Department
Finding Franklin: A Resource Guide Library of Congress
Guide to Benjamin Franklin By a history professor at the University of Illinois.
Online edition of Franklin's personal library
The Electric Benjamin Franklin ushistory.org
Online writings
"A Silence Dogood Sampler" – Selections from Franklin's Silence Dogood writings
Abridgement of the Book of Common Prayer (1773), by Benjamin Franklin and Francis Dashwood, transcribed by Richard Mammana
Franklin's Last Will & Testament Transcription.
Library of Congress web resource: Benjamin Franklin ... In His Own Words
Online Works by Franklin
Yale edition of complete works, the standard scholarly edition
Online, searchable edition
Autobiography
The Autobiography of Benjamin Franklin at Project Gutenberg
The Autobiography of Benjamin Franklin LibriVox recording
In the arts
Benjamin Franklin 300 (1706–2006) Official web site of the Benjamin Franklin Tercentenary.
The Historical Society of Pennsylvania Collection of Benjamin Franklin Papers, including correspondence, government documents, writings and a copy of his will, are available for research use at the Historical Society of Pennsylvania.
Founding Fathers of the United States
1706 births
1790 deaths
18th-century American diplomats
18th-century American inventors
18th-century American journalists
18th-century American letter writers
18th-century American newspaper publishers (people)
18th-century American non-fiction writers
18th-century American philosophers
18th-century American publishers (people)
18th-century American scientists
18th-century American writers
18th-century pseudonymous writers
18th-century United States government officials
Abolitionists from Pennsylvania
Activists for African-American civil rights
Activists from Boston
Activists from Philadelphia
Age of Enlightenment
Almanac compilers
Ambassadors of the United States to France
Ambassadors of the United States to Sweden
American Freemasons
American autobiographers
American businesspeople in retailing
American chess players
American chess writers
American currency designers
American deists
American humorists
American male journalists
American male non-fiction writers
American memoirists
American people of English descent
American philosophers of culture
American philosophers of education
American philosophers of religion
American philosophy writers
American political philosophers
American printers
American slave owners
American typographers and type designers
American whistleblowers
Aphorists
Burials at Christ Church, Philadelphia
Chestnut Street (Philadelphia)
Chief administrators of the University of Pennsylvania
Coin designers
Colonial agents of the British Empire
Continental Congressmen from Pennsylvania
Creators of writing systems
Deaths from pleurisy
Editors of Pennsylvania newspapers
English-language spelling reform advocates
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
Founder fellows of the Royal Society of Edinburgh
Franklin family
Governors of Pennsylvania
Hall of Fame for Great Americans inductees
Harvard University people
Hellfire Club
Honorary members of the Saint Petersburg Academy of Sciences
Humor researchers
Independent scholars
Independent scientists
Infectious disease deaths in Pennsylvania
Inventors from Massachusetts
Les Neuf Sœurs
Masonic grand masters
Members of the American Philosophical Society
Members of the Lunar Society of Birmingham
Members of the Pennsylvania Provincial Assembly
Musicians from Boston
Musicians from Philadelphia
Pennsylvania independents
Pennsylvania postmasters
People associated with electricity
People from colonial Boston
People from colonial Pennsylvania
People of the American Enlightenment
Philosophers of history
Philosophers of literature
Philosophers from Massachusetts
Philosophers from Pennsylvania
Philosophers of science
Philosophers of technology
Political activists from Pennsylvania
Presbyterians from Pennsylvania
Printers from the Thirteen Colonies
Recipients of the Copley Medal
Recreational cryptographers
Respiratory disease deaths in Pennsylvania
Rhetoric theorists
Scientists from Boston
Scientists from Philadelphia
Signers of the United States Constitution
Signers of the United States Declaration of Independence
Simple living advocates
Social philosophers
Speakers of the Pennsylvania House of Representatives
Speakers of the Pennsylvania Provincial Assembly
Theorists on Western civilization
United States postmasters general
University and college founders
Vaccination advocates
Writers about activism and social change
Writers about religion and science
Writers from Boston
Writers from Philadelphia | Benjamin Franklin | [
"Biology"
] | 18,424 | [
"Vaccination",
"Vaccination advocates"
] |
3,989 | https://en.wikipedia.org/wiki/Banach%20space | In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly.
Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".
Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
Definition
A Banach space is a complete normed space $(X, \|\cdot\|).$
A normed space is a pair $(X, \|\cdot\|)$
consisting of a vector space $X$ over a scalar field $\mathbb{K}$ (where $\mathbb{K}$ is commonly $\mathbb{R}$ or $\mathbb{C}$) together with a distinguished
norm $\|\cdot\| : X \to \mathbb{R}.$ Like all norms, this norm induces a translation invariant
distance function, called the canonical or (norm) induced metric, defined for all vectors $x, y \in X$ by
$$d(x, y) := \|x - y\| = \|y - x\|.$$
This makes $X$ into a metric space $(X, d).$
A sequence $x_1, x_2, \ldots$ is called Cauchy in $(X, d)$, or $d$-Cauchy, or $\|\cdot\|$-Cauchy, if for every real $r > 0$ there exists some index $N$ such that
$$d(x_n, x_m) = \|x_n - x_m\| < r$$
whenever $m$ and $n$ are greater than $N.$
The normed space $(X, \|\cdot\|)$ is called a Banach space and the canonical metric $d$ is called a complete metric if $(X, d)$ is a complete metric space, which by definition means that for every Cauchy sequence $x_1, x_2, \ldots$ in $(X, d)$ there exists some $x \in X$ such that
$$\lim_{n \to \infty} x_n = x \quad \text{in } (X, d),$$
where, because $\|x_n - x\| = d(x_n, x),$ this sequence's convergence to $x$ can equivalently be expressed as
$$\lim_{n \to \infty} \|x_n - x\| = 0 \quad \text{in } \mathbb{R}.$$
The norm $\|\cdot\|$ of a normed space $(X, \|\cdot\|)$ is called a complete norm if $(X, \|\cdot\|)$ is a Banach space.
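For example, the vector space $P$ of all polynomial functions on $[0, 1],$ equipped with the supremum norm $\|f\| := \sup_{0 \leq t \leq 1} |f(t)|,$ is a normed space that is not a Banach space: by the Weierstrass approximation theorem there are polynomials converging uniformly to continuous functions that are not polynomials, and any such sequence is Cauchy in $P$ yet has no limit in $P.$ The space $C([0, 1])$ of all continuous functions with the same norm, by contrast, is complete and hence a Banach space.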
L-semi-inner product
For any normed space $(X, \|\cdot\|)$ there exists an L-semi-inner product $[\cdot, \cdot]$ on $X$ such that $\|x\| = \sqrt{[x, x]}$ for all $x \in X$; in general, there may be infinitely many L-semi-inner products that satisfy this condition. L-semi-inner products are a generalization of inner products, which are what fundamentally distinguish Hilbert spaces from all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces.
Characterization in terms of series
The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors.
A normed space $X$ is a Banach space if and only if each absolutely convergent series in $X$ converges to a value that lies within $X$; symbolically,
$$\sum_{n=1}^{\infty} \|v_n\| < \infty \quad \text{implies that} \quad \sum_{n=1}^{\infty} v_n \ \text{ converges in } X.$$
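For example, in the space $c_{00}$ of sequences with only finitely many nonzero terms, equipped with the norm $\|x\|_\infty := \sup_n |x_n|,$ the series $\sum_{n=1}^{\infty} 2^{-n} e_n$ (where $e_n$ denotes the $n$-th standard unit sequence) is absolutely convergent since $\sum_{n=1}^{\infty} \|2^{-n} e_n\|_\infty = \sum_{n=1}^{\infty} 2^{-n} = 1 < \infty,$ yet it does not converge in $c_{00}$ because its would-be sum $(2^{-n})_{n \geq 1}$ has infinitely many nonzero terms; this confirms that $c_{00}$ is not a Banach space.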
Topology
The canonical metric $d$ of a normed space $(X, \|\cdot\|)$ induces the usual metric topology on $X,$ which is referred to as the canonical or norm induced topology.
Every normed space is automatically assumed to carry this Hausdorff topology, unless indicated otherwise.
With this topology, every Banach space is a Baire space, although there exist normed spaces that are Baire but not Banach. The norm is always a continuous function with respect to the topology that it induces.
The open and closed balls of radius $r > 0$ centered at a point $x \in X$ are, respectively, the sets
$$B_r(x) := \{z \in X : \|z - x\| < r\} \qquad \text{and} \qquad C_r(x) := \{z \in X : \|z - x\| \leq r\}.$$
Any such ball is a convex and bounded subset of $X,$ but a compact ball / neighborhood exists if and only if $X$ is a finite-dimensional vector space.
In particular, no infinite–dimensional normed space can be locally compact or have the Heine–Borel property.
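For instance, in $\ell^2(\mathbb{N})$ the standard unit vectors $e_1, e_2, \ldots$ all lie in the closed unit ball, yet $\|e_n - e_m\|_2 = \sqrt{2}$ whenever $n \neq m,$ so this sequence admits no convergent subsequence and the closed unit ball is not compact.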
If $x_0$ is a vector and $s \neq 0$ is a scalar then
$$x_0 + B_r(x) = B_r(x_0 + x) \qquad \text{and} \qquad s\, B_r(x) = B_{|s| r}(s x).$$
The first of these identities shows that this norm-induced topology is translation invariant, which means that for any $x \in X$ and any subset $S \subseteq X,$ the subset $S$ is open (respectively, closed) in $X$ if and only if this is true of its translation $x + S.$
Consequently, the norm induced topology is completely determined by any neighbourhood basis at the origin. Some common neighborhood bases at the origin include:
$$\{B_r(0) : r > 0\}, \qquad \{C_r(0) : r > 0\}, \qquad \{B_{r_n}(0) : n \in \mathbb{N}\}, \qquad \{C_{r_n}(0) : n \in \mathbb{N}\},$$
where $r_1, r_2, \ldots$ is a sequence of positive real numbers that converges to $0$ in $\mathbb{R}$ (such as $r_n := \tfrac{1}{n}$ or $r_n := \tfrac{1}{2^n},$ for instance).
So for example, every open subset $U$ of $X$ can be written as a union
$$U = \bigcup_{x \in I} B_{r_x}(x)$$
indexed by some subset $I \subseteq U,$ where every radius $r_x$ may be picked from the aforementioned sequence $r_1, r_2, \ldots$ (the open balls can be replaced with closed balls, although then the indexing set $I$ and the radii may also need to be replaced).
Additionally, $I$ can always be chosen to be countable if $X$ is a separable space, which by definition means that $X$ contains some countable dense subset.
Homeomorphism classes of separable Banach spaces
All finite–dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic.
Every separable infinite–dimensional Hilbert space is linearly isometrically isomorphic to the separable Hilbert sequence space $\ell^2(\mathbb{N})$ with its usual norm $\|\cdot\|_2.$
The Anderson–Kadec theorem states that every infinite–dimensional separable Fréchet space is homeomorphic to the product space $\textstyle\prod_{i \in \mathbb{N}} \mathbb{R}$ of countably many copies of $\mathbb{R}$ (this homeomorphism need not be a linear map).
Thus all infinite–dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is unique up to a homeomorphism).
Since every Banach space is a Fréchet space, this is also true of all infinite–dimensional separable Banach spaces, including $\ell^2(\mathbb{N}).$
In fact, $\ell^2(\mathbb{N})$ is even homeomorphic to its own unit sphere $\{x \in \ell^2(\mathbb{N}) : \|x\|_2 = 1\},$ which stands in sharp contrast to finite–dimensional spaces (the Euclidean plane $\mathbb{R}^2$ is not homeomorphic to the unit circle, for instance).
This pattern in homeomorphism classes extends to generalizations of metrizable (locally Euclidean) topological manifolds known as metric Banach manifolds, which are metric spaces that are, around every point, locally homeomorphic to some open subset of a given Banach space (metric Hilbert manifolds and metric Fréchet manifolds are defined similarly).
For example, every open subset $U$ of a Banach space $X$ is canonically a metric Banach manifold modeled on $X$ since the inclusion map $U \to X$ is an open local homeomorphism.
Using Hilbert space microbundles, David Henderson showed in 1969 that every metric manifold modeled on a separable infinite–dimensional Banach (or Fréchet) space can be topologically embedded as an open subset of $\ell^2(\mathbb{N})$ and, consequently, also admits a unique smooth structure making it into a Hilbert manifold.
Compact and convex subsets
There is a compact subset of a Banach space whose convex hull is not closed and thus also not compact (see this footnote for an example).
However, as in all Banach spaces, the closed convex hull of this (and every other) compact subset will be compact. But if a normed space is not complete then it is in general not guaranteed that the closed convex hull of a compact subset will be compact; an example can even be found in a (non-complete) pre-Hilbert vector subspace of $\ell^2(\mathbb{N}).$
As a topological vector space
This norm-induced topology also makes $(X, \|\cdot\|)$ into what is known as a topological vector space (TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous. It is emphasized that the TVS $X$ is only a vector space together with a certain type of topology; that is to say, when considered as a TVS, it is not associated with any particular norm or metric (both of which are "forgotten"). This Hausdorff TVS is even locally convex because the set of all open balls centered at the origin forms a neighbourhood basis at the origin consisting of convex balanced open sets. This TVS is also normable, which by definition refers to any TVS whose topology is induced by some (possibly unknown) norm. Normable TVSs are characterized by being Hausdorff and having a bounded convex neighborhood of the origin.
All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example) and guarantees that the Banach–Steinhaus theorem holds.
Comparison of complete metrizable vector topologies
The open mapping theorem implies that if $\tau_1$ and $\tau_2$ are topologies on $X$ that make both $(X, \tau_1)$ and $(X, \tau_2)$ into complete metrizable TVSs (for example, Banach or Fréchet spaces) and if one topology is finer or coarser than the other then they must be equal (that is, if $\tau_1 \subseteq \tau_2$ or $\tau_2 \subseteq \tau_1$ then $\tau_1 = \tau_2$).
So for example, if $(X, p)$ and $(X, q)$ are Banach spaces with norm-induced topologies $\tau_p$ and $\tau_q,$ and if one of these spaces has some open ball that is also an open subset of the other space (or equivalently, if one of the identity maps $(X, p) \to (X, q)$ or $(X, q) \to (X, p)$ is continuous), then their topologies are identical and their norms $p$ and $q$ are equivalent.
Completeness
Complete norms and equivalent norms
Two norms $p$ and $q$ on a vector space $X$ are said to be equivalent if they induce the same topology; this happens if and only if there exist positive real numbers $c, C > 0$ such that $c\, q(x) \leq p(x) \leq C\, q(x)$ for all $x \in X.$ If $p$ and $q$ are two equivalent norms on a vector space $X$ then $(X, p)$ is a Banach space if and only if $(X, q)$ is a Banach space.
See this footnote for an example of a continuous norm on a Banach space that is not equivalent to that Banach space's given norm.
All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space.
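For example, on $\mathbb{R}^n$ the norms $\|\cdot\|_\infty$ and $\|\cdot\|_2$ satisfy $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\, \|x\|_\infty$ and so are equivalent, whereas on the infinite-dimensional space $\ell^2(\mathbb{N})$ the inequality $\|x\|_\infty \leq \|x\|_2$ still holds but no reverse inequality does (consider vectors with $n$ entries equal to $1,$ for which $\|x\|_\infty = 1$ while $\|x\|_2 = \sqrt{n}$); thus $\|\cdot\|_\infty$ is a continuous norm on $\ell^2(\mathbb{N})$ that is not equivalent to $\|\cdot\|_2.$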
Complete norms vs complete metrics
A metric $D$ on a vector space $X$ is induced by a norm on $X$ if and only if $D$ is translation invariant and absolutely homogeneous, which means that $D(sx, sy) = |s| D(x, y)$ for all scalars $s$ and all $x, y \in X,$ in which case the function $\|x\| := D(x, 0)$ defines a norm on $X$ and the canonical metric induced by $\|\cdot\|$ is equal to $D.$
Suppose that $(X, \|\cdot\|)$ is a normed space and that $\tau$ is the norm topology induced on $X.$ Suppose that $D$ is any metric on $X$ such that the topology that $D$ induces on $X$ is equal to $\tau.$ If $D$ is translation invariant then $(X, \|\cdot\|)$ is a Banach space if and only if $(X, D)$ is a complete metric space.
If $D$ is not translation invariant, then it may be possible for $(X, \|\cdot\|)$ to be a Banach space but for $(X, D)$ to not be a complete metric space (see this footnote for an example). In contrast, a theorem of Klee, which also applies to all metrizable topological vector spaces, implies that if there exists any complete metric $D$ on $X$ that induces the norm topology $\tau$ on $X,$ then $(X, \|\cdot\|)$ is a Banach space.
A Fréchet space is a locally convex topological vector space whose topology is induced by some translation-invariant complete metric.
Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as the space of real sequences with the product topology).
However, the topology of every Fréchet space is induced by some countable family of real-valued (necessarily continuous) maps called seminorms, which are generalizations of norms.
It is even possible for a Fréchet space to have a topology that is induced by a countable family of norms (such norms would necessarily be continuous)
but to not be a Banach/normable space because its topology cannot be defined by any single norm.
An example of such a space is the Fréchet space whose definition can be found in the article on spaces of test functions and distributions.
Complete norms vs complete topological vector spaces
There is another notion of completeness besides metric completeness and that is the notion of a complete topological vector space (TVS) or TVS-completeness, which uses the theory of uniform spaces.
Specifically, the notion of TVS-completeness uses a unique translation-invariant uniformity, called the canonical uniformity, that depends only on vector subtraction and the topology that the vector space is endowed with; in particular, this notion of TVS completeness is independent of whatever norm induced the topology (and it even applies to TVSs that are not metrizable).
Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space.
If a topological vector space is metrizable (as is the case for any norm-induced topology, for example), then it is a complete TVS if and only if it is a sequentially complete TVS, meaning that it is enough to check that every Cauchy sequence converges to some point of the space (that is, there is no need to consider the more general notion of arbitrary Cauchy nets).
If a topological vector space has a topology induced by some (possibly unknown) norm (such spaces are called normable), then it is a complete topological vector space if and only if it may be assigned a norm that induces its topology and also makes it into a Banach space.
A Hausdorff locally convex topological vector space is normable if and only if its strong dual space is normable, in which case the strong dual is a Banach space (the strong dual of a TVS is its continuous dual space endowed with a topology that generalizes the dual norm-induced topology; see this footnote for more details).
If is a metrizable locally convex TVS, then is normable if and only if is a Fréchet–Urysohn space.
This shows that in the category of locally convex TVSs, Banach spaces are exactly those complete spaces that are both metrizable and have metrizable strong dual spaces.
Completions
Every normed space can be isometrically embedded onto a dense vector subspace of some Banach space, where this Banach space is called a completion of the normed space. This Hausdorff completion is unique up to isometric isomorphism.
More precisely, for every normed space there exist a Banach space and a mapping such that is an isometric mapping and is dense in If is another Banach space such that there is an isometric isomorphism from onto a dense subset of then is isometrically isomorphic to
This Banach space is the Hausdorff completion of the normed space. The underlying metric space of the completion is the same as the metric completion of the original space, with the vector space operations extended from the original space to its completion. The completion of a normed space is sometimes denoted by a hat over its symbol.
General theory
Linear operators, isomorphisms
If $X$ and $Y$ are normed spaces over the same ground field, the set of all continuous linear maps from $X$ to $Y$ is denoted by $B(X, Y).$ In infinite-dimensional spaces, not all linear maps are continuous. A linear mapping from a normed space $X$ to another normed space is continuous if and only if it is bounded on the closed unit ball of $X.$ Thus, the vector space $B(X, Y)$ can be given the operator norm $\|T\| = \sup\{\, \|Tx\| : \|x\| \leq 1 \,\}.$
For $Y$ a Banach space, the space $B(X, Y)$ is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict the function space between two Banach spaces to only the short maps; in that case the space $B(X, Y)$ reappears as a natural bifunctor.
If is a Banach space, the space forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps.
If and are normed spaces, they are isomorphic normed spaces if there exists a linear bijection such that and its inverse are continuous. If one of the two spaces or is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces and are isometrically isomorphic if in addition, is an isometry, that is, for every in The Banach–Mazur distance between two isomorphic but not isometric spaces and gives a measure of how much the two spaces and differ.
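The following sketch (an illustrative assumption, not part of the article) estimates the operator norm of one specific 2×2 matrix viewed as a map between Euclidean spaces, comparing the exact value (the largest singular value) with a crude supremum over sampled unit vectors; the matrix and sample size are arbitrary choices.

```python
# Operator norm sketch: ||T|| = sup{ ||T x|| : ||x|| <= 1 }.
# For a matrix between Euclidean spaces this equals the largest singular value.
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

exact = np.linalg.norm(A, 2)                 # largest singular value

rng = np.random.default_rng(0)
samples = rng.normal(size=(100_000, 2))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)   # points with ||x||_2 = 1
estimate = np.max(np.linalg.norm(samples @ A.T, axis=1))    # sup over sampled sphere

print(f"largest singular value: {exact:.4f}, sampled supremum: {estimate:.4f}")
```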
Continuous and bounded linear functions and seminorms
Every continuous linear operator is a bounded linear operator and if dealing only with normed spaces then the converse is also true. That is, a linear operator between two normed spaces is bounded if and only if it is a continuous function. So in particular, because the scalar field (which is or ) is a normed space, a linear functional on a normed space is a bounded linear functional if and only if it is a continuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces.
If is a subadditive function (such as a norm, a sublinear function, or real linear functional), then is continuous at the origin if and only if is uniformly continuous on all of ; and if in addition then is continuous if and only if its absolute value is continuous, which happens if and only if is an open subset of
And very importantly for applying the Hahn–Banach theorem, a linear functional is continuous if and only if this is true of its real part; moreover, the norm of the functional equals the norm of its real part, and the real part completely determines the functional, which is why the Hahn–Banach theorem is often stated only for real linear functionals.
Also, a linear functional on a normed space is continuous if and only if the seminorm given by its absolute value is continuous, which happens if and only if there exists a continuous seminorm dominating it; this last statement involving the linear functional and a dominating seminorm is encountered in many versions of the Hahn–Banach theorem.
Basic notions
The Cartesian product of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as the sum norm $\|(x, y)\| = \|x\| + \|y\|$ and the max norm $\|(x, y)\| = \max(\|x\|, \|y\|),$
which correspond (respectively) to the coproduct and product in the category of Banach spaces and short maps (discussed above). For finite (co)products, these norms give rise to isomorphic normed spaces, and the product (or the direct sum) is complete if and only if the two factors are complete.
If $M$ is a closed linear subspace of a normed space $X,$ there is a natural norm on the quotient space $X / M,$ given by the distance from a coset's representatives to $M.$
The quotient $X / M$ is a Banach space when $X$ is complete. The quotient map from $X$ onto $X / M,$ sending a vector to its class, is linear, onto, and has norm one, except when $M = X,$ in which case the quotient is the null space.
The closed linear subspace $M$ of $X$ is said to be a complemented subspace of $X$ if $M$ is the range of a surjective bounded linear projection from $X$ onto $M.$ In this case, the space $X$ is isomorphic to the direct sum of $M$ and the kernel of the projection.
Suppose that $X$ and $Y$ are Banach spaces and that $T \in B(X, Y).$ There exists a canonical factorization of $T$ as a quotient map followed by an injective map:
the first map is the quotient map from $X$ onto the quotient of $X$ by the kernel of $T,$ and the second map sends every class in the quotient to its image under $T$ in $Y.$ This is well defined because all elements in the same class have the same image. The second mapping is a linear bijection from the quotient onto the range of $T,$ but its inverse need not be bounded.
Classical spaces
Basic examples of Banach spaces include: the Lp spaces and their special cases, the sequence spaces that consist of scalar sequences indexed by the natural numbers; among them, the space $\ell^1$ of absolutely summable sequences and the space $\ell^2$ of square summable sequences; the space $c_0$ of sequences tending to zero and the space $\ell^{\infty}$ of bounded sequences; and the space $C(K)$ of continuous scalar functions on a compact Hausdorff space $K,$ equipped with the max norm $\|f\|_{C(K)} = \max\{\, |f(x)| : x \in K \,\}.$
According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some $C(K).$ For every separable Banach space $X,$ there is a closed subspace $M$ of $\ell^1$ such that $X$ is isometrically isomorphic to $\ell^1 / M.$
Any Hilbert space serves as an example of a Banach space. A Hilbert space $H$ is complete for a norm of the form
$\|x\|_H = \sqrt{\langle x, x \rangle},$
where $\langle \cdot, \cdot \rangle$ is the inner product, linear in its first argument, that satisfies the following: $\langle y, x \rangle = \overline{\langle x, y \rangle}$ for all $x, y \in H,$ and $\langle x, x \rangle \geq 0,$ with $\langle x, x \rangle = 0$ if and only if $x = 0.$
For example, the space is a Hilbert space.
The Hardy spaces and the Sobolev spaces are examples of Banach spaces that are related to the Lp spaces and have additional structure. They are important in different branches of analysis, harmonic analysis and partial differential equations among others.
Banach algebras
A Banach algebra is a Banach space over the real or complex numbers, together with a structure of algebra over the same field, such that the product map $(a, b) \mapsto ab$ is continuous. An equivalent norm can be found so that $\|ab\| \leq \|a\|\,\|b\|$ for all $a, b.$
Examples
The Banach space $C(K)$ of continuous functions on a compact Hausdorff space $K,$ with the pointwise product, is a Banach algebra.
The disk algebra $A(\mathbf{D})$ consists of functions holomorphic in the open unit disk $\mathbf{D}$ and continuous on its closure $\overline{\mathbf{D}}.$ Equipped with the max norm on $\overline{\mathbf{D}},$ the disk algebra $A(\mathbf{D})$ is a closed subalgebra of $C(\overline{\mathbf{D}}).$
The Wiener algebra is the algebra of functions on the unit circle with absolutely convergent Fourier series. Via the map associating a function on to the sequence of its Fourier coefficients, this algebra is isomorphic to the Banach algebra where the product is the convolution of sequences.
For every Banach space the space of bounded linear operators on with the composition of maps as product, is a Banach algebra.
A C*-algebra is a complex Banach algebra with an antilinear involution $a \mapsto a^*$ such that $\|a^* a\| = \|a\|^2.$ The space $B(H)$ of bounded linear operators on a Hilbert space $H$ is a fundamental example of a C*-algebra. The Gelfand–Naimark theorem states that every C*-algebra is isometrically isomorphic to a C*-subalgebra of some $B(H).$ The space $C(K)$ of complex continuous functions on a compact Hausdorff space $K$ is an example of a commutative C*-algebra, where the involution associates to every function its complex conjugate.
Dual space
If is a normed space and the underlying field (either the real or the complex numbers), the continuous dual space is the space of continuous linear maps from into or continuous linear functionals.
The notation for the continuous dual is in this article.
Since the ground field is a Banach space (using the absolute value as norm), the dual of every normed space is a Banach space. The Dixmier–Ng theorem characterizes the dual spaces of Banach spaces.
The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem.
In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional.
An important special case is the following: for every vector $x$ in a normed space $X,$ there exists a continuous linear functional $f$ on $X$ such that $f(x) = \|x\|_X$ and $\|f\|_{X'} \leq 1.$
When $x$ is not equal to the zero vector, the functional $f$ must have norm one, and is called a norming functional for $x.$
The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane.
The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane.
A subset of a Banach space is total if its linear span is dense in the space. The subset is total if and only if the only continuous linear functional that vanishes on it is the zero functional: this equivalence follows from the Hahn–Banach theorem.
If is the direct sum of two closed linear subspaces and then the dual of is isomorphic to the direct sum of the duals of and
If $M$ is a closed linear subspace of $X,$ one can associate the orthogonal of $M$ in the dual, $M^{\perp} = \{\, f \in X' : f(m) = 0 \text{ for all } m \in M \,\}.$
The orthogonal $M^{\perp}$ is a closed linear subspace of the dual. The dual of $M$ is isometrically isomorphic to $X' / M^{\perp}.$
The dual of $X / M$ is isometrically isomorphic to $M^{\perp}.$
The dual of a separable Banach space need not be separable, but if the dual of a Banach space is separable, then the space itself is separable.
When the dual is separable, the above criterion for totality can be used for proving the existence of a countable total subset in the space.
Weak topologies
The weak topology on a Banach space is the coarsest topology on for which all elements in the continuous dual space are continuous.
The norm topology is therefore finer than the weak topology.
It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed.
A norm-continuous linear map between two Banach spaces and is also weakly continuous, that is, continuous from the weak topology of to that of
If a Banach space is infinite-dimensional, there exist linear functionals on it which are not continuous. The space of all linear maps from the space to the underlying field (this space is called the algebraic dual space, to distinguish it from the continuous dual space) also induces a topology on the space which is finer than the weak topology, and much less used in functional analysis.
On a dual space there is a topology weaker than the weak topology of the dual, called the weak* topology.
It is the coarsest topology on for which all evaluation maps where ranges over are continuous.
Its importance comes from the Banach–Alaoglu theorem.
The Banach–Alaoglu theorem can be proved using Tychonoff's theorem about infinite products of compact Hausdorff spaces.
When is separable, the unit ball of the dual is a metrizable compact in the weak* topology.
Examples of dual spaces
The dual of $c_0$ is isometrically isomorphic to $\ell^1$: for every bounded linear functional $f$ on $c_0,$ there is a unique element $y = (y_n) \in \ell^1$ such that $f(x) = \sum_{n} x_n y_n$ for every $x = (x_n) \in c_0,$ and the norm of $f$ equals $\|y\|_{\ell^1}.$
The dual of $\ell^1$ is isometrically isomorphic to $\ell^{\infty}.$
The dual of the Lebesgue space $L^p([0, 1])$ is isometrically isomorphic to $L^q([0, 1])$ when $1 \leq p < \infty$ and $\tfrac{1}{p} + \tfrac{1}{q} = 1.$
For every vector in a Hilbert space the mapping
defines a continuous linear functional on The Riesz representation theorem states that every continuous linear functional on is of the form for a uniquely defined vector in
The mapping is an antilinear isometric bijection from onto its dual
When the scalars are real, this map is an isometric isomorphism.
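A hedged finite-dimensional sketch of the duality just described (the exponent p = 3 and the vector y are arbitrary choices, not from the article): a fixed vector defines a functional through the coordinate pairing, Hölder's inequality bounds it, and the bound is attained, so the dual norm of the functional equals the conjugate-exponent norm of the vector. For p = 2 this is the finite-dimensional shadow of the Riesz representation.

```python
# Hoelder duality sketch on R^n: f(x) = sum_i x_i * y_i satisfies
# |f(x)| <= ||x||_p * ||y||_q with 1/p + 1/q = 1, and the bound is attained,
# so the dual norm of f equals ||y||_q.
import numpy as np

p = 3.0
q = p / (p - 1.0)                       # conjugate exponent, 1/p + 1/q = 1
y = np.array([2.0, -1.0, 0.5, 3.0])

# Extremal vector for which Hoelder's inequality is an equality.
x = np.sign(y) * np.abs(y) ** (q - 1.0)

f_x   = float(np.dot(x, y))
bound = float(np.linalg.norm(x, p) * np.linalg.norm(y, q))
dual  = float(np.linalg.norm(y, q))

print(f"f(x) = {f_x:.6f}, Hoelder bound = {bound:.6f} (equal), ||y||_q = {dual:.6f}")
```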
When is a compact Hausdorff topological space, the dual of is the space of Radon measures in the sense of Bourbaki.
The subset of consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of
The extreme points of are the Dirac measures on
The set of Dirac measures on equipped with the w*-topology, is homeomorphic to
The result has been extended by Amir and Cambern to the case when the multiplicative Banach–Mazur distance between the two function spaces is strictly less than 2.
The theorem is no longer true when the distance equals 2.
In the commutative Banach algebra the maximal ideals are precisely kernels of Dirac measures on
More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters—not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology.
In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual
Not every unital commutative Banach algebra is of the form for some compact Hausdorff space However, this statement holds if one places in the smaller category of commutative C*-algebras.
Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra is isometrically isomorphic to a space.
The Hausdorff compact space here is again the maximal ideal space, also called the spectrum of in the C*-algebra context.
Bidual
If $X$ is a normed space, the (continuous) dual $X''$ of the dual $X'$ is called the bidual, or second dual, of $X.$
For every normed space $X$ there is a natural map $F_X : X \to X''$ defined by $F_X(x)(f) = f(x)$ for all $x \in X$ and all $f \in X'.$
Thus $F_X(x)$ is a continuous linear functional on $X',$ that is, an element of $X''.$ The map $F_X$ is a linear map from $X$ to $X''.$
As a consequence of the existence of a norming functional for every this map is isometric, thus injective.
For example, the dual of $c_0$ is identified with $\ell^1,$ and the dual of $\ell^1$ is identified with $\ell^{\infty},$ the space of bounded scalar sequences.
Under these identifications, the natural map $F_{c_0}$ is the inclusion map from $c_0$ to $\ell^{\infty}.$ It is indeed isometric, but not onto.
If the natural map $F_X$ is surjective, then the normed space $X$ is called reflexive (see below).
Being the dual of a normed space, the bidual is complete, therefore, every reflexive normed space is a Banach space.
Using the isometric embedding it is customary to consider a normed space as a subset of its bidual.
When the normed space is a Banach space, it is viewed as a closed linear subspace of its bidual. If the space is not reflexive, the unit ball of the space is a proper subset of the unit ball of its bidual.
The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual.
In other words, for every in the bidual, there exists a net in so that
The net may be replaced by a weakly*-convergent sequence when the dual is separable.
On the other hand, elements of the bidual of $\ell^1$ that are not in $\ell^1$ cannot be weak*-limits of sequences in $\ell^1,$ since $\ell^1$ is weakly sequentially complete.
Banach's theorems
Here are the main general results about Banach spaces that go back to the time of Banach's book (1932) and are related to the Baire category theorem.
According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors.
Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional.
The Banach–Steinhaus theorem is not limited to Banach spaces.
It can be extended for example to the case where is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood of in such that all in are uniformly bounded on
This result is a direct consequence of the preceding Banach isomorphism theorem and of the canonical factorization of bounded linear maps.
This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from onto sending to the sum
Reflexivity
The normed space is called reflexive when the natural map
is surjective. Reflexive normed spaces are Banach spaces.
This is a consequence of the Hahn–Banach theorem.
Further, by the open mapping theorem, if there is a bounded linear operator from a reflexive Banach space onto a Banach space, then the latter space is reflexive.
Indeed, if the dual of a Banach space is separable, then the space itself is separable.
If a Banach space is reflexive and separable, then the dual of its dual is separable, so its dual is separable.
Hilbert spaces are reflexive. The $L^p$ spaces are reflexive when $1 < p < \infty.$ More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem.
The spaces are not reflexive.
In these examples of non-reflexive spaces, the bidual is "much larger" than the original space.
Namely, under the natural isometric embedding of into given by the Hahn–Banach theorem, the quotient is infinite-dimensional, and even nonseparable.
However, Robert C. James has constructed an example of a non-reflexive space, usually called "the James space" and denoted by such that the quotient is one-dimensional.
Furthermore, this space is isometrically isomorphic to its bidual.
When is reflexive, it follows that all closed and bounded convex subsets of are weakly compact.
In a Hilbert space the weak compactness of the unit ball is very often used in the following way: every bounded sequence in has weakly convergent subsequences.
Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems.
For example, every convex continuous function on the unit ball of a reflexive space attains its minimum at some point of that ball.
As a special case of the preceding result, when is a reflexive space over every continuous linear functional in attains its maximum on the unit ball of
The following theorem of Robert C. James provides a converse statement.
The theorem can be extended to give a characterization of weakly compact convex sets.
On every non-reflexive Banach space there exist continuous linear functionals that are not norm-attaining.
However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual of
Weak convergences of sequences
A sequence in a Banach space is weakly convergent to a vector if converges to for every continuous linear functional in the dual The sequence is a weakly Cauchy sequence if converges to a scalar limit for every in
A sequence in the dual is weakly* convergent to a functional if converges to for every in
Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem.
When the sequence in is a weakly Cauchy sequence, the limit above defines a bounded linear functional on the dual that is, an element of the bidual of and is the limit of in the weak*-topology of the bidual.
The Banach space is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in
It follows from the preceding discussion that reflexive spaces are weakly sequentially complete.
An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the zero vector.
The unit vector basis of $\ell^p$ for $1 < p < \infty,$ or of $c_0,$ is another example of a weakly null sequence, that is, a sequence that converges weakly to zero.
For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that converges in norm to zero.
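A small sketch (toy computation, not from the article) of a weakly null sequence that is not norm null: the unit vectors of l2, paired against a fixed square-summable sequence y (chosen arbitrarily here), give pairings that tend to zero while every unit vector has norm one.

```python
# The unit vectors e_n of l^2 converge weakly to zero: for a fixed y in l^2 the
# pairing <e_n, y> = y_n tends to 0, while ||e_n|| = 1, so convergence is not in norm.
y = [1.0 / (k + 1) for k in range(10_000)]      # a fixed square-summable sequence

def pairing_with_unit_vector(n):
    # <e_n, y> simply picks out the n-th coordinate of y.
    return y[n]

for n in [1, 10, 100, 1000, 9999]:
    print(f"<e_{n}, y> = {pairing_with_unit_vector(n):.6f},  ||e_{n}|| = 1.0")
```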
The unit vector basis of $\ell^1$ is not weakly Cauchy.
Weakly Cauchy sequences in $\ell^1$ are weakly convergent, since $L^1$-spaces are weakly sequentially complete.
Actually, weakly convergent sequences in $\ell^1$ are norm convergent. This means that $\ell^1$ satisfies Schur's property.
Results involving the basis
Weakly Cauchy sequences and the $\ell^1$ basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal.
A complement to this result is due to Odell and Rosenthal (1975).
By the Goldstine theorem, every element of the unit ball of the bidual is the weak*-limit of a net in the unit ball of the space. When the space does not contain $\ell^1,$ every element of the unit ball of the bidual is the weak*-limit of a sequence in the unit ball of the space.
When the Banach space is separable, the unit ball of the dual equipped with the weak*-topology, is a metrizable compact space and every element in the bidual defines a bounded function on :
This function is continuous for the compact topology of if and only if is actually in considered as subset of
Assume in addition for the rest of the paragraph that does not contain
By the preceding result of Odell and Rosenthal, the function is the pointwise limit on of a sequence of continuous functions on it is therefore a first Baire class function on
The unit ball of the bidual is a pointwise compact subset of the first Baire class on
Sequences, weak and weak* compactness
When is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology, hence every bounded sequence in the dual has weakly* convergent subsequences.
This applies to separable reflexive spaces, but more is true in this case, as stated below.
The weak topology of a Banach space is metrizable if and only if is finite-dimensional. If the dual is separable, the weak topology of the unit ball of is metrizable.
This applies in particular to separable reflexive Banach spaces.
Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences.
A Banach space is reflexive if and only if each bounded sequence in has a weakly convergent subsequence.
A weakly compact subset of $\ell^1$ is norm-compact. Indeed, every sequence in such a set has weakly convergent subsequences by the Eberlein–Šmulian theorem, and these are norm convergent by the Schur property of $\ell^1.$
Type and cotype
A way to classify Banach spaces is through the probabilistic notions of type and cotype; these two notions measure how far a Banach space is from a Hilbert space.
Schauder bases
A Schauder basis in a Banach space is a sequence of vectors with the property that for every vector in the space there exist uniquely defined scalars, depending on that vector, such that the vector is the limit in norm of the partial sums of its basis expansion.
Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense.
It follows from the Banach–Steinhaus theorem that the linear mappings are uniformly bounded by some constant
The coordinate functionals are the maps which assign to every vector in the space each of its coordinates in the above expansion.
They are called biorthogonal functionals. When the basis vectors have norm one, the coordinate functionals have norm at most twice the basis constant in the dual of the space.
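A toy sketch (assumed example, not from the article) of basis expansions and coordinate functionals for the standard unit vector basis, truncated to finitely many coordinates: the coordinate functionals read off coordinates, and the partial sums of the expansion converge to the vector in norm.

```python
# For the standard unit vector basis (truncated l^2), the coordinate functionals are
# e_n*(x) = x_n, and the partial sums  P_N x = sum_{n<N} e_n*(x) e_n  converge to x in norm.
import math

x = [(-1) ** n / (n + 1) for n in range(1000)]          # a vector with square-summable coordinates

def partial_sum_error(N):
    # ||x - P_N x||_2 : only the coordinates from N onward remain.
    return math.sqrt(sum(t * t for t in x[N:]))

for N in [1, 10, 100, 1000]:
    print(f"N = {N:4d}   ||x - P_N x||_2 = {partial_sum_error(N):.6f}")
```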
Most classical separable spaces have explicit bases.
The Haar system is a basis for $L^p([0, 1])$ when $1 \leq p < \infty.$
The trigonometric system is a basis in $L^p(\mathbf{T})$ when $1 < p < \infty.$
The Schauder system is a basis in the space $C([0, 1]).$
The question of whether the disk algebra has a basis remained open for more than forty years, until Bočkarev showed in 1974 that admits a basis constructed from the Franklin system.
Since every vector in a Banach space with a basis is the limit of with of finite rank and uniformly bounded, the space satisfies the bounded approximation property.
The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis.
Robert C. James characterized reflexivity in Banach spaces with a basis: the space with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete.
In this case, the biorthogonal functionals form a basis of the dual of
Tensor product
Let $X$ and $Y$ be two vector spaces over a field $K.$ The tensor product $X \otimes Y$ of $X$ and $Y$ is a $K$-vector space with a bilinear mapping $T : X \times Y \to X \otimes Y$ which has the following universal property:
if $T_1 : X \times Y \to Z$ is any bilinear mapping into a $K$-vector space $Z,$ then there exists a unique linear mapping $f : X \otimes Y \to Z$ such that $T_1 = f \circ T.$
The image under $T$ of a couple $(x, y)$ in $X \times Y$ is denoted by $x \otimes y$ and called a simple tensor.
Every element in $X \otimes Y$ is a finite sum of such simple tensors.
There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and injective cross norm introduced by A. Grothendieck in 1955.
In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the projective tensor product of two Banach spaces is the completion of the algebraic tensor product equipped with the projective tensor norm, and similarly for the injective tensor product.
Grothendieck proved in particular that
where is a compact Hausdorff space, the Banach space of continuous functions from to and the space of Bochner-measurable and integrable functions from to and where the isomorphisms are isometric.
The two isomorphisms above are the respective extensions of the map sending the tensor to the vector-valued function
Tensor products and the approximation property
Let be a Banach space. The tensor product is identified isometrically with the closure in of the set of finite rank operators.
When has the approximation property, this closure coincides with the space of compact operators on
There is a natural norm-one linear map from the projective tensor product of two Banach spaces to their injective tensor product,
obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when is the dual of
Precisely, for every Banach space the map
is one-to-one if and only if has the approximation property.
Grothendieck conjectured that the projective and injective tensor products must be different whenever the two factor Banach spaces are infinite-dimensional.
This was disproved by Gilles Pisier in 1983.
Pisier constructed an infinite-dimensional Banach space such that the projective and injective tensor products of the space with itself are equal. Furthermore, just like Enflo's example, this space is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space $B(H)$ of bounded operators on an infinite-dimensional Hilbert space does not have the approximation property.
Some classification results
Characterizations of Hilbert space among Banach spaces
A necessary and sufficient condition for the norm of a Banach space to be associated to an inner product is the parallelogram identity: $\|x + y\|^2 + \|x - y\|^2 = 2\left(\|x\|^2 + \|y\|^2\right)$ for all vectors $x, y.$
It follows, for example, that the Lebesgue space $L^p([0, 1])$ is a Hilbert space only when $p = 2.$
If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives: $\langle x, y\rangle = \tfrac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2\right).$
For complex scalars, defining the inner product so as to be $\mathbf{C}$-linear in $x$ and antilinear in $y,$ the polarization identity gives: $\langle x, y\rangle = \tfrac{1}{4}\left(\|x + y\|^2 - \|x - y\|^2 + i\left(\|x + i y\|^2 - \|x - i y\|^2\right)\right).$
To see that the parallelogram law is sufficient, one observes in the real case that $\langle x, y\rangle$ is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property and $\langle i x, y\rangle = i\langle x, y\rangle.$ The parallelogram law implies that $\langle x, y\rangle$ is additive in $x.$
It follows that it is linear over the rationals, thus linear by continuity.
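A short numerical check (illustrative vectors drawn at random; not from the article) of the two identities above: the Euclidean norm satisfies the parallelogram identity and polarization recovers the dot product, while the l1 norm fails the identity and therefore cannot come from an inner product.

```python
# Parallelogram identity and real polarization identity, checked numerically.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)

def parallelogram_defect(norm):
    return norm(x + y) ** 2 + norm(x - y) ** 2 - 2 * norm(x) ** 2 - 2 * norm(y) ** 2

l2 = lambda v: np.linalg.norm(v, 2)
l1 = lambda v: np.linalg.norm(v, 1)

print("parallelogram defect, l2 norm:", round(parallelogram_defect(l2), 12))   # ~0
print("parallelogram defect, l1 norm:", round(parallelogram_defect(l1), 12))   # generally nonzero

# Real polarization identity: <x, y> = (||x+y||^2 - ||x-y||^2) / 4.
polarized = (l2(x + y) ** 2 - l2(x - y) ** 2) / 4
print("polarization:", round(polarized, 12), " dot product:", round(float(np.dot(x, y)), 12))
```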
Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available.
The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant : Kwapień proved that if
for every integer and all families of vectors then the Banach space is isomorphic to a Hilbert space.
Here, denotes the average over the possible choices of signs
In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces.
Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer $n,$ any finite-dimensional normed space with dimension sufficiently large compared to $n$ contains subspaces nearly isometric to the $n$-dimensional Euclidean space.
The next result gives the solution of the so-called homogeneous space problem. An infinite-dimensional Banach space is said to be homogeneous if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to $\ell^2$ is homogeneous, and Banach asked for the converse.
An infinite-dimensional Banach space is hereditarily indecomposable when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces.
The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space contains either a subspace with an unconditional basis, or a hereditarily indecomposable subspace, and the latter, in particular, is not isomorphic to its closed hyperplanes.
If the space is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann for spaces with an unconditional basis that such a space is isomorphic to $\ell^2.$
Metric classification
If $T$ is an isometry from the Banach space $X$ onto the Banach space $Y$ (where both $X$ and $Y$ are vector spaces over $\mathbf{R}$), then the Mazur–Ulam theorem states that $T$ must be an affine transformation.
In particular, if $T$ maps the zero of $X$ to the zero of $Y,$ then $T$ must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure.
Topological classification
Finite-dimensional Banach spaces are homeomorphic as topological spaces if and only if they have the same dimension as real vector spaces.
The Anderson–Kadec theorem (1965–66) states that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Toruńczyk, who proved that any two infinite-dimensional Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset.
Spaces of continuous functions
When two compact Hausdorff spaces are homeomorphic, the Banach spaces of continuous functions on them are isometric. Conversely, when the two compact spaces are not homeomorphic, the (multiplicative) Banach–Mazur distance between the corresponding function spaces must be greater than or equal to 2; see above the results by Amir and Cambern.
Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin:
The situation is different for countably infinite compact Hausdorff spaces.
Every countably infinite compact Hausdorff space $K$ is homeomorphic to some closed interval of ordinal numbers $\langle 1, \alpha \rangle = \{\, \gamma : 1 \leq \gamma \leq \alpha \,\}$
equipped with the order topology, where $\alpha$ is a countably infinite ordinal. The Banach space $C(K)$ is then isometric to $C(\langle 1, \alpha \rangle).$ When $\alpha \leq \beta$ are two countably infinite ordinals, the spaces $C(\langle 1, \alpha \rangle)$ and $C(\langle 1, \beta \rangle)$ are isomorphic if and only if $\beta < \alpha^{\omega}.$
For example, the Banach spaces
are mutually non-isomorphic.
Examples
Derivatives
Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details.
The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces.
Fréchet differentiability is a stronger condition than Gateaux differentiability.
The quasi-derivative is another generalization of directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability.
Generalizations
Several important spaces in functional analysis, for instance the space of all infinitely differentiable functions on the real line, or the space of all distributions on the real line, are complete but are not normed vector spaces and hence not Banach spaces.
In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces arising as limits of Fréchet spaces.
See also
Notes
References
Bibliography
External links
Functional analysis
Science and technology in Poland
Topological vector spaces | Banach space | [
"Mathematics"
] | 9,277 | [
"Functions and mappings",
"Functional analysis",
"Vector spaces",
"Mathematical objects",
"Space (mathematics)",
"Topological vector spaces",
"Mathematical relations"
] |
4,012 | https://en.wikipedia.org/wiki/Basel%20Convention | The Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and Their Disposal, usually known as the Basel Convention, is an international treaty that was designed to reduce the movements of hazardous waste between nations, and specifically to restrict the transfer of hazardous waste from developed to less developed countries. It does not address the movement of radioactive waste, controlled by the International Atomic Energy Agency. The Basel Convention is also intended to minimize the rate and toxicity of wastes generated, to ensure their environmentally sound management as closely as possible to the source of generation, and to assist developing countries in environmentally sound management of the hazardous and other wastes they generate.
The convention was opened for signature on 21 March 1989, and entered into force on 5 May 1992. As of June 2024, there are 191 parties to the convention. In addition, Haiti and the United States have signed the convention but did not ratify it.
Following a petition urging action on the issue signed by more than a million people around the world, most of the world's countries, but not the United States, agreed in May 2019 to an amendment of the Basel Convention to include plastic waste as regulated material. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the transportation of plastic waste is prohibited in just about every other country.
History
With the tightening of environmental laws (for example, RCRA) in developed nations in the 1970s, disposal costs for hazardous waste rose dramatically. At the same time, the globalization of shipping made cross-border movement of waste easier, and many less developed countries were desperate for foreign currency. Consequently, the trade in hazardous waste, particularly to poorer countries, grew rapidly. In 1990, OECD countries exported around 1.8 million tons of hazardous waste. Although most of this waste was shipped to other developed countries, a number of high-profile incidents of hazardous waste-dumping led to calls for regulation.
One of the incidents which led to the creation of the Basel Convention was the Khian Sea waste disposal incident, in which a ship carrying incinerator ash from the city of Philadelphia in the United States dumped half of its load on a beach in Haiti before being forced away. It sailed for many months, changing its name several times. Unable to unload the cargo in any port, the crew was believed to have dumped much of it at sea.
Another incident was a 1988 case in which five ships transported 8,000 barrels of hazardous waste from Italy to the small Nigerian town of Koko in exchange for $100 monthly rent which was paid to a Nigerian for the use of his farmland.
At its meeting that took place from 27 November to 1 December 2006, the parties of the Basel Agreement focused on issues of electronic waste and the dismantling of ships.
Increased trade in recyclable materials has led to an increase in the market for used products such as computers. This market is valued in billions of dollars. At issue is the point at which used computers stop being a "commodity" and become "waste".
As of June 2023, there are 191 parties to the treaty, which includes 188 UN member states, the Cook Islands, the European Union, and the State of Palestine. The five UN member states that are not party to the treaty are East Timor, Fiji, Haiti, South Sudan, and United States.
Definition of hazardous waste
Waste falls under the scope of the convention if it is within the category of wastes listed in Annex I of the convention and it exhibits one of the hazardous characteristics contained in Annex III.
In other words, it must both be listed and possess a characteristic such as being explosive, flammable, toxic, or corrosive. The other way that a waste may fall under the scope of the convention is if it is defined as or considered to be a hazardous waste under the laws of either the exporting country, the importing country, or any of the countries of transit.
The definition of the term disposal is made in Article 2 paragraph 4 and simply refers to Annex IV, which gives a list of operations which are understood as disposal or recovery. Examples of disposal are broad, including recovery and recycling.
Alternatively, to fall under the scope of the convention, it is sufficient for waste to be included in Annex II, which lists other wastes, such as household wastes and residue that comes from incinerating household waste.
Radioactive waste that is covered under other international control systems and wastes from the normal operation of ships are not covered.
Annex IX attempts to define wastes which are not considered hazardous wastes and which would be excluded from the scope of the Basel Convention. If these wastes however are contaminated with hazardous materials to an extent causing them to exhibit an Annex III characteristic, they are not excluded.
Obligations
In addition to conditions on the import and export of the above wastes, there are stringent requirements for notice, consent and tracking for movement of wastes across national boundaries. The convention places a general prohibition on the exportation or importation of wastes between parties and non-parties. The exception to this rule is where the waste is subject to another treaty that does not take away from the Basel Convention. The United States is a notable non-party to the convention and has a number of such agreements for allowing the shipping of hazardous wastes to Basel Party countries.
The OECD Council also has its own control system that governs the transboundary movement of hazardous materials between OECD member countries. This allows, among other things, the OECD countries to continue trading in wastes with countries like the United States that have not ratified the Basel Convention.
Parties to the convention must honor import bans of other parties.
Article 4 of the Basel Convention calls for an overall reduction of waste generation. By encouraging countries to keep wastes within their boundaries and as close as possible to its source of generation, the internal pressures should provide incentives for waste reduction and pollution prevention. Parties are generally prohibited from exporting covered wastes to, or importing covered waste from, non-parties to the convention.
The convention states that illegal hazardous waste traffic is criminal but contains no enforcement provisions.
According to Article 12, parties are directed to adopt a protocol that establishes liability rules and procedures that are appropriate for damage that comes from the movement of hazardous waste across borders.
The current consensus is that as space is not classed as a "country" under the specific definition, export of e-waste to non-terrestrial locations would not be covered.
Basel Ban Amendment
After the initial adoption of the convention, some least developed countries and environmental organizations argued that it did not go far enough. Many nations and NGOs argued for a total ban on shipment of all hazardous waste to developing countries. In particular, the original convention did not prohibit waste exports to any location except Antarctica but merely required a notification and consent system known as "prior informed consent" or PIC. Further, many waste traders sought to exploit the good name of recycling and begin to justify all exports as moving to recycling destinations. Many believed a full ban was needed including exports for recycling. These concerns led to several regional waste trade bans, including the Bamako Convention.
Lobbying at the 1995 Basel conference by developing countries, Greenpeace and several European countries such as Denmark led to the adoption of an amendment to the convention in 1995, termed the Basel Ban Amendment to the Basel Convention. The amendment was accepted by 86 countries and the European Union, but for many years did not enter into force (as that requires ratification by three-fourths of the member states to the convention). On 6 September 2019, Croatia became the 97th country to ratify the amendment, which entered into force 90 days later, on 5 December 2019. The amendment prohibits the export of hazardous waste from a list of developed (mostly OECD) countries to developing countries. The Basel Ban applies to export for any reason, including recycling. An area of special concern for advocates of the amendment was the sale of ships for salvage, shipbreaking. The Ban Amendment was strenuously opposed by a number of industry groups as well as nations including Australia and Canada. The number of ratifications required for the entry into force of the Ban Amendment is under debate: amendments to the convention enter into force after ratification of "three-fourths of the Parties who accepted them" [Art. 17.5]; so far, the parties of the Basel Convention have not agreed whether this would be three-fourths of the parties that were party to the Basel Convention when the ban was adopted, or three-fourths of the current parties of the convention [see Report of COP 9 of the Basel Convention]. The status of the amendment ratifications can be found on the Basel Secretariat's web page. The European Union fully implemented the Basel Ban in its Waste Shipment Regulation (EWSR), making it legally binding in all EU member states. Norway and Switzerland have similarly fully implemented the Basel Ban in their legislation.
In the light of the blockage concerning the entry into force of the Ban Amendment, Switzerland and Indonesia have launched a "Country-led Initiative" (CLI) to discuss in an informal manner a way forward to ensure that the trans boundary movements of hazardous wastes, especially to developing countries and countries with economies in the transition, do not lead to an unsound management of hazardous wastes. This discussion aims at identifying and finding solutions to the reasons why hazardous wastes are still brought to countries that are not able to treat them in a safe manner. It is hoped that the CLI will contribute to the realization of the objectives of the Ban Amendment. The Basel Convention's website informs about the progress of this initiative.
Regulation of plastic waste
In the wake of popular outcry, in May 2019 most of the world's countries, but not the United States, agreed to amend the Basel Convention to include plastic waste as a regulated material. The world's oceans are estimated to contain 100 million metric tons of plastic, with up to 90% of this quantity originating in land-based sources. The United States, which produces an annual 42 million metric tons of plastic waste, more than any other country in the world, opposed the amendment, but since it is not a party to the treaty it did not have an opportunity to vote on it to try to block it. Information about, and visual images of, wildlife, such as seabirds, ingesting plastic, and scientific findings that nanoparticles do penetrate through the blood–brain barrier were reported to have fueled public sentiment for coordinated international legally binding action. Over a million people worldwide signed a petition demanding official action. Although the United States is not a party to the treaty, export shipments of plastic waste from the United States are now "criminal traffic as soon as the ships get on the high seas," according to the Basel Action Network (BAN), and carriers of such shipments may face liability, because the Basel Convention as amended in May 2019 prohibits the transportation of plastic waste to just about every other country.
The Basel Convention contains three main entries on plastic wastes in Annex II, VIII and IX of the Convention. The Plastic Waste Amendments of the convention are now binding on 186 States. In addition to ensuring the trade in plastic waste is more transparent and better regulated, under the Basel Convention governments must take steps not only to ensure the environmentally sound management of plastic waste, but also to tackle plastic waste at its source.
Basel watchdog
The Basel Action Network (BAN) is a charitable civil society non-governmental organization that works as a consumer watchdog for implementation of the Basel Convention. BAN's principal aim is fighting exportation of toxic waste, including plastic waste, from industrialized societies to developing countries. BAN is based in Seattle, Washington, United States, with a partner office in the Philippines. BAN works to curb trans-border trade in hazardous electronic waste, land dumping, incineration, and the use of prison labor.
See also
Asbestos and the law
Bamako Convention
Electronic waste by country
Rotterdam Convention
Stockholm Convention
References
Further reading
Toxic Exports, Jennifer Clapp, Cornell University Press, 2001.
Challenging the Chip: Labor Rights and Environmental Justice in the Global Electronics Industry, Ted Smith, David A. Sonnenfeld, and David Naguib Pellow, eds., Temple University Press link, .
"Toxic Trade: International Knowledge Networks & the Development of the Basel Convention," Jason Lloyd, International Public Policy Review, UCL.
External links
Text of the Convention
"A Simplified Guide to the Basel Convention"
Text of the regulation no.1013/2006 of the European Union on shipments of waste
Flow of Waste among Basel Parties
Introductory note to the Basel Convention by Dr. Katharina Kummer Peiry, Executive Secretary of the Basel Convention, UNEP on the website of the UN Audiovisual Library of International Law
Basel Convention, Treaty available in ECOLEX-the gateway to environmental law (English)
Organisations
Basel Action Network
Africa Institute for the Environmentally Sound Management of Hazardous and other Wastes a.k.a. Basel Convention Regional Centre Pretoria
Page on the Basel Convention at Greenpeace
Basel Convention Coordinating Centre for Asia and the Pacific
Pollution
Environmental treaties
Waste treaties
Chemical safety
Treaties concluded in 1989
Treaties entered into force in 1992
1992 in the environment
Hazardous waste | Basel Convention | [
"Chemistry",
"Technology"
] | 2,720 | [
"Chemical safety",
"Chemical accident",
"Hazardous waste",
"nan"
] |
4,024 | https://en.wikipedia.org/wiki/Butterfly%20effect | In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.
The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome.
The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos.
The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences.
History
In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole".
Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology.
In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908.
In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping."
The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" features time travel.
More precisely, though, almost the exact idea and the exact phrasing —of a tiny insect's wing affecting the entire atmosphere's winds— was published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published:
In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario.
Lorenz wrote:
In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Elsewhere he stated:
Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted Does the flap of a butterfly's wings in Brazil set off a tornado in Texas? as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely.
The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing creates a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different—but it's also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado.
The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions.
Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos.
While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero). This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics.
In the book entitled The Essence of Chaos published in 1993, Lorenz defined butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC) in . In the same book, Lorenz applied the activity of skiing and developed an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC.
Illustrations
{|class="wikitable" width=100%
|-
! colspan=3|The butterfly effect in the Lorenz attractor
|-
| colspan="2" style="text-align:center;" | time 0 ≤ t ≤ 30 (larger)
| style="text-align:center;" | z coordinate (larger)
|-
| colspan="2" style="text-align:center;"|
| style="text-align:center;"|
|-
|colspan=3 | These figures show two segments of the three-dimensional evolution of two trajectories (one in blue, and the other in yellow) for the same period of time in the Lorenz attractor starting at two initial points that differ by only 10−5 in the x-coordinate. Initially, the two trajectories seem coincident, as indicated by the small difference between the z coordinate of the blue and yellow trajectories, but for t > 23 the difference is as large as the value of the trajectory. The final position of the cones indicates that the two trajectories are no longer coincident at t = 30.
|-
| style="text-align:center;" colspan="3" | An animation of the Lorenz attractor shows the continuous evolution.
|}
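The divergence in these figures can be reproduced numerically. The following Python sketch (an illustration, not taken from Lorenz's work) integrates the Lorenz equations with the classical parameters σ = 10, ρ = 28, β = 8/3 using a simple forward-Euler scheme for two trajectories whose initial x-coordinates differ by 10^-5, and prints their separation; the step size and reporting interval are arbitrary choices.

# Minimal sketch: forward-Euler integration of the Lorenz equations for two
# nearby initial conditions; step size and print times are illustrative only.
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    dx = sigma * (y - x)          # dx/dt = sigma (y - x)
    dy = x * (rho - z) - y        # dy/dt = x (rho - z) - y
    dz = x * y - beta * z         # dz/dt = x y - beta z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

dt = 0.0001
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-5, 1.0, 1.0)        # differs only in x, by 10^-5
for step in range(1, 300001):     # integrate up to t = 30
    a = lorenz_step(a, dt)
    b = lorenz_step(b, dt)
    if step % 50000 == 0:         # report every 5 time units
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * dt:5.1f}   separation = {sep:.6f}")

With these settings the separation remains tiny for the first several time units and then grows to roughly the size of the attractor, mirroring the behaviour described above.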
Theory and mathematical definition
Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather) since it is impossible to measure the starting atmospheric conditions completely accurately.
A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows:
The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances.
If M is the state space for the map f^t, then f^t displays sensitive dependence to initial conditions if for any x in M and any δ > 0, there are y in M with distance d(x, y) < δ, and a time τ, such that d(f^τ(x), f^τ(y)) > e^(aτ) d(x, y)
for some positive parameter a. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems.
The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map:
x_(n+1) = 4 x_n (1 − x_n), with 0 ≤ x_0 ≤ 1,
which, unlike most chaotic maps, has a closed-form solution:
x_n = sin^2(2^n θ π),
where the initial condition parameter θ is given by θ = (1/π) arcsin(√x_0). For rational θ, after a finite number of iterations x_n maps into a periodic sequence. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2^n shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps x_n folded within the range [0, 1].
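A minimal numerical illustration of this stretching, in Python (the seed values and iteration counts are arbitrary choices): two nearby starting points are iterated under the logistic map above, and the roughly factor-of-two growth of their difference is visible until the folding confines it to order one.

# Iterate x_(n+1) = 4 x_n (1 - x_n) from two nearby seeds and watch the
# difference grow until it saturates at the size of the interval [0, 1].
x, y = 0.3, 0.3 + 1e-10
for n in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 10 == 0:
        print(f"n = {n:2d}   x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.3e}")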
In physical systems
In weather
Overview
The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat overstated."
Differentiating types of butterfly effects
The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. Subsequent authors have considered Palmer et al.'s suggestions and have aimed to present their perspective without raising specific contentions.
The third kind of butterfly effect with finite predictability was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions address the validity of these two formulas for estimating predictability limits.
A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance.
Recent debates on butterfly effects
The first kind of butterfly effect (BE1), known as SDIC (Sensitive Dependence on Initial Conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles. In more recent discussions published by Physics Today, it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events.
For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another more recent study.
Finite predictability in chaotic systems
According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statement:
(A). The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere.
(B). In the 1960s, the two-week predictability limit was originally estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become a consensus.
Recently, a short video has been created to present Lorenz's perspective on the predictability limit.
A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and Artificial Intelligence (AI) techniques.
Revised perspectives on chaotic and non-chaotic systems
By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view of "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) approach the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The motion of a double pendulum provides an analogy: for large angles of swing the motion of the pendulum is often chaotic, while for small angles of swing it is non-chaotic.
Multistability is defined when a system (e.g., the double pendulum system) contains more than one bounded attractor that depends only on initial conditions. Multistability can be illustrated using kayaking, where the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC. On the other hand, when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows:
"The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons."
In quantum mechanics
The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. The random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist.
Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos.
In popular culture
The butterfly effect has appeared across mediums such as literature (for instance, A Sound of Thunder), films and television (such as The Simpsons), video games (such as Life Is Strange), webcomics (such as Homestuck), AI-driven large language models, and more.
See also
Avalanche effect
Behavioral cusp
Cascading failure
Catastrophe theory
Causality
Chain reaction
Clapotis
Determinism
Domino effect
Dynamical system
Fractal
Great Stirrup Controversy
Innovation butterfly
Kessler syndrome
Norton's dome
Numerical analysis
Point of divergence
Positive feedback
Potentiality and actuality
Representativeness heuristic
Ripple effect
Snowball effect
Traffic congestion
Tropical cyclogenesis
Unintended consequences
References
Further reading
James Gleick, Chaos: Making a New Science, New York: Viking, 1987. 368 pp.
Bradbury, Ray. "A Sound of Thunder." Collier's. 28 June 1952
External links
Weather and Chaos: The Work of Edward N. Lorenz. A short documentary that explains the "butterfly effect" in context of Lorenz's work.
The Chaos Hypertextbook. An introductory primer on chaos and fractals
New England Complex Systems Institute - Concepts: Butterfly Effect
ChaosBook.org. Advanced graduate textbook on chaos (no fractals)
Causality
Chaos theory
Determinism
Metaphors referring to insects
Physical phenomena
Stability theory | Butterfly effect | [
"Physics",
"Mathematics"
] | 4,149 | [
"Physical phenomena",
"Stability theory",
"Dynamical systems"
] |
4,052 | https://en.wikipedia.org/wiki/BCPL | BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967.
Design
BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only a fraction of the compiler's code (the machine-specific back end) needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java).
The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the target architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit.
The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking.
The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %).
BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process.
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.
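As an analogy only (in Python rather than BCPL, and with a made-up slot number), the following sketch mimics the mechanism described above: routines live in a shared, numerically indexed table, and a library routine is replaced by saving the original entry and installing a wrapper that calls it.

# Illustrative analogy, not BCPL: the "global vector" as a shared table indexed
# by agreed-upon numbers. The slot number 76 is hypothetical.
globvec = [None] * 200

WRITES = 76                                              # agreed slot for the output routine
globvec[WRITES] = lambda s: print(s, end="")

original = globvec[WRITES]                               # save the original binding
globvec[WRITES] = lambda s: original(f"[traced] {s}")    # install a debugging wrapper

globvec[WRITES]("Hello\n")                               # callers are unaffected by the swap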
BCPL was the first brace programming language and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) or [ and ] in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.
The book BCPL: The language and its compiler describes the philosophy of BCPL as follows:
History
BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.
BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book.
BCPL is the language in which the original "Hello, World!" program was written. The first MUD was also written in BCPL (MUD1).
Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project. Among other projects, the Bravo document preparation system was written in BCPL.
An early compiler, bootstrapped in 1969, by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method.
By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second generation IMPs used in the ARPANET.
There was also a version produced for the BBC Micro in the mid-1980s, by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England.
Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C. Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language name. The language most accepted as being C's successor is C++ (with ++ being C's increment operator), although meanwhile, a D programming language also exists.
In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems.
Martin Richards maintains a modern version of BCPL on his website, last updated in 2023. This can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on musical automated score following.
A common informal MIME type for BCPL is .
Examples
Hello world
Richards and Whitby-Strevens provide an example of the "Hello, World!" program for BCPL using a standard system header, 'LIBHDR':
GET "LIBHDR"
LET START() BE WRITES("Hello, World")
Further examples
If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors.
Print factorials:
GET "LIBHDR"
LET START() = VALOF $(
FOR I = 1 TO 5 DO
WRITEF("%N! = %I4*N", I, FACT(I))
RESULTIS 0
$)
AND FACT(N) = N = 0 -> 1, N * FACT(N - 1)
Count solutions to the N queens problem:
GET "LIBHDR"
GLOBAL $(
COUNT: 200
ALL: 201
$)
LET TRY(LD, ROW, RD) BE
TEST ROW = ALL THEN
COUNT := COUNT + 1
ELSE $(
LET POSS = ALL & ~(LD | ROW | RD)
UNTIL POSS = 0 DO $(
LET P = POSS & -POSS
POSS := POSS - P
TRY(LD + P << 1, ROW + P, RD + P >> 1)
$)
$)
LET START() = VALOF $(
ALL := 1
FOR I = 1 TO 12 DO $(
COUNT := 0
TRY(0, 0, 0)
WRITEF("%I2-QUEENS PROBLEM HAS %I5 SOLUTIONS*N", I, COUNT)
ALL := 2 * ALL + 1
$)
RESULTIS 0
$)
References
Further reading
Martin Richards, The BCPL Reference Manual (Memorandum M-352, Project MAC, Cambridge, MA, USA, July, 1967)
Martin Richards, BCPL - a tool for compiler writing and systems programming (Proceedings of the Spring Joint Computer Conference, Vol 34, pp 557–566, 1969)
Martin Richards, Arthur Evans, Robert F. Mabee, The BCPL Reference Manual (MAC TR-141, Project MAC, Cambridge, MA, USA, 1974)
Martin Richards, Colin Whitby-Strevens, BCPL, the language and its compiler (Cambridge University Press, 1980)
External links
Martin Richards' BCPL distribution
Martin Richards' BCPL Reference Manual, 1967 by Dennis M. Ritchie
BCPL entry in the Jargon File
Nordier & Associates' x86 port
ArnorBCPL manual (1986, Amstrad PCW/CPC)
How BCPL evolved from CPL, Martin Richards
Ritchie's The Development of the C Language has commentary about BCPL's influence on C
The BCPL Cintsys and Cintpos User Guide
BCPL Reference Manual, 1975 Xerox Palo Alto Research Center
History of computing in the United Kingdom
Procedural programming languages
Programming languages created in 1967
Structured programming languages
Systems programming languages
University of Cambridge Computer Laboratory | BCPL | [
"Technology"
] | 2,285 | [
"History of computing",
"History of computing in the United Kingdom"
] |
4,064 | https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem | In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.
Formally: if f: S^n → R^n is continuous then there exists an x ∈ S^n such that: f(−x) = f(x).
The case n = 1 can be illustrated by saying that there always exists a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case.
The case n = 2 is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space.
The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that S^n is the n-sphere and B^n is the n-ball:
If g: S^n → R^n is a continuous odd function, then there exists an x ∈ S^n such that: g(x) = 0.
If g: B^n → R^n is a continuous function which is odd on S^(n−1) (the boundary of B^n), then there exists an x ∈ B^n such that: g(x) = 0.
History
The first historical mention of the statement of the Borsuk–Ulam theorem predates its first proof, which was given by Karol Borsuk in 1933; there, the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors.
Equivalent statements
The following statements are equivalent to the Borsuk–Ulam theorem.
With odd functions
A function g is called odd (aka antipodal or antipode-preserving) if for every x: g(−x) = −g(x).
The Borsuk–Ulam theorem is equivalent to the following statement: A continuous odd function from an n-sphere into Euclidean n-space has a zero. PROOF:
If the theorem is correct, then it is specifically correct for odd functions, and for an odd function g, g(−x) = g(x) iff g(x) = 0. Hence every odd continuous function has a zero.
For every continuous function f, the following function is continuous and odd: g(x) = f(x) − f(−x). If every odd continuous function has a zero, then g has a zero, and therefore, f(x) = f(−x). Hence the theorem is correct.
With retractions
Define a retraction as a function h: S^n → S^(n−1). The Borsuk–Ulam theorem is equivalent to the following claim: there is no continuous odd retraction.
Proof: If the theorem is correct, then every continuous odd function from S^n must include 0 in its range. However, 0 ∉ S^(n−1), so there cannot be a continuous odd function whose range is S^(n−1).
Conversely, if it is incorrect, then there is a continuous odd function g: S^n → R^n with no zeroes. Then we can construct another odd function h: S^n → S^(n−1) by:
h(x) = g(x) / ‖g(x)‖;
since g has no zeroes, h is well-defined and continuous. Thus we have a continuous odd retraction.
Proofs
1-dimensional case
The 1-dimensional case can easily be proved using the intermediate value theorem (IVT).
Let g be the odd real-valued continuous function on a circle defined by g(x) = f(x) − f(−x). Pick an arbitrary x. If g(x) = 0 then we are done. Otherwise, without loss of generality, g(x) > 0. But g(−x) = −g(x) < 0. Hence, by the IVT, there is a point y between x and −x at which g(y) = 0.
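The argument can be mirrored numerically. The sketch below (Python; the "temperature" function and the iteration count are arbitrary choices, not part of the theorem) applies bisection to g(θ) = f(θ) − f(θ + π), whose values at θ = 0 and θ = π have opposite signs because g is odd, and recovers a pair of antipodal points with equal values.

# Find antipodal points with equal value on a circle by bisecting
# g(theta) = f(theta) - f(theta + pi); note g(theta + pi) = -g(theta).
import math

def f(theta):                      # arbitrary continuous "temperature" on the circle
    return math.sin(theta) + 0.5 * math.cos(3 * theta) + 2.0

def g(theta):
    return f(theta) - f(theta + math.pi)

lo, hi = 0.0, math.pi              # g(pi) = -g(0), so the endpoints have opposite signs (or are both zero)
if g(lo) > 0:
    lo, hi = hi, lo                # keep the invariant g(lo) <= 0 <= g(hi)
for _ in range(60):                # bisection, justified by the intermediate value theorem
    mid = (lo + hi) / 2
    if g(mid) <= 0:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2
print(theta, f(theta), f(theta + math.pi))   # the two antipodal values agree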
General case
Algebraic topological proof
Assume that h: S^n → S^(n−1) is an odd continuous function with n > 2 (the case n = 1 is treated above, the case n = 2 can be handled using basic covering theory). By passing to orbits under the antipodal action, we then get an induced continuous function h': RP^n → RP^(n−1) between real projective spaces, which induces an isomorphism on fundamental groups. By the Hurewicz theorem, the induced ring homomorphism on cohomology with Z_2 coefficients [where Z_2 denotes the field with two elements],
Z_2[a]/a^(n+1) = H*(RP^n; Z_2) ← H*(RP^(n−1); Z_2) = Z_2[b]/b^n,
sends b to a. But then we get that b^n = 0 is sent to a^n ≠ 0, a contradiction.
One can also show the stronger statement that any odd map S^(n−1) → S^(n−1) has odd degree and then deduce the theorem from this result.
Combinatorial proof
The Borsuk–Ulam theorem can be proved from Tucker's lemma.
Let g: S^n → R^n be a continuous odd function. Because g is continuous on a compact domain, it is uniformly continuous. Therefore, for every ε > 0, there is a δ > 0 such that, for every two points of S^n which are within δ of each other, their images under g are within ε of each other.
Define a triangulation of S^n with edges of length at most δ. Label each vertex v of the triangulation with a label l(v) ∈ {±1, ±2, ..., ±n} in the following way:
The absolute value of the label is the index of the coordinate with the highest absolute value of g: |l(v)| = arg max_k |g(v)_k|.
The sign of the label is the sign of that coordinate of g, so that: sgn(l(v)) = sgn(g(v)_(|l(v)|)).
Because g is odd, the labeling is also odd: l(−v) = −l(v). Hence, by Tucker's lemma, there are two adjacent vertices u, v with opposite labels. Assume w.l.o.g. that the labels are l(u) = 1, l(v) = −1. By the definition of l, this means that in both g(u) and g(v), coordinate #1 is the largest coordinate: in g(u) this coordinate is positive while in g(v) it is negative. By the construction of the triangulation, the distance between u and v is at most δ, so the distance between g(u) and g(v) is at most ε; in particular |g(u)_1| + |g(v)_1| = |g(u)_1 − g(v)_1| ≤ ε (since g(u)_1 and g(v)_1 have opposite signs), and so |g(u)_1| ≤ ε. But since the largest coordinate of g(u) is coordinate #1, this means that |g(u)_k| ≤ ε for each coordinate k. So |g(u)| ≤ cε, where c is some constant depending on n and the norm which you have chosen.
The above is true for every ε > 0; since S^n is compact there must hence be a point u in which |g(u)| = 0.
Corollaries
No subset of R^n is homeomorphic to S^n
The ham sandwich theorem: For any compact sets A1, ..., An in R^n we can always find a hyperplane dividing each of them into two subsets of equal measure.
Equivalent results
Above we showed how to prove the Borsuk–Ulam theorem from Tucker's lemma. The converse is also true: it is possible to prove Tucker's lemma from the Borsuk–Ulam theorem. Therefore, these two theorems are equivalent.
Generalizations
In the original theorem, the domain of the function f is the unit n-sphere (the boundary of the unit n-ball). In general, it is true also when the domain of f is the boundary of any open bounded symmetric subset of R^(n+1) containing the origin (here, symmetric means that if x is in the subset then −x is also in the subset).
More generally, if M is a compact n-dimensional Riemannian manifold, and f: M → R^n is continuous, there exists a pair of points x and y in M such that f(x) = f(y) and x and y are joined by a geodesic of length δ, for any prescribed δ > 0.
Consider the function A which maps a point to its antipodal point: A(x) = −x. Note that A(A(x)) = x. The original theorem claims that there is a point x in which f(A(x)) = f(x). In general, this is true also for every function A for which A(A(x)) = x. However, in general this is not true for other functions A.
See also
Topological combinatorics
Necklace splitting problem
Ham sandwich theorem
Kakutani's theorem (geometry)
Imre Bárány
Notes
References
External links
The Borsuk-Ulam Explorer. An interactive illustration of Borsuk-Ulam Theorem.
Theorems in algebraic topology
Combinatorics
Theory of continuous functions
Theorems in topology | Borsuk–Ulam theorem | [
"Mathematics"
] | 1,437 | [
"Theorems in mathematical analysis",
"Discrete mathematics",
"Mathematical theorems",
"Theory of continuous functions",
"Fixed-point theorems",
"Combinatorics",
"Theorems in topology",
"Topology",
"Mathematical problems",
"Theorems in algebraic topology"
] |
4,077 | https://en.wikipedia.org/wiki/Binary%20prefix | A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1,048,576), and gibi (Gi, 2^30 = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files.
The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" (10^3), "mega" (10^6) and "giga" (10^9), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2,097,152 bytes, instead of 2 × 10^6 = 2,000,000 bytes.
On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240 bytes; and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 10^9 ≈ 2,300,000,000 bytes or to 2.3 × 2^30 ≈ 2,469,606,195 bytes, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union.
Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage) continued using those same terms and symbols with the decimal meaning. Since then, the major standards organizations have expressly disapproved the use of SI prefixes to denote binary multiples, and recommended or mandated the use of the IEC prefixes for that purpose, but the use of SI prefixes in this sense has persisted in some fields.
Definitions
In 2022, the International Bureau of Weights and Measures (BIPM) adopted the decimal prefixes ronna for 1000^9 and quetta for 1000^10. In analogy to the existing binary prefixes, a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) suggested the prefixes robi (2^90) and quebi (2^100) for their binary counterparts, but as of 2022 no corresponding binary prefixes have been adopted.
Comparison of binary and decimal prefixes
The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 27% for the quetta prefix. Although the prefixes ronna and quetta have been defined, as of 2022 no names have been officially assigned to the corresponding binary prefixes.
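These percentages follow directly from the definitions. A short Python computation (prefix names as listed in this article; nothing else assumed) reproduces the range quoted here:

# Relative difference between the binary and decimal reading of each prefix:
# (2^(10n) - 10^(3n)) / 10^(3n), from kilo/kibi (n = 1) to quetta (n = 10).
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta", "ronna", "quetta"]
for n, name in enumerate(prefixes, start=1):
    binary, decimal = 2 ** (10 * n), 10 ** (3 * n)
    print(f"{name:7s} {(binary - decimal) / decimal:7.2%}")   # 2.40% ... 26.77%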
History
Early prefixes
The original metric system adopted by France in 1795 included two binary prefixes named double- (2×) and demi- (1/2×). However, these were not retained when the SI prefixes were internationally adopted by the 11th CGPM conference in 1960.
Storage capacity
Main memory
Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used binary addressing and could address 2048 words of 36 bits each, while the IBM 702 (1953) used a decimal system, and could address ten thousand 7-bit words.
By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of states of their address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses.
While early documentation specified those memory sizes as exact numbers of units such as 4096 or 8192 (usually words, bytes, or bits), computer professionals also started using the long-established metric system prefixes "kilo", "mega", "giga", etc., defined to be powers of 10, to mean instead the nearest powers of two; namely, 2^10 = 1024, 2^20 = 1024^2, 2^30 = 1024^3, etc. The corresponding metric prefix symbols ("k", "M", "G", etc.) were used with the same binary meanings. The symbol for 2^10 = 1024 could be written either in lower case ("k") or in uppercase ("K"). The latter was often used intentionally to indicate the binary rather than decimal meaning. This convention, which could not be extended to higher powers, was widely used in the documentation of the IBM 360 (1964) and of the IBM System/370 (1972), of the CDC 7600, of the DEC PDP-11/70 (1975) and of the DEC VAX-11/780 (1977).
In other documents, however, the metric prefixes and their symbols were used to denote powers of 10, but usually with the understanding that the values given were approximate, often truncated down. Thus, for example, a 1967 document by Control Data Corporation (CDC) abbreviated "2^16 = 65,536 words" as "65K words" (rather than "64K" or "66K"), while the documentation of the HP 21MX real-time computer (1974) denoted 196,608 = 192 × 1024 as "196K" and 2^20 = 1,048,576 as "1M".
These three possible meanings of "k" and "K" ("1024", "1000", or "approximately 1000") were used loosely around the same time, sometimes by the same company. The HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory. The use of SI prefixes, and the use of "K" instead of "k" remained popular in computer-related publications well into the 21st century, although the ambiguity persisted. The correct meaning was often clear from the context; for instance, in a binary-addressed computer, the true memory size had to be either a power of 2, or a small integer multiple thereof. Thus a "512 megabyte" RAM module was generally understood to have 512 × 2^20 = 536,870,912 bytes, rather than 512,000,000.
Hard disks
In specifying disk drive capacities, manufacturers have always used conventional decimal SI prefixes representing powers of 10. Storage in a rotating disk drive is organized in platters and tracks whose sizes and counts are determined by mechanical engineering constraints so that the capacity of a disk drive has hardly ever been a simple multiple of a power of 2. For example, the first commercially sold disk drive, the IBM 350 (1956), had 50 physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters.
Moreover, since the 1960s, many disk drives used IBM's disk format, where each track was divided into blocks of user-specified size; and the block sizes were recorded on the disk, subtracting from the usable capacity. For example, the IBM 3336 disk pack was quoted to have a 200-megabyte capacity, achieved only with a single full-track block in each of its 808 × 19 tracks.
Decimal megabytes were used for disk capacity by the CDC in 1974. The Seagate ST-412, one of several types installed in the IBM PC/XT, had a capacity of 10,027,008 bytes when formatted as 306 × 4 tracks and 32 256-byte sectors per track, which was quoted as "10 MB". Similarly, a "300 GB" hard drive can be expected to offer only slightly more than 300 × 10^9 = 300,000,000,000 bytes, not 300 × 2^30 bytes (which would be about 322,122,547,200 bytes, or "322 GB"). The first terabyte (SI prefix, 10^12 bytes) hard disk drive was introduced in 2007. Decimal prefixes were generally used by information processing publications when comparing hard disk capacities.
Some programs and operating systems, such as Microsoft Windows, still use "MB" and "GB" to denote binary prefixes even when displaying disk drive capacities and file sizes, as did Classic Mac OS. Thus, for example, the capacity of a "10 MB" (decimal "M") disk drive could be reported as about "9.5 MB", and that of a "300 GB" drive as "279.4 GB". Some operating systems, such as Mac OS X, Ubuntu, and Debian, have been updated to use "MB" and "GB" to denote decimal prefixes when displaying disk drive capacities and file sizes. Some manufacturers, such as Seagate Technology, have released recommendations stating that properly-written software and documentation should specify clearly whether prefixes such as "K", "M", or "G" mean binary or decimal multipliers.
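The two reporting conventions can be reproduced with a small formatting helper. The following Python sketch (function names and rounding are illustrative, not any operating system's actual code) prints the same byte count under both interpretations, matching the "300 GB" example above:

# Format a byte count with decimal (SI) prefixes and with IEC binary prefixes.
def human(n, base, units):
    value = float(n)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= base

def si(n):
    return human(n, 1000, ["B", "kB", "MB", "GB", "TB"])

def iec(n):
    return human(n, 1024, ["B", "KiB", "MiB", "GiB", "TiB"])

print(si(300_000_000_000), "=", iec(300_000_000_000))   # 300.0 GB = 279.4 GiB (shown by some systems as "279.4 GB")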
Floppy disks
Floppy disks used a variety of formats, and their capacities were usually specified with SI-like prefixes "K" and "M" with either decimal or binary meaning. The capacity of the disks was often specified without accounting for the internal formatting overhead, leading to more irregularities.
The early 8-inch diskette formats could contain less than a megabyte with the capacities of those devices specified in kilobytes, kilobits or megabits.
The 5.25-inch diskette sold with the IBM PC AT could hold 1200 × 1024 = 1,228,800 bytes, and thus was marketed as "1200 KB" with the binary sense of "KB". However, the capacity was also quoted as "1.2 MB", which was a hybrid decimal and binary notation, since the "M" meant 1000 × 1024. The precise value was 1.2288 MB (decimal) or 1.171875 MB (binary).
The 5.25-inch Apple Disk II had 256 bytes per sector, 13 sectors per track, 35 tracks per side, or a total capacity of 116,480 bytes. It was later upgraded to 16 sectors per track, giving a total of 143,360 = 140 × 1024 bytes, which was described as "140KB" using the binary sense of "K".
The most recent version of the physical hardware, the "3.5-inch diskette" cartridge, had 720 512-byte blocks (single-sided). Since two blocks comprised 1024 bytes, the capacity was quoted as "360 KB", with the binary sense of "K". On the other hand, the quoted capacity of "1.44 MB" of the High Density ("HD") version was again a hybrid decimal and binary notation, since it meant 1440 pairs of 512-byte sectors, or 1440 × 1024 = 1,474,560 bytes. Some operating systems displayed the capacity of those disks using the binary sense of "MB", as "1.4 MB" (since 1,474,560 bytes ≈ 1.4 × 2^20 bytes). User complaints forced both Apple and Microsoft to issue support bulletins explaining the discrepancy.
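The arithmetic behind the three readings of "1.44 MB" can be checked directly (Python; purely illustrative):

# The three readings of a high-density floppy's 2880 sectors of 512 bytes.
capacity = 2880 * 512
print(capacity)                      # 1474560 bytes exactly
print(capacity / (1000 * 1000))      # 1.47456   "MB" as 10^6      (decimal)
print(capacity / (1024 * 1024))      # 1.40625   "MB" as 2^20      (binary)
print(capacity / (1000 * 1024))      # 1.44      "MB" as 1000*1024 (hybrid)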
Optical disks
When specifying the capacities of optical compact discs, "megabyte" and "MB" usually meant 1024^2 bytes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB, which is approximately 734 million bytes (decimal).
On the other hand, capacities of other optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) have been generally specified in decimal gigabytes ("GB"), that is, 1000^3 bytes. In particular, a typical "4.7 GB" DVD has a nominal capacity of about 4.7 × 10^9 bytes, which is about 4.38 GiB.
Tape drives and media
Tape drive and media manufacturers have generally used SI decimal prefixes to specify the maximum capacity, although the actual capacity would depend on the block size used when recording.
Data and clock rates
Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, approximately 4,770,000 cycles per second.
Similarly, digital information transfer rates are quoted using decimal prefixes. The Parallel ATA "100 MB/s" disk interface, for example, can transfer 100,000,000 bytes per second, and modem speeds quoted in "k" are thousands of bits per second. Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes.
The standard sampling rate of music compact disks, quoted as 44.1 kHz, is indeed 44,100 samples per second. A "gigabit" Ethernet interface can receive or transmit up to 10^9 bits per second, or 125,000,000 bytes per second within each packet. A "56k" modem can encode or decode up to 56,000 bits per second.
Decimal SI prefixes are also generally used for processor-memory data transfer speeds. A 64-bit-wide PCI-X bus transfers one 64-bit word per clock cycle, so its byte rate is eight times its clock frequency and is likewise quoted with decimal prefixes. A PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of 200 MHz, has a bandwidth of 8 × 200,000,000 × 2 = 3,200,000,000 bytes per second, which would be quoted as 3.2 GB/s.
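The PC3200 figure quoted above is easy to verify (Python; the numbers simply restate the sentence above):

# 8 bytes per transfer, 200 MHz clock, double data rate.
bytes_per_second = 8 * 200_000_000 * 2
print(bytes_per_second)                      # 3200000000
print(bytes_per_second / 1000 ** 3, "GB/s")  # 3.2 GB/s   (decimal prefix)
print(bytes_per_second / 1024 ** 3, "GiB/s") # about 2.98 GiB/s (binary prefix)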
Ambiguous standards
The ambiguous usage of the prefixes "kilo" ("K" or "k"), "mega" ("M"), and "giga" ("G"), as meaning either powers of 1000 or (in computer contexts) powers of 1024, has been recorded in popular dictionaries, and even in some obsolete standards, such as ANSI/IEEE 1084-1986 and ANSI/IEEE 1212-1991, IEEE 610.10-1994, and IEEE 100-2000. Some of these standards specifically limited the binary meaning to multiples of "byte" ("B") or "bit" ("b").
Early binary prefix proposals
Before the IEC standard, several alternative proposals existed for unique binary prefixes, starting in the late 1960s. In 1996, Markus Kuhn proposed the extra prefix "di" and the symbol suffix or subscript "2" to mean "binary"; so that, for example, "one dikilobyte" would mean "1024 bytes", denoted "K2B" (with the "2" written as a subscript where possible).
In 1968, Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on. (At the time, memory size was small, and only K was in widespread use.) In the same year, Wallace Givens responded with a suggestion to use bK as an abbreviation for 1024 and bK2 for 1024^2, though he noted that neither the Greek letter nor lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory proposed that, instead of prefixes, binary powers of two be indicated by the letter B followed by the exponent, similar to E in decimal scientific notation. Thus one would write 3B20 for 3 × 2^20 = 3,145,728. This convention is still used on some calculators to present binary floating-point numbers today.
In 1969, Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes", with abbreviations KKB and MMB.
Consumer confusion
The ambiguous meanings of "kilo", "mega", "giga", etc., has caused significant consumer confusion, especially in the personal computer era. A common source of confusion was the discrepancy between the capacities of hard drives specified by manufacturers, using those prefixes in the decimal sense, and the numbers reported by operating systems and other software, that used them in the binary sense, such as the Apple Macintosh in 1984. For example, a hard drive marketed as "1 TB" could be reported as having only "931 GB". The confusion was compounded by fact that RAM manufacturers used the binary sense too.
Legal disputes
The different interpretations of disk size prefixes led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.
Early cases
Early cases (2004–2007) were settled prior to any court ruling with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.
Willem Vroegh v. Eastman Kodak Company
On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc.; PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks alleging that their descriptions of the capacity of their flash memory cards were false and misleading.
Vroegh claimed that a 256 MB Flash Memory Device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024^2 for megabyte and 1024^3 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".
Orin Safier v. Western Digital Corporation
On 7 July 2005, an action entitled Orin Safier v. Western Digital Corporation, et al. was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ.
Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006 with 14 June 2006 as the Final Approval hearing date.
Western Digital offered to compensate customers with a gratis download of backup and recovery software that they valued at US$30. They also paid in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising.
Western Digital had this footnote in their settlement. "Apparently, Plaintiff believes that he could sue an egg company for fraud for labeling a carton of 12 eggs a 'dozen', because some bakers would view a 'dozen' as including 13 items."
Cho v. Seagate Technology (US) Holdings, Inc.
A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with gratis backup software or a 5% refund on the cost of the drives.
Dinan et al. v. SanDisk LLC
On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of "GB" to mean 1,000,000,000 bytes.
IEC 1999 Standard
In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) proposed the prefixes "kibi" (short for "kilobinary"), "mebi" ("megabinary"), "gibi" ("gigabinary") and "tebi" ("terabinary"), with respective symbols "kb", "Mb", "Gb" and "Tb", for binary multipliers. The proposal suggested that the SI prefixes should be used only for powers of 10; so that a disk drive capacity of "500 gigabytes", "0.5 terabytes", "500 GB", or "0.5 TB" should all mean 500,000,000,000 bytes, exactly or approximately, rather than 536,870,912,000 bytes (= 500 × 2^30) or 549,755,813,888 bytes (= 0.5 × 2^40).
The proposal was not accepted by IUPAC at the time, but was taken up in 1996 by the Institute of Electrical and Electronics Engineers (IEEE) in collaboration with the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). The prefixes "kibi", "mebi", "gibi" and "tebi" were retained, but with the symbols "Ki" (with capital "K"), "Mi", "Gi" and "Ti" respectively.
In January 1999, the IEC published this proposal, with additional prefixes "pebi" ("Pi") and "exbi" ("Ei"), as an international standard (IEC 60027-2 Amendment 2). The standard reaffirmed the BIPM's position that the SI prefixes should always denote powers of 10. The third edition of the standard, published in 2005, added prefixes "zebi" and "yobi", thus matching all then-defined SI prefixes with binary counterparts.
The harmonized ISO/IEC IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on.
The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2. For example, 1 kilobit should not be used to represent 1,024 bits (2^10 bits), which is 1 kibibit."
The IEC 60027-2 standard recommended that operating systems and other software be updated to use binary or decimal prefixes consistently, but incorrect usage of SI prefixes for binary multiples is still common. At the time, the IEEE decided that their standards would use the prefixes "kilo", etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.
Other standards bodies and organizations
The IEC standard binary prefixes are supported by other standardization bodies and technical organizations.
The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for
"Prefixes for binary multiples" and has a web page documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as bee. NIST has stated the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them.
As of 2014, the microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary, but acknowledges that the SI prefixes and the symbols "K", "M" and "G" are still commonly used with the binary sense for memory sizes.
On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. The IEEE Publications division, however, does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer.
The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative since units of information are not included in the SI.
The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not cite the IEC binary prefixes.
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007.
Current practice
Some computer industry participants, such as Hewlett-Packard (HP), and IBM have adopted or recommended IEC binary prefixes as part of their general documentation policies.
As of 2023, the use of SI prefixes with the binary meanings is still prevalent for specifying the capacity of the main memory of computers, of RAM, ROM, EPROM, and EEPROM chips and memory modules, and of the cache of computer processors. For example, a "512-megabyte" or "512 MB" memory module holds 512 MiB; that is, 512 × 2^20 bytes, not 512 × 10^6 bytes.
JEDEC continues to include the customary binary definitions of "kilo", "mega", and "giga" in the document Terms, Definitions, and Letter Symbols, and still uses those definitions in its memory standards.
On the other hand, the SI prefixes with powers of ten meanings are generally used for the capacity of external storage units, such as disk drives, solid state drives, and USB flash drives, except for some flash memory chips intended to be used as EEPROMs. However, some disk manufacturers have used the IEC prefixes to avoid confusion. The decimal meaning of SI prefixes is usually also intended in measurements of data transfer rates, and clock speeds.
Some operating systems and other software use either the IEC binary multiplier symbols ("Ki", "Mi", etc.) or the SI multiplier symbols ("k", "M", "G", etc.) with decimal meaning. Some programs, such as the GNU ls command, let the user choose between binary or decimal multipliers. However, some continue to use the SI symbols with the binary meanings, even when reporting disk or file sizes. Some programs may also use "K" instead of "k", with either meaning.
Other uses
While the binary prefixes are almost always used with the units of information, bits and bytes, they may be used with any other unit of measure, when convenient. For example, in signal processing one may need binary multiples of the frequency unit hertz (Hz), for example the kibihertz (KiHz), equal to 1024 Hz.
See also
Binary engineering notation
B notation (scientific notation)
ISO/IEC 80000
Nibble
Octet
References
Further reading
– An introduction to binary prefixes
—a 1996–1999 paper on bits, bytes, prefixes and symbols
—Another description of binary prefixes
—White-paper on the controversy over drive capacities
External links
A summary of the organizations, software, and so on that have implemented the new binary prefixes
Storage Capacity Measurement Standards;
Measurement
Naming conventions
Numeral systems
Units of information | Binary prefix | [
"Physics",
"Mathematics"
] | 5,758 | [
"Physical quantities",
"Quantity",
"Mathematical objects",
"Measurement",
"Size",
"Numeral systems",
"Units of information",
"Numbers",
"Units of measurement"
] |
4,086 | https://en.wikipedia.org/wiki/Brainfuck | Brainfuck is an esoteric programming language created in 1993 by Swiss student Urban Müller. Designed to be extremely minimalistic, the language consists of only eight simple commands, a data pointer, and an instruction pointer.
Brainfuck is an example of a so-called Turing tarpit: it can be used to write any program, but it is not practical to do so because it provides so little abstraction that the programs get very long or complicated. While Brainfuck is fully Turing-complete, it is not intended for practical use but to challenge and amuse programmers. Brainfuck requires one to break down commands into small and simple instructions.
The language takes its name from the slang term brainfuck, which refers to things so complicated or unusual that they exceed the limits of one's understanding, as it was not meant or made for designing actual software but to challenge the boundaries of computer programming.
Because the language's name contains profanity, many substitutes are used, such as brainfsck, branflakes, brainoof, brainfrick, BrainF, and BF.
History
Müller designed Brainfuck with the goal of implementing the smallest possible compiler, inspired by the 1024-byte compiler for the FALSE programming language. Müller's original compiler was implemented in Motorola 68000 assembly on the Commodore Amiga and compiled to a binary with a size of 296 bytes. He uploaded the first Brainfuck compiler to Aminet in 1993. The program came with a "Readme" file, which briefly described the language, and challenged the reader "Who can program anything useful with it? :)". Müller also included an interpreter and some examples. A second version of the compiler used only 240 bytes.
Language design
The language consists of eight commands. A brainfuck program is a sequence of these commands, possibly interspersed with other characters (which are ignored). The commands are executed sequentially, with some exceptions: an instruction pointer begins at the first command, and each command it points to is executed, after which it normally moves forward to the next command. The program terminates when the instruction pointer moves past the last command.
The brainfuck language uses a simple machine model consisting of the program and instruction pointer, as well as a one-dimensional array of at least 30,000 byte cells initialized to zero; a movable data pointer (initialized to point to the leftmost byte of the array); and two streams of bytes for input and output (most often connected to a keyboard and a monitor respectively, and using the ASCII character encoding).
The eight language commands each consist of a single character:
>  Increment the data pointer by one (to point to the next cell to the right).
<  Decrement the data pointer by one (to point to the previous cell to the left).
+  Increment the byte at the data pointer by one.
-  Decrement the byte at the data pointer by one.
.  Output the byte at the data pointer.
,  Accept one byte of input, storing its value in the byte at the data pointer.
[  If the byte at the data pointer is zero, jump the instruction pointer forward to the command after the matching ] command.
]  If the byte at the data pointer is nonzero, jump the instruction pointer back to the command after the matching [ command.
[ and ] match as parentheses usually do: each [ matches exactly one ] and vice versa, the [ comes first, and there can be no unmatched [ or ] between the two.
Brainfuck programs are usually difficult to comprehend. This is partly because any mildly complex task requires a long sequence of commands and partly because the program's text gives no direct indications of the program's state. These, as well as Brainfuck's inefficiency and its limited input/output capabilities, are some of the reasons it is not used for serious programming. Nonetheless, like any Turing-complete language, Brainfuck is theoretically capable of computing any computable function or simulating any other computational model if given access to an unlimited amount of memory and time. A variety of Brainfuck programs have been written. Although Brainfuck programs, especially complicated ones, are difficult to write, it is quite trivial to write an interpreter for Brainfuck in a more typical language such as C due to its simplicity. Brainfuck interpreters written in the Brainfuck language itself also exist.
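As an illustration of how small such an interpreter can be, here is a minimal sketch in Python rather than C. It assumes a 30,000-cell tape of wrapping byte cells and treats end-of-input as 0; real implementations handle these details in various ways.
import sys

def bf_run(code, tape_len=30000):
    # Pre-compute matching bracket positions.
    stack, jumps = [], {}
    for pos, ch in enumerate(code):
        if ch == '[':
            stack.append(pos)
        elif ch == ']':
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start
    tape, ptr, ip = [0] * tape_len, 0, 0
    while ip < len(code):
        ch = code[ip]
        if ch == '>':
            ptr += 1
        elif ch == '<':
            ptr -= 1
        elif ch == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == '.':
            sys.stdout.write(chr(tape[ptr]))
        elif ch == ',':
            data = sys.stdin.read(1)
            tape[ptr] = ord(data) % 256 if data else 0
        elif ch == '[' and tape[ptr] == 0:
            ip = jumps[ip]           # jump past the matching ]
        elif ch == ']' and tape[ptr] != 0:
            ip = jumps[ip]           # jump back to the matching [
        ip += 1                      # all other characters are ignored as comments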
Examples
Adding two values
As a first, simple example, the following code snippet will add the current cell's value to the next cell: Each time the loop is executed, the current cell is decremented, the data pointer moves to the right, that next cell is incremented, and the data pointer moves left again. This sequence is repeated until the starting cell is 0.
[->+<]
This can be incorporated into a simple addition program as follows:
++ Cell c0 = 2
> +++++ Cell c1 = 5
[ Start your loops with your cell pointer on the loop counter (c1 in our case)
< + Add 1 to c0
> - Subtract 1 from c1
] End your loops with the cell pointer on the loop counter
At this point our program has added 5 to 2 leaving 7 in c0 and 0 in c1
but we cannot output this value to the terminal since it is not ASCII encoded
To display the ASCII character "7" we must add 48 to the value 7
We use a loop to compute 48 = 6 * 8
++++ ++++ c1 = 8 and this will be our loop counter again
[
< +++ +++ Add 6 to c0
> - Subtract 1 from c1
]
< . Print out c0 which has the value 55 which translates to "7"!
Hello World!
The following program prints "Hello World!" and a newline to the screen:
[ This program prints "Hello World!" and a newline to the screen; its
length is 106 active command characters. [It is not the shortest.]
This loop is an "initial comment loop", a simple way of adding a comment
to a BF program such that you don't have to worry about any command
characters. Any ".", ",", "+", "-", "<" and ">" characters are simply
ignored, the "[" and "]" characters just have to be balanced. This
loop and the commands it contains are ignored because the current cell
defaults to a value of 0; the 0 value causes this loop to be skipped.
]
++++++++ Set Cell #0 to 8
[
>++++ Add 4 to Cell #1; this will always set Cell #1 to 4
[ as the cell will be cleared by the loop
>++ Add 2 to Cell #2
>+++ Add 3 to Cell #3
>+++ Add 3 to Cell #4
>+ Add 1 to Cell #5
<<<<- Decrement the loop counter in Cell #1
] Loop until Cell #1 is zero; number of iterations is 4
>+ Add 1 to Cell #2
>+ Add 1 to Cell #3
>- Subtract 1 from Cell #4
>>+ Add 1 to Cell #6
[<] Move back to the first zero cell you find; this will
be Cell #1 which was cleared by the previous loop
<- Decrement the loop Counter in Cell #0
] Loop until Cell #0 is zero; number of iterations is 8
The result of this is:
Cell no : 0 1 2 3 4 5 6
Contents: 0 0 72 104 88 32 8
Pointer : ^
>>. Cell #2 has value 72 which is 'H'
>---. Subtract 3 from Cell #3 to get 101 which is 'e'
+++++++..+++. Likewise for 'llo' from Cell #3
>>. Cell #5 is 32 for the space
<-. Subtract 1 from Cell #4 for 87 to give a 'W'
<. Cell #3 was set to 'o' from the end of 'Hello'
+++.------.--------. Cell #3 for 'rl' and 'd'
>>+. Add 1 to Cell #5 gives us an exclamation point
>++. And finally a newline from Cell #6
For readability, this code has been spread across many lines, and blanks and comments have been added. Brainfuck ignores all characters except the eight commands +-<>[],. so no special syntax for comments is needed (as long as the comments do not contain the command characters). The code could just as well have been written as:
++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.
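Assuming the bf_run interpreter sketch given under "Language design" above, the compact version can be executed directly:
bf_run("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.")
# prints "Hello World!" followed by a newline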
ROT13
This program enciphers its input with the ROT13 cipher. To do this, it must map characters A-M (ASCII 65–77) to N-Z (78–90), and vice versa. Also it must map a-m (97–109) to n-z (110–122) and vice versa. It must map all other characters to themselves; it reads characters one at a time and outputs their enciphered equivalents until it reads an EOF (here assumed to be represented as either -1 or "no change"), at which point the program terminates.
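For comparison with the Brainfuck version below, the same mapping takes only a few lines in a conventional language. This Python sketch mirrors the description above; it is not a translation of the Brainfuck code.
def rot13_char(c):
    # Letters are rotated by 13 within their case; every other character maps to itself.
    if 'A' <= c <= 'Z':
        return chr((ord(c) - ord('A') + 13) % 26 + ord('A'))
    if 'a' <= c <= 'z':
        return chr((ord(c) - ord('a') + 13) % 26 + ord('a'))
    return c

print(''.join(rot13_char(c) for c in "Hello, World!"))   # Uryyb, Jbeyq!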
-,+[ Read first character and start outer character reading loop
-[ Skip forward if character is 0
>>++++[>++++++++<-] Set up divisor (32) for division loop
(MEMORY LAYOUT: dividend copy remainder divisor quotient zero zero)
<+<-[ Set up dividend (x minus 1) and enter division loop
>+>+>-[>>>] Increase copy and remainder / reduce divisor / Normal case: skip forward
<[[>+<-]>>+>] Special case: move remainder back to divisor and increase quotient
<<<<<- Decrement dividend
] End division loop
]>>>[-]+ End skip loop; zero former divisor and reuse space for a flag
>--[-[<->+++[-]]]<[ Zero that flag unless quotient was 2 or 3; zero quotient; check flag
++++++++++++<[ If flag then set up divisor (13) for second division loop
(MEMORY LAYOUT: zero copy dividend divisor remainder quotient zero zero)
>-[>+>>] Reduce divisor; Normal case: increase remainder
>[+[<+>-]>+>>] Special case: increase remainder / move it back to divisor / increase quotient
<<<<<- Decrease dividend
] End division loop
>>[<+>-] Add remainder back to divisor to get a useful 13
>[ Skip forward if quotient was 0
-[ Decrement quotient and skip forward if quotient was 1
-<<[-]>> Zero quotient and divisor if quotient was 2
]<<[<<->>-]>> Zero divisor and subtract 13 from copy if quotient was 1
]<<[<<+>>-] Zero divisor and add 13 to copy if quotient was 0
] End outer skip loop (jump to here if ((character minus 1)/32) was not 2 or 3)
<[-] Clear remainder from first division if second division was skipped
<.[-] Output ROT13ed character from copy and clear it
<-,+ Read next character
] End character reading loop
Simulation of abiogenesis
In 2024, a Google research project used a slightly modified 7-command version of Brainfuck as the basis of an artificial digital environment. In this environment, they found that replicators arose naturally and competed with each other for domination of the environment.
See also
JSFuck – an esoteric subset of the JavaScript programming language with a very limited set of characters
Notes
References
External links
Non-English-based programming languages
Esoteric programming languages
Programming languages created in 1993 | Brainfuck | [
"Technology"
] | 2,531 | [
"Non-English-based programming languages",
"Natural language and computing"
] |
4,101 | https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem | Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x0 such that f(x0) = x0. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself.
Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.
The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911.
Statement
The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows:
In the plane: Every continuous function from a closed disk to itself has at least one fixed point.
This can be generalized to an arbitrary finite dimension:
In Euclidean space: Every continuous function from a closed ball of a Euclidean space into itself has a fixed point.
A slightly more general version is as follows:
Convex compact set: Every continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point.
An even more general form is better known under a different name:
Schauder fixed point theorem: Every continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point.
Importance of the pre-conditions
The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important.
The function f as an endomorphism
Consider the function f(x) = x + 1 with domain [-1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism.
Boundedness
Consider the function f(x) = x + 1, which is a continuous function from the real line R to itself. As it shifts every point to the right, it cannot have a fixed point. The space R is convex and closed, but not bounded.
Closedness
Consider the function f(x) = (x + 1)/2, which is a continuous function from the open interval (−1,1) to itself. Since x = 1 is not part of the interval, there is no point with f(x) = x, i.e. no fixed point. The space (−1,1) is convex and bounded, but not closed. On the other hand, the function f does have a fixed point on the closed interval [−1,1], namely f(1) = 1.
(The domain of f is (-1,1) but the range is (0,1), which are not the same. Earlier, one of the conditions for functions satisfying the theorem is that the domain and range were the same, not that one be a subset of the other. Thus the reason for f failing is not closure.)
Convexity
Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.).
The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function f(x) = −x, which is a continuous function from the unit circle to itself. Since −x ≠ x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point on the unit disc, since it takes the origin to itself.
A formal generalization of Brouwer's fixed-point theorem for "hole-free" domains can be derived from the Lefschetz fixed-point theorem.
Notes
The continuous function in this theorem is not required to be bijective or surjective.
Illustrations
The theorem has several "real world" illustrations. Here are some examples.
Take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it.
Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
In three dimensions a consequence of the Brouwer fixed-point theorem is that, no matter how much you stir a delicious cocktail in a glass (or think about milk shake), when the liquid has come to rest, some point in the liquid will end up in exactly the same place in the glass as before you took any action, assuming that the final position of each point is a continuous function of its original position, that the liquid after stirring is contained within the space originally taken up by it, and that the glass (and stirred surface shape) maintain a convex volume. Ordering a cocktail shaken, not stirred defeats the convexity condition ("shaking" being defined as a dynamic series of non-convex inertial containment states in the vacant headspace under a lid). In that case, the theorem would not apply, and thus all points of the liquid disposition are potentially displaced from the original state.
Intuitive approach
Explanations attributed to Brouwer
The theorem is supposed to have originated from Brouwer's observation of a cup of gourmet coffee.
If one stirs to dissolve a lump of sugar, it appears there is always a point without motion.
He drew the conclusion that at any moment, there is a point on the surface that is not moving.
The fixed point is not necessarily the point that seems to be motionless, since the centre of the turbulence moves a little bit.
The result is not intuitive, since the original fixed point may become mobile when another fixed point appears.
Brouwer is said to have added: "I can formulate this splendid result differently, I take a horizontal sheet, and another identical one which I crumple, flatten and place on the other. Then a point of the crumpled sheet is in the same place as on the other sheet."
Brouwer "flattens" his sheet as with a flat iron, without removing the folds and wrinkles. Unlike the coffee cup example, the crumpled paper example also demonstrates that more than one fixed point may exist. This distinguishes Brouwer's result from other fixed-point theorems, such as Stefan Banach's, that guarantee uniqueness.
One-dimensional case
In one dimension, the result is intuitive and easy to prove. The continuous function f is defined on a closed interval [a, b] and takes values in the same interval. Saying that this function has a fixed point amounts to saying that its graph (dark green in the figure on the right) intersects that of the function defined on the same interval [a, b] which maps x to x (light green).
Intuitively, any continuous line from the left edge of the square to the right edge must necessarily intersect the green diagonal. To prove this, consider the function g which maps x to f(x) − x. It is ≥ 0 on a and ≤ 0 on b. By the intermediate value theorem, g has a zero in [a, b]; this zero is a fixed point.
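The intermediate-value argument is directly computable. The sketch below (Python; the example function cos is an arbitrary choice, not from the text) bisects on g(x) = f(x) − x to locate a fixed point of a continuous f : [a, b] → [a, b], assuming only what the paragraph above states, namely g(a) ≥ 0 and g(b) ≤ 0.
import math

def fixed_point_1d(f, a, b, tol=1e-12):
    # Bisection on g(x) = f(x) - x, which is >= 0 at a and <= 0 at b.
    g = lambda x: f(x) - x
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda x: math.cos(x)            # maps [0, 1] into [cos(1), 1], a subset of [0, 1]
print(fixed_point_1d(f, 0.0, 1.0))   # about 0.739085, the fixed point of cosine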
Brouwer is said to have expressed this as follows: "Instead of examining a surface, we will prove the theorem about a piece of string. Let us begin with the string in an unfolded state, then refold it. Let us flatten the refolded string. Again a point of the string has not changed its position with respect to its original position on the unfolded string."
History
The Brouwer fixed point theorem was one of the early achievements of algebraic topology, and is the basis of more general fixed point theorems which are important in functional analysis. The case n = 3 first was proved by Piers Bohl in 1904 (published in Journal für die reine und angewandte Mathematik). It was later proved by L. E. J. Brouwer in 1909. Jacques Hadamard proved the general case in 1910, and Brouwer found a different proof in the same year. Since these early proofs were all non-constructive indirect proofs, they ran contrary to Brouwer's intuitionist ideals. Although the existence of a fixed point is not constructive in the sense of constructivism in mathematics, methods to approximate fixed points guaranteed by Brouwer's theorem are now known.
Before discovery
At the end of the 19th century, the old problem of the stability of the solar system returned into the focus of the mathematical community.
Its solution required new methods. As noted by Henri Poincaré, who worked on the three-body problem, there is no hope to find an exact solution: "Nothing is more proper to give us an idea of the hardness of the three-body problem, and generally of all problems of Dynamics where there is no uniform integral and the Bohlin series diverge."
He also noted that the search for an approximate solution is no more efficient: "the more we seek to obtain precise approximations, the more the result will diverge towards an increasing imprecision".
He studied a question analogous to that of the surface movement in a cup of coffee. What can we say, in general, about the trajectories on a surface animated by a constant flow? Poincaré discovered that the answer can be found in what we now call the topological properties in the area containing the trajectory. If this area is compact, i.e. both closed and bounded, then the trajectory either becomes stationary, or it approaches a limit cycle. Poincaré went further; if the area is of the same kind as a disk, as is the case for the cup of coffee, there must necessarily be a fixed point. This fixed point is invariant under all functions which associate to each point of the original surface its position after a short time interval t. If the area is a circular band, or if it is not closed, then this is not necessarily the case.
To understand differential equations better, a new branch of mathematics was born. Poincaré called it analysis situs. The French Encyclopædia Universalis defines it as the branch which "treats the properties of an object that are invariant if it is deformed in any continuous way, without tearing". In 1886, Poincaré proved a result that is equivalent to Brouwer's fixed-point theorem, although the connection with the subject of this article was not yet apparent. A little later, he developed one of the fundamental tools for better understanding the analysis situs, now known as the fundamental group or sometimes the Poincaré group. This method can be used for a very compact proof of the theorem under discussion.
Poincaré's method was analogous to that of Émile Picard, a contemporary mathematician who generalized the Cauchy–Lipschitz theorem. Picard's approach is based on a result that would later be formalised by another fixed-point theorem, named after Banach. Instead of the topological properties of the domain, this theorem uses the fact that the function in question is a contraction.
First proofs
At the dawn of the 20th century, the interest in analysis situs did not stay unnoticed. However, the necessity of a theorem equivalent to the one discussed in this article was not yet evident. Piers Bohl, a Latvian mathematician, applied topological methods to the study of differential equations. In 1904 he proved the three-dimensional case of our theorem, but his publication was not noticed.
It was Brouwer, finally, who gave the theorem its first patent of nobility. His goals were different from those of Poincaré. This mathematician was inspired by the foundations of mathematics, especially mathematical logic and topology. His initial interest lay in an attempt to solve Hilbert's fifth problem. In 1909, during a voyage to Paris, he met Henri Poincaré, Jacques Hadamard, and Émile Borel. The ensuing discussions convinced Brouwer of the importance of a better understanding of Euclidean spaces, and were the origin of a fruitful exchange of letters with Hadamard. For the next four years, he concentrated on the proof of certain great theorems on this question. In 1912 he proved the hairy ball theorem for the two-dimensional sphere, as well as the fact that every continuous map from the two-dimensional ball to itself has a fixed point. These two results in themselves were not really new. As Hadamard observed, Poincaré had shown a theorem equivalent to the hairy ball theorem. The revolutionary aspect of Brouwer's approach was his systematic use of recently developed tools such as homotopy, the underlying concept of the Poincaré group. In the following year, Hadamard generalised the theorem under discussion to an arbitrary finite dimension, but he employed different methods. Hans Freudenthal comments on the respective roles as follows: "Compared to Brouwer's revolutionary methods, those of Hadamard were very traditional, but Hadamard's participation in the birth of Brouwer's ideas resembles that of a midwife more than that of a mere spectator."
Brouwer's approach yielded its fruits, and in 1910 he also found a proof that was valid for any finite dimension, as well as other key theorems such as the invariance of dimension. In the context of this work, Brouwer also generalized the Jordan curve theorem to arbitrary dimension and established the properties connected with the degree of a continuous mapping. This branch of mathematics, originally envisioned by Poincaré and developed by Brouwer, changed its name. In the 1930s, analysis situs became algebraic topology.
Reception
The theorem proved its worth in more than one way. During the 20th century numerous fixed-point theorems were developed, and even a branch of mathematics called fixed-point theory.
Brouwer's theorem is probably the most important. It is also among the foundational theorems on the topology of topological manifolds and is often used to prove other important results such as the Jordan curve theorem.
Besides the fixed-point theorems for more or less contracting functions, there are many that have emerged directly or indirectly from the result under discussion. A continuous map from a closed ball of Euclidean space to its boundary cannot be the identity on the boundary. Similarly, the Borsuk–Ulam theorem says that a continuous map from the n-dimensional sphere to Rn has a pair of antipodal points that are mapped to the same point. In the finite-dimensional case, the Lefschetz fixed-point theorem provided from 1926 a method for counting fixed points. In 1930, Brouwer's fixed-point theorem was generalized to Banach spaces. This generalization is known as Schauder's fixed-point theorem, a result generalized further by S. Kakutani to set-valued functions. One also meets the theorem and its variants outside topology. It can be used to prove the Hartman-Grobman theorem, which describes the qualitative behaviour of certain differential equations near certain equilibria. Similarly, Brouwer's theorem is used for the proof of the Central Limit Theorem. The theorem can also be found in existence proofs for the solutions of certain partial differential equations.
Other areas are also touched. In game theory, John Nash used the theorem to prove that in the game of Hex there is a winning strategy for white. In economics, P. Bich explains that certain generalizations of the theorem show that its use is helpful for certain classical problems in game theory and generally for equilibria (Hotelling's law), financial equilibria and incomplete markets.
Brouwer's celebrity is not exclusively due to his topological work. The proofs of his great topological theorems are not constructive, and Brouwer's dissatisfaction with this is partly what led him to articulate the idea of constructivity. He became the originator and zealous defender of a way of formalising mathematics that is known as intuitionism, which at the time made a stand against set theory. Brouwer disavowed his original proof of the fixed-point theorem.
Proof outlines
A proof using degree
Brouwer's original 1911 proof relied on the notion of the degree of a continuous mapping, stemming from ideas in differential topology. Several modern accounts of the proof can be found in the literature.
Let K = B(0) denote the closed unit ball in Rn centered at the origin. Suppose for simplicity that f : K → K is continuously differentiable. A regular value of f is a point p such that the Jacobian of f is non-singular at every point of the preimage of p. In particular, by the inverse function theorem, every point of the preimage of p lies in int K (the interior of K). The degree of f at a regular value p is defined as the sum of the signs of the Jacobian determinant of f over the preimages of p under f:
deg(f, p) = Σ over x in f−1(p) of sign det Df(x).
The degree is, roughly speaking, the number of "sheets" of the preimage f lying over a small open set around p, with sheets counted oppositely if they are oppositely oriented. This is thus a generalization of winding number to higher dimensions.
The degree satisfies the property of homotopy invariance: let f0 and f1 be two continuously differentiable functions, and ft = t f1 + (1 − t) f0 for 0 ≤ t ≤ 1. Suppose that the point p is a regular value of ft for all t. Then deg(f0, p) = deg(f1, p).
If there is no fixed point on the boundary of K, then the function
g(x) = x − f(x)
is well-defined, and
H(t, x) = x − t f(x)
defines a homotopy from the identity function to it. The identity function has degree one at every point. In particular, the identity function has degree one at the origin, so g also has degree one at the origin. As a consequence, the preimage g−1(0) is not empty. The elements of g−1(0) are precisely the fixed points of the original function f.
This requires some work to make fully general. The definition of degree must be extended to singular values of f, and then to continuous functions. The more modern advent of homology theory simplifies the construction of the degree, and so has become a standard proof in the literature.
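For n = 2 the degree in this argument is just a winding number, which can be checked numerically. The sketch below (Python with NumPy; the map f is an arbitrary example, not from the text) computes the winding number of g(x) = x − f(x) around the boundary circle; a nonzero result certifies that g vanishes somewhere inside, i.e. that f has a fixed point in the disk.
import numpy as np

def winding_number(g, samples=4000):
    # Winding number of the closed curve t -> g(cos t, sin t) around the origin.
    t = np.linspace(0.0, 2.0 * np.pi, samples)
    vals = np.array([g(np.array([np.cos(s), np.sin(s)])) for s in t])
    angles = np.unwrap(np.arctan2(vals[:, 1], vals[:, 0]))
    return int(round((angles[-1] - angles[0]) / (2.0 * np.pi)))

f = lambda x: 0.5 * np.array([-x[1], x[0]])   # rotate by 90 degrees and shrink: maps the disk into itself
g = lambda x: x - f(x)                        # fixed points of f are zeros of g
print(winding_number(g))                      # 1, so f has a fixed point (here the origin)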
A proof using the hairy ball theorem
The hairy ball theorem states that on the unit sphere S in an odd-dimensional Euclidean space, there is no nowhere-vanishing continuous tangent vector field w on S. (The tangency condition means that w(x) ⋅ x = 0 for every unit vector x.) Sometimes the theorem is expressed by the statement that "there is always a place on the globe with no wind". An elementary proof of the hairy ball theorem can be found in the literature.
In fact, suppose first that w is continuously differentiable. By scaling, it can be assumed that w is a continuously differentiable unit tangent vector on S. It can be extended radially to a small spherical shell A around S. For ε sufficiently small, a routine computation shows that the mapping fε(x) = x + ε w(x) is a contraction mapping on A and that the volume of its image is a polynomial in ε. On the other hand, as a contraction mapping, fε must restrict to a homeomorphism of S onto the sphere of radius √(1 + ε²) and of A onto the corresponding shell. This gives a contradiction, because, if the dimension n of the Euclidean space is odd, (1 + ε²) raised to the power n/2 is not a polynomial in ε.
If w is only a continuous unit tangent vector on S, by the Weierstrass approximation theorem, it can be uniformly approximated by a polynomial map u of S into Euclidean space. The orthogonal projection onto the tangent space is given by v(x) = u(x) − (u(x) ⋅ x) x. Thus v is polynomial and nowhere vanishing on S; by construction v/‖v‖ is a smooth unit tangent vector field on S, a contradiction.
The continuous version of the hairy ball theorem can now be used to prove the Brouwer fixed point theorem. First suppose that n is even. If there were a fixed-point-free continuous self-mapping f of the closed unit ball B of the n-dimensional Euclidean space V, set
w(x) = x − f(x).
Since f has no fixed points, it follows that, for x in the interior of B, the vector w(x) is non-zero; and for x on the boundary sphere of B, the scalar product x ⋅ w(x) = 1 – x ⋅ f(x) is strictly positive. From the original n-dimensional Euclidean space V, construct a new auxiliary (n + 1)-dimensional space W = V × R, with coordinates y = (x, t). Set
X(x, t) = (−t w(x), x ⋅ w(x)).
By construction X is a continuous vector field on the unit sphere of W, satisfying the tangency condition y ⋅ X(y) = 0. Moreover, X(y) is nowhere vanishing (because, if x has norm 1, then x ⋅ w(x) is non-zero; while if x has norm strictly less than 1, then t and w(x) are both non-zero). This contradiction proves the fixed point theorem when n is even. For n odd, one can apply the fixed point theorem to the closed unit ball B in n + 1 dimensions and the mapping F(x, y) = (f(x), 0).
The advantage of this proof is that it uses only elementary techniques; more general results like the Borsuk-Ulam theorem require tools from algebraic topology.
A proof using homology or cohomology
The proof uses the observation that the boundary of the n-disk Dn is Sn−1, the (n − 1)-sphere.
Suppose, for contradiction, that a continuous function has no fixed point. This means that, for every point x in Dn, the points x and f(x) are distinct. Because they are distinct, for every point x in Dn, we can construct a unique ray from f(x) to x and follow the ray until it intersects the boundary Sn−1 (see illustration). By calling this intersection point F(x), we define a function F : Dn → Sn−1 sending each point in the disk to its corresponding intersection point on the boundary. As a special case, whenever x itself is on the boundary, then the intersection point F(x) must be x.
Consequently, F is a special type of continuous function known as a retraction: every point of the codomain (in this case Sn−1) is a fixed point of F.
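The retraction F can be written down explicitly for the disk. The following Python sketch (with an example value of f(x), not from the text) follows the ray from f(x) through x and returns where it meets the unit circle; when x is already on the boundary it returns x itself, as required.
import numpy as np

def retract(x, fx):
    # Point where the ray from fx through x meets the unit circle:
    # solve |fx + t (x - fx)| = 1 for the root with t >= 1.
    d = x - fx
    a = d @ d
    b = 2.0 * (fx @ d)
    c = fx @ fx - 1.0
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return fx + t * d

x = np.array([0.3, 0.4])
fx = np.array([-0.5, 0.1])               # pretend value of a fixed-point-free map at x
print(np.linalg.norm(retract(x, fx)))    # 1.0 up to rounding: F(x) lies on the boundary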
Intuitively it seems unlikely that there could be a retraction of Dn onto Sn−1, and in the case n = 1, the impossibility is more basic, because S0 (i.e., the endpoints of the closed interval D1) is not even connected. The case n = 2 is less obvious, but can be proven by using basic arguments involving the fundamental groups of the respective spaces: the retraction would induce a surjective group homomorphism from the fundamental group of D2 to that of S1, but the latter group is isomorphic to Z while the first group is trivial, so this is impossible. The case n = 2 can also be proven by contradiction based on a theorem about non-vanishing vector fields.
For n > 2, however, proving the impossibility of the retraction is more difficult. One way is to make use of homology groups: the homology Hn−1(Dn) is trivial, while Hn−1(Sn−1) is infinite cyclic. This shows that the retraction is impossible, because again the retraction would induce an injective group homomorphism from the latter to the former group.
The impossibility of a retraction can also be shown using the de Rham cohomology of open subsets of Euclidean space En. For n ≥ 2, the de Rham cohomology of U = En – (0) is one-dimensional in degree 0 and n – 1, and vanishes otherwise. If a retraction existed, then U would have to be contractible and its de Rham cohomology in degree n – 1 would have to vanish, a contradiction.
A proof using Stokes' theorem
As in the proof of Brouwer's fixed-point theorem for continuous maps using homology, it is reduced to proving that there is no continuous retraction F from the ball B onto its boundary ∂B. In that case it can be assumed that F is smooth, since it can be approximated using the Weierstrass approximation theorem or by convolving with non-negative smooth bump functions of sufficiently small support and integral one (i.e. mollifying). If ω is a volume form on the boundary then by Stokes' theorem,
0 < ∫∂B ω = ∫∂B F*(ω) = ∫B d(F*(ω)) = ∫B F*(dω) = ∫B 0 = 0,
giving a contradiction.
More generally, this shows that there is no smooth retraction from any non-empty smooth oriented compact manifold onto its boundary. The proof using Stokes' theorem is closely related to the proof using homology, because the form ω generates the de Rham cohomology group Hn−1(∂B), which is isomorphic to the homology group Hn−1(∂B) by de Rham's theorem.
A combinatorial proof
The BFPT can be proved using Sperner's lemma. We now give an outline of the proof for the special case in which f is a function from the standard n-simplex Δn to itself, where
Δn = { P in Rn+1 : P_1 + … + P_{n+1} = 1 and P_i ≥ 0 for all i }.
For every point P in Δn, also f(P) is in Δn. Hence the sum of their coordinates is equal:
P_1 + … + P_{n+1} = 1 = f(P)_1 + … + f(P)_{n+1}.
Hence, by the pigeonhole principle, for every P in Δn there must be an index j such that the jth coordinate of P is greater than or equal to the jth coordinate of its image under f:
P_j ≥ f(P)_j.
Moreover, if P lies on a k-dimensional sub-face of Δn, then by the same argument, the index j can be selected from among the coordinates which are not zero on this sub-face.
We now use this fact to construct a Sperner coloring. For every triangulation of Δn, the color of every vertex P is an index j such that P_j ≥ f(P)_j.
By construction, this is a Sperner coloring. Hence, by Sperner's lemma, there is an n-dimensional simplex whose vertices are colored with the entire set of available colors.
Because f is continuous, this simplex can be made arbitrarily small by choosing an arbitrarily fine triangulation. Hence, there must be a point P which satisfies the labeling condition in all coordinates: P_j ≥ f(P)_j for all j.
Because the sum of the coordinates of P and f(P) must be equal, all these inequalities must actually be equalities. But this means that:
P_j = f(P)_j for all j.
That is, P is a fixed point of f.
A proof by Hirsch
There is also a quick proof, by Morris Hirsch, based on the impossibility of a differentiable retraction. The indirect proof starts by noting that the map f can be approximated by a smooth map retaining the property of not fixing a point; this can be done by using the Weierstrass approximation theorem or by convolving with smooth bump functions. One then defines a retraction as above which must now be differentiable. Such a retraction must have a non-singular value, by Sard's theorem, which is also non-singular for the restriction to the boundary (which is just the identity). Thus the inverse image would be a 1-manifold with boundary. The boundary would have to contain at least two end points, both of which would have to lie on the boundary of the original ball—which is impossible in a retraction.
R. Bruce Kellogg, Tien-Yien Li, and James A. Yorke turned Hirsch's proof into a computable proof by observing that the retract is in fact defined everywhere except at the fixed points. For almost any point, q, on the boundary, (assuming it is not a fixed point) the one manifold with boundary mentioned above does exist and the only possibility is that it leads from q to a fixed point. It is an easy numerical task to follow such a path from q to the fixed point so the method is essentially computable. A conceptually similar path-following version of the homotopy proof, which extends to a wide variety of related problems, has also been given.
A proof using oriented area
A variation of the preceding proof does not employ Sard's theorem, and goes as follows. If r : B → ∂B is a smooth retraction, one considers the smooth deformation gt(x) := t r(x) + (1 − t) x, and the smooth function
φ(t) := ∫B det Dgt(x) dx.
Differentiating under the sign of integral it is not difficult to check that φ′(t) = 0 for all t, so φ is a constant function, which is a contradiction because φ(0) is the n-dimensional volume of the ball, while φ(1) is zero. The geometric idea is that φ(t) is the oriented area of gt(B) (that is, the Lebesgue measure of the image of the ball via gt, taking into account multiplicity and orientation), and should remain constant (as it is very clear in the one-dimensional case). On the other hand, as the parameter t passes from 0 to 1 the map gt transforms continuously from the identity map of the ball, to the retraction r, which is a contradiction since the oriented area of the identity coincides with the volume of the ball, while the oriented area of r is necessarily 0, as its image is the boundary of the ball, a set of null measure.
A proof using the game Hex
A quite different proof given by David Gale is based on the game of Hex. The basic theorem regarding Hex, first proven by John Nash, is that no game of Hex can end in a draw; the first player always has a winning strategy (although this theorem is nonconstructive, and explicit strategies have not been fully developed for board sizes of dimensions 10 x 10 or greater). This turns out to be equivalent to the Brouwer fixed-point theorem for dimension 2. By considering n-dimensional versions of Hex, one can prove in general that Brouwer's theorem is equivalent to the determinacy theorem for Hex.
A proof using the Lefschetz fixed-point theorem
The Lefschetz fixed-point theorem says that if a continuous map f from a finite simplicial complex B to itself has only isolated fixed points, then the number of fixed points counted with multiplicities (which may be negative) is equal to the Lefschetz number
Λf = Σn (−1)^n Tr(f* : Hn(B) → Hn(B)),
and in particular if the Lefschetz number is nonzero then f must have a fixed point. If B is a ball (or more generally is contractible) then the Lefschetz number is one, because the only non-zero simplicial homology group is H0(B) and f acts as the identity on this group, so f has a fixed point.
A proof in a weak logical system
In reverse mathematics, Brouwer's theorem can be proved in the system WKL0, and conversely over the base system RCA0 Brouwer's theorem for a square implies the weak Kőnig's lemma, so this gives a precise description of the strength of Brouwer's theorem.
Generalizations
The Brouwer fixed-point theorem forms the starting point of a number of more general fixed-point theorems.
The straightforward generalization to infinite dimensions, i.e. using the unit ball of an arbitrary Hilbert space instead of Euclidean space, is not true. The main problem here is that the unit balls of infinite-dimensional Hilbert spaces are not compact. For example, in the Hilbert space ℓ2 of square-summable real (or complex) sequences, consider the map f : ℓ2 → ℓ2 which sends a sequence (xn) from the closed unit ball of ℓ2 to the sequence (yn) defined by y_1 = √(1 − ‖x‖²) and, for n ≥ 2, y_n = x_{n−1}.
It is not difficult to check that this map is continuous, has its image in the unit sphere of ℓ2, but does not have a fixed point.
The generalizations of the Brouwer fixed-point theorem to infinite dimensional spaces therefore all include a compactness assumption of some sort, and also often an assumption of convexity. See fixed-point theorems in infinite-dimensional spaces for a discussion of these theorems.
There is also a finite-dimensional generalization to a larger class of spaces: If X is a product of finitely many chainable continua, then every continuous function f : X → X has a fixed point, where a chainable continuum is a (usually but in this case not necessarily metric) compact Hausdorff space of which every open cover has a finite open refinement {U1, …, Um}, such that Ui ∩ Uj ≠ ∅ if and only if |i − j| ≤ 1. Examples of chainable continua include compact connected linearly ordered spaces and in particular closed intervals of real numbers.
The Kakutani fixed point theorem generalizes the Brouwer fixed-point theorem in a different direction: it stays in Rn, but considers upper hemi-continuous set-valued functions (functions that assign to each point of the set a subset of the set). It also requires compactness and convexity of the set.
The Lefschetz fixed-point theorem applies to (almost) arbitrary compact topological spaces, and gives a condition in terms of singular homology that guarantees the existence of fixed points; this condition is trivially satisfied for any map in the case of Dn.
Equivalent results
See also
Banach fixed-point theorem
Fixed-point computation
Infinite compositions of analytic functions
Nash equilibrium
Poincaré–Miranda theorem – equivalent to the Brouwer fixed-point theorem
Topological combinatorics
Notes
References
(see p. 72–73 for Hirsch's proof utilizing non-existence of a differentiable retraction)
Leoni, Giovanni (2017). A First Course in Sobolev Spaces: Second Edition. Graduate Studies in Mathematics. 181. American Mathematical Society. pp. 734.
External links
Brouwer's Fixed Point Theorem for Triangles at cut-the-knot
Brouwer theorem , from PlanetMath with attached proof.
Reconstructing Brouwer at MathPages
Brouwer Fixed Point Theorem at Math Images.
Fixed-point theorems
Theory of continuous functions
Theorems in topology
Theorems in convex geometry | Brouwer fixed-point theorem | [
"Mathematics"
] | 7,396 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Theory of continuous functions",
"Fixed-point theorems",
"Theorems in topology",
"Theorems in convex geometry",
"Topology",
"Theorems in geometry",
"Mathematical problems"
] |
4,107 | https://en.wikipedia.org/wiki/Boltzmann%20distribution | In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:
p_i ∝ exp(−ε_i / (kT)), where p_i is the probability of the system being in state i, exp is the exponential function, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see § The distribution for the proportionality constant).
The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: p_i / p_j = exp((ε_j − ε_i) / (kT)).
The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium"
The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.
The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas however, does follow the Boltzmann distribution.
The distribution
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as
p_i = (1/Q) exp(−ε_i / (kT)) = exp(−ε_i / (kT)) / Σ_j exp(−ε_j / (kT)), the sum running over all M accessible states,
where:
exp is the exponential function,
p_i is the probability of state i,
ε_i is the energy of state i,
k is the Boltzmann constant,
T is the absolute temperature of the system,
M is the number of all states accessible to the system of interest,
Q (denoted by some authors by Z) is the normalization denominator, which is the canonical partition function Q = Σ_j exp(−ε_j / (kT)). It results from the constraint that the probabilities of all accessible states must add up to 1.
Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy
S(p_1, …, p_M) = −Σ_i p_i log p_i
subject to the normalization constraint that Σ_i p_i = 1 and the constraint that Σ_i p_i ε_i equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies ε_i. In these cases, the entropy maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.)
The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states i and j is given as
p_i / p_j = exp((ε_j − ε_i) / (kT)),
where:
p_i is the probability of state i,
p_j the probability of state j,
ε_i is the energy of state i,
ε_j is the energy of state j.
The corresponding ratio of populations of energy levels must also take their degeneracies into account.
The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is the fraction of particles that occupy state i:
p_i = N_i / N,
where N_i is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability that is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy of that state is
N_i / N = exp(−ε_i / (kT)) / Σ_j exp(−ε_j / (kT)).
This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
The softmax function commonly used in machine learning is related to the Boltzmann distribution: the Boltzmann probabilities (p_1, …, p_M) are exactly the softmax function applied to the arguments (−ε_1/(kT), …, −ε_M/(kT)).
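A small numerical illustration (Python; the three energy levels are made-up values, not data from any source) computes occupation probabilities directly from the formulas above.
import math

def boltzmann_probs(energies_J, T):
    k = 1.380649e-23                      # Boltzmann constant, J/K
    weights = [math.exp(-e / (k * T)) for e in energies_J]
    Q = sum(weights)                      # canonical partition function
    return [w / Q for w in weights]

# Three hypothetical levels separated by 1e-21 J, at room temperature.
levels = [0.0, 1.0e-21, 2.0e-21]
print(boltzmann_probs(levels, T=300.0))   # lower-energy states get higher probability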
Generalized Boltzmann distribution
Distribution of the form
is called generalized Boltzmann distribution by some authors.
The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations.
The generalized Boltzmann distribution has the following properties:
It is the only distribution for which the entropy as defined by Gibbs entropy formula matches with the entropy as defined in classical thermodynamics.
It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble average.
In statistical mechanics
The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects:
Canonical ensemble (general case)
The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form.
Statistical frequencies of subsystems' states (in a non-interacting collection)
When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form.
Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles)
In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form.
Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed:
When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead.
If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and even may not have an analytical solution. The canonical ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium.
With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively.
In mathematics
In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure.
In statistics and machine learning, it is called a log-linear model.
In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. In the design of Boltzmann machine in deep learning, as the number of nodes are increased the difficulty of implementing in real time applications becomes critical, so a different type of architecture named Restricted Boltzmann machine is introduced.
In economics
The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries.
The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
See also
Bose–Einstein statistics
Fermi–Dirac statistics
Negative temperature
Softmax function
References
Statistical mechanics
Distribution | Boltzmann distribution | [
"Physics"
] | 2,146 | [
"Statistical mechanics"
] |
4,111 | https://en.wikipedia.org/wiki/Bioleaching | Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt.
Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly the minerals that are the target of oxidization are pyrite and arsenopyrite.
The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper.
Process
Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal.
Pyrite leaching (FeS2):
In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+):
(1) FeS2 + 6 Fe3+ + 3 H2O → 7 Fe2+ + S2O3^2− + 6 H+ (spontaneous)
The ferrous ion is then oxidized by bacteria using oxygen:
(2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers)
Thiosulfate is also oxidized by bacteria to give sulfate:
(3) S2O3^2− + 2 O2 + H2O → 2 SO4^2− + 2 H+ (sulfur oxidizers)
The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction:
(4) 2 FeS2 + 7 O2 + 2 H2O → 2 Fe2+ + 4 SO4^2− + 4 H+
The net products of the reaction are soluble ferrous sulfate and sulfuric acid.
The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant.
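As a rough illustration of the quantities involved, the sketch below (Python) uses the net reaction above, written here in molecular form as 2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4, to estimate oxygen demand and acid yield per tonne of pyrite; the molar masses are standard values and the estimate ignores everything except stoichiometry.
# Net reaction (molecular form): 2 FeS2 + 7 O2 + 2 H2O -> 2 FeSO4 + 2 H2SO4
M_FeS2, M_O2, M_H2SO4 = 119.98, 32.00, 98.08     # g/mol, standard molar masses

pyrite_kg = 1000.0                                # one tonne of pyrite
mol_FeS2 = pyrite_kg * 1000.0 / M_FeS2
mol_O2 = mol_FeS2 * 7.0 / 2.0                     # 3.5 mol O2 per mol FeS2
mol_H2SO4 = mol_FeS2                              # 1 mol H2SO4 per mol FeS2

print(f"O2 consumed: {mol_O2 * M_O2 / 1000.0:.0f} kg")           # ~933 kg
print(f"H2SO4 produced: {mol_H2SO4 * M_H2SO4 / 1000.0:.0f} kg")  # ~817 kg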
The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution.
Chalcopyrite leaching:
(1) CuFeS2 + 4 Fe3+ → Cu2+ + 5 Fe2+ + 2 S0 (spontaneous)
(2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers)
(3) 2 S0 + 3 O2 + 2 H2O → 2 SO4^2− + 4 H+ (sulfur oxidizers)
net reaction:
(4) CuFeS2 + 4 O2 → Cu2+ + Fe2+ + 2 SO4^2−
In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ ==> UO22+ + 2 Fe2+). In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals.
Further processing
The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene:
Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq)
The ligand donates electrons to the copper, producing a complex: a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, its position of equilibrium is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution.
Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there.
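As a rough illustration of the electrowinning step, the sketch below (not from the source) applies Faraday's law of electrolysis, m = I·t·M/(n·F), to estimate how much copper a given current deposits; the current and duration are arbitrary example values.

```python
# Hypothetical electrowinning estimate (not from the source) using Faraday's law.
FARADAY = 96485.0   # coulombs per mole of electrons
M_CU = 63.55        # g/mol, molar mass of copper
N_ELECTRONS = 2     # Cu2+ + 2 e- -> Cu

def copper_deposited_grams(current_amps, hours):
    charge = current_amps * hours * 3600.0       # total charge passed, in coulombs
    moles_cu = charge / (N_ELECTRONS * FARADAY)  # moles of copper reduced at the cathode
    return moles_cu * M_CU

# Example: 200 A for 24 hours (assumed figures) deposits roughly 5.7 kg of copper.
print(f"{copper_deposited_grams(200, 24):.0f} g")
```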
The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron:
Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq)
The electrons lost by the iron are taken up by the copper ions. The copper ion is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons).
Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal.
With fungi
Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as a source of acids that directly dissolve the metal.
Feasibility
Economic feasibility
Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. Low metal concentrations are not a problem for the bacteria, which simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The company simply collects the ions from the solution after the bacteria have finished.
Bioleaching can be used to extract metals from low-concentration ores, such as gold ores, that are too poor for other technologies. It can also partially replace the extensive crushing and grinding that translates to prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching outweighs the time it takes to extract the metal.
High-concentration ores, such as copper ores, are more economical to smelt rather than bioleach, due to the slow speed of the bacterial leaching process compared to smelting. The slow speed of bioleaching introduces a significant delay in cash flow for new mines. Nonetheless, at the world's largest copper mine, Escondida in Chile, the process seems to be favorable.
Bioleaching operations can also be very expensive to establish, and many companies, once started, cannot keep up with demand and end up in debt.
In space
In 2020 scientists showed, with an experiment with different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space.
Environmental impact
The process is more environmentally friendly than traditional extraction methods. For the company this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled.
Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water turning it acidic, causing environmental damage. Heavy ions such as iron, zinc, and arsenic leak during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a setup of bioleaching must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like Finnish Talvivaara proved to be environmentally and economically disastrous.
See also
Phytomining
References
Further reading
T. A. Fowler and F. K. Crundwell – "Leaching of zinc sulfide with Thiobacillus ferrooxidans"
Brandl H. (2001) "Microbial leaching of metals". In: Rehm H. J. (ed.) Biotechnology, Vol. 10. Wiley-VCH, Weinheim, pp. 191–224
Biotechnology
Economic geology
Metallurgical processes
Applied microbiology | Bioleaching | [
"Chemistry",
"Materials_science",
"Biology"
] | 2,169 | [
"Metallurgical processes",
"nan",
"Metallurgy",
"Biotechnology"
] |
4,115 | https://en.wikipedia.org/wiki/Boiling%20point | The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor.
The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100 °C under standard pressure at sea level, but at a lower temperature at altitude. For a given pressure, different liquids will boil at different temperatures.
The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar.
The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure).
Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid.
Saturation temperature and pressure
A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing).
Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy—any addition of thermal energy results in a phase transition.
If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied.
The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (1 atm), or the IUPAC standard pressure of 100.000 kPa (1 bar). At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point.
Suppose the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known. In that case, the boiling point can be calculated by using the Clausius–Clapeyron equation (a worked numerical example follows the definitions below), thus:
T_B = ( 1/T_0 − R·ln(P/P_0) / ΔH_vap )^(−1)
where:
T_B is the boiling point at the pressure of interest,
R is the ideal gas constant,
P is the vapor pressure of the liquid,
P_0 is some pressure where the corresponding boiling temperature T_0 is known (usually data available at 1 atm or 100 kPa (1 bar)),
ΔH_vap is the heat of vaporization of the liquid,
T_0 is the boiling temperature,
ln is the natural logarithm.
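For instance, the sketch below (not part of the original article) applies the relation above to water, taking the normal boiling point 373.15 K at 101.325 kPa and an assumed constant heat of vaporization of about 40.66 kJ/mol; in reality the heat of vaporization varies somewhat with temperature, so the result is approximate.

```python
# Approximate boiling point of water at reduced pressure via Clausius-Clapeyron.
import math

R = 8.314  # J/(mol K), ideal gas constant

def boiling_point_K(p_pa, p0_pa, t0_K, dh_vap):
    """T_B = 1 / (1/T0 - R*ln(P/P0)/dH_vap)."""
    return 1.0 / (1.0 / t0_K - R * math.log(p_pa / p0_pa) / dh_vap)

# Water at 70 kPa (roughly the pressure near 3,000 m elevation):
t = boiling_point_K(70_000, 101_325, 373.15, 40_660)
print(f"{t - 273.15:.1f} degC")   # about 90 degC
```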
Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature.
If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased.
There are two conventions regarding the standard boiling point of water: The normal boiling point is commonly given as 100 °C (very slightly lower under the thermodynamic definition of the Celsius scale based on the kelvin) at a pressure of 1 atm (101.325 kPa). The IUPAC-recommended standard boiling point of water, at a standard pressure of 100 kPa (1 bar), is slightly lower still. For comparison, on top of Mount Everest the atmospheric pressure is only about a third of its sea-level value, and the boiling point of water is correspondingly lower.
The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure.
Relation between the normal boiling point and the vapor pressure of liquids
The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid.
The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points.
For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure.
The critical point of a liquid is the highest temperature (and pressure) it will actually boil at.
See also Vapour pressure of water.
Boiling point of chemical elements
The element with the lowest boiling point is helium. Both the boiling points of rhenium and tungsten exceed 5000 K at standard pressure; because it is difficult to measure extreme temperatures precisely without bias, both have been cited in the literature as having the higher boiling point.
Boiling point as a reference property of a pure compound
As can be seen from the above plot of the logarithm of the vapor pressure vs. the temperature for any given pure chemical compound, its normal boiling point can serve as an indication of that compound's overall volatility. A given pure compound has only one normal boiling point, if any, and a compound's normal boiling point and melting point can serve as characteristic physical properties for that compound, listed in reference books. The higher a compound's normal boiling point, the less volatile that compound is overall, and conversely, the lower a compound's normal boiling point, the more volatile that compound is overall. Some compounds decompose at higher temperatures before reaching their normal boiling point, or sometimes even their melting point. For a stable compound, the boiling point ranges from its triple point to its critical point, depending on the external pressure. Beyond its triple point, a compound's normal boiling point, if any, is higher than its melting point. Beyond the critical point, a compound's liquid and vapor phases merge into one phase, which may be called a superheated gas. At any given temperature, if a compound's normal boiling point is lower, then that compound will generally exist as a gas at atmospheric external pressure. If the compound's normal boiling point is higher, then that compound can exist as a liquid or solid at that given temperature at atmospheric external pressure, and will so exist in equilibrium with its vapor (if volatile) if its vapors are contained. If a compound's vapors are not contained, then some volatile compounds can eventually evaporate away in spite of their higher boiling points.
In general, compounds with ionic bonds have high normal boiling points, if they do not decompose before reaching such high temperatures. Many metals have high boiling points, but not all. Very generally—with other factors being equal—in compounds with covalently bonded molecules, as the size of the molecule (or molecular mass) increases, the normal boiling point increases. When the molecular size becomes that of a macromolecule, polymer, or otherwise very large, the compound often decomposes at high temperature before the boiling point is reached. Another factor that affects the normal boiling point of a compound is the polarity of its molecules. As the polarity of a compound's molecules increases, its normal boiling point increases, other factors being equal. Closely related is the ability of a molecule to form hydrogen bonds (in the liquid state), which makes it harder for molecules to leave the liquid state and thus increases the normal boiling point of the compound. Simple carboxylic acids dimerize by forming hydrogen bonds between molecules. A minor factor affecting boiling points is the shape of a molecule. Making the shape of a molecule more compact tends to lower the normal boiling point slightly compared to an equivalent molecule with more surface area.
Most volatile compounds (anywhere near ambient temperatures) go through an intermediate liquid phase while warming up from a solid phase to eventually transform to a vapor phase. By comparison to boiling, a sublimation is a physical transformation in which a solid turns directly into vapor, which happens in a few select cases such as with carbon dioxide at atmospheric pressure. For such compounds, a sublimation point is a temperature at which a solid turning directly into vapor has a vapor pressure equal to the external pressure.
Impurities and mixtures
In the preceding section, boiling points of pure compounds were covered. Vapor pressures and boiling points of substances can be affected by the presence of dissolved impurities (solutes) or other miscible compounds, the degree of effect depending on the concentration of the impurities or other compounds. The presence of non-volatile impurities such as salts or compounds of a volatility far lower than the main component compound decreases its mole fraction and the solution's volatility, and thus raises the normal boiling point in proportion to the concentration of the solutes. This effect is called boiling point elevation. As a common example, salt water boils at a higher temperature than pure water.
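The size of this effect can be estimated with the usual ebullioscopic relation ΔT = i·Kb·m. The sketch below is illustrative only and not from the article; the ebullioscopic constant for water (about 0.512 K·kg/mol) and the salt content are assumed textbook-style values.

```python
# Illustrative boiling point elevation for salt water (assumed constants).
KB_WATER = 0.512   # K*kg/mol, ebullioscopic constant of water (assumed)
M_NACL = 58.44     # g/mol, molar mass of NaCl

def elevation_K(grams_salt_per_kg_water, vant_hoff_i=2.0):
    molality = grams_salt_per_kg_water / M_NACL   # mol of NaCl per kg of water
    return vant_hoff_i * KB_WATER * molality

# Seawater-strength salt water (~35 g NaCl per kg water) boils only ~0.6 K higher:
print(f"{elevation_K(35):.2f} K")
```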
In other mixtures of miscible compounds (components), there may be two or more components of varying volatility, each having its own pure component boiling point at any given pressure. The presence of other volatile components in a mixture affects the vapor pressures and thus boiling points and dew points of all the components in the mixture. The dew point is a temperature at which a vapor condenses into a liquid. Furthermore, at any given temperature, the composition of the vapor is different from the composition of the liquid in most such cases. In order to illustrate these effects between the volatile components in a mixture, a boiling point diagram is commonly used. Distillation is a process of boiling and [usually] condensation which takes advantage of these differences in composition between liquid and vapor phases.
Boiling point of water with elevation
Following is a table of the change in the boiling point of water with elevation, at intervals of 500 meters over the range of human habitation (from the Dead Sea to La Rinconada, Peru), then of 1,000 meters over the additional range of uninhabited surface elevation (up to Mount Everest), along with a similar range in Imperial units.
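Since the table itself is not reproduced here, the following sketch (an approximation, not the article's data) generates comparable numbers by combining a simple isothermal barometric pressure model with the Clausius–Clapeyron estimate given earlier; all constants are assumed round values.

```python
# Approximate boiling point of water versus elevation (illustrative only).
import math

R = 8.314                                  # J/(mol K)
G, M_AIR, T_AIR = 9.81, 0.02896, 288.15    # gravity, molar mass of air (kg/mol), reference air temperature (K)
P0, T0, DH_VAP = 101_325.0, 373.15, 40_660.0

def pressure_pa(elevation_m):
    """Isothermal barometric formula: P = P0 * exp(-M*g*h / (R*T))."""
    return P0 * math.exp(-M_AIR * G * elevation_m / (R * T_AIR))

def boiling_point_C(elevation_m):
    p = pressure_pa(elevation_m)
    return 1.0 / (1.0 / T0 - R * math.log(p / P0) / DH_VAP) - 273.15

for h in range(0, 9001, 1500):
    print(f"{h:>5} m  ->  {boiling_point_C(h):5.1f} degC")
```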
Element table
See also
Boiling points of the elements (data page)
Boiling-point elevation
Critical point (thermodynamics)
Ebulliometer, a device to accurately measure the boiling point of liquids
Hagedorn temperature
Joback method (Estimation of normal boiling points from molecular structure)
List of gases including boiling points
Melting point
Subcooling
Superheating
Trouton's constant relating latent heat to boiling point
Triple point
References
External links
Gases
Meteorological quantities
Temperature
Threshold temperatures | Boiling point | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,446 | [
"Scalar physical quantities",
"Matter",
"Temperature",
"Physical phenomena",
"Physical quantities",
"Phase transitions",
"Thermodynamic properties",
"SI base quantities",
"Intensive quantities",
"Phases of matter",
"Threshold temperatures",
"Meteorological quantities",
"Quantity",
"Thermod... |
4,116 | https://en.wikipedia.org/wiki/Big%20Bang | The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. The notion of an expanding universe was first scientifically proposed by physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical observation in support of an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which discerned that galaxies are moving away from Earth at speeds proportional to their distance. Independent of Friedmann's work, and independent of Hubble's observations, physicist Georges Lemaître proposed that the universe emerged from a "primeval atom" in 1931, introducing the modern notion of the Big Bang.
Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments.
Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). Physics lacks a widely accepted theory of quantum gravity that can model the earliest conditions of the Big Bang. In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe.
There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The unequal abundances of matter and antimatter that allowed this to occur is an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy.
Features of the models
The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the CMB, large-scale structure, and Hubble's law. The models depend on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location.
These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10−5. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars.
The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10−5 via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995.
Horizons
An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe.
Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well.
Thermalization
Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other.
Timeline
According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling.
Singularity
In the absence of a perfect cosmological principle, extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. Models based on general relativity alone cannot fully extrapolate toward the singularity. In some proposals, such as the emergent Universe models, the singularity is replaced by another cosmological epoch. A different approach identifies the initial singularity as a singularity predicted by some models of the Big Bang theory to have existed before the Big Bang event.
This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event—known as the "age of the universe"—is 13.8 billion years.
Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient.
Inflation and baryogenesis
The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10−43 seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, about 1.6 × 10−35 m, and the temperature was approximately 10^32 degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10−43 seconds, where gravitation separated from the other forces as the universe's temperature fell.
At approximately 10−37 seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10−36 seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified.
Inflation stopped locally at around 10−33 to 10−32 seconds, with the observable universe's volume having increased by a factor of at least 10^78. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.
Cooling
The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10−12 seconds.
After about 10−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10^8 of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos).
A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei.
As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. The recombination epoch began after about 379,000 years, when the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background.
Structure formation
After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. (Warm dark matter is ruled out by early reionization.) This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.
In an "extended model" which includes hot dark matter in the form of neutrinos, then the "physical baryon density" is estimated at 0.023. (This is different from the 'baryon density' expressed as a fraction of the total matter/energy density, which is about 0.046.) The corresponding cold dark matter density is about 0.11, and the corresponding neutrino density is estimated to be less than 0.0062.
Cosmic acceleration
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate.
Dark energy in its simplest formulation is modeled by a cosmological constant term in Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory.
All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10−15 seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics.
Concept history
Etymology
English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s.
It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative.
The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. An attempt to find a more suitable alternative was not successful.
Development
The Big Bang models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (spiral nebula is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time.
In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law.
Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence.
In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the Big Bang concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed:
During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis.
After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced BBN and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe.
In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble Constant and the matter-density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe).
In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters.
Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.
Observational evidence
The earliest and most direct observational evidence of the validity of the theory are the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), discovery and measurement of the cosmic microwave background and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models.
Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics.
Hubble's law and the expansion of the universe
Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed (a brief numerical illustration follows the definitions below):
v = H0 D
where
v is the recessional velocity of the galaxy or other distant object,
D is the proper distance to the object, and
H0 is Hubble's constant, as measured in km/s/Mpc by the WMAP.
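To make the relation concrete, the sketch below (not from the source) plugs in a round, assumed value of H0 = 70 km/s/Mpc rather than the WMAP figure, and also shows the associated "Hubble time" 1/H0, which sets the rough timescale of the expansion.

```python
# Illustrative use of Hubble's law with an assumed H0 of 70 km/s/Mpc.
H0 = 70.0                    # km/s per Mpc (assumed round value)
KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def recession_velocity_km_s(distance_mpc):
    """Hubble's law: v = H0 * D."""
    return H0 * distance_mpc

print(recession_velocity_km_s(100.0))   # a galaxy 100 Mpc away recedes at ~7,000 km/s

hubble_time_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(f"1/H0 ~ {hubble_time_gyr:.1f} billion years")   # ~14 billion years
```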
Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker.
The theory requires the relation v = HD to hold at all times, where D is the proper distance, v is the recessional velocity, and v, H, and D vary as the universe expands (hence we write H0 to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one.
An unexplained discrepancy with the determination of the Hubble constant is known as Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder.
Cosmic microwave background radiation
In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics.
The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. Around the time of recombination, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10^4, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10^5. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results.
During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies.
In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.
Abundance of primordial elements
Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (4He), helium-3 (3He), deuterium (2H), and lithium-7 (7Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for 4He:H, about 10−3 for 2H:H, about 10−4 for 3He:H, and about 10−9 for 7Li:H.
The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for 4He, and off by a factor of two for 7Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than 3He, and in constant ratios, too.
Galactic evolution and distribution
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.
Primordial gas clouds
In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects have been observed to contain heavy elements that are formed in stars. Despite being sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.
Other lines of evidence
The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates turn out to agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model.
The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
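The scaling behind this prediction is simply T(z) = T0·(1 + z). The one-liner below is an illustration, not part of the article, using the present-day CMB temperature of about 2.725 K quoted earlier.

```python
# CMB temperature at redshift z under the standard expansion picture (illustrative).
T0_K = 2.725   # present-day CMB temperature, kelvin

def cmb_temperature_K(z):
    return T0_K * (1.0 + z)

print(f"{cmb_temperature_K(2.0):.1f} K")   # about 8.2 K at z = 2
```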
Future observations
Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe, up to less than a second after the Big Bang.
Problems and related issues in physics
As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven. What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.
Baryon asymmetry
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
Dark energy
Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy".
Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure (baryon acoustic oscillations) as a cosmic ruler.
Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant.
The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units.
Dark matter
During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters.
Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway.
Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.
Horizon problem
The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature.
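For reference, the particle horizon invoked here is, in FLRW models, the proper distance light could have travelled since the earliest times; a standard expression is:

d_{\mathrm{hor}}(t) = a(t) \int_0^{t} \frac{c\,dt'}{a(t')} .

With expansion dominated only by radiation or matter, this integral remains small enough at last scattering that causally connected patches subtend only about 2 degrees on today's sky, which is the tension described above.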
A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation.
Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB.
A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.
Magnetic monopoles
The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.
Flatness problem
The flatness problem (also known as the oldness problem) is an observational problem associated with the FLRW metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat.
The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10^−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does today.
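The growth of any departure from flatness can be read off from the Friedmann equation; writing the total density parameter as Ω, one standard form is:

\Omega - 1 = \frac{k c^2}{a^2 H^2},

and since the product aH decreases during both radiation and matter domination, the magnitude of Ω − 1 grows with time. An almost exactly flat universe today therefore requires an extraordinarily flat universe at early times, or a mechanism such as inflation that drives Ω toward 1.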
Misconceptions
One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe.
Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands.
The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel.
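A short worked illustration of this point, again assuming a Hubble constant of roughly 70 km/s/Mpc:

v = H_0 D, \qquad D_H = \frac{c}{H_0} \approx 4{,}300\ \mathrm{Mpc} \approx 14\ \mathrm{billion\ light\ years},

so any galaxy whose proper distance D exceeds the Hubble distance D_H has a recession speed v greater than c under Hubble's law, without any object locally outrunning light.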
Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice. The most natural interpretation of the cosmological redshift is that it is a Doppler shift.
Implications
Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture.
Pre–Big Bang cosmology
The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, inflation models suggest that if specific laws of nature were to come into existence in a random way, some combinations of them would be far more probable than others, partly explaining why our universe is rather stable. Another possible explanation for the stability of the universe is a hypothetical multiverse, in which every possible universe exists and thinking species can emerge only in those universes stable enough to allow it. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created.
The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang.
While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony".
Some speculative proposals in this regard, each of which entails untested hypotheses, are:
The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. From our perspective it took place instantly, owing to the absence of perceived time before the Big Bang.
Emergent Universe models, which feature a low-activity past-eternal era before the Big Bang, resembling ancient ideas of a cosmic egg and birth of the world out of primordial chaos.
Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient.
Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other.
Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang. This is sometimes referred to as pre-big bang inflation.
Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse.
Ultimate fate of the universe
Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch.
Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death.
Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip.
Religious and philosophical interpretations
As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
See also
, a Big Bang speculation
. Also known as the Big Chill and the Big Freeze
, a discredited theory that denied the Big Bang and posited that the universe always existed
External links
Once Upon a Universe – STFC funded project explaining the history of the universe in easy-to-understand language
"Big Bang Cosmology" – NASA/WMAP Science Team
"The Big Bang" – NASA Science
"Big Bang, Big Bewilderment" – Big bang model with animated graphics by Johannes Koelman
"The Trouble With "The Big Bang"" – A rash of recent articles illustrates a longstanding confusion over the famous term. by Sabine Hossenfelde
Physical cosmology
Concepts in astronomy
Astronomical events
Scientific models
Origins
Beginnings | Big Bang | [
"Physics",
"Astronomy"
] | 10,454 | [
"Beginnings",
"Cosmogony",
"Astronomical sub-disciplines",
"Physical quantities",
"Time",
"Concepts in astronomy",
"Big Bang",
"Astronomical events",
"Theoretical physics",
"Astrophysics",
"Spacetime",
"Physical cosmology"
] |
4,163 | https://en.wikipedia.org/wiki/Bertrand%20Russell | Bertrand Arthur William Russell, 3rd Earl Russell, (18 May 1872 – 2 February 1970) was a British philosopher, logician, mathematician, and public intellectual. He had influence on mathematics, logic, set theory, and various areas of analytic philosophy.
He was one of the early 20th century's prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell with Moore led the British "revolt against idealism". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic (see logicism). Russell's article "On Denoting" has been considered a "paradigm of philosophy".
Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, and initially supported appeasement of Adolf Hitler's Nazi Germany, before changing his view in 1943, describing war as a necessary "lesser of two evils". In the wake of World War II, he welcomed American global hegemony in preference to either Soviet hegemony or no (or ineffective) world leadership, even if it were to come at the cost of the United States using its nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament.
In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963).
Biography
Early life and background
Bertrand Arthur William Russell was born at Ravenscroft, a country house in Trellech, Monmouthshire, on 18 May 1872, into an influential and liberal family of the British aristocracy. His parents were Viscount and Viscountess Amberley. Lord Amberley consented to his wife's affair with their children's tutor, the biologist Douglas Spalding. Both were early advocates of birth control at a time when this was considered scandalous. Lord Amberley was a deist, and even asked the philosopher John Stuart Mill to act as Russell's secular godfather. Mill died the year after Russell's birth, but his writings later influenced Russell's life.
Russell's paternal grandfather, Lord John Russell, later 1st Earl Russell (1792–1878), had twice been Prime Minister of the United Kingdom in the 1840s and 1860s. A member of Parliament since the early 1810s, he met with Napoleon Bonaparte in Elba. The Russells had been prominent in England for several centuries before this, coming to power and the peerage with the rise of the Tudor dynasty (see: Duke of Bedford). They established themselves as one of the leading Whig families and participated in political events from the dissolution of the monasteries in 1536–1540 to the Glorious Revolution in 1688–1689 and the Great Reform Act in 1832.
Lady Amberley was the daughter of Lord and Lady Stanley of Alderley. Russell often feared the ridicule of his maternal grandmother, one of the campaigners for education of women.
Childhood and adolescence
Russell had two siblings: brother Frank (seven years older), and sister Rachel (four years older). In June 1874, Russell's mother died of diphtheria, followed shortly by Rachel's death. In January 1876, his father died of bronchitis after a long period of depression. Frank and Bertrand were placed in the care of their Victorian paternal grandparents, who lived at Pembroke Lodge in Richmond Park. His grandfather, former Prime Minister Earl Russell, died in 1878, and was remembered by Russell as a kind old man in a wheelchair. His grandmother, the Countess Russell (née Lady Frances Elliot), was the central family figure for the rest of Russell's childhood and youth.
The Countess was from a Scottish Presbyterian family and petitioned the Court of Chancery to set aside a provision in Amberley's will requiring the children to be raised as agnostics. Despite her religious conservatism, she held progressive views in other areas (accepting Darwinism and supporting Irish Home Rule), and her influence on Bertrand Russell's outlook on social justice and standing up for principle remained with him throughout his life. Her favourite Bible verse, "Thou shalt not follow a multitude to do evil", became his motto. The atmosphere at Pembroke Lodge was one of frequent prayer, emotional repression and formality; Frank reacted to this with open rebellion, but the young Bertrand learned to hide his feelings.
Russell's adolescence was lonely and he contemplated suicide. He remarked in his autobiography that his interests in "nature and books and (later) mathematics saved me from complete despondency;" only his wish to know more mathematics kept him from suicide. He was educated at home by a series of tutors. When Russell was eleven years old, his brother Frank introduced him to the work of Euclid, which he described in his autobiography as "one of the great events of my life, as dazzling as first love".
During these formative years, he also discovered the works of Percy Bysshe Shelley. Russell wrote: "I spent all my spare time reading him, and learning him by heart, knowing no one to whom I could speak of what I thought or felt, I used to reflect how wonderful it would have been to know Shelley, and to wonder whether I should meet any live human being with whom I should feel so much sympathy." Russell claimed that beginning at age 15, he spent considerable time thinking about the validity of Christian religious dogma, which he found unconvincing. At this age, he came to the conclusion that there is no free will and, two years later, that there is no life after death. Finally, at the age of 18, after reading Mill's Autobiography, he abandoned the "First Cause" argument and became an atheist.
He travelled to the continent in 1890 with an American friend, Edward FitzGerald, and with FitzGerald's family he visited the Paris Exhibition of 1889 and climbed the Eiffel Tower soon after it was completed.
Education
Russell won a scholarship to read for the Mathematical Tripos at Trinity College, Cambridge, and began his studies there in 1890, taking as coach Robert Rumsey Webb. He became acquainted with the younger George Edward Moore and came under the influence of Alfred North Whitehead, who recommended him to the Cambridge Apostles. He distinguished himself in mathematics and philosophy, graduating as seventh Wrangler in the former in 1893 and becoming a Fellow in the latter in 1895.
Early career
Russell began his published work in 1896 with German Social Democracy, a study in politics that was an early indication of his interest in political and social theory. In 1896 he taught German social democracy at the London School of Economics. He was a member of the Coefficients dining club of social reformers set up in 1902 by the Fabian campaigners Sidney and Beatrice Webb.
He now started a study of the foundations of mathematics at Trinity. In 1897, he wrote An Essay on the Foundations of Geometry (submitted at the Fellowship Examination of Trinity College) which discussed the Cayley–Klein metrics used for non-Euclidean geometry. He attended the first International Congress of Philosophy in Paris in 1900 where he met Giuseppe Peano and Alessandro Padoa. The Italians had responded to Georg Cantor, making a science of set theory; they gave Russell their literature including the Formulario mathematico. Russell was impressed by the precision of Peano's arguments at the Congress, read the literature upon returning to England, and came upon Russell's paradox. In 1903 he published The Principles of Mathematics, a work on the foundations of mathematics. It advanced a thesis of logicism, that mathematics and logic are one and the same.
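The paradox mentioned here can be stated in one line of naive set theory: let R be the set of all sets that are not members of themselves. Then

R = \{\, x \mid x \notin x \,\} \;\Longrightarrow\; \bigl(R \in R \iff R \notin R\bigr),

a contradiction showing that unrestricted set comprehension is inconsistent; avoiding it motivated the theory of types later used in Principia Mathematica.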
At the age of 29, in February 1901, Russell underwent what he called a "sort of mystic illumination", after witnessing Whitehead's wife's suffering in an angina attack. "I found myself filled with semi-mystical feelings about beauty and with a desire almost as profound as that of the Buddha to find some philosophy which should make human life endurable", Russell would later recall. "At the end of those five minutes, I had become a completely different person."
In 1905, he wrote the essay "On Denoting", which was published in the philosophical journal Mind. Russell was elected a Fellow of the Royal Society (FRS) in 1908. The three-volume Principia Mathematica, written with Whitehead, was published between 1910 and 1913. This, along with the earlier The Principles of Mathematics, soon made Russell world-famous in his field. Russell's first political activity was as the Independent Liberal candidate in the 1907 by-election for the Wimbledon constituency, where he was not elected.
In 1910, he became a lecturer at the University of Cambridge, Trinity College, where he had studied. He was considered for a fellowship, which would give him a vote in the college government and protect him from being fired for his opinions, but was passed over because he was considered "anti-clerical", essentially because he was agnostic. He was approached by the Austrian engineering student Ludwig Wittgenstein, who started undergraduate study with him. Russell viewed Wittgenstein as a successor who would continue his work on logic. He spent hours dealing with Wittgenstein's various phobias and his bouts of despair. This was a drain on Russell's energy, but Russell continued to be fascinated by him and encouraged his academic development, including the publication of Wittgenstein's Tractatus Logico-Philosophicus in 1922. Russell delivered his lectures on logical atomism, his version of these ideas, in 1918, before the end of World War I. Wittgenstein was, at that time, serving in the Austrian Army and subsequently spent nine months in an Italian prisoner of war camp at the end of the conflict.
First World War
During World War I, Russell was one of the few people to engage in active pacifist activities. In 1916, because of his lack of a fellowship, he was dismissed from Trinity College following his conviction under the Defence of the Realm Act 1914. He later described this, in Free Thought and Official Propaganda, as an illegitimate means the state used to violate freedom of expression. Russell championed the case of Eric Chappelow, a poet jailed and abused as a conscientious objector. Russell played a part in the Leeds Convention in June 1917, a historic event which saw well over a thousand "anti-war socialists" gather; many being delegates from the Independent Labour Party and the Socialist Party, united in their pacifist beliefs and advocating a peace settlement. The international press reported that Russell appeared with a number of Labour Members of Parliament (MPs), including Ramsay MacDonald and Philip Snowden, as well as former Liberal MP and anti-conscription campaigner, Professor Arnold Lupton. After the event, Russell told Lady Ottoline Morrell that, "to my surprise, when I got up to speak, I was given the greatest ovation that was possible to give anybody".
His conviction in 1916 resulted in Russell being fined £100, which he refused to pay in the hope that he would be sent to prison, but his books were sold at auction to raise the money. The books were bought by friends; he later treasured his copy of the King James Bible that was stamped "Confiscated by Cambridge Police".
A later conviction for publicly lecturing against inviting the United States to enter the war on the United Kingdom's side resulted in six months' imprisonment in Brixton Prison (see Bertrand Russell's political views) in 1918; he was prosecuted under the Defence of the Realm Act. He later said of his imprisonment:
While he was reading the chapter about Gordon in Strachey's Eminent Victorians, he laughed out loud in his cell, prompting the warder to intervene and remind him that "prison was a place of punishment".
Russell was reinstated to Trinity in 1919, resigned in 1920, was Tarner Lecturer in 1926 and became a Fellow again in 1944 until 1949.
In 1924, Russell again gained press attention when attending a "banquet" in the House of Commons with well-known campaigners, including Arnold Lupton, who had been an MP and had also endured imprisonment for "passive resistance to military or naval service".
G. H. Hardy on the Trinity controversy
In 1941, G. H. Hardy wrote a 61-page pamphlet titled Bertrand Russell and Trinity, published later as a book by Cambridge University Press with a foreword by C. D. Broad, in which he gave an authoritative account of Russell's 1916 dismissal from Trinity College, explaining that a reconciliation between the college and Russell had later taken place and giving details about Russell's personal life. Hardy writes that Russell's dismissal had created a scandal since the vast majority of the Fellows of the College opposed the decision. The ensuing pressure from the Fellows induced the Council to reinstate Russell. In January 1920, it was announced that Russell had accepted the reinstatement offer from Trinity and would begin lecturing in October. In July 1920, Russell applied for a one-year leave of absence; this was approved. He spent the year giving lectures in China and Japan. In January 1921, it was announced by Trinity that Russell had resigned and his resignation had been accepted. This resignation, Hardy explains, was voluntary and was not the result of another altercation.
The reason for the resignation, according to Hardy, was that Russell was going through a tumultuous time in his personal life with a divorce and subsequent remarriage. Russell contemplated asking Trinity for another one-year leave of absence but decided against it since this would have been an "unusual application" and the situation had the potential to snowball into another controversy. Although Russell did the right thing, in Hardy's opinion, the reputation of the College suffered with Russell's resignation since the 'world of learning' knew about Russell's altercation with Trinity but not that the rift had healed. In 1925, Russell was asked by the Council of Trinity College to give the Tarner Lectures on the Philosophy of the Sciences; these would later be the basis for one of Russell's best-received books according to Hardy: The Analysis of Matter, published in 1927. In the preface to the Trinity pamphlet, Hardy wrote:
Between the wars
In August 1920, Russell travelled to Soviet Russia as part of an official delegation sent by the British government to investigate the effects of the Russian Revolution. He wrote a four-part series of articles, titled "Soviet Russia—1920", for the magazine The Nation. He met Vladimir Lenin and had an hour-long conversation with him. In his autobiography, he mentions that he found Lenin disappointing, sensing an "impish cruelty" in him and comparing him to "an opinionated professor". He cruised down the Volga on a steamship. His experiences destroyed his previous tentative support for the revolution. He subsequently wrote a book, The Practice and Theory of Bolshevism, about his experiences on this trip, taken with a group of 24 others from the UK, all of whom came home thinking well of the Soviet regime, despite Russell's attempts to change their minds. For example, he told them that he had heard shots fired in the middle of the night and was sure that these were clandestine executions, but the others maintained that it was only cars backfiring.
Russell's lover Dora Black, a British author, feminist and socialist campaigner, visited Soviet Russia independently at the same time; in contrast to his reaction, she was enthusiastic about the Bolshevik revolution.
The following year, Russell, accompanied by Dora, visited Peking (as Beijing was then known outside of China) to lecture on philosophy for a year. He went with optimism and hope, seeing China as then being on a new path. Other scholars present in China at the time included John Dewey and Rabindranath Tagore, the Indian Nobel-laureate poet. Before leaving China, Russell became gravely ill with pneumonia, and incorrect reports of his death were published in the Japanese press. When the couple visited Japan on their return journey, Dora took on the role of spurning the local press by handing out notices reading "Mr. Bertrand Russell, having died according to the Japanese press, is unable to give interviews to Japanese journalists". Apparently the Japanese press found this harsh and reacted resentfully. Russell supported his family during this time by writing popular books explaining matters of physics, ethics, and education to the layman.
From 1922 to 1927 the Russells divided their time between London and Cornwall, spending summers in Porthcurno. In the 1922 and 1923 general elections Russell stood as a Labour Party candidate in the Chelsea constituency, but only on the basis that he knew he was unlikely to be elected in such a safe Conservative seat, and he was unsuccessful on both occasions.
After the birth of his two children, he became interested in education, especially early childhood education. He was not satisfied with the old traditional education and thought that progressive education also had some flaws; as a result, together with Dora, Russell founded the experimental Beacon Hill School in 1927. The school was run from a succession of different locations, including its original premises at the Russells' residence, Telegraph House, near Harting, West Sussex. During this time, he published On Education, Especially in Early Childhood. On 8 July 1930, Dora gave birth to her third child Harriet Ruth. After he left the school in 1932, Dora continued it until 1943.
In 1927 Russell met Barry Fox (later Barry Stevens), who became a known Gestalt therapist and writer in later years. They developed an intense relationship, and in Fox's words: "...for three years we were very close." Fox sent her daughter Judith to Beacon Hill School. From 1927 to 1932 Russell wrote 34 letters to Fox. Upon the death of his elder brother Frank, in 1931, Russell became the 3rd Earl Russell.
Russell's marriage to Dora grew tenuous, and it reached a breaking point over her having two children with an American journalist, Griffin Barry. They separated in 1932 and finally divorced. On 18 January 1936, Russell married his third wife, an Oxford undergraduate named Patricia ("Peter") Spence, who had been his children's governess since 1930. Russell and Peter had one son, Conrad Sebastian Robert Russell, 5th Earl Russell, who became a historian and one of the leading figures in the Liberal Democrat party.
Russell returned in 1937 to the London School of Economics to lecture on the science of power. During the 1930s, Russell became a friend and collaborator of V. K. Krishna Menon, then President of the India League, the foremost lobby in the United Kingdom for Indian independence. Russell chaired the India League from 1932 to 1939.
Second World War
Russell's political views changed over time, mostly about war. He opposed rearmament against Nazi Germany. In 1937, he wrote in a personal letter: "If the Germans succeed in sending an invading army to England we should do best to treat them as visitors, give them quarters and invite the commander-in-chief to dine with the prime minister." In 1940, he abandoned his appeasement view that avoiding a full-scale world war was more important than defeating Hitler. He concluded that Adolf Hitler taking over all of Europe would be a permanent threat to democracy. In 1943, he adopted a stance toward large-scale warfare called "relative political pacifism": "War was always a great evil, but in some particularly extreme circumstances, it may be the lesser of two evils."
Before World War II, Russell taught at the University of Chicago, later moving on to Los Angeles to lecture at the UCLA Department of Philosophy. He was appointed professor at the City College of New York (CCNY) in 1940, but after a public outcry the appointment was annulled by a court judgment that pronounced him "morally unfit" to teach at the college because of his opinions, especially those relating to sexual morality, detailed in Marriage and Morals (1929). The matter was taken to the New York Supreme Court by Jean Kay who was afraid that her daughter would be harmed by the appointment, though her daughter was not a student at CCNY. Many intellectuals, led by John Dewey, protested at his treatment. Albert Einstein's oft-quoted aphorism that "great spirits have always encountered violent opposition from mediocre minds" originated in his open letter, dated 19 March 1940, to Morris Raphael Cohen, a professor emeritus at CCNY, supporting Russell's appointment. Dewey and Horace M. Kallen edited a collection of articles on the CCNY affair in The Bertrand Russell Case. Russell soon joined the Barnes Foundation, lecturing to a varied audience on the history of philosophy; these lectures formed the basis of A History of Western Philosophy. His relationship with the eccentric Albert C. Barnes soon soured, and he returned to the UK in 1944 to rejoin the faculty of Trinity College.
Later life
Russell participated in many broadcasts over the BBC, particularly The Brains Trust and for the Third Programme, on various topical and philosophical subjects. By this time Russell was known outside academic circles, frequently the subject or author of magazine and newspaper articles, and was called upon to offer opinions on a variety of subjects, even mundane ones. En route to one of his lectures in Trondheim, Russell was one of 24 survivors (out of 43 passengers) of an aeroplane crash in Hommelvik in October 1948. He said he owed his life to smoking since the people who drowned were in the non-smoking part of the plane. A History of Western Philosophy (1945) became a best-seller and provided Russell with a steady income for the remainder of his life.
In 1942, Russell argued in favour of a moderate socialism, capable of overcoming its metaphysical principles. In an inquiry on dialectical materialism, launched by the Austrian artist and philosopher Wolfgang Paalen in his journal DYN, Russell said: "I think the metaphysics of both Hegel and Marx plain nonsense—Marx's claim to be 'science' is no more justified than Mary Baker Eddy's. This does not mean that I am opposed to socialism."
In 1943, Russell expressed support for Zionism: "I have come gradually to see that, in a dangerous and largely hostile world, it is essential to Jews to have some country which is theirs, some region where they are not suspected aliens, some state which embodies what is distinctive in their culture".
In a speech in 1948, Russell said that if the USSR's aggression continued, it would be morally worse to go to war after the USSR possessed an atomic bomb than before it possessed one, because if the USSR had no bomb the West's victory would come more swiftly and with fewer casualties than if there were atomic bombs on both sides. At that time, only the United States possessed an atomic bomb, and the USSR was pursuing an aggressive policy towards the countries in Eastern Europe which were being absorbed into the Soviet Union's sphere of influence. Many understood Russell's comments to mean that Russell approved of a first strike in a war with the USSR, including Nigel Lawson, who was present when Russell spoke of such matters. Others, including Griffin, who obtained a transcript of the speech, have argued that he was explaining the usefulness of America's atomic arsenal in deterring the USSR from continuing its domination of Eastern Europe.
Just after the atomic bombs exploded over Hiroshima and Nagasaki, Russell wrote letters, and published articles in newspapers from 1945 to 1948, stating clearly that it was morally justified and better to go to war against the USSR using atomic bombs while the United States possessed them and before the USSR did. In September 1949, one week after the USSR tested its first A-bomb, but before this became known, Russell wrote that the USSR would be unable to develop nuclear weapons because following Stalin's purges only science based on Marxist principles would be practised in the Soviet Union. After it became known that the USSR had carried out its nuclear bomb tests, Russell declared his position advocating the total abolition of atomic weapons.
In 1948, Russell was invited by the BBC to deliver the inaugural Reith Lectures—what was to become an annual series of lectures, still broadcast by the BBC. His series of six broadcasts, titled Authority and the Individual, explored themes such as the role of individual initiative in the development of a community and the role of state control in a progressive society. Russell continued to write about philosophy. He wrote a foreword to Words and Things by Ernest Gellner, which was highly critical of the later thought of Ludwig Wittgenstein and of ordinary language philosophy. Gilbert Ryle refused to have the book reviewed in the philosophical journal Mind, which caused Russell to respond via The Times. The result was a month-long correspondence in The Times between the supporters and detractors of ordinary language philosophy, which was ended when the paper published an editorial critical of both sides but agreeing with the opponents of ordinary language philosophy.
In the King's Birthday Honours of 9 June 1949, Russell was awarded the Order of Merit, and the following year he was awarded the Nobel Prize in Literature. When he was given the Order of Merit, George VI was affable but embarrassed at decorating a former jailbird, saying, "You have sometimes behaved in a manner that would not do if generally adopted". Russell merely smiled, but afterwards claimed that the reply "That's right, just like your brother" immediately came to mind.
In 1950, Russell attended the inaugural conference for the Congress for Cultural Freedom, a CIA-funded anti-communist organisation committed to the deployment of culture as a weapon during the Cold War. Russell was one of the known patrons of the Congress until he resigned in 1956.
In 1952, Russell was divorced by Spence, with whom he had been very unhappy. Conrad, Russell's son by Spence, did not see his father between the time of the divorce and 1968 (at which time his decision to meet his father caused a permanent breach with his mother). Russell married his fourth wife, Edith Finch, soon after the divorce, on 15 December 1952. They had known each other since 1925, and Edith had taught English at Bryn Mawr College near Philadelphia, sharing a house for 20 years with Russell's old friend Lucy Donnelly. Edith remained with him until his death, and, by all accounts, their marriage was a happy, close, and loving one. Russell's eldest son John suffered from mental illness, which was the source of ongoing disputes between Russell and his former wife Dora.
In 1962 Russell played a public role in the Cuban Missile Crisis: in an exchange of telegrams with Soviet leader Nikita Khrushchev, Khrushchev assured him that the Soviet government would not be reckless. Russell sent this telegram to President Kennedy:
According to historian Peter Knight, after JFK's assassination, Russell, "prompted by the emerging work of the lawyer Mark Lane in the US ... rallied support from other noteworthy and left-leaning compatriots to form a Who Killed Kennedy Committee in June 1964, members of which included Michael Foot MP, Caroline Benn, the publisher Victor Gollancz, the writers John Arden and J. B. Priestley, and the Oxford history professor Hugh Trevor-Roper." Russell published a highly critical article in The Minority of One weeks before the Warren Commission Report was published, setting forth 16 Questions on the Assassination. Russell equated the Oswald case with the Dreyfus affair of late 19th-century France, in which the state convicted an innocent man. Russell also criticised the American press for failing to heed any voices critical of the official version.
Political causes
Bertrand Russell was opposed to war from a young age; his opposition to World War I was used as grounds for his dismissal from Trinity College at Cambridge. This incident fused two of his controversial causes: he had previously failed to be granted fellow status, which would have protected him from firing, because he was not willing either to pretend to be a devout Christian or at least to avoid admitting he was agnostic.
He later described the resolution of these issues as essential to freedom of thought and expression, citing the incident in Free Thought and Official Propaganda, where he explained that the expression of any idea, even the most obviously "bad", must be protected not only from direct State intervention but also economic leveraging and other means of being silenced:
Russell spent the 1950s and 1960s engaged in political causes primarily related to nuclear disarmament and opposing the Vietnam War. The 1955 Russell–Einstein Manifesto was a document calling for nuclear disarmament and was signed by eleven of the most prominent nuclear physicists and intellectuals of the time. In October 1960 "The Committee of 100" was formed with a declaration by Russell and Michael Scott, entitled "Act or Perish", which called for a "movement of nonviolent resistance to nuclear war and weapons of mass destruction". In September 1961, at the age of 89, Russell was jailed for seven days in Brixton Prison for a "breach of the peace" after taking part in an anti-nuclear demonstration in London. The magistrate offered to exempt him from jail if he pledged himself to "good behaviour", to which Russell replied: "No, I won't."
From 1966 to 1967, Russell worked with Jean-Paul Sartre and many other intellectual figures to form the Russell Vietnam War Crimes Tribunal to investigate the conduct of the United States in Vietnam. He wrote many letters to world leaders during this period.
Early in his life, Russell supported eugenicist policies. In 1894, he proposed that the state issue certificates of health to prospective parents and withhold public benefits from those considered unfit. In 1929, he wrote that people deemed "mentally defective" and "feebleminded" should be sexually sterilised because they "are apt to have enormous numbers of illegitimate children, all, as a rule, wholly useless to the community." Russell was also an advocate of population control:
On 20 November 1948, in a public speech at Westminster School, addressing a gathering arranged by the New Commonwealth, Russell shocked some observers by suggesting that a preemptive nuclear strike on the Soviet Union was justified. Russell argued that war between the United States and the Soviet Union seemed inevitable, so it would be a humanitarian gesture to get it over with quickly and have the United States in the dominant position. At that time, Russell argued, humanity could survive such a war, whereas a full nuclear war after both sides had manufactured large stockpiles of more destructive weapons was likely to result in the extinction of the human race. Russell later relented from this stance, instead arguing for mutual disarmament by the nuclear powers.
In 1956, before and during the Suez Crisis, Russell expressed his opposition to European imperialism in the Middle East. He viewed the crisis as another reminder of the pressing need for an effective mechanism for international governance, and to restrict national sovereignty in places such as the Suez Canal area "where general interest is involved". At the same time the Suez Crisis was taking place, the world was also captivated by the Hungarian Revolution and the subsequent crushing of the revolt by intervening Soviet forces. Russell attracted criticism for speaking out fervently against the Suez war while ignoring Soviet repression in Hungary, to which he responded that he did not criticise the Soviets "because there was no need. Most of the so-called Western World was fulminating". Although he later feigned a lack of concern, at the time he was disgusted by the brutal Soviet response, and on 16 November 1956, he expressed approval for a declaration of support for Hungarian scholars which Michael Polanyi had cabled to the Soviet embassy in London twelve days previously, shortly after Soviet troops had entered Budapest.
In November 1957 Russell wrote an article addressing US President Dwight D. Eisenhower and Soviet Premier Nikita Khrushchev, urging a summit to consider "the conditions of co-existence". Khrushchev responded that peace could be served by such a meeting. In January 1958 Russell elaborated his views in The Observer, proposing a cessation of all nuclear weapons production, with the UK taking the first step by unilaterally suspending its own nuclear weapons program if necessary, and with Germany "freed from all alien armed forces and pledged to neutrality in any conflict between East and West". US Secretary of State John Foster Dulles replied for Eisenhower. The exchange of letters was published as The Vital Letters of Russell, Khrushchev, and Dulles.
Russell was asked by The New Republic, a liberal American magazine, to elaborate his views on world peace. He urged that all nuclear weapons testing and flights by planes armed with nuclear weapons be halted immediately, and negotiations be opened for the destruction of all hydrogen bombs, with the number of conventional nuclear devices limited to ensure a balance of power. He proposed that Germany be reunified and accept the Oder-Neisse line as its border, and that a neutral zone be established in Central Europe, consisting at the minimum of Germany, Poland, Hungary, and Czechoslovakia, with each of these countries being free of foreign troops and influence, and prohibited from forming alliances with countries outside the zone. In the Middle East, Russell suggested that the West avoid opposing Arab nationalism, and proposed the creation of a United Nations peacekeeping force to guard Israel's frontiers to ensure that Israel was prevented from committing aggression and protected from it. He also suggested Western recognition of the People's Republic of China, and that it be admitted to the UN with a permanent seat on the UN Security Council.
He was in contact with Lionel Rogosin while the latter was filming his anti-war film Good Times, Wonderful Times in the 1960s. He became a hero to many of the youthful members of the New Left. In early 1963, Russell became increasingly vocal in his disapproval of the Vietnam War, and felt that the US government's policies there were near-genocidal. In 1963 he became the inaugural recipient of the Jerusalem Prize, an award for writers concerned with the freedom of the individual in society. In 1964 he was one of eleven world figures who issued an appeal to Israel and the Arab countries to accept an arms embargo and international supervision of nuclear plants and rocket weaponry. In October 1965 he tore up his Labour Party card because he suspected Harold Wilson's Labour government was going to send troops to support the United States in Vietnam.
Final years, death and legacy
In June 1955, Russell had leased Plas Penrhyn in Penrhyndeudraeth, Merionethshire, Wales and on 5 July of the following year it became his and Edith's principal residence.
Russell published his three-volume autobiography in 1967, 1968, and 1969. He made a cameo appearance playing himself in the anti-war Hindi film Aman, by Mohan Kumar, which was released in India in 1967. This was Russell's only appearance in a feature film.
On 23 November 1969, he wrote to The Times newspaper saying that the preparation for show trials in Czechoslovakia was "highly alarming". The same month, he appealed to Secretary General U Thant of the United Nations to support an international war crimes commission to investigate alleged torture and genocide by the United States in South Vietnam during the Vietnam War. The following month, he protested to Alexei Kosygin over the expulsion of Aleksandr Solzhenitsyn from the Soviet Union of Writers.
On 31 January 1970, Russell issued a statement condemning "Israel's aggression in the Middle East", and in particular, Israeli bombing raids being carried out deep in Egyptian territory as part of the War of Attrition, which he compared to German bombing raids in the Battle of Britain and the US bombing of Vietnam. He called for an Israeli withdrawal to the pre-Six-Day War borders, stating "The aggression committed by Israel must be condemned, not only because no state has the right to annexe foreign territory, but because every expansion is an experiment to discover how much more aggression the world will tolerate." This was Russell's final political statement or act. It was read out at the International Conference of Parliamentarians in Cairo on 3 February 1970, the day after his death.
Russell died of influenza, just after 8 pm on 2 February 1970 at his home in Penrhyndeudraeth, aged 97. His body was cremated in Colwyn Bay on 5 February 1970 with five people present. In accordance with his will, there was no religious ceremony but one minute's silence; his ashes were later scattered over the Welsh mountains. Although he was born in Monmouthshire, and died in Penrhyndeudraeth in Wales, Russell identified as English. Later in 1970, on 23 October, his will was published showing he had left an estate valued at £69,423. In 1980, a memorial to Russell was commissioned by a committee including the philosopher A. J. Ayer. It consists of a bust of Russell in Red Lion Square in London sculpted by Marcelle Quinton.
Lady Katharine Jane Tait, Russell's daughter, founded the Bertrand Russell Society in 1974 to preserve and understand his work. It publishes the Bertrand Russell Society Bulletin, holds meetings and awards prizes for scholarship, including the Bertrand Russell Society Award. She also authored several essays about her father; as well as a book, My Father, Bertrand Russell, which was published in 1975. All members receive Russell: The Journal of Bertrand Russell Studies.
For the sesquicentennial of his birth, in May 2022, McMaster University's Bertrand Russell Archive, the university's largest and most heavily used research collection, organised both a physical and virtual exhibition on Russell's anti-nuclear stance in the post-war era, Scientists for Peace: the Russell-Einstein Manifesto and the Pugwash Conference, which included the earliest version of the Russell–Einstein Manifesto. The Bertrand Russell Peace Foundation held a commemoration at Conway Hall in Red Lion Square, London, on 18 May, the anniversary of his birth. For its part, on the same day, La Estrella de Panamá published a biographical sketch by Francisco Díaz Montilla, who commented that "[if he] had to characterize Russell's work in one sentence [he] would say: criticism and rejection of dogmatism."
Bangladesh's first leader, Mujibur Rahman, named his youngest son Sheikh Russel in honour of Bertrand Russell.
Marriages and issue
In 1889, Russell, at 17 years of age, met the family of Alys Pearsall Smith, an American Quaker five years older, who was a graduate of Bryn Mawr College near Philadelphia. He became a friend of the Pearsall Smith family. They knew him as "Lord John's grandson" and enjoyed showing him off.
He fell in love with Alys, and contrary to his grandmother's wishes, married her on 13 December 1894. Their marriage began to fall apart in 1901 when it occurred to Russell, while cycling, that he no longer loved her. She asked him if he loved her and he cruelly replied that he did not. Russell also disliked Alys's mother, finding her controlling and cruel. A lengthy period of separation began in 1911 with Russell's affair with Lady Ottoline Morrell, and he and Alys finally divorced in 1921 to enable Russell to remarry.
During his years of separation from Alys, Russell had affairs (often simultaneous) with a number of women, including Morrell and the actress Lady Constance Malleson. Some have suggested that at this point he had an affair with Vivienne Haigh-Wood, the English governess and writer, and first wife of T. S. Eliot.
In 1921, his second marriage was to Dora Winifred Black MBE (died 1986), daughter of Sir Frederick Black. Dora was six months pregnant when the couple returned to England.
This was dissolved in 1935, having produced two children:
John Conrad Russell, 4th Earl Russell (1921–1987)
Lady Katharine Jane Russell (1923–2021), who married Rev. Charles Tait in 1948 and had issue
Russell's third marriage was to Patricia Helen Spence (died 2004) in 1936, with the marriage producing one child:
Conrad Sebastian Robert Russell, 5th Earl Russell (1937–2004), who became a historian and one of the leading figures in the Liberal Democrat party.
Russell's third marriage ended in divorce in 1952. He married Edith Finch in the same year. Finch died in 1978.
Titles, awards and honours
Upon his brother's death in 1931, Russell became the 3rd Earl Russell, of Kingston Russell, and inherited the subsidiary title of Viscount Amberley, of Amberley and of Ardsalla. He held both titles, and the accompanying seat in the House of Lords, until his death in 1970.
Honours and Awards
Scholastic
Views
Philosophy
Russell is credited with being one of the founders of analytic philosophy. He was impressed by Gottfried Leibniz (1646–1716), and wrote on every major area of philosophy except aesthetics. He was prolific in the fields of metaphysics, logic and the philosophy of mathematics, the philosophy of language, ethics and epistemology. When Brand Blanshard asked Russell why he did not write on aesthetics, Russell replied that he did not know anything about it, though he hastened to add "but that is not a very good excuse, for my friends tell me it has not deterred me from writing on other subjects".
On ethics, Russell wrote that he was a utilitarian in his youth, yet he later distanced himself from this view.
For the advancement of science and protection of liberty of expression, Russell advocated The Will to Doubt, the recognition that all human knowledge is at most a best guess, that one should always remember:
Religion
Russell described himself in 1947 as an agnostic or an atheist: he found it difficult to determine which term to adopt. For most of his adult life, Russell maintained religion to be little more than superstition and, despite any positive effects, largely harmful to people. He believed that religion and the religious outlook serve to impede knowledge and foster fear and dependency, and to be responsible for much of our world's wars, oppression, and misery. He was a member of the advisory council of the British Humanist Association and the president of Cardiff Humanists until his death.
Society
Political and social activism occupied much of Russell's time for most of his life. Russell remained politically active almost to the end of his life, writing to and exhorting world leaders and lending his name to various causes. He was a prominent campaigner against Western intervention into the Vietnam War in the 1960s, writing essays and books, attending demonstrations, and even organising the Russell Tribunal in 1966 alongside other prominent philosophers such as Jean-Paul Sartre and Simone de Beauvoir, which fed into his 1967 book War Crimes in Vietnam.
Russell argued for a "scientific society", where war would be abolished, population growth would be limited, and prosperity would be shared. He suggested the establishment of a "single supreme world government" able to enforce peace, claiming that "the only thing that will redeem mankind is co-operation". He was one of the signatories of the agreement to convene a convention for drafting a world constitution. As a result, for the first time in human history, a World Constituent Assembly convened to draft and adopt the Constitution for the Federation of Earth. Russell also expressed support for guild socialism, and commented positively on several socialist thinkers and activists. According to Jean Bricmont and Normand Baillargeon, "Russell was both a liberal and a socialist, a combination that was perfectly comprehensible in his time, but which has become almost unthinkable today. He was a liberal in that he opposed concentrations of power in all its manifestations, military, governmental, or religious, as well as the superstitious or nationalist ideas that usually serve as its justification. But he was also a socialist, even as an extension of his liberalism, because he was equally opposed to the concentrations of power stemming from the private ownership of the major means of production, which therefore needed to be put under social control (which does not mean state control)."
Russell was an active supporter of the Homosexual Law Reform Society, being one of the signatories of A. E. Dyson's 1958 letter to The Times calling for a change in the law regarding male homosexual practices, which were partly legalised in 1967, when Russell was still alive.
He expressed sympathy and support for the Palestinian people and was critical of Israel's actions. He wrote in 1960 that, "I think it was a mistake to establish a Jewish State in Palestine, but it would be a still greater mistake to try to get rid of it now that it exists." In his final written document, read aloud in Cairo three days after his death on 31 January 1970, he condemned Israel as an aggressive imperialist power, which "wishes to consolidate with the least difficulty what it has already taken by violence. Every new conquest becomes the new basis of the proposed negotiation from strength, which ignores the injustice of the previous aggression." In regards to the Palestinian people and refugees, he wrote that, "No people anywhere in the world would accept being expelled en masse from their own country; how can anyone require the people of Palestine to accept a punishment which nobody else would tolerate? A permanent just settlement of the refugees in their homeland is an essential ingredient of any genuine settlement in the Middle East."
Russell advocated for a universal basic income. In his 1918 book Roads to Freedom, Russell wrote that "Anarchism has the advantage as regards liberty, Socialism as regards the inducement to work. Can we not find a method of combining these two advantages? It seems to me that we can. [...] Stated in more familiar terms, the plan we are advocating amounts essentially to this: that a certain small income, sufficient for necessaries, should be secured to all, whether they work or not, and that a larger income – as much larger as might be warranted by the total amount of commodities produced – should be given to those who are willing to engage in some work which the community recognizes as useful...When education is finished, no one should be compelled to work, and those who choose not to work should receive a bare livelihood and be left completely free."
In "Reflections on My Eightieth Birthday" ("Postscript" in his Autobiography), Russell wrote: "I have lived in the pursuit of a vision, both personal and social. Personal: to care for what is noble, for what is beautiful, for what is gentle; to allow moments of insight to give wisdom at more mundane times. Social: to see in imagination the society that is to be created, where individuals grow freely, and where hate and greed and envy die because there is nothing to nourish them. These things I believe, and the world, for all its horrors, has left me unshaken".
Freedom of opinion and expression
Russell supported freedom of opinion and was an opponent of both censorship and indoctrination. In 1928, he wrote: "The fundamental argument for freedom of opinion is the doubtfulness of all our belief... when the State intervenes to ensure the indoctrination of some doctrine, it does so because there is no conclusive evidence in favour of that doctrine ... It is clear that thought is not free if the profession of certain opinions make it impossible to make a living". In 1957, he wrote: "'Free thought' means thinking freely ... to be worthy of the name freethinker he must be free of two things: the force of tradition and the tyranny of his own passions."
Education
In The Impact of Science on Society, Russell presented ideas on the possible means of controlling education under scientific dictatorships, notably in Chapter II, "General Effects of Scientific Technique".
He developed these scenarios in further detail, with examples, in Chapter III, "Scientific Technique in an Oligarchy", of the same book.
Selected works
Below is a selection of Russell's works in English, sorted by year of first publication:
1896. German Social Democracy. London: Longmans, Green
1897. An Essay on the Foundations of Geometry. Cambridge: Cambridge University Press
1900. A Critical Exposition of the Philosophy of Leibniz. Cambridge: Cambridge University Press
1903. The Principles of Mathematics. Cambridge University Press
1903. A Free Man's Worship, and Other Essays.
1905. On Denoting, Mind, Vol. 14. Basil Blackwell
1910. Philosophical Essays. London: Longmans, Green
1910–1913. Principia Mathematica. (with Alfred North Whitehead). 3 vols. Cambridge: Cambridge University Press
1912. The Problems of Philosophy. London: Williams and Norgate
1914. Our Knowledge of the External World as a Field for Scientific Method in Philosophy. Chicago and London: Open Court Publishing.
1916. Principles of Social Reconstruction. London, George Allen and Unwin
1916. Why Men Fight. New York: The Century Co
1916. The Policy of the Entente, 1904–1914: a reply to Professor Gilbert Murray. Manchester: The National Labour Press
1916. Justice in War-time. Chicago: Open Court
1917. Political Ideals. New York: The Century Co.
1918. Mysticism and Logic and Other Essays. London: George Allen & Unwin
1918. Proposed Roads to Freedom: Socialism, Anarchism, and Syndicalism. London: George Allen & Unwin
1919. Introduction to Mathematical Philosophy. London: George Allen & Unwin
1920. The Practice and Theory of Bolshevism. London: George Allen & Unwin
1921. The Analysis of Mind. London: George Allen & Unwin
1922. The Problem of China. London: George Allen & Unwin
1922. Free Thought and Official Propaganda, delivered at South Place Institute
1923. The Prospects of Industrial Civilization, in collaboration with Dora Russell. London: George Allen & Unwin
1923. The ABC of Atoms, London: Kegan Paul, Trench, Trubner
1924. Icarus; or, The Future of Science. London: Kegan Paul, Trench, Trubner
1925. The ABC of Relativity. London: Kegan Paul, Trench, Trubner (revised and edited by Felix Pirani)
1925. What I Believe. London: Kegan Paul, Trench, Trubner
1926. On Education, Especially in Early Childhood. London: George Allen & Unwin
1927. The Analysis of Matter. London: Kegan Paul, Trench, Trubner
1927. An Outline of Philosophy. London: George Allen & Unwin
1927. Why I Am Not a Christian. London: Watts
1927. Selected Papers of Bertrand Russell. New York: Modern Library
1928. Sceptical Essays. London: George Allen & Unwin
1929. Marriage and Morals. London: George Allen & Unwin
1930. The Conquest of Happiness. London: George Allen & Unwin
1931. The Scientific Outlook, London: George Allen & Unwin
1932. Education and the Social Order, London: George Allen & Unwin
1934. Freedom and Organization, 1814–1914. London: George Allen & Unwin
1935. In Praise of Idleness and Other Essays. London: George Allen & Unwin
1935. Religion and Science. London: Thornton Butterworth
1936. Which Way to Peace?. London: Jonathan Cape
1937. The Amberley Papers: The Letters and Diaries of Lord and Lady Amberley, with Patricia Russell, 2 vols., London: Leonard & Virginia Woolf at the Hogarth Press; reprinted (1966) as The Amberley Papers. Bertrand Russell's Family Background, 2 vols., London: George Allen & Unwin
1938. Power: A New Social Analysis. London: George Allen & Unwin
1940. An Inquiry into Meaning and Truth. New York: W. W. Norton & Company.
1945. The Bomb and Civilisation. Published in the Glasgow Forward on 18 August 1945
1946. A History of Western Philosophy and Its Connection with Political and Social Circumstances from the Earliest Times to the Present Day. New York: Simon and Schuster
1948. Human Knowledge: Its Scope and Limits. London: George Allen & Unwin
1949. Authority and the Individual. London: George Allen & Unwin
1950. Unpopular Essays. London: George Allen & Unwin
1951. New Hopes for a Changing World. London: George Allen & Unwin
1952. The Impact of Science on Society. London: George Allen & Unwin
1953. Satan in the Suburbs and Other Stories. London: George Allen & Unwin
1954. Human Society in Ethics and Politics. London: George Allen & Unwin
1954. Nightmares of Eminent Persons and Other Stories. London: George Allen & Unwin
1956. Portraits from Memory and Other Essays. London: George Allen & Unwin
1956. Logic and Knowledge: Essays 1901–1950, edited by Robert C. Marsh. London: George Allen & Unwin
1957. Why I Am Not A Christian and Other Essays on Religion and Related Subjects, edited by Paul Edwards. London: George Allen & Unwin
1958. Understanding History and Other Essays. New York: Philosophical Library
1958. The Will to Doubt. New York: Philosophical Library
1959. Common Sense and Nuclear Warfare. London: George Allen & Unwin
1959. My Philosophical Development. London: George Allen & Unwin
1959. Wisdom of the West: A Historical Survey of Western Philosophy in Its Social and Political Setting, edited by Paul Foulkes. London: Macdonald
1960. Bertrand Russell Speaks His Mind, Cleveland and New York: World Publishing Company
1961. The Basic Writings of Bertrand Russell, edited by R. E. Egner and L. E. Denonn. London: George Allen & Unwin
1961. Fact and Fiction. London: George Allen & Unwin
1961. Has Man a Future? London: George Allen & Unwin
1963. Essays in Skepticism. New York: Philosophical Library
1963. Unarmed Victory. London: George Allen & Unwin
1965. Legitimacy Versus Industrialism, 1814–1848. London: George Allen & Unwin (first published as Parts I and II of Freedom and Organization, 1814–1914, 1934)
1965. On the Philosophy of Science, edited by Charles A. Fritz, Jr. Indianapolis: The Bobbs–Merrill Company
1966. The ABC of Relativity. London: George Allen & Unwin
1967. Russell's Peace Appeals, edited by Tsutomu Makino and Kazuteru Hitaka. Japan: Eichosha's New Current Books
1967. War Crimes in Vietnam. London: George Allen & Unwin
1967–1969. The Autobiography of Bertrand Russell, 3 vols., London: George Allen & Unwin
1969. Dear Bertrand Russell... A Selection of his Correspondence with the General Public 1950–1968, edited by Barry Feinberg and Ronald Kasrils. London: George Allen and Unwin
Russell was the author of more than sixty books and over two thousand articles. Additionally, he wrote many pamphlets, introductions, and letters to the editor. One pamphlet, 'I Appeal unto Caesar': The Case of the Conscientious Objectors, ghostwritten for Margaret Hobhouse, the mother of imprisoned peace activist Stephen Hobhouse, allegedly helped secure the release from prison of hundreds of conscientious objectors.
His works can be found in anthologies and collections, including The Collected Papers of Bertrand Russell, which McMaster University began publishing in 1983. By March 2017 this collection of his shorter and previously unpublished works included 18 volumes, and several more are in progress. A bibliography in three additional volumes catalogues his publications. The Russell Archives held by McMaster's William Ready Division of Archives and Research Collections possess over 40,000 of his letters.
See also
Cambridge University Moral Sciences Club
Criticism of Jesus
Joseph Conrad (Russell's impression)
List of peace activists
List of pioneers in computer science
Information Research Department
Type theory
Type system
Logicomix, a graphic novel about the foundational quest in mathematics, narrated by Bertrand Russell and with his life as the main storyline
Notes
References
Citations
General and cited sources
Primary sources
1900, Sur la logique des relations avec des applications à la théorie des séries, Rivista di matematica 7: 115–148.
1901, On the Notion of Order, Mind (n.s.) 10: 35–51.
1902, (with Alfred North Whitehead), On Cardinal Numbers, American Journal of Mathematics 24: 367–384.
1948, BBC Reith Lectures: Authority and the Individual. A series of six radio lectures broadcast on the BBC Home Service in December 1948.
Secondary sources
John Newsome Crossley. A Note on Cantor's Theorem and Russell's Paradox, Australasian Journal of Philosophy 51, 1973, 70–71.
Ivor Grattan-Guinness. The Search for Mathematical Roots 1870–1940. Princeton: Princeton University Press, 2000.
Alan Ryan. Bertrand Russell: A Political Life, New York: Oxford University Press, 1981.
Further reading
Books about Russell's philosophy
Alfred Julius Ayer. Russell, London: Fontana, 1972. A lucid summary exposition of Russell's thought.
Elizabeth Ramsden Eames. Bertrand Russell's Theory of Knowledge, London: George Allen and Unwin, 1969. A clear description of Russell's philosophical development.
Celia Green. The Lost Cause: Causation and the Mind-Body Problem, Oxford: Oxford Forum, 2003. Contains a sympathetic analysis of Russell's views on causality.
A. C. Grayling. Russell: A Very Short Introduction, Oxford University Press, 2002.
Nicholas Griffin. Russell's Idealist Apprenticeship, Oxford: Oxford University Press, 1991.
A. D. Irvine, ed. Bertrand Russell: Critical Assessments, 4 volumes, London: Routledge, 1999. Consists of essays on Russell's work by many distinguished philosophers.
Michael K. Potter. Bertrand Russell's Ethics, Bristol: Thoemmes Continuum, 2006. A clear and accessible explanation of Russell's moral philosophy.
P. A. Schilpp, ed. The Philosophy of Bertrand Russell, Evanston and Chicago: Northwestern University, 1944.
John Slater. Bertrand Russell, Bristol: Thoemmes Press, 1994.
Biographical books
A. J. Ayer. Bertrand Russell, New York: Viking Press, 1972; reprint ed., London: University of Chicago Press, 1988.
Andrew Brink. Bertrand Russell: A Psychobiography of a Moralist, Atlantic Highlands, NJ: Humanities Press International, Inc., 1989.
Ronald W. Clark. The Life of Bertrand Russell, London: Jonathan Cape, 1975.
Ronald W. Clark. Bertrand Russell and His World, London: Thames & Hudson, 1981.
Rupert Crawshay-Williams. Russell Remembered, London: Oxford University Press, 1970. Written by a close friend of Russell's.
John Lewis. Bertrand Russell: Philosopher and Humanist, London: Lawrence & Wishart, 1968.
Ray Monk. Bertrand Russell: Mathematics: Dreams and Nightmares, London: Phoenix, 1997.
Ray Monk. Bertrand Russell: The Spirit of Solitude, 1872–1920, Vol. I, New York: Routledge, 1997.
Ray Monk. Bertrand Russell: The Ghost of Madness, 1921–1970, Vol. II, New York: Routledge, 2001.
Caroline Moorehead. Bertrand Russell: A Life, New York: Viking, 1993.
George Santayana. "Bertrand Russell", in Selected Writings of George Santayana, Norman Henfrey (ed.), Cambridge: Cambridge University Press, I, 1968, pp. 326–329
Peter Stone et al. Bertrand Russell's Life and Legacy. Wilmington: Vernon Press, 2017.
Katharine Tait. My Father Bertrand Russell, New York: Thoemmes Press, 1975
Alan Wood. Bertrand Russell: The Passionate Sceptic, London: George Allen & Unwin, 1957.
External links
The Bertrand Russell Society
BBC Face to Face interview with Bertrand Russell and John Freeman, broadcast 4 March 1959
Nobel Lecture, 11 December 1950: "What Desires Are Politically Important?"
Interview with Ray Monk at Today, 18 May 2022 (from 2:58:35)
1872 births
1970 deaths
19th-century atheists
19th-century English mathematicians
19th-century English philosophers
19th-century English essayists
20th-century atheists
20th-century English mathematicians
20th-century English philosophers
20th-century English essayists
Academics of the London School of Economics
Alumni of Trinity College, Cambridge
Analytic philosophers
British anti–Vietnam War activists
Aristotelian philosophers
Atheist philosophers
British anti–World War I activists
British atheism activists
British ethicists
Campaign for Nuclear Disarmament activists
British consciousness researchers and theorists
Consequentialists
British critics of Christianity
British critics of religions
Critics of the Catholic Church
Critics of work and the work ethic
De Morgan Medallists
Deaths from influenza in the United Kingdom
Earls Russell
Empiricists
English agnostics
English anti-fascists
English anti–nuclear weapons activists
British writers on atheism
English essayists
British historians of philosophy
English humanists
English logicians
English male non-fiction writers
English Nobel laureates
English pacifists
English people of Scottish descent
English people of Welsh descent
English political commentators
English political philosophers
English political writers
English prisoners and detainees
English sceptics
English socialists
British epistemologists
European democratic socialists
Fellows of the Royal Society
Fellows of Trinity College, Cambridge
Free love advocates
British free speech activists
Freethought writers
Georgists
Honorary Fellows of the British Academy
Infectious disease deaths in Wales
Intellectual historians
Jerusalem Prize recipients
Kalinga Prize recipients
English LGBTQ rights activists
Liberal socialism
Linguistic turn
Logicians
Mathematical logicians
Members of the Order of Merit
Metaphilosophers
British metaphysicians
Metaphysics writers
Nobel laureates in Literature
British nonviolence advocates
Ontologists
People from Harting
People from Monmouthshire
British philosophers of culture
Philosophers of economics
British philosophers of education
Philosophers of history
British philosophers of language
Philosophers of law
Philosophers of literature
British philosophers of logic
Philosophers of love
Philosophers of mathematics
British philosophers of mind
British philosophers of religion
British philosophers of science
Philosophers of sexuality
Philosophers of social science
Philosophers of technology
British political philosophers
Presidents of the Aristotelian Society
Residents of Pembroke Lodge, Richmond Park
Rhetoric theorists
Bertrand Russell
Secular humanists
Set theorists
The Nation (U.S. magazine) people
Theorists on Western civilization
Universal basic income writers
University of California, Los Angeles faculty
University of Chicago faculty
Utilitarians
Writers about activism and social change
Writers about communism
Writers about globalization
Writers about religion and science
Writers about the Soviet Union
People from Penrhyndeudraeth
Anti-nationalists
Labour Party (UK) parliamentary candidates
Labour Party (UK) hereditary peers
World Constitutional Convention call signatories | Bertrand Russell | [
"Mathematics"
] | 12,853 | [
"Philosophers of mathematics"
] |
4,169 | https://en.wikipedia.org/wiki/Bronze | Bronze is an alloy consisting primarily of copper, commonly with about 12–12.5% tin and often with the addition of other metals (including aluminium, manganese, nickel, or zinc) and sometimes non-metals (such as phosphorus) or metalloids (such as arsenic or silicon). These additions produce a range of alloys some of which are harder than copper alone or have other useful properties, such as strength, ductility, or machinability.
The archaeological period during which bronze was the hardest metal in widespread use is known as the Bronze Age. The beginning of the Bronze Age in western Eurasia and India is conventionally dated to the mid-4th millennium BCE (~3500 BCE), and to the early 2nd millennium BCE in China; elsewhere it gradually spread across regions. The Bronze Age was followed by the Iron Age, which started about 1300 BCE and reached most of Eurasia by about 500 BCE, although bronze continued to be much more widely used than it is in modern times.
Because historical artworks were often made of bronzes and brasses (alloys of copper and zinc) of different metallic compositions, modern museum and scholarly descriptions of older artworks increasingly use the generalized term "copper alloy" instead of the names of individual alloys. This is done (at least in part) to prevent database searches from failing merely because of errors or disagreements in the naming of historic copper alloys.
Etymology
The word bronze (1730–1740) is borrowed from Middle French (1511), itself borrowed from Italian (13th century, also transcribed in Medieval Latin), from either:
a back-formation from a Byzantine Greek word (11th century), perhaps from the Greek name of Brindisi, a city reputed for its bronze; or, originally,
in its earliest form, from an Old Persian word for brass, from which related terms also passed into Georgian, Turkish (birinç, "primary", from bir, "one"), and Armenian.
History
The discovery of bronze enabled people to create metal objects that were harder and more durable than had previously been possible. Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made out of copper and arsenic or from naturally or artificially mixed ores of those metals, forming arsenic bronze.
The earliest known arsenic-copper-alloy artifacts come from a Yahya Culture (Period V 3800-3400 BCE) site, at Tal-i-Iblis on the Iranian plateau, and were smelted from native arsenical copper and copper-arsenides, such as algodonite and domeykite.
The earliest known tin-copper-alloy artifact comes from a Vinča culture site in Pločnik (Serbia) and is believed to have been smelted from a natural tin-copper ore, stannite.
Other early examples date to the late 4th millennium BCE in Egypt, Susa (Iran) and some ancient sites in China, Luristan (Iran), Tepe Sialk (Iran), Mundigak (Afghanistan), and Mesopotamia (Iraq).
Tin bronze was superior to arsenic bronze in that the alloying process could be more easily controlled, and the resulting alloy was stronger and easier to cast. Also, unlike those of arsenic, metallic tin and the fumes from tin refining are not toxic.
Tin became the major non-copper ingredient of bronze in the late 3rd millennium BCE. Ores of copper and the far rarer tin are not often found together (exceptions include Cornwall in the United Kingdom, one ancient site in Thailand and one in Iran), so serious bronze work has always involved trade with other regions. Tin sources and trade in ancient times had a major influence on the development of cultures. In Europe, a major source of tin was the British deposits of ore in Cornwall, which were traded as far as Phoenicia in the eastern Mediterranean. In many parts of the world, large hoards of bronze artifacts are found, suggesting that bronze also represented a store of value and an indicator of social status. In Europe, large hoards of bronze tools, typically socketed axes (illustrated above), are found, which mostly show no signs of wear. With Chinese ritual bronzes, which are documented in the inscriptions they carry and from other sources, the case is clear. These were made in enormous quantities for elite burials, and also used by the living for ritual offerings.
Transition to iron
Though bronze (Vickers hardness 60–258) is generally harder than wrought iron (Vickers hardness 30–80), the Bronze Age gave way to the Iron Age after a serious disruption of the tin trade: the population migrations of around 1200–1100 BCE reduced the shipment of tin around the Mediterranean and from Britain, limiting supplies and raising prices. As the art of working in iron improved, iron became cheaper and improved in quality. As later cultures advanced from hand-wrought iron to machine-forged iron (typically made with trip hammers powered by water), blacksmiths also learned how to make steel, which is stronger and harder than bronze and holds a sharper edge longer. Bronze was still used during the Iron Age and has continued in use for many purposes to the modern day.
Composition
There are many different bronze alloys, but typically modern bronze is about 88% copper and 12% tin. Alpha bronze consists of the alpha solid solution of tin in copper. Alpha bronze alloys of 4–5% tin are used to make coins, springs, turbines and blades. Historical "bronzes" are highly variable in composition, as most metalworkers probably used whatever scrap was on hand; the metal of the 12th-century English Gloucester Candlestick is bronze containing a mixture of copper, zinc, tin, lead, nickel, iron, antimony, arsenic and an unusually large amount of silver – between 22.5% in the base and 5.76% in the pan below the candle. The proportions of this mixture suggest that the candlestick was made from a hoard of old coins. The 13th-century Benin Bronzes are in fact brass, and the 12th-century Romanesque Baptismal font at St Bartholomew's Church, Liège is sometimes described as bronze and sometimes as brass.
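To make the arithmetic of such nominal compositions concrete, here is a short sketch in Python (with illustrative figures only, not taken from the article) that converts a percentage composition into the mass of each constituent needed for a batch of a given size, using the roughly 88% copper / 12% tin "typical modern bronze" proportions mentioned above.

```python
# Sketch: convert a nominal alloy composition (percent by mass) into the mass
# of each constituent metal needed for a batch. Figures are illustrative only.

def batch_masses(composition_pct, total_kg):
    """Return constituent masses in kg for a batch of total_kg."""
    assert abs(sum(composition_pct.values()) - 100.0) < 1e-9, "percentages must sum to 100"
    return {metal: total_kg * pct / 100.0 for metal, pct in composition_pct.items()}

# "Typical modern bronze" as described above: about 88% copper, 12% tin.
modern_bronze = {"copper": 88.0, "tin": 12.0}
print(batch_masses(modern_bronze, total_kg=50.0))
# -> {'copper': 44.0, 'tin': 6.0}
```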
During the Bronze Age, two forms of bronze were commonly used: "classic bronze", about 10% tin, was used in casting; "mild bronze", about 6% tin, was hammered from ingots to make sheets. Bladed weapons were primarily cast from classic bronze while helmets and armor were hammered from mild bronze.
Modern commercial bronze (90% copper and 10% zinc) and architectural bronze (57% copper, 3% lead, 40% zinc) are more properly regarded as brass alloys because they contain zinc as the main alloying ingredient. They are commonly used in architectural applications. Plastic bronze contains a significant quantity of lead, which makes for improved plasticity, and may have been used by the ancient Greeks in ship construction. Silicon bronze has a composition of Si: 2.80–3.80%, Mn: 0.50–1.30%, Fe: 0.80% max., Zn: 1.50% max., Pb: 0.05% max., Cu: balance. Other bronze alloys include aluminium bronze, phosphor bronze, manganese bronze, bell metal, arsenical bronze, speculum metal, bismuth bronze, and cymbal alloys.
Properties
Copper-based alloys have lower melting points than steel or iron and are more readily produced from their constituent metals. They are generally about 10 percent denser than steel, although alloys using aluminum or silicon may be slightly less dense. Bronze conducts heat and electricity better than most steels. Copper-base alloys are generally more costly than steels but less so than nickel-base alloys.
Bronzes are typically ductile alloys and are considerably less brittle than cast iron. Copper and its alloys have a huge variety of uses that reflect their versatile physical, mechanical, and chemical properties. Some common examples are the high electrical conductivity of pure copper, the low-friction properties of bearing bronze (bronze that has a high lead content— 6–8%), the resonant qualities of bell bronze (20% tin, 80% copper), and the resistance to corrosion by seawater of several bronze alloys.
The melting point of bronze is about 950 °C, but it varies depending on the ratio of the alloy components. Bronze is usually nonmagnetic, but certain alloys containing iron or nickel may have magnetic properties. Bronze typically oxidizes only superficially; once a copper oxide (eventually becoming copper carbonate) layer is formed, the underlying metal is protected from further corrosion. This can be seen on statues from the Hellenistic period. If copper chlorides are formed, however, a corrosion process called "bronze disease" will eventually destroy the metal completely.
Uses
Bronze, or bronze-like alloys and mixtures, were used for coins over a long period. Bronze was especially suitable for use in boat and ship fittings prior to the wide employment of stainless steel owing to its combination of toughness and resistance to salt water corrosion. Bronze is still commonly used in ship propellers and submerged bearings. In the 20th century, silicon was introduced as the primary alloying element, creating an alloy with wide application in industry and the major form used in contemporary statuary. Sculptors may prefer silicon bronze because of the ready availability of silicon bronze brazing rod, which allows color-matched repair of defects in castings. Aluminum is also used for the structural metal aluminum bronze. Bronze parts are tough and typically used for bearings, clips, electrical connectors and springs.
Bronze also has low friction against dissimilar metals, making it important for cannons prior to modern tolerancing, where iron cannonballs would otherwise stick in the barrel. It is still widely used today for springs, bearings, bushings, automobile transmission pilot bearings, and similar fittings, and is particularly common in the bearings of small electric motors. Phosphor bronze is particularly suited to precision-grade bearings and springs. It is also used in guitar and piano strings. Unlike steel, bronze struck against a hard surface will not generate sparks, so it (along with beryllium copper) is used to make hammers, mallets, wrenches and other durable tools to be used in explosive atmospheres or in the presence of flammable vapors. Bronze is used to make bronze wool for woodworking applications where steel wool would discolor oak. Phosphor bronze is used for ships' propellers, musical instruments, and electrical contacts. Bearings are often made of bronze for its friction properties. It can be impregnated with oil to make the proprietary Oilite and similar material for bearings. Aluminum bronze is hard and wear-resistant, and is used for bearings and machine tool ways. The Doehler Die Casting Co. of Toledo, Ohio were known for the production of Brastil, a high tensile corrosion resistant bronze alloy.
Architectural bronze
The Seagram Building on New York City's Park Avenue is the "iconic glass box sheathed in bronze" designed by Mies van der Rohe, and was the first building to be sheathed entirely in bronze. The General Bronze Corporation fabricated 3,200,000 pounds (1,600 tons) of bronze for it at its plant in Garden City, New York. The Seagram Building is a 38-story, 516-foot bronze-and-topaz-tinted glass building that looks like a "squarish 38-story tower clad in a restrained curtain wall of metal and glass." Bronze was selected because of its color, both before and after aging, its corrosion resistance, and its extrusion properties. On its completion in 1958 it was not only the most expensive building of its time, at $36 million, but also the first building in the world with floor-to-ceiling glass walls. Mies van der Rohe achieved its crisp edges through custom detailing by General Bronze, and "even the screws that hold in the fixed glass-plate windows were made of brass."
Sculptures
Bronze is widely used for casting bronze sculptures. Common bronze alloys have the unusual and desirable property of expanding slightly just before they set, thus filling the finest details of a mould. Then, as the bronze cools, it shrinks a little, making it easier to separate from the mould. The Assyrian king Sennacherib (704–681 BCE) claims to have been the first to cast monumental bronze statues (of up to 30 tonnes) using two-part moulds instead of the lost-wax method.
Bronze statues were regarded as the highest form of sculpture in Ancient Greek art, though survivals are few, as bronze was a valuable material in short supply in the Late Antique and medieval periods. Many of the most famous Greek bronze sculptures are known through Roman copies in marble, which were more likely to survive. In India, bronze sculptures from the Kushana (Chausa hoard) and Gupta periods (Brahma from Mirpur-Khas, Akota Hoard, Sultanganj Buddha) and later periods (Hansi Hoard) have been found. Indian Hindu artisans from the period of the Chola empire in Tamil Nadu used bronze to create intricate statues via the lost-wax casting method with ornate detailing depicting the deities of Hinduism. The art form survives to this day, with many silpis, craftsmen, working in the areas of Swamimalai and Chennai.
In antiquity other cultures also produced works of high art using bronze. For example: in Africa, the bronze heads of the Kingdom of Benin; in Europe, Grecian bronzes typically of figures from Greek mythology; in east Asia, Chinese ritual bronzes of the Shang and Zhou dynasty—more often ceremonial vessels but including some figurine examples. Bronze continues into modern times as one of the materials of choice for monumental statuary.
Lamps
Tiffany Glass Studios, made famous by Louis C. Tiffany, commonly referred to its product as favrile glass or "Tiffany glass", and used bronze in its artisan work for the Tiffany lamps.
Fountains and doors
The largest and most ornate bronze fountain known to have been cast was produced by the Roman Bronze Works and the General Bronze Corporation in 1952. The material used for the fountain, known as statuary bronze, is a quaternary alloy made of copper, zinc, tin, and lead, traditionally golden brown in color. It was made for the Andrew W. Mellon Memorial Fountain in Federal Triangle in Washington, DC. Another example of massive, ornate bronze work attributed to General Bronze/Roman Bronze Works is the pair of bronze doors of the United States Supreme Court Building in Washington, DC.
Mirrors
Before it became possible to produce glass with acceptably flat surfaces, bronze was a standard material for mirrors. Bronze was used for this purpose in many parts of the world, probably based on independent discoveries. Bronze mirrors survive from the Egyptian Middle Kingdom (2040–1750 BCE) and from early China. In Europe, the Etruscans were making bronze mirrors in the sixth century BCE, and Greek and Roman mirrors followed the same pattern. Although other materials such as speculum metal had come into use, and Western glass mirrors had largely taken over, bronze mirrors were still being made in Japan and elsewhere in the eighteenth century, and are still made on a small scale in Kerala, India.
Musical instruments
Bronze is the preferred metal for bells in the form of a high tin bronze alloy known as bell metal, which is typically about 23% tin.
Nearly all professional cymbals are made from bronze, which gives a desirable balance of durability and timbre. Several types of bronze are used, commonly B20 bronze, which is roughly 20% tin, 80% copper, with traces of silver, or the tougher B8 bronze made from 8% tin and 92% copper. As the tin content in a bell or cymbal rises, the timbre drops.
Bronze is also used for the windings of steel and nylon strings of various stringed instruments such as the double bass, piano, harpsichord, and guitar. Bronze strings are commonly reserved on pianoforte for the lower pitch tones, as they possess a superior sustain quality to that of high-tensile steel.
Bronzes of various metallurgical properties are widely used in struck idiophones around the world, notably bells, singing bowls, gongs, cymbals, and other idiophones from Asia. Examples include Tibetan singing bowls, temple bells of many sizes and shapes, Javanese gamelan, and other bronze musical instruments. The earliest bronze archeological finds in Indonesia date from 1–2 BCE, including flat plates probably suspended and struck by a wooden or bone mallet. Ancient bronze drums from Thailand and Vietnam date back 2,000 years. Bronze bells from Thailand and Cambodia date back to 3600 BCE.
Some companies are now making saxophones from phosphor bronze (3.5 to 10% tin and up to 1% phosphorus content). Bell bronze/B20 is used to make the tone rings of many professional model banjos. The tone ring is a heavy folded or arched metal ring attached to a thick wood rim, over which a skin, or most often, a plastic membrane (or head) is stretched – it is the bell bronze that gives the banjo a crisp powerful lower register and clear bell-like treble register.
Coins and medals
Bronze has also been used in coins; most "copper" coins are actually bronze, with about 4 percent tin and 1 percent zinc.
As with coins, bronze has been used in the manufacture of various types of medals for centuries, and "bronze medals" are known in contemporary times for being awarded for third place in sporting competitions and other events. The term is now often used for third place even when no actual bronze medal is awarded. The usage in part arose from the trio of gold, silver and bronze to represent the first three Ages of Man in Greek mythology: the Golden Age, when men lived among the gods; the Silver Age, when youth lasted a hundred years; and the Bronze Age, the era of heroes. It was first adopted for a sports event at the 1904 Summer Olympics. At the 1896 event, silver was awarded to winners and bronze to runners-up, while in 1900 other prizes were given rather than medals.
Bronze is the normal material for the related form of the plaquette, normally a rectangular work of art with a scene in relief, for a collectors' market.
Bronze is also associated with eighth wedding anniversaries.
Biblical references
There are over 125 references in the Bible to bronze ('nehoshet'), which appears to be the Hebrew word used for copper and any of its alloys. However, the Old Testament era Hebrews are not thought to have had the capability to manufacture zinc (needed to make brass) and so it is likely that 'nehoshet' refers to copper and its alloys with tin, now called bronze. In the King James Version, there is no use of the word 'bronze' and 'nehoshet' was translated as 'brass'. Modern translations use 'bronze'. Bronze (nehoshet) was used widely in the Tabernacle for items such as the bronze altar (Exodus Ch.27), bronze laver (Exodus Ch.30), utensils, and mirror (Exodus Ch.38). It was mentioned in the account of Moses holding up a bronze snake on a pole in Numbers Ch.21. In First Kings, it is mentioned that Hiram was very skilled in working with bronze, and he made many furnishings for Solomon's Temple including pillars, capitals, stands, wheels, bowls, and plates, some of which were highly decorative (see I Kings 7:13-47). Bronze was also widely used for battle armor and helmets, as in the battle of David and Goliath in I Samuel 17:5-6;38 (also see II Chron. 12:10).
See also
References
External links
Bronze bells (archived 16 December 2006)
"Lost Wax, Found Bronze": lost-wax casting explained (archived 23 May 2009)
Viking Bronze – Ancient and Early Medieval bronze casting (archived 16 April 2016)
Copper alloys
Tin alloys
Coinage metals and alloys | Bronze | [
"Chemistry"
] | 4,157 | [
"Tin alloys",
"Alloys",
"Copper alloys",
"Coinage metals and alloys"
] |
4,183 | https://en.wikipedia.org/wiki/Botany | Botany, also called plant science or phytology, is the branch of natural science and biology studying plants, especially their anatomy, taxonomy, and ecology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants, including some 391,000 species of vascular plants (of which approximately 369,000 are flowering plants) and approximately 20,000 bryophytes.
Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species.
In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th century, botanists exploited the techniques of molecular genetic analysis, including genomics and proteomics and DNA sequences to classify plants more accurately.
Modern botany is a broad subject with contributions and insights from most other areas of science and technology. Research topics include the study of plant structure, growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases, evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st-century plant science are molecular genetics and epigenetics, which study the mechanisms and control of gene expression during differentiation of plant cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber, oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental management, and the maintenance of biodiversity.
Etymology
The term "botany" comes from the Ancient Greek word () meaning "pasture", "herbs" "grass", or "fodder"; is in turn derived from (Greek: ), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress.
History
Early botany
Botany originated as herbalism, the study and use of plants for their possible medicinal properties. The early recorded history of botany includes many ancient writings and plant classifications. Examples of early botanical works have been found in ancient texts from India dating back to before 1100 BCE, Ancient Egypt, in archaic Avestan writings, and in works from China purportedly from before 221 BCE.
Modern botany traces its roots back to Ancient Greece, specifically to Theophrastus (c. 371–287 BCE), a student of Aristotle who invented and described many of its principles and is widely regarded in the scientific community as the "Father of Botany". His major works, Enquiry into Plants and On the Causes of Plants, constitute the most important contributions to botanical science until the Middle Ages, almost seventeen centuries later.
Another work from Ancient Greece that made an early impact on botany is De materia medica, a five-volume encyclopedia about herbal medicine written in the middle of the first century by the Greek physician and pharmacologist Pedanius Dioscorides. De materia medica was widely read for more than 1,500 years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century, Abu al-Abbas al-Nabati and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner.
In the mid-16th century, botanical gardens were founded in a number of Italian universities. The Padua botanical garden in 1545 is usually considered to be the first which is still in its original location. These gardens continued the practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for suspected medicinal uses. They supported the growth of botany as an academic subject. Lectures were given about the plants grown in the gardens. Botanical gardens came much later to northern Europe; the first in England was the University of Oxford Botanic Garden in 1621.
German physician Leonhart Fuchs (1501–1566) was one of "the three German fathers of botany", along with theologian Otto Brunfels (1489–1534) and physician Hieronymus Bock (1498–1554) (also called Hieronymus Tragus). Fuchs and Brunfels broke away from the tradition of copying earlier works to make original observations of their own. Bock created his own system of plant classification.
Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal, Historia Plantarum, in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium, in 1546. Naturalist Conrad von Gesner (1516–1565) and herbalist John Gerard (1545–c. 1612) published herbals covering the supposed medicinal uses of plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the study of plants. In 1665, using an early microscope, the polymath Robert Hooke discovered cells (a term he coined) in cork, and a short time later in living plant tissue.
Early modern botany
During the 18th century, systems of plant identification were developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family, genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753, Carl Linnaeus published his Species Plantarum, a hierarchical classification of plant species that remains the reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme where the first name represented the genus and the second identified the species within the genus. For the purposes of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts, ferns, algae and fungi.
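To illustrate the mechanics of the dichotomous keys described above, here is a minimal sketch in Python; the questions and plant groups are simplified examples chosen purely for illustration, not an actual published key.

```python
# Minimal illustrative sketch of a dichotomous key: a binary decision tree in
# which each node poses a choice between two character states and each leaf
# names a taxonomic group. The characters and groups below are invented
# examples, not a real botanical key.

KEY = (
    "Does the plant produce seeds?",
    (  # yes branch
        "Are the seeds enclosed in a fruit?",
        "flowering plants (angiosperms)",    # yes
        "conifers and allies (gymnosperms)"  # no
    ),
    (  # no branch
        "Does the plant have true vascular tissue?",
        "ferns and allies",                  # yes
        "mosses, liverworts and hornworts"   # no
    ),
)

def identify(node, answers):
    """Walk the key, consuming one yes/no answer per question."""
    if isinstance(node, str):            # reached a leaf: a named group
        return node
    question, yes_branch, no_branch = node
    answer = answers.pop(0)
    print(f"{question} -> {'yes' if answer else 'no'}")
    return identify(yes_branch if answer else no_branch, answers)

if __name__ == "__main__":
    # Example: a seed plant whose seeds are not enclosed in a fruit.
    print("Identified as:", identify(KEY, [True, False]))
```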
Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation that there were more natural affinities between plants than the artificial sexual system of Linnaeus. Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected his ideas of the progression of morphological complexity and the later Bentham & Hooker system, which was influential until the mid-19th century, was influenced by Candolle's approach. Darwin's publication of the Origin of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary relationships as distinct from mere morphological similarity.
In the 19th century botany was a socially acceptable hobby for upper-class women. These women would collect and paint flowers and plants from around the world with scientific accuracy. The paintings were used to record many species that could not be transported or maintained in other environments. Marianne North illustrated over 900 species in extreme detail with watercolor and oil paintings. Her work and many other women's botany work was the beginning of popularizing botany to a wider audience.
Botany was greatly stimulated by the appearance of the first "modern" textbook, by Matthias Schleiden, published in English in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws, which enabled the calculation of the rates of molecular diffusion in biological systems.
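For reference, Fick's first law of diffusion, in the conventional notation, states that the diffusive flux is proportional to the concentration gradient:

\[ J = -D \frac{\partial C}{\partial x} \]

where J is the diffusive flux, D the diffusion coefficient and C the concentration; his second law, \( \partial C/\partial t = D\,\partial^2 C/\partial x^2 \), describes how a concentration profile changes over time, which is what makes the laws applicable to diffusion through plant tissues.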
Late modern botany
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.
Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research. The discovery and identification of the auxin plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals. Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones. The synthetic auxin 2,4-dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically modified crops designed for traits such as improved yield.
Modern morphology recognises a continuum between the major morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasises structural dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern Molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
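As a simplified illustration of the kind of sequence comparison that underlies DNA barcoding (a sketch only; real barcoding workflows use curated marker regions, reference databases and corrected distance measures), the following Python snippet computes the uncorrected p-distance between two aligned sequences. The sequences shown are invented.

```python
# Sketch: uncorrected p-distance between two aligned DNA sequences of equal
# length, i.e. the fraction of sites at which they differ. Real barcoding
# workflows involve alignment, reference databases and corrected distances;
# this only illustrates the basic comparison. Sequences are invented.

def p_distance(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return mismatches / len(seq_a)

query     = "ATGGTCACCCTTTATTTAATCTTTGGTGC"
reference = "ATGGTCACACTTTATTTAGTCTTTGGTGC"
print(f"p-distance = {p_distance(query, reference):.3f}")
```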
Branches of botany
Botany is divided along several axes.
Some subfields of botany relate to particular groups of organisms. Divisions related to the broader historical sense of botany include bacteriology, mycology (or fungology) and phycology - the study of bacteria, fungi and algae respectively - with lichenology as a subfield of mycology. The narrower sense of botany in the sense of the study of embryophytes (land plants) is disambiguated as phytology. Bryology is the study of mosses (and in the broader sense also liverworts and hornworts). Pteridology (or filicology) is the study of ferns and allied plants. A number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology (or graminology) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles.
Study can also be divided by guild rather than clade or grade. Dendrology is the study of woody plants.
Many divisions of biology have botanical subfields. These are commonly denoted by prefixing the word plant (e.g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics), or prefixing or substituting the prefix phyto- (e.g. phytochemistry, phytogeography). The study of fossil plants is palaeobotany. Other fields are denoted by adding or substituting the word botany (e.g. systematic botany).
Phytosociology is a subfield of plant ecology that classifies and studies communities of plants.
The intersection of fields from the above pair of categories gives rise to fields such as bryogeography (the study of the distribution of mosses).
Different parts of plants also give rise to their own subfields, including xylology, carpology (or fructology) and palynology, these being the study of wood, fruit and pollen/spores respectively.
Botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology and phytopharmacology.
Scope and importance
The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that provide humans and other organisms capable of aerobic respiration with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil.
Historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. Botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. At each of these levels, a botanist may be concerned with the classification (taxonomy), phylogeny and evolution, structure (anatomy and morphology), or function (physiology) of plant life.
The strictest definition of "plant" includes only the "land plants" or embryophytes, which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology), non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists, and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses.
Palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria, the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years.
Among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life's basic ingredients: energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability.
Human nutrition
Virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms of the major staple foods, such as teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics.
Botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity's ability to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany is the study of the relationships between plants and people. When applied to the investigation of historical plant–people relationships, ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Some of the earliest plant–people relationships arose among the indigenous peoples of Canada, who learned to distinguish edible plants from inedible ones; this knowledge was later recorded by ethnobotanists.
Plant biochemistry
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
Plants and various other groups of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen (O2) as a by-product.
The light energy captured by chlorophyll a is initially in the form of electrons (and later a proton gradient) that is used to make molecules of ATP and NADPH which temporarily store and transport energy. Their energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis and the raw material from which glucose and almost all other organic molecules of biological origin are synthesised. Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the plant.
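The overall stoichiometry of oxygenic photosynthesis described above can be summarised by the familiar net equation (a standard textbook summary rather than a formula quoted from this article; the G3P intermediate is subsumed into the hexose product):

\[ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2} \]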
Unlike in animals (which lack chloroplasts), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids and most amino acids. The fatty acids that chloroplasts make are used for many purposes, such as providing material for cell membranes and making the polymer cutin, which is found in the plant cuticle that protects land plants from drying out.
Plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed.
Vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. Lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant evolution during the Ordovician period.
The concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and the pineapple and some dicots like the Asteraceae have since independently evolved pathways like crassulacean acid metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants.
Medicine and materials
Phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential oils peppermint oil and lemon oil, are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in medicine as pharmaceuticals as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol (active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich plant products such as barley (beer), rice (sake) and grapes (wine). Native Americans have used various plants as ways of treating illness or disease for thousands of years. This knowledge of plants has been recorded by ethnobotanists and has in turn been used by pharmaceutical companies as a route to drug discovery.
Plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine; yellow weld and blue woad, used together to produce Lincoln green; indoxyl, the source of the blue dye indigo traditionally used to dye denim; and the artist's pigments gamboge and rose madder.
Sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's material, and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. Sweetgrass was used by Native Americans to ward off insects such as mosquitoes. These insect-repelling properties of sweetgrass were later attributed, in work reported by the American Chemical Society, to the molecules phytol and coumarin.
Plant ecology
Plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. Plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. Some ecologists even rely on empirical data from indigenous peoples that is gathered by ethnobotanists. Such data can reveal a great deal about how the land looked thousands of years ago and how it has changed over that time. The goals of plant ecology are to understand the causes of plants' distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.
Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too. For example, they can change their environment's albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.
Herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. Other organisms form mutually beneficial relationships with plants. For example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds.
Plants, climate and environmental change
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet-B radiation (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
Genetics
Inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What Mendel learned from studying plants has had far-reaching benefits outside of botany. Similarly, "jumping genes" were discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences between plants and other organisms.
Species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter- and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.
Charles Darwin in his 1878 book The Effects of Cross and Self-Fertilization in the Vegetable Kingdom at the start of chapter XII noted "The first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross-fertilisation is beneficial and self-fertilisation often injurious, at least with the plants on which I experimented." An important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. This beneficial effect is also known as hybrid vigor or heterosis. Once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression.
Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.
Most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent population because there is a mismatch in chromosome numbers. Such plants, reproductively isolated from the parent species but living within the same geographical area, may be sufficiently successful to form a new species. Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces viable seeds by apomixis.
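As a brief worked illustration of the ploidy terms used above (the chromosome numbers quoted here are standard reference values for wheat, not figures taken from this article): for a base chromosome number x, a diploid carries 2x chromosomes, a tetraploid 4x and a hexaploid 6x. Wheat has x = 7, so

\[ 2n = 4x = 4 \times 7 = 28 \ \text{(durum wheat)}, \qquad 2n = 6x = 6 \times 7 = 42 \ \text{(bread wheat)}. \]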
As in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non-Mendelian. Chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants.
Molecular genetics
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in plants. The single celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology.
Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
Epigenetics
Epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently. One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells.
Epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. A single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process results from the epigenetic activation of some genes and inhibition of others.
Unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. While plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate.
Epigenetic changes can lead to paramutations, which do not follow the rules of Mendelian inheritance. These epigenetic marks are carried from one generation to the next, with one allele inducing a change on the other.
Plant evolution
The chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria (commonly but incorrectly known as "blue-green algae") and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident.
The algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom Embryophyta together form the monophyletic group or clade Streptophytina.
Nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds" not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms.
Plant physiology
Plant physiology encompasses all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air, soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals, all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis.
Molecules are moved within plants by transport processes that operate at a variety of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream. Diffusion, osmosis, active transport and mass flow are all different ways transport can occur. Examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. In vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem, and plant hormones are transported by a variety of processes.
Plant hormones
Plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. Tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of Mimosa pudica, the insect traps of Venus flytrap and bladderworts, and the pollinia of orchids.
The hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth, was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones, were key steps in the development of plant biotechnology and genetic modification.
Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins, such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission. Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on an industrial scale to promote ripening of cotton, pineapples and other climacteric crops.
Another class of phytohormones is the jasmonates, first isolated from the oil of Jasminum grandiflorum, which regulate wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack.
In addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. This can result in adaptive changes in a process known as photomorphogenesis. Phytochromes are the photoreceptors in a plant that are sensitive to light.
Plant anatomy and morphology
Plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form.
All plants are multicellular eukaryotes, their DNA stored in nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division.
The bodies of vascular plants including clubmosses, ferns and seed plants (gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. Non-vascular plants, the liverworts, hornworts and mosses do not produce ground-penetrating vascular roots and most of the plant participates in photosynthesis. The sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts.
The root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost, the other can often regrow it. In fact it is possible to grow an entire plant from a single leaf, as is the case with plants in Streptocarpus sect. Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells) that can grow into a new plant.
In vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots.
Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues: wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants. Some plants reproduce sexually, some asexually, and some via both means.
Although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. Furthermore, structures can be seen as processes, that is, process combinations.
Systematic botany
Systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and phylogenetics. Biological classification is the method by which botanists group organisms into categories such as genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature. The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and plants (ICN) and administered by the International Botanical Congress.
Kingdom Plantae belongs to Domain Eukaryota and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division); Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. For example, the tiger lily is Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined when italics are not available).
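As a minimal illustration of the capitalisation and italicisation conventions just described, the following Python sketch (a hypothetical helper, not part of any botanical software named here) formats a genus and specific epithet into a binomial, using asterisks as a plain-text stand-in for italics:

```python
# Hypothetical helper illustrating binomial formatting conventions:
# genus capitalised, specific epithet lowercased, whole name italicised
# (asterisks stand in for italics in plain text).
def format_binomial(genus: str, epithet: str) -> str:
    name = f"{genus.strip().capitalize()} {epithet.strip().lower()}"
    return f"*{name}*"

print(format_binomial("lilium", "Columbianum"))  # prints: *Lilium columbianum*
```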
The evolutionary relationships and heredity of a group of organisms is called its phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two genera are indeed related.
Judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies) – and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary branching and descent.
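The following Python sketch gives a toy illustration of this reasoning (the taxa, characters and function name are invented for the example, not drawn from the article): characters shared with an outgroup are treated as uninformative, while characters shared by a subset of the ingroup but absent from the outgroup are flagged as candidate shared derived characters. Note that the "globular body" shared by Echinocactus and the euphorbia is also flagged, showing that such a simple criterion cannot by itself separate true synapomorphies from convergence; that requires additional evidence, as the paragraph above explains.

```python
# Toy cladistics sketch: flag characters shared by several ingroup taxa but
# absent from the outgroup as *candidate* shared derived characters.
# Taxa and characters are illustrative only.
taxa = {
    "Pereskia":     {"areoles", "prominent leaves"},
    "Echinocactus": {"areoles", "globular body"},
    "Euphorbia":    {"globular body"},        # convergent water-storing form
    "outgroup":     {"prominent leaves"},
}

def candidate_synapomorphies(taxa, outgroup="outgroup"):
    ingroup = {name: chars for name, chars in taxa.items() if name != outgroup}
    out_chars = taxa[outgroup]
    candidates = {}
    for char in sorted(set().union(*ingroup.values()) - out_chars):
        bearers = sorted(name for name, chars in ingroup.items() if char in chars)
        if len(bearers) > 1:                  # shared by more than one ingroup taxon
            candidates[char] = bearers
    return candidates

print(candidate_synapomorphies(taxa))
# {'areoles': ['Echinocactus', 'Pereskia'],
#  'globular body': ['Echinocactus', 'Euphorbia']}
```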
From the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants.
In 1998, the Angiosperm Phylogeny Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms, have now been answered. Investigating how plant species are related to each other allows botanists to better understand the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological developments such as computers and electron microscopes have greatly increased the level of detail studied and speed at which data can be analysed.
Symbols
A few symbols are in current use in botany. A number of others are obsolete; for example, Linnaeus used the planetary symbols ♂ (Mars) for biennial plants, ♃ (Jupiter) for herbaceous perennials and ♄ (Saturn) for woody perennials, based on the planets' orbital periods of 2, 12 and 30 years; and Willd. used ♄ (Saturn) for neuter in addition to ☿ (Mercury) for hermaphroditic. The following symbols are still used:
♀ female
♂ male
⚥ hermaphrodite/bisexual
⚲ vegetative (asexual) reproduction
◊ sex unknown
☉ annual
⚇ biennial
♾ perennial
☠ poisonous
🛈 further information
× crossbred hybrid
+ grafted hybrid
See also
Branches of botany
Evolution of plants
Glossary of botanical terms
Glossary of plant morphology
List of botany journals
List of botanists
List of botanical gardens
List of botanists by author abbreviation
List of domesticated plants
List of flowers
List of systems of plant taxonomy
Outline of botany
Timeline of British botany
Notes
References
Citations
Sources
Supporting Information
External links
Articles containing video clips | Botany | [
"Biology"
] | 11,930 | [
"Plants",
"Botany"
] |
4,187 | https://en.wikipedia.org/wiki/Bactericide | A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics.
However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings.
Disinfectants
The most used disinfectants are those applying
active chlorine (i.e., hypochlorites, chloramines, dichloroisocyanurate and trichloroisocyanurate, wet chlorine, chlorine dioxide, etc.),
active oxygen (peroxides, such as peracetic acid, potassium persulfate, sodium perborate, sodium percarbonate, and urea perhydrate),
iodine (povidone-iodine, Lugol's solution, iodine tincture, iodinated nonionic surfactants),
concentrated alcohols (mainly ethanol, 1-propanol (also called n-propanol) and 2-propanol (also called isopropanol), and mixtures thereof; further, 2-phenoxyethanol and 1- and 2-phenoxypropanols are used),
phenolic substances (such as phenol (also called "carbolic acid"), cresols such as thymol, halogenated (chlorinated, brominated) phenols, such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, salts and isomers thereof),
cationic surfactants, such as some quaternary ammonium cations (benzalkonium chloride, cetyl trimethylammonium bromide or chloride, didecyldimethylammonium chloride, cetylpyridinium chloride, benzethonium chloride and others) and non-quaternary compounds such as chlorhexidine, glucoprotamine and octenidine dihydrochloride,
strong oxidizers, such as ozone and permanganate solutions;
heavy metals and their salts, such as colloidal silver, silver nitrate, mercury chloride, phenylmercury salts, copper sulfate, copper oxide-chloride etc. Heavy metals and their salts are the most toxic and environmentally hazardous bactericides and therefore their use is strongly discouraged or prohibited;
strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), pH < 1, and
alkalis (sodium, potassium or calcium hydroxides) of pH > 13, which, particularly at elevated temperatures (above 60 °C), kill bacteria.
Antiseptics
As antiseptics (i.e., germicide agents that can be used on human or animal body, skin, mucosae, wounds and the like), few of the above-mentioned disinfectants can be used, under proper conditions (mainly concentration, pH, temperature and toxicity toward humans and animals). Among them, some important are
properly diluted chlorine preparations (e.g. Dakin's solution, 0.5% sodium or potassium hypochlorite solution, pH-adjusted to pH 7–8, or 0.5–1% solution of sodium benzenesulfochloramide (chloramine B)), some
iodine preparations, such as iodopovidone in various galenics (ointment, solutions, wound plasters), in the past also Lugol's solution,
peroxides such as urea perhydrate solutions and pH-buffered 0.1 – 0.25% peracetic acid solutions,
alcohols with or without antiseptic additives, used mainly for skin antisepsis,
weak organic acids such as sorbic acid, benzoic acid, lactic acid and salicylic acid
some phenolic compounds, such as hexachlorophene, triclosan and Dibromol, and
cationic surfactants, such as 0.05–0.5% benzalkonium, 0.5–4% chlorhexidine, 0.1–2% octenidine solutions.
Others are generally not applicable as safe antiseptics, either because of their corrosive or toxic nature.
Antibiotics
Bactericidal antibiotics kill bacteria; bacteriostatic antibiotics slow their growth or reproduction.
Bactericidal antibiotics that inhibit cell wall synthesis: the beta-lactam antibiotics (penicillin derivatives (penams), cephalosporins (cephems), monobactams, and carbapenems) and vancomycin.
Also bactericidal are daptomycin, fluoroquinolones, metronidazole, nitrofurantoin, co-trimoxazole, telithromycin.
Aminoglycosidic antibiotics are usually considered bactericidal, although they may be bacteriostatic with some organisms.
As of 2004, the distinction between bactericidal and bacteriostatic agents appeared to be clear according to the basic/clinical definition, but this only applies under strict laboratory conditions and it is important to distinguish microbiological and clinical definitions. The distinction is more arbitrary when agents are categorized in clinical situations. The supposed superiority of bactericidal agents over bacteriostatic agents is of little relevance when treating the vast majority of infections with gram-positive bacteria, particularly in patients with uncomplicated infections and noncompromised immune systems. Bacteriostatic agents have been used effectively for treatments that are considered to require bactericidal activity. Furthermore, some broad classes of antibacterial agents considered bacteriostatic can exhibit bactericidal activity against some bacteria on the basis of in vitro determination of MBC/MIC values. At high concentrations, bacteriostatic agents are often bactericidal against some susceptible organisms. The ultimate guide to treatment of any infection must be clinical outcome.
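As a rough illustration of how MBC/MIC values are used in the laboratory sense mentioned above, the sketch below applies a commonly cited convention under which an agent is regarded as bactericidal against a given organism when its minimum bactericidal concentration (MBC) is no more than about four times its minimum inhibitory concentration (MIC); the threshold and function name are assumptions for the example, not figures taken from this article.

```python
# Illustrative only: classify an agent against one organism by the MBC/MIC
# ratio. The 4x threshold is a commonly cited laboratory convention, assumed
# here rather than taken from the text; clinical outcome remains the ultimate
# guide, as noted above.
def classify_by_mbc_mic(mbc: float, mic: float, threshold: float = 4.0) -> str:
    if mic <= 0 or mbc <= 0:
        raise ValueError("MBC and MIC must be positive concentrations")
    return "bactericidal" if mbc / mic <= threshold else "bacteriostatic"

print(classify_by_mbc_mic(mbc=2.0, mic=1.0))   # bactericidal
print(classify_by_mbc_mic(mbc=32.0, mic=1.0))  # bacteriostatic
```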
Surfaces
Material surfaces can exhibit bactericidal properties because of their crystallographic surface structure.
In the mid-2000s it was shown that metallic nanoparticles can kill bacteria. The effect of a silver nanoparticle, for example, depends on its size, with a preferential diameter of about 1–10 nm for interacting with bacteria.
In 2013, cicada wings were found to have a selective anti-gram-negative bactericidal effect based on their physical surface structure. Mechanical interaction with the more or less rigid nanopillars found on the wing deforms and ruptures bacteria within minutes, hence called a mechano-bactericidal effect.
In 2020 researchers combined cationic polymer adsorption and femtosecond laser surface structuring to generate a bactericidal effect against both gram-positive Staphylococcus aureus and gram-negative Escherichia coli bacteria on borosilicate glass surfaces, providing a practical platform for the study of the bacteria-surface interaction.
See also
List of antibiotics
Microbicide
Virucide
References | Bactericide | [
"Biology"
] | 1,539 | [
"Bactericides",
"Biocides"
] |
4,191 | https://en.wikipedia.org/wiki/BCG%20vaccine | The Bacillus Calmette–Guérin (BCG) vaccine is a vaccine primarily used against tuberculosis (TB). It is named after its inventors Albert Calmette and Camille Guérin. In countries where tuberculosis or leprosy is common, one dose is recommended in healthy babies as soon after birth as possible. In areas where tuberculosis is not common, only children at high risk are typically immunized, while suspected cases of tuberculosis are individually tested for and treated. Adults who do not have tuberculosis and have not been previously immunized, but are frequently exposed, may be immunized, as well. BCG also has some effectiveness against Buruli ulcer infection and other nontuberculous mycobacterial infections. Additionally, it is sometimes used as part of the treatment of bladder cancer.
Rates of protection against tuberculosis infection vary widely and protection lasts up to 20 years. Among children, it prevents about 20% from getting infected and among those who do get infected, it protects half from developing disease. The vaccine is injected into the skin. No evidence shows that additional doses are beneficial.
Serious side effects are rare. Redness, swelling, and mild pain often occur at the injection site. A small ulcer may also form with some scarring after healing. Side effects are more common and potentially more severe in those with immunosuppression. Although no harmful effects on the fetus have been observed, there is insufficient evidence about the safety of BCG vaccination during pregnancy. Therefore, the vaccine is not recommended for use during pregnancy. The vaccine was originally developed from Mycobacterium bovis, which is commonly found in cattle. While it has been weakened, it is still live.
The BCG vaccine was first used medically in 1921. It is on the World Health Organization's List of Essential Medicines. The vaccine is given to about 100 million children per year globally. However, it is not commonly administered in the United States.
Medical uses
Tuberculosis
The main use of BCG is for vaccination against tuberculosis. BCG vaccine can be administered after birth intradermally. BCG vaccination can cause a false positive Mantoux test.
The most controversial aspect of BCG is the variable efficacy found in different clinical trials, which appears to depend on geography. Trials in the UK have consistently shown a protective effect of 60 to 80%, but trials conducted elsewhere have shown no protective effect, and efficacy appears to fall the closer one gets to the equator.
A 1994 systematic review found that BCG reduces the risk of getting tuberculosis by about 50%. Differences in effectiveness depend on region, due to factors such as genetic differences in the populations, changes in environment, exposure to other bacterial infections, and conditions in the laboratory where the vaccine is grown, including genetic differences between the strains being cultured and the choice of growth medium.
A systematic review and meta-analysis conducted in 2014 demonstrated that the BCG vaccine reduced infections by 19–27% and reduced progression to active tuberculosis by 71%. The studies included in this review were limited to those that used interferon gamma release assay.
The duration of protection of BCG is not clearly known. In those studies showing a protective effect, the data are inconsistent. The MRC study showed protection waned to 59% after 15 years and to zero after 20 years; however, a study looking at Native Americans immunized in the 1930s found evidence of protection even 60 years after immunization, with only slight waning in efficacy.
BCG seems to have its greatest effect in preventing miliary tuberculosis or tuberculosis meningitis, so it is still extensively used even in countries where efficacy against pulmonary tuberculosis is negligible.
The 100th anniversary of the BCG vaccine was in 2021. It remains the only vaccine licensed against tuberculosis, which is an ongoing pandemic. Tuberculosis elimination is a goal of the World Health Organization (WHO). The development of new vaccines with greater efficacy against adult pulmonary tuberculosis may be needed to make substantial progress.
Efficacy
Several possible reasons for the variable efficacy of BCG in different countries have been proposed. None has been proven, some have been disproved, and none can explain the lack of efficacy in low tuberculosis-burden countries (US) and high tuberculosis-burden countries (India). The reasons for variable efficacy have been discussed at length in a WHO document on BCG.
Genetic variation in BCG strains: Genetic variation in the BCG strains may explain the variable efficacy reported in different trials.
Genetic variation in populations: Differences in the genetic makeup of different populations may explain the difference in efficacy. The Birmingham BCG trial was published in 1988. The trial, based in Birmingham, United Kingdom, examined children born to families who originated from the Indian subcontinent (where vaccine efficacy had previously been shown to be zero). The trial showed a 64% protective effect, similar to the figure from other UK trials, thus arguing against the genetic variation hypothesis.
Interference by nontuberculous mycobacteria: Exposure to environmental mycobacteria (especially Mycobacterium avium, Mycobacterium marinum and Mycobacterium intracellulare) results in a nonspecific immune response against mycobacteria. Administering BCG to someone with a nonspecific immune response against mycobacteria does not augment the response. BCG will, therefore, appear not to be efficacious because that person already has a level of immunity and BCG is not adding to that immunity. This effect is called masking because the effect of BCG is masked by environmental mycobacteria. Clinical evidence for this effect was found in a series of studies performed in parallel in adolescent school children in the UK and Malawi. In this study, the UK school children had a low baseline cellular immunity to mycobacteria which was increased by BCG; in contrast, the Malawi school children had a high baseline cellular immunity to mycobacteria and this was not significantly increased by BCG. Whether this natural immune response is protective is not known. An alternative explanation is suggested by mouse studies; immunity against mycobacteria stops BCG from replicating and so stops it from producing an immune response. This is called the block hypothesis.
Interference by concurrent parasitic infection: In another hypothesis, simultaneous infection with parasites such as helminthiasis changes the immune response to BCG, making it less effective. As Th1 response is required for an effective immune response to tuberculous infection, concurrent infection with various parasites produces a simultaneous Th2 response, which blunts the effect of BCG.
Mycobacteria
BCG has protective effects against some nontuberculosis mycobacteria.
Leprosy: BCG has a protective effect against leprosy in the range of 20 to 80%.
Buruli ulcer: BCG may protect against or delay the onset of Buruli ulcer.
Cancer
BCG has been one of the most successful immunotherapies. BCG vaccine has been the "standard of care for patients with bladder cancer (NMIBC)" since 1977. By 2014, more than eight different agents or strains considered biosimilar were being used to treat nonmuscle-invasive bladder cancer.
Several cancer vaccines use BCG as an additive to provide an initial stimulation of the person's immune system.
BCG is used in the treatment of superficial forms of bladder cancer. Since the late 1970s, evidence has become available that the instillation of BCG into the bladder is an effective form of immunotherapy in this disease. While the mechanism is unclear, it appears a local immune reaction is mounted against the tumor. Immunotherapy with BCG prevents recurrence in up to 67% of cases of superficial bladder cancer.
BCG has been evaluated in many studies as a therapy for colorectal cancer. The US biotech company Vaccinogen is evaluating BCG as an adjuvant to autologous tumour cells used as a cancer vaccine in stage II colon cancer.
Method of administration
A pre-injection tuberculin skin test is usually carried out before administering the BCG vaccine. A reactive tuberculin skin test is a contraindication to BCG due to the risk of severe local inflammation and scarring; it does not indicate immunity. BCG is also contraindicated in certain people who have IL-12 receptor pathway defects.
BCG is given as a single intradermal injection at the insertion of the deltoid. If BCG is accidentally given subcutaneously, then a local abscess (a "BCG-oma") may form that can sometimes ulcerate and may require immediate treatment with antibiotics; without treatment, the infection could spread and cause severe damage to vital organs. An abscess is not always associated with incorrect administration, and it is one of the more common complications that can occur with the vaccination. Numerous medical studies on the treatment of these abscesses with antibiotics have been done with varying results, but the consensus is that once pus is aspirated and analysed, provided no unusual bacilli are present, the abscess will generally heal on its own in a matter of weeks.
The characteristic raised scar that BCG immunization leaves is often used as proof of prior immunization. This scar must be distinguished from that of smallpox vaccination, which it may resemble.
When given for bladder cancer, the vaccine is not injected through the skin but is instilled into the bladder through the urethra using a soft catheter.
Adverse effects
BCG immunization generally causes some pain and scarring at the site of injection. The main adverse effects are keloids (large, raised scars). Injection at the insertion of the deltoid muscle is most frequently used because the local complication rate is smallest at that site. Nonetheless, the buttock is an alternative site of administration because it provides better cosmetic outcomes.
BCG vaccine should be given intradermally. If given subcutaneously, it may induce local infection and spread to the regional lymph nodes, causing either suppurative (production of pus) or nonsuppurative lymphadenitis. Conservative management is usually adequate for nonsuppurative lymphadenitis. If suppuration occurs, it may need needle aspiration. For unresolved suppuration, surgical excision may be required. Evidence for the treatment of these complications is scarce.
Uncommonly, breast and gluteal abscesses can occur due to haematogenous (carried by the blood) and lymphangiomatous spread. Regional bone infection (BCG osteomyelitis or osteitis) and disseminated BCG infection are rare complications of BCG vaccination, but potentially life-threatening. Systemic antituberculous therapy may be helpful in severe complications.
When BCG is used for bladder cancer, around 2.9% of treated patients discontinue immunotherapy due to a genitourinary or systemic BCG-related infection, however while symptomatic bladder BCG infection is frequent, the involvement of other organs is very uncommon. When systemic involvement occurs, liver and lungs are the first organs to be affected (1 week [median] after the last BCG instillation).
If BCG is accidentally given to an immunocompromised patient (e.g., an infant with severe combined immune deficiency), it can cause disseminated or life-threatening infection. The documented incidence of this happening is less than one per million immunizations given. In 2007, the WHO stopped recommending BCG for infants with HIV, even if the risk of exposure to tuberculosis is high, because of the risk of disseminated BCG infection (which is roughly 400 per 100,000 in that higher risk context).
Usage
The age at which BCG is given, and the number of doses, have always varied from country to country. The WHO recommends childhood BCG for all countries with a high incidence of tuberculosis and/or a high leprosy burden. This is a partial list of historic and current BCG practices around the globe. A complete atlas of past and present practice has been compiled. As of 2022, 155 countries offer the BCG vaccine in their schedule.
Americas
Brazil introduced universal BCG immunization in 1967–1968, and the practice continues to this day. According to Brazilian law, BCG is given again to health-sector professionals and to people in close contact with patients with tuberculosis or leprosy.
Canadian Indigenous communities receive the BCG vaccine, and in the province of Quebec the vaccine was offered to children until the mid-1970s.
Most countries in Central and South America have universal BCG immunizations, as does Mexico.
The United States has never used mass immunization of BCG due to the rarity of tuberculosis in the US, relying instead on the detection and treatment of latent tuberculosis.
Europe
Asia
China: Introduced in the 1930s, BCG became more widespread after 1949, and the majority of the population had been inoculated by 1979.
South Korea, Singapore, Taiwan, and Malaysia: in these countries, BCG was given at birth and again at age 12. In Malaysia and Singapore, this policy was changed in 2001 to a single dose at birth. South Korea stopped re-vaccination in 2008.
Hong Kong: BCG is given to all newborns.
Japan: BCG was introduced in 1951 and was typically given at age 6. Until 2005 it was administered no later than the fourth birthday, and from 2005 to 2012 no later than six months from birth; following reports of osteitis side effects from vaccination at 3–4 months, the schedule was changed in 2012 so that the vaccine is given between five and eight months after birth, and no later than a child's first birthday. Some municipalities recommend an earlier immunization schedule.
Thailand: In Thailand, the BCG vaccine is given routinely at birth.
India and Pakistan: India and Pakistan introduced BCG mass immunization in 1948, the first countries outside Europe to do so. In 2015, millions of infants in Pakistan went without the BCG vaccine for the first time, due to a global shortage.
Mongolia: All newborns are vaccinated with BCG. Previously, the vaccine was also given at ages 8 and 15, although this is no longer common practice.
Philippines: BCG vaccine started in the Philippines in 1979 with the Expanded Program on Immunization.
Sri Lanka: The national policy is to give BCG vaccination to all newborn babies immediately after birth. BCG vaccination is carried out under the Expanded Programme of Immunisation (EPI).
Middle East
Israel: BCG was given to all newborns between 1955 and 1982.
Iran: Iran's vaccination policy was implemented in 1984. Vaccination with Bacillus Calmette–Guérin (BCG) is among the most important tuberculosis control strategies in Iran [2]. According to Iranian neonatal vaccination policy, BCG has been given as a single dose to children aged under 6 years, shortly after birth or at first contact with the health services.
Africa
South Africa: The BCG vaccine is given routinely at birth to all newborns, except those with clinically symptomatic AIDS. The vaccination site is the right arm.
Morocco: BCG was introduced in 1949. The policy is BCG vaccination at birth for all newborns.
Kenya: The BCG vaccine is given routinely at birth to all newborns.
South Pacific
Australia: BCG vaccination was used between the 1950s and mid-1980s. BCG has not been part of routine vaccination since the mid-1980s.
New Zealand: BCG immunisation was first introduced for 13-year-olds in 1948. Vaccination was phased out from 1963 to 1990.
Manufacture
BCG is prepared from a strain of the attenuated (virulence-reduced) live bovine tuberculosis bacillus, Mycobacterium bovis, that has lost its ability to cause disease in humans. It is specially subcultured in a culture medium, usually Middlebrook 7H9. Because the living bacilli evolve to make the best use of available nutrients, they become less well-adapted to human blood and can no longer induce disease when introduced into a human host. Still, they are similar enough to their wild ancestors to provide some immunity against human tuberculosis. The BCG vaccine can be anywhere from 0 to 80% effective in preventing tuberculosis for 15 years; however, its protective effect appears to vary according to geography and the lab in which the vaccine strain was grown.
Several companies make BCG, sometimes using different genetic strains of the bacterium, which may result in different product characteristics. OncoTICE, used for bladder instillation for bladder cancer, was developed by Organon Laboratories (since acquired by Schering-Plough, and in turn acquired by Merck & Co.). A similar application is Onko BCG, a product of the Polish company Biomed-Lublin, which owns the Brazilian substrain M. bovis BCG Moreau; this substrain is less reactogenic than vaccines based on other BCG strains. Pacis BCG, made from the Montréal (Institut Armand-Frappier) strain, was first marketed by Urocor in about 2002; Urocor has since been acquired by Dianon Systems. Evans Vaccines (a subsidiary of PowderJect Pharmaceuticals) has also produced a BCG vaccine. Statens Serum Institut (SSI) in Denmark has marketed a BCG vaccine prepared using Danish strain 1331; production and distribution of this strain were later taken over by AJ Vaccines, following the transfer of SSI's vaccine production business to AJ Vaccines Holding A/S on 16 January 2017. Japan BCG Laboratory markets its vaccine, based on the Tokyo 172 substrain of Pasteur BCG, in 50 countries worldwide.
According to a UNICEF report on BCG vaccine supply security published in December 2015, global demand increased in 2015 from 123 to 152.2 million doses. To "improve security and to [diversify] sources of affordable and flexible supply", UNICEF awarded seven new manufacturers contracts to produce BCG. Along with supply availability from existing manufacturers and a "new WHO prequalified vaccine", the total supply will be "sufficient to meet both suppressed 2015 demand carried over to 2016, as well as total forecast demand through 2016–2018."
Supply shortage
In 2011, the Sanofi Pasteur plant flooded, causing problems with mold. The facility, located in Toronto, Ontario, Canada, produced BCG vaccine products made with substrain Connaught, such as a tuberculosis vaccine and ImmuCYST, a BCG immunotherapeutic used as a bladder cancer drug. By April 2012, the FDA had found dozens of documented problems with sterility at the plant, including mold, nesting birds and rusted electrical conduits. The resulting closure of the plant for over two years caused shortages of bladder cancer and tuberculosis vaccines. On 29 October 2014, Health Canada gave Sanofi permission to resume production of BCG. A 2018 analysis of the global supply concluded that supplies were adequate to meet forecast BCG vaccine demand, but that risks of shortages remain, mainly because 75 percent of the WHO-prequalified supply depends on just two suppliers.
Dried
Some BCG vaccines are freeze-dried into a fine powder, which is sometimes sealed under vacuum in a glass ampoule. Such an ampoule has to be opened slowly to prevent the inrushing air from blowing out the powder, and the powder must then be reconstituted with saline before injection.
History
The history of BCG is tied to that of smallpox. By 1865 Jean Antoine Villemin had demonstrated that rabbits could be infected with tuberculosis from humans; by 1868 he had found that rabbits could be infected with tuberculosis from cows and that rabbits could be infected with tuberculosis from other rabbits. Thus, he concluded that tuberculosis was transmitted via some unidentified microorganism (or "virus", as he called it). In 1882 Robert Koch regarded human and bovine tuberculosis as identical. But in 1895, Theobald Smith presented differences between human and bovine tuberculosis, which he reported to Koch. By 1901 Koch distinguished Mycobacterium bovis from Mycobacterium tuberculosis. Following the success of vaccination in preventing smallpox, established during the 18th century, scientists thought to find a corollary in tuberculosis by drawing a parallel between bovine tuberculosis and cowpox: it was hypothesized that infection with bovine tuberculosis might protect against infection with human tuberculosis. In the late 19th century, clinical trials using M. bovis were conducted in Italy with disastrous results, because M. bovis was found to be just as virulent as M. tuberculosis.
Albert Calmette, a French physician and bacteriologist, and his assistant and later colleague, Camille Guérin, a veterinarian, were working at the Institut Pasteur de Lille (Lille, France) in 1908. Their work included subculturing virulent strains of the tuberculosis bacillus and testing different culture media. They noted a glycerin-bile-potato mixture grew bacilli that seemed less virulent and changed the course of their research to see if repeated subculturing would produce a strain that was attenuated enough to be considered for use as a vaccine. The BCG strain was isolated after subculturing 239 times during 13 years from a virulent strain on glycerine potato medium. The research continued throughout World War I until 1919 when the now avirulent bacilli were unable to cause tuberculosis disease in research animals. Calmette and Guerin transferred to the Paris Pasteur Institute in 1919. The BCG vaccine was first used in humans in 1921.
Public acceptance was slow, and the Lübeck disaster, in particular, did much to harm it. Between 1929 and 1933 in Lübeck, 251 infants were vaccinated in the first 10 days of life; 173 developed tuberculosis and 72 died. It was subsequently discovered that the BCG administered there had been contaminated with a virulent strain that was being stored in the same incubator, which led to legal action against the vaccine's manufacturers.
Dr. R. G. Ferguson, working at the Fort Qu'Appelle Sanatorium in Saskatchewan, was among the pioneers in developing the practice of vaccination against tuberculosis. In Canada, more than 600 children from residential schools were used as involuntary participants in BCG vaccine trials between 1933 and 1945. In 1928, the BCG vaccine was adopted by the Health Committee of the League of Nations (predecessor to the World Health Organization (WHO)). Because of opposition, however, it only became widely used after World War II. From 1945 to 1948, relief organizations (International Tuberculosis Campaign or Joint Enterprises) vaccinated over eight million babies in Eastern Europe and prevented the predicted typical increase of tuberculosis after a major war.
The BCG vaccine is very efficacious against tuberculous meningitis in the pediatric age group, but its efficacy against pulmonary tuberculosis appears variable. Some countries have removed the BCG vaccine from routine vaccination. Two countries that have never used it routinely are the United States and the Netherlands (in both countries, it is felt that having a reliable Mantoux test and therefore being able to accurately detect active disease is more beneficial to society than vaccinating against a relatively rare condition).
Other names include "Vaccin Bilié de Calmette et Guérin" and "Bacille de Calmette et Guérin".
Research
Tentative evidence exists for a beneficial non-specific effect of BCG vaccination on overall mortality in low-income countries, and for its reducing other health problems, including sepsis and respiratory infections, when given early, with greater benefit the earlier it is used.
In rhesus macaques, BCG shows improved rates of protection when given intravenously. Some risks must be evaluated before it can be translated to humans.
The University of Oxford Jenner Institute is conducting a study comparing the efficacy of injected versus inhaled BCG vaccine in already-vaccinated adults.
Type 1 diabetes
The BCG vaccine is in the early stages of being studied in type 1 diabetes (T1D).
COVID-19
Use of the BCG vaccine may provide protection against COVID-19. However, epidemiologic observations in this respect are ambiguous. The WHO does not recommend its use for prevention.
Twenty BCG trials are in various clinical stages, and the results so far are extremely mixed. A 15-month trial involving people thrice-vaccinated over the two years before the pandemic shows positive results in preventing infection in BCG-naive people with type 1 diabetes. On the other hand, a 5-month trial shows that re-vaccinating with BCG does not help prevent infection in healthcare workers. Both of these trials were double-blind randomized controlled trials.
References
External links
Vaccines
Cancer vaccines
Live vaccines
Tuberculosis vaccines
Drugs developed by Schering-Plough
Drugs developed by Merck & Co.
World Health Organization essential medicines (vaccines)
Wikipedia medicine articles ready to translate | BCG vaccine | [
"Biology"
] | 5,232 | [
"Vaccination",
"Vaccines"
] |
4,194 | https://en.wikipedia.org/wiki/Bohrium | Bohrium is a synthetic chemical element; it has symbol Bh and atomic number 107. It is named after Danish physicist Niels Bohr. As a synthetic element, it can be created in particle accelerators but is not found in nature. All known isotopes of bohrium are highly radioactive; the most stable known isotope is 270Bh with a half-life of approximately 2.4 minutes, though the unconfirmed 278Bh may have a longer half-life of about 11.5 minutes.
In the periodic table, it is a d-block transactinide element. It is a member of the 7th period and belongs to the group 7 elements as the fifth member of the 6d series of transition metals. Chemistry experiments have confirmed that bohrium behaves as the heavier homologue to rhenium in group 7. The chemical properties of bohrium are characterized only partly, but they compare well with the chemistry of the other group 7 elements.
Introduction
History
Discovery
Two groups claimed discovery of the element. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55, respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough.
In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce 5 atoms of the isotope bohrium-262:
209Bi + 54Cr → 262Bh + n
This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report.
Proposed names
In September 1992, the German group suggested the name nielsbohrium with symbol Ns to honor the Danish physicist Niels Bohr. The Soviet scientists at the Joint Institute for Nuclear Research in Dubna, Russia had suggested this name be given to element 105 (which was finally called dubnium) and the German team wished to recognise both Bohr and the fact that the Dubna team had been the first to propose the cold fusion reaction, and simultaneously help to solve the controversial problem of the naming of element 105. The Dubna team agreed with the German group's naming proposal for element 107.
There was an element naming controversy as to what the elements from 104 to 106 were to be called; the IUPAC adopted unnilseptium (symbol Uns) as a temporary, systematic element name for this element. In 1994 a committee of IUPAC recommended that element 107 be named bohrium, not nielsbohrium, since there was no precedent for using a scientist's complete name in the naming of an element. This was opposed by the discoverers as there was some concern that the name might be confused with boron and in particular the distinguishing of the names of their respective oxyanions, bohrate and borate. The matter was handed to the Danish branch of IUPAC which, despite this, voted in favour of the name bohrium, and thus the name bohrium for element 107 was recognized internationally in 1997; the names of the respective oxyanions of boron and bohrium remain unchanged despite their homophony.
Isotopes
Bohrium has no stable or naturally occurring isotopes. Several radioactive isotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. Twelve different isotopes of bohrium have been reported with atomic masses 260–262, 264–267, 270–272, 274, and 278, one of which, bohrium-262, has a known metastable state. All of these but the unconfirmed 278Bh decay only through alpha decay, although some unknown bohrium isotopes are predicted to undergo spontaneous fission.
The lighter isotopes usually have shorter half-lives; half-lives of under 100 ms for 260Bh, 261Bh, 262Bh, and 262mBh were observed. 264Bh, 265Bh, 266Bh, and 271Bh are more stable at around 1 s, and 267Bh and 272Bh have half-lives of about 10 s. The heaviest isotopes are the most stable, with 270Bh and 274Bh having measured half-lives of about 2.4 min and 40 s respectively, and the even heavier unconfirmed isotope 278Bh appearing to have an even longer half-life of about 11.5 minutes.
The most proton-rich isotopes with masses 260, 261, and 262 were directly produced by cold fusion, those with mass 262 and 264 were reported in the decay chains of meitnerium and roentgenium, while the neutron-rich isotopes with masses 265, 266, 267 were created in irradiations of actinide targets. The five most neutron-rich ones with masses 270, 271, 272, 274, and 278 (unconfirmed) appear in the decay chains of 282Nh, 287Mc, 288Mc, 294Ts, and 290Fl respectively. The half-lives of bohrium isotopes range from about ten milliseconds for 262mBh up to about 40 seconds for 274Bh and 2.4 minutes for 270Bh, extending to about 11.5 minutes for the unconfirmed 278Bh, which may have one of the longest half-lives among reported superheavy nuclides.
Predicted properties
Very few properties of bohrium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that bohrium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, but properties of bohrium metal remain unknown and only predictions are available.
Chemical
Bohrium is the fifth member of the 6d series of transition metals and the heaviest member of group 7 in the periodic table, below manganese, technetium and rhenium. All the members of the group readily portray their group oxidation state of +7 and the state becomes more stable as the group is descended. Thus bohrium is expected to form a stable +7 state. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, , analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV).
The lighter group 7 elements are known to form volatile heptoxides M2O7 (M = Mn, Tc, Re), so bohrium should also form the volatile oxide Bh2O7. The oxide should dissolve in water to form perbohric acid, HBhO4.
Rhenium and technetium form a range of oxyhalides from the halogenation of the oxide. The chlorination of the oxide forms the oxychlorides MO3Cl, so BhO3Cl should be formed in this reaction. Fluorination results in MO3F and MO2F3 for the heavier elements in addition to the rhenium compounds ReOF5 and ReF7. Therefore, oxyfluoride formation for bohrium may help to indicate eka-rhenium properties. Since the oxychlorides are asymmetrical, and they should have increasingly large dipole moments going down the group, they should become less volatile in the order TcO3Cl > ReO3Cl > BhO3Cl: this was experimentally confirmed in 2000 by measuring the enthalpies of adsorption of these three compounds. The values for TcO3Cl and ReO3Cl are −51 kJ/mol and −61 kJ/mol respectively; the experimental value for BhO3Cl is −77.8 kJ/mol, very close to the theoretically expected value of −78.5 kJ/mol.
Physical and atomic
Bohrium is expected to be a solid under normal conditions and assume a hexagonal close-packed crystal structure (c/a = 1.62), similar to its lighter congener rhenium. Early predictions by Fricke estimated its density at 37.1 g/cm3, but newer calculations predict a somewhat lower value of 26–27 g/cm3.
The atomic radius of bohrium is expected to be around 128 pm. Due to the relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Bh+ ion is predicted to have an electron configuration of [Rn] 5f14 6d4 7s2, giving up a 6d electron instead of a 7s electron, which is the opposite of the behavior of its lighter homologues manganese and technetium. Rhenium, on the other hand, follows its heavier congener bohrium in giving up a 5d electron before a 6s electron, as relativistic effects have become significant by the sixth period, where they cause among other things the yellow color of gold and the low melting point of mercury. The Bh2+ ion is expected to have an electron configuration of [Rn] 5f14 6d3 7s2; in contrast, the Re2+ ion is expected to have a [Xe] 4f14 5d5 configuration, this time analogous to manganese and technetium. The ionic radius of hexacoordinate heptavalent bohrium is expected to be 58 pm (heptavalent manganese, technetium, and rhenium having values of 46, 57, and 53 pm respectively). Pentavalent bohrium should have a larger ionic radius of 83 pm.
Experimental chemistry
In 1995, the first reported attempt to isolate the element was unsuccessful, prompting new theoretical studies into how best to investigate bohrium (using its lighter homologs technetium and rhenium for comparison) and how to remove unwanted contaminating elements such as the trivalent actinides, the group 5 elements, and polonium.
In 2000, it was confirmed that although relativistic effects are important, bohrium behaves like a typical group 7 element. A team at the Paul Scherrer Institute (PSI) conducted a chemical reaction using six atoms of 267Bh produced in the reaction between 249Bk and 22Ne ions. The resulting atoms were thermalised and reacted with a HCl/O2 mixture to form a volatile oxychloride. The reaction also produced isotopes of its lighter homologues, technetium (as 108Tc) and rhenium (as 169Re). The isothermal adsorption curves were measured and gave strong evidence for the formation of a volatile oxychloride with properties similar to that of rhenium oxychloride. This placed bohrium as a typical member of group 7. The adsorption enthalpies of the oxychlorides of technetium, rhenium, and bohrium were measured in this experiment, agreeing very well with the theoretical predictions and implying a sequence of decreasing oxychloride volatility down group 7 of TcO3Cl > ReO3Cl > BhO3Cl.
2 Bh + 3 O2 + 2 HCl → 2 BhO3Cl + H2
The longer-lived heavy isotopes of bohrium, produced as the daughters of heavier elements, offer advantages for future radiochemical experiments. Although the heavy isotope 274Bh requires a rare and highly radioactive berkelium target for its production, the isotopes 272Bh, 271Bh, and 270Bh can be readily produced as daughters of more easily produced moscovium and nihonium isotopes.
Notes
References
Bibliography
External links
Bohrium at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Transition metals
Synthetic elements
Chemical elements with hexagonal close-packed structure | Bohrium | [
"Physics",
"Chemistry"
] | 2,764 | [
"Matter",
"Chemical elements",
"Synthetic materials",
"Synthetic elements",
"Atoms",
"Radioactivity"
] |
4,196 | https://en.wikipedia.org/wiki/Barnard%27s%20Star | Barnard's Star is a small red dwarf star in the constellation of Ophiuchus. At a distance of from Earth, it is the fourth-nearest-known individual star to the Sun after the three components of the Alpha Centauri system, and is the closest star in the northern celestial hemisphere. Its stellar mass is about 16% of the Sun's, and it has 19% of the Sun's diameter. Despite its proximity, the star has a dim apparent visual magnitude of +9.5 and is invisible to the unaided eye; it is much brighter in the infrared than in visible light.
The star is named after Edward Emerson Barnard, an American astronomer who in 1916 measured its proper motion as 10.3 arcseconds per year relative to the Sun, the highest known for any star. The star had previously appeared on Harvard University photographic plates in 1888 and 1890.
Barnard's Star is among the most studied red dwarfs because of its proximity and favorable location for observation near the celestial equator. Historically, research on Barnard's Star has focused on measuring its stellar characteristics, its astrometry, and also refining the limits of possible extrasolar planets. Although Barnard's Star is ancient, it still experiences stellar flare events, one being observed in 1998.
Barnard's Star hosts at least one planet, Barnard's Star b, a close-orbiting sub-Earth discovered in 2024, with additional candidates suspected. Previously, it was subject to multiple claims of planets that were disproven.
Naming
In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Barnard's Star for this star on 1 February 2017 and it is now included in the List of IAU-approved Star Names.
Description
Barnard's Star is a red dwarf of the dim spectral type M4 and is too faint to see without a telescope; its apparent magnitude is 9.5.
At 7–12 billion years of age, Barnard's Star is considerably older than the Sun, which is 4.5 billion years old, and it might be among the oldest stars in the Milky Way galaxy. Barnard's Star has lost a great deal of rotational energy; the periodic slight changes in its brightness indicate that it rotates once in 130 days (the Sun rotates in 25). Given its age, Barnard's Star was long assumed to be quiescent in terms of stellar activity. In 1998, astronomers observed an intense stellar flare, showing that Barnard's Star is a flare star. Barnard's Star has the variable star designation V2500 Ophiuchi. In 2003, Barnard's Star presented the first detectable change in the radial velocity of a star caused by its motion. Further variability in the radial velocity of Barnard's Star was attributed to its stellar activity.
The proper motion of Barnard's Star corresponds to a relative lateral speed of 90 km/s. The 10.3 arcseconds it travels in a year amount to a quarter of a degree in a human lifetime, roughly half the angular diameter of the full Moon.
The radial velocity of Barnard's Star is , as measured from the blueshift due to its motion toward the Sun. Combined with its proper motion and distance, this gives a "space velocity" (actual speed relative to the Sun) of . Barnard's Star will make its closest approach to the Sun around 11,800 CE, when it will approach to within about 3.75 light-years.
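The arithmetic behind these kinematic figures can be sketched as follows. This is only a minimal illustration, not a reproduction of the published analysis: the proper motion (10.3 arcseconds per year) is quoted above, while the distance (~1.83 pc) and the radial velocity (about −110 km/s, toward the Sun) are assumed round values, since the exact figures are not reproduced in this text.

```python
import math

# Illustrative sketch of how tangential and space velocities follow from
# proper motion, distance and radial velocity. Values marked "assumed" are
# stand-ins, not figures taken from this article.
mu_arcsec_per_yr = 10.3    # proper motion, quoted above
distance_pc = 1.83         # assumed approximate distance in parsecs
v_radial = -110.0          # km/s, negative = approaching the Sun (assumed)

# Tangential (lateral) speed: v_t = 4.74 * mu * d, with mu in arcsec/yr and d in pc.
v_tangential = 4.74 * mu_arcsec_per_yr * distance_pc   # ~90 km/s, matching the text

# The space velocity is the quadrature sum of the radial and tangential components.
v_space = math.hypot(v_radial, v_tangential)
print(f"tangential ~ {v_tangential:.0f} km/s, space velocity ~ {v_space:.0f} km/s")
```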
Proxima Centauri is currently the closest star to the Sun, 4.24 light-years away. However, despite Barnard's Star's even closer pass to the Sun in 11,800 CE, it will still not be the nearest star at that time, since Proxima Centauri will by then have moved even closer to the Sun. At its closest pass, Barnard's Star will still be too dim to be seen with the naked eye: its apparent magnitude will have increased by only one magnitude, to about 8.5, still 2.5 magnitudes short of naked-eye visibility.
Barnard's Star has a mass of about 0.16 solar masses (), and a radius about 0.2 times that of the Sun. Thus, although Barnard's Star has roughly 150 times the mass of Jupiter (), its radius is only roughly 2 times larger, due to its much higher density. Its effective temperature is about 3,220 kelvin, and it has a luminosity of only 0.0034 solar luminosities. Barnard's Star is so faint that if it were at the same distance from Earth as the Sun is, it would appear only 100 times brighter than a full moon, comparable to the brightness of the Sun at 80 astronomical units.
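The comparison above follows from the inverse-square law for apparent brightness. The sketch below checks it using assumed standard reference magnitudes for the Sun (about −26.7 at 1 AU) and the full Moon (about −12.7), which are not taken from this article.

```python
import math

# Rough consistency check of the brightness comparison, using assumed
# reference magnitudes (Sun at 1 AU ~ -26.7, full Moon ~ -12.7).
m_sun_1au = -26.7
m_full_moon = -12.7

# Moving the Sun from 1 AU out to 80 AU dims it by a factor of 80**2.
m_sun_80au = m_sun_1au + 2.5 * math.log10(80**2)        # ~ -17.2

# "100 times brighter than a full moon" means 5 magnitudes brighter.
m_100x_moon = m_full_moon - 2.5 * math.log10(100)       # ~ -17.7

# The two estimates agree to within about half a magnitude.
print(round(m_sun_80au, 1), round(m_100x_moon, 1))
```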
Barnard's Star has 10–32% of the solar metallicity. Metallicity is the proportion of stellar mass made up of elements heavier than helium and helps classify stars relative to the galactic population. Barnard's Star seems to be typical of the old, red dwarf population II stars, yet these are also generally metal-poor halo stars. While sub-solar, Barnard's Star's metallicity is higher than that of a halo star and is in keeping with the low end of the metal-rich disk star range; this, plus its high space motion, have led to the designation "intermediate population II star", between a halo and disk star. However, some recently published scientific papers have given much higher estimates for the metallicity of the star, very close to the Sun's level, between 75 and 125% of the solar metallicity.
Planetary system
In August 2024, by using data from ESPRESSO spectrograph of the Very Large Telescope, the existence of an exoplanet with a minimum mass of and orbital period of 3.15 days was confirmed. This constituted the first convincing evidence for a planet orbiting Barnard's Star. Additionally, three other candidate low-mass planets were proposed in this study. All of these planets orbit closer to the star than the habitable zone. The confirmed planet is designated Barnard's Star b (or Barnard b), a re-use of the designation originally used for the refuted super-Earth candidate.
Previous planetary claims
Barnard's Star has been subject to multiple claims of planets that were later disproven. From the early 1960s to the early 1970s, Peter van de Kamp argued that planets orbited Barnard's Star. His specific claims of large gas giants were refuted in the mid-1970s after much debate. In November 2018, a candidate super-Earth planetary companion was reported to orbit Barnard's Star. It was believed to have a minimum mass of and orbit at . However, work presented in July 2021 refuted the existence of this planet.
Astrometric planetary claims
For a decade from 1963 to about 1973, a substantial number of astronomers accepted a claim by Peter van de Kamp that he had detected, by using astrometry, a perturbation in the proper motion of Barnard's Star consistent with its having one or more planets comparable in mass with Jupiter. Van de Kamp had been observing the star from 1938, attempting, with colleagues at the Sproul Observatory at Swarthmore College, to find minuscule variations of one micrometre in its position on photographic plates consistent with orbital perturbations that would indicate a planetary companion; this involved as many as ten people averaging their results in looking at plates, to avoid systematic individual errors.
Van de Kamp's initial suggestion was a planet having about at a distance of 4.4AU in a slightly eccentric orbit, and these measurements were apparently refined in a 1969 paper. Later that year, Van de Kamp suggested that there were two planets of 1.1 and .
Other astronomers subsequently repeated Van de Kamp's measurements, and two papers in 1973 undermined the claim of a planet or planets. George Gatewood and Heinrich Eichhorn, at a different observatory and using newer plate measuring techniques, failed to verify the planetary companion. Another paper published by John L. Hershey four months earlier, also using the Swarthmore observatory, found that changes in the astrometric field of various stars correlated to the timing of adjustments and modifications that had been carried out on the refractor telescope's objective lens; the claimed planet was attributed to an artifact of maintenance and upgrade work. The affair has been discussed as part of a broader scientific review.
Van de Kamp never acknowledged any error and published a further claim of two planets' existence as late as 1982; he died in 1995. Wulff Heintz, Van de Kamp's successor at Swarthmore and an expert on double stars, questioned his findings and began publishing criticisms from 1976 onwards. The two men were reported to have become estranged because of this.
Refuted 2018 planetary claim
In November 2018, an international team of astronomers announced the detection by radial velocity of a candidate super-Earth orbiting in relatively close proximity to Barnard's Star. Led by Ignasi Ribas of Spain, their work, conducted over two decades of observation, provided strong evidence of the planet's existence. However, the existence of the planet was refuted in 2021, when the radial velocity signal was found to originate from long-term activity on the star itself, related to its rotation. Further studies in the following years confirmed this result.
Dubbed Barnard's Star b, the planet was thought to be near the stellar system's snow line, which is an ideal spot for the icy accretion of proto-planetary material. It was thought to orbit at 0.4 AU every 233 days and had a proposed minimum mass of . The planet would have most likely been frigid, with an estimated surface temperature of about , and lie outside Barnard's Star's presumed habitable zone. Direct imaging of the planet and its tell-tale light signature would have been possible in the decade after its discovery. Further faint and unaccounted-for perturbations in the system suggested there may be a second planetary companion even farther out.
Refining planetary boundaries
For the more than four decades between van de Kamp's rejected claim and the eventual announcement of a planet candidate, Barnard's Star was carefully studied and the mass and orbital boundaries for possible planets were slowly tightened. M dwarfs such as Barnard's Star are more easily studied than larger stars in this regard because their lower masses render perturbations more obvious.
Null results for planetary companions continued throughout the 1980s and 1990s, including interferometric work with the Hubble Space Telescope in 1999. Gatewood was able to show in 1995 that planets with were impossible around Barnard's Star, in a paper which helped refine the negative certainty regarding planetary objects in general. In 1999, the Hubble work further excluded planetary companions of with an orbital period of less than 1,000 days (Jupiter's orbital period is 4,332 days), while Kuerster determined in 2003 that within the habitable zone around Barnard's Star, planets are not possible with an "M sin i" value greater than 7.5 times the mass of the Earth (), or with a mass greater than 3.1 times the mass of Neptune (much lower than van de Kamp's smallest suggested value).
In 2013, a research paper was published that further refined planet mass boundaries for the star. Using radial velocity measurements, taken over a period of 25 years, from the Lick and Keck Observatories and applying Monte Carlo analysis for both circular and eccentric orbits, upper masses for planets out to 1,000-day orbits were determined. Planets above two Earth masses in orbits of less than 10 days were excluded, and planets of more than ten Earth masses out to a two-year orbit were also confidently ruled out. It was also discovered that the habitable zone of the star seemed to be devoid of roughly Earth-mass planets or larger, save for face-on orbits.
Even though this research greatly restricted the possible properties of planets around Barnard's Star, it did not rule them out completely, as terrestrial planets were always going to be difficult to detect. NASA's Space Interferometry Mission, which was to begin searching for extrasolar Earth-like planets, was reported to have chosen Barnard's Star as an early search target; however, the mission was shut down in 2010. ESA's similar Darwin interferometry mission had the same goal, but was stripped of funding in 2007.
The analysis of radial velocities that eventually led to the announcement of a candidate super-Earth orbiting Barnard's Star was also used to set more precise upper mass limits for possible planets, up to and within the habitable zone: a maximum of up to the inner edge and on the outer edge of the optimistic habitable zone, corresponding to orbital periods of up to 10 and 40 days respectively. Therefore, it appears that Barnard's Star indeed does not host Earth-mass planets or larger, in hot and temperate orbits, unlike other M-dwarf stars that commonly have these types of planets in close-in orbits.
Stellar flares
1998
On 17 July 1998, a stellar flare on Barnard's Star was detected from changes in its spectral emissions during an unrelated search for variations in the proper motion. Four years passed before the flare was fully analyzed, at which point it was suggested that the flare's temperature was 8,000 K, more than twice the normal temperature of the star. Given the essentially random nature of flares, Diane Paulson, one of the authors of that study, noted that "the star would be fantastic for amateurs to observe".
The flare was surprising because intense stellar activity is not expected in stars of such age. Flares are not completely understood, but are believed to be caused by strong magnetic fields, which suppress plasma convection and lead to sudden outbursts: strong magnetic fields occur in rapidly rotating stars, while old stars tend to rotate slowly. For Barnard's Star to undergo an event of such magnitude is thus presumed to be a rarity. Research on the star's periodicity, or changes in stellar activity over a given timescale, also suggest it ought to be quiescent; 1998 research showed weak evidence for periodic variation in the star's brightness, noting only one possible starspot over 130 days.
Stellar activity of this sort has created interest in using Barnard's Star as a proxy to understand similar stars. It is hoped that photometric studies of its X-ray and UV emissions will shed light on the large population of old M dwarfs in the galaxy. Such research has astrobiological implications: given that the habitable zones of M dwarfs are close to the star, any planet located therein would be strongly affected by solar flares, stellar winds, and plasma ejection events.
2019
In 2019, two additional ultraviolet stellar flares were detected, each with far-ultraviolet energy of 3×10²² joules, together with one X-ray stellar flare with energy 1.6×10²² joules. The flare rate observed to date is enough to cause loss of 87 Earth atmospheres per billion years through thermal processes and ≈3 Earth atmospheres per billion years through ion loss processes on Barnard's Star b.
Environment
Barnard's Star shares much the same neighborhood as the Sun. The neighbors of Barnard's Star are generally of red dwarf size, the smallest and most common star type. Its closest neighbor is currently the red dwarf Ross 154, at a distance of 1.66 parsecs (5.41 light-years). The Sun (5.98 light-years) and Alpha Centauri (6.47 light-years) are, respectively, the next closest systems. From Barnard's Star, the Sun would appear on the diametrically opposite side of the sky at coordinates RA=, Dec=, in the westernmost part of the constellation Monoceros. The absolute magnitude of the Sun is 4.83, and at a distance of 1.834 parsecs, it would be a first-magnitude star, as Pollux is from the Earth.
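The first-magnitude figure follows from the distance-modulus relation m = M + 5 log10(d / 10 pc). A minimal check using only the numbers quoted above:

```python
import math

# Apparent magnitude of the Sun as seen from Barnard's Star, from the
# distance modulus, using the absolute magnitude (4.83) and the distance
# (1.834 parsecs) given above.
M_sun = 4.83
d_pc = 1.834

m_apparent = M_sun + 5 * math.log10(d_pc / 10.0)
print(f"Apparent magnitude: {m_apparent:.2f}")   # ~1.15, a first-magnitude star
```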
Proposed exploration
Project Daedalus
Barnard's Star was studied as part of Project Daedalus. Undertaken between 1973 and 1978, the study suggested that rapid, uncrewed travel to another star system was possible with existing or near-future technology. Barnard's Star was chosen as a target partly because it was believed to have planets.
The theoretical model suggested that a nuclear pulse rocket employing nuclear fusion (specifically, electron bombardment of deuterium and helium-3) and accelerating for four years could achieve a velocity of 12% of the speed of light. The star could then be reached in 50 years, within a human lifetime. Along with detailed investigation of the star and any companions, the interstellar medium would be examined and baseline astrometric readings performed.
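The 50-year figure is simple arithmetic once the cruise speed is fixed. In this back-of-the-envelope sketch, the ~5.96 light-year distance is an assumed round value for Barnard's Star (the exact figure is not reproduced in this text); the 12% of the speed of light comes from the study.

```python
# Back-of-the-envelope check of the Project Daedalus travel time.
distance_ly = 5.96        # assumed approximate distance in light-years
cruise_speed_c = 0.12     # fraction of the speed of light, from the study

cruise_years = distance_ly / cruise_speed_c
print(f"Cruise time at 0.12 c: about {cruise_years:.0f} years")   # ~50 years
```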
The initial Project Daedalus model sparked further theoretical research. In 1980, Robert Freitas suggested a more ambitious plan: a self-replicating spacecraft intended to search for and make contact with extraterrestrial life. Built and launched in Jupiter's orbit, it would reach Barnard's Star in 47 years under parameters similar to those of the original Project Daedalus. Once at the star, it would begin automated self-replication, constructing a factory, initially to manufacture exploratory probes and eventually to create a copy of the original spacecraft after 1,000 years.
See also
Kepler-42 – a red dwarf nearly identical to Barnard's Star, hosting three sub-Earth-sized planets.
Notes
References
External links
Amateur work showing Barnard's Star movement over time.
Animated image with frames approx. one year apart, beginning in 2007, showing the movement of Barnard's Star.
J17574849+0441405
?
BY Draconis variables
Discoveries by Edward Emerson Barnard
BD+04 3561A
Flare stars
0699
087937
Planetary systems with one confirmed planet
Local Interstellar Cloud
M-type main-sequence stars
Ophiuchi, V2500
Ophiuchus
Stars with proper names | Barnard's Star | [
"Astronomy"
] | 3,729 | [
"Ophiuchus",
"Constellations"
] |
4,199 | https://en.wikipedia.org/wiki/Bayer%20designation | A Bayer designation is a stellar designation in which a specific star is identified by a Greek or Latin letter followed by the genitive form of its parent constellation's Latin name. The original list of Bayer designations contained 1564 stars. The brighter stars were assigned their first systematic names by the German astronomer Johann Bayer in 1603, in his star atlas Uranometria. Bayer catalogued only a few stars too far south to be seen from Germany, but later astronomers (including Nicolas-Louis de Lacaille and Benjamin Apthorp Gould) supplemented Bayer's catalog with entries for southern constellations.
Scheme
Bayer assigned a lowercase Greek letter (alpha (α), beta (β), gamma (γ), etc.) or a Latin letter (A, b, c, etc.) to each star he catalogued, combined with the Latin name of the star's parent constellation in genitive (possessive) form. The constellation name is frequently abbreviated to a standard three-letter form. For example, Aldebaran in the constellation Taurus (the Bull) is designated α Tauri (abbreviated α Tau, pronounced Alpha Tauri), which means "Alpha of the Bull".
Bayer used Greek letters for the brighter stars, but the Greek alphabet has only twenty-four letters, while a single constellation may contain fifty or more stars visible to the naked eye. When the Greek letters ran out, Bayer continued with Latin letters: uppercase A, followed by lowercase b through z (omitting j and v, but including o), for a total of another 24 letters.
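The lettering sequence just described can be made concrete with a short sketch. This is only an illustration of the ordering convention (Greek letters first, then uppercase A, then lowercase b through z omitting j and v); it does not reproduce Bayer's actual assignments, which also depended on magnitude class and other considerations.

```python
# Illustrative sketch of Bayer's lettering order, as described above.
GREEK = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta", "eta", "theta",
         "iota", "kappa", "lambda", "mu", "nu", "xi", "omicron", "pi", "rho",
         "sigma", "tau", "upsilon", "phi", "chi", "psi", "omega"]
# Uppercase A, then lowercase b-z omitting j and v: 24 further labels in total.
LATIN = ["A"] + [c for c in "abcdefghijklmnopqrstuvwxyz" if c not in "ajv"]

def bayer_labels(n):
    """Return the first n labels in Bayer's sequence (Greek, then Latin)."""
    return (GREEK + LATIN)[:n]

print(len(GREEK), len(LATIN))        # 24 24
print(bayer_labels(26)[23:])         # ['omega', 'A', 'b']
```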
Bayer did not label "permanent" stars with uppercase letters (except for A, which he used instead of a to avoid confusion with α). However, a number of stars in southern constellations have uppercase letter designations, like B Centauri and G Scorpii. These letters were assigned by later astronomers, notably Lacaille in his Coelum Australe Stelliferum and Gould in his Uranometria Argentina. Lacaille followed Bayer's use of Greek letters, but this was insufficient for many constellations. He used first the lowercase letters, starting with a, and if needed the uppercase letters, starting with A, thus deviating somewhat from Bayer's practice. Lacaille used the Latin alphabet three times over in the large constellation Argo Navis, once for each of the three areas that are now the constellations of Carina, Puppis and Vela. That was still insufficient for the number of stars, so he also used uppercase Latin letters such as N Velorum and Q Puppis. Lacaille assigned uppercase letters between R and Z in several constellations, but these have either been dropped to allow the assignment of those letters to variable stars or have actually turned out to be variable.
Order by magnitude class
In most constellations, Bayer assigned Greek and Latin letters to stars within a constellation in rough order of apparent brightness, from brightest to dimmest. The order is not necessarily a precise labeling from brightest to dimmest: in Bayer's day stellar brightness could not be measured precisely. Instead, stars were traditionally assigned to one of six magnitude classes (the brightest to first magnitude, the dimmest to sixth), and Bayer typically ordered stars within a constellation by class: all the first-magnitude stars (in some order), followed by all the second-magnitude stars, and so on. Within each magnitude class, Bayer made no attempt to arrange stars by relative brightness. As a result, the brightest star in each class did not always get listed first in Bayer's order—and the brightest star overall did not necessarily get the designation "Alpha". A good example is the constellation Gemini, where Pollux is Beta Geminorum and the slightly dimmer Castor is Alpha Geminorum.
In addition, Bayer did not always follow the magnitude class rule; he sometimes assigned letters to stars according to their location within a constellation, or the order of their rising, or to historical or mythological details. Occasionally the order looks quite arbitrary.
Of the 88 modern constellations, there are at least 30 in which Alpha is not the brightest star, and four of those lack a star labeled "Alpha" altogether. The constellations with no Alpha-designated star include Vela and Puppis—both formerly part of Argo Navis, whose Greek-letter stars were split among three constellations. Canopus, the former α Argus, is now α Carinae in the modern constellation Carina. Norma's Alpha and Beta were reassigned to Scorpius and re-designated N and H Scorpii respectively, leaving Norma with no Alpha. Francis Baily died before designating an Alpha in Leo Minor, so it also has no Alpha. (The star 46 Leonis Minoris would have been the obvious candidate.)
Orion as an example
In Orion, Bayer first designated Betelgeuse and Rigel, the two 1st-magnitude stars (those of magnitude 1.5 or less), as Alpha and Beta from north to south, with Betelgeuse (the shoulder) coming ahead of Rigel (the foot), even though the latter is usually the brighter. (Betelgeuse is a variable star and can at its maximum occasionally outshine Rigel.) Bayer then repeated the procedure for the stars of the 2nd magnitude, labeling them from gamma through zeta in "top-down" (north-to-south) order. Letters as far as Latin p were used for stars of the sixth magnitude.
Bayer's miscellaneous labels
Although Bayer did not use uppercase Latin letters (except A) for "fixed stars", he did use them to label other items shown on his charts, such as neighboring constellations, "temporary stars", miscellaneous astronomical objects, or reference lines like the Tropic of Cancer. In Cygnus, for example, Bayer's fixed stars run through g, and on this chart Bayer employs H through P as miscellaneous labels, mostly for neighboring constellations. Bayer did not intend such labels as catalog designations, but some have survived to refer to astronomical objects: P Cygni for example is still used as a designation for Nova Cyg 1600. Tycho's Star (SN 1572), another "temporary star", appears as B Cassiopeiae. In charts for constellations that did not exhaust the Greek letters, Bayer sometimes used the leftover Greek letters for miscellaneous labels as well.
Revised designations
Ptolemy designated four stars as "border stars", each shared by two constellations: Alpheratz (in Andromeda and Pegasus), Elnath (in Taurus and Auriga), Nu Boötis (Nu1 and Nu2) (in Boötes and Hercules) and Fomalhaut (in Piscis Austrinus and Aquarius). Bayer assigned the first three of these stars a Greek letter from both constellations: , , and . (He catalogued Fomalhaut only once, as Alpha Piscis Austrini.) When the International Astronomical Union (IAU) assigned definite boundaries to the constellations in 1930, it declared that stars and other celestial objects can belong to only one constellation. Consequently, the redundant second designation in each pair above has dropped out of use.
Bayer assigned two stars duplicate names by mistake: (duplicated as ) and (Kappa1 and Kappa2) (duplicated as ). He corrected these in a later atlas, and the duplicate names were no longer used.
Other cases of multiple Bayer designations arose when stars named by Bayer in one constellation were transferred by later astronomers to a different constellation. Bayer's Gamma and Omicron Scorpii, for example, were later reassigned from Scorpius to Libra and given the new names Sigma and Upsilon Librae. (To add to the confusion, the star now known as Omicron Scorpii was not named by Bayer but was assigned the designation o Scorpii (Latin lowercase 'o') by Lacaille—which later astronomers misinterpreted as omicron once Bayer's omicron had been reassigned to Libra.)
A few stars no longer lie (according to the modern constellation boundaries) within the constellation for which they are named. The proper motion of Rho Aquilae, for example, carried it across the boundary into Delphinus in 1992.
A further complication is the use of numeric superscripts to distinguish neighboring stars that Bayer (or a later astronomer) labeled with a common letter. Usually these are double stars (mostly optical doubles rather than true binary stars), but there are some exceptions such as the chain of stars π1, π2, π3, π4, π5 and π6 Orionis. The largest number of stars sharing one Bayer letter, distinguished only by superscript numbers, is found in Psi Aurigae (ψ1, ψ2, ψ3, ψ4, ψ5, ψ6, ψ7, ψ8, ψ9, ψ10), although according to the modern IAU constellation boundaries, ψ10 lies in Lynx.
See also
Flamsteed designation
Gould designation
Lists of constellations
Star catalogue
Stellar designations and names
Table of stars with Bayer designations
Variable star designation
References
Notes
Astronomical catalogues | Bayer designation | [
"Astronomy"
] | 1,916 | [
"Astronomical catalogues",
"Works about astronomy",
"Astronomical objects"
] |
4,200 | https://en.wikipedia.org/wiki/Bo%C3%B6tes | Boötes ( ) is a constellation in the northern sky, located between 0° and +60° declination, and 13 and 16 hours of right ascension on the celestial sphere. The name comes from , which comes from 'herdsman' or 'plowman' (literally, 'ox-driver'; from boûs 'cow').
One of the 48 constellations described by the 2nd-century astronomer Ptolemy, Boötes is now one of the 88 modern constellations. It contains the fourth-brightest star in the night sky, the orange giant Arcturus. Epsilon Boötis, or Izar, is a colourful multiple star popular with amateur astronomers. Boötes is home to many other bright stars, including eight above the fourth magnitude and an additional 21 above the fifth magnitude, making a total of 29 stars easily visible to the naked eye.
History and mythology
In ancient Babylon, the stars of Boötes were known as SHU.PA. They were apparently depicted as the god Enlil, who was the leader of the Babylonian pantheon and special patron of farmers. Boötes may have been represented in ancient Egypt by the animal-foreleg constellation, whose shape resembles an ox's foreleg closely enough that Berio originally proposed it as the "foreleg of ox".
Homer mentions Boötes in the Odyssey as a celestial reference for navigation, describing it as "late-setting" or "slow to set". Exactly whom Boötes is supposed to represent in Greek mythology is not clear. According to one version, he was a son of Demeter, Philomenus, twin brother of Plutus, a plowman who drove the oxen in the constellation Ursa Major. This agrees with the constellation's name. The ancient Greeks saw the asterism now called the "Big Dipper" or "Plough" as a cart with oxen. Some myths say that Boötes invented the plow and was memorialized for his ingenuity as a constellation.
Another myth associated with Boötes by Hyginus is that of Icarius, who was schooled as a grape farmer and winemaker by Dionysus. Icarius made wine so strong that those who drank it appeared poisoned, which caused shepherds to avenge their supposedly poisoned friends by killing Icarius. Maera, Icarius' dog, brought his daughter Erigone to her father's body, whereupon both she and the dog died by suicide. Zeus then chose to honor all three by placing them in the sky as constellations: Icarius as Boötes, Erigone as Virgo, and Maera as Canis Major or Canis Minor.
Following another reading, the constellation is identified with Arcas and also referred to as Arcas and Arcturus, son of Zeus and Callisto. Arcas was brought up by his maternal grandfather Lycaon, whom Zeus visited one day for a meal. To verify that the guest was really the king of the gods, Lycaon killed his grandson and prepared a meal made from his flesh. Zeus noticed and became very angry, transforming Lycaon into a wolf and giving life back to his son. In the meantime, Callisto had been transformed into a she-bear by Zeus's wife Hera, who was angry at Zeus's infidelity. This is corroborated by the Greek name for Boötes, Arctophylax, which means "Bear Watcher".
Callisto, in the form of a bear, was almost killed by her son, who was out hunting. Zeus rescued her, taking her into the sky where she became Ursa Major, "the Great Bear". Arcturus, the name of the constellation's brightest star, comes from the Greek word meaning "guardian of the bear". Sometimes Arcturus is depicted as leading the hunting dogs of nearby Canes Venatici and driving the bears of Ursa Major and Ursa Minor.
Several former constellations were formed from stars now included in Boötes. Quadrans Muralis, the Quadrant, was a constellation created near Beta Boötis from faint stars. It was designated in 1795 by Jérôme Lalande, an astronomer who used a quadrant to perform detailed astronometric measurements. Lalande worked with Nicole-Reine Lepaute and others to predict the 1758 return of Halley's Comet. Quadrans Muralis was formed from the stars of eastern Boötes, western Hercules and Draco. It was originally called Le Mural by Jean Fortin in his 1795 Atlas Céleste; it was not given the name Quadrans Muralis until Johann Bode's 1801 Uranographia. The constellation was quite faint, with its brightest stars reaching the 5th magnitude. Mons Maenalus, representing the Maenalus mountains, was created by Johannes Hevelius in 1687 at the foot of the constellation's figure. The mountain was named for the son of Lycaon, Maenalus. The mountain, one of Diana's hunting grounds, was also holy to Pan.
Non-Western astronomy
The stars of Boötes were incorporated into many different Chinese constellations. Arcturus was part of the most prominent of these, variously designated as the celestial king's throne (Tian Wang) or the Blue Dragon's horn (Daijiao); the name Daijiao, meaning "great horn", is more common. Arcturus was given such importance in Chinese celestial mythology because of its status marking the beginning of the lunar calendar, as well as its status as the brightest star in the northern night sky.
Two constellations flanked Daijiao: Yousheti to the right and Zuosheti to the left; they represented companions that orchestrated the seasons. Zuosheti was formed from modern Zeta, Omicron and Pi Boötis, while Yousheti was formed from modern Eta, Tau and Upsilon Boötis. Dixi, the Emperor's ceremonial banquet mat, was north of Arcturus, consisting of the stars 12, 11 and 9 Boötis. Another northern constellation was Qigong, the Seven Dukes, which mostly straddled the Boötes-Hercules border. It included either Delta Boötis or Beta Boötis as its terminus.
The other Chinese constellations made up of the stars of Boötes existed in the modern constellation's north; they are all representations of weapons. Tianqiang, the spear, was formed from Iota, Kappa and Theta Boötis; Genghe, variously representing a lance or shield, was formed from Epsilon, Rho and Sigma Boötis.
There were also two weapons made up of a singular star. Xuange, the halberd, was represented by Lambda Boötis, and Zhaoyao, either the sword or the spear, was represented by Gamma Boötis.
Two Chinese constellations have an uncertain placement in Boötes. Kangchi, the lake, was placed south of Arcturus, though its specific location is disputed. It may have been placed entirely in Boötes, on either side of the Boötes-Virgo border, or on either side of the Virgo-Libra border. The constellation Zhouding, a bronze tripod-mounted container used for food, was sometimes cited as the stars 1, 2 and 6 Boötis. However, it has also been associated with three stars in Coma Berenices.
Boötes is also known to Native American cultures. In the Yup'ik language, Boötes is Taluyaq, literally "fish trap," and the funnel-shaped part of the fish trap is known as Ilulirat.
Characteristics
Boötes is a constellation bordered by Virgo to the south, Coma Berenices and Canes Venatici to the west, Ursa Major to the northwest, Draco to the northeast, and Hercules, Corona Borealis and Serpens Caput to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Boo". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 16 segments. In the equatorial coordinate system, the declination coordinates of these borders stretch from +7.36° to +55.1°. Covering 907 square degrees, Boötes culminates at midnight around 2 May and ranks 13th in area.
Colloquially, its pattern of stars has been likened to a kite or an ice cream cone. However, depictions of Boötes have varied historically. Aratus described him circling the north pole, herding the two bears. Later ancient Greek depictions, described by Ptolemy, have him holding the reins of his hunting dogs (Canes Venatici) in his left hand, with a spear, club, or staff in his right hand. After Hevelius introduced Mons Maenalus in 1687, Boötes was often depicted standing on the Peloponnesian mountain. By 1801, when Johann Bode published his Uranographia, Boötes had acquired a sickle, which was also held in his left hand.
The placement of Arcturus has also been mutable through the centuries. Traditionally, Arcturus lay between his thighs, as Ptolemy depicted him. However, Germanicus Caesar deviated from this tradition by placing Arcturus "where his garment is fastened by a knot".
Features
Stars
In his Uranometria, Johann Bayer used the Greek letters alpha through omega and then A to k to label what he saw as the most prominent 35 stars in the constellation, with subsequent astronomers splitting Kappa, Mu, Nu and Pi into two stars each. Nu is also the same star as Psi Herculis. John Flamsteed numbered 54 stars for the constellation.
Located 36.7 light-years from Earth, Arcturus, or Alpha Boötis, is the brightest star in Boötes and the fourth-brightest star in the sky at an apparent magnitude of −0.05; it is also the brightest star north of the celestial equator, just edging out Vega and Capella. Its name comes from the Greek for "bear-keeper". An orange giant of spectral class K1.5III, Arcturus is an ageing star that has exhausted its core supply of hydrogen and has cooled and expanded to a diameter of 27 solar diameters, equivalent to approximately 32 million kilometers. Though its mass is approximately one solar mass, Arcturus shines with 133 times the luminosity of the Sun.
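The quoted apparent magnitude and distance are related to the star's intrinsic brightness through the distance modulus. The following sketch illustrates the arithmetic; the solar absolute magnitude used is a standard value, and the visual-band estimate it produces is lower than the 133 L☉ quoted above because it ignores bolometric corrections for light emitted outside the visual band.

```python
import math

# Values quoted above for Arcturus (Alpha Boötis)
apparent_mag = -0.05
distance_pc = 36.7 / 3.2616   # light-years converted to parsecs

# Distance modulus: m - M = 5 * log10(d / 10 pc)
absolute_mag = apparent_mag - 5 * math.log10(distance_pc / 10.0)

# Visual luminosity relative to the Sun (taking the Sun's M_V as about +4.83),
# ignoring the bolometric correction, so this underestimates the total output
luminosity_ratio = 10 ** ((4.83 - absolute_mag) / 2.5)

print(f"Absolute magnitude ~ {absolute_mag:.2f}")
print(f"Visual luminosity ~ {luminosity_ratio:.0f} times the Sun")
```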
Bayer located Arcturus above the Herdman's left knee in his Uranometria. Nearby Eta Boötis, or Muphrid, is the uppermost star denoting the left leg. It is a 2.68-magnitude star 37 light-years distant with a spectral class of G0IV, indicating it has just exhausted its core hydrogen and is beginning to expand and cool. It is 9 times as luminous as the Sun and has 2.7 times its diameter. Analysis of its spectrum reveals that it is a spectroscopic binary. Muphrid and Arcturus lie only 3.3 light-years away from each other. Viewed from Arcturus, Muphrid would have a visual magnitude of −2½, while Arcturus would be around visual magnitude −4½ when seen from Muphrid.
Marking the herdsman's head is Beta Boötis, or Nekkar, a yellow giant of magnitude 3.5 and spectral type G8IIIa. Like Arcturus, it has expanded and cooled off the main sequence—likely to have lived most of its stellar life as a blue-white B-type main sequence star. Its common name comes from the Arabic phrase for "ox-driver". It is 219 light-years away and has a luminosity of .
Located 86 light-years distant, Gamma Boötis, or Seginus, is a white giant star of spectral class A7III, with a luminosity 34 times and diameter 3.5 times that of the Sun. It is a Delta Scuti variable, ranging between magnitudes 3.02 and 3.07 every 7 hours. These stars are short period (six hours at most) pulsating stars that have been used as standard candles and as subjects to study asteroseismology.
Delta Boötis is a wide double star with a primary of magnitude 3.5 and a secondary of magnitude 7.8. The primary is a yellow giant that has cooled and expanded to 10.4 times the diameter of the Sun. Of spectral class G8IV, it is around 121 light-years away, while the secondary is a yellow main sequence star of spectral type G0V. The two are thought to take 120,000 years to orbit each other.
Mu Boötis, known as Alkalurops, is a triple star popular with amateur astronomers. It has an overall magnitude of 4.3 and is 121 light-years away. Its name is from the Arabic phrase for "club" or "staff". The primary appears to be of magnitude 4.3 and is blue-white. The secondary appears to be of magnitude 6.5, but is actually a close double star itself with a primary of magnitude 7.0 and a secondary of magnitude 7.6. The secondary and tertiary stars have an orbital period of 260 years. The primary has an absolute magnitude of 2.6 and is of spectral class F0. The secondary and tertiary stars are separated by 2 arcseconds; the primary and secondary are separated by 109.1 arcseconds at an angle of 171 degrees.
Nu Boötis is an optical double star. The primary is an orange giant of magnitude 5.0 and the secondary is a white star of magnitude 5.0. The primary is 870 light-years away and the secondary is 430 light-years.
Epsilon Boötis, also known as Izar or Pulcherrima, is a close triple star popular with amateur astronomers and the most prominent binary star in Boötes. The primary is a yellow- or orange-hued magnitude 2.5 giant star, the secondary is a magnitude 4.6 blue-hued main-sequence star, and the tertiary is a magnitude 12.0 star. The system is 210 light-years away. The name "Izar" comes from the Arabic word for "girdle" or "loincloth", referring to its location in the constellation. The name "Pulcherrima" comes from the Latin phrase for "most beautiful", referring to its contrasting colors in a telescope. The primary and secondary stars are separated by 2.9 arcseconds at an angle of 341 degrees; the primary's spectral class is K0 and it has a luminosity of . To the naked eye, Izar has a magnitude of 2.37.
Nearby Rho and Sigma Boötis denote the herdsman's waist. Rho is an orange giant of spectral type K3III located around 160 light-years from Earth. It is ever so slightly variable, wavering by 0.003 of a magnitude from its average of 3.57. Sigma, a yellow-white main-sequence star of spectral type F3V, is suspected of varying in brightness from 4.45 to 4.49. It is around 52 light-years distant.
Traditionally known as Aulād al Dhiʼbah (أولاد الضباع – aulād al dhiʼb), "the Whelps of the Hyenas", Theta, Iota, Kappa and Lambda Boötis (or Xuange) are a small group of stars in the far north of the constellation. The magnitude 4.05 Theta Boötis has a spectral type of F7 and an absolute magnitude of 3.8. Iota Boötis is a triple star with a primary of magnitude 4.8 and spectral class of A7, a secondary of magnitude 7.5, and a tertiary of magnitude 12.6. The primary is 97 light-years away. The primary and secondary stars are separated by 38.5 arcseconds, at an angle of 33 degrees. The primary and tertiary stars are separated by 86.7 arcseconds at an angle of 194 degrees. Both the primary and tertiary appear white in a telescope, but the secondary appears yellow-hued.
Kappa Boötis is another wide double star. The primary is 155 light-years away and has a magnitude of 4.5. The secondary is 196 light-years away and has a magnitude of 6.6. The two components are separated by 13.4 arcseconds, at an angle of 236 degrees. The primary, with spectral class A7, appears white and the secondary appears bluish.
An apparent magnitude 4.18 type A0p star, Lambda Boötis is the prototype of a class of chemically peculiar stars, only some of which pulsate as Delta Scuti-type stars. The distinction between the Lambda Boötis stars as a class of stars with peculiar spectra, and the Delta Scuti stars whose class describes pulsation in low-overtone pressure modes, is an important one. While many Lambda Boötis stars pulsate and are Delta Scuti stars, not many Delta Scuti stars have Lambda Boötis peculiarities, since the Lambda Boötis stars are a much rarer class whose members can be found both inside and outside the Delta Scuti instability strip. Lambda Boötis stars are dwarf stars that can be either spectral class A or F. Like BL Boötis-type stars they are metal-poor. Scientists have had difficulty explaining the characteristics of Lambda Boötis stars, partly because only around 60 confirmed members exist, but also due to heterogeneity in the literature. Lambda has an absolute magnitude of 1.8.
There are two dimmer F-type stars, magnitude 4.83 12 Boötis, class F8; and magnitude 4.93 45 Boötis, class F5. Xi Boötis is a G8 yellow dwarf of magnitude 4.55 with an absolute magnitude of 5.5. Two dimmer G-type stars are magnitude 4.86 31 Boötis, class G8, and magnitude 4.76 44 Boötis, class G0.
Of apparent magnitude 4.06, Upsilon Boötis has a spectral class of K5 and an absolute magnitude of −0.3. Dimmer than Upsilon Boötis is magnitude 4.54 Phi Boötis, with a spectral class of K2 and an absolute magnitude of −0.1. Just slightly dimmer than Phi at magnitude 4.60 is O Boötis, which, like Izar, has a spectral class of K0. O Boötis has an absolute magnitude of 0.2. The other four dim stars are magnitude 4.91 6 Boötis, class K4; magnitude 4.86 20 Boötis, class K3; magnitude 4.81 Omega Boötis, class K4; and magnitude 4.83 A Boötis, class K1.
There is one bright B-class star in Boötes: magnitude 4.93 Pi1 Boötis, also called Alazal. It has a spectral class of B9 and is 40 parsecs from Earth. There is also one M-type star, magnitude 4.81 34 Boötis. It is of class gM0.
Multiple stars
Besides Pulcherrima and Alkalurops, there are several other binary stars in Boötes:
Xi Boötis is a quadruple star popular with amateur astronomers. The primary is a yellow star of magnitude 4.7 and the secondary is an orange star of magnitude 6.8. The system is 22 light-years away and has an orbital period of 150 years. The primary and secondary have a separation of 6.7 arcseconds at an angle of 319 degrees. The tertiary is a magnitude 12.6 star (though it may be observed to be brighter) and the quaternary is a magnitude 13.6 star.
Pi Boötis is a close triple star. The primary is a blue-white star of magnitude 4.9, the secondary is a blue-white star of magnitude 5.8, and the tertiary is a star of magnitude 10.4. The primary and secondary components are separated by 5.6 arcseconds at an angle of 108 degrees; the primary and tertiary components are separated by 128 arcseconds at an angle of 128 degrees.
Zeta Boötis is a triple star that consists of a physical binary pair with an optical companion. Lying 205 light-years away from Earth, the physical pair has a period of 123.3 years and consists of a magnitude 4.5 and a magnitude 4.6 star. The two components are separated by 1.0 arcseconds at an angle of 303 degrees. The optical companion is of magnitude 10.9, separated by 99.3 arcseconds at an angle of 259 degrees. 44 Boötis is an eclipsing variable star. The primary is of variable magnitude and the secondary is of magnitude 6.2; they have an orbital period of 225 years. The components are separated by 1.0 arcsecond at an angle of 40 degrees.
44 Boötis (i Boötis) is a double variable star 42 light-years away. It has an overall magnitude of 4.8 and appears yellow to the naked eye. The primary is of magnitude 5.3 and the secondary is of magnitude 6.1; their orbital period is 220 years. The secondary is itself an eclipsing variable star with a range of 0.6 magnitudes; its orbital period is 6.4 hours. It is a W Ursae Majoris variable that ranges in magnitude from a minimum of 7.1 to a maximum of 6.5 every 0.27 days. Both stars are G-type stars. Another eclipsing binary star is ZZ Boötis, which has two F2-type components of almost equal mass, and ranges in magnitude from a minimum of 6.79 to a maximum of 7.44 over a period of 5.0 days.
Variable stars
Two of the brighter Mira-type variable stars in the constellation are R and S Boötis. Both are red giants that range greatly in magnitude—from 6.2 to 13.1 over 223.4 days, and 7.8 to 13.8 over a period of 270.7 days, respectively. Also red giants, V and W Boötis are semi-regular variable stars that range in magnitude from 7.0 to 12.0 over a period of 258 days, and magnitude 4.7 to 5.4 over 450 days, respectively.
BL Boötis is the prototype of its class of pulsating variable stars, the anomalous Cepheids. These stars are somewhat similar to Cepheid variables, but they do not have the same relationship between their period and luminosity. Their periods are similar to those of RRAB variables; however, they are far brighter than these stars. BL Boötis is a member of the cluster NGC 5466. Anomalous Cepheids are metal poor and have masses not much larger than the Sun's on average. BL Boötis type stars are a subtype of RR Lyrae variables.
T Boötis was a nova observed in April 1860 at a magnitude of 9.7. It has never been observed since, but that does not preclude the possibility of it being a highly irregular variable star or a recurrent nova.
Stars with planetary systems
Extrasolar planets have been discovered encircling ten stars in Boötes as of 2012. Tau Boötis is orbited by a large planet, discovered in 1999. The host star itself is a magnitude 4.5 star of type F7V, 15.6 parsecs from Earth, with a radius of 1.331 solar radii; a companion, GJ527B, orbits at a distance of 240 AU. Tau Boötis b, the sole planet discovered in the system, orbits at a distance of 0.046 AU every 3.31 days. Discovered through radial velocity measurements, it has a mass of 5.95 Jupiter masses, making it a hot Jupiter. The host star and planet are tidally locked, meaning that the planet's orbit and the star's particularly high rotation are synchronized. Furthermore, a slight variability in the host star's light may be caused by magnetic interactions with the planet. Carbon monoxide is present in the planet's atmosphere. Tau Boötis b does not transit its star; rather, its orbit is inclined 46 degrees.
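As a rough consistency check on the quoted orbital distance and period, Kepler's third law can be applied. The host-star mass of about 1.3 solar masses used below is an assumption, not a figure stated in this article, so the result is only approximate.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def orbital_period_days(a_au, star_mass_solar):
    """Keplerian period of a low-mass planet: P = 2*pi*sqrt(a^3 / (G*M))."""
    a = a_au * AU
    m = star_mass_solar * M_SUN
    return 2 * math.pi * math.sqrt(a**3 / (G * m)) / 86400.0

# 0.046 AU from the text; ~1.3 solar masses is an assumed host-star mass
print(f"{orbital_period_days(0.046, 1.3):.2f} days")  # close to the quoted 3.31 days
```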
Like Tau Boötis b, HAT-P-4b is also a hot Jupiter. It is noted for orbiting a particularly metal-rich host star and for its low density. Discovered in 2007, HAT-P-4b orbits every 3.05 days at a distance of 0.04 AU. HAT-P-4, the host star, is an F-type star of magnitude 11.2, 310 parsecs from Earth, and is larger than the Sun.
Boötes is also home to multiple-planet systems. HD 128311 is the host star for a two-planet system, consisting of HD 128311 b and HD 128311 c, discovered in 2002 and 2005, respectively. HD 128311 b is the smaller of the two planets; it was discovered through radial velocity observations. It orbits at almost the same distance as Earth, at 1.099 AU; however, its orbital period is significantly longer at 448.6 days.
The larger of the two, HD 128311 c, was discovered in the same manner. It orbits every 919 days inclined at 50°, and is 1.76 AU from the host star. The host star, HD 128311, is a K0V-type star located 16.6 parsecs from Earth. It is smaller than the Sun and lies below the threshold of naked-eye visibility at an apparent magnitude of 7.51.
There are several single-planet systems in Boötes. HD 132406 is a Sun-like star of spectral type G0V with an apparent magnitude of 8.45, 231.5 light-years from Earth. The star is orbited by a gas giant, HD 132406 b, discovered in 2007, which orbits 1.98 AU from its host star with a period of 974 days. The planet was discovered by the radial velocity method.
WASP-23 is a star with one orbiting planet, WASP-23 b. The planet, discovered by the transit method in 2010, orbits every 2.944 days very close to its star, at 0.0376 AU, and is smaller than Jupiter. Its star is a K1V-type star of apparent magnitude 12.7, far below naked-eye visibility, and is smaller than the Sun.
HD 131496 is also encircled by one planet, HD 131496 b. The star is of type K0 and is located 110 parsecs from Earth; it appears at a visual magnitude of 7.96. It is significantly larger than the Sun, with a radius of 4.6 solar radii. Its one planet was discovered in 2011 by the radial velocity method; its radius is as yet undetermined. HD 131496 b orbits at a distance of 2.09 AU with a period of 883 days.
Another single planetary system in Boötes is the HD 132563 system, a triple star system. The parent star, technically HD 132563B, is a star of magnitude 9.47, 96 parsecs from Earth. It is almost exactly the size of the Sun, with the same radius and a mass only 1% greater. Its planet, HD 132563B b, was discovered in 2011 by the radial velocity method. It orbits 2.62 AU from its star with a period of 1544 days. Its orbit is somewhat elliptical, with an eccentricity of 0.22. HD 132563B b is one of very few planets found in triple star systems; it orbits the isolated member of the system, which is separated from the other components, a spectroscopic binary, by 400 AU.
Also discovered through the radial velocity method, albeit a year earlier, is HD 136418 b, a two-Jupiter-mass planet that orbits the star HD 136418 at a distance of 1.32 AU with a period of 464.3 days. Its host star is a magnitude 7.88 G5-type star, 98.2 parsecs from Earth.
WASP-14 b is one of the most massive and dense exoplanets known. Discovered via the transit method, it orbits 0.036 AU from its host star with a period of 2.24 days and has a density of 4.6 grams per cubic centimeter. Its host star, WASP-14, is an F5V-type star of magnitude 9.75, 160 parsecs from Earth, with a very high proportion of lithium.
Deep-sky objects
Boötes is in a part of the celestial sphere facing away from the plane of our home Milky Way galaxy, and so does not have open clusters or nebulae. Instead, it has one bright globular cluster and many faint galaxies. The globular cluster NGC 5466 has an overall magnitude of 9.1 and a diameter of 11 arcminutes. It is a very loose globular cluster with fairly few stars and may appear as a rich, concentrated open cluster in a telescope. NGC 5466 is classified as a Shapley–Sawyer Concentration Class 12 cluster, reflecting its sparsity. Its fairly large diameter means that it has a low surface brightness, so it appears far dimmer than the catalogued magnitude of 9.1 and requires a large amateur telescope to view. Only approximately 12 stars are resolved by an amateur instrument.
Boötes has two bright galaxies. NGC 5248 (Caldwell 45) is a type Sc galaxy (a variety of spiral galaxy) of magnitude 10.2. It measures 6.5 by 4.9 arcminutes. Fifty million light-years from Earth, NGC 5248 is a member of the Virgo Cluster of galaxies; it has dim outer arms and obvious H II regions, dust lanes and young star clusters. NGC 5676 is another type Sc galaxy of magnitude 10.9. It measures 3.9 by 2.0 arcminutes. Other galaxies include NGC 5008, a type Sc emission-line galaxy, NGC 5548, a type S Seyfert galaxy, NGC 5653, a type S HII galaxy, NGC 5778 (also classified as NGC 5825), a type E galaxy that is the brightest of its cluster, NGC 5886, and NGC 5888, a type SBb galaxy. NGC 5698 is a barred spiral galaxy, notable for being the host of the 2005 supernova SN 2005bc, which peaked at magnitude 15.3.
Further away lies the 250-million-light-year-diameter Boötes void, a huge space largely empty of galaxies. Discovered by Robert Kirshner and colleagues in 1981, it is roughly 700 million light-years from Earth. Beyond it and within the bounds of the constellation, lie two superclusters at around 830 million and 1 billion light-years distant.
The Hercules–Corona Borealis Great Wall, the largest-known structure in the Universe, covers a significant part of Boötes.
Meteor showers
Boötes is home to the Quadrantid meteor shower, the most prolific annual meteor shower. It was discovered in January 1835 and named in 1864 by Alexander Herschel. The radiant is located in northern Boötes near Kappa Boötis, in its namesake former constellation of Quadrans Muralis. Quadrantid meteors are dim, but have a peak visible hourly rate of approximately 100 per hour on January 3–4. The zenithal hourly rate of the Quadrantids is approximately 130 meteors per hour at their peak; it is also a very narrow shower.
The Quadrantids are notoriously difficult to observe because of a low radiant and often inclement weather. The parent body of the meteor shower has been disputed for decades; however, Peter Jenniskens has proposed 2003 EH1, a minor planet, as the parent. 2003 EH1 may be linked to C/1490 Y1, a comet previously thought to be a potential parent body for the Quadrantids.
2003 EH1 is a short-period comet of the Jupiter family; 500 years ago, it experienced a catastrophic breakup event. It is now dormant. The Quadrantids had notable displays in 1982, 1985 and 2004. Meteors from this shower often appear to have a blue hue and travel at a moderate speed of 41.5–43 kilometers per second.
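The gap between the quoted zenithal hourly rate and what an observer actually counts can be estimated with a commonly used correction for radiant altitude and sky limiting magnitude. The population index and observing conditions in this sketch are assumed values, not figures from this article, so the output is only indicative.

```python
import math

def expected_hourly_rate(zhr, radiant_alt_deg, limiting_mag, pop_index=2.1):
    """Rough observed rate derived from a zenithal hourly rate, using the
    standard corrections for radiant altitude and sky limiting magnitude."""
    altitude_factor = math.sin(math.radians(radiant_alt_deg))
    magnitude_factor = pop_index ** (6.5 - limiting_mag)
    return zhr * altitude_factor / magnitude_factor

# ZHR ~130 from the text; a 25-degree radiant and magnitude-5.5 skies are assumed
print(f"{expected_hourly_rate(130, 25, 5.5):.0f} meteors per hour")
```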
On April 28, 1984, a remarkable outburst of the normally placid Alpha Bootids was observed by visual observer Frank Witte from 00:00 to 2:30 UTC. In a 6 cm telescope, he observed 433 meteors in a field of view near Arcturus with a diameter of less than 1°. Peter Jenniskens comments that this outburst resembled a "typical dust trail crossing". The Alpha Bootids normally begin on April 14, peaking on April 27 and 28, and finishing on May 12. Its meteors are slow-moving, with a velocity of 20.9 kilometers per second. They may be related to Comet 73P/Schwassmann–Wachmann 3, but this connection is only theorized.
The June Bootids, also known as the Iota Draconids, is a meteor shower associated with the comet 7P/Pons–Winnecke, first recognized on May 27, 1916, by William F. Denning. The shower, with its slow meteors, was not observed prior to 1916 because Earth did not cross the comet's dust trail until Jupiter perturbed Pons–Winnecke's orbit, causing it to pass close to Earth's orbit in the first year the June Bootids were observed.
In 1982, E. A. Reznikov discovered that the 1916 outburst was caused by material released from the comet in 1819. Another outburst of the June Bootids was not observed until 1998, because Comet Pons–Winnecke's orbit was not in a favorable position. However, on June 27, 1998, an outburst of meteors radiating from Boötes, later confirmed to be associated with Pons-Winnecke, was observed. They were incredibly long-lived, with trails of the brightest meteors lasting several seconds at times. Many fireballs, green-hued trails, and even some meteors that cast shadows were observed throughout the outburst, which had a maximum zenithal hourly rate of 200–300 meteors per hour.
Two Russian astronomers determined in 2002 that material ejected from the comet in 1825 was responsible for the 1998 outburst. Ejecta from the comet dating to 1819, 1825 and 1830 was predicted to enter Earth's atmosphere on June 23, 2004. The predictions of a shower less spectacular than the 1998 showing were borne out in a display that had a maximum zenithal hourly rate of 16–20 meteors per hour that night. The June Bootids are not expected to have another outburst in the next 50 years.
Typically, only 1–2 dim, very slow meteors are visible per hour; the average June Bootid has a magnitude of 5.0. The shower is related to the Alpha Draconids and the Bootids-Draconids. It lasts from June 27 to July 5, with a peak on the night of June 28. The June Bootids are classified as a class III shower (variable) and have an average entry velocity of 18 kilometers per second. Their radiant is located 7 degrees north of Beta Boötis.
The Beta Bootids is a weak shower that begins on January 5, peaks on January 16, and ends on January 18. Its meteors travel at 43 km/s. The January Bootids is a short, young meteor shower that begins on January 9, peaks from January 16 to January 18, and ends on January 18.
The Phi Bootids is another weak shower radiating from Boötes. It begins on April 16, peaks on April 30 and May 1, and ends on May 12. Its meteors are slow-moving, with a velocity of 15.1 km/s. They were discovered in 2006. The shower's peak hourly rate can be as high as six meteors per hour. Though named for a star in Boötes, the Phi Bootid radiant has moved into Hercules. The meteor stream is associated with three different asteroids: 1620 Geographos, 2062 Aten and 1978 CA.
The Lambda Bootids, part of the Bootid-Coronae Borealid Complex, are a weak annual shower with moderately fast meteors; 41.75 km/s. The complex includes the Lambda Bootids, as well as the Theta Coronae Borealids and Xi Coronae Borealids. All of the Bootid-Coronae Borealid showers are Jupiter family comet showers; the streams in the complex have highly inclined orbits.
There are several minor showers in Boötes, some of whose existence is yet to be verified. The Rho Bootids radiate from near the namesake star, and were hypothesized in 2010. The average Rho Bootid has an entry velocity of 43 km/s. It peaks in November and lasts for three days.
The Rho Bootid shower is part of the SMA complex, a group of meteor showers related to the Taurids, which is in turn linked to the comet 2P/Encke. However, the link to the Taurid shower remains unconfirmed and may be a chance correlation. Another such shower is the Gamma Bootids, which were hypothesized in 2006. Gamma Bootids have an entry velocity of 50.3 km/s. The Nu Bootids, hypothesized in 2012, have faster meteors, with an entry velocity of 62.8 km/s.
See also
Lists of astronomical objects
External links
Warburg Institute Iconographic Database (medieval and early-modern images of Bootes)
The clickable Bootes
Constellations
Northern constellations
Constellations listed by Ptolemy | Boötes | [
"Astronomy"
] | 8,016 | [
"Constellations listed by Ptolemy",
"Boötes",
"Constellations",
"Northern constellations",
"Sky regions"
] |
4,214 | https://en.wikipedia.org/wiki/Bioinformatics | Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, data science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The process of analyzing and interpreting data is sometimes referred to as computational biology; however, the distinction between the two terms is often disputed. To some, the term computational biology refers to building and using models of biological systems.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. These include reusable, specialized analysis "pipelines", particularly in the field of genomics, such as those for the identification of genes and single nucleotide polymorphisms (SNPs). Such pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (especially in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
History
The term bioinformatics was coined by Paulien Hogeweg and Ben Hesper in 1970 to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems).
Bioinformatics and computational biology involved the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology.
Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.
Sequences
There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for $1,000 or less.
Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. Margaret Oakley Dayhoff, a pioneer in the field, compiled one of the first protein sequence databases, initially published as books, and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences, released online with Tai Te Wu between 1980 and 1991.
In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well-known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses, and so provided proof of concept that bioinformatics would be insightful.
Goals
In order to study how normal cellular activities are altered in different disease states, raw biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data. This also includes nucleotide and amino acid sequences, protein domains, and protein structures.
Important sub-disciplines within bioinformatics and computational biology include:
Development and implementation of computer programs to efficiently access, manage, and use various types of information.
Development of new mathematical algorithms and statistical measures to assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.
The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include: pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, the modeling of evolution and cell division/mitosis.
Bioinformatics entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
Over the past few decades, rapid developments in genomic and other molecular research technologies and developments in information technologies have combined to produce a tremendous amount of information related to molecular biology. Bioinformatics is the name given to these mathematical and computing approaches used to glean understanding of biological processes.
Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.
Sequence analysis
Since the bacteriophage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequences—as of 2008, from more than 260,000 organisms, containing over 190 billion nucleotides.
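Sequence comparison underlies tools such as BLAST. As a minimal illustration of the idea (not of BLAST's actual heuristics), the sketch below computes a Needleman–Wunsch global alignment score between two toy sequences; the scoring parameters and sequences are arbitrary choices for illustration.

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score between two sequences (Needleman-Wunsch),
    a simple stand-in for the kind of comparison tools like BLAST accelerate."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap          # cost of aligning a prefix against gaps
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Hypothetical toy sequences, not real genes
print(needleman_wunsch_score("GATTACA", "GCATGCA"))
```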
DNA sequencing
Before sequences can be analyzed, they are obtained from a data storage bank, such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or affected by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.
Sequence assembly
Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The shotgun sequencing technique (used by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced (rather than chain-termination or chemical degradation methods), and genome assembly algorithms are a critical area of bioinformatics research.
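A toy sketch of the overlap-based assembly idea described above, assuming error-free reads from a single strand; production assemblers use far more sophisticated graph-based methods, so this is only a conceptual illustration with invented fragments.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(fragments):
    """Repeatedly merge the pair of reads with the largest overlap --
    a toy version of overlap-layout-consensus assembly."""
    reads = list(fragments)
    while len(reads) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:          # no remaining overlaps; stop merging
            break
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Hypothetical overlapping fragments of a short sequence
print(greedy_assemble(["ATTAGACCTG", "ACCTGCCGGA", "CCGGAATAC"]))
```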
Genome annotation
In genomics, annotation refers to the process of marking the stop and start regions of genes and other biological features in a sequenced DNA sequence. Many genomes are too large to be annotated by hand. As the rate of sequencing exceeds the rate of genome annotation, genome annotation has become the new bottleneck in bioinformatics.
Genome annotation can be classified into three levels: the nucleotide, protein, and process levels.
Gene finding is a chief aspect of nucleotide-level annotation. For complex genomes, a combination of ab initio gene prediction and sequence comparison with expressed sequence databases and other organisms can be successful. Nucleotide-level annotation also allows the integration of genome sequence with other genetic and physical maps of the genome.
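A naive illustration of nucleotide-level gene finding is a scan for open reading frames on one strand, as sketched below. Real ab initio predictors rely on statistical models of coding sequence rather than this simple start/stop scan, and the input sequence here is invented.

```python
START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=3):
    """Scan the forward strand in all three frames for start-to-stop stretches,
    a toy stand-in for ab initio gene prediction."""
    dna = dna.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == START and start is None:
                start = i
            elif codon in STOPS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, dna[start:i + 3]))
                start = None
    return orfs

# A short hypothetical sequence containing one obvious ORF
print(find_orfs("CCATGGCTGATTACGGATAACC"))
```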
The principal aim of protein-level annotation is to assign function to the protein products of the genome. Databases of protein sequences and functional domains and motifs are used for this type of annotation. About half of the predicted proteins in a new genome sequence tend to have no obvious function.
Understanding the function of genes and their products in the context of cellular and organismal physiology is the goal of process-level annotation. An obstacle of process-level annotation has been the inconsistency of terms used by different model systems. The Gene Ontology Consortium is helping to solve this problem.
The first description of a comprehensive annotation system was published in 1995 by The Institute for Genomic Research, which performed the first complete sequencing and analysis of the genome of a free-living (non-symbiotic) organism, the bacterium Haemophilus influenzae. The system identifies the genes encoding all proteins, transfer RNAs, ribosomal RNAs, in order to make initial functional assignments. The GeneMark program trained to find protein-coding genes in Haemophilus influenzae is constantly changing and improving.
Following the goals that the Human Genome Project left to achieve after its closure in 2003, the ENCODE project was developed by the National Human Genome Research Institute. This project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).
Gene function prediction
While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes. In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein-protein interactions.
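The transmembrane-segment heuristic mentioned above can be sketched as a Kyte–Doolittle sliding-window hydropathy scan. The window size and threshold below are conventional choices rather than values from this article, and the input sequence is invented; real predictors are considerably more elaborate.

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_windows(protein, window=19, threshold=1.6):
    """Average hydropathy over a sliding window; stretches above the threshold
    are candidate transmembrane segments (a rough heuristic only)."""
    hits = []
    for i in range(len(protein) - window + 1):
        avg = sum(KD[aa] for aa in protein[i:i + window]) / window
        if avg >= threshold:
            hits.append((i, round(avg, 2)))
    return hits

# A hypothetical sequence with a hydrophobic stretch in the middle
seq = "MKTAYDE" + "LLIVAGLLIVAGLLIVAGL" + "QNERKD"
print(hydropathy_windows(seq))
```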
Computational evolutionary biology
Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:
trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
build complex computational population genetics models to predict the outcome of the system over time
track and share information on an increasingly large number of species and organisms
Future work endeavours to reconstruct the now more complex tree of life.
Comparative genomics
The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. Intergenomic maps are made to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Entire genomes are involved in processes of hybridization, polyploidization and endosymbiosis that lead to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.
Many of these studies are based on the detection of sequence homology to assign sequences to protein families.
Pan genomics
Pan genomics is a concept introduced in 2005 by Tettelin and Medini. The pan genome is the complete gene repertoire of a particular monophyletic taxonomic group. Although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, a set of genes common to all the genomes under study (often housekeeping genes vital for survival), and the dispensable/flexible genome, a set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.
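Conceptually, once genes have been grouped into families, the core and dispensable genomes reduce to set operations over per-genome gene sets, as in this toy sketch. The strain and gene names are hypothetical, and tools such as BPGA perform the gene-family clustering that is skipped here.

```python
# Hypothetical gene content of three closely related genomes
genomes = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneC", "geneE"},
    "strain_C": {"geneA", "geneB", "geneF"},
}

pan_genome = set.union(*genomes.values())           # every gene seen in any strain
core_genome = set.intersection(*genomes.values())   # genes shared by all strains
dispensable = pan_genome - core_genome               # present in some but not all

print("pan:", sorted(pan_genome))
print("core:", sorted(core_genome))
print("dispensable:", sorted(dispensable))
```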
Genetics of disease
As of 2013, the existence of efficient high-throughput next-generation sequencing technology allows for the identification of the causes of many different human disorders. Simple Mendelian inheritance has been observed for over 3,000 disorders that have been identified at the Online Mendelian Inheritance in Man database, but complex diseases are more difficult. Association studies have found many individual genetic regions that individually are weakly associated with complex diseases (such as infertility, breast cancer and Alzheimer's disease), rather than a single cause. There are currently many challenges to using genes for diagnosis and treatment, such as uncertainty over which genes are important and how stable the choices provided by an algorithm are.
Genome-wide association studies have successfully identified thousands of common genetic variants for complex diseases and traits; however, these common variants only explain a small fraction of heritability. Rare variants may account for some of the missing heritability. Large-scale whole genome sequencing studies have rapidly sequenced millions of whole genomes, and such studies have identified hundreds of millions of rare variants. Functional annotations predict the effect or function of a genetic variant and help to prioritize rare functional variants, and incorporating these annotations can effectively boost the power of genetic association of rare variants analysis of whole genome sequencing studies. Some tools have been developed to provide all-in-one rare variant association analysis for whole-genome sequencing data, including integration of genotype data and their functional annotations, association analysis, result summary and visualization. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes.
Analysis of mutations in cancer
In cancer, the genomes of affected cells are rearranged in complex or unpredictable ways. In addition to single-nucleotide polymorphism arrays identifying point mutations that cause cancer, oligonucleotide microarrays can be used to identify chromosomal gains and losses (called comparative genomic hybridization). These detection methods generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus Hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
Two important principles can be used to identify cancer by mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations which need to be distinguished from passengers.
Further improvements in bioinformatics could allow for classifying types of cancer by analysis of cancer driven mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequence of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.
Gene and protein expression
Analysis of gene expression
The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
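A minimal sketch of the up-/down-regulation comparison described above, using log2 fold changes on invented expression values. Real analyses additionally model replicate variability, normalization and multiple testing, so this illustrates only the basic arithmetic.

```python
import math

# Hypothetical mean expression values (arbitrary units) for a few transcripts
tumour = {"GENE1": 520.0, "GENE2": 35.0, "GENE3": 110.0, "GENE4": 12.0}
normal = {"GENE1": 130.0, "GENE2": 150.0, "GENE3": 100.0, "GENE4": 10.0}

def log2_fold_changes(case, control, pseudocount=1.0):
    """log2 ratio of case to control expression; the pseudocount avoids
    division by zero for unexpressed transcripts."""
    return {
        gene: math.log2((case[gene] + pseudocount) / (control[gene] + pseudocount))
        for gene in case
    }

changes = log2_fold_changes(tumour, normal)
up = [g for g, fc in changes.items() if fc >= 1]      # at least 2-fold up
down = [g for g, fc in changes.items() if fc <= -1]   # at least 2-fold down
print("up-regulated:", up, "down-regulated:", down)
```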
Analysis of protein expression
Protein microarrays and high throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, and the complicated statistical analysis of samples when multiple incomplete peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.
Analysis of regulation
Gene regulation is a complex process in which a signal, such as an extracellular hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process.
For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the protein-coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments.
Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). Clustering algorithms can be then applied to expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.
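One possible sketch of grouping co-expressed genes with k-means, one of the clustering methods mentioned above. It assumes scikit-learn and NumPy are available, and the small expression matrix is invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical expression profiles (rows: genes, columns: conditions)
expression = np.array([
    [8.1, 7.9, 2.0, 2.2],   # geneA
    [7.8, 8.2, 1.8, 2.1],   # geneB
    [2.2, 1.9, 8.0, 8.3],   # geneC
    [2.0, 2.1, 7.7, 8.1],   # geneD
])
genes = ["geneA", "geneB", "geneC", "geneD"]

# Genes placed in the same cluster are candidates for co-regulation
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
for gene, label in zip(genes, labels):
    print(gene, "-> cluster", label)
```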
Analysis of cellular organization
Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.
Microscopy and image analysis
Microscopic pictures allow for the location of organelles as well as molecules, which may be the source of abnormalities in diseases.
Protein localization
Finding the location of proteins allows us to predict what they do. This is called protein function prediction. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. There are well developed protein subcellular localization prediction resources available, including protein subcellular location databases, and prediction tools.
Nuclear organization of chromatin
Data from high-throughput chromosome conformation capture experiments, such as Hi-C (experiment) and ChIA-PET, can provide information on the three-dimensional structure and nuclear organization of chromatin. Bioinformatic challenges in this field include partitioning the genome into domains, such as Topologically Associating Domains (TADs), that are organised together in three-dimensional space.
Structural bioinformatics
Finding the structure of proteins is an important application of bioinformatics. The Critical Assessment of Protein Structure Prediction (CASP) is an open competition in which research groups worldwide submit predicted models for proteins whose experimentally determined structures have not yet been released; the predictions are then evaluated against those structures.
Amino acid sequence
The linear amino acid sequence of a protein is called the primary structure. The primary structure can be easily determined from the sequence of codons on the DNA gene that codes for it. In most proteins, the primary structure uniquely determines the 3-dimensional structure of a protein in its native environment. An exception is the misfolded protein involved in bovine spongiform encephalopathy. This structure is linked to the function of the protein. Additional structural information includes the secondary, tertiary and quaternary structure. A viable general solution to the prediction of the function of a protein remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time.
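Determining the primary structure from an in-frame coding sequence is a straightforward codon lookup, as sketched below. The table covers only the handful of codons needed for this toy example, not the full genetic code; libraries such as Biopython provide complete translation tables.

```python
# A minimal subset of the standard genetic code (codon -> one-letter amino acid)
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GAT": "D", "TAC": "Y",
    "GGA": "G", "AAA": "K", "TGA": "*",  # '*' marks a stop codon
}

def translate(coding_sequence):
    """Read an in-frame coding sequence codon by codon to obtain the primary
    structure, stopping at the first stop codon."""
    protein = []
    for i in range(0, len(coding_sequence) - 2, 3):
        residue = CODON_TABLE.get(coding_sequence[i:i + 3].upper(), "X")
        if residue == "*":           # stop codon reached
            break
        protein.append(residue)      # 'X' marks codons missing from this toy table
    return "".join(protein)

print(translate("ATGGCTGATTACGGAAAATGA"))  # -> MADYGK
```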
Homology
In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In structural bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. Homology modeling is used to predict the structure of an unknown protein from existing homologous proteins.
One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism. Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes and shared ancestor.
Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling.
Another aspect of structural bioinformatics include the use of protein structures for Virtual Screening models such as Quantitative Structure-Activity Relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in simulation of for example ligand-binding studies and in silico mutagenesis studies.
AlphaFold, deep-learning-based software developed by Google's DeepMind and released in 2021, greatly outperforms all other prediction methods; predicted structures for hundreds of millions of proteins have been released in the AlphaFold Protein Structure Database.
Network and systems biology
Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both.
Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
Molecular interaction networks
Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR) and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions only based on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.
Other interactions encountered in the field include Protein–ligand (including drug) and protein–peptide. Molecular dynamic simulation of movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.
Biodiversity informatics
Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases, or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, or species identification tools. A growing area is also macro-ecology, i.e. the study of how biodiversity is connected to ecology and human impact, such as climate change.
Others
Literature analysis
The enormous volume of published literature makes it virtually impossible for individuals to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:
Abbreviation recognition – identify the long-form and abbreviation of biological terms
Named-entity recognition – recognizing biological terms such as gene names
Protein–protein interaction – identify which proteins interact with which proteins from text
The area of research draws from statistics and computational linguistics.
High-throughput image analysis
Computational technologies are used to automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems can improve an observer's accuracy, objectivity, or speed. Image analysis is important for both diagnostics and research. Some examples are:
high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, Bioimage informatics)
morphometrics
clinical image analysis and visualization
determining the real-time air-flow patterns in breathing lungs of living animals
quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
making behavioral observations from extended video recordings of laboratory animals
infrared measurements for metabolic activity determination
inferring clone overlaps in DNA mapping, e.g. the Sulston score
High-throughput single cell data analysis
Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.
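As a rough sketch of what "finding populations" can mean in practice, the following assumes simulated two-marker measurements and scikit-learn's k-means clustering; real cytometry pipelines use more specialised gating and clustering methods.

    # Simulate two cell populations measured on two markers and recover them by clustering.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    pop_a = rng.normal(loc=[2.0, 8.0], scale=0.5, size=(500, 2))
    pop_b = rng.normal(loc=[7.0, 1.5], scale=0.5, size=(500, 2))
    cells = np.vstack([pop_a, pop_b])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
    for k in range(2):
        print(f"cluster {k}: {np.sum(labels == k)} cells, mean = {cells[labels == k].mean(axis=0)}")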
Ontologies and data integration
Biological ontologies are directed acyclic graphs of controlled vocabularies. They create categories for biological concepts and descriptions so they can be easily analyzed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis.
The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene ontology which describes gene function. There are also ontologies which describe phenotypes.
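A small sketch of the directed-acyclic-graph idea, with parent ("is-a") links stored in a plain dictionary; the term names are chosen for illustration, and the traversal collects all ancestors of a term, which is the basis of reasoning over ontologies such as the Gene Ontology.

    # Illustrative ontology fragment as a DAG of parent links.
    parents = {
        "glucose metabolic process": ["carbohydrate metabolic process"],
        "carbohydrate metabolic process": ["metabolic process"],
        "metabolic process": ["biological process"],
        "biological process": [],
    }

    def ancestors(term, graph):
        """Return every term reachable by following parent links."""
        seen = set()
        stack = [term]
        while stack:
            for parent in graph[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

    print(ancestors("glucose metabolic process", parents))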
Databases
Databases are essential for bioinformatics research and applications. Databases exist for many different information types, including DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases can contain both empirical data (obtained directly from experiments) and predicted data (obtained from analysis of existing data). They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. Databases can have different formats, access mechanisms, and be public or private.
Some of the most commonly used databases are listed below:
Used in biological sequence analysis: GenBank, UniProt
Used in structure analysis: Protein Data Bank (PDB)
Used in finding protein families and motifs: InterPro, Pfam
Used for Next Generation Sequencing: Sequence Read Archive
Used in Network Analysis: Metabolic Pathway Databases (KEGG, BioCyc), Interaction Analysis Databases, Functional Networks
Used in design of synthetic genetic circuits: GenoCAD
Software and tools
Software tools for bioinformatics include simple command-line tools, more complex graphical programs, and standalone web-services. They are made by bioinformatics companies or by public institutions.
Open-source bioinformatics software
Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases has created opportunities for research groups to contribute to bioinformatics regardless of funding. The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration.
Open-source bioinformatics software includes Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD.
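As a taste of what these toolkits provide, here is a minimal sketch using Biopython's Seq class; it assumes Biopython is installed, and the sequence itself is made up for the example.

    from Bio.Seq import Seq

    dna = Seq("ATGGCCATTGTAATGGGCCGC")
    print(dna.complement())            # complementary strand
    print(dna.reverse_complement())    # reverse complement
    print(dna.transcribe())            # corresponding mRNA
    print(dna.translate())             # encoded peptide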
The non-profit Open Bioinformatics Foundation and the annual Bioinformatics Open Source Conference promote open-source bioinformatics software.
Web services in bioinformatics
SOAP- and REST-based interfaces have been developed to allow client computers to use algorithms, data and computing resources from servers in other parts of the world. The main advantage is that end users do not have to deal with software and database maintenance overheads.
Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.
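A hedged sketch of the client side of such a service, using Python's requests library; the URL, parameter names and job-polling pattern below are placeholders rather than any real EBI endpoint.

    import requests

    BASE_URL = "https://example.org/api/blast"   # placeholder service URL

    response = requests.post(BASE_URL, data={"program": "blastp",
                                             "database": "uniprot",
                                             "sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"})
    response.raise_for_status()
    job_id = response.text.strip()               # many services return a job identifier

    result = requests.get(f"{BASE_URL}/{job_id}/result")  # poll for the finished report
    print(result.status_code)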
Bioinformatics workflow management systems
A bioinformatics workflow management system is a specialized form of a workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to
provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
provide interactive tools for the scientists enabling them to execute their workflows and view their results in real-time,
simplify the process of sharing and reusing workflows between the scientists, and
enable scientists to track the provenance of the workflow execution results and the workflow creation steps.
Some of the platforms giving this service: Galaxy, Kepler, Taverna, UGENE, Anduril, HIVE.
BioCompute and BioCompute Objects
In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University.
It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse, of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups. The US FDA funded this work so that information on pipelines would be more transparent and accessible to their regulatory staff.
In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was copied as both a "standard trial use" document and a preprint paper uploaded to bioRxiv. The BioCompute object allows for the JSON-ized record to be shared among employees, collaborators, and regulators.
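To make the "JSON-ized record" idea concrete, the sketch below builds a small provenance record for a pipeline run and serialises it; the field names are invented for the example and do not follow the official BioCompute Object schema.

    import json

    record = {
        "name": "variant-calling-run-42",
        "description": "Exome variant calling for one sample",
        "pipeline_steps": [
            {"step": 1, "tool": "bwa mem", "version": "0.7.17"},
            {"step": 2, "tool": "gatk HaplotypeCaller", "version": "4.2.0"},
        ],
        "provenance": {"created_by": "analyst@example.org", "created": "2017-01-15"},
    }
    print(json.dumps(record, indent=2))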
Education platforms
Bioinformatics is not only taught as an in-person master's degree at many universities. The computational nature of bioinformatics lends it to computer-aided and online learning. Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on their website under a Creative Commons license. The 4273π project or 4273pi project also offers open source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system.
MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization at the University of California, San Diego, Genomic Data Science Specialization at Johns Hopkins University, and EdX's Data Analysis for Life Sciences XSeries at Harvard University.
Conferences
There are several large conferences that are concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).
See also
References
Further reading
Sehgal et al.: Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013, 10:3.
Achuthsankar S Nair, Computational Biology & Bioinformatics – A gentle Overview, Communications of Computer Society of India, January 2007
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/Crc, 2006. (Chapman & Hall/Crc Computer and Information Science Series)
Baldi, P and Brunak, S, Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M. Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological sequence analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al. Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al. Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd. "Algebraic Statistics for Computational Biology" Cambridge University Press, 2005.
Pevzner, Pavel A. Computational Molecular Biology: An Algorithmic Approach The MIT Press, 2000.
Soinov, L. Bioinformatics and Pattern Recognition Come Together. Journal of Pattern Recognition Research (JPRR), Vol. 1 (1), 2006, pp. 37–41
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013.
Tisdall, James. "Beginning Perl for Bioinformatics" O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005) CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology MIT Course
Computational Biology: Genomes, Networks, Evolution Free MIT Course
External links
Bioinformatics Resource Portal (SIB) | Bioinformatics | [
"Engineering",
"Biology"
] | 7,642 | [
"Bioinformatics",
"Biological engineering"
] |
4,230 | https://en.wikipedia.org/wiki/Cell%20%28biology%29 | The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane; many cells contain organelles, each with a specific function. The term comes from the Latin word meaning 'small room'. Most cells are only visible under a microscope. Cells emerged on Earth about 4 billion years ago. All cells are capable of replication, protein synthesis, and motility.
Cells are broadly categorized into two types: eukaryotic cells, which possess a nucleus, and prokaryotic cells, which lack a nucleus but have a nucleoid region. Prokaryotes are single-celled organisms such as bacteria, whereas eukaryotes can be either single-celled, such as amoebae, or multicellular, such as some algae, plants, animals, and fungi. Eukaryotic cells contain organelles including mitochondria, which provide energy for cell functions; chloroplasts, which create sugars by photosynthesis, in plants; and ribosomes, which synthesise proteins.
Cells were discovered by Robert Hooke in 1665, who named them after their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells.
Cell types
Cells are broadly categorized into two types: eukaryotic cells, which possess a nucleus, and prokaryotic cells, which lack a nucleus but have a nucleoid region. Prokaryotes are single-celled organisms, whereas eukaryotes can be either single-celled or multicellular.
Prokaryotic cells
Prokaryotes include bacteria and archaea, two of the three domains of life. Prokaryotic cells were the first form of life on Earth, characterized by having vital biological processes including cell signaling. They are simpler and smaller than eukaryotic cells, and lack a nucleus and other membrane-bound organelles. The DNA of a prokaryotic cell consists of a single circular chromosome that is in direct contact with the cytoplasm. The nuclear region in the cytoplasm is called the nucleoid. Most prokaryotes are the smallest of all organisms, ranging from 0.5 to 2.0 μm in diameter.
A prokaryotic cell has three regions:
Enclosing the cell is the cell envelope, generally consisting of a plasma membrane covered by a cell wall which, for some bacteria, may be further covered by a third layer called a capsule. Though most prokaryotes have both a cell membrane and a cell wall, there are exceptions such as Mycoplasma (bacteria) and Thermoplasma (archaea) which only possess the cell membrane layer. The envelope gives rigidity to the cell and separates the interior of the cell from its environment, serving as a protective filter. The cell wall consists of peptidoglycan in bacteria and acts as an additional barrier against exterior forces. It also prevents the cell from expanding and bursting (cytolysis) from osmotic pressure due to a hypotonic environment. Some eukaryotic cells (plant cells and fungal cells) also have a cell wall.
Inside the cell is the cytoplasmic region that contains the genome (DNA), ribosomes and various sorts of inclusions. The genetic material is freely found in the cytoplasm. Prokaryotes can carry extrachromosomal DNA elements called plasmids, which are usually circular. Linear bacterial plasmids have been identified in several species of spirochete bacteria, including members of the genus Borrelia notably Borrelia burgdorferi, which causes Lyme disease. Though not forming a nucleus, the DNA is condensed in a nucleoid. Plasmids encode additional genes, such as antibiotic resistance genes.
On the outside, some prokaryotes have flagella and pili that project from the cell's surface. These are structures made of proteins that facilitate movement and communication between cells.
Eukaryotic cells
Plants, animals, fungi, slime moulds, protozoa, and algae are all eukaryotic. These cells are about fifteen times wider than a typical prokaryote and can be as much as a thousand times greater in volume. The main distinguishing feature of eukaryotes as compared to prokaryotes is compartmentalization: the presence of membrane-bound organelles (compartments) in which specific activities take place. Most important among these is a cell nucleus, an organelle that houses the cell's DNA. This nucleus gives the eukaryote its name, which means "true kernel (nucleus)". Some of the other differences are:
The plasma membrane resembles that of prokaryotes in function, with minor differences in the setup. Cell walls may or may not be present.
The eukaryotic DNA is organized in one or more linear molecules, called chromosomes, which are associated with histone proteins. All chromosomal DNA is stored in the cell nucleus, separated from the cytoplasm by a membrane. Some eukaryotic organelles such as mitochondria also contain some DNA.
Many eukaryotic cells are ciliated with primary cilia. Primary cilia play important roles in chemosensation, mechanosensation, and thermosensation. Each cilium may thus be "viewed as a sensory cellular antennae that coordinates a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation."
Motile eukaryotes can move using motile cilia or flagella. Motile cells are absent in conifers and flowering plants. Eukaryotic flagella are more complex than those of prokaryotes.
Many groups of eukaryotes are single-celled. Among the many-celled groups are animals and plants. The number of cells in these groups varies with species; it has been estimated that the human body contains around 37 trillion (3.72×10¹³) cells, and more recent studies put this number at around 30 trillion (~36 trillion cells in the male, ~28 trillion in the female).
Subcellular components
All cells, whether prokaryotic or eukaryotic, have a membrane that envelops the cell, regulates what moves in and out (selectively permeable), and maintains the electric potential of the cell. Inside the membrane, the cytoplasm takes up most of the cell's volume. Except for red blood cells, which lack a cell nucleus and most organelles to accommodate maximum space for hemoglobin, all cells possess DNA, the hereditary material of genes, and RNA, containing the information necessary to build various proteins such as enzymes, the cell's primary machinery. There are also other kinds of biomolecules in cells. This article lists these primary cellular components, then briefly describes their function.
Cell membrane
The cell membrane, or plasma membrane, is a selectively permeable biological membrane that surrounds the cytoplasm of a cell. In animals, the plasma membrane is the outer boundary of the cell, while in plants and prokaryotes it is usually covered by a cell wall. This membrane serves to separate and protect a cell from its surrounding environment and is made mostly from a double layer of phospholipids, which are amphiphilic (partly hydrophobic and partly hydrophilic). Hence, the layer is called a phospholipid bilayer, or sometimes a fluid mosaic membrane. Embedded within this membrane is a macromolecular structure called the porosome, the universal secretory portal in cells, and a variety of protein molecules that act as channels and pumps that move different molecules into and out of the cell. The membrane is semi-permeable, and selectively permeable, in that it can let a substance (molecule or ion) pass through freely, pass through to a limited extent, or not pass at all. Cell surface membranes also contain receptor proteins that allow cells to detect external signaling molecules such as hormones.
Cytoskeleton
The cytoskeleton acts to organize and maintain the cell's shape; anchors organelles in place; helps during endocytosis, the uptake of external materials by a cell, and cytokinesis, the separation of daughter cells after cell division; and moves parts of the cell in processes of growth and mobility. The eukaryotic cytoskeleton is composed of microtubules, intermediate filaments and microfilaments. In the cytoskeleton of a neuron the intermediate filaments are known as neurofilaments. There are a great number of proteins associated with them, each controlling a cell's structure by directing, bundling, and aligning filaments. The prokaryotic cytoskeleton is less well-studied but is involved in the maintenance of cell shape, polarity and cytokinesis. The subunit protein of microfilaments is a small, monomeric protein called actin. The subunit of microtubules is a dimeric molecule called tubulin. Intermediate filaments are heteropolymers whose subunits vary among the cell types in different tissues. Some of the subunit proteins of intermediate filaments include vimentin, desmin, lamin (lamins A, B and C), keratin (multiple acidic and basic keratins), and neurofilament proteins (NF–L, NF–M).
Genetic material
Two different kinds of genetic material exist: deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). Cells use DNA for their long-term information storage. The biological information contained in an organism is encoded in its DNA sequence. RNA is used for information transport (e.g., mRNA) and enzymatic functions (e.g., ribosomal RNA). Transfer RNA (tRNA) molecules are used to add amino acids during protein translation.
Prokaryotic genetic material is organized in a simple circular bacterial chromosome in the nucleoid region of the cytoplasm. Eukaryotic genetic material is divided into different, linear molecules called chromosomes inside a discrete nucleus, usually with additional genetic material in some organelles like mitochondria and chloroplasts (see endosymbiotic theory).
A human cell has genetic material contained in the cell nucleus (the nuclear genome) and in the mitochondria (the mitochondrial genome). In humans, the nuclear genome is divided into 46 linear DNA molecules called chromosomes, including 22 homologous chromosome pairs and a pair of sex chromosomes. The mitochondrial genome is a circular DNA molecule distinct from nuclear DNA. Although the mitochondrial DNA is very small compared to nuclear chromosomes, it codes for 13 proteins involved in mitochondrial energy production and specific tRNAs.
Foreign genetic material (most commonly DNA) can also be artificially introduced into the cell by a process called transfection. This can be transient, if the DNA is not inserted into the cell's genome, or stable, if it is. Certain viruses also insert their genetic material into the genome.
Organelles
Organelles are parts of the cell that are adapted and/or specialized for carrying out one or more vital functions, analogous to the organs of the human body (such as the heart, lung, and kidney, with each organ performing a different function). Both eukaryotic and prokaryotic cells have organelles, but prokaryotic organelles are generally simpler and are not membrane-bound.
There are several types of organelles in a cell. Some (such as the nucleus and Golgi apparatus) are typically solitary, while others (such as mitochondria, chloroplasts, peroxisomes and lysosomes) can be numerous (hundreds to thousands). The cytosol is the gelatinous fluid that fills the cell and surrounds the organelles.
Eukaryotic
Cell nucleus: A cell's information center, the cell nucleus is the most conspicuous organelle found in a eukaryotic cell. It houses the cell's chromosomes, and is the place where almost all DNA replication and RNA synthesis (transcription) occur. The nucleus is spherical and separated from the cytoplasm by a double membrane called the nuclear envelope; the space between these two membranes is called the perinuclear space. The nuclear envelope isolates and protects a cell's DNA from various molecules that could accidentally damage its structure or interfere with its processing. During processing, DNA is transcribed, or copied into a special RNA, called messenger RNA (mRNA). This mRNA is then transported out of the nucleus to the cytoplasm, where it is translated into a specific protein molecule. The nucleolus is a specialized region within the nucleus where ribosome subunits are assembled. In prokaryotes, DNA processing takes place in the cytoplasm.
Mitochondria and chloroplasts: generate energy for the cell. Mitochondria are self-replicating double membrane-bound organelles that occur in various numbers, shapes, and sizes in the cytoplasm of all eukaryotic cells. Respiration occurs in the cell mitochondria, which generate the cell's energy by oxidative phosphorylation, using oxygen to release energy stored in cellular nutrients (typically pertaining to glucose) to generate ATP (aerobic respiration). Mitochondria multiply by binary fission, like prokaryotes. Chloroplasts can only be found in plants and algae, and they capture the sun's energy to make carbohydrates through photosynthesis.
Endoplasmic reticulum: The endoplasmic reticulum (ER) is a transport network for molecules targeted for certain modifications and specific destinations, as compared to molecules that float freely in the cytoplasm. The ER has two forms: the rough ER, which has ribosomes on its surface that secrete proteins into the ER, and the smooth ER, which lacks ribosomes. The smooth ER plays a role in calcium sequestration and release and also helps in synthesis of lipid.
Golgi apparatus: The primary function of the Golgi apparatus is to process and package the macromolecules such as proteins and lipids that are synthesized by the cell.
Lysosomes and peroxisomes: Lysosomes contain digestive enzymes (acid hydrolases). They digest excess or worn-out organelles, food particles, and engulfed viruses or bacteria. Peroxisomes have enzymes that rid the cell of toxic peroxides. Lysosomes are optimally active in an acidic environment. The cell could not house these destructive enzymes if they were not contained in a membrane-bound system.
Centrosome: the cytoskeleton organizer: The centrosome produces the microtubules of a cell—a key component of the cytoskeleton. It directs the transport through the ER and the Golgi apparatus. Centrosomes are composed of two centrioles which lie perpendicular to each other in which each has an organization like a cartwheel, which separate during cell division and help in the formation of the mitotic spindle. A single centrosome is present in the animal cells. They are also found in some fungi and algae cells.
Vacuoles: Vacuoles sequester waste products and in plant cells store water. They are often described as liquid filled spaces and are surrounded by a membrane. Some cells, most notably Amoeba, have contractile vacuoles, which can pump water out of the cell if there is too much water. The vacuoles of plant cells and fungal cells are usually larger than those of animal cells. Vacuoles of plant cells are surrounded by a membrane which transports ions against concentration gradients.
Eukaryotic and prokaryotic
Ribosomes: The ribosome is a large complex of RNA and protein molecules. They each consist of two subunits, and act as an assembly line where RNA from the nucleus is used to synthesise proteins from amino acids. Ribosomes can be found either floating freely or bound to a membrane (the rough endoplasmatic reticulum in eukaryotes, or the cell membrane in prokaryotes).
Plastids: Plastids are membrane-bound organelles generally found in plant cells and euglenoids; they contain specific pigments, thus affecting the colour of the plant or organism. These pigments also help in food storage and the tapping of light energy. There are three types of plastids based upon the specific pigments. Chloroplasts contain chlorophyll and some carotenoid pigments, which help in the tapping of light energy during photosynthesis. Chromoplasts contain fat-soluble carotenoid pigments, like orange carotene and yellow xanthophylls, which help in synthesis and storage. Leucoplasts are non-pigmented plastids that help in the storage of nutrients.
Structures outside the cell membrane
Many cells also have structures which exist wholly or partially outside the cell membrane. These structures are notable because they are not protected from the external environment by the cell membrane. In order to assemble these structures, their components must be carried across the cell membrane by export processes.
Cell wall
Many types of prokaryotic and eukaryotic cells have a cell wall. The cell wall acts to protect the cell mechanically and chemically from its environment, and is an additional layer of protection to the cell membrane. Different types of cell have cell walls made up of different materials; plant cell walls are primarily made up of cellulose, fungi cell walls are made up of chitin and bacteria cell walls are made up of peptidoglycan.
Prokaryotic
Capsule
A gelatinous capsule is present in some bacteria outside the cell membrane and cell wall. The capsule may be polysaccharide as in pneumococci, meningococci or polypeptide as Bacillus anthracis or hyaluronic acid as in streptococci.
Capsules are not marked by normal staining protocols and can be detected by India ink or methyl blue, which allows for higher contrast between the cells for observation.
Flagella
Flagella are organelles for cellular mobility. The bacterial flagellum stretches from cytoplasm through the cell membrane(s) and extrudes through the cell wall. They are long and thick thread-like appendages, protein in nature. A different type of flagellum is found in archaea and a different type is found in eukaryotes.
Fimbriae
A fimbria (plural fimbriae; also known as a pilus, plural pili) is a short, thin, hair-like filament found on the surface of bacteria. Fimbriae are formed of a protein called pilin (antigenic) and are responsible for the attachment of bacteria to specific receptors on human cells (cell adhesion). There are special types of pili involved in bacterial conjugation.
Cellular processes
Replication
Cell division involves a single cell (called a mother cell) dividing into two daughter cells. This leads to growth in multicellular organisms (the growth of tissue) and to procreation (vegetative reproduction) in unicellular organisms. Prokaryotic cells divide by binary fission, while eukaryotic cells usually undergo a process of nuclear division, called mitosis, followed by division of the cell, called cytokinesis. A diploid cell may also undergo meiosis to produce haploid cells, usually four. Haploid cells serve as gametes in multicellular organisms, fusing to form new diploid cells.
DNA replication, or the process of duplicating a cell's genome, always happens when a cell divides through mitosis or binary fission. This occurs during the S phase of the cell cycle.
In meiosis, the DNA is replicated only once, while the cell divides twice. DNA replication only occurs before meiosis I. DNA replication does not occur when the cells divide the second time, in meiosis II. Replication, like all cellular activities, requires specialized proteins for carrying out the job.
DNA repair
Cells of all organisms contain enzyme systems that scan their DNA for damage and carry out repair processes when it is detected. Diverse repair processes have evolved in organisms ranging from bacteria to humans. The widespread prevalence of these repair processes indicates the importance of maintaining cellular DNA in an undamaged state in order to avoid cell death or errors of replication due to damage that could lead to mutation. E. coli bacteria are a well-studied example of a cellular organism with diverse well-defined DNA repair processes. These include: nucleotide excision repair, DNA mismatch repair, non-homologous end joining of double-strand breaks, recombinational repair and light-dependent repair (photoreactivation).
Growth and metabolism
Between successive cell divisions, cells grow through the functioning of cellular metabolism. Cell metabolism is the process by which individual cells process nutrient molecules. Metabolism has two distinct divisions: catabolism, in which the cell breaks down complex molecules to produce energy and reducing power, and anabolism, in which the cell uses energy and reducing power to construct complex molecules and perform other biological functions.
Complex sugars can be broken down into simpler sugar molecules called monosaccharides such as glucose. Once inside the cell, glucose is broken down to make adenosine triphosphate (ATP), a molecule that possesses readily available energy, through two different pathways. In plant cells, chloroplasts create sugars by photosynthesis, using the energy of light to join molecules of water and carbon dioxide.
Protein synthesis
Cells are capable of synthesizing new proteins, which are essential for the modulation and maintenance of cellular activities. This process involves the formation of new protein molecules from amino acid building blocks based on information encoded in DNA/RNA. Protein synthesis generally consists of two major steps: transcription and translation.
Transcription is the process where genetic information in DNA is used to produce a complementary RNA strand. This RNA strand is then processed to give messenger RNA (mRNA), which is free to migrate through the cell. mRNA molecules bind to protein-RNA complexes called ribosomes located in the cytosol, where they are translated into polypeptide sequences. The ribosome mediates the formation of a polypeptide sequence based on the mRNA sequence. The mRNA sequence directly relates to the polypeptide sequence by binding to transfer RNA (tRNA) adapter molecules in binding pockets within the ribosome. The new polypeptide then folds into a functional three-dimensional protein molecule.
Motility
Unicellular organisms can move in order to find food or escape predators. Common mechanisms of motion include flagella and cilia.
In multicellular organisms, cells can move during processes such as wound healing, the immune response and cancer metastasis. For example, in wound healing in animals, white blood cells move to the wound site to kill the microorganisms that cause infection. Cell motility involves many receptors, crosslinking, bundling, binding, adhesion, motor and other proteins. The process is divided into three steps: protrusion of the leading edge of the cell, adhesion of the leading edge and de-adhesion at the cell body and rear, and cytoskeletal contraction to pull the cell forward. Each step is driven by physical forces generated by unique segments of the cytoskeleton.
Navigation, control and communication
In August 2020, scientists described one way cells—in particular cells of a slime mold and mouse pancreatic cancer-derived cells—are able to navigate efficiently through a body and identify the best routes through complex mazes: generating gradients after breaking down diffused chemoattractants which enable them to sense upcoming maze junctions before reaching them, including around corners.
Multicellularity
Cell specialization/differentiation
Multicellular organisms are organisms that consist of more than one cell, in contrast to single-celled organisms.
In complex multicellular organisms, cells specialize into different cell types that are adapted to particular functions. In mammals, major cell types include skin cells, muscle cells, neurons, blood cells, fibroblasts, stem cells, and others. Cell types differ both in appearance and function, yet are genetically identical. Cells are able to be of the same genotype but of different cell type due to the differential expression of the genes they contain.
Most distinct cell types arise from a single totipotent cell, called a zygote, that differentiates into hundreds of different cell types during the course of development. Differentiation of cells is driven by different environmental cues (such as cell–cell interaction) and intrinsic differences (such as those caused by the uneven distribution of molecules during division).
Origin of multicellularity
Multicellularity has evolved independently at least 25 times, including in some prokaryotes, like cyanobacteria, myxobacteria, actinomycetes, or Methanosarcina. However, complex multicellular organisms evolved only in six eukaryotic groups: animals, fungi, brown algae, red algae, green algae, and plants. It evolved repeatedly for plants (Chloroplastida), once or twice for animals, once for brown algae, and perhaps several times for fungi, slime molds, and red algae. Multicellularity may have evolved from colonies of interdependent organisms, from cellularization, or from organisms in symbiotic relationships.
The first evidence of multicellularity is from cyanobacteria-like organisms that lived between 3 and 3.5 billion years ago. Other early fossils of multicellular organisms include the contested Grypania spiralis and the fossils of the black shales of the Palaeoproterozoic Francevillian Group Fossil B Formation in Gabon.
The evolution of multicellularity from unicellular ancestors has been replicated in the laboratory, in evolution experiments using predation as the selective pressure.
Origins
The origin of cells has to do with the origin of life, which began the history of life on Earth.
Origin of life
Small molecules needed for life may have been carried to Earth on meteorites, created at deep-sea vents, or synthesized by lightning in a reducing atmosphere. There is little experimental data defining what the first self-replicating forms were. RNA may have been the earliest self-replicating molecule, as it can both store genetic information and catalyze chemical reactions.
Cells emerged around 4 billion years ago. The first cells were most likely heterotrophs. The early cell membranes were probably simpler and more permeable than modern ones, with only a single fatty acid chain per lipid. Lipids spontaneously form bilayered vesicles in water, and could have preceded RNA.
First eukaryotic cells
Eukaryotic cells were created some 2.2 billion years ago in a process called eukaryogenesis. This is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor. This cell had a new level of complexity and capability, with a nucleus and facultatively aerobic mitochondria. It evolved some 2 billion years ago into a population of single-celled organisms that included the last eukaryotic common ancestor, gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. It featured at least one centriole and cilium, sex (meiosis and syngamy), peroxisomes, and a dormant cyst with a cell wall of chitin and/or cellulose. In turn, the last eukaryotic common ancestor gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms. The plants were created around 1.6 billion years ago with a second episode of symbiogenesis that added chloroplasts, derived from cyanobacteria.
History of research
In 1665, Robert Hooke examined a thin slice of cork under his microscope, and saw a structure of small enclosures. He wrote "I could exceeding plainly perceive it to be all perforated and porous, much like a Honey-comb, but that the pores of it were not regular". Later, Matthias Schleiden and Theodor Schwann studied cells of both animals and plants. What they discovered were significant differences between the two types of cells. This put forth the idea that cells were not only fundamental to plants, but to animals as well.
1632–1723: Antonie van Leeuwenhoek taught himself to make lenses, constructed basic optical microscopes and drew protozoa, such as Vorticella from rain water, and bacteria from his own mouth.
1665: Robert Hooke discovered cells in cork, then in living plant tissue using an early compound microscope. He coined the term cell (from Latin cellula, meaning "small room") in his book Micrographia (1665).
1839: Theodor Schwann and Matthias Jakob Schleiden elucidated the principle that plants and animals are made of cells, concluding that cells are a common unit of structure and development, and thus founding the cell theory.
1855: Rudolf Virchow stated that new cells come from pre-existing cells by cell division (omnis cellula ex cellula).
1931: Ernst Ruska built the first transmission electron microscope (TEM) at the University of Berlin. By 1935, he had built an EM with twice the resolution of a light microscope, revealing previously unresolvable organelles.
1981: Lynn Margulis published Symbiosis in Cell Evolution detailing how eukaryotic cells were created by symbiogenesis.
See also
Cell cortex
Cell culture
Cellular model
Cytoneme
Cytorrhysis
Cytotoxicity
Lipid raft
List of distinct cell types in the adult human body
Outline of cell biology
Parakaryon myojinensis
Plasmolysis
Syncytium
Tunneling nanotube
Vault (organelle)
References
Further reading
; The fourth edition is freely available from National Center for Biotechnology Information Bookshelf.
External links
MBInfo – Descriptions on Cellular Functions and Processes
Inside the Cell – a science education booklet by National Institutes of Health, in PDF and ePub.
Cell Biology in "The Biology Project" of University of Arizona.
Centre of the Cell online
The Image & Video Library of The American Society for Cell Biology, a collection of peer-reviewed still images, video clips and digital books that illustrate the structure, function and biology of the cell.
WormWeb.org: Interactive Visualization of the C. elegans Cell lineage – Visualize the entire cell lineage tree of the nematode C. elegans
Cell biology
Cell anatomy
1665 in science | Cell (biology) | [
"Biology"
] | 6,497 | [
"Cell biology"
] |
4,260 | https://en.wikipedia.org/wiki/Dots%20and%20boxes | Dots and boxes is a pencil-and-paper game for two players (sometimes more). It was first published in the 19th century by French mathematician Édouard Lucas, who called it . It has gone by many other names, including dots and dashes, game of dots, dot to dot grid, boxes, and pigs in a pen.
The game starts with an empty grid of dots. Usually two players take turns adding a single horizontal or vertical line between two unjoined adjacent dots. A player who completes the fourth side of a 1×1 box earns one point and takes another turn. A point is typically recorded by placing a mark that identifies the player in the box, such as an initial. The game ends when no more lines can be placed. The winner is the player with the most points. The board may be of any size grid. When short on time, or to learn the game, a 2×2 board (3×3 dots) is suitable. A 5×5 board, on the other hand, is good for experts.
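A short sketch of the completion rule in code, with edges stored as unordered pairs of dots; the board size and coordinate scheme are choices made for the example. A move for which boxes_completed returns a positive count scores that many points and earns another turn.

    # A move claims a box when it adds the fourth side of a 1x1 cell.
    def box_sides(row, col):
        """The four edges of the 1x1 box whose top-left dot is (row, col)."""
        return [
            frozenset({(row, col), (row, col + 1)}),          # top
            frozenset({(row + 1, col), (row + 1, col + 1)}),  # bottom
            frozenset({(row, col), (row + 1, col)}),          # left
            frozenset({(row, col + 1), (row + 1, col + 1)}),  # right
        ]

    def boxes_completed(drawn_edges, new_edge, size=2):
        """Count boxes finished by new_edge on a size x size board (3x3 dots when size=2)."""
        after = drawn_edges | {new_edge}
        count = 0
        for r in range(size):
            for c in range(size):
                sides = box_sides(r, c)
                if new_edge in sides and all(s in after for s in sides):
                    count += 1
        return count

    # Example: with three sides of the top-left box drawn, the fourth completes one box.
    drawn = set(box_sides(0, 0)[:3])
    print(boxes_completed(drawn, box_sides(0, 0)[3]))   # -> 1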
Strategy
For most novice players, the game begins with a phase of more-or-less randomly connecting dots, where the only strategy is to avoid adding the third side to any box. This continues until all the remaining (potential) boxes are joined into chains – groups of one or more adjacent boxes in which any move gives all the boxes in the chain to the opponent. At this point, players typically take all available boxes, then open the smallest available chain to their opponent. For example, a novice player faced with a situation like position 1 in the diagram on the right, in which some boxes can be captured, may take all the boxes in the chain, resulting in position 2. But with their last move, they have to open the next, larger chain, and the novice loses the game.
A more experienced player faced with position 1 will instead play the double-cross strategy, taking all but 2 of the boxes in the chain and leaving position 3. The opponent will take these two boxes and then be forced to open the next chain. By achieving position 3, player A wins. The same double-cross strategy applies no matter how many long chains there are: a player using this strategy will take all but two boxes in each chain and take all the boxes in the last chain. If the chains are long enough, then this player will win.
The next level of strategic complexity, between experts who would both use the double-cross strategy (if they were allowed to), is a battle for control: an expert player tries to force their opponent to open the first long chain, because the player who first opens a long chain usually loses. Against a player who does not understand the concept of a sacrifice, the expert simply has to make the correct number of sacrifices to encourage the opponent to hand them the first chain long enough to ensure a win. If the other player also sacrifices, the expert has to additionally manipulate the number of available sacrifices through earlier play.
In combinatorial game theory, Dots and Boxes is an impartial game and many positions can be analyzed using Sprague–Grundy theory. However, Dots and Boxes lacks the normal play convention of most impartial games (where the last player to move wins), which complicates the analysis considerably.
Unusual grids and variants
Dots and Boxes need not be played on a rectangular grid; it can be played on a triangular grid or a hexagonal grid.
Dots and boxes has a dual graph form called "Strings-and-Coins". This game is played on a network of coins (vertices) joined by strings (edges). Players take turns cutting a string. When a cut leaves a coin with no strings, the player "pockets" the coin and takes another turn. The winner is the player who pockets the most coins. Strings-and-Coins can be played on an arbitrary graph.
In analyses of Dots and Boxes, a game that starts with outer lines already drawn is called a Swedish board while the standard version that starts fully blank is called an American board. An intermediate version with only the left and bottom sides starting with drawn lines is called an Icelandic board.
A related game is Dots, played by adding coloured dots to a blank grid, and joining them with straight or diagonal lines in an attempt to surround an opponent's dots.
References
External links
Abstract strategy games
Mathematical games
Paper-and-pencil games
1889 introductions | Dots and boxes | [
"Mathematics"
] | 901 | [
"Recreational mathematics",
"Mathematical games"
] |
4,279 | https://en.wikipedia.org/wiki/Broadcast%20domain | A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments.
In terms of current popular technologies, any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain. Further, any computer connected to the same set of interconnected switches or repeaters is a member of the same broadcast domain. Routers and other network-layer devices form boundaries between broadcast domains.
The notion of a broadcast domain can be compared with a collision domain, which would be all nodes on the same set of inter-connected repeaters and divided by switches and network bridges. Collision domains are generally smaller than and contained within broadcast domains. While some data-link-layer devices are able to divide the collision domains, broadcast domains are only divided by network-layer devices such as routers or layer-3 switches. Separating VLANs divides broadcast domains as well.
Further explanation
The distinction between broadcast and collision domains comes about because simple Ethernet and similar systems use a shared medium for communication. In simple Ethernet (without switches or bridges), data frames are transmitted to all other nodes on a network. Each receiving node checks the destination address of each frame and simply ignores any frame not addressed to its own MAC address or the broadcast address.
Switches act as buffers, receiving and analyzing the frames from each connected network segment. Frames destined for nodes connected to the originating segment are not forwarded by the switch. Frames destined for a specific node on a different segment are sent only to that segment. Only broadcast frames are forwarded to all other segments. This reduces unnecessary traffic and collisions.
In such a switched network, transmitted frames may not be received by all other reachable nodes. Nominally, only broadcast frames will be received by all other nodes. Collisions are localized to the physical-layer network segment they occur on. Thus, the broadcast domain is the entire inter-connected layer-2 network, and the segments connected to each switch or bridge port are each a collision domain. To clarify: repeaters do not divide collision domains, but switches do. This means that since switches have become commonplace, collision domains are isolated to the specific segment between the switch port and the connected node. Full-duplex segments, or links, do not form a collision domain, as there is a dedicated channel between each transmitter and receiver, eliminating the possibility of collisions.
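The contrast between flooded broadcasts and filtered unicasts can be sketched as a simplified forwarding decision; the MAC addresses and port numbers below are invented, and real switches add aging, VLAN tagging and other details.

    # Simplified layer-2 forwarding: learn source addresses, flood broadcasts,
    # and send known unicasts only toward the owning segment.
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def forward(frame, ingress_port, mac_table, all_ports):
        """Return the set of ports the frame is sent out of."""
        mac_table[frame["src"]] = ingress_port   # learn where the source lives
        if frame["dst"] == BROADCAST or frame["dst"] not in mac_table:
            # Broadcasts (and unknown unicasts) reach every other port,
            # i.e. the whole broadcast domain.
            return set(all_ports) - {ingress_port}
        return {mac_table[frame["dst"]]}         # known unicast: one port only

    table = {}
    ports = [1, 2, 3, 4]
    print(forward({"src": "aa:aa:aa:aa:aa:01", "dst": BROADCAST}, 1, table, ports))            # {2, 3, 4}
    print(forward({"src": "aa:aa:aa:aa:aa:02", "dst": "aa:aa:aa:aa:aa:01"}, 2, table, ports))  # {1}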
Broadcast domain control
With a sufficiently sophisticated switch, it is possible to create a network in which the normal notion of a broadcast domain is strictly controlled. One implementation of this concept is termed a "private VLAN". Another implementation is possible with Linux and iptables. One analogy is that by creating multiple VLANs, the number of broadcast domains increases, but the size of each broadcast domain decreases. This is because a VLAN (or virtual LAN) is technically a broadcast domain.
This is achieved by designating one or more "server" or "provider" nodes, either by MAC address or switch port. Broadcast frames are allowed to originate from these sources and are sent to all other nodes. Broadcast frames from all other sources are directed only to the server/provider nodes. Traffic from other sources not destined to the server/provider nodes ("peer-to-peer" traffic) is blocked.
The result is a network based on a nominally shared transmission system; like Ethernet, but in which "client" nodes cannot communicate with each other, only with the server/provider. A common application is Internet providers. Allowing direct data link layer communication between customer nodes exposes the network to various security attacks, such as ARP spoofing. Controlling the broadcast domain in this fashion provides many of the advantages of a point-to-point network, using commodity broadcast-based hardware.
See also
Network layer
Collision domain
References
Collision and broadcast domain, Study CCNA
Collision Domains vs. Broadcast Domains , Cisco Skills
Collision Domain and Broadcast Domain on YouTube
Network architecture | Broadcast domain | [
"Engineering"
] | 818 | [
"Network architecture",
"Computer networks engineering"
] |
4,286 | https://en.wikipedia.org/wiki/Biconditional%20introduction | In propositional logic, biconditional introduction is a valid rule of inference. It allows for one to infer a biconditional from two conditional statements. The rule makes it possible to introduce a biconditional statement into a logical proof. If is true, and if is true, then one may infer that is true. For example, from the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm breathing if and only if I'm alive". Biconditional introduction is the converse of biconditional elimination. The rule can be stated formally as:
where the rule is that wherever instances of "" and "" appear on lines of a proof, "" can validly be placed on a subsequent line.
Formal notation
The biconditional introduction rule may be written in sequent notation:
(P → Q), (Q → P) ⊢ (P ↔ Q)
where ⊢ is a metalogical symbol meaning that P ↔ Q is a syntactic consequence when P → Q and Q → P are both in a proof;
or as the statement of a truth-functional tautology or theorem of propositional logic:
((P → Q) ∧ (Q → P)) → (P ↔ Q)
where P and Q are propositions expressed in some formal system.
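In a proof assistant the rule corresponds to a single constructor; a minimal Lean 4 sketch follows (Iff.intro is the core-library way to build a biconditional from the two directions):

    -- Biconditional introduction: from P → Q and Q → P, conclude P ↔ Q.
    example (P Q : Prop) (hpq : P → Q) (hqp : Q → P) : P ↔ Q :=
      Iff.intro hpq hqp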
References
Rules of inference
Theorems in propositional logic | Biconditional introduction | [
"Mathematics"
] | 256 | [
"Rules of inference",
"Theorems in propositional logic",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
4,287 | https://en.wikipedia.org/wiki/Biconditional%20elimination | Biconditional elimination is the name of two valid rules of inference of propositional logic. It allows for one to infer a conditional from a biconditional. If is true, then one may infer that is true, and also that is true. For example, if it's true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, I'm alive; likewise, it's true that if I'm alive, I'm breathing. The rules can be stated formally as:
and
where the rule is that wherever an instance of "" appears on a line of a proof, either "" or "" can be placed on a subsequent line.
Formal notation
The biconditional elimination rule may be written in sequent notation:
(P ↔ Q) ⊢ (P → Q)
and
(P ↔ Q) ⊢ (Q → P)
where ⊢ is a metalogical symbol meaning that P → Q, in the first case, and Q → P in the other, are syntactic consequences of P ↔ Q in some logical system;
or as the statement of a truth-functional tautology or theorem of propositional logic:
(P ↔ Q) → (P → Q)
(P ↔ Q) → (Q → P)
where P and Q are propositions expressed in some formal system.
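A minimal Lean 4 sketch of the two elimination directions (Iff.mp and Iff.mpr are the core-library projections of a biconditional):

    -- Biconditional elimination: from P ↔ Q, conclude either conditional.
    example (P Q : Prop) (h : P ↔ Q) : P → Q := Iff.mp h
    example (P Q : Prop) (h : P ↔ Q) : Q → P := Iff.mpr h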
See also
Logical biconditional
References
Rules of inference
Theorems in propositional logic | Biconditional elimination | [
"Mathematics"
] | 245 | [
"Theorems in propositional logic",
"Rules of inference",
"Theorems in the foundations of mathematics",
"Proof theory"
] |
4,292 | https://en.wikipedia.org/wiki/Base%20pair | A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this based-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes.
Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code.
The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (with the exception of non-coding single-stranded regions of telomeres). The haploid human genome (23 chromosomes) is estimated to be about 3.2 billion base pairs long and to contain 20,000–25,000 distinct protein-coding genes. A kilobase (kb) is a unit of measurement in molecular biology equal to 1000 base pairs of DNA or RNA. The total number of DNA base pairs on Earth is estimated at 5.0 × 10³⁷, with a weight of 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
Hydrogen bonding and stability
[Figure: top, a G·C base pair with three hydrogen bonds; bottom, an A·T base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the bases are shown as dashed lines; the wiggly lines stand for the connection to the pentose sugar and point in the direction of the minor groove.]
Hydrogen bonding is the chemical interaction that underlies the base-pairing rules described above. Appropriate geometrical correspondence of hydrogen bond donors and acceptors allows only the "right" pairs to form stably. DNA with high GC-content is more stable than DNA with low GC-content. Crucially, however, stacking interactions are primarily responsible for stabilising the double-helical structure; Watson-Crick base pairing's contribution to global structural stability is minimal, but its role in the specificity underlying complementarity is, by contrast, of maximal importance as this underlies the template-dependent processes of the central dogma (e.g. DNA replication).
The bigger nucleobases, adenine and guanine, are members of a class of double-ringed chemical structures called purines; the smaller nucleobases, cytosine and thymine (and uracil), are members of a class of single-ringed chemical structures called pyrimidines. Purines are complementary only with pyrimidines: pyrimidine–pyrimidine pairings are energetically unfavorable because the molecules are too far apart for hydrogen bonding to be established; purine–purine pairings are energetically unfavorable because the molecules are too close, leading to overlap repulsion. Purine–pyrimidine base-pairing of AT or GC or UA (in RNA) results in proper duplex structure. The only other purine–pyrimidine pairings would be AC and GT and UG (in RNA); these pairings are mismatches because the patterns of hydrogen donors and acceptors do not correspond. The GU pairing, with two hydrogen bonds, does occur fairly often in RNA (see wobble base pair).
Paired DNA and RNA molecules are comparatively stable at room temperature, but the two nucleotide strands will separate above a melting point that is determined by the length of the molecules, the extent of mispairing (if any), and the GC content. Higher GC content results in higher melting temperatures; it is, therefore, unsurprising that the genomes of extremophile organisms such as Thermus thermophilus are particularly GC-rich. Conversely, regions of a genome that need to separate frequently — for example, the promoter regions for often-transcribed genes — are comparatively GC-poor (for example, see TATA box). GC content and melting temperature must also be taken into account when designing primers for PCR reactions.
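As a rough illustration of how GC content feeds into primer design, the sketch below computes the GC fraction and the Wallace rule-of-thumb melting temperature (about 2 °C per A/T and 4 °C per G/C), which is only a crude estimate for short oligonucleotides; the primer sequence is made up.

    def gc_content(seq):
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    def wallace_tm(seq):
        """Rule-of-thumb Tm in degrees Celsius for a short oligonucleotide."""
        seq = seq.upper()
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    primer = "ATGCGCGCTA"
    print(f"GC content: {gc_content(primer):.0%}")    # 60%
    print(f"Estimated Tm: {wallace_tm(primer)} °C")   # 32 °C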
Examples
The following DNA sequences illustrate paired double-stranded patterns. By convention, the top strand is written from the 5′-end to the 3′-end; thus, the bottom strand is written 3′ to 5′.
A base-paired DNA sequence:
The corresponding RNA sequence, in which uracil is substituted for thymine in the RNA strand:
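The specific example sequences are not reproduced here. As an illustration of the conventions just described (a toy sketch with a made-up sequence, not taken from the article), the following Python snippet derives the complementary strand, written 3′ to 5′ beneath a top strand written 5′ to 3′, and the corresponding RNA sequence with uracil substituted for thymine:

```python
# Toy illustration only (the input sequence is made up): complementary strand
# and RNA equivalent of a DNA top strand written 5' -> 3'.

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_3to5(top_5to3: str) -> str:
    """Base-paired bottom strand, written 3' -> 5', aligned under the top strand."""
    return "".join(DNA_COMPLEMENT[b] for b in top_5to3.upper())

def to_rna(dna_5to3: str) -> str:
    """Same sequence with uracil substituted for thymine."""
    return dna_5to3.upper().replace("T", "U")

top = "ATGCTTAGC"                                  # hypothetical top strand
print("5'-" + top + "-3'")                         # 5'-ATGCTTAGC-3'
print("3'-" + complement_3to5(top) + "-5'")        # 3'-TACGAATCG-5'
print("5'-" + to_rna(top) + "-3'")                 # 5'-AUGCUUAGC-3'
```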
Base analogs and intercalators
Chemical analogs of nucleotides can take the place of proper nucleotides and establish non-canonical base-pairing, leading to errors (mostly point mutations) in DNA replication and DNA transcription. This is due to their isosteric chemistry. One common mutagenic base analog is 5-bromouracil, which resembles thymine but can base-pair to guanine in its enol form.
Other chemicals, known as DNA intercalators, fit into the gap between adjacent bases on a single strand and induce frameshift mutations by "masquerading" as a base, causing the DNA replication machinery to skip or insert additional nucleotides at the intercalated site. Most intercalators are large polyaromatic compounds and are known or suspected carcinogens. Examples include ethidium bromide and acridine.
Mismatch repair
Mismatched base pairs can be generated by errors of DNA replication and as intermediates during homologous recombination. The process of mismatch repair ordinarily must recognize and correctly repair a small number of base mispairs within a long sequence of normal DNA base pairs. To repair mismatches formed during DNA replication, several distinctive repair processes have evolved to distinguish between the template strand and the newly formed strand so that only the newly inserted incorrect nucleotide is removed (in order to avoid generating a mutation). The proteins employed in mismatch repair during DNA replication, and the clinical significance of defects in this process are described in the article DNA mismatch repair. The process of mispair correction during recombination is described in the article gene conversion.
Length measurements
The following abbreviations are commonly used to describe the length of a DNA or RNA molecule:
bp = base pair—one bp corresponds to approximately 3.4 Å (340 pm) of length along the strand, and to roughly 618 or 643 daltons for DNA and RNA respectively.
kb (= kbp) = kilo–base-pair = 1,000 bp
Mb (= Mbp) = mega–base-pair = 1,000,000 bp
Gb (= Gbp) = giga–base-pair = 1,000,000,000 bp
For single-stranded DNA/RNA, units of nucleotides are used—abbreviated nt (or knt, Mnt, Gnt)—as they are not paired.
To distinguish between units of computer storage and bases, kbp, Mbp, Gbp, etc. may be used for base pairs.
The centimorgan is also often used to indicate distance along a chromosome, but the number of base pairs it corresponds to varies widely. In the human genome, the centimorgan is about 1 million base pairs.
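Using the approximate per-base-pair figures quoted above (about 3.4 Å of helix length and roughly 618 daltons per DNA base pair) together with the ~3.2 Gbp haploid human genome, a back-of-the-envelope sketch (illustrative only; all constants are rough) recovers the familiar order-of-magnitude estimates of about one metre of DNA and a few picograms of mass per haploid genome:

```python
# Back-of-the-envelope sketch using the approximate figures quoted above;
# all constants are rough order-of-magnitude values, not precise standards.

BP_LENGTH_NM = 0.34          # ~3.4 angstroms of helix length per base pair
BP_MASS_DA = 618             # approximate average mass of one DNA base pair
DALTON_IN_GRAMS = 1.66e-24   # approximate mass of one dalton in grams

haploid_genome_bp = 3.2e9    # ~3.2 Gbp haploid human genome (see above)

length_m = haploid_genome_bp * BP_LENGTH_NM * 1e-9   # nanometres -> metres
mass_da = haploid_genome_bp * BP_MASS_DA

print(f"{haploid_genome_bp / 1e9:.1f} Gbp")                              # 3.2 Gbp
print(f"stretched length ~ {length_m:.2f} m")                            # ~1.09 m
print(f"mass ~ {mass_da:.2e} Da (~{mass_da * DALTON_IN_GRAMS:.1e} g)")   # ~3.3e-12 g
```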
Unnatural base pair (UBP)
An unnatural base pair (UBP) is a designed subunit (or nucleobase) of DNA which is created in a laboratory and does not occur in nature. DNA sequences have been described which use newly created nucleobases to form a third base pair, in addition to the two base pairs found in nature, A-T (adenine – thymine) and G-C (guanine – cytosine). A few research groups have been searching for a third base pair for DNA, including teams led by Steven A. Benner, Philippe Marliere, Floyd E. Romesberg and Ichiro Hirao. Some new base pairs based on alternative hydrogen bonding, hydrophobic interactions and metal coordination have been reported.
In 1989 Steven Benner (then working at the Swiss Federal Institute of Technology in Zurich) and his team led the way, incorporating modified forms of cytosine and guanine into DNA molecules in vitro. The nucleotides, which encoded RNA and proteins, were successfully replicated in vitro. Since then, Benner's team has been trying to engineer cells that can make foreign bases from scratch, obviating the need for a feedstock.
In 2002, Ichiro Hirao's group in Japan developed an unnatural base pair between 2-amino-8-(2-thienyl)purine (s) and pyridine-2-one (y) that functions in transcription and translation, for the site-specific incorporation of non-standard amino acids into proteins. In 2006, they created 7-(2-thienyl)imidazo[4,5-b]pyridine (Ds) and pyrrole-2-carbaldehyde (Pa) as a third base pair for replication and transcription. Afterward, Ds paired with 4-[3-(6-aminohexanamido)-1-propynyl]-2-nitropyrrole (Px) was discovered as a high-fidelity pair in PCR amplification. In 2013, they applied the Ds–Px pair to DNA aptamer generation by in vitro selection (SELEX) and demonstrated that genetic alphabet expansion significantly augments DNA aptamer affinities to target proteins.
In 2012, a group of American scientists led by Floyd Romesberg, a chemical biologist at the Scripps Research Institute in San Diego, California, reported that his team had designed an unnatural base pair (UBP). The two new artificial nucleotides were named d5SICS and dNaM. More technically, these artificial nucleotides, which bear hydrophobic nucleobases, feature two fused aromatic rings that form a (d5SICS–dNaM) complex or base pair in DNA. His team designed a variety of in vitro or "test tube" templates containing the unnatural base pair and confirmed that it was efficiently replicated with high fidelity in virtually all sequence contexts using standard in vitro techniques, namely PCR amplification of DNA and PCR-based applications. Their results show that for PCR and PCR-based applications, the d5SICS–dNaM unnatural base pair is functionally equivalent to a natural base pair, and when combined with the other two natural base pairs used by all organisms, A–T and G–C, they provide a fully functional and expanded six-letter "genetic alphabet".
In 2014 the same team from the Scripps Research Institute reported that they had synthesized a stretch of circular DNA known as a plasmid, containing natural T-A and C-G base pairs along with the best-performing UBP Romesberg's laboratory had designed, and inserted it into cells of the common bacterium E. coli, which successfully replicated the unnatural base pairs through multiple generations. The transfection did not hamper the growth of the E. coli cells, which showed no sign of losing their unnatural base pairs to natural DNA repair mechanisms. This is the first known example of a living organism passing along an expanded genetic code to subsequent generations. Romesberg said he and his colleagues created 300 variants to refine the design of nucleotides that would be stable enough and would be replicated as easily as the natural ones when the cells divide. This was in part achieved by the addition of a supportive algal gene that expresses a nucleotide triphosphate transporter which efficiently imports the triphosphates of both d5SICSTP and dNaMTP into E. coli bacteria. The natural bacterial replication pathways then use them to accurately replicate a plasmid containing d5SICS–dNaM. Other researchers were surprised that the bacteria replicated these human-made DNA subunits.
The successful incorporation of a third base pair is a significant breakthrough toward the goal of greatly expanding the number of amino acids which can be encoded by DNA, from the existing 20 amino acids to a theoretically possible 172, thereby expanding the potential for living organisms to produce novel proteins. The artificial strings of DNA do not encode for anything yet, but scientists speculate they could be designed to manufacture new proteins which could have industrial or pharmaceutical uses. Experts said the synthetic DNA incorporating the unnatural base pair raises the possibility of life forms based on a different DNA code.
Non-canonical base pairing
In addition to the canonical pairing, some conditions can also favour base pairing with alternative base orientations and different numbers and geometries of hydrogen bonds. These pairings are accompanied by alterations to the local backbone shape.
The most common of these is the wobble base pairing that occurs between tRNAs and mRNAs at the third base position of many codons during translation, and during the charging of tRNAs by some tRNA synthetases. Wobble pairs have also been observed in the secondary structures of some RNA sequences.
Additionally, Hoogsteen base pairing (typically written as A•U/T and G•C) can exist in some DNA sequences (e.g. CA and TA dinucleotides) in dynamic equilibrium with standard Watson–Crick pairing. They have also been observed in some protein–DNA complexes.
In addition to these alternative base pairings, a wide range of base-base hydrogen bonding is observed in RNA secondary and tertiary structure. These bonds are often necessary for the precise, complex shape of an RNA, as well as its binding to interaction partners.
See also
List of Y-DNA single-nucleotide polymorphisms
Non-canonical base pairing
Chargaff's rules
References
Further reading
External links
DAN—webserver version of the EMBOSS tool for calculating melting temperatures
Nucleobases
Molecular genetics
Nucleic acids | Base pair | [
"Chemistry",
"Biology"
] | 3,207 | [
"Biomolecules by chemical classification",
"Nucleic acids",
"Molecular genetics",
"Molecular biology"
] |
4,318 | https://en.wikipedia.org/wiki/Big%20Dig | The Big Dig was a megaproject in Boston that rerouted the then-elevated Central Artery of Interstate 93, which cut across downtown Boston, into the O'Neill Tunnel and built the Ted Williams Tunnel to extend Interstate 90 to Logan International Airport. Those two projects were the origin of the official name, the Central Artery/Tunnel Project (CA/T Project). Additionally, the project constructed the Zakim Bunker Hill Bridge over the Charles River, created the Rose Kennedy Greenway in the space vacated by the previous I-93 elevated roadway, and funded more than a dozen projects to improve the region's public transportation system. Planning for the project began in 1982; the construction work was carried out between 1991 and 2006; and the project concluded on December 31, 2007. Bechtel served as the project's general contractor and Parsons Brinckerhoff as its engineer; the two worked as a consortium, overseen by the Massachusetts Highway Department.
The Big Dig was the most expensive highway project in the United States, and was plagued by cost overruns, delays, leaks, design flaws, accusations of poor execution and use of substandard materials, criminal charges and arrests, and the death of one motorist. The project was originally scheduled to be completed in 1998 at an estimated cost of $2.8 billion (US$7.4 billion adjusted for inflation). However, the project was completed in December 2007 at a cost of over $8.08 billion (in 1982 dollars, $21.5 billion adjusted for inflation), a cost overrun of about 190%. As a result of a death, leaks, and other design flaws, the Parsons Brinckerhoff and Bechtel consortium agreed to pay $407 million in restitution and several smaller companies agreed to pay a combined sum of approximately $51 million.
Origin
This project was developed in response to traffic congestion on Boston's historically tangled streets which were laid out centuries before the advent of the automobile. As early as 1930, the city's Planning Board recommended a raised express highway running north–south through the downtown district in order to draw through traffic off the city streets. Commissioner of Public Works William Callahan promoted plans for the Central Artery, an elevated expressway which eventually was constructed between the downtown area and the waterfront. Governor John Volpe interceded in the 1950s to change the design of the last section of the Central Artery, putting it underground through the Dewey Square Tunnel. While traffic moved somewhat better, the other problems remained. There was chronic congestion on the Central Artery (I-93), the elevated six-lane highway through the center of downtown Boston, which was, in the words of Pete Sigmund, "like a funnel full of slowly-moving, or stopped, cars (and swearing motorists)." In 1959, the 1.5-mile-long (2.4 km) road section carried approximately 75,000 vehicles a day, but by the 1990s, this had grown to 190,000 vehicles a day. Traffic jams of 16 hours were predicted for 2010.
The expressway had tight turns, an excessive number of entrances and exits, entrance ramps without merge lanes, and as the decades passed and other planned expressways were cancelled, continually escalating vehicular traffic that was well beyond its design capacity. Local businesses again wanted relief, city leaders sought a reuniting of the waterfront with the city, and nearby residents desired removal of the matte green-painted elevated road which mayor Thomas Menino called Boston's "other Green Monster" (as an unfavorable comparison to Fenway Park's famed left-field wall). MIT engineers Bill Reynolds and (eventual state Secretary of Transportation) Frederick P. Salvucci envisioned moving the whole expressway underground.
Cancellation of the Inner Belt project
Another important motivation for the final form of the Big Dig was the abandonment of the Massachusetts Department of Public Works' intended expressway system through and around Boston. The Central Artery, as part of Mass. DPW's Master Plan of 1948, was originally planned to be the downtown Boston stretch of Interstate 95, and was signed as such; a bypass road called the Inner Belt, was subsequently renamed Interstate 695. (The law establishing the Interstate highway system was enacted in 1956.) The Inner Belt District was to pass to the west of the downtown core, through the neighborhood of Roxbury and the cities of Brookline, Cambridge, and Somerville. Earlier controversies over impact of the Boston extension of the Massachusetts Turnpike, particularly on the heavily populated neighborhood of Brighton, and the additional large amount of housing that would have had to be destroyed led to massive community opposition to both the Inner Belt and the Boston section of I-95.
By 1970, building demolition and land clearances had been completed along the I-95 right of way through the neighborhoods of Roxbury, Jamaica Plain, the South End and Roslindale, which led to secession threats by Hyde Park, Boston's youngest and southernmost neighborhood (which I-95 was also slated to go through). By 1972, with relatively little work done on the Southwest Corridor portion of I-95 and none on the potentially massively disruptive Inner Belt, Governor Francis Sargent put a moratorium on highway construction within the Route 128 corridor, except for the final short stretch of Interstate 93. In 1974, the remainder of the Master Plan was canceled.
With ever-increasing traffic volumes funneled onto I-93 alone, the Central Artery became chronically gridlocked. The Sargent moratorium led to the rerouting of I-95 away from Boston around the Route 128 beltway and the conversion of the cleared land in the southern part of the city into the Southwest Corridor linear park, as well as a new right-of-way for the Orange Line subway and Amtrak. Parts of the planned I-695 right-of-way remain unused and under consideration for future mass-transit projects.
The original 1948 Master Plan included a Third Harbor Tunnel plan that was hugely controversial in its own right, because it would have disrupted the Maverick Square area of East Boston. It was never built.
Mixing of traffic
A major reason for the all-day congestion was that the Central Artery carried not only north–south traffic, but it also carried east–west traffic. Boston's Logan Airport lies across Boston Harbor in East Boston; and before the Big Dig, the only access to the airport from downtown was through the paired Callahan and Sumner tunnels. Traffic on the major highways from west of Boston—the Massachusetts Turnpike and Storrow Drive—mostly traveled on portions of the Central Artery to reach these tunnels. Getting between the Central Artery and the tunnels involved short diversions onto city streets, increasing local congestion.
Mass transit
A number of public transportation projects were included as part of an environmental mitigation for the Big Dig. The most expensive was the building of the Phase II Silver Line tunnel under Fort Point Channel, done in coordination with Big Dig construction. Silver Line buses now use this tunnel and the Ted Williams Tunnel to link South Station and Logan Airport.
The MBTA Green Line extension beyond Lechmere to Medford/Tufts station opened on December 12, 2022. Promised projects to connect the Red and Blue subway lines, and to restore the Green Line streetcar service to the Arborway in Jamaica Plain, have not been completed. The Red and Blue subway line connection underwent initial design, but no funding has been designated for the project. The Arborway Line restoration has been abandoned, following a final court decision in 2011.
The original Big Dig plan also included the North-South Rail Link, which would have connected North and South Stations (the major passenger train stations in Boston), but this aspect of the project was ultimately dropped by the state transportation administration early in the Dukakis administration. Negotiations with the federal government had led to an agreement to widen some of the lanes in the new harbor tunnel, and accommodating these would require the tunnel to be deeper and mechanically vented; this left no room for the rail lines, and having diesel trains (then in use) passing through the tunnel would have substantially increased the cost of the ventilation system.
Early planning
The project was conceived in the 1970s by the Boston Transportation Planning Review to replace the rusting elevated six-lane Central Artery. The expressway separated downtown from the waterfront, and was increasingly choked with bumper-to-bumper traffic. Business leaders were more concerned about access to Logan Airport, and pushed instead for a third harbor tunnel.
Planning for the Big Dig as a project officially began in 1982, with environmental impact studies starting in 1983. After years of extensive lobbying for federal dollars, a 1987 public works bill appropriating funding for the Big Dig was passed by the US Congress, but it was vetoed by President Ronald Reagan for being too expensive. When Congress overrode the veto, the project had its green light and ground was first broken in 1991.
In 1997, the state legislature created the Metropolitan Highway System and transferred responsibility for the Central Artery and Tunnel "CA/T" Project from the Massachusetts Highway Department and the Massachusetts Governor's Office to the Massachusetts Turnpike Authority (MTA).
The MTA, which had little experience in managing an undertaking of the scope and magnitude of the CA/T Project, hired a joint venture to provide preliminary designs, manage design consultants and construction contractors, track the project's cost and schedule, advise MTA on project decisions, and (in some instances) act as the MTA's representative. Eventually, MTA combined some of its employees with joint venture employees in an integrated project organization. This was intended to make management more efficient, but it hindered MTA's ability to independently oversee project activities because MTA and the joint venture had effectively become partners in the project.
Obstacles
In addition to political and financial difficulties, the project received resistance from residents of Boston's historic North End, who in the 1950s had seen 20% of the neighborhood's businesses displaced by development of the Central Artery. In 1993, the North End Waterfront Central Artery Committee (NEWCAC) was created, co-founded by Nancy Caruso, to represent residents, businesses, and institutions in the North End and Waterfront neighborhoods of Boston. The committee's goals included lessening the impact of the Central Artery/Tunnel Project on the community, representing the neighborhoods to government agencies, keeping the community informed, developing a list of priorities of immediate neighborhood concerns, and promoting responsible and appropriate development of the post-construction artery corridor in the North End and Waterfront neighborhoods.
The political, financial and residential obstacles were magnified when several environmental and engineering obstacles occurred.
The downtown area through which the tunnels were to be dug was largely land fill, and included existing Red Line and Blue Line subway tunnels as well as innumerable pipes and utility lines that would have to be replaced or moved. Tunnel workers encountered many unexpected geological and archaeological barriers, ranging from glacial debris to foundations of buried houses and a number of sunken ships lying within the reclaimed land.
The project received approval from state environmental agencies in 1991, after satisfying concerns including release of toxins by the excavation and the possibility of disrupting the homes of millions of rats, causing them to roam the streets of Boston in search of new housing. By the time the federal environmental clearances were delivered in 1994, the process had taken some seven years, during which time inflation greatly increased the project's original cost estimates.
Reworking such a busy corridor without seriously restricting traffic flow required a number of state-of-the-art construction techniques. Because the old elevated highway (which remained in operation throughout the construction process) rested on pylons located throughout the designated dig area, engineers first utilized slurry wall techniques to create concrete walls upon which the highway could rest. These concrete walls also stabilized the sides of the site, preventing cave-ins during the continued excavation process.
The multi-lane Interstate highway also had to pass under South Station's seven railroad tracks, which carried over 40,000 commuters and 400 trains per day. To avoid multiple relocations of train lines while the tunneling advanced, as had been initially planned, a specially designed jack was constructed to support the ground and tracks to allow the excavation to take place below. Construction crews also used ground freezing (an artificial induction of permafrost) to help stabilize surrounding ground as they excavated the tunnel. This was the largest tunneling project undertaken beneath railroad lines anywhere in the world. The ground freezing enabled safer, more efficient excavation, and also assisted in environmental issues, as less contaminated fill needed to be exported than if a traditional cut-and-cover method had been applied.
Other challenges included existing subway tunnels crossing the path of the underground highway. To build slurry walls past these tunnels, it was necessary to dig beneath the tunnels and to build an underground concrete bridge to support the tunnels' weight, without interrupting rail service.
Construction phase
The project was managed by the Massachusetts Turnpike Authority, with the Big Dig and the Turnpike's Boston Extension from the 1960s being financially and legally joined by the legislature as the Metropolitan Highway System. Design and construction was supervised by a joint venture of Bechtel Corporation and Parsons Brinckerhoff. Because of the enormous size of the project—too large for any company to undertake alone—the design and construction of the Big Dig was broken up into dozens of smaller subprojects with well-defined interfaces between contractors. Major heavy-construction contractors on the project included Jay Cashman, Modern Continental, Obayashi Corporation, Perini Corporation, Peter Kiewit Sons' Incorporated, J. F. White, and the Slattery division of Skanska USA. (Of those, Modern Continental was awarded the greatest gross value of contracts, joint ventures included.)
The nature of the Charles River crossing had been a source of major controversy throughout the design phase of the project. Many environmental advocates preferred a river crossing entirely in tunnels, but this, along with 27 other plans, was rejected as too costly. Finally, with a deadline looming to begin construction on a separate project that would connect the Tobin Bridge to the Charles River crossing, Salvucci overrode the objections and chose a variant of the plan known as "Scheme Z". This plan was considered to be reasonably cost-effective, but had the drawback of requiring multiple levels of highway ramps stacked immediately adjacent to the Charles River.
The city of Cambridge objected to the visual impact of the chosen Charles River crossing design. The city sued to revoke the project's environmental certificate and forced the project planners to redesign the river crossing again.
Swiss engineer Christian Menn took over the design of the bridge. He suggested a cradle cable-stayed bridge that would carry ten lanes of traffic. The plan was accepted and construction began on the Leonard P. Zakim Bunker Hill Memorial Bridge. The bridge employed an asymmetrical design and a hybrid of steel and concrete was used to construct it. The distinctive bridge is supported by two forked towers connected to the span by cables and girders. It was the first bridge in the country to employ this method and it was, at the time, the widest cable-stayed bridge in the world, having since been surpassed by the Eastern span replacement of the San Francisco–Oakland Bay Bridge.
Meanwhile, construction continued on the Tobin Bridge approach. By the time all parties agreed on the I-93 design, construction of the Tobin connector (today known as the "City Square Tunnel" for a Charlestown area it bypasses) was far along, significantly adding to the cost of constructing the US Route 1 interchange and retrofitting the tunnel.
Boston blue clay and other soils extracted from the path of the tunnel were used to cap many local landfills, fill in the Granite Rail Quarry in Quincy, and restore the surface of Spectacle Island in the Boston Harbor Islands National Recreation Area.
The Storrow Drive Connector, a companion bridge to the Zakim, began carrying traffic from I-93 to Storrow Drive in 1999. The project had been under consideration for years, but was opposed by the wealthy residents of the Beacon Hill neighborhood. However, it finally was accepted because it would funnel traffic bound for Storrow Drive and downtown Boston away from the mainline roadway. The Connector ultimately used a pair of ramps that had been constructed for Interstate 695, enabling the mainline I-93 to carry more traffic that would have used I-695 under the original Master Plan.
When construction began, the project cost, including the Charles River crossing, was estimated at $5.8 billion. Eventual cost overruns were so high that the chairman of the Massachusetts Turnpike Authority, James Kerasiotes, was fired in 2000. His replacement had to commit to an $8.55 billion cap on federal contributions. The total expenses eventually passed $15 billion. Interest brought this cost to $21.93 billion.
Engineering methods and details
Several unusual engineering challenges arose during the project, requiring novel solutions and methods to address them. At the beginning of the project, engineers had to figure out the safest way to build the tunnel without endangering the existing elevated highway above. Eventually, they created horizontal braces as wide as the tunnel, then cut away the elevated highway's struts, and lowered it onto the new braces. Three alternative construction methods, with their corresponding structural designs, were studied to address existing conditions, safety measures, and constructability. In addition to codified loads, construction loads were computed to support final design and field execution.
Final phases
On January 18, 2003, the opening ceremony was held for the I-90 Connector Tunnel, extending the Massachusetts Turnpike (Interstate 90) east into the Ted Williams Tunnel, and onwards to Boston Logan International Airport. The Ted Williams Tunnel had been completed and had been in limited use by commercial traffic and high-occupancy vehicles since late 1995. The westbound lanes opened on the afternoon of January 18 and the eastbound lanes on January 19.
The next phase, moving the elevated Interstate 93 underground, was completed in two stages: northbound lanes opened on March 29, 2003, and southbound lanes (in a temporary configuration) on December 20, 2003. A tunnel underneath Leverett Circle connecting eastbound Storrow Drive to I-93 North and the Tobin Bridge opened December 19, 2004, easing congestion at the circle. All southbound lanes of I-93 opened to traffic on March 5, 2005, including the left lane of the Zakim Bridge, and all of the refurbished Dewey Square Tunnel.
By the end of December 2004, 95% of the Big Dig was completed. Major construction remained on the surface, including construction of final ramp configurations in the North End and in the South Bay interchange, and reconstruction of the surface streets.
The final ramp downtown—exit 16A (formerly 20B) from I-93 south to Albany Street—opened January 13, 2006.
In 2006, the two Interstate 93 tunnels were dedicated as the Thomas P. O'Neill Jr. Tunnel, after the former Democratic speaker of the House of Representatives from Massachusetts who pushed to have the Big Dig funded by the federal government.
Coordinated projects
The Commonwealth of Massachusetts was required under the Federal Clean Air Act to mitigate air pollution generated by the highway improvements. Secretary of Transportation Fred Salvucci signed an agreement with the Conservation Law Foundation in 1990 enumerating 14 specific projects the state agreed to build. This list was affirmed in a 1992 lawsuit settlement.
Projects which have been completed include:
Restoration of three Old Colony Commuter Rail lines
Expansion of Framingham Line to serve Worcester full-time
Restoration of the Newburyport/Rockport Line
Six-car trains on the MBTA Blue Line, requiring platform lengthening, station modernization, and all new train cars
MBTA Silver Line, a bus rapid transit route to the South Boston waterfront and East Boston, including the airport
1,000 new commuter parking spaces
Fairmount Line improvements
Green Line Extension
However, some projects were removed:
Design of the Red-Blue Connector
Restoration of Green Line E branch service
Surface treatments
Some surface treatments that were part of the original project plan were dropped due to the massive cost overruns on the highway portion of the project.
$99.1 million was allocated for mitigating improvements to the Charles River Basin, including the construction of North Point Park in Cambridge and Paul Revere Park in Charlestown. The North Bank Bridge, providing pedestrian and bicycle connectivity between the parks, was not funded until the American Recovery and Reinvestment Act of 2009. Nashua Street Park on the Boston side was completed in 2003, by McCourt Construction with $7.9 million in funding from MassDOT. As of 2017, $30.5 million had been transferred to the Massachusetts Department of Conservation and Recreation to complete five projects. Another incomplete but required project is the South Bank Bridge over the MBTA Commuter Rail tracks at North Station (connecting Nashua Street Park to the proposed South Bank Park, which is currently a parking lot under the Zakim Bridge at the Charles River locks).
Improvements in the lower Charles River Basin include the new walkway at Lovejoy Wharf (constructed by the developer of 160 North Washington Street, the new headquarters of Converse), the Lynch Family Skate Park (constructed in 2015 by the Charles River Conservancy), rehabilitation of historic operations buildings for the Charles River Dam and lock, a maintenance facility, and a planned pedestrian walkway across the Charles River next to the MBTA Commuter Rail drawbridge at North Station (connecting Nashua Street Park and North Point Park). MassDOT is funding the South Bank Park, and replacement of the North Washington Street Bridge (construction Aug 2018–23). EF Education is funding public greenspace improvements as part of its three-phase expansion at North Point. Remaining funding may be used to construct the North Point Inlet pedestrian bridge, and a pedestrian walkway over Leverett Circle. Before being replaced with surface access during the reconstruction of the Science Park MBTA Green Line station, Leverett Circle had pedestrian bridges with stairs that provided elevated access between the station, the Charles River Parks, and the sidewalk to the Boston Museum of Science. The replacement ramps would comply with Americans with Disabilities Act requirements and allow easy travel by wheelchair or bicycle over the busy intersection.
Public art
While not a legally mandated requirement, public art was part of the urban design planning process (and later design development work) through the Artery Arts Program. The intent of the program was to integrate public art into highway infrastructure (retaining walls, fences, and lighting) and the essential elements of the pedestrian environment (walkways, park landscape elements, and bridges). As overall project costs increased, the Artery Arts Program was seen as a potential liability, even though there was support and interest from the public and professional arts organizations in the area.
At the beginning of the highway design process, a temporary arts program was initiated, and over 50 proposals were selected. However, development began on only a few projects before funding for the program was cut. Permanent public art that was funded includes: super graphic text and facades of former West End houses cast into the concrete elevated highway abutment support walls near North Station by artist Sheila Levrant de Bretteville; Harbor Fog, a sensor-activated mist, light and sound sculptural environment by artist Ross Miller in parcel 17; a historical sculpture celebrating the 18th and 19th century shipbuilding industry and a bust of shipbuilder Donald McKay in East Boston; blue interior lighting of the Zakim Bridge; and the Miller's River Littoral Way walkway and lighting under the loop ramps north of the Charles River.
Extensive landscape planting, as well as a maintenance program to support the plantings, was requested by many community members during public meetings.
Impact on traffic
The Big Dig separated the co-mingled traffic from the Massachusetts Turnpike and the Sumner and Callahan tunnels. While only one net lane in each direction was added to the north–south I-93, several new east–west lanes became available. East–west traffic on the Massachusetts Turnpike/I-90 now proceeds directly through the Ted Williams Tunnel to Logan Airport and Route 1A beyond. Traffic between Storrow Drive and the Callahan and Sumner Tunnels still uses a short portion of I-93, but additional lanes and direct connections are provided for this traffic.
The result was a 62% reduction in vehicle hours of travel on I-93, the airport tunnels, and the connection from Storrow Drive, from an average 38,200 hours per day before construction (1994–1995) to 14,800 hours per day in 2004–2005, after the project was largely complete. The savings for travelers was estimated at $166 million annually in the same 2004–2005 time frame. Travel times on the Central Artery northbound during the afternoon peak hour were reduced 85.6%.
A 2008 Boston Globe report asserted that waiting time for the majority of trips actually increased as a result of demand induced by the increased road capacity. Because more drivers were opting to use the new roads, traffic bottlenecks were only pushed outward from the city, not reduced or eliminated (although some trips are now faster). The report states, "Ultimately, many motorists going to and from the suburbs at peak rush hours are spending more time stuck in traffic, not less." The Globe also asserted that their analysis provides a fuller picture of the traffic situation than a state-commissioned study done two years earlier, in which the Big Dig was credited with helping to save at least $167 million a year by increasing economic productivity and decreasing motor vehicle operating costs. That study did not look at highways outside the Big Dig construction area and did not take into account new congestion elsewhere.
Impact on property values
Towards the end of the Big Dig in 2003, it was estimated that the demolition of the Central Artery highway would cause a $732 million increase in property value in Boston's financial district, with the replacement parks providing an additional $252 million in value. Additionally, as a result of the Big Dig, a large amount of waterfront space was opened up, which is now a high-rent residential and commercial area called the Seaport District. The development of Seaport alone was estimated to create $7 billion in private investment and 43,000 jobs.
Operations Control Center (OCC)
As part of the project, an elaborate Operations Control Center (OCC) control room was constructed in South Boston. Staffed on a "24/7/365" basis, this center monitors and reports on traffic congestion, and responds to emergencies. Continuous video surveillance is provided by hundreds of cameras, and thousands of sensors monitor traffic speed and density, air quality, water levels, temperatures, equipment status, and other conditions inside the tunnel. The OCC can activate emergency ventilation fans, change electronic display signs, and dispatch service crews when necessary.
Problems
Leaks
As far back as 2001, Turnpike Authority officials and contractors knew of thousands of leaks in ceiling and wall fissures, extensive water damage to steel supports and fireproofing systems, and overloaded drainage systems. Many of the leaks were a result of Modern Continental and other subcontractors failing to remove gravel and other debris before pouring concrete. This information was not made public until engineers at MIT (volunteer students and professors) performed several experiments and found serious problems with the tunnel.
On September 15, 2004, a major leak in the Interstate 93 north tunnel forced the closure of the tunnel while repairs were conducted. This also forced the Turnpike Authority to release information regarding its non-disclosure of prior leaks. A follow-up reported on "extensive" leaks that were more severe than state authorities had previously acknowledged. The report went on to state that the tunnel system had more than 400 leaks. A Boston Globe report, however, countered that by stating there were nearly 700 leaks in a single section of tunnel beneath South Station. Turnpike officials also stated that the number of leaks being investigated was down from 1,000 to 500.
The problem of leaks is further aggravated by the fact that many of them involve corrosive salt water. This is caused by the proximity of Boston Harbor and the Atlantic Ocean, causing a mix of salt and fresh water leaks in the tunnel. The situation is made worse by road salt spread in the tunnel to melt ice during freezing weather, or brought in by vehicles passing through. Salt water and salt spray are well-known issues that must be dealt with in any marine environment. It has been reported that "hundreds of thousands of gallons of salt water are pumped out monthly" in the Big Dig, and a map has been prepared showing "hot spots" where water leakage is especially serious. Salt-accelerated corrosion has caused ceiling light fixtures to fail (see below), but can also cause rapid deterioration of embedded rebar and other structural steel reinforcements holding the tunnel walls and ceiling in place.
Substandard materials
Massachusetts State Police searched the offices of Aggregate Industries, the largest concrete supplier for the underground portions of the project, in June 2005. They seized evidence that Aggregate delivered concrete that did not meet contract specifications. In March 2006 Massachusetts Attorney General Tom Reilly announced plans to sue project contractors and others because of poor work on the project. Over 200 complaints were filed by the state of Massachusetts as a result of leaks, cost overruns, quality concerns, and safety violations. In total, the state has sought approximately $100 million from the contractors ($1 for every $141 spent).
In May 2006, six employees of the company were arrested and charged with conspiracy to defraud the United States. The employees were accused of reusing old concrete and double-billing loads. In July 2007, Aggregate Industries settled the case with an agreement to pay $50 million. $42 million of the settlement went to civil cases and $8 million was paid in criminal fines. The company will provide $75 million in insurance for maintenance as well as pay $500,000 toward routine checks on areas suspected to contain substandard concrete. In July 2009, two of the accused, Gerard McNally and Keith Thomas, both managers, pled guilty to charges of conspiracy, mail fraud, and filing false reports. The following month, the remaining four, Robert Prosperi, Mark Blais, Gregory Stevenson, and John Farrar, were found guilty on conspiracy and fraud charges. The four were sentenced to probation and home confinement and Blais and Farrar were additionally sentenced to community service.
Fatal ceiling collapse
A fatal accident raised safety questions and closed part of the project for most of the summer of 2006. On July 10, 2006, concrete ceiling panels and debris fell on a car traveling on the two-lane ramp connecting northbound I-93 to eastbound I-90 in South Boston, killing Milena Del Valle, who was a passenger, and injuring her husband, Angel Del Valle, who was driving. Immediately following the fatal ceiling collapse, Governor Mitt Romney ordered a "stem-to-stern" safety audit conducted by the engineering firm of Wiss, Janney, Elstner Associates, Inc. to look for additional areas of risk. Said Romney: "We simply cannot live in a setting where a project of this scale has the potential of threatening human life, as has already been seen". The collapse and closure of the tunnel greatly snarled traffic in the city. The resulting traffic jams are cited as contributing to the death of another person, a heart attack victim who died en route to Boston Medical Center when his ambulance was caught in one such traffic jam two weeks after the collapse. On September 1, 2006, one eastbound lane of the connector tunnel was re-opened to traffic.
Following extensive inspections and repairs, Interstate 90 east- and westbound lanes reopened in early January 2007. The final piece of the road network, a high occupancy vehicle lane connecting Interstate 93 north to the Ted Williams Tunnel, reopened on June 1, 2007.
On July 10, 2007, after a lengthy investigation, the National Transportation Safety Board found that epoxy glue used to hold the roof in place during construction was not appropriate for long-term bonding. This was determined to be the cause of the roof collapse. The Power-Fast Epoxy Adhesive used in the installation was designed for short-term loading, such as wind or earthquake loads, not long-term loading, such as the weight of a panel.
Powers Fasteners, the makers of the adhesive, revised their product specifications on May 15, 2007, to increase the safety factor from 4 to 10 for all of their epoxy products intended for use in overhead applications. The safety factor on Power-Fast Epoxy was increased from 4 to 16. On December 24, 2007, the Del Valle family announced they had reached a settlement with Powers Fasteners that would pay the family $6 million. In December 2008, Powers Fasteners agreed to pay $16 million to the state to settle manslaughter charges.
"Ginsu guardrails"
Public safety workers have called the walkway safety handrails in the Big Dig tunnels "ginsu guardrails", because the squared-off edges of the support posts have caused mutilations and deaths of passengers ejected from crashed vehicles. After an eighth reported death involving the safety handrails, MassDOT officials announced plans to cover or remove the allegedly dangerous fixtures, but only near curves or exit ramps. This partial removal of hazards has been criticized by a safety specialist, who suggests that the handrails are just as dangerous in straight sections of the tunnel.
Lighting fixtures
In March 2011, it became known that senior MassDOT officials had failed to disclose an issue with the lighting fixtures in the O'Neill tunnel. In early February 2011, a maintenance crew found a fixture lying in the middle travel lane in the northbound tunnel. Assuming it to be simple road debris, the maintenance team picked it up and brought it back to its home facility. The next day, a supervisor passing through the yard realized that the fixture was not road debris but was in fact one of the fixtures used to light the tunnel itself. Further investigation revealed that the fixture's mounting apparatus had failed, due to galvanic corrosion of incompatible metals, caused by having aluminum in direct contact with stainless steel, in the presence of salt water. The electrochemical potential difference between stainless steel and aluminum is in the range of 0.5 to 1.0V, depending on the exact alloys involved, and can cause considerable corrosion within months under unfavorable conditions.
After the discovery of the reason why the fixture had failed, a comprehensive inspection of the other fixtures in the tunnel revealed that numerous other fixtures were also in the same state of deterioration. Some of the worst fixtures were temporarily shored up with plastic ties. Moving forward with temporary repairs, members of the MassDOT administration team decided not to let the news of the systemic failure and repair of the fixtures be released to the public or to Governor Deval Patrick's administration.
It appeared that all of the 25,000 light fixtures would have to be replaced, at an estimated cost of $54 million. The replacement work was mostly done at night, required lane closures or occasional closing of the entire tunnel for safety, and was estimated to take up to two years to complete.
See also
Massachusetts Bay Transportation Authority
Vincent Zarrilli – critic of the Big Dig who proposed the Boston Bypass
Alaskan Way Viaduct replacement tunnel – similar project in Seattle, Washington
Carmel Tunnels – similar project in Haifa, Israel
Central–Wan Chai Bypass – similar project in the areas of Central, Wan Chai and Causeway Bay, within Victoria City, Hong Kong
WestConnex – project of similar scope and scale in Sydney, New South Wales, Australia
Dublin Port Tunnel – similar project on a smaller scale in Dublin, Ireland
Gardiner Expressway – an elevated freeway in Toronto with similar future plans
Autopista de Circunvalación M-30 – similar project along the banks of the Manzanares River, Madrid, Spain
Blanka tunnel complex – similar project in Prague, Czech Republic and the longest city tunnel in Europe (6.4 km / 4.0 mi)
Yamate Tunnel – similar project on a larger scale in Tokyo, Japan
References
Further reading
External links
Official site
The Big Dig: The Story of America's Most Expensive Road—GBH News (podcast)
The Big Dig: The Story of America's Most Expensive Road—via YouTube
2007 establishments in Massachusetts
2007 in Boston
Engineering projects
Interstate 93
Megaprojects
North End, Boston
Road tunnels in Massachusetts
Transport infrastructure completed in 2007
Tunnels completed in 2007
Tunnels in Boston
U.S. Route 1
Underground construction | Big Dig | [
"Engineering"
] | 7,387 | [
"Underground construction",
"Megaprojects",
"Construction",
"Civil engineering",
"nan"
] |
4,322 | https://en.wikipedia.org/wiki/Borel%20measure | In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets (and thus on all Borel sets). Some authors require additional restrictions on the measure, as described below.
Formal definition
Let X be a locally compact Hausdorff space, and let B(X) be the smallest σ-algebra that contains the open sets of X; this is known as the σ-algebra of Borel sets. A Borel measure is any measure μ defined on the σ-algebra of Borel sets. A few authors require in addition that μ is locally finite, meaning that every point has an open neighborhood with finite measure. For Hausdorff spaces, this implies that μ(C) < ∞ for every compact set C; and for locally compact Hausdorff spaces, the two conditions are equivalent. If a Borel measure μ is both inner regular and outer regular, it is called a regular Borel measure. If μ is inner regular, outer regular, and locally finite, it is called a Radon measure. Alternatively, if a regular Borel measure μ is tight, it is a Radon measure.
If X is a separable complete metric space, then every Borel measure on X is a Radon measure.
On the real line
The real line R with its usual topology is a locally compact Hausdorff space; hence we can define a Borel measure on it. In this case, B(R) is the smallest σ-algebra that contains the open intervals of R. While there are many Borel measures μ, the choice of Borel measure that assigns μ((a, b]) = b − a for every half-open interval (a, b] is sometimes called "the" Borel measure on R. This measure turns out to be the restriction to the Borel σ-algebra of the Lebesgue measure λ, which is a complete measure and is defined on the Lebesgue σ-algebra. The Lebesgue σ-algebra is actually the completion of the Borel σ-algebra, which means that it is the smallest σ-algebra that contains all the Borel sets and can be equipped with a complete measure. Also, the Borel measure μ and the Lebesgue measure λ coincide on the Borel sets (i.e., λ(E) = μ(E) for every Borel measurable set E, where μ is the Borel measure described above). This idea extends to finite-dimensional spaces (the Cramér–Wold theorem, below) but does not hold, in general, for infinite-dimensional spaces. Infinite-dimensional Lebesgue measures do not exist.
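As a small worked example (added here for illustration; it is not part of the original article), countable additivity shows that this Borel measure assigns measure zero to the set of rational numbers in the unit interval, even though that set is dense:

```latex
% Worked example: the rationals in [0,1] form a countable union of singletons,
% each a Borel set of measure zero (by continuity from above of mu), so
\mu\bigl(\{q\}\bigr) = \lim_{\varepsilon \to 0^{+}} \mu\bigl((q-\varepsilon,\,q]\bigr)
                     = \lim_{\varepsilon \to 0^{+}} \varepsilon = 0,
\qquad
\mu\bigl(\mathbb{Q}\cap[0,1]\bigr) = \sum_{q \in \mathbb{Q}\cap[0,1]} \mu(\{q\}) = 0 .
```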
Product spaces
If X and Y are second-countable, Hausdorff topological spaces, then the set of Borel subsets of their product coincides with the product of the sets of Borel subsets of X and Y. That is, the Borel functor from the category of second-countable Hausdorff spaces to the category of measurable spaces preserves finite products.
Applications
Lebesgue–Stieltjes integral
The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind.
Laplace transform
One can define the Laplace transform of a finite Borel measure μ on the real line by the Lebesgue integral
(Lμ)(s) = ∫_[0,∞) e^(−st) dμ(t).
An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes
(Lf)(s) = ∫_(0−)^∞ e^(−st) f(t) dt,
where the lower limit of 0− is shorthand notation for lim_(ε→0⁺) ∫_(−ε)^∞.
This limit emphasizes that any point mass located at 0 is entirely captured by the Laplace transform. Although with the Lebesgue integral, it is not necessary to take such a limit, it does appear more naturally in connection with the Laplace–Stieltjes transform.
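Two standard worked examples (added here for illustration) are the Laplace transforms of the Dirac measure at the origin and of an exponential density on [0, ∞):

```latex
% Standard worked examples: the Dirac measure at the origin, and the finite
% measure with density e^{-at} dt on [0, infinity) for a > 0.
(\mathcal{L}\delta_{0})(s) = \int_{[0,\infty)} e^{-st}\, d\delta_{0}(t) = 1,
\qquad
(\mathcal{L}\mu_{a})(s) = \int_{0}^{\infty} e^{-st}\, e^{-at}\, dt = \frac{1}{s+a}
\quad (s > -a).
```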
Moment problem
One can define the moments of a finite Borel measure μ on the real line by the integral
m_n = ∫_I x^n dμ(x).
For I = (−∞, ∞), [0, ∞), and [0, 1], these correspond to the Hamburger moment problem, the Stieltjes moment problem and the Hausdorff moment problem, respectively. The question or problem to be solved is, given a collection of such moments, is there a corresponding measure? For the Hausdorff moment problem, the corresponding measure is unique. For the other variants, in general, there are an infinite number of distinct measures that give the same moments.
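As a concrete illustration (a standard example, added here), the moments of Lebesgue measure restricted to the unit interval form a uniquely solvable instance of the Hausdorff moment problem:

```latex
% Moments of Lebesgue measure restricted to [0,1]; by uniqueness for the
% Hausdorff moment problem, this is the only measure on [0,1] with these moments.
m_{n} = \int_{0}^{1} x^{n}\, d\mu(x) = \int_{0}^{1} x^{n}\, dx = \frac{1}{n+1},
\qquad n = 0, 1, 2, \ldots
```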
Hausdorff dimension and Frostman's lemma
Given a Borel measure μ on a metric space X such that μ(X) > 0 and μ(B(x, r)) ≤ r^s holds for some constant s > 0 and for every ball B(x, r) in X, then the Hausdorff dimension dim_Haus(X) ≥ s. A partial converse is provided by the Frostman lemma:
Lemma: Let A be a Borel subset of R^n, and let s > 0. Then the following are equivalent:
H^s(A) > 0, where H^s denotes the s-dimensional Hausdorff measure.
There is an (unsigned) Borel measure μ satisfying μ(A) > 0, and such that μ(B(x, r)) ≤ r^s holds for all x ∈ R^n and r > 0.
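As a sketch of how the first statement is typically applied (a standard textbook example, added here; it allows a multiplicative constant, which does not affect the conclusion since μ may be rescaled), the natural measure on the middle-thirds Cantor set yields a lower bound on its Hausdorff dimension:

```latex
% Sketch of the mass-distribution argument for the middle-thirds Cantor set C.
% Let mu be the natural measure giving each of the 2^k level-k intervals
% (of length 3^{-k}) mass 2^{-k}, and set s = log 2 / log 3, so 3^{s} = 2.
% For 3^{-(k+1)} <= r < 3^{-k}, a ball B(x,r) meets at most two level-k
% intervals, hence
\mu\bigl(B(x,r)\bigr) \le 2\cdot 2^{-k} = 2\,(3^{-k})^{s} \le 2\,(3r)^{s} = 4\,r^{s},
\qquad\text{so}\qquad
\dim_{\mathrm{Haus}}(C) \ge s = \frac{\log 2}{\log 3} \approx 0.6309 .
```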
Cramér–Wold theorem
The Cramér–Wold theorem in measure theory states that a Borel probability measure on is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold.
See also
Jacobi operator
References
Further reading
Gaussian measure, a finite-dimensional Borel measure
Wiener's lemma related
External links
Borel measure at Encyclopedia of Mathematics
Measures (measure theory) | Borel measure | [
"Physics",
"Mathematics"
] | 1,197 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
4,361 | https://en.wikipedia.org/wiki/Biological%20warfare | Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare.
Biological warfare is subject to a forceful normative prohibition. Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC.
Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential.
Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism.
Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the provisions of both the BWC and the Chemical Weapons Convention. Toxins and psychochemical weapons are often referred to as midspectrum agents. Unlike bioweapons, these midspectrum agents do not reproduce in their host and are typically characterized by shorter incubation periods.
Overview
A biological attack could conceivably result in large numbers of civilian casualties and cause severe disruption to economic and societal infrastructure.
A nation or group that can pose a credible threat of mass casualty has the ability to alter the terms under which other nations or groups interact with it. When indexed to weapon mass and cost of development and storage, biological weapons possess a destructive potential and capacity for loss of life far in excess of nuclear, chemical or conventional weapons. Accordingly, biological agents are potentially useful as strategic deterrents, in addition to their utility as offensive weapons on the battlefield.
As a tactical weapon for military use, a significant problem with biological warfare is that it would take days to be effective, and therefore might not immediately stop an opposing force. Some biological agents (smallpox, pneumonic plague) have the capability of person-to-person transmission via aerosolized respiratory droplets. This feature can be undesirable, as the agent(s) may be transmitted by this mechanism to unintended populations, including neutral or even friendly forces. Worse still, such a weapon could "escape" the laboratory where it was developed, even if there was no intent to use it – for example by infecting a researcher who then transmits it to the outside world before realizing that they were infected. Several cases are known of researchers becoming infected and dying of Ebola, which they had been working with in the lab (though nobody else was infected in those cases) – while there is no evidence that their work was directed towards biological warfare, it demonstrates the potential for accidental infection even of careful researchers fully aware of the dangers. While containment of biological warfare is less of a concern for certain criminal or terrorist organizations, it remains a significant concern for the military and civilian populations of virtually all nations.
History
Antiquity and Middle Ages
Rudimentary forms of biological warfare have been practiced since antiquity. The earliest documented incident of the intention to use biological weapons is recorded in Hittite texts of 1500–1200 BCE, in which victims of an unknown plague (possibly tularemia) were driven into enemy lands, causing an epidemic. The Assyrians poisoned enemy wells with the fungus ergot, though with unknown results. Scythian archers dipped their arrows and Roman soldiers their swords into excrement and cadavers; victims were commonly infected with tetanus as a result. In 1346, the bodies of Mongol warriors of the Golden Horde who had died of plague were thrown over the walls of the besieged Crimean city of Kaffa. Specialists disagree about whether this operation was responsible for the spread of the Black Death into Europe, the Near East and North Africa, resulting in the deaths of approximately 25 million Europeans.
Biological agents were extensively used in many parts of Africa from the sixteenth century AD, most often in the form of poisoned arrows or powder spread on the battlefront, as well as the poisoning of the enemy forces' horses and water supply. In Borgu, there were specific mixtures to kill, hypnotize, or embolden the enemy, as well as to act as antidotes against the enemy's own poisons. The creation of biologicals was reserved for a specific and professional class of medicine-men.
18th to 19th century
During the French and Indian War, in June 1763 a group of Native Americans laid siege to British-held Fort Pitt. Following the instructions of his superior, Colonel Henry Bouquet, the commander of Fort Pitt, Swiss-born Captain Simeon Ecuyer, ordered his men to take smallpox-infested blankets from the infirmary and give them to a Lenape delegation during the siege. A reported outbreak that began the spring before left as many as one hundred Native Americans dead in Ohio Country from 1763 to 1764. It is not clear whether the smallpox was a result of the Fort Pitt incident or whether the virus was already present among the Delaware people, as outbreaks happened on their own every dozen or so years, and the delegates were met again later and had seemingly not contracted smallpox. During the American Revolutionary War, Continental Army officer George Washington mentioned to the Continental Congress that he had heard a rumor from a sailor that his opponent during the Siege of Boston, General William Howe, had deliberately sent civilians out of the city in the hopes of spreading the ongoing smallpox epidemic to American lines; Washington, remaining unconvinced, wrote that he "could hardly give credit to" the claim. Washington had already inoculated his soldiers, diminishing the effect of the epidemic. Some historians have claimed that a detachment of the Corps of Royal Marines stationed in New South Wales, Australia, deliberately used smallpox there in 1789. Dr Seth Carus states: "Ultimately, we have a strong circumstantial case supporting the theory that someone deliberately introduced smallpox in the Aboriginal population."
World War I
By 1900 the germ theory and advances in bacteriology brought a new level of sophistication to the techniques for possible use of bio-agents in war. Biological sabotage in the form of anthrax and glanders was undertaken on behalf of the Imperial German government during World War I (1914–1918), with indifferent results. The Geneva Protocol of 1925 prohibited the first use of chemical and biological weapons against enemy nationals in international armed conflicts.
World War II
With the onset of World War II, the Ministry of Supply in the United Kingdom established a biological warfare program at Porton Down, headed by the microbiologist Paul Fildes. The research was championed by Winston Churchill, and soon tularemia, anthrax, brucellosis, and botulism toxins had been effectively weaponized. In particular, Gruinard Island in Scotland was contaminated with anthrax during a series of extensive tests and remained so for the next 56 years. Although the UK never offensively used the biological weapons it developed, its program was the first to successfully weaponize a variety of deadly pathogens and bring them into industrial production. Other nations, notably France and Japan, had begun their own biological weapons programs.
When the United States entered the war, Allied resources were pooled at the request of the British. The U.S. then established a large research program and industrial complex at Fort Detrick, Maryland, in 1942 under the direction of George W. Merck. The biological and chemical weapons developed during that period were tested at the Dugway Proving Grounds in Utah. Soon there were facilities for the mass production of anthrax spores, brucellosis, and botulism toxins, although the war was over before these weapons could be of much operational use.
The most notorious program of the period was run by the secret Imperial Japanese Army Unit 731 during the war, based at Pingfan in Manchuria and commanded by Lieutenant General Shirō Ishii. This biological warfare research unit conducted often fatal human experiments on prisoners, and produced biological weapons for combat use. Although the Japanese effort lacked the technological sophistication of the American or British programs, it far outstripped them in its widespread application and indiscriminate brutality. Biological weapons were used against Chinese soldiers and civilians in several military campaigns. In 1940, the Japanese Army Air Force bombed Ningbo with ceramic bombs full of fleas carrying the bubonic plague. Many of these operations were ineffective due to inefficient delivery systems, although up to 400,000 people may have died. During the Zhejiang-Jiangxi Campaign in 1942, around 1,700 Japanese troops died out of a total 10,000 Japanese soldiers who fell ill with disease when their own biological weapons attack rebounded on their own forces.
During the final months of World War II, Japan planned to use plague as a biological weapon against U.S. civilians in San Diego, California, during Operation Cherry Blossoms at Night. The plan was set to launch on 22 September 1945, but it was not executed because of Japan's surrender on 15 August 1945.
Cold War
In Britain, the 1950s saw the weaponization of plague, brucellosis, tularemia and later equine encephalomyelitis and vaccinia viruses, but the programme was unilaterally cancelled in 1956. The United States Army Biological Warfare Laboratories weaponized anthrax, tularemia, brucellosis, Q-fever and others.
In 1969, US President Richard Nixon decided to unilaterally terminate the offensive biological weapons program of the US, allowing only scientific research for defensive measures. This decision increased the momentum of the negotiations for a ban on biological warfare, which took place from 1969 to 1972 in the United Nations Conference of the Committee on Disarmament in Geneva. These negotiations resulted in the Biological Weapons Convention, which was opened for signature on 10 April 1972 and entered into force on 26 March 1975 after its ratification by 22 states.
Despite being a party and depositary to the BWC, the Soviet Union continued and expanded its massive offensive biological weapons program, under the leadership of the allegedly civilian institution Biopreparat. The Soviet Union attracted international suspicion after the 1979 Sverdlovsk anthrax leak killed approximately 65 to 100 people.
1948 Arab–Israeli War
According to historians Benny Morris and Benjamin Kedar, Israel conducted a biological warfare operation codenamed Operation Cast Thy Bread during the 1948 Arab–Israeli War. The Haganah initially used typhoid bacteria to contaminate water wells in newly cleared Arab villages to prevent the population, including militiamen, from returning. Later, the biological warfare campaign expanded to include Jewish settlements that were in imminent danger of being captured by Arab troops and inhabited Arab towns not slated for capture. There were also plans to expand the biological warfare campaign into other Arab states, including Egypt, Lebanon and Syria, but these were not carried out.
International law
International restrictions on biological warfare began with the 1925 Geneva Protocol, which prohibits the use but not the possession or development of biological and chemical weapons in international armed conflicts. Upon ratification of the Geneva Protocol, several countries made reservations regarding its applicability and use in retaliation. Due to these reservations, it was in practice a "no-first-use" agreement only.
The 1972 Biological Weapons Convention (BWC) supplements the Geneva Protocol by prohibiting the development, production, acquisition, transfer, stockpiling and use of biological weapons. Having entered into force on 26 March 1975, the BWC was the first multilateral disarmament treaty to ban the production of an entire category of weapons of mass destruction. As of March 2021, 183 states have become party to the treaty. The BWC is considered to have established a strong global norm against biological weapons, which is reflected in the treaty's preamble, stating that the use of biological weapons would be "repugnant to the conscience of mankind". The BWC's effectiveness has been limited due to insufficient institutional support and the absence of any formal verification regime to monitor compliance.
In 1985, the Australia Group was established, a multilateral export control regime of 43 countries aiming to prevent the proliferation of chemical and biological weapons.
In 2004, the United Nations Security Council passed Resolution 1540, which obligates all UN Member States to develop and enforce appropriate legal and regulatory measures against the proliferation of chemical, biological, radiological, and nuclear weapons and their means of delivery, in particular, to prevent the spread of weapons of mass destruction to non-state actors.
Bioterrorism
Biological weapons are difficult to detect, economical, and easy to use, making them appealing to terrorists. The cost of a biological weapon is estimated to be about 0.05 percent of the cost of a conventional weapon producing similar numbers of mass casualties per square kilometer. Moreover, production is relatively easy, as the common technology used in the production of vaccines, foods, spray devices, beverages and antibiotics can also be used to produce biological warfare agents. A major factor of biological warfare that attracts terrorists is that attackers can easily escape before government or intelligence agencies have even begun their investigation, because the potential organism has an incubation period of 3 to 7 days before the results begin to appear, thereby giving terrorists a head start.
A gene-editing technique based on clustered regularly interspaced short palindromic repeats (CRISPR-Cas9) is now so cheap and widely available that scientists fear amateurs will start experimenting with it. In this technique, a DNA sequence is cut out and replaced with a new sequence, e.g. one that codes for a particular protein, with the intent of modifying an organism's traits. Concerns have emerged regarding do-it-yourself biology research organizations due to the associated risk that a rogue amateur DIY researcher could attempt to develop dangerous bioweapons using genome-editing technology.
In 2002, when CNN reviewed Al-Qaeda's (AQ's) experiments with crude poisons, it found that AQ had begun planning ricin and cyanide attacks with the help of a loose association of terrorist cells. The associated cells had infiltrated many countries, including Turkey, Italy, Spain, France and others. In 2015, to combat the threat of bioterrorism, a National Blueprint for Biodefense was issued by the Blue Ribbon Study Panel on Biodefense. In addition, 233 potential exposures to select biological agents outside the primary barriers of biocontainment in the US were described in the annual report of the Federal Select Agent Program.
Though a verification system can reduce bioterrorism, an employee or a lone terrorist with adequate knowledge of a biotechnology company's facilities can pose a danger by using that company's resources without proper oversight and supervision. Moreover, it has been found that about 95% of accidents attributable to lax security were caused by employees or by those who held a security clearance.
Entomology
Entomological warfare (EW) is a type of biological warfare that uses insects to attack the enemy. The concept has existed for centuries and research and development have continued into the modern era. EW has been used in battle by Japan, and several other nations have developed, and been accused of using, entomological warfare programs. EW may employ insects in a direct attack or as vectors to deliver a biological agent, such as plague. Essentially, EW exists in three varieties. One type of EW involves infecting insects with a pathogen and then dispersing the insects over target areas. The insects then act as a vector, infecting any person or animal they might bite. Another type of EW is a direct insect attack against crops; the insect may not be infected with any pathogen but instead represents a threat to agriculture. The final method uses uninfected insects, such as bees or wasps, to directly attack the enemy.
Genetics
Theoretically, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design novel types of biological warfare agents. Particular concern has been raised about experiments that:
Would demonstrate how to render a vaccine ineffective;
Would confer resistance to therapeutically useful antibiotics or antiviral agents;
Would enhance the virulence of a pathogen or render a nonpathogen virulent;
Would increase the transmissibility of a pathogen;
Would alter the host range of a pathogen;
Would enable the evasion of diagnostic/detection tools;
Would enable the weaponization of a biological agent or toxin.
Most of the biosecurity concerns in synthetic biology are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. Recently, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space.
By target
Anti-personnel
Ideal characteristics of a biological agent to be used as a weapon against humans are high infectivity, high virulence, non-availability of vaccines and availability of an effective and efficient delivery system. Stability of the weaponized agent (the ability of the agent to retain its infectivity and virulence after a prolonged period of storage) may also be desirable, particularly for military applications, and the ease of producing the agent is often considered as well. Control of the spread of the agent may be another desired characteristic.
The primary difficulty is not the production of the biological agent, as many biological agents used in weapons can be manufactured relatively quickly, cheaply and easily. Rather, it is the weaponization, storage, and delivery in an effective vehicle to a vulnerable target that pose significant problems.
For example, Bacillus anthracis is considered an effective agent for several reasons. First, it forms hardy spores, well suited to dispersal in aerosols. Second, this organism is not considered transmissible from person to person, and thus rarely if ever causes secondary infections. A pulmonary anthrax infection starts with ordinary influenza-like symptoms and progresses to a lethal hemorrhagic mediastinitis within 3–7 days, with a fatality rate that is 90% or higher in untreated patients. Finally, friendly personnel and civilians can be protected with suitable antibiotics.
Agents considered for weaponization, or known to be weaponized, include bacteria such as Bacillus anthracis, Brucella spp., Burkholderia mallei, Burkholderia pseudomallei, Chlamydophila psittaci, Coxiella burnetii, Francisella tularensis, some of the Rickettsiaceae (especially Rickettsia prowazekii and Rickettsia rickettsii), Shigella spp., Vibrio cholerae, and Yersinia pestis. Many viral agents have been studied and/or weaponized, including some of the Bunyaviridae (especially Rift Valley fever virus), Ebolavirus, many of the Flaviviridae (especially Japanese encephalitis virus), Machupo virus, Coronaviruses, Marburg virus, Variola virus, and yellow fever virus. Fungal agents that have been studied include Coccidioides spp.
Toxins that can be used as weapons include ricin, staphylococcal enterotoxin B, botulinum toxin, saxitoxin, and many mycotoxins. These toxins and the organisms that produce them are sometimes referred to as select agents. In the United States, their possession, use, and transfer are regulated by the Centers for Disease Control and Prevention's Select Agent Program.
The former US biological warfare program categorized its weaponized anti-personnel bio-agents as either Lethal Agents (Bacillus anthracis, Francisella tularensis, Botulinum toxin) or Incapacitating Agents (Brucella suis, Coxiella burnetii, Venezuelan equine encephalitis virus, Staphylococcal enterotoxin B).
Anti-agriculture
Anti-crop/anti-vegetation/anti-fisheries
The United States developed an anti-crop capability during the Cold War that used plant diseases (bioherbicides, or mycoherbicides) for destroying enemy agriculture. Biological weapons also target fisheries as well as water-based vegetation. It was believed that the destruction of enemy agriculture on a strategic scale could thwart Sino-Soviet aggression in a general war. Diseases such as wheat blast and rice blast were weaponized in aerial spray tanks and cluster bombs for delivery to enemy watersheds in agricultural regions to initiate epiphytotics (epidemics among plants). On the other hand, some sources report that these agents were stockpiled but never weaponized. When the United States renounced its offensive biological warfare program in 1969 and 1970, the vast majority of its biological arsenal was composed of these plant diseases. Enterotoxins and mycotoxins were not affected by Nixon's order.
Though herbicides are chemicals, they are often grouped with biological warfare and chemical warfare because they may work in a similar manner as biotoxins or bioregulators. The Army Biological Laboratory tested each agent and the Army's Technical Escort Unit was responsible for the transport of all chemical, biological, radiological (nuclear) materials.
Biological warfare can also specifically target plants to destroy crops or defoliate vegetation. The United States and Britain discovered plant growth regulators (i.e., herbicides) during the Second World War, which were then used by the UK in the counterinsurgency operations of the Malayan Emergency. Inspired by their use in Malaya, the US military effort in the Vietnam War included a mass dispersal of a variety of herbicides, famously Agent Orange, with the aim of destroying farmland and defoliating forests used as cover by the Viet Cong. Sri Lanka deployed military defoliants in its prosecution of the Eelam War against Tamil insurgents.
Anti-livestock
During World War I, German saboteurs used anthrax and glanders to sicken cavalry horses in the U.S. and France, sheep in Romania, and livestock in Argentina intended for the Entente forces. One of these German saboteurs was Anton Dilger. Also, Germany itself became a victim of similar attacks – horses bound for Germany were infected with Burkholderia by French operatives in Switzerland.
During World War II, the U.S. and Canada secretly investigated the use of rinderpest, a highly lethal disease of cattle, as a bioweapon.
In the 1980s, the Soviet Ministry of Agriculture successfully developed variants of foot-and-mouth disease and rinderpest for use against cattle, African swine fever for pigs, and psittacosis for chickens. These agents were prepared to be sprayed from tanks attached to airplanes over hundreds of miles. The secret program was code-named "Ecology".
During the Mau Mau Uprising in 1952, the poisonous latex of the African milk bush was used to kill cattle.
Defensive operations
Medical countermeasures
In 2010, at the Meeting of the States Parties to the Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and Their Destruction in Geneva, sanitary epidemiological reconnaissance was suggested as a well-tested means of enhancing the monitoring of infections and parasitic agents and of supporting the practical implementation of the International Health Regulations (2005). The aim was to prevent and minimize the consequences of natural outbreaks of dangerous infectious diseases as well as the threat of alleged use of biological weapons against BTWC States Parties.
Many countries require their active-duty military personnel to be vaccinated against certain diseases that may potentially be used as bioweapons, such as anthrax and smallpox, with various other vaccines administered depending on the Area of Operations of the individual military units and commands.
Public health and disease surveillance
Most classical and modern biological weapons' pathogens can be obtained from a plant or an animal which is naturally infected.
In the largest biological weapons accident known—the anthrax outbreak in Sverdlovsk (now Yekaterinburg) in the Soviet Union in 1979—sheep became ill with anthrax as far as 200 kilometers from the release point at a military facility in the southeastern portion of the city, a site still off-limits to visitors today (see Sverdlovsk anthrax leak).
Thus, a robust surveillance system involving human clinicians and veterinarians may identify a bioweapons attack early in the course of an epidemic, permitting the prophylaxis of disease in the vast majority of people (and/or animals) exposed but not yet ill.
For example, in the case of anthrax, it is likely that by 24–36 hours after an attack, some small percentage of individuals (those with a compromised immune system or those who received a large dose of the organism due to proximity to the release point) will become ill with classical symptoms and signs (including a virtually unique chest X-ray finding, often recognized by public health officials if they receive timely reports). The incubation period for humans is estimated to be about 11.8 days to 12.1 days. This suggested period is based on the first model that is independently consistent with data from the largest known human outbreak. These projections refine previous estimates of the distribution of early-onset cases after a release and support a recommended 60-day course of prophylactic antibiotic treatment for individuals exposed to low doses of anthrax. By making these data available to local public health officials in real time, most models of anthrax epidemics indicate that more than 80% of an exposed population can receive antibiotic treatment before becoming symptomatic, and thus avoid the moderately high mortality of the disease.
Common epidemiological warnings
From most specific to least specific:
A single case of a disease caused by an uncommon agent, with no epidemiological explanation.
Unusual, rare, genetically engineered strain of an agent.
High morbidity and mortality rates among patients with the same or similar symptoms.
Unusual presentation of the disease.
Unusual geographic or seasonal distribution.
Stable endemic disease, but with an unexplained increase in incidence.
Rare transmission (aerosols, food, water).
No illness in people who are not exposed to common ventilation systems (i.e., who have separate, closed ventilation systems) when illness is seen in persons in close proximity who do share a ventilation system.
Different and unexplained diseases coexisting in the same patient without any other explanation.
Rare illness that affects a large, disparate population (respiratory disease might suggest the pathogen or agent was inhaled).
Illness that is unusual for the population or age group in which it appears.
Unusual trends of death and/or illness in animal populations, prior to or accompanying illness in humans.
Many affected people seeking treatment at the same time.
Similar genetic makeup of agents in affected individuals.
Simultaneous clusters of similar illness in non-contiguous areas, domestic or foreign.
An abundance of cases of unexplained diseases and deaths.
Bioweapon identification
The goal of biodefense is to integrate the sustained efforts of the national and homeland security, medical, public health, intelligence, diplomatic, and law enforcement communities. Health care providers and public health officers are among the first lines of defense. In some countries private, local, and provincial (state) capabilities are being augmented by and coordinated with federal assets, to provide layered defenses against biological weapon attacks. During the first Gulf War the United Nations activated a biological and chemical response team, Task Force Scorpio, to respond to any potential use of weapons of mass destruction on civilians.
The traditional approach toward protecting agriculture, food, and water, which focuses on the natural or unintentional introduction of a disease, is being strengthened by focused efforts to address current and anticipated future biological weapons threats that may be deliberate, multiple, and repetitive.
The growing threat of biowarfare agents and bioterrorism has led to the development of specific field tools that perform on-the-spot analysis and identification of encountered suspect materials. One such technology, being developed by researchers from the Lawrence Livermore National Laboratory (LLNL), employs a "sandwich immunoassay", in which fluorescent dye-labeled antibodies aimed at specific pathogens are attached to silver and gold nanowires.
In the Netherlands, the company TNO has designed Bioaerosol Single Particle Recognition eQuipment (BiosparQ). This system would be implemented into the national response plan for bioweapon attacks in the Netherlands.
Researchers at Ben Gurion University in Israel are developing a different device called the BioPen, essentially a "Lab-in-a-Pen", which can detect known biological agents in under 20 minutes using an adaptation of ELISA, a widely employed immunological technique, that in this case incorporates fiber optics.
List of programs, projects and sites by country
United States
Fort Detrick, Maryland
U.S. Army Biological Warfare Laboratories (1943–69)
Building 470
One-Million-Liter Test Sphere
Operation Sea-Spray
Operation Whitecoat (1954–73)
U.S. entomological warfare program
Operation Big Itch
Operation Big Buzz
Operation Drop Kick
Operation May Day
Project Bacchus
Project Clear Vision
Project SHAD
Project 112
Horn Island Testing Station
Fort Terry
Granite Peak Installation
Vigo Ordnance Plant
United Kingdom
Porton Down
Gruinard Island
Nancekuke
Operation Vegetarian (1942–1944)
Open-air field tests:
Operation Harness off Antigua, 1948–1950.
Operation Cauldron off Stornoway, 1952.
Operation Hesperus off Stornoway, 1953.
Operation Ozone off Nassau, 1954.
Operation Negation off Nassau, 1954–5.
Soviet Union and Russia
Biopreparat (18 labs and production centers)
Stepnogorsk Scientific and Technical Institute for Microbiology, Stepnogorsk, northern Kazakhstan
Institute of Ultra Pure Biochemical Preparations, Leningrad, a weaponized plague center
Vector State Research Center of Virology and Biotechnology (VECTOR), a weaponized smallpox center
Institute of Applied Biochemistry, Omutninsk
Kirov bioweapons production facility, Kirov, Kirov Oblast
Zagorsk smallpox production facility, Zagorsk
Berdsk bioweapons production facility, Berdsk
Bioweapons research facility, Obolensk
Sverdlovsk bioweapons production facility (Military Compound 19), Sverdlovsk, a weaponized anthrax center
Institute of Virus Preparations
Poison laboratory of the Soviet secret services
Vozrozhdeniya
Project Bonfire
Project Factor
Japan
Unit 731
Zhongma Fortress
Kaimingjie germ weapon attack
Khabarovsk War Crime Trials
Epidemic Prevention and Water Purification Department
Iraq
Al Hakum
Salman Pak facility
Al Manal facility
South Africa
Project Coast
Delta G Scientific Company
Roodeplaat Research Laboratories
Protechnik
Rhodesia
Canada
Grosse Isle, Quebec, site (1939–45) of research into anthrax and other agents
DRDC Suffield, Suffield, Alberta
List of associated people
Bioweaponeers:
Includes scientists and administrators
Shyh-Ching Lo
Kanatjan Alibekov, known as Ken Alibek
Ira Baldwin
Wouter Basson
Kurt Blome
Eugen von Haagen
Anton Dilger
Paul Fildes
Arthur Galston (unwittingly)
Kurt Gutzeit
Riley D. Housewright
Shiro Ishii
Elvin A. Kabat
George W. Merck
Frank Olson
Vladimir Pasechnik
William C. Patrick III
Sergei Popov
Theodor Rosebury
Rihab Rashid Taha
Prince Tsuneyoshi Takeda
Huda Salih Mahdi Ammash
Nassir al-Hindawi
Erich Traub
Auguste Trillat
Baron Otto von Rosen
Yujiro Wakamatsu
Yazid Sufaat
Writers and activists:
Jack Trudel
Daniel Barenblatt
Leonard A. Cole
Stephen Endicott
Arthur Galston
Jeanne Guillemin
Edward Hagerman
Sheldon H. Harris
Nicholas D. Kristof
Joshua Lederberg
Matthew Meselson
Toby Ord
Richard Preston
Ed Regis
Mark Wheelis
David Willman
Aaron Henderson
In popular culture
See also
Animal-borne bomb attacks
Antibiotic resistance
Asymmetric warfare
Baker Island
Bioaerosol
Biological contamination
Biological pest control
Biosecurity
Chemical weapon
Counterinsurgency
Discredited AIDS origins theories
Enterotoxin
Entomological warfare
Ethnic bioweapon
Herbicidal warfare
Hittite plague
Human experimentation in the United States
John W. Powell
Johnston Atoll Chemical Agent Disposal System
List of CBRN warfare forces
McNeill's law
Military animal
Mycotoxin
Plum Island Animal Disease Center
Project 112
Project AGILE
Project SHAD
Rhodesia and weapons of mass destruction
Trichothecene
Well poisoning
Yellow rain
References
Further reading
Counterproliferation Paper No. 53, USAF Counterproliferation Center, Air University, Maxwell Air Force Base, Alabama, USA.
External links
Biological weapons and international humanitarian law, ICRC
WHO: Health Aspects of Biological and Chemical Weapons
USAMRIID – U.S. Army Medical Research Institute of Infectious Diseases
Bioethics
Warfare by type | Biological warfare | [
"Technology",
"Biology"
] | 7,018 | [
"Bioethics",
"Biological warfare",
"Ethics of science and technology"
] |
4,373 | https://en.wikipedia.org/wiki/Buffer%20overflow | In programming and information security, a buffer overflow or buffer overrun is an anomaly whereby a program writes data to a buffer beyond the buffer's allocated memory, overwriting adjacent memory locations.
Buffers are areas of memory set aside to hold data, often while moving it from one section of a program to another, or between programs. Buffer overflows can often be triggered by malformed inputs; if one assumes all inputs will be smaller than a certain size and the buffer is created to be that size, then an anomalous transaction that produces more data could cause it to write past the end of the buffer. If this overwrites adjacent data or executable code, this may result in erratic program behavior, including memory access errors, incorrect results, and crashes.
Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer. Buffers are widespread in operating system (OS) code, so it is possible to make attacks that perform privilege escalation and gain unlimited access to the computer's resources. The famed Morris worm in 1988 used this as one of its attack techniques.
Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array. Bounds checking can prevent buffer overflows, but requires additional code and processing time. Modern operating systems use a variety of techniques to combat malicious buffer overflows, notably by randomizing the layout of memory, or deliberately leaving space between buffers and looking for actions that write into those areas ("canaries").
Technical description
A buffer overflow occurs when data written to a buffer also corrupts data values in memory addresses adjacent to the destination buffer due to insufficient bounds checking. This can occur when copying data from one buffer to another without first checking that the data fits within the destination buffer.
Example
In the following example expressed in C, a program has two variables which are adjacent in memory: an 8-byte-long string buffer, A, and a two-byte big-endian integer, B.
char A[8] = "";
unsigned short B = 1979;
Initially, A contains nothing but zero bytes, and B contains the number 1979.
Now, the program attempts to store the null-terminated string "excessive" with ASCII encoding in the A buffer.
strcpy(A, "excessive");
The string "excessive" is 9 characters long and encodes to 10 bytes including the null terminator, but A can take only 8 bytes. By failing to check the length of the string, the copy also overwrites the value of B:
B's value has now been inadvertently replaced by a number formed from part of the character string. In this example "e" followed by a zero byte would become 25856.
Writing data past the end of allocated memory can sometimes be detected by the operating system to generate a segmentation fault error that terminates the process.
To prevent the buffer overflow from happening in this example, the call to strcpy could be replaced with strlcpy, which takes the maximum capacity of A (including a null-termination character) as an additional parameter and ensures that no more than this amount of data is written to A:
strlcpy(A, "excessive", sizeof(A));
When available, the strlcpy library function is preferred over strncpy which does not null-terminate the destination buffer if the source string's length is greater than or equal to the size of the buffer (the third argument passed to the function). Therefore A may not be null-terminated and cannot be treated as a valid C-style string.
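A minimal, self-contained version of the example above is sketched below. A struct is used only to keep A and B adjacent for illustration (separate locals may be laid out differently by the compiler), and strlcpy is assumed to be available (the BSDs, macOS, and glibc 2.38 or later provide it; elsewhere snprintf could be substituted).

#include <stdio.h>
#include <string.h>

/* Keep the two variables adjacent, as in the example above. */
struct layout {
    char A[8];
    unsigned short B;
};

int main(void) {
    struct layout s = { "", 1979 };

    /* Unsafe: "excessive" needs 10 bytes (9 characters plus the null
       terminator) but A holds only 8, so the copy would spill into B:
       strcpy(s.A, "excessive");                                        */

    /* Safer: at most sizeof(s.A) - 1 characters are copied, plus a null. */
    strlcpy(s.A, "excessive", sizeof(s.A));

    printf("A = \"%s\", B = %u\n", s.A, s.B);   /* A = "excessi", B = 1979 */
    return 0;
}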
Exploitation
The techniques to exploit a buffer overflow vulnerability vary by architecture, operating system, and memory region. For example, exploitation on the heap (used for dynamically allocated memory) differs markedly from exploitation on the call stack. In general, heap exploitation depends on the heap manager used on the target system, while stack exploitation depends on the calling convention used by the architecture and compiler.
Stack-based exploitation
There are several ways in which one can manipulate a program by exploiting stack-based buffer overflows:
Changing program behavior by overwriting a local variable located near the vulnerable buffer on the stack;
By overwriting the return address in a stack frame to point to code selected by the attacker, usually called the shellcode. Once the function returns, execution will resume at the attacker's shellcode;
By overwriting a function pointer or exception handler to point to the shellcode, which is subsequently executed;
By overwriting a local variable (or pointer) of a different stack frame, which will later be used by the function that owns that frame.
The attacker designs data to cause one of these exploits, then places this data in a buffer supplied to users by the vulnerable code. If the address of the user-supplied data used to affect the stack buffer overflow is unpredictable, exploiting a stack buffer overflow to cause remote code execution becomes much more difficult. One technique that can be used to exploit such a buffer overflow is called "trampolining". Here, an attacker will find a pointer to the vulnerable stack buffer and compute the location of their shellcode relative to that pointer. The attacker will then use the overwrite to jump to an instruction already in memory which will make a second jump, this time relative to the pointer. That second jump will branch execution into the shellcode. Suitable instructions are often present in large code. The Metasploit Project, for example, maintains a database of suitable opcodes, though it lists only those found in the Windows operating system.
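For reference, the kind of function these techniques target usually has the shape sketched below: a fixed-size local buffer filled from attacker-controlled input with no length check. The function and variable names are illustrative only.

#include <stdio.h>
#include <string.h>

/* Input longer than 64 bytes spills past `name` into whatever the compiler
   placed above it on the stack, typically saved registers and, eventually,
   the saved return address that the techniques listed above overwrite. */
static void greet(const char *input) {
    char name[64];
    strcpy(name, input);              /* no bounds check */
    printf("Hello, %s\n", name);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);
    return 0;
}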
Heap-based exploitation
A buffer overflow occurring in the heap data area is referred to as a heap overflow and is exploitable in a manner different from that of stack-based overflows. Memory on the heap is dynamically allocated by the application at run-time and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc meta data) and uses the resulting pointer exchange to overwrite a program function pointer.
Microsoft's GDI+ vulnerability in handling JPEGs is an example of the danger a heap overflow can present.
Barriers to exploitation
Manipulation of the buffer, which occurs before it is read or executed, may lead to the failure of an exploitation attempt. These manipulations can mitigate the threat of exploitation, but may not make it impossible. Manipulations could include conversion to upper or lower case, removal of metacharacters and filtering out of non-alphanumeric strings. However, techniques exist to bypass these filters and manipulations, such as alphanumeric shellcode, polymorphic code, self-modifying code, and return-to-libc attacks. The same methods can be used to avoid detection by intrusion detection systems. In some cases, including where code is converted into Unicode, the threat of the vulnerability has been misrepresented by the disclosers as only Denial of Service when in fact the remote execution of arbitrary code is possible.
Practicalities of exploitation
In real-world exploits there are a variety of challenges which need to be overcome for exploits to operate reliably. These factors include null bytes in addresses, variability in the location of shellcode, differences between environments, and various counter-measures in operation.
NOP sled technique
A NOP-sled is the oldest and most widely known technique for exploiting stack buffer overflows. It solves the problem of finding the exact address of the buffer by effectively increasing the size of the target area. To do this, much larger sections of the stack are corrupted with the no-op machine instruction. At the end of the attacker-supplied data, after the no-op instructions, the attacker places an instruction to perform a relative jump to the top of the buffer where the shellcode is located. This collection of no-ops is referred to as the "NOP-sled" because if the return address is overwritten with any address within the no-op region of the buffer, the execution will "slide" down the no-ops until it is redirected to the actual malicious code by the jump at the end. This technique requires the attacker to guess where on the stack the NOP-sled is instead of the comparatively small shellcode.
Because of the popularity of this technique, many vendors of intrusion prevention systems will search for this pattern of no-op machine instructions in an attempt to detect shellcode in use. A NOP-sled does not necessarily contain only traditional no-op machine instructions. Any instruction that does not corrupt the machine state to a point where the shellcode will not run can be used in place of the hardware assisted no-op. As a result, it has become common practice for exploit writers to compose the no-op sled with randomly chosen instructions which will have no real effect on the shellcode execution.
While this method greatly improves the chances that an attack will be successful, it is not without problems. Exploits using this technique still must rely on some amount of luck that they will guess offsets on the stack that are within the NOP-sled region. An incorrect guess will usually result in the target program crashing and could alert the system administrator to the attacker's activities. Another problem is that the NOP-sled requires a much larger amount of memory in which to hold a NOP-sled large enough to be of any use. This can be a problem when the allocated size of the affected buffer is too small and the current depth of the stack is shallow (i.e., there is not much space from the end of the current stack frame to the start of the stack). Despite its problems, the NOP-sled is often the only method that will work for a given platform, environment, or situation, and as such it is still an important technique.
The jump to address stored in a register technique
The "jump to register" technique allows for reliable exploitation of stack buffer overflows without the need for extra room for a NOP-sled and without having to guess stack offsets. The strategy is to overwrite the return pointer with something that will cause the program to jump to a known pointer stored within a register which points to the controlled buffer and thus the shellcode. For example, if register A contains a pointer to the start of a buffer then any jump or call taking that register as an operand can be used to gain control of the flow of execution.
In practice a program may not intentionally contain instructions to jump to a particular register. The traditional solution is to find an unintentional instance of a suitable opcode at a fixed location somewhere within the program memory. One example is an unintentional instance of the i386 jmp esp instruction, whose opcode is FF E4. This two-byte sequence can be found at a one-byte offset from the start of the instruction call DbgPrint at address 0x7C941EED. If an attacker overwrites the program return address with this address the program will first jump to 0x7C941EED, interpret the opcode FF E4 as the jmp esp instruction, and will then jump to the top of the stack and execute the attacker's code.
When this technique is possible the severity of the vulnerability increases considerably. This is because exploitation will work reliably enough to automate an attack with a virtual guarantee of success when it is run. For this reason, this is the technique most commonly used in Internet worms that exploit stack buffer overflow vulnerabilities.
This method also allows shellcode to be placed after the overwritten return address on the Windows platform. Since executables are mostly based at address 0x00400000 and x86 is a little endian architecture, the last byte of the return address must be a null, which terminates the buffer copy and nothing is written beyond that. This limits the size of the shellcode to the size of the buffer, which may be overly restrictive. DLLs are located in high memory (above 0x01000000) and so have addresses containing no null bytes, so this method can remove null bytes (or other disallowed characters) from the overwritten return address. Used in this way, the method is often referred to as "DLL trampolining".
Protective countermeasures
Various techniques have been used to detect or prevent buffer overflows, with various tradeoffs. The following sections describe the choices and implementations available.
Choice of programming language
Assembly, C, and C++ are popular programming languages that are vulnerable to buffer overflow in part because they allow direct access to memory and are not strongly typed. C provides no built-in protection against accessing or overwriting data in any part of memory. More specifically, it does not check that data written to a buffer is within the boundaries of that buffer. The standard C++ libraries provide many ways of safely buffering data, and C++'s Standard Template Library (STL) provides containers that can optionally perform bounds checking if the programmer explicitly calls for checks while accessing data. For example, a vector's member function at() performs a bounds check and throws an out_of_range exception if the bounds check fails. However, C++ behaves just like C if the bounds check is not explicitly called. Techniques to avoid buffer overflows also exist for C.
Languages that are strongly typed and do not allow direct memory access, such as COBOL, Java, Eiffel, Python, and others, prevent buffer overflow in most cases. Many programming languages other than C or C++ provide runtime checking and in some cases even compile-time checking which might send a warning or raise an exception, while C or C++ would overwrite data and continue to execute instructions until erroneous results are obtained, potentially causing the program to crash. Examples of such languages include Ada, Eiffel, Lisp, Modula-2, Smalltalk, OCaml and such C-derivatives as Cyclone, Rust and D. The Java and .NET Framework bytecode environments also require bounds checking on all arrays. Nearly every interpreted language will protect against buffer overflow, signaling a well-defined error condition. Languages that provide enough type information to do bounds checking often provide an option to enable or disable it. Static code analysis can remove many dynamic bound and type checks, but poor implementations and awkward cases can significantly decrease performance. Software engineers should carefully consider the tradeoffs of safety versus performance costs when deciding which language and compiler setting to use.
Use of safe libraries
The problem of buffer overflows is common in the C and C++ languages because they expose low level representational details of buffers as containers for data types. Buffer overflows can be avoided by maintaining a high degree of correctness in code that performs buffer management. It has also long been recommended to avoid standard library functions that are not bounds checked, such as gets, scanf and strcpy. The Morris worm exploited a gets call in fingerd.
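As a hedged illustration of the bounded alternatives, the sketch below uses fgets in place of gets and snprintf in place of sprintf; both take the destination size and will not write past it.

#include <stdio.h>

int main(void) {
    char line[64];

    /* fgets() stops after sizeof(line) - 1 bytes, unlike gets(). */
    if (fgets(line, sizeof(line), stdin) != NULL) {
        char msg[128];
        /* snprintf() truncates rather than overflowing, unlike sprintf(). */
        snprintf(msg, sizeof(msg), "read: %s", line);
        fputs(msg, stdout);
    }
    return 0;
}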
Well-written and tested abstract data type libraries that centralize and automatically perform buffer management, including bounds checking, can reduce the occurrence and impact of buffer overflows. The primary data types in languages in which buffer overflows are common are strings and arrays. Thus, libraries preventing buffer overflows in these data types can provide the vast majority of the necessary coverage. However, failure to use these safe libraries correctly can result in buffer overflows and other vulnerabilities, and naturally any bug in the library is also a potential vulnerability. "Safe" library implementations include "The Better String Library", Vstr and Erwin. The OpenBSD operating system's C library provides the strlcpy and strlcat functions, but these are more limited than full safe library implementations.
In September 2007, Technical Report 24731, prepared by the C standards committee, was published. It specifies a set of functions that are based on the standard C library's string and IO functions, with additional buffer-size parameters. However, the efficacy of these functions for reducing buffer overflows is disputable. They require programmer intervention on a per function call basis that is equivalent to intervention that could make the analogous older standard library functions buffer overflow safe.
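The sketch below shows the style of these bounds-checked interfaces (C11 Annex K, which incorporated TR 24731). Annex K is optional and many C libraries, glibc among them, do not provide it, so the call is guarded on __STDC_LIB_EXT1__; the behavior of the default runtime-constraint handler is implementation-defined.

#define __STDC_WANT_LIB_EXT1__ 1
#include <string.h>
#include <stdio.h>

int main(void) {
#ifdef __STDC_LIB_EXT1__
    char dst[8];
    /* "excessive" does not fit in dst: the runtime-constraint handler is
       invoked and a nonzero value is returned instead of overflowing. */
    errno_t err = strcpy_s(dst, sizeof(dst), "excessive");
    printf("strcpy_s returned %d\n", (int)err);
#else
    puts("Annex K bounds-checked interfaces are not available here.");
#endif
    return 0;
}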
Buffer overflow protection
Buffer overflow protection is used to detect the most common buffer overflows by checking that the stack has not been altered when a function returns. If it has been altered, the program exits with a segmentation fault. Three such systems are Libsafe, and the StackGuard and ProPolice gcc patches.
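A quick way to see such a check in action is sketched below; the program deliberately overflows a small buffer, and when built with gcc or clang's -fstack-protector-strong the overwritten canary is detected on return and the process is aborted with a "stack smashing detected" message rather than returning through a corrupted frame. The file and program names are arbitrary.

/* Build and run, assuming gcc or clang:
     cc -O0 -fstack-protector-strong canary_demo.c -o canary_demo
     ./canary_demo AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA              */
#include <string.h>

int main(int argc, char **argv) {
    char buf[8];
    if (argc > 1)
        strcpy(buf, argv[1]);   /* deliberate overflow for the demonstration */
    return 0;
}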
Microsoft's implementation of Data Execution Prevention (DEP) mode explicitly protects the pointer to the Structured Exception Handler (SEH) from being overwritten.
Stronger stack protection is possible by splitting the stack in two: one for data and one for function returns. This split is present in the Forth language, though it was not a security-based design decision. Regardless, this is not a complete solution to buffer overflows, as sensitive data other than the return address may still be overwritten.
This type of protection is also not entirely accurate because it does not detect all attacks. Systems like StackGuard are more centered around the behavior of the attacks, which makes them efficient and faster in comparison to range-check systems.
Pointer protection
Buffer overflows work by manipulating pointers, including stored addresses. PointGuard was proposed as a compiler-extension to prevent attackers from reliably manipulating pointers and addresses. The approach works by having the compiler add code to automatically XOR-encode pointers before and after they are used. Theoretically, because the attacker does not know what value will be used to encode and decode the pointer, one cannot predict what the pointer will point to if it is overwritten with a new value. PointGuard was never released, but Microsoft implemented a similar approach beginning in Windows XP SP2 and Windows Server 2003 SP1. Rather than implement pointer protection as an automatic feature, Microsoft added an API routine that can be called. This allows for better performance (because it is not used all of the time), but places the burden on the programmer to know when its use is necessary.
Because XOR is linear, an attacker may be able to manipulate an encoded pointer by overwriting only the lower bytes of an address. This can allow an attack to succeed if the attacker can attempt the exploit multiple times or complete an attack by causing a pointer to point to one of several locations (such as any location within a NOP sled). Microsoft added a random rotation to their encoding scheme to address this weakness to partial overwrites.
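A minimal sketch of the XOR-encoding idea described above follows; the key below is a placeholder (a real implementation derives a secret per-process value at startup), and the helper names are illustrative.

#include <stdint.h>
#include <stdio.h>

static uintptr_t pointer_key;   /* assumed to be set from a random source at startup */

/* Pointers are stored XOR-encoded and decoded only at the point of use, so an
   attacker who overwrites the stored value without knowing the key cannot
   choose where the decoded pointer will land. */
static void *encode_ptr(void *p) { return (void *)((uintptr_t)p ^ pointer_key); }
static void *decode_ptr(void *p) { return (void *)((uintptr_t)p ^ pointer_key); }

int main(void) {
    pointer_key = 0x5bd1e995u;            /* placeholder, not actually random */
    int value = 42;
    void *stored = encode_ptr(&value);    /* what would live in memory        */
    int *usable = decode_ptr(stored);     /* decoded immediately before use   */
    printf("%d\n", *usable);
    return 0;
}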
Executable space protection
Executable space protection is an approach to buffer overflow protection that prevents execution of code on the stack or the heap. An attacker may use buffer overflows to insert arbitrary code into the memory of a program, but with executable space protection, any attempt to execute that code will cause an exception.
Some CPUs support a feature called NX ("No eXecute") or XD ("eXecute Disabled") bit, which in conjunction with software, can be used to mark pages of data (such as those containing the stack and the heap) as readable and writable but not executable.
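On POSIX systems this policy is expressed through page protection flags; the sketch below (Linux/BSD, assuming MAP_ANONYMOUS is available) maps a buffer readable and writable but not executable, so jumping into it would fault.

#include <sys/mman.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    size_t len = 4096;
    /* Readable and writable, but deliberately *not* PROT_EXEC. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    memset(buf, 0xC3, len);   /* fill with x86 "ret" opcodes */
    /* ((void (*)(void))buf)();   jumping here would fault on an NX-enforcing system */

    munmap(buf, len);
    return 0;
}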
Some Unix operating systems (e.g. OpenBSD, macOS) ship with executable space protection (e.g. W^X). Some optional packages include:
PaX
Exec Shield
Openwall
Newer variants of Microsoft Windows also support executable space protection, called Data Execution Prevention. Proprietary add-ons include:
BufferShield
StackDefender
Executable space protection does not generally protect against return-to-libc attacks, or any other attack that does not rely on the execution of the attacker's code. However, on 64-bit systems using ASLR, as described below, executable space protection makes it far more difficult to execute such attacks.
Address space layout randomization
Address space layout randomization (ASLR) is a computer security feature that involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space.
Randomization of the virtual memory addresses at which functions and variables can be found can make exploitation of a buffer overflow more difficult, but not impossible. It also forces the attacker to tailor the exploitation attempt to the individual system, which foils the attempts of internet worms. A similar but less effective method is to rebase processes and libraries in the virtual address space.
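One simple way to observe this randomization is to print a few addresses and run the program repeatedly, as in the sketch below; with ASLR enabled the values change between runs, and with it disabled (for example under "setarch --addr-no-randomize" on Linux) they repeat.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack;
    void *on_heap = malloc(1);

    /* Stack and heap placement differ from run to run when ASLR is active;
       shared-library and executable base addresses shift in the same way. */
    printf("stack: %p  heap: %p\n", (void *)&on_stack, on_heap);

    free(on_heap);
    return 0;
}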
Deep packet inspection
The use of deep packet inspection (DPI) can detect, at the network perimeter, very basic remote attempts to exploit buffer overflows by use of attack signatures and heuristics. This technique can block packets that have the signature of a known attack. It was formerly used in situations in which a long series of No-Operation instructions (known as a NOP-sled) was detected and the location of the exploit's payload was slightly variable.
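A toy illustration of this kind of signature matching is sketched below: it merely scans a byte buffer for a long run of the x86 NOP opcode (0x90). The threshold and packet contents are arbitrary, and, as the next paragraph notes, real sleds can be encoded precisely to defeat such a naive match.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define MIN_SLED 32   /* arbitrary threshold for this illustration */

/* Returns 1 if the buffer contains a run of at least MIN_SLED 0x90 bytes. */
static int looks_like_nop_sled(const uint8_t *p, size_t n) {
    size_t run = 0;
    for (size_t i = 0; i < n; i++) {
        run = (p[i] == 0x90) ? run + 1 : 0;
        if (run >= MIN_SLED)
            return 1;
    }
    return 0;
}

int main(void) {
    uint8_t pkt[64] = {0};
    for (int i = 0; i < 40; i++)
        pkt[i] = 0x90;                         /* simulated classic sled */
    printf("%s\n", looks_like_nop_sled(pkt, sizeof(pkt)) ? "flagged" : "clean");
    return 0;
}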
Packet scanning is not an effective method since it can only prevent known attacks and there are many ways that a NOP-sled can be encoded. Shellcode used by attackers can be made alphanumeric, metamorphic, or self-modifying to evade detection by heuristic packet scanners and intrusion detection systems.
Testing
Checking for buffer overflows and patching the bugs that cause them helps prevent buffer overflows. One common automated technique for discovering them is fuzzing. Edge case testing can also uncover buffer overflows, as can static analysis. Once a potential buffer overflow is detected it should be patched. This makes the testing approach useful for software that is in development, but less useful for legacy software that is no longer maintained or supported.
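A minimal fuzzing harness in the libFuzzer style is sketched below; parse_record is a deliberately buggy stand-in for whatever routine is under test, and building with "clang -g -fsanitize=fuzzer,address" lets AddressSanitizer flag the out-of-bounds write as soon as the fuzzer finds a triggering input.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the code under test: it trusts a length byte taken from the
   input, so a crafted input overflows `field`. */
static void parse_record(const uint8_t *data, size_t size) {
    char field[16];
    if (size < 1)
        return;
    size_t len = data[0];            /* attacker-controlled length         */
    if (size - 1 < len)
        return;                      /* length fits the input ...          */
    memcpy(field, data + 1, len);    /* ... but not necessarily the buffer */
    (void)field;
}

/* libFuzzer entry point; the fuzzer supplies main() and feeds mutated inputs. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}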
History
Buffer overflows were understood and partially publicly documented as early as 1972, when the Computer Security Technology Planning Study laid out the technique: "The code performing this function does not check the source and destination addresses properly, permitting portions of the monitor to be overlaid by the user. This can be used to inject code into the monitor that will permit the user to seize control of the machine." Today, the monitor would be referred to as the kernel.
The earliest documented hostile exploitation of a buffer overflow was in 1988. It was one of several exploits used by the Morris worm to propagate itself over the Internet. The program exploited was a service on Unix called finger. Later, in 1995, Thomas Lopatic independently rediscovered the buffer overflow and published his findings on the Bugtraq security mailing list. A year later, in 1996, Elias Levy (also known as Aleph One) published in Phrack magazine the paper "Smashing the Stack for Fun and Profit", a step-by-step introduction to exploiting stack-based buffer overflow vulnerabilities.
Since then, at least two major internet worms have exploited buffer overflows to compromise a large number of systems. In 2001, the Code Red worm exploited a buffer overflow in Microsoft's Internet Information Services (IIS) 5.0 and in 2003 the SQL Slammer worm compromised machines running Microsoft SQL Server 2000.
In 2003, buffer overflows present in licensed Xbox games were exploited to allow unlicensed software, including homebrew games, to run on the console without the need for hardware modifications known as modchips. The PS2 Independence Exploit also used a buffer overflow to achieve the same for the PlayStation 2. The Twilight hack accomplished the same with the Wii, using a buffer overflow in The Legend of Zelda: Twilight Princess.
See also
Billion laughs
Buffer over-read
Coding conventions
Computer security
End-of-file
Heap overflow
Ping of death
Port scanner
Return-to-libc attack
Safety-critical system
Security-focused operating system
Self-modifying code
Software quality
Shellcode
Stack buffer overflow
Uncontrolled format string
References
External links
"Discovering and exploiting a remote buffer overflow vulnerability in an FTP server" by Raykoid666
"Smashing the Stack for Fun and Profit" by Aleph One
CERT Secure Coding Standards
CERT Secure Coding Initiative
Secure Coding in C and C++
SANS: inside the buffer overflow attack
"Advances in adjacent memory overflows" by Nomenumbra
A Comparison of Buffer Overflow Prevention Implementations and Weaknesses
More Security Whitepapers about Buffer Overflows
Chapter 12: Writing Exploits III from Sockets, Shellcode, Porting & Coding: Reverse Engineering Exploits and Tool Coding for Security Professionals by James C. Foster (). Detailed explanation of how to use Metasploit to develop a buffer overflow exploit from scratch.
Computer Security Technology Planning Study, James P. Anderson, ESD-TR-73-51, ESD/AFSC, Hanscom AFB, Bedford, MA 01731 (October 1972) [NTIS AD-758 206]
"Buffer Overflows: Anatomy of an Exploit" by Nevermore
Secure Programming with GCC and GLibc (2008), by Marcel Holtmann
"Criação de Exploits com Buffer Overflor – Parte 0 – Um pouco de teoria " (2018), by Helvio Junior (M4v3r1ck)
Software bugs
Computer memory
Computer security exploits
Articles with example C code | Buffer overflow | ["Technology"] | 5,369 | ["Computer security exploits"] |
4,393 | https://en.wikipedia.org/wiki/Bioterrorism | Bioterrorism is terrorism involving the intentional release or dissemination of biological agents. These agents include bacteria, viruses, insects, fungi, and/or their toxins, and may be in a naturally occurring or a human-modified form, in much the same way as in biological warfare. Further, modern agribusiness is vulnerable to anti-agricultural attacks by terrorists, and such attacks can seriously damage the economy as well as consumer confidence. The latter destructive activity is called agrobioterrorism and is a subtype of agro-terrorism.
Definition
Bioterrorism agents are typically found in nature, but could be mutated or altered to increase their ability to cause disease, make them resistant to current medicines, or to increase their ability to be spread into the environment. Biological agents can be spread through the air, water, or in food. Biological agents are attractive to terrorists because they are extremely difficult to detect and do not cause illness for several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread from person to person and some, like anthrax, cannot.
Bioterrorism may be favored because biological agents are relatively easy and inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic beyond the actual physical damage. Military leaders, however, have learned that, as a military asset, bioterrorism has some important limitations; it is difficult to use a bioweapon in a way that only affects the enemy and not friendly forces. A biological weapon is useful to terrorists mainly as a method of creating mass panic and disruption to a state or a country. However, technologists such as Bill Joy have warned of the potential power which genetic engineering might place in the hands of future bio-terrorists.
The use of agents that do not cause harm to humans, but disrupt the economy, has also been discussed. One such pathogen is the foot-and-mouth disease (FMD) virus, which is capable of causing widespread economic damage and public concern (as witnessed in the 2001 and 2007 FMD outbreaks in the UK), while having almost no capacity to infect humans.
History
By the time World War I began, attempts to use anthrax were directed at animal populations. This generally proved to be ineffective.
Shortly after the start of World War I, Germany launched a biological sabotage campaign in the United States, Russia, Romania, and France. At that time, Anton Dilger lived in Germany, but in 1915 he was sent to the United States carrying cultures of glanders, a virulent disease of horses and mules. Dilger set up a laboratory in his home in Chevy Chase, Maryland. He used stevedores working the docks in Baltimore to infect horses with glanders while they were waiting to be shipped to Britain. Dilger was under suspicion as being a German agent, but was never arrested. Dilger eventually fled to Madrid, Spain, where he died during the Influenza Pandemic of 1918. In 1916, the Russians arrested a German agent with similar intentions. Germany and its allies infected French cavalry horses and many of Russia's mules and horses on the Eastern Front. These actions hindered artillery and troop movements, as well as supply convoys.
In 1972, police in Chicago arrested two college students, Allen Schwander and Stephen Pera, who had planned to poison the city's water supply with typhoid and other bacteria. Schwander had founded a terrorist group, "R.I.S.E.", while Pera collected and grew cultures from the hospital where he worked. The two men fled to Cuba after being released on bail. Schwander died of natural causes in 1974, while Pera returned to the U.S. in 1975 and was put on probation.
In 1979, anthrax spores killed around 66 people after being unintentionally released from a military lab near Sverdlovsk, Russia. This outbreak of inhalational anthrax provided much of what scientists now know about clinical anthrax. Soviet officials and physicians claimed the epidemic was caused by the consumption of infected game meat, but further investigation showed that the source of infection was inhaled spores. There is continuing debate about whether the release was intentional, and some speculate that it was deliberate on the part of the Soviet government.
In 1980, the World Health Organization (WHO) announced the eradication of smallpox, a highly contagious and incurable disease. Although the disease has been eliminated in the wild, frozen stocks of smallpox virus are still maintained by the governments of the United States and Russia. Disastrous consequences are feared if rogue politicians or terrorists were to get hold of the smallpox strains. Since vaccination programs are now terminated, the world population is more susceptible to smallpox than ever before.
In Oregon in 1984, followers of the Bhagwan Shree Rajneesh attempted to control a local election by incapacitating the local population. They infected salad bars in 11 restaurants, produce in grocery stores, doorknobs, and other public domains with Salmonella typhimurium bacteria in the city of The Dalles, Oregon. The attack infected 751 people with severe food poisoning. There were no fatalities. This incident was the first known bioterrorist attack in the United States in the 20th century. It was also the single largest bioterrorism attack on U.S. soil.
In June 1993, the religious group Aum Shinrikyo released anthrax in Tokyo. Eyewitnesses reported a foul odor. The attack was a failure, infecting not a single person, because the group had used the vaccine strain of the bacterium. The spores recovered from the site of the attack were identical to an anthrax vaccine strain that was given to animals at the time; such vaccine strains are missing the genes that cause a symptomatic response.
In September and October 2001, several cases of anthrax broke out in the United States, apparently deliberately caused. Letters laced with infectious anthrax were concurrently delivered to news media offices and the U.S. Congress. The letters killed five people.
Scenarios
There are several plausible scenarios for how terrorists might employ biological agents. In 2000, tests conducted by various US agencies showed that indoor attacks in densely populated spaces are much more serious than outdoor attacks. Such enclosed spaces include large buildings, trains, indoor arenas, theaters, malls, and tunnels. Countermeasures against such scenarios include building architecture and ventilation-system engineering. In 1993, sewage spilled into a river in Milwaukee, Wisconsin, was subsequently drawn into the water system and affected 400,000 people; the disease-causing organism was Cryptosporidium parvum. This man-made disaster can serve as a template for a terrorist scenario. Nevertheless, terrorist scenarios are considered more likely near the points of delivery than at the water sources upstream of treatment; release of biological agents is more plausible for a single building or a neighborhood. Countermeasures against this scenario include further limiting access to water supply systems, tunnels, and infrastructure. Agricultural crop-duster flights might also be misused as delivery devices for biological agents; countermeasures against this scenario include background checks of crop-dusting company employees and surveillance procedures.
In the most common hoax scenario, no biological agents are employed. For instance, an envelope with powder in it that says, “You've just been exposed to anthrax.” Such hoaxes have been shown to have a large psychological impact on the population.
Anti-agriculture attacks are considered to require relatively little expertise and technology. Biological agents that attack livestock, fish, vegetation, and crops are mostly not contagious to humans and are therefore easier for attackers to handle. Even a few cases of infection can disrupt a country's agricultural production and exports for months, as evidenced by FMD outbreaks.
Types of agents
Under current United States law, bio-agents which have been declared by the U.S. Department of Health and Human Services or the U.S. Department of Agriculture to have the "potential to pose a severe threat to public health and safety" are officially defined as "select agents." The CDC categorizes these agents (A, B or C) and administers the Select Agent Program, which regulates the laboratories which may possess, use, or transfer select agents within the United States. As with US attempts to categorize harmful recreational drugs, designer viruses are not yet categorized, and avian H5N1 has been shown in a laboratory setting to achieve high mortality and human-to-human transmissibility.
Category A
These high-priority agents pose a risk to national security, can be easily transmitted and disseminated, result in high mortality, have potential major public health impact, may cause public panic, or require special action for public health preparedness.
SARS and COVID-19, though not as lethal as some other diseases, were concerning to scientists and policymakers because of their potential for social and economic disruption. After the global containment of SARS, United States President George W. Bush stated "...A global influenza pandemic that infects millions and lasts from one to three years could be far worse."
Tularemia or "rabbit fever": Tularemia has a very low fatality rate if treated, but can severely incapacitate. The disease is caused by the Francisella tularensis bacterium, and can be contracted through contact with fur, inhalation, ingestion of contaminated water or insect bites. Francisella tularensis is very infectious. A small number of organisms (10–50 or so) can cause disease. If F. tularensis were used as a weapon, the bacteria would likely be made airborne for exposure by inhalation. People who inhale an infectious aerosol would generally experience severe respiratory illness, including life-threatening pneumonia and systemic infection, if they are not treated. The bacteria that cause tularemia occur widely in nature and could be isolated and grown in quantity in a laboratory, although manufacturing an effective aerosol weapon would require considerable sophistication.
Anthrax: Anthrax is a non-contagious disease caused by the spore-forming bacterium Bacillus anthracis. Because the organism forms hardy spores, it can enter the body through the skin and cause abrupt symptoms within 24 hours of exposure. Dispersal of the pathogen in densely populated areas is said to carry a mortality rate of less than one percent for cutaneous exposure but ninety percent or higher for untreated inhalational infections. When discovered early, anthrax can be cured by administering antibiotics (such as ciprofloxacin). Its first modern use in biological warfare was when Scandinavian "freedom fighters" supplied by the German General Staff used anthrax, with unknown results, against the Imperial Russian Army in Finland in 1916. In 1993, Aum Shinrikyo used anthrax in an unsuccessful attack in Tokyo that caused no fatalities. Anthrax was used in a series of attacks by a microbiologist at the US Army Medical Research Institute of Infectious Diseases on the offices of several United States senators in late 2001. The anthrax was in powder form and was delivered by mail. The attack caused seven cases of cutaneous anthrax and eleven cases of inhalation anthrax, five of them fatal; an estimated 10 to 26 further cases were prevented by the treatment supplied to over 30,000 potentially exposed individuals. Anthrax is one of the few biological agents that federal employees have been vaccinated against. In the US, an anthrax vaccine, Anthrax Vaccine Adsorbed (AVA), exists and requires five injections for stable use; other anthrax vaccines also exist. The strain used in the 2001 anthrax attacks was identical to the strain used by USAMRIID.
Smallpox: Smallpox is a highly contagious virus. It is transmitted easily through the atmosphere and has a high mortality rate (20–40%). Smallpox was eradicated in the world in the 1970s, thanks to a worldwide vaccination program. However, some virus samples are still available in Russian and American laboratories. Some believe that after the collapse of the Soviet Union, cultures of smallpox have become available in other countries. Although people born pre-1970 will have been vaccinated for smallpox under the WHO program, the effectiveness of vaccination is limited since the vaccine provides high level of immunity for only 3 to 5 years. Revaccination's protection lasts longer. As a biological weapon smallpox is dangerous because of the highly contagious nature of both the infected and their pox. Also, the infrequency with which vaccines are administered among the general population since the eradication of the disease would leave most people unprotected in the event of an outbreak. Smallpox occurs only in humans, and has no external hosts or vectors.
Botulinum toxin: The neurotoxin Botulinum is the deadliest toxin known to man, and is produced by the bacterium Clostridium botulinum. Botulism causes death by respiratory failure and paralysis. Furthermore, the toxin is readily available worldwide due to its cosmetic applications in injections.
Bubonic plague: Plague is a disease caused by the Yersinia pestis bacterium. Rodents are the normal host of plague, and the disease is transmitted to humans by flea bites and occasionally by aerosol in the form of pneumonic plague. The disease has a history of use in biological warfare dating back many centuries, and is considered a threat due to its ease of culture and ability to remain in circulation among local rodents for a long period of time. The weaponized threat comes mainly in the form of pneumonic plague (infection by inhalation). It was the disease that caused the Black Death in Medieval Europe.
Viral hemorrhagic fevers: This includes hemorrhagic fevers caused by members of the family Filoviridae (Marburg virus and Ebola virus), and by the family Arenaviridae (for example Lassa virus and Machupo virus). Ebola virus disease, in particular, has caused high fatality rates ranging from 25 to 90% with a 50% average. No cure currently exists, although vaccines are in development. The Soviet Union investigated the use of filoviruses for biological warfare, and the Aum Shinrikyo group unsuccessfully attempted to obtain cultures of Ebola virus. Death from Ebola virus disease is commonly due to multiple organ failure and hypovolemic shock. Marburg virus was first discovered in Marburg, Germany. No treatments currently exist aside from supportive care. The arenaviruses have a somewhat reduced case-fatality rate compared to disease caused by filoviruses, but are more widely distributed, chiefly in central Africa and South America.
Category B
Category B agents are moderately easy to disseminate and have low mortality rates.
Brucellosis (Brucella species)
Epsilon toxin of Clostridium perfringens
Food safety threats (for example, Salmonella species, E. coli O157:H7, Shigella, Staphylococcus aureus)
Glanders (Burkholderia mallei)
Melioidosis (Burkholderia pseudomallei)
Psittacosis (Chlamydia psittaci)
Q fever (Coxiella burnetii)
Ricin toxin from Ricinus communis (castor beans)
Abrin toxin from Abrus precatorius (Rosary peas)
Staphylococcal enterotoxin B
Typhus (Rickettsia prowazekii)
Viral encephalitis (alphaviruses, for example: Venezuelan equine encephalitis, eastern equine encephalitis, western equine encephalitis)
Water supply threats (for example, Vibrio cholerae, Cryptosporidium parvum)
Category C
Category C agents are emerging pathogens that might be engineered for mass dissemination because of their availability, ease of production and dissemination, high mortality rate, or ability to cause a major health impact.
Nipah virus
Hantavirus
Planning and monitoring
Planning may involve the development of biological identification systems. Until recently in the United States, most biological defense strategies have been geared to protecting soldiers on the battlefield rather than ordinary people in cities. Financial cutbacks have limited the tracking of disease outbreaks. Some outbreaks, such as food poisoning due to E. coli or Salmonella, could be of either natural or deliberate origin.
Global defense strategies have also been put into place, including the introduction of the Biological and Toxin Weapons Convention in 1975. A majority of countries across the globe (144) participated in the convention, but a handful chose not to take part. Many of the countries that opted out of the convention are located in the Middle East or are former Soviet states.
Preparedness
Export controls on biological agents are not applied uniformly, providing terrorists a route for acquisition. Laboratories are working on advanced detection systems to provide early warning, identify contaminated areas and populations at risk, and to facilitate prompt treatment. Methods for predicting the use of biological agents in urban areas as well as assessing the area for the hazards associated with a biological attack are being established in major cities. In addition, forensic technologies are working on identifying biological agents, their geographical origins and/or their initial source. Efforts include decontamination technologies to restore facilities without causing additional environmental concerns.
Early detection and rapid response to bioterrorism depend on close cooperation between public health authorities and law enforcement; however, such cooperation is lacking. National detection assets and vaccine stockpiles are not useful if local and state officials do not have access to them.
Aspects of protection against bioterrorism in the United States include:
Detection and resilience strategies in combating bioterrorism. This occurs primarily through the efforts of the Office of Health Affairs (OHA), a part of the Department of Homeland Security (DHS), whose role is to prepare for an emergency situation that impacts the health of the American populace. Detection has two primary technological factors. First there is OHA's BioWatch program in which collection devices are disseminated to thirty high risk areas throughout the country to detect the presence of aerosolized biological agents before symptoms present in patients. This is significant primarily because it allows a more proactive response to a disease outbreak rather than the more passive treatment of the past.
Implementation of the Generation-3 automated detection system. This advancement is significant because it enables action to be taken in four to six hours due to its automatic response system, whereas the previous system required aerosol detectors to be manually transported to laboratories. Resilience is a multifaceted issue as well, as addressed by OHA. One way in which this is ensured is through exercises that establish preparedness; programs like the Anthrax Response Exercise Series exist to ensure that, regardless of the incident, all emergency personnel will be aware of the role they must fill. Moreover, by providing information and education to public leaders, emergency medical services and all employees of the DHS, OHA suggests it can significantly decrease the impact of bioterrorism.
Enhancing the technological capabilities of first responders is accomplished through numerous strategies. The first of these strategies was developed by the Science and Technology Directorate (S&T) of DHS to ensure that the danger of suspicious powders could be effectively assessed, (as many dangerous biological agents such as anthrax exist as a white powder). By testing the accuracy and specificity of commercially available systems used by first responders, the hope is that all biologically harmful powders can be rendered ineffective.
Enhanced equipment for first responders. One recent advancement is the commercialization of a new form of Tyvek™ armor that protects first responders and patients from chemical and biological contaminants. There is also a new generation of self-contained breathing apparatus (SCBA) that has recently been made more robust against bioterrorism agents. Together, these technologies form a relatively strong deterrent to bioterrorism. New York City, in addition, has numerous organizations and strategies of its own that serve to deter and respond to bioterrorism, some of which are described below.
Excelsior Challenge. In the second week of September 2016, the state of New York held a large emergency response training exercise called the Excelsior Challenge, with over 100 emergency responders participating. According to WKTV, "This is the fourth year of the Excelsior Challenge, a training exercise designed for police and first responders to become familiar with techniques and practices should a real incident occur." The event was held over three days and hosted by the State Preparedness Training Center in Oriskany, New York. Participants included bomb squads, canine handlers, tactical team officers and emergency medical services. In an interview with Homeland Preparedness News, Bob Stallman, assistant director at the New York State Preparedness Training Center, said, "We're constantly seeing what's happening around the world and we tailor our training courses and events for those types of real-world events." For the first time, the 2016 training program implemented New York's new electronic system. The system, called NY Responds, electronically connects every county in New York to aid in disaster response and recovery. As a result, "counties have access to a new technology known as Mutualink, which improves interoperability by integrating telephone, radio, video, and file-sharing into one application to allow local emergency staff to share real-time information with the state and other counties." The State Preparedness Training Center in Oriskany was designed by the State Division of Homeland Security and Emergency Services (DHSES) in 2006. It cost $42 million to construct on over 1100 acres and is available for training 360 days a year. Students from SUNY Albany's College of Emergency Preparedness, Homeland Security and Cybersecurity were able to participate in the 2016 exercise and learn how "DHSES supports law enforcement specialty teams."
Project BioShield. The accrual of vaccines and treatments for potential biological threats, also known as medical countermeasures has been an important aspect in preparing for a potential bioterrorist attack; this took the form of a program beginning in 2004, referred to as Project BioShield. The significance of this program should not be overlooked as “there is currently enough smallpox vaccine to inoculate every United States citizen and a variety of therapeutic drugs to treat the infected.” The Department of Defense also has a variety of laboratories currently working to increase the quantity and efficacy of countermeasures that comprise the national stockpile. Efforts have also been taken to ensure that these medical countermeasures can be disseminated effectively in the event of a bioterrorist attack. The National Association of Chain Drug Stores championed this cause by encouraging the participation of the private sector in improving the distribution of such countermeasures if required.
On a CNN news broadcast in 2011, the CNN chief medical correspondent, Dr. Sanjay Gupta, weighed in on the American government's recent approach to bioterrorist threats. He explains how, even though the United States would be better fending off bioterrorist attacks now than they would be a decade ago, the amount of money available to fight bioterrorism over the last three years has begun to decrease. Looking at a detailed report that examined the funding decrease for bioterrorism in fifty-one American cities, Dr. Gupta stated that the cities "wouldn't be able to distribute vaccines as well" and "wouldn't be able to track viruses." He also said that film portrayals of global pandemics, such as Contagion, were actually quite possible and may occur in the United States under the right conditions.
A news broadcast by MSNBC in 2010 also stressed the low levels of bioterrorism preparedness in the United States. The broadcast stated that a bipartisan report gave the Obama administration a failing grade for its efforts to respond to a bioterrorist attack. The news broadcast invited the former New York City police commissioner, Howard Safir, to explain how the government would fare in combating such an attack. He said that "biological and chemical weapons are probable and relatively easy to disperse." Furthermore, Safir thought that efficiency in bioterrorism preparedness is not necessarily a question of money, but is instead dependent on putting resources in the right places. The broadcast suggested that the nation was not ready for something more serious.
In a September 2016 interview conducted by Homeland Preparedness News, Daniel Gerstein, a senior policy researcher for the RAND Corporation, stresses the importance in preparing for potential bioterrorist attacks on the nation. He implored the U.S. government to take the proper and necessary actions to implement a strategic plan of action to save as many lives as possible and to safeguard against potential chaos and confusion. He believes that because there have been no significant instances of bioterrorism since the anthrax attacks in 2001, the government has allowed itself to become complacent making the country that much more vulnerable to unsuspecting attacks, thereby further endangering the lives of U.S. citizens.
Gerstein formerly served in the Science and Technology Directorate of the Department of Homeland Security from 2011 to 2014. He claims there has not been a serious plan of action since 2004, during George W. Bush's presidency, when Bush issued a Homeland Security directive delegating responsibilities among various federal agencies. He also stated that the blatant mishandling of the Ebola virus outbreak in 2014 attested to the government's lack of preparation. In May 2016, legislation that would create a national defense strategy was introduced in the Senate, at a time when ISIS-affiliated terrorist groups were reported to be getting closer to weaponizing biological agents. That same month, Kenyan officials apprehended two members of an Islamic extremist group preparing to set off a biological bomb containing anthrax. Mohammed Abdi Ali, the believed leader of the group and a medical intern, was arrested along with his wife, a medical student; the two were caught just before carrying out their plan. The Blue Ribbon Study Panel on Biodefense, which comprises a group of experts on national security and government officials and to which Gerstein had previously testified, submitted its National Blueprint for Biodefense to Congress in October 2015, listing its recommendations for devising an effective plan.
Bill Gates said in a February 18, 2017 Business Insider op-ed (published near the time of his Munich Security Conference speech) that it is possible for an airborne pathogen to kill at least 30 million people over the course of a year. In a New York Times report, the Gates Foundation predicted that a modern outbreak similar to the Spanish Influenza pandemic (which killed between 50 million and 100 million people) could end up killing more than 360 million people worldwide, even considering widespread availability of vaccines and other healthcare tools. The report cited increased globalization, rapid international air travel, and urbanization as increased reasons for concern. In a March 9, 2017, interview with CNBC, former U.S. Senator Joe Lieberman, who was co-chair of the bipartisan Blue Ribbon Study Panel on Biodefense, said a worldwide pandemic could end the lives of more people than a nuclear war. Lieberman also expressed worry that a terrorist group like ISIS could develop a synthetic influenza strain and introduce it to the world to kill civilians. In July 2017, Robert C. Hutchinson, former agent at the Department of Homeland Security, called for a "whole-of-government" response to the next global health threat, which he described as including strict security procedures at our borders and proper execution of government preparedness plans.
Also, novel approaches in biotechnology, such as synthetic biology, could be used in the future to design new types of biological warfare agents. Special attention has to be paid to future experiments (of concern) that:
Would demonstrate how to render a vaccine ineffective;
Would confer resistance to therapeutically useful antibiotics or antiviral agents;
Would enhance the virulence of a pathogen or render a nonpathogen virulent;
Would increase transmissibility of a pathogen;
Would alter the host range of a pathogen;
Would enable the evasion of diagnostic/detection tools;
Would enable the weaponization of a biological agent or toxin
Most of the biosecurity concerns in synthetic biology, however, are focused on the role of DNA synthesis and the risk of producing genetic material of lethal viruses (e.g. 1918 Spanish flu, polio) in the lab. The CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed by The Washington Post as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. However, due to its ease of use and accessibility, it has raised a number of ethical concerns, especially surrounding its use in the biohacking space.
Biosurveillance
In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
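The statistical core of such signal detection can be sketched very roughly as follows. This is not the RODS algorithm itself, only a minimal C illustration with invented daily counts (say, over-the-counter sales of a remedy), flagging any day whose count exceeds a moving baseline by several standard deviations.

```c
#include <math.h>
#include <stdio.h>

/* Flag day i if its count exceeds the mean of the previous 'window'
 * days by more than 'k' standard deviations. */
static int is_anomalous(const double *counts, int i, int window, double k) {
    if (i < window) return 0;                   /* not enough history yet */
    double mean = 0.0, var = 0.0;
    for (int j = i - window; j < i; j++) mean += counts[j];
    mean /= window;
    for (int j = i - window; j < i; j++) {
        double d = counts[j] - mean;
        var += d * d;
    }
    return counts[i] > mean + k * sqrt(var / window);
}

int main(void) {
    /* Invented daily counts; the spike on the last day is the kind of
     * deviation a biosurveillance system would raise as a signal. */
    double counts[] = {52, 48, 50, 55, 47, 51, 49, 53, 50, 96};
    int n = sizeof(counts) / sizeof(counts[0]);
    for (int i = 0; i < n; i++)
        if (is_anomalous(counts, i, 7, 3.0))
            printf("possible signal on day %d (count %.0f)\n", i, counts[i]);
    return 0;
}
```

Real systems use considerably more sophisticated time-series and spatial methods, but the underlying idea of comparing current counts against an expected baseline is the same.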
On February 5, 2002, George W. Bush visited the RODS laboratory and used it as a model for a $300 million spending proposal to equip all 50 states with biosurveillance systems. In a speech delivered at the nearby Masonic temple, Bush compared the RODS system to a modern "DEW" line (referring to the Distant Early Warning radar line of the Cold War).
The principles and practices of biosurveillance, a new interdisciplinary science, were defined and described in the Handbook of Biosurveillance, edited by Michael Wagner, Andrew Moore and Ron Aryel, and published in 2006. Biosurveillance is the science of real-time disease outbreak detection. Its principles apply to both natural and man-made epidemics (bioterrorism).
Data which potentially could assist in early detection of a bioterrorism event include many categories of information. Health-related data such as that from hospital computer systems, clinical laboratories, electronic health record systems, medical examiner record-keeping systems, 911 call center computers, and veterinary medical record systems could be of help; researchers are also considering the utility of data generated by ranching and feedlot operations, food processors, drinking water systems, school attendance recording, and physiologic monitors, among others.
In Europe, disease surveillance is beginning to be organized on the continent-wide scale needed to track a biological emergency. The system not only monitors infected persons, but attempts to discern the origin of the outbreak.
Researchers have experimented with devices to detect the existence of a threat:
Tiny electronic chips that would contain living nerve cells to warn of the presence of bacterial toxins (identification of broad range toxins)
Fiber-optic tubes lined with antibodies coupled to light-emitting molecules (identification of specific pathogens, such as anthrax, botulinum, ricin)
Some research shows that ultraviolet avalanche photodiodes offer the high gain, reliability and robustness needed to detect anthrax and other bioterrorism agents in the air. The fabrication methods and device characteristics were described at the 50th Electronic Materials Conference in Santa Barbara on June 25, 2008. Details of the photodiodes were also published in the February 14, 2008, issue of the journal Electronics Letters and the November 2007 issue of the journal IEEE Photonics Technology Letters.
The United States Department of Defense conducts global biosurveillance through several programs, including the Global Emerging Infections Surveillance and Response System.
Another powerful tool developed within New York City for use in countering bioterrorism is the New York City Syndromic Surveillance System. This system is essentially a way of tracking disease progression throughout New York City, and was developed by the New York City Department of Health and Mental Hygiene (NYC DOHMH) in the wake of the 9/11 attacks. The system works by tracking the symptoms of those taken into the emergency department (based on the location of the hospital to which they are taken and their home address) and assessing any patterns in symptoms. These established trends can then be observed by medical epidemiologists to determine if there are any disease outbreaks in any particular locales; maps of disease prevalence can then be created rather easily. This is a beneficial tool in fighting bioterrorism because it provides a means through which such attacks could be discovered early; assuming bioterrorist attacks produce broadly similar patterns of symptoms, this strategy allows New York City to respond quickly to any bioterrorist threat it may face.
Response to bioterrorism incident or threat
Government agencies which would be called on to respond to a bioterrorism incident would include law enforcement, hazardous materials and decontamination units, and emergency medical units, if available.
The US military has specialized units, which can respond to a bioterrorism event; among them are the United States Marine Corps' Chemical Biological Incident Response Force and the U.S. Army's 20th Support Command (CBRNE), which can detect, identify, and neutralize threats, and decontaminate victims exposed to bioterror agents. US response would include the Centers for Disease Control.
Historically, governments and authorities have relied on quarantines to protect their populations. International bodies such as the World Health Organization already devote some of their resources to monitoring epidemics and have served clearing-house roles in historical epidemics.
Media attention toward the seriousness of biological attacks increased in 2013 and 2014. In July 2013, Forbes published an article with the title "Bioterrorism: A Dirty Little Threat With Huge Potential Consequences." In November 2013, Fox News reported on a new strain of botulism, saying that the Centers for Disease Control and Prevention lists botulism as one of two agents that have "the highest risks of mortality and morbidity", noting that there is no antidote for botulism. USA Today reported that the U.S. military in November was trying to develop a vaccine for troops from the bacteria that cause the disease Q fever, an agent the military once used as a biological weapon. In February 2014, the former special assistant and senior director for biodefense policy to President George W. Bush called the bioterrorism risk imminent and uncertain and Congressman Bill Pascrell called for increasing federal measures against bioterrorism as a "matter of life or death." The New York Times wrote a story saying the United States would spend $40 million to help certain low and middle-income countries deal with the threats of bioterrorism and infectious diseases.
Bioterrorism can additionally harm the psychological aspect of victims and the general public. Victims exposed to biological weapons have shown an increased presence of clinical anxiety compared to the normal population.
Bill Gates has warned that bioterrorism could kill more people than nuclear war.
In February 2018, a CNN employee discovered on an airplane a "sensitive, top-secret document in the seatback pouch explaining how the Department of Homeland Security would respond to a bioterrorism attack at the Super Bowl."
2017 U.S. budget proposal affecting bioterrorism programs
President Donald Trump promoted his first budget around keeping America safe. However, one aspect of defense would receive less money: "protecting the nation from deadly pathogens, man-made or natural," according to The New York Times. Agencies tasked with biosecurity get a decrease in funding under the Administration's budget proposal.
For example:
The Office of Public Health Preparedness and Response would be cut by $136 million, or 9.7 percent. The office tracks outbreaks of disease.
The National Center for Emerging and Zoonotic Infectious Diseases would be cut by $65 million, or 11 percent. The center is a branch of the Centers for Disease Control and Prevention that fights threats like anthrax and the Ebola virus, and additionally supports research on HIV/AIDS vaccines.
Within the National Institutes of Health, the National Institute of Allergy and Infectious Diseases (NIAID) would lose 18 percent of its budget. NIAID oversees responses to Zika, Ebola and HIV/AIDS vaccine research.
"The next weapon of mass destruction may not be a bomb," Lawrence O. Gostin, the director of the World Health Organization's Collaborating Center on Public Health Law and Human Rights, told The New York Times. "It may be a tiny pathogen that you can't see, smell or taste, and by the time we discover it, it'll be too late."
Lack of international standards on public health experiments
Tom Inglesby, the CEO and director of the Center for Health Security at the Johns Hopkins Bloomberg School of Public Health and an internationally recognized expert on public health preparedness, pandemics and emerging infectious disease, said in 2017 that the lack of an internationally standardized approval process to guide countries in conducting public health experiments that resurrect an already-eradicated disease increases the risk that the disease could be used in bioterrorism. This was in reference to the 2017 lab synthesis of horsepox by researchers at the University of Alberta, who recreated horsepox, an extinct cousin of the smallpox virus, in order to research new ways to treat cancer.
In popular culture
Incidents
See also
Biodefence
Biological Weapons Convention
Biorisk
Biosecurity
Project Bacchus
Select agent
Global Health Security Initiative
References
Bibliography
Further reading
Resolution 1540 "affirms that the proliferation of nuclear, chemical and biological weapons and their means of delivery constitutes a threat to international peace and security. The resolution obliges States, inter alia, to refrain from supporting by any means non-State actors from developing, acquiring, manufacturing, possessing, transporting, transferring or using nuclear, chemical or biological weapons and their means of delivery".
NOVA: Bioterror
Carus, W. Seth. Working Paper: Bioterrorism and Biocrimes. The Illicit Use of Biological Agents Since 1900, Feb 2001 revision.
United States
Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight (P3CO). Obama Administration. January 9, 2017.
Terrorism by method | Bioterrorism | ["Biology"] | 8,123 | ["Bioterrorism", "Biological warfare"] |
4,403 | https://en.wikipedia.org/wiki/BCS%20theory | In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus.
It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972.
History
Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas".
In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter, "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect and the calculations of specific heats and penetration depths appeared in the December 1957 article, "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory.
In 1986, high-temperature superconductivity was discovered in La-Ba-Cu-O, at temperatures up to 30 K. Subsequent experiments identified more materials with transition temperatures up to about 130 K, considerably above the previous limit of about 30 K. It is well established experimentally that the transition temperature strongly depends on pressure. In general, it is believed that BCS theory alone cannot explain this phenomenon and that other effects are in play. These effects are still not yet fully understood; it is possible that they even control superconductivity at low temperatures for some materials.
Overview
At sufficiently low temperatures, electrons near the Fermi surface become unstable against the formation of Cooper pairs. Cooper showed such binding will occur in the presence of an attractive potential, no matter how weak. In conventional superconductors, an attraction is generally attributed to an electron-lattice interaction. The BCS theory, however, requires only that the potential be attractive, regardless of its origin. In the BCS framework, superconductivity is a macroscopic effect which results from the condensation of Cooper pairs. These have some bosonic properties, and bosons, at sufficiently low temperature, can form a large Bose–Einstein condensate. Superconductivity was simultaneously explained by Nikolay Bogolyubov, by means of the Bogoliubov transformations.
In many superconductors, the attractive interaction between electrons (necessary for pairing) is brought about indirectly by the interaction between the electrons and the vibrating crystal lattice (the phonons). Roughly speaking the picture is the following:
An electron moving through a conductor will attract nearby positive charges in the lattice. This deformation of the lattice causes another electron, with opposite spin, to move into the region of higher positive charge density. The two electrons then become correlated. Because there are a lot of such electron pairs in a superconductor, these pairs overlap very strongly and form a highly collective condensate. In this "condensed" state, the breaking of one pair will change the energy of the entire condensate - not just a single electron, or a single pair. Thus, the energy required to break any single pair is related to the energy required to break all of the pairs (or more than just two electrons). Because the pairing increases this energy barrier, kicks from oscillating atoms in the conductor (which are small at sufficiently low temperatures) are not enough to affect the condensate as a whole, or any individual "member pair" within the condensate. Thus the electrons stay paired together and resist all kicks, and the electron flow as a whole (the current through the superconductor) will not experience resistance. Thus, the collective behavior of the condensate is a crucial ingredient necessary for superconductivity.
Details
BCS theory starts from the assumption that there is some attraction between electrons, which can overcome the Coulomb repulsion. In most materials (in low temperature superconductors), this attraction is brought about indirectly by the coupling of electrons to the crystal lattice (as explained above). However, the results of BCS theory do not depend on the origin of the attractive interaction. For instance, Cooper pairs have been observed in ultracold gases of fermions where a homogeneous magnetic field has been tuned to their Feshbach resonance. The original results of BCS (discussed below) described an s-wave superconducting state, which is the rule among low-temperature superconductors but is not realized in many unconventional superconductors such as the d-wave high-temperature superconductors.
Extensions of BCS theory exist to describe these other cases, although they are insufficient to completely describe the observed features of high-temperature superconductivity.
BCS is able to give an approximation for the quantum-mechanical many-body state of the system of (attractively interacting) electrons inside the metal. This state is now known as the BCS state. In the normal state of a metal, electrons move independently, whereas in the BCS state, they are bound into Cooper pairs by the attractive interaction. The BCS formalism is based on the reduced potential for the electrons' attraction. Within this potential, a variational ansatz for the wave function is proposed. This ansatz was later shown to be exact in the dense limit of pairs. Note that the continuous crossover between the dilute and dense regimes of attracting pairs of fermions is still an open problem, which now attracts a lot of attention within the field of ultracold gases.
Underlying evidence
The hyperphysics website pages at Georgia State University summarize some key background to BCS theory as follows:
Evidence of a band gap at the Fermi level (described as "a key piece in the puzzle")
The existence of a critical temperature and critical magnetic field implied a band gap and suggested a phase transition, but single electrons are forbidden from condensing to the same energy level by the Pauli exclusion principle. The site comments that "a drastic change in conductivity demanded a drastic change in electron behavior". Conceivably, pairs of electrons might perhaps act like bosons instead, which are bound by different condensate rules and do not have the same limitation.
Isotope effect on the critical temperature, suggesting lattice interactions
The Debye frequency of phonons in a lattice is proportional to the inverse of the square root of the mass of lattice ions. It was shown that the superconducting transition temperature of mercury indeed showed the same dependence, by substituting the most abundant natural mercury isotope, 202Hg, with a different isotope, 198Hg.
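As a rough check of the expected size of this effect, assuming nothing more than the simple $T_c \propto M^{-1/2}$ scaling:

$$\frac{T_c(^{198}\mathrm{Hg})}{T_c(^{202}\mathrm{Hg})} \approx \left(\frac{202}{198}\right)^{1/2} \approx 1.01,$$

that is, a shift of roughly one percent in the transition temperature between the two isotopes, which is small but measurable.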
An exponential rise in heat capacity near the critical temperature for some superconductors
An exponential increase in heat capacity near the critical temperature also suggests an energy bandgap for the superconducting material. As superconducting vanadium is warmed toward its critical temperature, its heat capacity increases greatly in a very few degrees; this suggests an energy gap being bridged by thermal energy.
The lessening of the measured energy gap towards the critical temperature
This suggests a type of situation where some kind of binding energy exists but it is gradually weakened as the temperature increases toward the critical temperature. A binding energy suggests two or more particles or other entities that are bound together in the superconducting state. This helped to support the idea of bound particles – specifically electron pairs – and together with the above helped to paint a general picture of paired electrons and their lattice interactions.
Implications
BCS derived several important theoretical predictions that are independent of the details of the interaction, since the quantitative predictions mentioned below hold for any sufficiently weak attraction between the electrons and this last condition is fulfilled for many low temperature superconductors - the so-called weak-coupling case. These have been confirmed in numerous experiments:
The electrons are bound into Cooper pairs, and these pairs are correlated due to the Pauli exclusion principle for the electrons, from which they are constructed. Therefore, in order to break a pair, one has to change energies of all other pairs. This means there is an energy gap for single-particle excitation, unlike in the normal metal (where the state of an electron can be changed by adding an arbitrarily small amount of energy). This energy gap is highest at low temperatures but vanishes at the transition temperature when superconductivity ceases to exist. The BCS theory gives an expression that shows how the gap grows with the strength of the attractive interaction and the (normal phase) single particle density of states at the Fermi level. Furthermore, it describes how the density of states is changed on entering the superconducting state, where there are no electronic states any more at the Fermi level. The energy gap is most directly observed in tunneling experiments and in reflection of microwaves from superconductors.
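A minimal sketch of that expression, in the weak-coupling limit assumed by the original theory, is

$$\Delta(0) \approx 2\,E_D\, e^{-1/N(0)V},$$

where $E_D$ is the Debye cutoff energy of the lattice, $V$ the strength of the attractive interaction, and $N(0)$ the single-particle density of states at the Fermi level; the gap therefore grows with both the coupling strength and the density of states, as stated above.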
BCS theory predicts the dependence of the value of the energy gap Δ at temperature T on the critical temperature Tc. The ratio between the value of the energy gap at zero temperature and the value of the superconducting transition temperature (expressed in energy units) takes the universal value $\Delta(T{=}0) = 1.764\,k_B T_c$, independent of material. Near the critical temperature the relation asymptotes to $\Delta(T \to T_c) \approx 3.06\,k_B T_c \sqrt{1 - T/T_c}$, which is of the form suggested the previous year by M. J. Buckingham, based on the fact that the superconducting phase transition is second order, that the superconducting phase has a mass gap, and on Blevins, Gordy and Fairbank's experimental results the previous year on the absorption of millimeter waves by superconducting tin.
Due to the energy gap, the specific heat of the superconductor is suppressed strongly (exponentially) at low temperatures, there being no thermal excitations left. However, before reaching the transition temperature, the specific heat of the superconductor becomes even higher than that of the normal conductor (measured immediately above the transition) and the ratio of these two values is found to be universally given by 2.5.
BCS theory correctly predicts the Meissner effect, i.e. the expulsion of a magnetic field from the superconductor and the variation of the penetration depth (the extent of the screening currents flowing below the metal's surface) with temperature.
It also describes the variation of the critical magnetic field (above which the superconductor can no longer expel the field but becomes normal conducting) with temperature. BCS theory relates the value of the critical field at zero temperature to the value of the transition temperature and the density of states at the Fermi level.
In its simplest form, BCS gives the superconducting transition temperature Tc in terms of the electron-phonon coupling potential V and the Debye cutoff energy ED: $k_B T_c \approx 1.13\,E_D\,e^{-1/N(0)V}$, where N(0) is the electronic density of states at the Fermi level. For more details, see Cooper pairs.
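As a rough worked illustration, with assumed order-of-magnitude numbers rather than data for any particular material, take a Debye energy corresponding to $E_D/k_B = 300\ \mathrm{K}$ and a dimensionless coupling $N(0)V = 0.25$:

$$T_c \approx 1.13 \times 300\ \mathrm{K} \times e^{-1/0.25} \approx 6\ \mathrm{K},$$

which shows how the exponential factor keeps $T_c$ far below the Debye temperature even for moderate coupling, consistent with the low transition temperatures of conventional superconductors.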
The BCS theory reproduces the isotope effect, which is the experimental observation that for a given superconducting material, the critical temperature is inversely proportional to the square-root of the mass of the isotope used in the material. The isotope effect was reported by two groups on 24 March 1950, who discovered it independently working with different mercury isotopes, although a few days before publication they learned of each other's results at the ONR conference in Atlanta. The two groups are Emanuel Maxwell, and C. A. Reynolds, B. Serin, W. H. Wright, and L. B. Nesbitt. The choice of isotope ordinarily has little effect on the electrical properties of a material, but does affect the frequency of lattice vibrations. This effect suggests that superconductivity is related to vibrations of the lattice. This is incorporated into BCS theory, where lattice vibrations yield the binding energy of electrons in a Cooper pair.
Little–Parks experiment - One of the first indications of the importance of the Cooper-pairing principle.
See also
Magnesium diboride, considered a BCS superconductor
Quasiparticle
Little–Parks effect, one of the first indications of the importance of the Cooper pairing principle.
References
Primary sources
Further reading
John Robert Schrieffer, Theory of Superconductivity (1964).
Michael Tinkham, Introduction to Superconductivity.
Pierre-Gilles de Gennes, Superconductivity of Metals and Alloys.
Schmidt, Vadim Vasil'evich. The physics of superconductors: Introduction to fundamentals and applications. Springer Science & Business Media, 2013.
External links
Hyperphysics page on BCS
Dance analogy of BCS theory as explained by Bob Schrieffer (audio recording)
Mean-Field Theory: Hartree-Fock and BCS in E. Pavarini, E. Koch, J. van den Brink, and G. Sawatzky: Quantum materials: Experiments and Theory, Jülich 2016,
Superconductivity | BCS theory | ["Physics", "Materials_science", "Engineering"] | 2,866 | ["Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance"] |
4,410 | https://en.wikipedia.org/wiki/Brewing | Brewing is the production of beer by steeping a starch source (commonly cereal grains, the most popular of which is barley) in water and fermenting the resulting sweet liquid with yeast. It may be done in a brewery by a commercial brewer, at home by a homebrewer, or communally. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that emerging civilizations, including ancient Egypt, China, and Mesopotamia, brewed beer. Since the nineteenth century the brewing industry has been part of most western economies.
The basic ingredients of beer are water and a fermentable starch source such as malted barley. Most beer is fermented with a brewer's yeast and flavoured with hops. Less widely used starch sources include millet, sorghum and cassava. Secondary sources (adjuncts), such as maize (corn), rice, or sugar, may also be used, sometimes to reduce cost, or to add a feature, such as adding wheat to aid in retaining the foamy head of the beer. The most common starch source is ground cereal or "grist" – the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients.
Steps in the brewing process include malting, milling, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. There are three main fermentation methods: warm, cool and spontaneous. Fermentation may take place in an open or closed fermenting vessel; a secondary fermentation may also occur in the cask or bottle. There are several additional brewing methods, such as Burtonisation, double dropping, and Yorkshire Square, as well as post-fermentation treatment such as filtering, and barrel-ageing.
History
Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests emerging civilizations including China, ancient Egypt, and Mesopotamia brewed beer. Descriptions of various beer recipes can be found in cuneiform (the oldest known writing) from ancient Mesopotamia. In Mesopotamia the brewer's craft was the only profession which derived social sanction and divine protection from female deities/goddesses, specifically: Ninkasi, who covered the production of beer, Siris, who was used in a metonymic way to refer to beer, and Siduri, who covered the enjoyment of beer. In pre-industrial times, and still in many developing countries, women have frequently been the main brewers.
As almost any cereal containing certain sugars can undergo spontaneous fermentation due to wild yeasts in the air, it is possible that beer-like beverages were independently developed throughout the world soon after a tribe or culture had domesticated cereal. Chemical tests of ancient pottery jars reveal that beer was produced as far back as about 7,000 years ago in what is today Iran. This discovery reveals one of the earliest known uses of fermentation and is the earliest evidence of brewing to date. In Mesopotamia, the oldest evidence of beer is believed to be a 6,000-year-old Sumerian tablet depicting people drinking a beverage through reed straws from a communal bowl. A 3,900-year-old Sumerian poem honouring Ninkasi, the patron goddess of brewing, contains the oldest surviving beer recipe, describing the production of beer from barley via bread. The invention of bread and beer has been argued to be responsible for humanity's ability to develop technology and build civilization. The earliest chemically confirmed barley beer to date was discovered at Godin Tepe in the central Zagros Mountains of Iran, where fragments of a jug at least 5,000 years old were found to be coated with beerstone, a by-product of the brewing process. Beer may have been known in Neolithic Europe as far back as 5,000 years ago, and was mainly brewed on a domestic scale.
Ale produced before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant by the end of the 19th century. The development of hydrometers and thermometers changed brewing by allowing the brewer more control of the process, and greater knowledge of the results. Today, the brewing industry is a global business, consisting of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional breweries. More than 133 billion litres (35 billion gallons) are sold per year—producing total global revenues of $294.5 billion (£147.7 billion) in 2006.
Ingredients
The basic ingredients of beer are water; a starch source, such as malted barley, able to be fermented (converted into alcohol); a brewer's yeast to produce the fermentation; and a flavouring, such as hops, to offset the sweetness of the malt. A mixture of starch sources may be used, with a secondary saccharide, such as maize (corn), rice, or sugar, these often being termed adjuncts, especially when used as a lower-cost substitute for malted barley. Less widely used starch sources include millet, sorghum, and cassava root in Africa, potato in Brazil, and agave in Mexico, among others. The most common starch source is ground cereal or "grist" – the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients.
Water
Beer is composed mostly of water. Regions have water with different mineral components; as a result, different regions were originally better suited to making certain types of beer, thus giving them a regional character. For example, Dublin has hard water well suited to making stout, such as Guinness; while Pilsen has soft water well suited to making pale lager, such as Pilsner Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers of pale ales will add gypsum to the local water in a process known as Burtonisation.
Starch source
The starch source in a beer provides the fermentable material and is a key determinant of the strength and flavour of the beer. The most common starch source used in beer is malted grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated grain in a kiln. Malting grain produces enzymes that will allow conversion from starches in the grain into fermentable sugars during the mash process. Different roasting times and temperatures are used to produce different colours of malt from the same grain. Darker malts will produce darker beers.
Nearly all beer includes barley malt as the majority of the starch. This is because of its fibrous husk, which is important not only in the sparging stage of brewing (in which water is washed over the mashed barley grains to form the wort) but also as a rich source of amylase, a digestive enzyme that facilitates conversion of starch into sugars. Other malted and unmalted grains (including wheat, rice, oats, and rye, and, less frequently, maize (corn) and sorghum) may be used. In recent years, a few brewers have produced gluten-free beer made with sorghum with no barley malt for people who cannot digest gluten-containing grains like wheat, barley, and rye.
Hops
Hops are the female flower clusters or seed cones of the hop vine Humulus lupulus, which are used as a flavouring and preservative agent in nearly all beer made today. Hops had been used for medicinal and food flavouring purposes since Roman times; by the 7th century in Carolingian monasteries in what is now Germany, beer was being made with hops, though it was not until the thirteenth century that widespread cultivation of hops for use in beer was recorded. Before the thirteenth century, beer was flavoured with plants such as yarrow, wild rosemary, and bog myrtle, and other ingredients such as juniper berries, aniseed and ginger, which would be combined into a mixture known as gruit and used as hops are now used; between the thirteenth and the sixteenth century, during which hops took over as the dominant flavouring, beer flavoured with gruit was known as ale, while beer flavoured with hops was known as beer. Some beers today, such as Fraoch by the Scottish Heather Ales company and Cervoise Lancelot by the French Brasserie-Lancelot company, use plants other than hops for flavouring.
Hops contain several characteristics that brewers desire in beer: they contribute a bitterness that balances the sweetness of the malt; they provide floral, citrus, and herbal aromas and flavours; they have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms; and they aid in "head retention", the length of time that the foam on top of the beer (the beer head) will last. The preservative in hops comes from the lupulin glands which contain soft resins with alpha and beta acids. Though much studied, the preservative nature of the soft resins is not yet fully understood, though it has been observed that unless stored at a cool temperature, the preservative nature will decrease. Brewing is the sole major commercial use of hops.
Yeast
Yeast is the microorganism that is responsible for fermentation in beer. Yeast metabolises the sugars extracted from grains, which produces alcohol and carbon dioxide, and thereby turns wort into beer. In addition to fermenting the beer, yeast influences the character and flavour.
The dominant types of yeast used to make beer are Saccharomyces cerevisiae, known as ale yeast, and Saccharomyces pastorianus, known as lager yeast; Brettanomyces ferments lambics, and Torulaspora delbrueckii ferments Bavarian weissbier. Before the role of yeast in fermentation was understood, fermentation involved wild or airborne yeasts, and a few styles such as lambics still use this method today. Emil Christian Hansen, a Danish biochemist employed by the Carlsberg Laboratory, developed pure yeast cultures which were introduced into the Carlsberg brewery in 1883, and pure yeast strains are now the main fermenting source used worldwide.
Clarifying agent
Some brewers add one or more clarifying agents to beer, which typically precipitate (collect as a solid) out of the beer along with protein solids and are found only in trace amounts in the finished product. This process makes the beer appear bright and clean, rather than the cloudy appearance of ethnic and older styles of beer such as wheat beers.
Examples of clarifying agents include isinglass, obtained from swim bladders of fish; Irish moss, a seaweed; kappa carrageenan, from the seaweed kappaphycus; polyclar (a commercial brand of clarifier); and gelatin. If a beer is marked "suitable for Vegans", it was generally clarified either with seaweed or with artificial agents, although the "Fast Cask" method invented by Marston's in 2009 may provide another method.
Brewing process
There are several steps in the brewing process, which may include malting, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. The brewing equipment needed to make beer has grown more sophisticated over time, and now covers most aspects of the brewing process.
Malting is the process where barley grain is made ready for brewing. Malting is broken down into three steps in order to help to release the starches in the barley. First, during steeping, the grain is added to a vat with water and allowed to soak for approximately 40 hours. During germination, the grain is spread out on the floor of the germination room for around 5 days. The final part of malting is kilning when the malt goes through a very high temperature drying in a kiln; with gradual temperature increase over several hours. When kilning is complete, the grains are now termed malt, and they will be milled or crushed to break apart the kernels and expose the cotyledon, which contains the majority of the carbohydrates and sugars; this makes it easier to extract the sugars during mashing.
Mashing converts the starches released during the malting stage into sugars that can be fermented. The milled grain is mixed with hot water in a large vessel known as a mash tun. In this vessel, the grain and water are mixed together to create a cereal mash. During the mash, naturally occurring enzymes present in the malt convert the starches (long chain carbohydrates) in the grain into smaller molecules or simple sugars (mono-, di-, and tri-saccharides). This "conversion" is called saccharification which occurs between the temperatures . The result of the mashing process is a sugar-rich liquid or "wort", which is then strained through the bottom of the mash tun in a process known as lautering. Prior to lautering, the mash temperature may be raised to about (known as a mashout) to free up more starch and reduce mash viscosity. Additional water may be sprinkled on the grains to extract additional sugars (a process known as sparging).
The wort is moved into a large tank known as a "copper" or kettle where it is boiled with hops and sometimes other ingredients such as herbs or sugars. This stage is where many chemical reactions take place, and where important decisions about the flavour, colour, and aroma of the beer are made. The boiling process serves to terminate enzymatic processes, precipitate proteins, isomerize hop resins, and concentrate and sterilize the wort. Hops add flavour, aroma and bitterness to the beer. At the end of the boil, the hopped wort settles to clarify in a vessel called a "whirlpool", where the more solid particles in the wort are separated out.
After the whirlpool, the wort is drawn away from the compacted hop trub, and rapidly cooled via a heat exchanger to a temperature where yeast can be added. A variety of heat exchanger designs are used in breweries, with the most common a plate-style. Water or glycol run in channels in the opposite direction of the wort, causing a rapid drop in temperature. It is very important to quickly cool the wort to a level where yeast can be added safely as yeast is unable to grow in very high temperatures, and will start to die in temperatures above . After the wort goes through the heat exchanger, the cooled wort goes into a fermentation tank. A type of yeast is selected and added, or "pitched", to the fermentation tank. When the yeast is added to the wort, the fermenting process begins, where the sugars turn into alcohol, carbon dioxide and other components. When the fermentation is complete the brewer may rack the beer into a new tank, called a conditioning tank. Conditioning of the beer is the process in which the beer ages, the flavour becomes smoother, and flavours that are unwanted dissipate. After conditioning for a week to several months, the beer may be filtered and force carbonated for bottling, or fined in the cask.
Mashing
Mashing is the process of combining a mix of milled grain (typically malted barley with supplementary grains such as corn, sorghum, rye or wheat), known as the "grist" or "grain bill", and water, known as "liquor", and heating this mixture in a vessel called a "mash tun". Mashing is a form of steeping, and defines the act of brewing, such as with making tea, sake, and soy sauce. Technically, wine, cider and mead are not brewed but rather vinified, as there is no steeping process involving solids. Mashing allows the enzymes in the malt to break down the starch in the grain into sugars, typically maltose to create a malty liquid called wort. There are two main methods – infusion mashing, in which the grains are heated in one vessel; and decoction mashing, in which a proportion of the grains are boiled and then returned to the mash, raising the temperature. Mashing involves pauses at certain temperatures (notably ), and takes place in a "mash tun" – an insulated brewing vessel with a false bottom. The end product of mashing is called a "mash".
Mashing usually takes 1 to 2 hours, and during this time the various temperature rests activate different enzymes depending upon the type of malt being used, its modification level, and the intention of the brewer. The activity of these enzymes convert the starches of the grains to dextrins and then to fermentable sugars such as maltose. A mash rest from activates various proteases, which break down proteins that might otherwise cause the beer to be hazy. This rest is generally used only with undermodified (i.e. undermalted) malts which are decreasingly popular in Germany and the Czech Republic, or non-malted grains such as corn and rice, which are widely used in North American beers. A mash rest at activates β-glucanase, which breaks down gummy β-glucans in the mash, making the sugars flow out more freely later in the process. In the modern mashing process, commercial fungal based β-glucanase may be added as a supplement. Finally, a mash rest temperature of is used to convert the starches in the malt to sugar, which is then usable by the yeast later in the brewing process. Doing the latter rest at the lower end of the range favours β-amylase enzymes, producing more low-order sugars like maltotriose, maltose, and glucose which are more fermentable by the yeast. This in turn creates a beer lower in body and higher in alcohol. A rest closer to the higher end of the range favours α-amylase enzymes, creating more higher-order sugars and dextrins which are less fermentable by the yeast, so a fuller-bodied beer with less alcohol is the result. Duration and pH variances also affect the sugar composition of the resulting wort.
Lautering
Lautering is the separation of the wort (the liquid containing the sugar extracted during mashing) from the grains. This is done either in a mash tun outfitted with a false bottom, in a lauter tun, or in a mash filter. Most separation processes have two stages: first wort run-off, during which the extract is separated in an undiluted state from the spent grains, and sparging, in which extract which remains with the grains is rinsed off with hot water. The lauter tun is a tank with holes in the bottom small enough to hold back the large bits of grist and hulls (the ground or milled cereal). The bed of grist that settles on it is the actual filter. Some lauter tuns have provision for rotating rakes or knives to cut into the bed of grist to maintain good flow. The knives can be turned so they push the grain, a feature used to drive the spent grain out of the vessel. The mash filter is a plate-and-frame filter. The empty frames contain the mash, including the spent grains, and have a capacity of around one hectoliter. The plates contain a support structure for the filter cloth. The plates, frames, and filter cloths are arranged in a carrier frame like so: frame, cloth, plate, cloth, with plates at each end of the structure. Newer mash filters have bladders that can press the liquid out of the grains between spargings. The grain does not act like a filtration medium in a mash filter.
Boiling
After mashing, the beer wort is boiled with hops (and other flavourings if used) in a large tank known as a "copper" or brew kettle – though historically the mash vessel was used and is still in some small breweries. The boiling process is where chemical reactions take place, including sterilization of the wort to remove unwanted bacteria, releasing of hop flavours, bitterness and aroma compounds through isomerization, stopping of enzymatic processes, precipitation of proteins, and concentration of the wort. Finally, the vapours produced during the boil volatilise off-flavours, including dimethyl sulfide precursors. The boil is conducted so that it is even and intense – a continuous "rolling boil". The boil on average lasts between 45 and 90 minutes, depending on its intensity, the hop addition schedule, and volume of water the brewer expects to evaporate. At the end of the boil, solid particles in the hopped wort are separated out, usually in a vessel called a "whirlpool".
Brew kettle or copper
Copper is the traditional material for the boiling vessel for two main reasons: firstly because copper transfers heat quickly and evenly; secondly because the bubbles produced during boiling, which could act as an insulator against the heat, do not cling to the surface of copper, so the wort is heated in a consistent manner. The simplest boil kettles are direct-fired, with a burner underneath. These can produce a vigorous and favourable boil, but are also apt to scorch the wort where the flame touches the kettle, causing caramelisation and making cleanup difficult. Most breweries use a steam-fired kettle, which uses steam jackets in the kettle to boil the wort. Breweries usually have a boiling unit either inside or outside of the kettle, usually a tall, thin cylinder with vertical tubes, called a calandria, through which wort is pumped.
Whirlpool
At the end of the boil, solid particles in the hopped wort are separated out, usually in a vessel called a "whirlpool" or "settling tank". The whirlpool was devised by Henry Ranulph Hudston while working for the Molson Brewery in 1960 to utilise the so-called tea leaf paradox to force the denser solids known as "trub" (coagulated proteins, vegetable matter from hops) into a cone in the centre of the whirlpool tank. Whirlpool systems vary: smaller breweries tend to use the brew kettle, larger breweries use a separate tank, and design will differ, with tank floors either flat, sloped, conical or with a cup in the centre. The principle in all is that by swirling the wort the centripetal force will push the trub into a cone at the centre of the bottom of the tank, where it can be easily removed.
Hopback
A hopback is a traditional additional chamber that acts as a sieve or filter by using whole hops to clear debris (or "trub") from the unfermented (or "green") wort, as the whirlpool does, and also to increase hop aroma in the finished beer. It is a chamber between the brewing kettle and wort chiller. Hops are added to the chamber, the hot wort from the kettle is run through it, and then immediately cooled in the wort chiller before entering the fermentation chamber. Hopbacks utilizing a sealed chamber facilitate maximum retention of volatile hop aroma compounds that would normally be driven off when the hops contact the hot wort. While a hopback has a similar filtering effect as a whirlpool, it operates differently: a whirlpool uses centrifugal forces, a hopback uses a layer of whole hops to act as a filter bed. Furthermore, while a whirlpool is useful only for the removal of pelleted hops (as flowers do not tend to separate as easily), in general hopbacks are used only for the removal of whole flower hops (as the particles left by pellets tend to make it through the hopback). The hopback has mainly been substituted in modern breweries by the whirlpool.
Wort cooling
After the whirlpool, the wort must be brought down to fermentation temperatures before yeast is added. In modern breweries this is achieved through a plate heat exchanger. A plate heat exchanger has several ridged plates, which form two separate paths. The wort is pumped into the heat exchanger, and goes through every other gap between the plates. The cooling medium, usually water from a cold liquor tank, goes through the other gaps. The ridges in the plates ensure turbulent flow. A good heat exchanger can drop wort to while warming the cooling medium from about to . The last few plates often use a cooling medium which can be cooled to below the freezing point, which allows a finer control over the wort-out temperature, and also enables cooling to around . After cooling, oxygen is often dissolved into the wort to revitalize the yeast and aid its reproduction.
While boiling, it is useful to recover some of the energy used to boil the wort. On its way out of the brewery, the steam created during the boil is passed over a coil through which unheated water flows. By adjusting the rate of flow, the output temperature of the water can be controlled. This is also often done using a plate heat exchanger. The water is then stored for later use in the next mash, in equipment cleaning, or wherever necessary. Another common method of energy recovery takes place during the wort cooling. When cold water is used to cool the wort in a heat exchanger, the water is significantly warmed. In an efficient brewery, cold water is passed through the heat exchanger at a rate set to maximize the water's temperature upon exiting. This now-hot water is then stored in a hot water tank.
Fermenting
Fermentation takes place in fermentation vessels which come in various forms, from enormous cylindroconical vessels, through open stone vessels, to wooden vats. After the wort is cooled and aerated – usually with sterile air – yeast is added to it, and it begins to ferment. It is during this stage that sugars won from the malt are converted into alcohol and carbon dioxide, and the product can be called beer for the first time.
Most breweries today use cylindroconical vessels, or CCVs, which have a conical bottom and a cylindrical top. The cone's angle is typically around 60°, an angle that will allow the yeast to flow towards the cone's apex, but is not so steep as to take up too much vertical space. CCVs can handle both fermenting and conditioning in the same tank. At the end of fermentation, the yeast and other solids which have fallen to the cone's apex can be simply flushed out of a port at the apex. Open fermentation vessels are also used, often for show in brewpubs, and in Europe in wheat beer fermentation. These vessels have no tops, which makes harvesting top-fermenting yeasts very easy. The open tops of the vessels make the risk of infection greater, but with proper cleaning procedures and careful protocol about who enters fermentation chambers, the risk can be well controlled. Fermentation tanks are typically made of stainless steel. If they are simple cylindrical tanks with beveled ends, they are arranged vertically, as opposed to conditioning tanks which are usually laid out horizontally. Only a very few breweries still use wooden vats for fermentation as wood is difficult to keep clean and infection-free and must be repitched more or less yearly.
Fermentation methods
There are three main fermentation methods, warm, cool, and wild or spontaneous. Fermentation may take place in open or closed vessels. There may be a secondary fermentation which can take place in the brewery, in the cask or in the bottle.
Brewing yeasts are traditionally classed as "top-cropping" (or "top-fermenting") and "bottom-cropping" (or "bottom-fermenting"); the yeasts classed as top-fermenting are generally used in warm fermentations, where they ferment quickly, and the yeasts classed as bottom-fermenting are used in cooler fermentations where they ferment more slowly. Yeast were termed top or bottom cropping, because the yeast was collected from the top or bottom of the fermenting wort to be reused for the next brew. This terminology is somewhat inappropriate in the modern era; after the widespread application of brewing mycology it was discovered that the two separate collecting methods involved two different yeast species that favoured different temperature regimes, namely Saccharomyces cerevisiae in top-cropping at warmer temperatures and Saccharomyces pastorianus in bottom-cropping at cooler temperatures. As brewing methods changed in the 20th century, cylindro-conical fermenting vessels became the norm and the collection of yeast for both Saccharomyces species is done from the bottom of the fermenter. Thus the method of collection no longer implies a species association. There are a few remaining breweries who collect yeast in the top-cropping method, such as Samuel Smiths brewery in Yorkshire, Marstons in Staffordshire and several German hefeweizen producers.
For both types, yeast is fully distributed through the beer while it is fermenting, and both equally flocculate (clump together and precipitate to the bottom of the vessel) when fermentation is finished. By no means do all top-cropping yeasts demonstrate this behaviour, but it features strongly in many English yeasts that may also exhibit chain forming (the failure of budded cells to break from the mother cell), which is in the technical sense different from true flocculation. The most common top-cropping brewer's yeast, Saccharomyces cerevisiae, is the same species as the common baking yeast. However, baking and brewing yeasts typically belong to different strains, cultivated to favour different characteristics: baking yeast strains are more aggressive, in order to carbonate dough in the shortest amount of time; brewing yeast strains act slower, but tend to tolerate higher alcohol concentrations (normally 12–15% abv is the maximum, though under special treatment some ethanol-tolerant strains can be coaxed up to around 20%). Modern quantitative genomics has revealed the complexity of Saccharomyces species to the extent that yeasts involved in beer and wine production commonly involve hybrids of so-called pure species. As such, the yeasts involved in what has been typically called top-cropping or top-fermenting ale may be both Saccharomyces cerevisiae and complex hybrids of Saccharomyces cerevisiae and Saccharomyces kudriavzevii. Three notable ales, Chimay, Orval and Westmalle, are fermented with these hybrid strains, which are identical to wine yeasts from Switzerland.
Warm fermentation
In general, yeasts such as Saccharomyces cerevisiae are fermented at warm temperatures between , occasionally as high as , while the yeast used by Brasserie Dupont for saison ferments even higher at . They generally form a foam on the surface of the fermenting beer, which is called barm, as during the fermentation process its hydrophobic surface causes the flocs to adhere to CO2 and rise; because of this, they are often referred to as "top-cropping" or "top-fermenting" – though this distinction is less clear in modern brewing with the use of cylindro-conical tanks. Generally, warm-fermented beers, which are usually termed ale, are ready to drink within three weeks after the beginning of fermentation, although some brewers will condition or mature them for several months.
Cool fermentation
When a beer has been brewed using a cool fermentation of around , compared to typical warm fermentation temperatures of , then stored (or lagered) for typically several weeks (or months) at temperatures close to freezing point, it is termed a "lager". During the lagering or storage phase several flavour components developed during fermentation dissipate, resulting in a "cleaner" flavour. Though it is the slow, cool fermentation and cold conditioning (or lagering) that defines the character of lager, the main technical difference is with the yeast generally used, which is Saccharomyces pastorianus. Technical differences include the ability of lager yeast to metabolize melibiose, and the tendency to settle at the bottom of the fermenter (though ale yeasts can also become bottom settling by selection); though these technical differences are not considered by scientists to be influential in the character or flavour of the finished beer, brewers feel otherwise – sometimes cultivating their own yeast strains which may suit their brewing equipment or for a particular purpose, such as brewing beers with a high abv.
Brewers in Bavaria had for centuries been selecting cold-fermenting yeasts by storing ("lagern") their beers in cold alpine caves. The process of natural selection meant that the wild yeasts that were most cold tolerant would be the ones that would remain actively fermenting in the beer that was stored in the caves. A sample of these Bavarian yeasts was sent from the Spaten brewery in Munich to the Carlsberg brewery in Copenhagen in 1845, which began brewing with it. In 1883 Emil Hansen completed a study on pure yeast culture isolation, and the pure strain obtained from Spaten went into industrial production in 1884 as Carlsberg yeast No 1. Another specialized pure yeast production plant was installed at the Heineken Brewery in Rotterdam the following year, and together they began the supply of pure cultured yeast to brewers across Europe. This yeast strain was originally classified as Saccharomyces carlsbergensis, a now defunct species name which has been superseded by the currently accepted taxonomic classification Saccharomyces pastorianus.
Spontaneous fermentation
Lambic beers are historically brewed in Brussels and the nearby Pajottenland region of Belgium without any yeast inoculation. The wort is cooled in open vats (called "coolships"), where the yeasts and microbiota present in the brewery (such as Brettanomyces) are allowed to settle to create a spontaneous fermentation, and are then conditioned or matured in oak barrels for typically one to three years.
Conditioning
After an initial or primary fermentation, beer is conditioned, matured or aged, in one of several ways, which can take from 2 to 4 weeks, several months, or several years, depending on the brewer's intention for the beer. The beer is usually transferred into a second container, so that it is no longer exposed to the dead yeast and other debris (also known as "trub") that have settled to the bottom of the primary fermenter. This prevents the formation of unwanted flavours and harmful compounds such as acetaldehyde.
Kräusening
Kräusening is a conditioning method in which fermenting wort is added to the finished beer. The active yeast will restart fermentation in the finished beer, and so introduce fresh carbon dioxide; the conditioning tank will then be sealed so that the carbon dioxide is dissolved into the beer, producing a lively "condition" or level of carbonation. The kräusening method may also be used to condition bottled beer.
Lagering
Lagers are stored at cellar temperature or below for 1–6 months while still on the yeast. The process of storing, or conditioning, or maturing, or aging a beer at a low temperature for a long period is called "lagering", and while it is associated with lagers, the process may also be done with ales, with the same result – that of cleaning up various chemicals, acids and compounds.
Secondary fermentation
During secondary fermentation, most of the remaining yeast will settle to the bottom of the second fermenter, yielding a less hazy product.
Bottle fermentation
Some beers undergo an additional fermentation in the bottle giving natural carbonation. This may be a second and/or third fermentation. They are bottled with a viable yeast population in suspension. If there is no residual fermentable sugar left, sugar or wort or both may be added in a process known as priming. The resulting fermentation generates CO2 that is trapped in the bottle, remaining in solution and providing natural carbonation. Bottle-conditioned beers may be either filled unfiltered direct from the fermentation or conditioning tank, or filtered and then reseeded with yeast.
Cask conditioning
Cask ale (or cask-conditioned beer) is unfiltered, unpasteurised beer that is conditioned by a secondary fermentation in a metal, plastic or wooden cask. It is dispensed from the cask by being either poured from a tap by gravity, or pumped up from a cellar via a beer engine (hand pump). Sometimes a cask breather is used to keep the beer fresh by allowing carbon dioxide to replace oxygen as the beer is drawn off the cask. Until 2018, the Campaign for Real Ale (CAMRA) defined real ale as beer "served without the use of extraneous carbon dioxide", which would disallow the use of a cask breather, a policy which was reversed in April 2018 to allow beer served with the use of cask breathers to meet its definition of real ale.
Barrel-ageing
Barrel-ageing (US: Barrel aging) is the process of ageing beer in wooden barrels to achieve a variety of effects in the final product. Sour beers such as lambics are fully fermented in wood, while other beers are aged in barrels which were previously used for maturing wines or spirits. In 2016 "Craft Beer and Brewing" wrote: "Barrel-aged beers are so trendy that nearly every taphouse and beer store has a section of them."
Filtering
Filtering stabilises the flavour of beer, holding it at a point acceptable to the brewer, and preventing further development from the yeast, which under poor conditions can release negative components and flavours. Filtering also removes haze, clearing the beer, and so giving it a "polished shine and brilliance". Beer with a clear appearance has been commercially desirable for brewers since the development of glass vessels for storing and drinking beer, along with the commercial success of pale lager, which – due to the lagering process in which haze and particles settle to the bottom of the tank and so the beer "drops bright" (clears) – has a natural bright appearance and shine.
There are several forms of filters; they may be in the form of sheets or "candles", or they may be a fine powder such as diatomaceous earth (also called kieselguhr), which is added to the beer to form a filtration bed which allows liquid to pass, but holds onto suspended particles such as yeast. Filters range from rough filters that remove much of the yeast and any solids (e.g., hops, grain particles) left in the beer, to filters tight enough to strain colour and body from the beer. Filtration ratings are divided into rough, fine, and sterile. Rough filtration leaves some cloudiness in the beer, but it is noticeably clearer than unfiltered beer. Fine filtration removes almost all cloudiness. Sterile filtration removes almost all microorganisms.
Sheet (pad) filters
These filters use sheets that allow only particles smaller than a given size to pass through. The sheets are placed into a filtering frame, sanitized (with boiling water, for example) and then used to filter the beer. The sheets can be flushed if the filter becomes blocked. The sheets are usually disposable and are replaced between filtration sessions. Often the sheets contain powdered filtration media to aid in filtration.
Pre-made filters have two sides: one with loose holes and the other with tight holes. Flow goes from the side with loose holes to the side with tight holes, with the intent that large particles get stuck in the large holes while leaving enough room around the particles and filter medium for smaller particles to pass through and get stuck in the tighter holes.
Sheets are sold in nominal ratings, and typically 90% of particles larger than the nominal rating are caught by the sheet.
Kieselguhr filters
Filters that use a powder medium are considerably more complicated to operate, but can filter much more beer before regeneration. Common media include diatomaceous earth and perlite.
By-products
Brewing by-products are "spent grain" and the sediment (or "dregs") from the filtration process which may be dried and resold as "brewers dried yeast" for poultry feed, or made into yeast extract which is used in brands such as Vegemite and Marmite. The process of turning the yeast sediment into edible yeast extract was discovered by German scientist Justus von Liebig.
Brewer's spent grain (also called spent grain, brewer's grain or draff) is the main by-product of the brewing process; it consists of the residue of malt and grain which remains in the lauter tun after the lautering process. It consists primarily of grain husks, pericarp, and fragments of endosperm. As it mainly consists of carbohydrates and proteins, and is readily consumed by animals, spent grain is used in animal feed. Spent grains can also be used as fertilizer, whole grains in bread, as well as in the production of flour and biogas. Spent grain is also an ideal medium for growing mushrooms, such as shiitake, and some breweries are already either growing their own mushrooms or supplying spent grain to mushroom farms. Spent grains can be used in the production of red bricks, to improve the open porosity and reduce thermal conductivity of the ceramic mass.
Brewing industry
The brewing industry is a global business, consisting of several dominant multinational companies and many thousands of other producers known as microbreweries or regional breweries or craft breweries depending on size, region, and marketing preference. More than 133 billion litres (35 billion gallons) are sold per year—producing total global revenues of $294.5 billion (£147.7 billion) as of 2006. SABMiller became the largest brewing company in the world when it acquired Royal Grolsch, brewer of Dutch premium beer brand Grolsch. InBev was the second-largest beer-producing company in the world and Anheuser-Busch held the third spot, but after the acquisition of Anheuser-Busch by InBev, the new Anheuser-Busch InBev company is currently the largest brewer in the world.
Brewing at home is subject to regulation and prohibition in many countries. Restrictions on homebrewing were lifted in the UK in 1963, Australia followed suit in 1972, and the US in 1978, though individual states were allowed to pass their own laws limiting production.
References
Sources
Bamforth, Charles; Food, Fermentation and Micro-organisms, Wiley-Blackwell, 2005.
Bamforth, Charles; Beer: Tap into the Art and Science of Brewing, Oxford University Press, 2009.
Boulton, Christopher; Encyclopaedia of Brewing, Wiley-Blackwell, 2013.
Briggs, Dennis E., et al.; Malting and Brewing Science, Aspen Publishers, 1982.
Ensminger, Audrey; Foods & Nutrition Encyclopedia, CRC Press, 1994.
Esslinger, Hans Michael; Handbook of Brewing: Processes, Technology, Markets, Wiley-VCH, 2009.
Hornsey, Ian Spencer; Brewing, Royal Society of Chemistry, 1999.
Hui, Yiu H.; Food Biotechnology, Wiley-IEEE, 1994.
Hui, Yiu H., and Smith, J. Scott; Food Processing: Principles and Applications, Wiley-Blackwell, 2004.
Lea, Andrew G. H., and Piggott, John Raymond; Fermented Beverage Production, Kluwer Academic/Plenum Publishers, 2003.
McFarland, Ben; World's Best Beers, Sterling Publishing, 2009.
Oliver, Garrett (ed); The Oxford Companion to Beer, Oxford University Press, 2011.
Priest, Fergus G.; Handbook of Brewing, CRC Press, 2006.
Stevens, Roger, et al.; Brewing: Science and Practice, Woodhead Publishing, 2004.
Unger, Richard W.; Beer in the Middle Ages and the Renaissance, University of Pennsylvania Press, 2004.
External links
An overview of the microbiology behind beer brewing from the Science Creative Quarterly
A pictorial overview of the brewing process at the Heriot-Watt University Pilot Brewery
Fermentation in food processing | Brewing | [
"Chemistry"
] | 9,442 | [
"Fermentation in food processing",
"Fermentation"
] |
4,436 | https://en.wikipedia.org/wiki/Brownian%20motion | Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas).
This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem).
This motion is named after the Scottish botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions.
The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter".
The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models of the statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, pure probabilistic class of models is the class of the stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem).
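To make the last point concrete, here is a minimal Python sketch (not part of the original article; the function name, step count, horizon and seed are illustrative choices) of a random walk with independent Gaussian increments, whose scaling limit is the Wiener process discussed in the Mathematics section below:

# Minimal illustrative sketch (assumed example, not from the article):
# approximate a Brownian path by summing independent Gaussian increments,
# in the spirit of Donsker's theorem.
import numpy as np

def brownian_path(n_steps: int = 10_000, horizon: float = 1.0, seed: int = 0) -> np.ndarray:
    """Return samples W(0), W(dt), ..., W(horizon) of an approximate Brownian path."""
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    # Each increment is N(0, dt); partial sums give independent, stationary increments.
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(increments)))

if __name__ == "__main__":
    paths = np.array([brownian_path(seed=s) for s in range(200)])
    # Mean squared displacement should grow roughly linearly in time.
    print("MSD at horizon (should be close to 1.0):", np.mean(paths[:, -1] ** 2))

Averaging the squared endpoint over many such paths recovers the linear growth of the mean squared displacement discussed in the Einstein-theory section below.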
History
The Roman philosopher-poet Lucretius' scientific poem "On the Nature of Things" has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms:
Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example".
While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained.
The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian model of financial markets is often cited, but Benoit Mandelbrot rejected its applicability to stock price movements in part because these are discontinuous.
Albert Einstein (in one of his 1905 papers) and Marian Smoluchowski (1906) brought the solution of the problem to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules. Their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908.
The instantaneous velocity of the Brownian motion can be defined as v = Δx/Δt, when Δt ≪ τ, where τ is the momentum relaxation time.
In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle.
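For reference (a standard statement of the relations the experiment tested, not text reproduced from the article), the equipartition theorem for one translational degree of freedom of a Brownian particle of mass m at temperature T reads

\tfrac{1}{2} m \langle v_x^2 \rangle = \tfrac{1}{2} k_B T, \qquad v_{x,\mathrm{rms}} = \sqrt{k_B T / m},

and the measured velocity component is expected to follow the corresponding Maxwell–Boltzmann (Gaussian) distribution.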
Statistical mechanics theories
Einstein's theory
There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, the molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant.
The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10^14 collisions per second.
He regarded the increment of particle positions in time in a one-dimensional (x) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable () with some probability density function (i.e., is the probability density for a jump of magnitude , i.e., the probability density of the particle incrementing its position from to in the time interval ). Further, assuming conservation of particle number, he expanded the number density (number of particles per unit volume around ) at time in a Taylor series,
where the second equality is by definition of . The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e. first and other odd moments) vanish because of space symmetry. What is left gives rise to the following relation:
Where the coefficient after the Laplacian, the second moment of probability of displacement , is interpreted as mass diffusivity D:
Then the density of Brownian particles at point at time satisfies the diffusion equation:
Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution
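The displayed formulas are missing above; in standard notation (a reconstruction rather than a quotation, with \rho(x, t) the number density and D the diffusivity) the diffusion equation and its solution for N particles starting at the origin read

\frac{\partial \rho}{\partial t} = D \, \frac{\partial^2 \rho}{\partial x^2}, \qquad \rho(x, t) = \frac{N}{\sqrt{4 \pi D t}} \, \exp\!\left( -\frac{x^2}{4 D t} \right).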
This expression (which is a normal distribution with the mean and variance usually called Brownian motion ) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by
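The formula for the second moment is also missing above; in the same notation (again a standard result rather than a quotation) it reads

\langle x^2 \rangle = 2 D t \quad \text{(one dimension)},

so the root-mean-square displacement grows as \sqrt{2 D t}.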
This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point.
The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium.
In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways.
Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of , where is the mass of the particle, is the acceleration due to gravity, and is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius is , where is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution
where is the difference in density of particles separated by a height difference, of , is the Boltzmann constant (the ratio of the universal gas constant, , to the Avogadro constant, ), and is the absolute temperature.
Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law,
where . Introducing the formula for , we find that
In a state of dynamical equilibrium, this speed must also be equal to . Both expressions for are proportional to , reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge in a uniform electric field of magnitude , where is replaced with the electrostatic force . Equating these two expressions yields the Einstein relation for the diffusivity, independent of or or other such forces:
Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as , and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant , the temperature , the viscosity , and the particle radius , the Avogadro constant can be determined.
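The chain of equalities described in this paragraph can be written out as follows (a hedged reconstruction in standard notation, with \mu the mobility, \eta the dynamic viscosity and r the particle radius):

\frac{\langle x^2 \rangle}{2 t} = D = \mu \, k_B T = \frac{\mu \, R T}{N_A} = \frac{R T}{6 \pi \eta r \, N_A},

so a measured mean squared displacement over a time t, together with R, T, \eta and r, determines the Avogadro constant N_A.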
The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other".
An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888 in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes for the diffusion coefficient , where is the osmotic pressure and is the ratio of the frictional force to the molecular viscosity which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's. The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path.
At first, the predictions of Einstein's formula were seemingly refuted by a series of experiments by Svedberg in 1906 and 1907, which gave displacements of the particles as 4 to 6 times the predicted value, and by Henri in 1908 who found displacements 3 times greater than Einstein's formula predicted. But Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law.
Smoluchowski model
Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution for the displacement of a Brownian particle along the in time . He therefore gets the same expression for the mean squared displacement: However, when he relates it to a particle of mass moving at a velocity which is the result of a frictional force governed by Stokes's law, he finds
where is the viscosity coefficient, and is the radius of the particle. Associating the kinetic energy with the thermal energy , the expression for the mean squared displacement is 64/27 times that found by Einstein. The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64, can only be put in doubt."
Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal.
If the probability of gains and losses follows a binomial distribution,
with equal probabilities of 1/2, the mean total gain is
If is large enough so that Stirling's approximation can be used in the form
then the expected total gain will be
showing that it increases as the square root of the total population.
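The missing expressions follow standard random-walk algebra (a reconstruction consistent with the surrounding sentences, not a quotation): for n steps of \pm 1, each with probability 1/2, the mean absolute net gain is

\langle |S_n| \rangle = 2^{-n} \sum_{k=0}^{n} |2k - n| \binom{n}{k} \;\approx\; \sqrt{\frac{2 n}{\pi}} \qquad (n \gg 1),

where the approximation uses Stirling's formula n! \approx \sqrt{2 \pi n}\,(n/e)^n; the \sqrt{n} growth is the point made in the text.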
Suppose that a Brownian particle of mass is surrounded by lighter particles of mass which are traveling at a speed . Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be . This ratio is of the order of . But we also have to take into consideration that in a gas there will be more than 10^16 collisions in a second, and even more in a liquid, where we expect 10^20 collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If there is a mean excess of one kind of collision or the other of the order of 10^8 to 10^10 collisions in one second, then the velocity of the Brownian particle may be anywhere between . Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts.
These orders of magnitude are not exact because they don't take into consideration the velocity of the Brownian particle, which depends on the collisions that tend to accelerate and decelerate it. The larger the velocity, the more collisions will tend to retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle will be equal, on average, to the kinetic energy of the surrounding fluid particles.
In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions with M ≫ m, where M is the test particle's mass and m the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same change in velocity, ΔV. If N_R is the number of collisions from the right and N_L the number of collisions from the left, then after N collisions the particle's velocity will have changed by ΔV(N_R − N_L) = ΔV(2N_R − N). The multiplicity is then simply given by:
\binom{N}{N_R} = \frac{N!}{N_R!\,(N - N_R)!}
and the total number of possible states is given by 2^N. Therefore, the probability of the particle being hit from the right N_R times is:
P_N(N_R) = \frac{N!}{2^{N}\, N_R!\,(N - N_R)!}.
As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions do not apply. For example, the assumption that on average an equal number of collisions occurs from the right as from the left breaks down once the particle is in motion. Also, in a realistic situation there would be a distribution of different possible values of ΔV instead of always just one.
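A minimal simulation of this one-dimensional collision model (an illustrative sketch, not code from the original article; the number of collisions and the value of ΔV are arbitrary choices) shows the binomially distributed velocity change directly:

import random

def velocity_after_collisions(n, dv=1.0):
    # Each of the n collisions imparts +dv or -dv with equal probability 1/2,
    # mimicking Smoluchowski's one-dimensional collision model.
    n_right = sum(random.random() < 0.5 for _ in range(n))
    return dv * (2 * n_right - n)

# Over many realizations the velocity change has mean ~ 0 and variance ~ n * dv**2.
samples = [velocity_after_collisions(1000) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)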
Langevin equation
The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of a particle undergoing Brownian motion under the physical definition. The approximation is valid only on timescales much longer than the timescale of individual collisions and of the particle's velocity relaxation.
The time evolution of the position of the Brownian particle itself, over all time scales, is best described using the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. In Langevin dynamics and Brownian dynamics, the Langevin equation is used to efficiently simulate the dynamics of molecular systems that exhibit a strong Brownian component.
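As an illustration of how the Langevin equation is used in simulations, the following Python sketch (not from the original article; the parameter values are arbitrary assumptions) integrates an underdamped Langevin equation with a simple Euler–Maruyama scheme:

import random

def langevin_displacement(n_steps=10000, dt=1e-3, gamma=1.0, k_T=1.0, m=1.0):
    # m dv/dt = -gamma * v + sqrt(2 * gamma * k_T) * xi(t), with xi white noise,
    # integrated by Euler-Maruyama; returns the final position of one trajectory.
    x, v = 0.0, 0.0
    noise_amp = (2.0 * gamma * k_T / dt) ** 0.5
    for _ in range(n_steps):
        force = -gamma * v + noise_amp * random.gauss(0.0, 1.0)
        v += (force / m) * dt
        x += v * dt
    return x

# At times much longer than the relaxation time m/gamma, the mean squared
# displacement approaches 2 * D * t with D = k_T / gamma.
t = 10000 * 1e-3
msd = sum(langevin_displacement() ** 2 for _ in range(200)) / 200
print(msd, 2.0 * t)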
Astrophysics: star motion within galaxies
In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity V of the massive object, of mass M, is related to the rms velocity v_★ of the background stars by
M V^2 \approx m_\star v_\star^2,
where m_★ is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both v_★ and V. The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s−1.
Mathematics
In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics and physics.
The Wiener process W_t is characterized by four facts:
W_0 = 0
W_t is almost surely continuous
W_t has independent increments
W_t − W_s ∼ N(0, t − s) for 0 ≤ s ≤ t
N(μ, σ²) denotes the normal distribution with expected value μ and variance σ². The condition that it has independent increments means that if 0 ≤ s_1 < t_1 ≤ s_2 < t_2 then W_{t_1} − W_{s_1} and W_{t_2} − W_{s_2} are independent random variables. In addition, for some filtration F_t, W_t is F_t-measurable for all t ≥ 0.
An alternative characterisation of the Wiener process is the so-called Lévy characterisation that says that the Wiener process is an almost surely continuous martingale with W_0 = 0 and quadratic variation [W_t, W_t] = t.
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem.
The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant.
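The construction of the Wiener process as a scaling limit of a random walk can be illustrated with a short sketch (illustrative only, not from the original article; the step counts are arbitrary):

import random

def scaled_random_walk(t_max=1.0, n=100000):
    # Donsker-style construction: a +/-1 random walk rescaled by sqrt(n) in space
    # and by n in time approximates a Wiener process on [0, t_max].
    steps = int(t_max * n)
    w, path = 0.0, []
    for _ in range(steps):
        w += random.choice((-1.0, 1.0))
        path.append(w / n ** 0.5)
    return path

endpoint = scaled_random_walk()[-1]
print(endpoint)   # approximately N(0, 1)-distributed for t_max = 1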
A d-dimensional Gaussian free field has been described as "a d-dimensional-time analog of Brownian motion."
Statistics
The Brownian motion can be modeled by a random walk.
In the general case, Brownian motion is a Markov process and described by stochastic integral equations.
Lévy characterisation
The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous ℝ^n-valued stochastic process X to actually be n-dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion.
Let X = (X_1, ..., X_n) be a continuous stochastic process on a probability space (Ω, Σ, P) taking values in ℝ^n. Then the following are equivalent:
X is a Brownian motion with respect to P, i.e., the law of X with respect to P is the same as the law of an n-dimensional Brownian motion, i.e., the push-forward measure X_*(P) is classical Wiener measure on C_0([0, ∞); ℝ^n).
both
X is a martingale with respect to P (and its own natural filtration); and
for all 1 ≤ i, j ≤ n, X_i(t)X_j(t) − δ_{ij}t is a martingale with respect to P (and its own natural filtration), where δ_{ij} denotes the Kronecker delta.
Spectral content
The spectral content of a stochastic process X_t can be found from the power spectral density, formally defined as
S(\omega) = \lim_{T\to\infty} \frac{1}{T}\, \mathbb{E}\!\left[\left|\int_0^T e^{i\omega t} X_t\, dt\right|^2\right],
where E stands for the expected value. The power spectral density of Brownian motion is found to be
S_{\mathrm{BM}}(\omega) = \frac{4D}{\omega^2},
where D is the diffusion coefficient of X_t. For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e.,
S^{(j)}(\omega, T) = \frac{1}{T}\left|\int_0^T e^{i\omega t} X_t^{(j)}\, dt\right|^2,
which for an individual realization of a Brownian motion trajectory is found to have an expected value that converges to the formally defined power spectral density for sufficiently long realization times, together with a non-vanishing variance. In that limit the coefficient of variation of S^{(j)} tends to a finite value of order unity, which implies that the distribution of S^{(j)}(ω, T) is broad even in the infinite time limit.
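The broad distribution of single-trajectory spectral estimates can be seen numerically. The sketch below (illustrative only; the diffusion coefficient, time step, and frequency are assumptions for the example) simulates Brownian trajectories and compares the averaged single-realization periodogram with 4D/ω²:

import cmath, random

def single_trajectory_psd(omega, dt=1e-3, n_steps=100000, diffusion=1.0):
    # Simulate one Brownian trajectory x_t and evaluate the single-realization
    # estimate S(omega, T) = |integral_0^T exp(i*omega*t) x_t dt|^2 / T.
    x, integral, t = 0.0, 0.0 + 0.0j, 0.0
    for _ in range(n_steps):
        x += (2.0 * diffusion * dt) ** 0.5 * random.gauss(0.0, 1.0)
        t += dt
        integral += cmath.exp(1j * omega * t) * x * dt
    big_t = n_steps * dt
    return abs(integral) ** 2 / big_t

omega = 2.0
estimates = [single_trajectory_psd(omega) for _ in range(20)]
print(sum(estimates) / len(estimates), 4.0 / omega ** 2)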
Riemannian manifold
The infinitesimal generator (and hence characteristic operator) of a Brownian motion on ℝ^n is easily calculated to be ½Δ, where Δ denotes the Laplace operator. In image processing and computer vision, the Laplacian operator has been used for various tasks such as blob and edge detection. This observation is useful in defining Brownian motion on an m-dimensional Riemannian manifold (M, g): a Brownian motion on M is defined to be a diffusion on M whose characteristic operator in local coordinates x_i, 1 ≤ i ≤ m, is given by ½Δ_LB, where Δ_LB is the Laplace–Beltrami operator given in local coordinates by
\Delta_{\mathrm{LB}} = \frac{1}{\sqrt{\det(g)}} \sum_{i=1}^{m} \frac{\partial}{\partial x_i}\!\left( \sqrt{\det(g)}\, \sum_{j=1}^{m} g^{ij} \frac{\partial}{\partial x_j} \right),
where [g^{ij}] = [g_{ij}]^{-1} in the sense of the inverse of a square matrix.
Narrow escape
The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.
See also
Brownian bridge: a Brownian motion that is required to "bridge" specified values at specified times
Brownian covariance
Brownian dynamics
Brownian motor
Brownian noise
Brownian ratchet
Brownian surface
Brownian tree
Brownian web
Rotational Brownian motion
Diffusion equation
Geometric Brownian motion
Itô diffusion: a generalisation of Brownian motion
Langevin equation
Lévy arcsine law
Local time (mathematics)
Many-body problem
Marangoni effect
Nanoparticle tracking analysis
Narrow escape problem
Osmosis
Random walk
Schramm–Loewner evolution
Single particle trajectories
Single particle tracking
Statistical mechanics
Stochastic Eulerian Lagrangian methods : simulation methods for the Brownian motion of spatially extended structures and hydrodynamic coupling.
Stokesian dynamics
Surface diffusion: a type of constrained Brownian motion.
Thermal equilibrium
Thermodynamic equilibrium
Tyndall effect: a phenomenon where particles are involved; used to differentiate between the different types of mixtures.
Ultramicroscope
References
Further reading
Also includes a subsequent defense by Brown of his original observations, Additional remarks on active molecules.
Lucretius, On The Nature of Things, translated by William Ellery Leonard. (on-line version, from Project Gutenberg. See the heading 'Atomic Motions'; this translation differs slightly from the one quoted).
Nelson, Edward, (1967). Dynamical Theories of Brownian Motion. (PDF version of this out-of-print book, from the author's webpage.) This is primarily a mathematical work, but the first four chapters discuss the history of the topic, in the era from Brown to Einstein.
See also Perrin's book "Les Atomes" (1914).
Thiele, T. N.
Danish version: "Om Anvendelse af mindste Kvadraters Methode i nogle Tilfælde, hvor en Komplikation af visse Slags uensartede tilfældige Fejlkilder giver Fejlene en 'systematisk' Karakter".
French version: "Sur la compensation de quelques erreurs quasi-systématiques par la méthodes de moindre carrés" published simultaneously in Vidensk. Selsk. Skr. 5. Rk., naturvid. og mat. Afd., 12:381–408, 1880.
External links
Einstein on Brownian Motion
Discusses history, botany and physics of Brown's original observations, with videos
"Einstein's prediction finally witnessed one century later" : a test to observe the velocity of Brownian motion
Large-Scale Brownian Motion Demonstration
Statistical mechanics
Wiener process
Fractals
Colloidal chemistry
Robert Brown (botanist, born 1773)
Albert Einstein
Articles containing video clips
Lévy processes | Brownian motion | [
"Physics",
"Chemistry",
"Mathematics"
] | 5,269 | [
"Functions and mappings",
"Colloidal chemistry",
"Mathematical analysis",
"Mathematical objects",
"Surface science",
"Colloids",
"Fractals",
"Mathematical relations",
"Statistical mechanics"
] |
4,459 | https://en.wikipedia.org/wiki/Backward%20compatibility | In telecommunications and computing, backward compatibility (or backwards compatibility) is a property of an operating system, software, real-world product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system.
Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. Such breaking usually incurs various types of costs, such as switching cost.
A complementary concept is forward compatibility; a design that is forward-compatible usually has a roadmap for compatibility with future standards and products.
Usage
In hardware
A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was initially mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen.
Full backward compatibility is particularly important in computer instruction set architectures, two of the most successful being the IBM 360/370/390/Zseries families of mainframes, and the Intel x86 family of microprocessors.
IBM announced the first 360 models in 1964 and has continued to update the series ever since, with migration over the decades from 32-bit register/24-bit addresses to 64-bit registers and addresses.
Intel announced the first Intel 8086/8088 processors in 1978, again with migrations over the decades from 16-bit to 64-bit. (The 8086/8088, in turn, were designed with easy machine-translatability of programs written for its predecessor in mind, although they were not instruction-set compatible with the 8-bit Intel 8080 processor of 1974. The Zilog Z80, however, was fully backward compatible with the Intel 8080.)
Fully backward compatible processors can process the same binary executable software instructions as their predecessors, allowing the use of a newer processor without having to acquire new applications or operating systems. Similarly, the success of the Wi-Fi digital communication standard is attributed to its broad forward and backward compatibility; it became more popular than other standards that were not backward compatible.
In software
In software development, backward compatibility is a general notion of interoperation between software pieces that will not produce any errors when its functionality is invoked via API. The software is considered stable when its API that is used to invoke functions is stable across different versions.
In operating systems, upgrades to newer versions are said to be backward compatible if executables and other files from the previous versions will work as usual.
In compilers, backward compatibility may refer to the ability of a compiler for a newer version of the language to accept source code of programs or data that worked under the previous version.
A data format is said to be backward compatible when a newer version of the program can open it without errors just like its predecessor.
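As a hedged illustration of the data-format case (the field names and version numbers below are invented for the example, not taken from any real product), a reader for version 2 of a hypothetical settings file can remain backward compatible by still accepting version 1 files and filling in defaults for fields that did not exist yet:

import json

def load_settings(path):
    # Hypothetical example: version 2 of the format added a "theme" field.
    # A backward-compatible reader still opens version 1 files by supplying
    # a default value for fields that did not exist in the older format.
    with open(path) as f:
        data = json.load(f)
    return {
        "version": data.get("version", 1),
        "username": data["username"],            # present in every version
        "theme": data.get("theme", "light"),     # new in version 2; defaulted for v1
    }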
Tradeoffs
Benefits
There are several incentives for a company to implement backward compatibility. Backward compatibility can be used to preserve older software that would have otherwise been lost when a manufacturer decides to stop supporting older hardware. Classic video games are a common example used when discussing the value of supporting older software. The cultural impact of video games is a large part of their continued success, and some believe ignoring backward compatibility would cause these titles to disappear. Backward compatibility also acts as a selling point for new hardware, as an existing player base can more affordably upgrade to subsequent generations of a console. This also helps to make up for lack of titles at the launch of new systems, as users can pull from the previous console's library of games while developers transition to the new hardware. Moreover, studies in the mid-1990s found that even consumers who never play older games after purchasing a new system consider backward compatibility a highly desirable feature, valuing the mere ability to continue to play an existing collection of games even if they choose never to do so. Backward compatibility with the original PlayStation (PS) software discs and peripherals is considered to have been a key selling point for the PlayStation 2 (PS2) during its early months on the market.
Despite not being included at launch, Microsoft slowly incorporated backward compatibility for select titles on the Xbox One several years into its product life cycle. Players have racked up over a billion hours with backward-compatible games on Xbox, and the newest generation of consoles such as PlayStation 5 and Xbox Series X/S also support this feature. A large part of the success and implementation of this feature is that the hardware within newer generation consoles is both powerful and similar enough to legacy systems that older titles can be broken down and re-configured to run on the Xbox One. This program has proven incredibly popular with Xbox players and goes against the recent trend of studio-made remasters of classic titles, creating what some believe to be an important shift in console makers' strategies.
Costs
The monetary costs of supporting old software are considered a large drawback to the usage of backward compatibility. The associated costs of backward compatibility are a larger bill of materials if hardware is required to support the legacy systems; increased complexity of the product that may lead to longer time to market, technological hindrances, and slowing innovation; and increased expectations from users in terms of compatibility. It also introduces the risk that developers will favor developing games that are compatible with both the old and new systems, since this gives them a larger base of potential buyers, resulting in a dearth of software which uses the advanced features of the new system. Because of this, several console manufacturers phased out backward compatibility towards the end of the console generation in order to reduce cost and briefly reinvigorate sales before the arrival of newer hardware.
It is possible to bypass some of these hardware costs. For instance, earlier PlayStation 2 (PS2) systems used the core of the original PlayStation (PS1) CPU as a dual-purpose processor, either as the main CPU for PS1 mode or upclocking itself to offload I/O in PS2 mode. This coprocessor was replaced with a PowerPC-based processor in later systems to serve the same functions, emulating the PS1 CPU core. Such an approach can backfire, though, as was the case with the Super Nintendo Entertainment System (Super NES). It opted for the more peculiar 65C816 CPU over more popular 16-bit microprocessors on the basis that it would allow for easy backward compatibility with the original Nintendo Entertainment System (NES), but this ultimately did not prove to be workable once the rest of the Super NES's architecture was designed.
See also
References
External links
Interoperability | Backward compatibility | [
"Engineering"
] | 1,451 | [
"Telecommunications engineering",
"Interoperability"
] |
4,460 | https://en.wikipedia.org/wiki/Bacterial%20conjugation | Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. This takes place through a pilus. It is a parasexual mode of reproduction in bacteria.
It is a mechanism of horizontal gene transfer as are transformation and transduction although these two other mechanisms do not involve cell-to-cell contact.
Classical E. coli bacterial conjugation is often regarded as the bacterial equivalent of sexual reproduction or mating, since it involves the exchange of genetic material. However, it is not sexual reproduction, since no exchange of gametes occurs, and indeed no generation of a new organism: instead, an existing organism is transformed. During classical E. coli conjugation, the donor cell provides a conjugative or mobilizable genetic element that is most often a plasmid or transposon. Most conjugative plasmids have systems ensuring that the recipient cell does not already contain a similar element.
The genetic information transferred is often beneficial to the recipient. Benefits may include antibiotic resistance, xenobiotic tolerance or the ability to use new metabolites. Other elements can be detrimental, and may be viewed as bacterial parasites.
Conjugation in Escherichia coli by spontaneous zygogenesis and in Mycobacterium smegmatis by distributive conjugal transfer differ from the better studied classical E. coli conjugation in that these cases involve substantial blending of the parental genomes.
History
The process was discovered by Joshua Lederberg and Edward Tatum in 1946.
Mechanism
Conjugation diagram
Donor cell produces pilus.
Pilus attaches to recipient cell and brings the two cells together.
The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell.
Both cells synthesize a complementary strand to produce a double-stranded circular plasmid and also reproduce pili; both cells are now viable donors for the F-factor.
The F-factor is an episome (a plasmid that can integrate itself into the bacterial chromosome by homologous recombination) with a length of about 100 kb. It carries its own origin of replication, the oriV, and an origin of transfer, or oriT. There can only be one copy of the F-plasmid in a given bacterium, either free or integrated, and bacteria that possess a copy are called F-positive or F-plus (denoted F+). Cells that lack F plasmids are called F-negative or F-minus (F−) and as such can function as recipient cells.
Among other genetic information, the F-plasmid carries a tra and trb locus, which together are about 33 kb long and consist of about 40 genes. The tra locus includes the pilin gene and regulatory genes, which together form pili on the cell surface. The locus also includes the genes for the proteins that attach themselves to the surface of F− bacteria and initiate conjugation. Though there is some debate on the exact mechanism of conjugation it seems that the pili are the structures through which DNA exchange occurs. The F-pili are extremely resistant to mechanical and thermochemical stress, which guarantees successful conjugation in a variety of environments. Several proteins coded for in the tra or trb locus seem to open a channel between the bacteria and it is thought that the traD enzyme, located at the base of the pilus, initiates membrane fusion.
When conjugation is initiated by a signal, the relaxase enzyme creates a nick in one of the strands of the conjugative plasmid at the oriT. Relaxase may work alone, or in a complex of over a dozen proteins known collectively as a relaxosome. In the F-plasmid system, the relaxase enzyme is called TraI and the relaxosome consists of TraI, TraY, TraM and the integrated host factor IHF. The nicked strand, or T-strand, is then unwound from the unbroken strand and transferred to the recipient cell in a 5'-terminus to 3'-terminus direction. The remaining strand is replicated either independent of conjugative action (vegetative replication beginning at the oriV) or in concert with conjugation (conjugative replication similar to the rolling circle replication of lambda phage). Conjugative replication may require a second nick before successful transfer can occur. A recent report claims to have inhibited conjugation with chemicals that mimic an intermediate step of this second nicking event.
If the F-plasmid that is transferred has previously been integrated into the donor's genome (producing an Hfr strain ["High Frequency of Recombination"]) some of the donor's chromosomal DNA may also be transferred with the plasmid DNA. The amount of chromosomal DNA that is transferred depends on how long the two conjugating bacteria remain in contact. In common laboratory strains of E. coli the transfer of the entire bacterial chromosome takes about 100 minutes. The transferred DNA can then be integrated into the recipient genome via homologous recombination.
A cell culture that contains in its population cells with non-integrated F-plasmids usually also contains a few cells that have accidentally integrated their plasmids. It is these cells that are responsible for the low-frequency chromosomal gene transfers that occur in such cultures. Some strains of bacteria with an integrated F-plasmid can be isolated and grown in pure culture. Because such strains transfer chromosomal genes very efficiently they are called Hfr (high frequency of recombination). The E. coli genome was originally mapped by interrupted mating experiments in which various Hfr cells in the process of conjugation were sheared from recipients after less than 100 minutes (initially using a Waring blender). The genes that were transferred were then investigated.
Since integration of the F-plasmid into the E. coli chromosome is a rare spontaneous occurrence, and since the numerous genes promoting DNA transfer are in the plasmid genome rather than in the bacterial genome, it has been argued that conjugative bacterial gene transfer, as it occurs in the E. coli Hfr system, is not an evolutionary adaptation of the bacterial host, nor is it likely ancestral to eukaryotic sex.
Spontaneous zygogenesis in E. coli
In addition to classical bacterial conjugation described above for E. coli, a form of conjugation referred to as spontaneous zygogenesis (Z-mating for short) is observed in certain strains of E. coli. In Z-mating there is complete genetic mixing, and unstable diploids are formed that throw off phenotypically haploid cells, of which some show a parental phenotype and some are true recombinants.
Conjugal transfer in mycobacteria
Conjugation in Mycobacterium smegmatis, like conjugation in E. coli, requires stable and extended contact between a donor and a recipient strain, is DNase resistant, and the transferred DNA is incorporated into the recipient chromosome by homologous recombination. However, unlike E. coli Hfr conjugation, mycobacterial conjugation is chromosome rather than plasmid based. Furthermore, in contrast to E. coli Hfr conjugation, in M. smegmatis all regions of the chromosome are transferred with comparable efficiencies. The lengths of the donor segments vary widely, but have an average length of 44.2 kb. Since a mean of 13 tracts are transferred, the average total of transferred DNA per genome is 575 kb. This process is referred to as "distributive conjugal transfer". Gray et al. found substantial blending of the parental genomes as a result of conjugation and regarded this blending as reminiscent of that seen in the meiotic products of sexual reproduction.
Conjugation-like DNA transfer in hyperthermophilic archaea
Hyperthermophilic archaea encode pili structurally similar to the bacterial conjugative pili. However, unlike in bacteria, where conjugation apparatus typically mediates the transfer of mobile genetic elements, such as plasmids or transposons, the conjugative machinery of hyperthermophilic archaea, called Ced (Crenarchaeal system for exchange of DNA) and Ted (Thermoproteales system for exchange of DNA), appears to be responsible for the transfer of cellular DNA between members of the same species. It has been suggested that in these archaea the conjugation machinery has been fully domesticated for promoting DNA repair through homologous recombination rather than spread of mobile genetic elements. In addition to the VirB2-like conjugative pilus, the Ced and Ted systems include components for the VirB6-like transmembrane mating pore and the VirB4-like ATPase.
Inter-kingdom transfer
Bacteria related to the nitrogen fixing Rhizobia are an interesting case of inter-kingdom conjugation. For example, the tumor-inducing (Ti) plasmid of Agrobacterium and the root-tumor inducing (Ri) plasmid of A. rhizogenes contain genes that are capable of transferring to plant cells. The expression of these genes effectively transforms the plant cells into opine-producing factories. Opines are used by the bacteria as sources of nitrogen and energy. Infected cells form crown gall or root tumors. The Ti and Ri plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
The Ti and Ri plasmids can also be transferred between bacteria using a system (the tra, or transfer, operon) that is different and independent of the system used for inter-kingdom transfer (the vir, or virulence, operon). Such transfers create virulent strains from previously avirulent strains.
Genetic engineering applications
Conjugation is a convenient means for transferring genetic material to a variety of targets. In laboratories, successful transfers have been reported from bacteria to yeast, plants, mammalian cells, diatoms and isolated mammalian mitochondria. Conjugation has advantages over other forms of genetic transfer including minimal disruption of the target's cellular envelope and the ability to transfer relatively large amounts of genetic material (see the above discussion of E. coli chromosome transfer). In plant engineering, Agrobacterium-like conjugation complements other standard vehicles such as tobacco mosaic virus (TMV). While TMV is capable of infecting many plant families these are primarily herbaceous dicots. Agrobacterium-like conjugation is also primarily used for dicots, but monocot recipients are not uncommon.
See also
Sexual conjugation in algae and ciliates
Transfection
Triparental mating
Zygotic induction
References
External links
Bacterial conjugation (a Flash animation)
Antimicrobial resistance
Bacteriology
Biotechnology
Modification of genetic information
Molecular biology | Bacterial conjugation | [
"Chemistry",
"Biology"
] | 2,338 | [
"Modification of genetic information",
"Biotechnology",
"Molecular genetics",
"nan",
"Molecular biology",
"Biochemistry"
] |
4,474 | https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensate | In condensed matter physics, a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero, i.e., . Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which microscopic quantum-mechanical phenomena, particularly wavefunction interference, become apparent macroscopically.
More generally, condensation refers to the appearance of macroscopic occupation of one or several states: for example, in BCS theory, a superconductor is a condensate of Cooper pairs. As such, condensation can be associated with phase transition, and the macroscopic occupation of the state is the order parameter.
Bose–Einstein condensate was first predicted, generally, in 1924–1925 by Albert Einstein, crediting a pioneering paper by Satyendra Nath Bose on the new field now known as quantum statistics. In 1995, the Bose–Einstein condensate was created by Eric Cornell and Carl Wieman of the University of Colorado Boulder using rubidium atoms; later that year, Wolfgang Ketterle of MIT produced a BEC using sodium atoms. In 2001 Cornell, Wieman, and Ketterle shared the Nobel Prize in Physics "for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates".
History
Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.) Einstein then extended Bose's ideas to matter in two other papers. The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature would cause them to fall (or "condense") into the lowest accessible quantum state, resulting in a new form of matter. Bosons include the photon, polaritons, magnons, some atoms and molecules (depending on the number of nucleons, see #Isotopes) such as atomic hydrogen, helium-4, lithium-7, rubidium-87 or strontium-84.
In 1938, Fritz London proposed the BEC as a mechanism for superfluidity in liquid helium-4 and for superconductivity.
The quest to produce a Bose–Einstein condensate in the laboratory was stimulated by a paper published in 1976 by two program directors at the National Science Foundation (William Stwalley and Lewis Nosanow), proposing to use spin-polarized atomic hydrogen to produce a gaseous BEC. This led to the immediate pursuit of the idea by four independent research groups; these were led by Isaac Silvera (University of Amsterdam), Walter Hardy (University of British Columbia), Thomas Greytak (Massachusetts Institute of Technology) and David Lee (Cornell University). However, cooling atomic hydrogen turned out to be technically difficult, and Bose-Einstein condensation of atomic hydrogen was only realized in 1998.
On 5 June 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST–JILA lab, in a gas of rubidium atoms cooled to 170 nanokelvins (nK). Shortly thereafter, Wolfgang Ketterle at MIT produced a Bose–Einstein condensate in a gas of sodium atoms. For their achievements Cornell, Wieman, and Ketterle received the 2001 Nobel Prize in Physics. Bose–Einstein condensation of alkali gases is easier because they can be pre-cooled with laser cooling techniques (unlike atomic hydrogen at the time), which gives a significant head start when performing the final forced evaporative cooling to cross the condensation threshold. These early studies founded the field of ultracold atoms, and hundreds of research groups around the world now routinely produce BECs of dilute atomic vapors in their labs.
Since 1995, many other atomic species have been condensed (see #Isotopes), and BECs have also been realized using molecules, polaritons, and other quasi-particles. BECs of photons can also be made, for example, in dye microcavities with wavelength-scale mirror separation, making a two-dimensional harmonically confined photon gas with tunable chemical potential.
Critical temperature
This transition to BEC occurs below a critical temperature, which for a uniform three-dimensional gas consisting of non-interacting particles with no apparent internal degrees of freedom is given by
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3} \approx 3.3125\,\frac{\hbar^2 n^{2/3}}{m k_B},
where:
T_c is the critical temperature,
n is the particle density,
m is the mass per boson,
ħ is the reduced Planck constant,
k_B is the Boltzmann constant,
ζ is the Riemann zeta function (ζ(3/2) ≈ 2.6124).
Interactions shift the value, and the corrections can be calculated by mean-field theory.
This formula is derived from finding the gas degeneracy in the Bose gas using Bose–Einstein statistics.
The critical temperature depends on the density. A more concise and experimentally relevant condition involves the phase-space density nλ_T^3, where
\lambda_T = \sqrt{\frac{2\pi\hbar^2}{m k_B T}}
is the thermal de Broglie wavelength. The phase-space density is a dimensionless quantity. The transition to BEC occurs when it exceeds a critical value:
n\lambda_T^3 > \zeta(3/2) \approx 2.6124
in 3D uniform space. This is equivalent to the above condition on the temperature. In a 3D harmonic potential, the critical value is instead ζ(3) ≈ 1.202,
where n has to be understood as the peak density.
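As a rough numerical illustration (the density value below is a typical order of magnitude for dilute-gas experiments, assumed here rather than taken from the article), the critical temperature for rubidium-87 at n ≈ 10^20 m^-3 can be evaluated directly from the formula above:

import math

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J/K
zeta_3_2 = 2.6123753486854883

def critical_temperature(n, m):
    # T_c for a uniform, non-interacting Bose gas of density n and particle mass m.
    return (2 * math.pi * hbar ** 2 / (m * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)

m_rb87 = 86.909 * 1.66053906660e-27    # kg
n = 1e20                               # m^-3, a typical dilute-gas density (assumed)
print(critical_temperature(n, m_rb87)) # on the order of a few hundred nanokelvin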
Derivation
Ideal Bose gas
For an ideal Bose gas we have the equation of state
\frac{1}{v} = \frac{1}{\lambda^3} g_{3/2}(f) + \frac{1}{V}\frac{f}{1-f},
where v = V/N is the per-particle volume, λ is the thermal wavelength, f is the fugacity, and
g_\alpha(f) = \sum_{n=1}^{\infty} \frac{f^n}{n^\alpha}.
It is noticeable that g_{3/2}(f) is a monotonically growing function of f in f ∈ [0, 1], which are the only values for which the series converges.
Recognizing that the second term on the right-hand side contains the expression for the average occupation number of the fundamental state ⟨n_0⟩, the equation of state can be rewritten as
\frac{1}{v} = \frac{g_{3/2}(f)}{\lambda^3} + \frac{\langle n_0 \rangle}{V} \quad\Leftrightarrow\quad \frac{\langle n_0 \rangle}{V}\lambda^3 = \frac{\lambda^3}{v} - g_{3/2}(f).
Because the left term on the second equation must always be positive, λ³/v > g_{3/2}(f), and because g_{3/2}(f) ≤ g_{3/2}(1) = ζ(3/2), a stronger condition is
\frac{\lambda^3}{v} > \zeta(3/2),
which defines a transition between a gas phase and a condensed phase. On the critical region it is possible to define a critical temperature and thermal wavelength:
\lambda_c^3 = \zeta(3/2)\, v, \qquad T_c = \frac{2\pi\hbar^2}{m k_B \lambda_c^2},
recovering the value indicated in the previous section. The critical values are such that if T < T_c or λ > λ_c, we are in the presence of a Bose–Einstein condensate.
Understanding what happens with the fraction of particles in the fundamental level is crucial. To do so, write the equation of state for T ≤ T_c, obtaining
\frac{\langle n_0 \rangle}{N} = 1 - \left(\frac{\lambda_c}{\lambda}\right)^3
and equivalently
\frac{\langle n_0 \rangle}{N} = 1 - \left(\frac{T}{T_c}\right)^{3/2}.
So, if T ≪ T_c, the fraction ⟨n_0⟩/N ≈ 1, and if T ≫ T_c, the fraction ⟨n_0⟩/N ≈ 0. At temperatures near absolute 0, particles tend to condense in the fundamental state, which is the state with momentum p = 0.
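The condensate fraction below the critical temperature follows directly from the last formula; a small Python sketch (illustrative only) evaluates it for a few temperature ratios:

def condensate_fraction(T, T_c):
    # Fraction of particles in the ground state of a uniform ideal Bose gas.
    if T >= T_c:
        return 0.0
    return 1.0 - (T / T_c) ** 1.5

for ratio in (0.1, 0.5, 0.9, 1.0):
    print(ratio, condensate_fraction(ratio, 1.0))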
Experimental observation
Superfluid helium-4
In 1938, Pyotr Kapitsa, John Allen and Don Misener discovered that helium-4 became a new kind of fluid, now known as a superfluid, at temperatures less than 2.17 K (the lambda point). Superfluid helium has many unusual properties, including zero viscosity (the ability to flow without dissipating energy) and the existence of quantized vortices. It was quickly believed that the superfluidity was due to partial Bose–Einstein condensation of the liquid. In fact, many properties of superfluid helium also appear in gaseous condensates created by Cornell, Wieman and Ketterle (see below). Superfluid helium-4 is a liquid rather than a gas, which means that the interactions between the atoms are relatively strong; the original theory of Bose–Einstein condensation must be heavily modified in order to describe it. Bose–Einstein condensation remains, however, fundamental to the superfluid properties of helium-4. Note that helium-3, a fermion, also enters a superfluid phase (at a much lower temperature) which can be explained by the formation of bosonic Cooper pairs of two atoms (see also fermionic condensate).
Dilute atomic gases
The first "pure" Bose–Einstein condensate was created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They cooled a dilute vapor of approximately two thousand rubidium-87 atoms to below 170 nK using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. About four months later, an independent effort led by Wolfgang Ketterle at MIT condensed sodium-23. Ketterle's condensate had a hundred times more atoms, allowing important results such as the observation of quantum mechanical interference between two different condensates. Cornell, Wieman and Ketterle won the 2001 Nobel Prize in Physics for their achievements.
A group led by Randall Hulet at Rice University announced a condensate of lithium atoms only one month following the JILA work. Lithium has attractive interactions, causing the condensate to be unstable and collapse for all but a few atoms. Hulet's team subsequently showed the condensate could be stabilized by confinement quantum pressure for up to about 1000 atoms. Various isotopes have since been condensed.
Velocity-distribution data graph
In the image accompanying this article, the velocity-distribution data indicates the formation of a Bose–Einstein condensate out of a gas of rubidium atoms. The false colors indicate the number of atoms at each velocity, with red being the fewest and white being the most. The areas appearing white and light blue are at the lowest velocities. The peak is not infinitely narrow because of the Heisenberg uncertainty principle: spatially confined atoms have a minimum width velocity distribution. This width is given by the curvature of the magnetic potential in the given direction. More tightly confined directions have bigger widths in the ballistic velocity distribution. This anisotropy of the peak on the right is a purely quantum-mechanical effect and does not exist in the thermal distribution on the left. This graph served as the cover design for the 1999 textbook Thermal Physics by Ralph Baierlein.
Quasiparticles
Bose–Einstein condensation also applies to quasiparticles in solids. Magnons, excitons, and polaritons have integer spin which means they are bosons that can form condensates.
Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. In 1999 condensation was demonstrated in the antiferromagnet TlCuCl3, at temperatures as great as 14 K. The high transition temperature (relative to atomic gases) is due to the magnons' small mass (near that of an electron) and greater achievable density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature, with optical pumping.
Excitons, electron–hole pairs, were predicted to condense at low temperature and high density by Boer et al. in 1961. Bilayer system experiments first demonstrated condensation in 2003, by Hall voltage disappearance. Fast optical exciton creation was used to form condensates at sub-kelvin temperatures from 2005 on.
Polariton condensation was first detected for exciton-polaritons in a quantum well microcavity kept at 5 K.
In zero gravity
In June 2020, the Cold Atom Laboratory experiment on board the International Space Station successfully created a BEC of rubidium atoms and observed them for over a second in free-fall. Although initially just a proof of function, early results showed that, in the microgravity environment of the ISS, about half of the atoms formed into a magnetically insensitive halo-like cloud around the main body of the BEC.
Models
Bose Einstein's non-interacting gas
Consider a collection of N non-interacting particles, which can each be in one of two quantum states, |0⟩ and |1⟩. If the two states are equal in energy, each different configuration is equally likely.
If we can tell which particle is which, there are 2^N different configurations, since each particle can be in |0⟩ or |1⟩ independently. In almost all of the configurations, about half the particles are in |0⟩ and the other half in |1⟩. The balance is a statistical effect: the number of configurations is largest when the particles are divided equally.
If the particles are indistinguishable, however, there are only N + 1 different configurations. If there are K particles in state |1⟩, there are N − K particles in state |0⟩. Whether any particular particle is in state |0⟩ or in state |1⟩ cannot be determined, so each value of K determines a unique quantum state for the whole system.
Suppose now that the energy of state |1⟩ is slightly greater than the energy of state |0⟩ by an amount E. At temperature T, a particle will have a lesser probability to be in state |1⟩ by a factor e^{−E/kT}. In the distinguishable case, the particle distribution will be biased slightly towards state |0⟩. But in the indistinguishable case, since there is no statistical pressure toward equal numbers, the most-likely outcome is that most of the particles will collapse into state |0⟩.
In the distinguishable case, for large N, the fraction in state |0⟩ can be computed. It is the same as flipping a coin with probability proportional to p = e^{−E/kT} to land tails.
In the indistinguishable case, each value of K is a single state, which has its own separate Boltzmann probability. So the probability distribution is exponential:
P(K) = C e^{-KE/kT} = C p^K.
For large N, the normalization constant C is (1 − p). The expected total number of particles not in the lowest energy state, in the limit that N → ∞, is equal to
\sum_{K>0} K\, C p^K = \frac{p}{1-p}.
It does not grow when N is large; it just approaches a constant. This will be a negligible fraction of the total number of particles. So a collection of enough Bose particles in thermal equilibrium will mostly be in the ground state, with only a few in any excited state, no matter how small the energy difference.
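The contrast between the distinguishable and indistinguishable cases can be made concrete with a short calculation (an illustrative sketch, not from the original article; the value of the Boltzmann factor is an arbitrary choice):

def expected_excited_indistinguishable(p, n):
    # P(K) is proportional to p**K for K = 0..n (indistinguishable particles);
    # returns the expected number of particles in the excited state.
    weights = [p ** k for k in range(n + 1)]
    z = sum(weights)
    return sum(k * w for k, w in enumerate(weights)) / z

p = 0.9   # Boltzmann factor exp(-E/kT), an illustrative value
for n in (10, 100, 1000):
    distinguishable = n * p / (1 + p)        # each particle excited independently
    indistinguishable = expected_excited_indistinguishable(p, n)
    print(n, distinguishable, indistinguishable)
# The distinguishable count grows linearly with n, while the indistinguishable
# count saturates near p / (1 - p) = 9, as derived above.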
Consider now a gas of particles, which can be in different momentum states labeled |k⟩. If the number of particles is less than the number of thermally accessible states, for high temperatures and low densities, the particles will all be in different states. In this limit, the gas is classical. As the density increases or the temperature decreases, the number of accessible states per particle becomes smaller, and at some point, more particles will be forced into a single state than the maximum allowed for that state by statistical weighting. From this point on, any extra particle added will go into the ground state.
To calculate the transition temperature at any density, integrate, over all momentum states, the expression for the maximum number of excited particles, p/(1 − p):
N = V \int \frac{d^3k}{(2\pi)^3} \frac{p(k)}{1 - p(k)}, \qquad p(k) = e^{-\hbar^2 k^2 / 2 m k_B T}.
When the integral (also known as the Bose–Einstein integral) is evaluated with factors of k_B and ħ restored by dimensional analysis, it gives the critical temperature formula of the preceding section. Therefore, this integral defines the critical temperature and particle number corresponding to the conditions of negligible chemical potential μ. In the Bose–Einstein statistics distribution, μ is actually still nonzero for BECs; however, μ is less than the ground-state energy. Except when specifically talking about the ground state, μ can be approximated for most energy or momentum states as μ ≈ 0.
Bogoliubov theory for weakly interacting gas
Nikolay Bogoliubov considered perturbations on the limit of dilute gas, finding a finite pressure at zero temperature and positive chemical potential. This leads to corrections for the ground state. The Bogoliubov state has pressure (at T = 0): P = g n²/2, where g is the interaction coupling constant and n the density.
The original interacting system can be converted to a system of non-interacting particles with a dispersion law.
Gross–Pitaevskii equation
In the simplest cases, the state of condensed particles can be described with a nonlinear Schrödinger equation, also known as the Gross–Pitaevskii or Ginzburg–Landau equation. The validity of this approach is limited to the case of ultracold temperatures, which fits most alkali-atom experiments well.
This approach originates from the assumption that the state of the BEC can be described by the unique wavefunction of the condensate ψ(r). For a system of this nature, |ψ(r)|² is interpreted as the particle density, so the total number of atoms is N = ∫ |ψ(r)|² dr.
Provided essentially all atoms are in the condensate (that is, have condensed to the ground state), and treating the bosons using mean-field theory, the energy (E) associated with the state ψ(r) is:
E = \int d\mathbf{r} \left[ \frac{\hbar^2}{2m} |\nabla\psi(\mathbf{r})|^2 + V(\mathbf{r})\,|\psi(\mathbf{r})|^2 + \frac{1}{2} U_0\, |\psi(\mathbf{r})|^4 \right].
Minimizing this energy with respect to infinitesimal variations in ψ(r), and holding the number of atoms constant, yields the (time-independent) Gross–Pitaevskii equation (GPE), a non-linear Schrödinger equation:
\mu\, \psi(\mathbf{r}) = \left( -\frac{\hbar^2 \nabla^2}{2m} + V(\mathbf{r}) + U_0\, |\psi(\mathbf{r})|^2 \right) \psi(\mathbf{r}),
in which μ is the chemical potential, the Lagrange multiplier associated with the constraint on the atom number, and
where:
{|cellspacing="0" cellpadding="0"
|-
|
| is the mass of the bosons,
|-
|
| is the external potential, and
|-
|
| represents the inter-particle interactions.
|}
In the case of zero external potential, the dispersion law of interacting Bose–Einstein-condensed particles is given by the so-called Bogoliubov spectrum (for T = 0):
\varepsilon_p = \sqrt{\frac{p^2}{2m}\left(\frac{p^2}{2m} + 2 U_0 n_0\right)},
where n_0 is the condensate density.
The Gross–Pitaevskii equation (GPE) provides a relatively good description of the behavior of atomic BECs. However, the GPE does not take into account the temperature dependence of dynamical variables, and is therefore valid only for temperatures near absolute zero, T ≈ 0.
It is not applicable, for example, for the condensates of excitons, magnons and photons, where the critical temperature is comparable to room temperature.
Numerical solution
The Gross–Pitaevskii equation is a partial differential equation in space and time variables. Usually it does not have an analytic solution, and different numerical methods, such as split-step Crank–Nicolson and Fourier spectral methods, are used for its solution. There are different Fortran and C programs for its solution for contact interaction and long-range dipolar interaction which can be freely used.
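For illustration, the following is a minimal split-step Fourier integrator for the 1D GPE in dimensionless units (ħ = m = 1); the grid size, interaction strength, trap, and initial state are arbitrary assumptions for the sketch, not values from the literature:

import numpy as np

# Dimensionless 1D GPE: i dpsi/dt = (-1/2 d^2/dx^2 + V(x) + g|psi|^2) psi
n, L, dt, g = 512, 20.0, 1e-3, 1.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x ** 2                               # harmonic trap (assumed)
psi = np.exp(-x ** 2).astype(complex)          # arbitrary initial condition
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize to one particle

for _ in range(2000):
    # half step in position space (potential + nonlinearity)
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))
    # full step in momentum space (kinetic term)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))
    # second half step in position space (Strang splitting)
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))

print(np.sum(np.abs(psi) ** 2) * dx)           # norm stays close to 1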
Weaknesses of Gross–Pitaevskii model
The Gross–Pitaevskii model of BEC is a physical approximation valid for certain classes of BECs. By construction, the GPE uses the following simplifications: it assumes that interactions between condensate particles are of the contact two-body type and also neglects anomalous contributions to self-energy. These assumptions are suitable mostly for dilute three-dimensional condensates. If one relaxes any of these assumptions, the equation for the condensate wavefunction acquires terms containing higher-order powers of the wavefunction. Moreover, for some physical systems the number of such terms turns out to be infinite, and therefore the equation becomes essentially non-polynomial. Examples where this could happen are Bose–Fermi composite condensates, effectively lower-dimensional condensates, and dense condensates, superfluid clusters, and droplets. It is found that one has to go beyond the Gross–Pitaevskii equation. For example, the logarithmic term found in the Logarithmic Schrödinger equation must be added to the Gross–Pitaevskii equation along with a Ginzburg–Sobyanin contribution to correctly determine that the speed of sound scales as the cubic root of pressure for helium-4 at very low temperatures, in close agreement with experiment.
Other
However, it is clear that in a general case the behaviour of Bose–Einstein condensate can be described by coupled evolution equations for condensate density, superfluid velocity and distribution function of elementary excitations. This problem was solved in 1977 by Peletminskii et al. in microscopical approach. The Peletminskii equations are valid for any finite temperatures below the critical point. Years after, in 1985, Kirkpatrick and Dorfman obtained similar equations using another microscopical approach. The Peletminskii equations also reproduce Khalatnikov hydrodynamical equations for superfluid as a limiting case.
Superfluidity of BEC and Landau criterion
The phenomena of superfluidity of a Bose gas and superconductivity of a strongly-correlated Fermi gas (a gas of Cooper pairs) are tightly connected to Bose–Einstein condensation. Under corresponding conditions, below the temperature of phase transition, these phenomena were observed in helium-4 and different classes of superconductors. In this sense, the superconductivity is often called the superfluidity of Fermi gas. In the simplest form, the origin of superfluidity can be seen from the weakly interacting bosons model.
Peculiar properties
Quantized vortices
As in many other systems, vortices can exist in BECs.
Vortices can be created, for example, by "stirring" the condensate with lasers,
rotating the confining trap,
or by rapid cooling across the phase transition.
The vortex created will be a quantum vortex with core shape determined by the interactions. Fluid circulation around any point is quantized due to the single-valued nature of the BEC order parameter or wavefunction, which can be written in the form ψ(r) = φ(ρ, z) e^{iℓθ}, where ρ, z and θ are as in the cylindrical coordinate system, and ℓ is the angular quantum number (a.k.a. the "charge" of the vortex). Since the energy of a vortex is proportional to the square of its angular momentum, in trivial topology only ℓ = 1 vortices can exist in the steady state; higher-charge vortices will have a tendency to split into ℓ = 1 vortices, if allowed by the topology of the geometry.
An axially symmetric (for instance, harmonic) confining potential is commonly used for the study of vortices in BEC. To determine φ(ρ, z), the energy of ψ must be minimized subject to the constraint ψ = φ(ρ, z) e^{iℓθ}. This is usually done computationally; however, in a uniform medium, the following analytic form demonstrates the correct behavior, and is a good approximation:
\phi = \sqrt{n}\,\frac{x}{\sqrt{2 + x^2}}.
Here, n is the density far from the vortex and x = ρ/ξ, where ξ is the healing length of the condensate.
A singly charged vortex (ℓ = 1) is in the ground state, with its energy per unit length of vortex line given by
\epsilon_v = \pi n \frac{\hbar^2}{m} \ln\!\left(1.464\,\frac{b}{\xi}\right),
where b is the farthest distance from the vortex considered. (To obtain an energy which is well defined it is necessary to include this boundary b.)
For multiply charged vortices (ℓ > 1) the energy is approximated by
\epsilon_v \approx \ell^2 \pi n \frac{\hbar^2}{m} \ln\!\left(\frac{b}{\xi}\right),
which is greater than that of ℓ singly charged vortices, indicating that these multiply charged vortices are unstable to decay. Research has, however, indicated they are metastable states, so may have relatively long lifetimes.
Closely related to the creation of vortices in BECs is the generation of so-called dark solitons in one-dimensional BECs. These topological objects feature a phase gradient across their nodal plane, which stabilizes their shape even in propagation and interaction. Although solitons carry no charge and are thus prone to decay, relatively long-lived dark solitons have been produced and studied extensively.
Attractive interactions
Experiments led by Randall Hulet at Rice University from 1995 through 2000 showed that lithium condensates with attractive interactions could stably exist up to a critical atom number. Quench cooling the gas, they observed the condensate to grow, then subsequently collapse as the attraction overwhelmed the zero-point energy of the confining potential, in a burst reminiscent of a supernova, with an explosion preceded by an implosion.
Further work on attractive condensates was performed in 2000 by the JILA team, of Cornell, Wieman and coworkers. Their instrumentation now had better control so they used naturally attracting atoms of rubidium-85 (having negative atom–atom scattering length). Through Feshbach resonance involving a sweep of the magnetic field causing spin flip collisions, they lowered the characteristic, discrete energies at which rubidium bonds, making their Rb-85 atoms repulsive and creating a stable condensate. The reversible flip from attraction to repulsion stems from quantum interference among wave-like condensate atoms.
When the JILA team raised the magnetic field strength further, the condensate suddenly reverted to attraction, imploded and shrank beyond detection, then exploded, expelling about two-thirds of its 10,000 atoms. About half of the atoms in the condensate seemed to have disappeared from the experiment altogether, not seen in the cold remnant or expanding gas cloud. Carl Wieman explained that under current atomic theory this characteristic of Bose–Einstein condensate could not be explained because the energy state of an atom near absolute zero should not be enough to cause an implosion; however, subsequent mean-field theories have been proposed to explain it. Most likely they formed molecules of two rubidium atoms; energy gained by this bond imparts velocity sufficient to leave the trap without being detected.
The process of creation of molecular Bose condensate during the sweep of the magnetic field throughout the Feshbach resonance, as well as the reverse process, are described by the exactly solvable model that can explain many experimental observations.
Current research
Compared to more commonly encountered states of matter, Bose–Einstein condensates are extremely fragile. The slightest interaction with the external environment can be enough to warm them past the condensation threshold, eliminating their interesting properties and forming a normal gas.
Nevertheless, they have proven useful in exploring a wide range of questions in fundamental physics, and the years since the initial discoveries by the JILA and MIT groups have seen an increase in experimental and theoretical activity.
Bose–Einstein condensates composed of a wide range of isotopes have been produced; see below.
Fundamental research
Examples include experiments that have demonstrated interference between condensates due to wave–particle duality, the study of superfluidity and quantized vortices, the creation of bright matter wave solitons from Bose condensates confined to one dimension, and the slowing of light pulses to very low speeds using electromagnetically induced transparency. Vortices in Bose–Einstein condensates are also currently the subject of analogue gravity research, studying the possibility of modeling black holes and their related phenomena in such environments in the laboratory.
Experimenters have also realized "optical lattices", where the interference pattern from overlapping lasers provides a periodic potential. These are used to explore the transition between a superfluid and a Mott insulator.
They are also useful in studying Bose–Einstein condensation in fewer than three dimensions, for example the Lieb–Liniger model (in the limit of strong interactions, the Tonks–Girardeau gas) in 1D and the Berezinskii–Kosterlitz–Thouless transition in 2D. Indeed, a deep optical lattice allows the experimentalist to freeze the motion of the particles along one or two directions, effectively eliminating one or two dimensions from the system.
Further, the sensitivity of the pinning transition of strongly interacting bosons confined in a shallow one-dimensional optical lattice, originally observed by Haller, has been explored via a tweaking of the primary optical lattice by a secondary, weaker one. Thus, for a resulting weak bichromatic optical lattice, it has been found that the pinning transition is robust against the introduction of the weaker secondary optical lattice.
Studies of vortices in nonuniform Bose–Einstein condensates as well as excitations of these systems by the application of moving repulsive or attractive obstacles, have also been undertaken. Within this context, the conditions for order and chaos in the dynamics of a trapped Bose–Einstein condensate have been explored by the application of moving blue and red-detuned laser beams (hitting frequencies slightly above and below the resonance frequency, respectively) via the time-dependent Gross-Pitaevskii equation.
Applications
In 1999, Danish physicist Lene Hau led a team from Harvard University which slowed a beam of light to about 17 meters per second using a superfluid. Hau and her associates have since made a group of condensate atoms recoil from a light pulse such that they recorded the light's phase and amplitude, recovered by a second nearby condensate, in what they term "slow-light-mediated atomic matter-wave amplification" using Bose–Einstein condensates.
Another current research interest is the creation of Bose–Einstein condensates in microgravity in order to use their properties for high-precision atom interferometry. The first demonstration of a BEC in weightlessness was achieved in 2008 at a drop tower in Bremen, Germany, by a consortium of researchers led by Ernst M. Rasel from Leibniz University Hannover. The same team demonstrated in 2017 the first creation of a Bose–Einstein condensate in space, and BECs are also the subject of two upcoming experiments on the International Space Station.
Researchers in the new field of atomtronics use the properties of Bose–Einstein condensates in the emerging quantum technology of matter-wave circuits.
In 1970, BECs were proposed by Emmanuel David Tannenbaum for anti-stealth technology.
Isotopes
Bose–Einstein condensation has mainly been observed in alkali atoms, some of which have collisional properties particularly suitable for evaporative cooling in traps, and which were the first to be laser-cooled. As of 2021, using ultra-low temperatures of or below, Bose–Einstein condensates had been obtained for a multitude of isotopes with more or less ease, mainly of alkali metal, alkaline earth metal, and lanthanide atoms (, , , , , , , , , , , , , , , , , , and metastable (orthohelium)). Research was finally successful in atomic hydrogen with the aid of the newly developed method of 'evaporative cooling'.
In contrast, the superfluid state of below differs significantly from dilute degenerate atomic gases because the interaction between the atoms is strong. Only 8% of the atoms are in the condensed fraction near absolute zero, rather than the nearly 100% condensate fraction of a weakly interacting BEC.
The bosonic behavior of some of these alkali gases appears odd at first sight, because their nuclei have half-integer total spin. It arises from the interplay of electronic and nuclear spins: at ultra-low temperatures and corresponding excitation energies, the half-integer total spin of the electronic shell (one outer electron) and the half-integer total spin of the nucleus are coupled by a very weak hyperfine interaction. The total spin of the atom, arising from this coupling, is an integer value. Conversely, alkali isotopes which have an integer nuclear spin (such as and ) are fermions and can form degenerate Fermi gases, also called "Fermi condensates".
Cooling fermions to extremely low temperatures has created degenerate gases, subject to the Pauli exclusion principle. To exhibit Bose–Einstein condensation, the fermions must "pair up" to form bosonic compound particles (e.g. molecules or Cooper pairs). The first molecular condensates were created in November 2003 by the groups of Rudolf Grimm at the University of Innsbruck, Deborah S. Jin at the University of Colorado at Boulder and Wolfgang Ketterle at MIT. Jin quickly went on to create the first fermionic condensate, working with the same system but outside the molecular regime.
Continuous Bose–Einstein condensation
Limitations of evaporative cooling have restricted atomic BECs to "pulsed" operation, involving a highly inefficient duty cycle that discards more than 99% of atoms to reach BEC. Achieving continuous BEC has been a major open problem of experimental BEC research, driven by the same motivations as continuous optical laser development: high flux, high coherence matter waves produced continuously would enable new sensing applications.
Continuous BEC was achieved for the first time in 2022 with .
In solid state physics
In 2020, researchers reported the development of a superconducting BEC and that there appears to be a "smooth transition between" the BEC and Bardeen–Cooper–Schrieffer regimes.
Dark matter
P. Sikivie and Q. Yang showed that cold dark matter axions would form a Bose–Einstein condensate by thermalisation because of gravitational self-interactions. Axions have not yet been confirmed to exist. However, the search for them has been greatly enhanced with the completion of upgrades to the Axion Dark Matter Experiment (ADMX) at the University of Washington in early 2018.
In 2014, a potential dibaryon was detected at the Jülich Research Center at about 2380 MeV. The center claimed that the measurements confirmed results from 2011, via a more replicable method. The particle existed for 10⁻²³ seconds and was named d*(2380). This particle is hypothesized to consist of three up and three down quarks. It is theorized that groups of d* (d-stars) could form Bose–Einstein condensates due to prevailing low temperatures in the early universe, and that BECs made of such hexaquarks with trapped electrons could behave like dark matter.
In fiction
In the 2016 film Spectral, the US military battles mysterious enemy creatures fashioned out of Bose–Einstein condensates.
In the 2003 novel Blind Lake, scientists observe sentient life on a planet 51 light-years away using telescopes powered by Bose–Einstein condensate-based quantum computers.
The video game franchise Mass Effect has cryonic ammunition whose flavour text describes it as being filled with Bose–Einstein condensates. Upon impact, the bullets rupture and spray supercooled liquid on the enemy.
See also
Atom laser
Atomic coherence
Bose–Einstein correlations
Bose–Einstein condensation: a network theory approach
Bose–Einstein condensation of quasiparticles
Bose–Einstein statistics
Cold Atom Laboratory
Electromagnetically induced transparency
Fermionic condensate
Gas in a box
Gross–Pitaevskii equation
Macroscopic quantum phenomena
Macroscopic quantum self-trapping
Slow light
Super-heavy atom
Superconductivity
Superfluid film
Superfluid helium-4
Supersolid
Tachyon condensation
Timeline of low-temperature technology
Ultracold atom
Wiener sausage
References
Further reading
C. J. Pethick and H. Smith, Bose–Einstein Condensation in Dilute Gases, Cambridge University Press, Cambridge, 2001.
Lev P. Pitaevskii and S. Stringari, Bose–Einstein Condensation, Clarendon Press, Oxford, 2003.
Monique Combescot and Shiue-Yuan Shiau, "Excitons and Cooper Pairs: Two Composite Bosons in Many-Body Physics", Oxford University Press ().
External links
Bose–Einstein Condensation 2009 Conference – Frontiers in Quantum Gases
BEC Homepage General introduction to Bose–Einstein condensation
Nobel Prize in Physics 2001 – for the achievement of Bose–Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates
Bose–Einstein condensates at JILA
Atomcool at Rice University
Alkali Quantum Gases at MIT
Atom Optics at UQ
Einstein's manuscript on the Bose–Einstein condensate discovered at Leiden University
Bose–Einstein condensate on arxiv.org
Bosons – The Birds That Flock and Sing Together
Easy BEC machine – information on constructing a Bose–Einstein condensate machine.
Verging on absolute zero – Cosmos Online
Lecture by W Ketterle at MIT in 2001
Bose–Einstein Condensation at NIST – NIST resource on BEC
Albert Einstein
Condensed matter physics
Exotic matter
Phases of matter
Articles containing video clips | Bose–Einstein condensate | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 7,357 | [
"Bose–Einstein condensates",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Exotic matter",
"Matter"
] |
4,476 | https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert%20law | The Beer–Bouguer–Lambert (BBL) extinction law is an empirical relationship describing the attenuation in intensity of a radiation beam passing through a macroscopically homogeneous medium with which it interacts. Formally, it states that the intensity of radiation decays exponentially in the absorbance of the medium, and that said absorbance is proportional to the length of beam passing through the medium, the concentration of interacting matter along that path, and a constant representing said matter's propensity to interact.
The extinction law's primary application is in chemical analysis, where it underlies the Beer–Lambert law, commonly called Beer's law. Beer's law states that a beam of visible light passing through a chemical solution of fixed geometry experiences absorption proportional to the solute concentration. Other applications appear in physical optics, where it quantifies astronomical extinction and the absorption of photons, neutrons, or rarefied gases.
Forms of the BBL law date back to the mid-eighteenth century, but it only took its modern form during the early twentieth.
History
The first work towards the BBL law began with astronomical observations Pierre Bouguer performed in the early eighteenth century and published in 1729. Bouguer needed to compensate for the refraction of light by the earth's atmosphere, and found it necessary to measure the local height of the atmosphere. The latter, he sought to obtain through variations in the observed intensity of known stars. When calibrating this effect, Bouguer discovered that light intensity had an exponential dependence on length traveled through the atmosphere (in Bouguer's terms, a geometric progression).
Bouguer's work was then popularized in Johann Heinrich Lambert's Photometria in 1760. Lambert expressed the law, which states that the loss of light intensity when it propagates in a medium is directly proportional to intensity and path length, in a mathematical form quite similar to that used in modern physics. Lambert began by assuming that the intensity of light traveling into an absorbing body would be given by the differential equation −dI/dx = μI, which is compatible with Bouguer's observations. The constant of proportionality μ was often termed the "optical density" of the body. As long as μ is constant along a distance d, the exponential attenuation law I = I₀ exp(−μd) follows from integration.
In 1852, August Beer noticed that colored solutions also appeared to exhibit a similar attenuation relation. In his analysis, Beer does not discuss Bouguer and Lambert's prior work, writing in his introduction that "Concerning the absolute magnitude of the absorption that a particular ray of light suffers during its propagation through an absorbing medium, there is no information available." Beer may have omitted reference to Bouguer's work because there is a subtle physical difference between color absorption in solutions and astronomical contexts. Solutions are homogeneous and do not scatter light at common analytical wavelengths (ultraviolet, visible, or infrared), except at entry and exit. Thus light within a solution is reasonably approximated as due to absorption alone. In Bouguer's context, atmospheric dust or other inhomogeneities could also scatter light away from the detector. Modern texts combine the two laws because scattering and absorption have the same effect. Thus a scattering coefficient and an absorption coefficient can be combined into a total extinction coefficient .
Importantly, Beer also seems to have conceptualized his result in terms of a given thickness' opacity, writing "If is the coefficient (fraction) of diminution, then this coefficient (fraction) will have the value for double this thickness." Although this geometric progression is mathematically equivalent to the modern law, modern treatments instead emphasize the logarithm of , which clarifies that concentration and path length have equivalent effects on the absorption. An early, possibly the first, modern formulation was given by Robert Luther and Andreas Nikolopulos in 1913.
Mathematical formulations
There are several equivalent formulations of the BBL law, depending on the precise choice of measured quantities. All of them state that, provided that the physical state is held constant, the extinction process is linear in the intensity of radiation and amount of radiatively-active matter, a fact sometimes called the fundamental law of extinction. Many of them then connect the quantity of radiatively-active matter to a length traveled ℓ and a concentration c or number density n. The latter two are related by Avogadro's number: n = N_A c.
A collimated beam (directed radiation) with cross-sectional area S will encounter nSℓ particles (on average) during its travel. However, not all of these particles interact with the beam. Propensity to interact is a material-dependent property, typically summarized in an absorptivity ε or a scattering cross-section σ. These almost exhibit another Avogadro-type relationship: σ = ln(10) ε/N_A. The factor of ln(10) appears because physicists tend to use natural logarithms and chemists decadal logarithms.
Beam intensity can also be described in terms of multiple variables: the intensity I or the radiant flux Φ. In the case of a collimated beam, these are related by Φ = IS, but Φ is often used in non-collimated contexts. The ratio of intensity (or flux) out to in is sometimes summarized as a transmittance coefficient T.
When considering an extinction law, dimensional analysis can verify the consistency of the variables, as logarithms (being nonlinear) must always be dimensionless.
Formulation
The simplest formulation of Beer's law relates the optical attenuation of a physical material containing a single attenuating species of uniform concentration c to the optical path length ℓ through the sample and the absorptivity ε of the species. This expression is A = εℓc. The quantities so equated are defined to be the absorbance A, which depends on the logarithm base. The Napierian absorbance τ is then given by τ = ln(10)·A and satisfies τ = σnℓ.
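As a small numerical illustration of this expression, with invented values for the molar attenuation coefficient, concentration, and path length (not data for any particular compound):

```python
# Illustrative Beer's law calculation; all numbers are assumed values.
epsilon = 5.5e3     # molar attenuation coefficient, L mol^-1 cm^-1 (assumed)
c = 2.0e-5          # amount concentration, mol L^-1 (assumed)
path = 1.0          # optical path length, cm

A = epsilon * c * path        # decadic absorbance, A = epsilon * path * c
T = 10 ** (-A)                # transmittance
print(f"A = {A:.3f}, T = {T:.1%}")   # A = 0.110, T is about 78%
```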
If multiple species in the material interact with the radiation, then their absorbances add. Thus a slightly more general formulation is A = ℓ Σ_i ε_i c_i, where the sum is over all possible radiation-interacting ("translucent") species, and i indexes those species.
In situations where length may vary significantly, absorbance is sometimes summarized in terms of an attenuation coefficient μ_10 = A/ℓ.
In atmospheric science and radiation shielding applications, the attenuation coefficient may vary significantly through an inhomogeneous material. In those situations, the most general form of the Beer–Lambert law states that the total attenuation can be obtained by integrating the attenuation coefficient over small slices of the beamline: A = ∫ μ_10(z) dz, where the integral runs along the path of the beam through the medium. These formulations then reduce to the simpler versions when there is only one active species and the attenuation coefficients are constant.
Derivation
There are two factors that determine the degree to which a medium containing particles will attenuate a light beam: the number of particles encountered by the light beam, and the degree to which each particle extinguishes the light.
Assume that a beam of light enters a material sample. Define z as an axis parallel to the direction of the beam. Divide the material sample into thin slices, perpendicular to the beam of light, with thickness dz sufficiently small that one particle in a slice cannot obscure another particle in the same slice when viewed along the z direction. The radiant flux of the light that emerges from a slice is reduced, compared to that of the light that entered, by dΦ = −μ(z) Φ(z) dz, where μ is the (Napierian) attenuation coefficient, which yields the following first-order linear, ordinary differential equation: dΦ/dz = −μ(z) Φ(z).
The attenuation is caused by the photons that did not make it to the other side of the slice because of scattering or absorption. The solution to this differential equation is obtained by multiplying the integrating factor exp(∫ μ(z′) dz′) (with the integral taken from 0 to z) throughout to obtain (dΦ/dz) exp(∫ μ(z′) dz′) + μ(z) Φ(z) exp(∫ μ(z′) dz′) = 0, which simplifies due to the product rule (applied backwards) to d/dz [Φ(z) exp(∫ μ(z′) dz′)] = 0.
Integrating both sides and solving for Φ for a material of real thickness ℓ, with the incident radiant flux upon the slice Φ_i and the transmitted radiant flux Φ_t, gives Φ_t = Φ_i exp(−∫ μ(z) dz), and finally T = Φ_t/Φ_i = exp(−∫ μ(z) dz), with the integrals taken from 0 to ℓ.
Since the decadic attenuation coefficient μ_10 is related to the (Napierian) attenuation coefficient by μ_10 = μ/ln(10), we also have T = 10^(−∫ μ_10(z) dz).
To describe the attenuation coefficient in a way independent of the number densities n_i of the attenuating species of the material sample, one introduces the attenuation cross section σ_i = μ_i(z)/n_i(z). σ_i has the dimension of an area; it expresses the likelihood of interaction between the particles of the beam and the particles of the species i in the material sample: T = exp(−Σ_i σ_i ∫ n_i(z) dz).
One can also use the molar attenuation coefficients ε_i = (N_A/ln 10) σ_i, where N_A is the Avogadro constant, to describe the attenuation coefficient in a way independent of the amount concentrations c_i(z) of the attenuating species of the material sample: T = 10^(−Σ_i ε_i ∫ c_i(z) dz).
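The derivation above can be checked numerically. The sketch below, using an arbitrary made-up attenuation profile μ(z), steps the differential equation dΦ/dz = −μ(z)Φ through thin slices and compares the result with the closed-form exponential of the integrated attenuation coefficient.

```python
import numpy as np

# Arbitrary, assumed Napierian attenuation profile mu(z) in 1/cm
mu = lambda z: 0.4 + 0.3 * np.sin(2 * np.pi * z)

thickness = 2.0                          # total sample thickness, cm (assumed)
z = np.linspace(0.0, thickness, 20001)
dz = z[1] - z[0]

# Step dPhi/dz = -mu(z) Phi slice by slice (forward Euler)
phi = 1.0                                # incident flux, normalized
for zi in z[:-1]:
    phi -= mu(zi) * phi * dz

# Closed form from the derivation: T = exp(-integral of mu(z) dz)
integral = np.sum(0.5 * (mu(z[:-1]) + mu(z[1:])) * dz)   # trapezoid rule
T_closed = np.exp(-integral)

print(phi, T_closed)                     # the two values agree closely
```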
Validity
Under certain conditions the Beer–Lambert law fails to maintain a linear relationship between attenuation and concentration of analyte. These deviations are classified into three categories:
Real—fundamental deviations due to the limitations of the law itself.
Chemical—deviations observed due to specific chemical species of the sample which is being analyzed.
Instrument—deviations which occur due to how the attenuation measurements are made.
There are at least six conditions that need to be fulfilled in order for the Beer–Lambert law to be valid. These are:
The attenuators must act independently of each other.
The attenuating medium must be homogeneous in the interaction volume.
The attenuating medium must not scatter the radiation—no turbidity—unless this is accounted for as in DOAS.
The incident radiation must consist of parallel rays, each traversing the same length in the absorbing medium.
The incident radiation should preferably be monochromatic, or have at least a width that is narrower than that of the attenuating transition. Otherwise a spectrometer as detector for the power is needed instead of a photodiode which cannot discriminate between wavelengths.
The incident flux must not influence the atoms or molecules; it should only act as a non-invasive probe of the species under study. In particular, this implies that the light should not cause optical saturation or optical pumping, since such effects will deplete the lower level and possibly give rise to stimulated emission.
If any of these conditions are not fulfilled, there will be deviations from the Beer–Lambert law.
The law tends to break down at very high concentrations, especially if the material is highly scattering. Absorbance within the range of 0.2 to 0.5 is ideal to maintain linearity in the Beer–Lambert law. If the radiation is especially intense, nonlinear optical processes can also cause variances. The main reason, however, is that the concentration dependence is in general non-linear and Beer's law is valid only under certain conditions, as shown by the derivation below. For strong oscillators and at high concentrations the deviations are stronger. If the molecules are closer to each other, interactions can set in. These interactions can be roughly divided into physical and chemical interactions. Physical interactions do not alter the polarizability of the molecules as long as the interaction is not so strong that light and molecular quantum states intermix (strong coupling), but they cause the attenuation cross sections to be non-additive via electromagnetic coupling. Chemical interactions, in contrast, change the polarizability and thus the absorption.
In solids, attenuation is usually the sum of an absorption contribution (creation of electron-hole pairs) and a scattering contribution (for example Rayleigh scattering if the scattering centers are much smaller than the incident wavelength). Also note that for some systems we can put 1 over the inelastic mean free path in place of the attenuation coefficient.
Applications
In plasma physics
The BBL extinction law also arises as a solution to the BGK equation.
Chemical analysis by spectrophotometry
The Beer–Lambert law can be applied to the analysis of a mixture by spectrophotometry, without the need for extensive pre-processing of the sample. An example is the determination of bilirubin in blood plasma samples. The spectrum of pure bilirubin is known, so the molar attenuation coefficient ε is known. Measurements of the decadic attenuation coefficient μ_10 are made at one wavelength λ that is nearly unique for bilirubin and at a second wavelength in order to correct for possible interferences. The amount concentration c is then given by c = μ_10(λ)/ε(λ).
For a more complicated example, consider a mixture in solution containing two species at amount concentrations c_1 and c_2. The decadic attenuation coefficient at any wavelength λ is given by μ_10(λ) = ε_1(λ) c_1 + ε_2(λ) c_2.
Therefore, measurements at two wavelengths yield two equations in two unknowns and will suffice to determine the amount concentrations c_1 and c_2, as long as the molar attenuation coefficients of the two components, ε_1 and ε_2, are known at both wavelengths. This system of two equations can be solved using Cramer's rule. In practice it is better to use linear least squares to determine the two amount concentrations from measurements made at more than two wavelengths, as sketched below. Mixtures containing more than two components can be analyzed in the same way, using a minimum of N wavelengths for a mixture containing N components.
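A minimal sketch of such a multi-wavelength analysis using linear least squares follows; the attenuation coefficients and concentrations are invented for illustration and do not describe any real pair of compounds.

```python
import numpy as np

# Molar attenuation coefficients of two hypothetical species at four wavelengths
# (rows: wavelengths, columns: species), in L mol^-1 cm^-1; all values assumed.
E = np.array([[9000.0, 1200.0],
              [4000.0, 5500.0],
              [1500.0, 8000.0],
              [ 600.0, 3000.0]])
path = 1.0                                    # path length, cm

c_true = np.array([3.0e-5, 1.0e-5])           # "unknown" concentrations, mol/L
A_measured = path * E @ c_true                # simulated, noise-free absorbances

# Solve A = path * E * c for the concentrations in the least-squares sense
c_fit, *_ = np.linalg.lstsq(path * E, A_measured, rcond=None)
print(c_fit)                                  # recovers [3.0e-5, 1.0e-5]
```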
The law is used widely in infra-red spectroscopy and near-infrared spectroscopy for analysis of polymer degradation and oxidation (also in biological tissue), as well as to measure the concentration of various compounds in different food samples. The carbonyl group attenuation at about 6 micrometres can be detected quite easily, and the degree of oxidation of the polymer calculated.
In-atmosphere astronomy
The Bouguer–Lambert law may be applied to describe the attenuation of solar or stellar radiation as it travels through the atmosphere. In this case, there is scattering of radiation as well as absorption. The optical depth for a slant path is τ′ = mτ, where τ refers to a vertical path, m is called the relative airmass, and for a plane-parallel atmosphere it is determined as m = sec θ, where θ is the zenith angle corresponding to the given path. The Bouguer–Lambert law for the atmosphere is usually written T = exp[−m(τ_a + τ_g + τ_NO2 + τ_RS + τ_w + τ_O3 + τ_R)],
where each τ_x is the optical depth whose subscript identifies the source of the absorption or scattering it describes:
τ_a refers to aerosols (that absorb and scatter);
τ_g are uniformly mixed gases (mainly carbon dioxide (CO2) and molecular oxygen (O2), which only absorb);
τ_NO2 is nitrogen dioxide, mainly due to urban pollution (absorption only);
τ_RS are effects due to Raman scattering in the atmosphere;
τ_w is water vapour absorption;
τ_O3 is ozone (absorption only);
τ_R is Rayleigh scattering from molecular oxygen (O2) and nitrogen (N2) (responsible for the blue color of the sky);
the selection of the attenuators which have to be considered depends on the wavelength range and can include various other compounds. This can include tetraoxygen, HONO, formaldehyde, glyoxal, a series of halogen radicals and others.
m is the optical mass or airmass factor, a term approximately equal (for small and moderate values of θ) to 1/cos θ, where θ is the observed object's zenith angle (the angle measured from the direction perpendicular to the Earth's surface at the observation site). This equation can be used to retrieve τ_a, the aerosol optical thickness, which is necessary for the correction of satellite images and also important in accounting for the role of aerosols in climate.
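As a small illustration of this atmospheric form of the law, with invented optical depths and observation geometry:

```python
import numpy as np

# Invented vertical optical depths for each attenuator (illustrative only)
tau = {"aerosol": 0.12, "mixed gases": 0.02, "NO2": 0.005,
       "water vapour": 0.04, "ozone": 0.03, "Rayleigh": 0.10}

zenith_deg = 60.0                              # assumed zenith angle
m = 1.0 / np.cos(np.radians(zenith_deg))       # plane-parallel airmass, ~sec(theta)

T = np.exp(-m * sum(tau.values()))             # direct-beam transmission
print(f"airmass m = {m:.2f}, transmission T = {T:.3f}")
```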
See also
Applied spectroscopy
Atomic absorption spectroscopy
Absorption spectroscopy
Cavity ring-down spectroscopy
Clausius–Mossotti relation
Infra-red spectroscopy
Job plot
Laser absorption spectrometry
Lorentz–Lorenz relation
Logarithm
Polymer degradation
Scientific laws named after people
Quantification of nucleic acids
Tunable diode laser absorption spectroscopy
Transmittance#Beer–Lambert law
References
External links
Beer–Lambert Law Calculator
Beer–Lambert Law Simpler Explanation
Eponymous laws of physics
Scattering, absorption and radiative transfer (optics)
Spectroscopy
Electromagnetic radiation
Visibility | Beer–Lambert law | [
"Physics",
"Chemistry",
"Mathematics"
] | 3,140 | [
"Physical phenomena",
"Visibility",
"Molecular physics",
"Spectrum (physical sciences)",
"Physical quantities",
" absorption and radiative transfer (optics)",
"Electromagnetic radiation",
"Instrumental analysis",
"Quantity",
"Scattering",
"Radiation",
"Wikipedia categories named after physical... |
4,485 | https://en.wikipedia.org/wiki/Bakelite | Bakelite ( ), formally , is a thermosetting phenol formaldehyde resin, formed from a condensation reaction of phenol with formaldehyde. The first plastic made from synthetic components, it was developed by Leo Baekeland in Yonkers, New York, in 1907, and patented on December 7, 1909.
Bakelite was one of the first plastic-like materials to be introduced into the modern world and was popular because it could be moulded and then hardened into any shape.
Because of its electrical nonconductivity and heat-resistant properties, it became a great commercial success. It was used in electrical insulators, radio and telephone casings, and such diverse products as kitchenware, jewelry, pipe stems, children's toys, and firearms.
The retro appeal of old Bakelite products has made them collectible.
The creation of a synthetic plastic was revolutionary for the chemical industry, which at the time made most of its income from cloth dyes and explosives. Bakelite's commercial success inspired the industry to develop other synthetic plastics. As the world's first commercial synthetic plastic, Bakelite was named a National Historic Chemical Landmark by the American Chemical Society.
History
Bakelite was produced for the first time in 1872 by Adolf von Baeyer, though its use as a commercial product was not considered at the time.
Leo Baekeland was already wealthy due to his invention of Velox photographic paper when he began to investigate the reactions of phenol and formaldehyde in his home laboratory. Chemists had begun to recognize that many natural resins and fibers were polymers. Baekeland's initial intent was to find a replacement for shellac, a material in limited supply because it was made naturally from the secretion of lac insects (specifically Kerria lacca). He produced a soluble phenol-formaldehyde shellac called Novolak, but it was not a market success, even though it is still used to this day (e.g., as a photoresist).
He then began experimenting on strengthening wood by impregnating it with a synthetic resin rather than coating it. By controlling the pressure and temperature applied to phenol and formaldehyde, he produced a hard moldable material that he named Bakelite, after himself. It was the first synthetic thermosetting plastic produced, and Baekeland speculated on "the thousand and one ... articles" it could be used to make. He considered the possibilities of using a wide variety of filling materials, including cotton, powdered bronze, and slate dust, but was most successful with wood and asbestos fibers, though asbestos was gradually abandoned by all manufacturers due to stricter environmental laws.
Baekeland filed a substantial number of related patents. Bakelite, his "method of making insoluble products of phenol and formaldehyde", was filed on July 13, 1907, and granted on December 7, 1909. He also filed for patent protection in other countries, including Belgium, Canada, Denmark, Hungary, Japan, Mexico, Russia, and Spain. He announced his invention at a meeting of the American Chemical Society on February 5, 1909.
Baekeland started semi-commercial production of his new material in his home laboratory, marketing it as a material for electrical insulators. In the summer of 1909, he licensed the continental European rights to Rütger AG. The subsidiary formed at that time, Bakelite AG, was the first to produce Bakelite on an industrial scale.
By 1910, Baekeland was producing enough material in the US to justify expansion. He formed the General Bakelite Company of Perth Amboy, New Jersey, as a U.S. company to manufacture and market his new industrial material, and made overseas connections to produce it in other countries.
The Bakelite Company produced "transparent" cast resin (which did not include filler) for a small market during the 1910s and 1920s. Blocks or rods of cast resin, also known as "artificial amber", were machined and carved to create items such as pipe stems, cigarette holders, and jewelry. However, the demand for molded plastics led the company to concentrate on molding rather than cast solid resins.
The Bakelite Corporation was formed in 1922 after patent litigation favorable to Baekeland, from a merger of three companies: Baekeland's General Bakelite Company; the Condensite Company, founded by J. W. Aylesworth; and the Redmanol Chemical Products Company, founded by Lawrence V. Redman. Under director of advertising and public relations Allan Brown, who came to Bakelite from Condensite, Bakelite was aggressively marketed as "the material of a thousand uses". A filing for a trademark featuring the letter B above the mathematical symbol for infinity was made August 25, 1925, and claimed the mark was in use as of December 1, 1924. A wide variety of uses were listed in their trademark applications.
The first issue of Plastics magazine, October 1925, featured Bakelite on its cover and included the article "Bakelite – What It Is" by Allan Brown. The range of colors that were available included "black, brown, red, yellow, green, gray, blue, and blends of two or more of these". The article emphasized that Bakelite came in various forms.
In a 1925 report, the United States Tariff Commission hailed the commercial manufacture of synthetic phenolic resin as "distinctly an American achievement", and noted that "the publication of figures, however, would be a virtual disclosure of the production of an individual company".
In England, Bakelite Limited, a merger of three British phenol formaldehyde resin suppliers (Damard Lacquer Company Limited of Birmingham, Mouldensite Limited of Darley Dale and Redmanol Chemical Products Company of London), was formed in 1926. A new Bakelite factory opened in Tyseley, Birmingham, around 1928. It was the "heart of Bakelite production in the UK" until it closed in 1987.
A factory to produce phenolic resins and precursors opened in Bound Brook, New Jersey, in 1931.
In 1939, the companies were acquired by Union Carbide and Carbon Corporation.
In 2005, German Bakelite manufacturer Bakelite AG was acquired by Borden Chemical of Columbus, Ohio, now Hexion Inc.
In addition to the original Bakelite material, these companies eventually made a wide range of other products, many of which were marketed under the brand name "Bakelite plastics". These included other types of cast phenolic resins similar to Catalin, and urea-formaldehyde resins, which could be made in brighter colors than Bakelite.
Once Baekeland's heat and pressure patents expired in 1927, Bakelite Corporation faced serious competition from other companies. Because molded Bakelite incorporated fillers to give it strength, it tended to be made in concealing dark colors. In 1927, beads, bangles, and earrings were produced by the Catalin company, through a different process which enabled them to introduce 15 new colors. Translucent jewelry, poker chips and other items made of phenolic resins were introduced in the 1930s or 1940s by the Catalin company under the Prystal name. The creation of marbled phenolic resins may also be attributable to the Catalin company.
Synthesis
Making Bakelite is a multi-stage process. It begins with the heating of phenol and formaldehyde in the presence of a catalyst such as hydrochloric acid, zinc chloride, or the base ammonia. This creates a liquid condensation product, referred to as Bakelite A, which is soluble in alcohol, acetone, or additional phenol. Heated further, the product becomes partially soluble and can still be softened by heat. Sustained heating results in an "insoluble hard gum". However, the high temperatures required to create this tend to cause violent foaming of the mixture when done at standard atmospheric pressure, which results in the cooled material being porous and breakable. Baekeland's innovative step was to put his "last condensation product" into an egg-shaped "Bakelizer". By heating it under pressure, at about , Baekeland was able to suppress the foaming that would otherwise occur. The resulting substance is extremely hard and both infusible and insoluble.
Compression molding
Molded Bakelite forms in a condensation reaction of phenol and formaldehyde, with wood flour or asbestos fiber as a filler, under high pressure and heat in a time frame of a few minutes of curing. The result is a hard plastic material. Asbestos was gradually abandoned as filler because many countries banned the production of asbestos.
Bakelite's molding process had a number of advantages. Bakelite resin could be provided either as powder or as preformed partially cured slugs, increasing the speed of the casting. Thermosetting resins such as Bakelite required heat and pressure during the molding cycle but could be removed from the molding process without being cooled, again making the molding process faster. Also, because of the smooth polished surface that resulted, Bakelite objects required less finishing. Millions of parts could be duplicated quickly and relatively cheaply.
Phenolic sheet
Another market for Bakelite resin was the creation of phenolic sheet materials. A phenolic sheet is a hard, dense material made by applying heat and pressure to layers of paper or glass cloth impregnated with synthetic resin. Paper, cotton fabrics, synthetic fabrics, glass fabrics, and unwoven fabrics are all possible materials used in lamination. When heat and pressure are applied, polymerization transforms the layers into thermosetting industrial laminated plastic.
Bakelite phenolic sheet is produced in many commercial grades and with various additives to meet diverse mechanical, electrical, and thermal requirements. Some common types include:
Paper reinforced NEMA XX per MIL-I-24768 PBG. Normal electrical applications, moderate mechanical strength, continuous operating temperature of .
Canvas-reinforced NEMA C per MIL-I-24768 TYPE FBM NEMA CE per MIL-I-24768 TYPE FBG. Good mechanical and impact strength with a continuous operating temperature of 250 °F.
Linen-reinforced NEMA L per MIL-I-24768 TYPE FBI NEMA LE per MIL-I-24768 TYPE FEI. Good mechanical and electrical strength. Recommended for intricate high-strength parts. Continuous operating temperature 250 °F.
Nylon reinforced NEMA N-1 per MIL-I-24768 TYPE NPG. Superior electrical properties under humid conditions, fungus resistant, continuous operating temperature of .
Properties
Bakelite has a number of important properties. It can be molded very quickly, decreasing production time. Moldings are smooth, retain their shape, and are resistant to heat, scratches, and destructive solvents. It is also resistant to electricity, and prized for its low conductivity. It is not flexible.
Phenolic resin products may swell slightly under conditions of extreme humidity or perpetual dampness. When rubbed or burnt, Bakelite has a distinctive, acrid, sickly-sweet or fishy odor.
Applications and uses
The characteristics of Bakelite made it particularly suitable as a molding compound, an adhesive or binding agent, a varnish, and a protective coating, as well as for the emerging electrical and automobile industries because of its extraordinarily high resistance to electricity, heat, and chemical action.
The earliest commercial use of Bakelite in the electrical industry was the molding of tiny insulating bushings, made in 1908 for the Weston Electrical Instrument Corporation by Richard W. Seabury of the Boonton Rubber Company. Bakelite was soon used for non-conducting parts of telephones, radios, and other electrical devices, including bases and sockets for light bulbs and electron tubes (vacuum tubes), supports for any type of electrical components, automobile distributor caps, and other insulators. By 1912, it was being used to make billiard balls, since its elasticity and the sound it made were similar to ivory.
During World War I, Bakelite was used widely, particularly in electrical systems. Important projects included the Liberty airplane engine, the wireless telephone and radio phone, and the use of micarta-bakelite propellers in the NBS-1 bomber and the DH-4B aeroplane.
Bakelite's availability and ease and speed of molding helped to lower the costs and increase product availability so that telephones and radios became common household consumer goods. It was also very important to the developing automobile industry. It was soon found in myriad other consumer products ranging from pipe stems and buttons to saxophone mouthpieces, cameras, early machine guns, and appliance casings. Bakelite was also very commonly used in making molded grip panels on handguns, as furniture for submachine guns and machineguns, the classic Bakelite magazines for Kalashnikov rifles, as well as numerous knife handles and "scales" through the first half of the 20th century.
Beginning in the 1920s, it became a popular material for jewelry. Designer Coco Chanel included Bakelite bracelets in her costume jewelry collections. Designers such as Elsa Schiaparelli used it for jewelry and also for specially designed dress buttons. Later, Diana Vreeland, editor of Vogue, was enthusiastic about Bakelite. Bakelite was also used to make presentation boxes for Breitling watches.
By 1930, designer Paul T. Frankl considered Bakelite a "Materia Nova", "expressive of our own age". By the 1930s, Bakelite was used for game pieces like chess pieces, poker chips, dominoes, and mahjong sets. Kitchenware made with Bakelite, including canisters and tableware, was promoted for its resistance to heat and to chipping. In the mid-1930s, Northland marketed a line of skis with a black "Ebonite" base, a coating of Bakelite. By 1935, it was used in solid-body electric guitars. Performers such as Jerry Byrd loved the tone of Bakelite guitars but found them difficult to keep in tune.
Charles Plimpton patented BAYKO in 1933 and rushed out his first construction sets for Christmas 1934. He called the toy Bayko Light Constructional Sets, the words "Bayko Light" being a pun on the word "Bakelite".
During World War II, Bakelite was used in a variety of wartime equipment including pilots' goggles and field telephones. It was also used for patriotic wartime jewelry. In 1943, the thermosetting phenolic resin was even considered for the manufacture of coins, due to a shortage of traditional material. Bakelite and other non-metal materials were tested for usage for the one cent coin in the US before the Mint settled on zinc-coated steel.
During World War II, Bakelite buttons were part of British uniforms. These included brown buttons for the Army and black buttons for the RAF.
In 1947, Dutch art forger Han van Meegeren was convicted of forgery, after chemist and curator Paul B. Coremans proved that a purported Vermeer contained Bakelite, which van Meegeren had used as a paint hardener.
Bakelite was sometimes used in the pistol grip, hand guard, and buttstock of firearms. The AKM and some early AK-74 rifles are frequently mistakenly identified as using Bakelite, but most were made with AG-4S.
By the late 1940s, newer materials were superseding Bakelite in many areas. Phenolics are less frequently used in general consumer products today due to their cost and complexity of production and their brittle nature. They still appear in some applications where their specific properties are required, such as small precision-shaped components, molded disc brake cylinders, saucepan handles, electrical plugs, switches and parts for electrical irons, printed circuit boards, as well as in the area of inexpensive board and tabletop games produced in China, Hong Kong, and India. Items such as billiard balls, dominoes and pieces for board games such as chess, checkers, and backgammon are constructed of Bakelite for its look, durability, fine polish, weight, and sound. Common dice are sometimes made of Bakelite for weight and sound, but the majority are made of a thermoplastic polymer such as acrylonitrile butadiene styrene (ABS).
Bakelite continues to be used for wire insulation, brake pads and related automotive components, and industrial electrical-related applications. Bakelite stock is still manufactured and produced in sheet, rod, and tube form for industrial applications in the electronics, power generation, and aerospace industries, and under a variety of commercial brand names.
Phenolic resins have been commonly used in ablative heat shields. Soviet heatshields for ICBM warheads and spacecraft reentry consisted of asbestos textolite, impregnated with Bakelite. Bakelite is also used in the mounting of metal samples in metallography.
Collectible status
Bakelite items, particularly jewelry and radios, have become popular collectibles.
The term Bakelite is sometimes used in the resale market as a catch-all for various types of early plastics, including Catalin and Faturan, which may be brightly colored, as well as items made of true Bakelite material.
Because of its aesthetic appeal, a similar material known as "fakelite" (fake Bakelite) is made from modern, safer materials that do not contain asbestos.
Patents
The United States Patent and Trademark Office granted Baekeland a patent for a "Method of making insoluble products of phenol and formaldehyde" on December 7, 1909. Producing hard, compact, insoluble, and infusible condensation products of phenols and formaldehyde marked the beginning of the modern plastics industry.
Similar plastics
Catalin is also a phenolic resin, similar to Bakelite, but contains different mineral fillers that allow the production of light colors.
Condensites are similar thermoset materials having much the same properties, characteristics, and uses.
Crystalate is an early plastic.
Faturan is a phenolic resin, also similar to Bakelite, that turns red over time, regardless of its original color.
Galalith is an early plastic derived from milk products.
Micarta is an early composite insulating plate that used Bakelite as a binding agent. It was developed in 1910 by the Westinghouse Electric & Manufacturing Company, which put the new material to use for casting synthetic blades for Westinghouse electric fans.
Novotext is a brand name for cotton textile-phenolic resin.
G-10 or garolite is made with fiberglass and epoxy resin.
See also
Ericsson DBH 1001 telephone
Prodema, a construction material with a bakelite core
Lacquer, including antique Asian lacquerware
Tenite, a cellulosic thermoplastic material
Vulcanised rubber, a similar material
References
External links
All Things Bakelite: The Age of Plastic—trailer for a film by John Maher, with additional video & resources
Amsterdam Bakelite Collection
Large Bakelite Collection
Bakelite: The Material of a Thousand Uses
Virtual Bakelite Museum of Ghent 1907–2007
Plastics
1909 introductions
Belgian inventions
Composite materials
Dielectrics
Phenol formaldehyde resins
Plastic brands
Thermosetting plastics | Bakelite | [
"Physics"
] | 4,077 | [
"Unsolved problems in physics",
"Composite materials",
"Materials",
"Dielectrics",
"Amorphous solids",
"Matter",
"Plastics"
] |
4,487 | https://en.wikipedia.org/wiki/Bean | A bean is the seed of any plant in the legume family (Fabaceae) used as a vegetable for human consumption or animal feed. The seeds are often preserved through drying, but fresh beans are also sold. Most beans are traditionally soaked and boiled, but they can be cooked in many different ways, including frying and baking, and are used in many traditional dishes throughout the world. The unripe seedpods of some varieties are also eaten whole as green beans or edamame (immature soybean), but fully ripened beans contain toxins like phytohemagglutinin and require cooking.
Terminology
The word 'bean', for the Old World vegetable, existed in Old English, long before the New World genus Phaseolus was known in Europe. With the Columbian exchange of domestic plants between Europe and the Americas, use of the word was extended to pod-borne seeds of Phaseolus, such as the common bean and the runner bean, and the related genus Vigna. The term has long been applied generally to seeds of similar form, such as Old World soybeans and lupins, and to the fruits or seeds of unrelated plants such as coffee beans, vanilla beans, castor beans, and cocoa beans.
History
Beans in an early cultivated form were grown in Thailand from the early seventh millennium BCE, predating ceramics. Beans were deposited with the dead in ancient Egypt. Not until the second millennium BCE did cultivated, large-seeded broad beans appear in the Aegean region, Iberia, and transalpine Europe. In the Iliad (8th century BCE), there is a passing mention of beans and chickpeas cast on the threshing floor.
The oldest-known domesticated beans in the Americas were found in Guitarrero Cave, Peru, dated to around the second millennium BCE. Genetic analyses of the common bean Phaseolus show that it originated in Mesoamerica, and subsequently spread southward, along with maize and squash, traditional companion crops.
Most of the kinds of beans commonly eaten today are part of the genus Phaseolus, which originated in the Americas. The first European to encounter them was Christopher Columbus, while exploring what may have been the Bahamas, and saw them growing in fields. Five kinds of Phaseolus beans were domesticated by pre-Columbian peoples, selecting pods that did not open and scatter their seeds when ripe: common beans (P. vulgaris) grown from Chile to the northern part of the United States; lima and sieva beans (P. lunatus); and the less widely distributed teparies (P. acutifolius), scarlet runner beans (P. coccineus), and polyanthus beans.
Pre-Columbian peoples as far north as the Atlantic seaboard grew beans in the "Three Sisters" method of companion planting. The beans were interplanted with maize and squash. Beans were cultivated across Chile in Pre-Hispanic times, likely as far south as the Chiloé Archipelago.
Diversity
Taxonomic range
Most beans are legumes, but from many different genera, native to different regions.
Conservation of cultivars
The biodiversity of bean cultivars is threatened by modern plant breeding, which selects a small number of the most productive varieties. Efforts are being made to conserve the germplasm of older varieties in different countries. As of 2023, the Norwegian Svalbard Global Seed Vault holds more than 40,000 accessions of Phaseolus bean species.
Cultivation
Agronomy
Unlike the closely related pea, beans are a summer crop that needs warm temperatures to grow. Legumes are capable of nitrogen fixation and hence need less fertiliser than most plants. Maturity is typically 55–60 days from planting to harvest. As the pods mature, they turn yellow and dry up, and the beans inside change from green to their mature colour. Many beans are vines needing external support, such as "bean cages" or poles. Native Americans customarily grew them along with corn and squash, the tall stalks acting as support for the beans.
More recently, the commercial "bush bean" which does not require support and produces all its pods simultaneously has been developed.
Production
The production data for legumes are published by FAO in three categories:
Pulses dry: all mature and dry seeds of leguminous plants except soybeans and groundnuts.
Oil crops: soybeans and groundnuts.
Fresh vegetable: immature green fresh fruits of leguminous plants.
The following is a summary of FAO data.
The world leader in production of dry beans (Phaseolus spp), is India, followed by Myanmar (Burma) and Brazil. In Africa, the most important producer is Tanzania.
Source: UN Food and Agriculture Organization (FAO)
Uses
Nutrition
Raw green beans are 90% water, 7% carbohydrates, 2% protein, and contain negligible fat. In a reference serving, raw green beans supply 31 calories of food energy, and are a moderate source (10-19% of the Daily Value, DV) of vitamin C (15% DV) and vitamin B6 (11% DV), with no other micronutrients in significant content (table).
Culinary
Beans can be cooked in a wide variety of casseroles, curries, salads, soups, and stews. They can be served whole or mashed alongside meat or toast, or included in an omelette or a flatbread wrap. Other options are to include them in a bake with a cheese sauce, a Mexican-style chili con carne, or to use them as a meat substitute in a burger or in falafels. The French cassoulet is a slow-cooked stew with haricot beans, sausage, pork, mutton, and preserved goose. Soybeans can be processed into bean curd (tofu) or fermented into a cake (tempeh); these can be eaten fried or roasted like meat, or included in stir-fries, curries, and soups.
Other
Guar beans are used for their gum, a galactomannan polysaccharide. It is used to thicken and stabilise foods and other products.
Health concerns
Toxins
Some kinds of raw beans contain a harmful, flavourless toxin: the lectin phytohaemagglutinin, which must be destroyed by cooking. Red kidney beans are particularly toxic, but other types also pose risks of food poisoning. Even small quantities (4 or 5 raw beans) may cause severe stomachache, vomiting, and diarrhea. This risk does not apply to canned beans because they have already been cooked. A recommended method is to boil the beans for at least ten minutes; under-cooked beans may be more toxic than raw beans.
Cooking beans, without bringing them to a boil, in a slow cooker at a temperature well below boiling may not destroy toxins. A case of poisoning by butter beans used to make falafel was reported; the beans were used instead of traditional broad beans or chickpeas, soaked and ground without boiling, made into patties, and shallow fried.
Bean poisoning is not well known in the medical community, and many cases may be misdiagnosed or never reported; figures appear not to be available. In the case of the UK National Poisons Information Service, available only to health professionals, the dangers of beans other than red beans were not flagged .
Fermentation is used in some parts of Africa to improve the nutritional value of beans by removing toxins. Inexpensive fermentation improves the nutritional impact of flour from dry beans and improves digestibility, according to research co-authored by Emire Shimelis, from the Food Engineering Program at Addis Ababa University. Beans are a major source of dietary protein in Kenya, Malawi, Tanzania, Uganda and Zambia.
Other hazards
It is common to make beansprouts by letting some types of bean, often mung beans, germinate in moist and warm conditions; beansprouts may be used as ingredients in cooked dishes, or eaten raw or lightly cooked. There have been many outbreaks of disease from bacterial contamination, often by salmonella, listeria, and Escherichia coli, of beansprouts not thoroughly cooked, some causing significant mortality.
Many types of bean, such as the kidney bean, contain significant amounts of antinutrients that inhibit some enzyme processes in the body. Phytic acid, present in beans, interferes with bone growth and interrupts vitamin D metabolism.
Many beans, including broad beans, navy beans, kidney beans and soybeans, contain large sugar molecules, oligosaccharides (particularly raffinose and stachyose). A suitable oligosaccharide-cleaving enzyme is necessary to digest these. As the human digestive tract does not contain such enzymes, consumed oligosaccharides are digested by bacteria in the large intestine, producing gases such as methane, released as flatulence.
In human society
Beans have often been thought of as a food of the poor, as small farmers ate grains, vegetables, and got their protein from beans, while the wealthier classes were able to afford meat. European society has what Ken Albala calls "a class-based antagonism" to beans.
Different cultures agree in disliking the flatulence that beans cause, and possess their own seasonings to attempt to remedy it: Mexico uses the herb epazote; India the aromatic resin asafoetida; Germany applies the herb savory; in the Middle East, cumin; and Japan the seaweed kombu. A substance for which there is evidence of effectiveness in reducing flatulence is the enzyme alpha-galactosidase; extracted from the mould fungus Aspergillus niger, it breaks down glycolipids and glycoproteins. The reputation of beans for flatulence is the theme of a children's song "Beans, Beans, the Musical Fruit".
The Mexican jumping bean is a segment of a seed pod occupied by the larva of the moth Cydia saltitans, and sold as a novelty. The pods start to jump when warmed in the palm of the hand. Scientists have suggested that the random walk that results may help the larva to find shade and so to survive on hot days.
See also
Baked beans
List of bean soups
Fassoulada – a bean soup
List of legume dishes
References
Bibliography
External links
Everett H. Bickley Collection, 1919–1980 Archives Center, National Museum of American History, Smithsonian Institution.
Discovery Online: The Skinny On Why Beans Give You Gas
Fermentation improves nutritional value of beans
Cook's Thesaurus on Beans
Edible legumes
Pod vegetables
Staple foods
Vegan cuisine
Vegetarian cuisine
Crops
Plant common names | Bean | [
"Biology"
] | 2,229 | [
"Plant common names",
"Common names of organisms",
"Plants"
] |
4,489 | https://en.wikipedia.org/wiki/Breast | The breasts are two prominences located on the upper ventral region of the torso among humans and other primates. Both sexes develop breasts from the same embryological tissues. The relative size and development of the breasts is a major secondary sex distinction between females and males. There is also considerable variation in size between individuals. Female humans are the only mammals which permanently develop breasts at puberty; all other mammals develop their mammary tissue during the latter period of pregnancy; at puberty, estrogens, in conjunction with growth hormone, cause permanent breast growth.
In females, the breast serves as the mammary gland, which produces and secretes milk to feed infants. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its distinct size and globular shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation of lactation and breastfeeding.
Along with their major function in providing nutrition for infants, several cultures ascribe social and sexual characteristics to female breasts, and may regard bare breasts in public as immodest or indecent. Breasts have been featured in ancient and modern sculpture, art, and photography. Breasts can represent fertility, femininity, or abundance. They can figure prominently in the perception of a woman's body and sexual attractiveness. Breasts, especially the nipples, can be an erogenous zone.
Etymology and terminology
The English word breast derives from the Old English word from Proto-Germanic , from the Proto-Indo-European base . The breast spelling conforms to the Scottish and North English dialectal pronunciations. The Merriam-Webster Dictionary states that "Middle English , [comes] from Old English ; akin to Old High German ..., Old Irish [belly], [and] Russian "; the first known usage of the term was before the 12th century.
Breasts is often used to refer to female breasts in particular, though the stricter anatomical term refers to the same region on members of either sex. Male breasts are sometimes referred to in the singular to mean the collective upper chest area, whereas female breasts are referred to in the plural unless speaking of a specific left or right breast.
A large number of colloquial terms for female breasts are used in English, ranging from fairly polite terms to vulgar or slang. Some vulgar slang expressions may be considered to be derogatory or sexist to women.
Evolutionary development
Humans are the only mammals whose breasts become permanently enlarged after sexual maturity (known in humans as puberty). The reason for this evolutionary change is unknown. Several hypotheses have been put forward:
A link has been proposed to processes for synthesizing the endogenous steroid hormone precursor dehydroepiandrosterone which takes place in fat rich regions of the body like the buttocks and breasts. These contributed to human brain development and played a part in increasing brain size. Breast enlargement may for this purpose have occurred as early as Homo ergaster (1.7–1.4 MYA). Other breast formation hypotheses may have then taken over as principal drivers.
It has been suggested by the zoologists Avishag and Amotz Zahavi that the size of human breasts can be explained by the handicap theory of sexual dimorphism. On this view, larger breasts are an honest display of a woman's health and of her ability to grow and carry them through her life. Prospective mates can then evaluate the genes of a potential mate for their ability to sustain her health even with the additional energy-demanding burden she is carrying.
The zoologist Desmond Morris describes a sociobiological approach in his science book The Naked Ape. He suggests, by making comparisons with the other primates, that breasts evolved to replace swelling buttocks as a sex signal of ovulation. He notes how humans have, relatively speaking, large penises as well as large breasts. Furthermore, early humans adopted bipedalism and face-to-face coitus. He therefore suggested enlarged sexual signals helped maintain the bond between a mated male and female even though they performed different duties and therefore were separated for lengths of time.
A 2001 study proposed that the rounded shape of a woman's breast evolved to prevent the suckling infant from suffocating while feeding at the teat; that is, because the human infant's small jaw does not project from the face far enough to reach the nipple, the infant might block its nostrils against the mother's breast if the breast were of a flatter form (compare with the common chimpanzee). Theoretically, as the human jaw receded into the face, the woman's body compensated with round breasts.
Ashley Montagu (1965) proposed that breasts came about as an adaptation for infant feeding for a different reason: early human ancestors adopted bipedalism and lost their body hair. The human upright stance meant infants had to be carried at the hip or shoulder instead of on the back as in the apes. This gives the infant less opportunity to find the nipple and less purchase, since there is no maternal body hair to cling to. The mobility of the nipple on a large breast in most human females gives the infant more ability to find it, grasp it, and feed.
Other suggestions include simply that permanent breasts attracted mates, that "pendulous" breasts gave infants something to cling to, or that permanent breasts shared the function of a camel's hump, to store fat as an energy reserve.
Structure
In women, the breasts overlie the pectoralis major muscles and extend on average from the level of the second rib to the level of the sixth rib in the front of the rib cage; thus, the breasts cover much of the chest area and the chest walls. At the front of the chest, the breast tissue can extend from the clavicle (collarbone) to the middle of the sternum (breastbone). At the sides of the chest, the breast tissue can extend into the axilla (armpit), and can reach as far to the back as the latissimus dorsi muscle, extending from the lower back to the humerus bone (the bone of the upper arm). As a mammary gland, the breast is composed of differing layers of tissue, predominantly two types: adipose tissue; and glandular tissue, which affects the lactation functions of the breasts. The natural resonant frequency of the human breast is about 2 hertz.
Morphologically, the breast is tear-shaped. The superficial tissue layer (superficial fascia) is separated from the skin by 0.5–2.5 cm of subcutaneous fat (adipose tissue). The suspensory Cooper's ligaments are fibrous-tissue prolongations that radiate from the superficial fascia to the skin envelope. The female adult breast contains 14–18 irregular lactiferous lobes that converge at the nipple. The 2.0–4.5 mm milk ducts are immediately surrounded by dense connective tissue that supports the glands. Milk exits the breast through the nipple, which is surrounded by a pigmented area of skin called the areola. The size of the areola can vary widely among women. The areola contains modified sweat glands known as Montgomery's glands. These glands secrete an oily fluid that lubricates and protects the nipple during breastfeeding. Volatile compounds in these secretions may also serve as an olfactory stimulus for the newborn's appetite.
The dimensions and weight of the breast vary widely among women. A small-to-medium-sized breast weighs 500 grams (1.1 pounds) or less, and a large breast can weigh approximately 750 to 1,000 grams (1.7 to 2.2 pounds) or more. In terms of composition, the breasts are about 80 to 90% stromal tissue (fat and connective tissue), while epithelial or glandular tissue only accounts for about 10 to 20% of the volume of the breasts. The tissue composition ratios of the breast also vary among women. Some women's breasts have a higher proportion of glandular tissue than of adipose or connective tissues. The fat-to-connective-tissue ratio determines the density or firmness of the breast. During a woman's life, her breasts change size, shape, and weight due to hormonal changes during puberty, the menstrual cycle, pregnancy, breastfeeding, and menopause.
Glandular structure
The breast is an apocrine gland that produces the milk used to feed an infant. The nipple of the breast is surrounded by the areola (nipple-areola complex). The areola has many sebaceous glands, and the skin color varies from pink to dark brown. The basic units of the breast are the terminal duct lobular units (TDLUs), which produce the fatty breast milk. They give the breast its offspring-feeding functions as a mammary gland. They are distributed throughout the body of the breast. Approximately two-thirds of the lactiferous tissue is within 30 mm of the base of the nipple. The terminal lactiferous ducts drain the milk from TDLUs into 4–18 lactiferous ducts, which drain to the nipple. The milk-glands-to-fat ratio is 2:1 in a lactating woman, and 1:1 in a non-lactating woman. In addition to the milk glands, the breast is also composed of connective tissues (collagen, elastin), white fat, and the suspensory Cooper's ligaments. Sensation in the breast is provided by the peripheral nervous system innervation by means of the front (anterior) and side (lateral) cutaneous branches of the fourth-, fifth-, and sixth intercostal nerves. The T-4 nerve (Thoracic spinal nerve 4), which innervates the dermatomic area, supplies sensation to the nipple-areola complex.
Lymphatic drainage
Approximately 75% of the lymph from the breast travels to the axillary lymph nodes on the same side of the body, while 25% of the lymph travels to the parasternal nodes (beside the sternum bone). A small amount of remaining lymph travels to the other breast and to the abdominal lymph nodes. The subareolar region has a lymphatic plexus known as the "subareolar plexus of Sappey". The axillary lymph nodes include the pectoral (chest), subscapular (under the scapula), and humeral (humerus-bone area) lymph-node groups, which drain to the central axillary lymph nodes and to the apical axillary lymph nodes. The lymphatic drainage of the breasts is especially relevant to oncology because breast cancer is common to the mammary gland, and cancer cells can metastasize (break away) from a tumor and be dispersed to other parts of the body by means of the lymphatic system.
Morphology
The morphologic variations in the size, shape, volume, tissue density, pectoral locale, and spacing of the breasts determine their natural shape, appearance, and position on a woman's chest. Breast size and other characteristics do not predict the fat-to-milk-gland ratio or the potential for the woman to nurse an infant. The size and the shape of the breasts are influenced by normal-life hormonal changes (thelarche, menstruation, pregnancy, menopause) and medical conditions (e.g. virginal breast hypertrophy). The shape of the breasts is naturally determined by the support of the suspensory Cooper's ligaments, the underlying muscle and bone structures of the chest, and by the skin envelope. The suspensory ligaments sustain the breast from the clavicle (collarbone) and the clavico-pectoral fascia (collarbone and chest) by traversing and encompassing the fat and milk-gland tissues. The breast is positioned, affixed to, and supported upon the chest wall, while its shape is established and maintained by the skin envelope. In most women, one breast is slightly larger than the other. More obvious and persistent asymmetry in breast size occurs in up to 25% of women.
The base of each breast is attached to the chest by the deep fascia over the pectoralis major muscles. The base of the breast is semi-circular; however, the shape and position of the breast above the surface are variable. The space between the breast and the pectoralis major muscle, called the retromammary space, gives mobility to the breast. The chest (thoracic cavity) progressively slopes outwards from the thoracic inlet (atop the breastbone) down to the lowest ribs that support the breasts. The inframammary fold (IMF), where the lower portion of the breast meets the chest, is an anatomic feature created by the adherence of the breast skin and the underlying connective tissues of the chest; the IMF is the lower-most extent of the anatomic breast. Normal breast tissue has a texture that feels nodular or granular, with considerable variation from woman to woman.
Breasts have been categorized into four general morphological groups: "flat, spheric, protruded, and drooped", or "small/flat, large/inward, upward, and droopy".
Support
While it is a common belief that breastfeeding causes breasts to sag, researchers have found that a woman's breasts sag due to four key factors: cigarette smoking, number of pregnancies, gravity, and weight loss or gain. Women sometimes wear bras because they mistakenly believe they prevent breasts from sagging as they get older. Physicians, lingerie retailers, teenagers, and adult women used to believe that bras were medically required to support breasts. In a 1952 article in Parents' Magazine, Frank H. Crowell erroneously reported that it was important for teen girls to begin wearing bras early. According to Crowell, this would prevent sagging breasts, stretched blood vessels, and poor circulation later on. This belief was based on the false idea that breasts cannot anatomically support themselves.
Sports bras are sometimes worn during cardiovascular exercise; they are designed to secure the breasts closely to the body to prevent movement during high-motion activities such as running. Studies have indicated that sports bras which are overly tight may restrict respiratory function.
Development
The breasts are principally composed of adipose, glandular, and connective tissues. Because these tissues have hormone receptors, their sizes and volumes fluctuate according to the hormonal changes particular to thelarche (sprouting of breasts), menstruation (the monthly cycle), pregnancy (reproduction), lactation (feeding of offspring), and menopause (end of menstruation).
Puberty
The morphological structure of the human breast is identical in males and females until puberty. For pubescent girls in thelarche (the breast-development stage), the female sex hormones (principally estrogens) in conjunction with growth hormone promote the sprouting, growth, and development of the breasts. During this time, the mammary glands grow in size and volume and begin resting on the chest. These development stages of secondary sex characteristics (breasts, pubic hair, etc.) are illustrated in the five-stage Tanner scale.
During thelarche, the developing breasts are sometimes of unequal size, and usually the left breast is slightly larger. This condition of asymmetry is transitory and statistically normal in female physical and sexual development. Medical conditions can cause overdevelopment (e.g., virginal breast hypertrophy, macromastia) or underdevelopment (e.g., tuberous breast deformity, micromastia) in girls and women.
Approximately two years after the onset of puberty, around the time of a girl's first menstrual cycle, estrogen and growth hormone stimulate the development and growth of the glandular, fat, and suspensory tissues that compose the breast. This continues for approximately four years until the final shape of the breast (size, volume, density) is established at about the age of 21. Mammoplasia (breast enlargement) in girls begins at puberty, unlike in all other primates, in which breasts enlarge only during lactation.
Hormone replacement therapy
Hormone replacement therapy, including gender-affirming hormone therapy, stimulates the growth of glandular and adipose tissue through estrogen supplementation.
In menopausal women, HRT helps restore breast volume and skin elasticity diminished by declining estrogen levels, typically using oral or transdermal estradiol.
In gender-affirming hormone therapy, breast development is induced through feminizing HRT, often combining estrogen with anti-androgens to suppress testosterone. Maximum growth is usually achieved after 2–3 years.
Factors such as age, genetics, and hormone dosage influence outcomes.
Changes during the menstrual cycle
During the menstrual cycle, the breasts are enlarged by premenstrual water retention and temporary growth as influenced by changing hormone levels.
Pregnancy and breastfeeding
The breasts reach full maturity only when a woman's first pregnancy occurs. Changes to the breasts are among the first signs of pregnancy. The breasts become larger, the nipple-areola complex becomes larger and darker, the Montgomery's glands enlarge, and veins sometimes become more visible. Breast tenderness during pregnancy is common, especially during the first trimester. By mid-pregnancy, the breast is physiologically capable of lactation and some women can express colostrum, a form of breast milk.
Pregnancy causes elevated levels of the hormone prolactin, which has a key role in the production of milk. However, milk production is blocked by the hormones progesterone and estrogen until after delivery, when progesterone and estrogen levels plummet.
Menopause
At menopause, breast atrophy occurs. The breasts can decrease in size when the levels of circulating estrogen decline. The adipose tissue and milk glands also begin to wither. The breasts can also become enlarged from adverse side effects of combined oral contraceptive pills. The size of the breasts can also increase and decrease in response to weight fluctuations.
Physical changes to the breasts are often recorded in the stretch marks of the skin envelope; they can serve as historical indicators of the increments and the decrements of the size and volume of a woman's breasts throughout the course of her life.
Breast changes during menopause are sometimes treated with hormone replacement therapy.
Cancer
Breast cancer is a cancer that develops from breast tissue. Signs of breast cancer may include a lump in the breast, a change in breast shape, dimpling of the skin, milk rejection, fluid coming from the nipple, a newly inverted nipple, or a red or scaly patch of skin. In those with distant spread of the disease, there may be bone pain, swollen lymph nodes, shortness of breath, or yellow skin.
Risk factors for developing breast cancer include obesity, a lack of physical exercise, alcohol consumption, hormone replacement therapy during menopause, ionizing radiation, an early age at first menstruation, having children late in life (or not at all), older age, having a prior history of breast cancer, and a family history of breast cancer. About five to ten percent of cases are the result of an inherited genetic predisposition, including BRCA mutations among others. Breast cancer most commonly develops in cells from the lining of milk ducts and the lobules that supply these ducts with milk. Cancers developing from the ducts are known as ductal carcinomas, while those developing from lobules are known as lobular carcinomas. There are more than 18 other sub-types of breast cancer. Some, such as ductal carcinoma in situ, develop from pre-invasive lesions. The diagnosis of breast cancer is confirmed by taking a biopsy of the concerning tissue. Once the diagnosis is made, further tests are carried out to determine if the cancer has spread beyond the breast and which treatments are most likely to be effective.
Breastfeeding
The primary function of the breasts, as mammary glands, is the nourishing of an infant with breast milk. Milk is produced in milk-secreting cells in the alveoli. When the breasts are stimulated by the suckling of her baby, the mother's brain secretes oxytocin. High levels of oxytocin trigger the contraction of muscle cells surrounding the alveoli, causing milk to flow along the ducts that connect the alveoli to the nipple.
Full-term newborns have an instinct and a need to suck on a nipple, and breastfed babies nurse for both nutrition and for comfort. Breast milk provides all necessary nutrients for the first six months of life, and then remains an important source of nutrition, alongside solid foods, until at least one or two years of age.
Exercise
Biomechanical studies have demonstrated that, depending on the activity and the size of a woman's breast, when she walks or runs braless, her breasts may move up and down by or more, and also oscillate side to side. Researchers have also found that as women's breast size increased, they took part in less physical activity, especially vigorous exercise. Few very-large-breasted women jogged, for example. To avoid exercise-related discomfort and pain, medical experts suggest women wear a well-fitted sports bra during activity.
Clinical significance
The breast is susceptible to numerous benign and malignant conditions. The most frequent benign conditions are puerperal mastitis, fibrocystic breast changes and mastalgia.
Lactation unrelated to pregnancy is known as galactorrhea. It can be caused by certain drugs (such as antipsychotic medications), extreme physical stress, or endocrine disorders. Lactation in newborns is caused by hormones from the mother that crossed into the baby's bloodstream during pregnancy.
Breast cancer
Breast cancer is the most common cause of cancer death among women and it is one of the leading causes of death among women. Factors that appear to be implicated in decreasing the risk of breast cancer are regular breast examinations by health care professionals, regular mammograms, self-examination of breasts, healthy diet, exercise to decrease excess body fat, and breastfeeding.
Male breasts
Both females and males develop breasts from the same embryological tissues. Anatomically, male breasts do not normally contain lobules and acini that are present in females. In rare instances, it is possible for very few lobules to be present; this makes it possible for some men to develop lobular carcinoma of the breast. Normally, males produce lower levels of estrogens and higher levels of androgens, namely testosterone, which suppress the effects of estrogens in developing excessive breast tissue. In boys and men, abnormal breast development is manifested as gynecomastia, the consequence of a biochemical imbalance between the normal levels of estrogen and testosterone in the male body. Around 70% of boys temporarily develop breast tissue during adolescence. The condition usually resolves by itself within two years. When male lactation occurs, it is considered a symptom of a disorder of the pituitary gland.
Plastic surgery
Plastic surgery can be performed to augment or reduce the size of breasts, or reconstruct the breast in cases of deformative disease, such as breast cancer. Breast augmentation and breast lift (mastopexy) procedures are done only for cosmetic reasons, whereas breast reduction is sometimes medically indicated. In cases where a woman's breasts are severely asymmetrical, surgery can be performed to either enlarge the smaller breast, reduce the size of the larger breast, or both.
Breast augmentation surgery generally does not interfere with future ability to breastfeed. Breast reduction surgery more frequently leads to decreased sensation in the nipple-areola complex, and to low milk supply in women who choose to breastfeed. Implants can interfere with mammography (breast x-ray images).
Society and culture
General
In Christian iconography, some works of art depict women with their breasts in their hands or on a platter, signifying that they died as a martyr by having their breasts severed; one example of this is Saint Agatha of Sicily.
Femen is a feminist activist group which uses topless protests as part of its campaigns against sex tourism, religious institutions, sexism, and homophobia. Femen activists have been regularly detained by police in response to their protests.
There is a long history of female breasts being used by comedians as a subject for comedy fodder (e.g., British comic Benny Hill's burlesque/slapstick routines).
Art history
In European pre-historic societies, sculptures of female figures with pronounced or highly exaggerated breasts were common. A typical example is the so-called Venus of Willendorf, one of many Paleolithic Venus figurines with ample hips and bosom. Artifacts such as bowls, rock carvings and sacred statues with breasts have been recorded from 15,000 BC up to late antiquity all across Europe, North Africa and the Middle East.
Many female deities representing love and fertility were associated with breasts and breast milk. Figures of the Phoenician goddess Astarte were represented as pillars studded with breasts. Isis, an Egyptian goddess who represented, among many other things, ideal motherhood, was often portrayed as suckling pharaohs, thereby confirming their divine status as rulers. Even certain male deities representing regeneration and fertility were occasionally depicted with breast-like appendices, such as the river god Hapy who was considered to be responsible for the annual overflowing of the Nile.
Female breasts were also prominent in Minoan art in the form of the famous Snake Goddess statuettes, and a few other pieces, though most female breasts are covered. In Ancient Greece there were several cults worshipping the "Kourotrophos", the suckling mother, represented by goddesses such as Gaia, Hera and Artemis. The worship of deities symbolized by the female breast in Greece became less common during the first millennium. The popular adoration of female goddesses decreased significantly during the rise of the Greek city states, a legacy which was passed on to the later Roman Empire.
During the middle of the first millennium BC, Greek culture experienced a gradual change in the perception of female breasts. Women in art were covered in clothing from the neck down, including female goddesses like Athena, the patron of Athens who represented heroic endeavor. There were exceptions: Aphrodite, the goddess of love, was more frequently portrayed fully nude, though in postures that were intended to portray shyness or modesty, a portrayal that has been compared to modern pin ups by historian Marilyn Yalom. Although nude men were depicted standing upright, most depictions of female nudity in Greek art occurred "usually with drapery near at hand and with a forward-bending, self-protecting posture". A popular legend at the time was of the Amazons, a tribe of fierce female warriors who socialized with men only for procreation and even removed one breast to become better warriors (the idea being that the right breast would interfere with the operation of a bow and arrow). The legend was a popular motif in art during Greek and Roman antiquity and served as an antithetical cautionary tale.
Body image
Many women regard their breasts as important to their sexual attractiveness, as a sign of femininity that is important to their sense of self. A woman with smaller breasts may regard her breasts as less attractive.
Clothing
Because breasts are mostly fatty tissue, their shape can—within limits—be molded by clothing, such as foundation garments. Bras are commonly worn by about 90% of Western women, and are often worn for support. The social norm in most Western cultures is to cover breasts in public, though the extent of coverage varies depending on the social context. Some religions ascribe a special status to the female breast, either in formal teachings or through symbolism. Islam forbids free women from exposing their breasts in public.
Many cultures, including Western cultures in North America, associate breasts with sexuality and tend to regard bare breasts as immodest or indecent. In some cultures, like the Himba in northern Namibia, bare-breasted women are normal. In some African cultures, for example, the thigh is regarded as highly sexualized and never exposed in public, but breast exposure is not taboo. In a few Western countries and regions female toplessness at a beach is acceptable, although it may not be acceptable in the town center.
Social attitudes and laws regarding breastfeeding in public vary widely. In many countries, breastfeeding in public is common, legally protected, and generally not regarded as an issue. However, even though the practice may be legal or socially accepted, some mothers may nevertheless be reluctant to expose a breast in public to breastfeed due to actual or potential objections by other people, negative comments, or harassment. It is estimated that around 63% of mothers across the world have publicly breast-fed. Bare-breasted women are legal and culturally acceptable at public beaches in Australia and much of Europe. Filmmaker Lina Esco made a film entitled Free the Nipple, which is about "...laws against female toplessness or restrictions on images of female, but not male, nipples", which Esco states is an example of sexism in society.
Breast binding, also known as chest binding, is the flattening and hiding of breasts with constrictive materials such as cloth strips or purpose-built undergarments. Binders may also be used as alternatives to bras or for reasons of propriety. People who bind include women, trans men, non-binary people, and cisgender men with gynecomastia.
Sexual characteristic
In some cultures, breasts play a role in human sexual activity. Breasts and especially the nipples are among the various human erogenous zones. They are sensitive to the touch as they have many nerve endings; and it is common to press or massage them with hands or orally before or during sexual activity. During sexual arousal, breast size increases, venous patterns across the breasts become more visible, and nipples harden. Compared to other primates, human breasts are proportionately large throughout adult females' lives. Some writers have suggested that they may have evolved as a visual signal of sexual maturity and fertility. In Patterns of Sexual Behavior, a 1951 analysis of 191 traditional cultures, the researchers noted that stimulation of the female breast by a male sexual partner "seemed absent in all subhuman forms, although it is common among the members of many different human societies."
Many people regard bare female breasts as aesthetically pleasing or erotic, and they can elicit heightened sexual desires in men in many cultures. In the ancient Indian work the Kama Sutra, light scratching of the breasts with nails and biting with teeth are considered erotic. Some people show a sexual interest in female breasts distinct from that of the person, which may be regarded as a breast fetish. A number of Western fashions include clothing which accentuates the breasts, such as the use of push-up bras and décolleté (plunging neckline) gowns and blouses which show cleavage. While U.S. culture prefers breasts that are youthful and upright, some cultures venerate women with drooping breasts, indicating mothering and the wisdom of experience.
Research conducted at the Victoria University of Wellington showed that breasts are often the first thing men look at, and for a longer time than other body parts. The writers of the study had initially speculated that the reason for this is due to endocrinology with larger breasts indicating higher levels of estrogen and a sign of greater fertility, but the researchers said that "Men may be looking more often at the breasts because they are simply aesthetically pleasing, regardless of the size."
Some women report achieving an orgasm from nipple stimulation, but this is rare. Research suggests that the orgasms are genital orgasms, and may also be directly linked to "the genital area of the brain". In these cases, it seems that sensation from the nipples travels to the same part of the brain as sensations from the vagina, clitoris and cervix. Nipple stimulation may trigger uterine contractions, which then produce a sensation in the genital area of the brain.
Anthropomorphic geography
There are many mountains named after the breast because they resemble it in appearance and so are objects of religious and ancestral veneration as a fertility symbol and of well-being. In Asia, there was "Breast Mountain", which had a cave where the Buddhist monk Bodhidharma (Da Mo) spent much time in meditation. Other such breast mountains are Mount Elgon on the Uganda–Kenya border; and the Maiden Paps in Scotland; the ('Maiden's breast mountains') in Talim Island, Philippines, the twin hills known as the Paps of Anu ( or 'the breasts of Anu'), near Killarney in Ireland; the 2,086 m high or in the , Spain; in Thailand, in Puerto Rico; and the Breasts of Aphrodite in Mykonos, among many others. In the United States, the Teton Range is named after the French word for 'nipple'.
Measurement
The maturation and size of the breasts can be measured by a variety of different methods. These include Tanner staging, bra cup size, breast volume, breast–chest difference, the breast unit, breast hemicircumference, and breast circumference, among other measures.
See also
Udder
Notes
References
Bibliography
Morris, Desmond. The Naked Ape: A Zoologist's Study of the Human Animal. Bantam Books, Canada, 1967.
External links
Breastfeeding
Human sexuality
Human physiology
Anatomy
Human body
Health | Breast | [
"Physics",
"Biology"
] | 7,128 | [
"Human sexuality",
"Behavior",
"Matter",
"Human behavior",
"Human body",
"Physical objects",
"Sexuality",
"Anatomy"
] |
4,495 | https://en.wikipedia.org/wiki/British%20thermal%20unit | The British thermal unit (Btu) is a measure of heat, which is a form of energy. It was originally defined as the amount of heat required to raise the temperature of one pound of water by one degree Fahrenheit. It is also part of the United States customary units. The SI unit for energy is the joule (J); one Btu equals about 1,055 J (varying within the range of 1,054–1,060 J depending on the specific definition of BTU; see below).
While units of heat are often supplanted by energy units in scientific work, they are still used in some fields. For example, in the United States the price of natural gas is quoted in dollars per the amount of natural gas that would give 1 million Btu (1 "MMBtu") of heat energy if burned.
Definitions
A Btu was originally defined as the amount of heat required to raise the temperature of one pound of liquid water by one degree Fahrenheit at a constant pressure of one atmosphere. There are several definitions of the Btu that differ slightly. This reflects the fact that the temperature change of a mass of water due to the addition of a specific amount of heat (calculated in energy units, usually joules) depends slightly upon the water's initial temperature. Definitions of the Btu based on different water temperatures vary by up to 0.5%.
Prefixes
Units of kBtu are used in building energy use tracking and heating system sizing. Energy Use Index (EUI) represents kBtu per square foot of conditioned floor area. "k" stands for 1,000.
The unit Mbtu is used in natural gas and other industries to indicate 1,000 Btu. However, there is an ambiguity in that the metric system (SI) uses the prefix "M" to indicate 'Mega-', one million (1,000,000). Even so, "MMbtu" is often used to indicate one million Btu particularly in the oil and gas industry.
Energy analysts accustomed to the metric "k" ('kilo-') for 1,000 are more likely to use MBtu to represent one million, especially in documents where M represents one million in other energy or cost units, such as MW, MWh and $.
The unit 'therm' is used to represent 100,000 Btu. A decatherm is 10 therms or one million Btu. The unit quad is commonly used to represent one quadrillion (10^15) Btu.
Conversions
One Btu is approximately:
1.054–1.060 kJ (kilojoules)
0.293 Wh (watt hours)
252 cal (calories)
0.252 kcal (kilocalories)
25,031 to 25,160 ft⋅pdl (foot-poundal)
778–782 ft⋅lbf (foot-pounds-force)
5.40395 (lbf/in²)⋅ft³
A Btu can be approximated as the heat produced by burning a single wooden kitchen match or as the amount of energy it takes to lift a weight .
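As an illustration of the conversions above, the following Python sketch (an informal aid, not part of any standard; it uses the rounded factor of 1,055 J per Btu quoted earlier, so all outputs are approximate) expresses an energy given in Btu in several other units.

```python
# Illustrative conversion helper based on the approximate factor 1 Btu ~= 1,055 J.
# The exact value depends on which Btu definition is used (1,054-1,060 J).

BTU_TO_JOULES = 1055.0          # approximate; see the definitions above
JOULES_PER_CALORIE = 4.184      # thermochemical calorie
JOULES_PER_WATT_HOUR = 3600.0
JOULES_PER_FOOT_POUND = 1.3558

def convert_btu(btu: float) -> dict:
    """Return a Btu quantity expressed in several other energy units."""
    joules = btu * BTU_TO_JOULES
    return {
        "kJ": joules / 1000.0,
        "Wh": joules / JOULES_PER_WATT_HOUR,
        "kcal": joules / JOULES_PER_CALORIE / 1000.0,
        "ft·lbf": joules / JOULES_PER_FOOT_POUND,
    }

if __name__ == "__main__":
    # One Btu is roughly 1.055 kJ, 0.293 Wh, 0.252 kcal and 778 ft·lbf.
    print(convert_btu(1.0))
```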
For natural gas
In natural gas pricing, the Canadian definition is that ≡ .
The energy content (high or low heating value) of a volume of natural gas varies with the composition of the natural gas, which means there is no universal conversion factor for energy to volume. One cubic foot of average natural gas yields ≈ 1,030 Btu (between 1,010 Btu and 1,070 Btu, depending on quality, when burned)
As a coarse approximation, of natural gas yields ≈ ≈ .
For natural gas price conversion ≈ 36.9 million Btu and ≈
BTU/h
The SI unit of power for heating and cooling systems is the watt. Btu per hour (Btu/h) is sometimes used in North America and the United Kingdom - the latter for air conditioning mainly, though "Btu/h" is sometimes abbreviated to just "Btu". MBH—thousands of Btu per hour—is also common.
1 W is approximately 3.412 Btu/h
1,000 Btu/h is approximately 293 W
1 hp is approximately 2,544 Btu/h
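The power figures above follow directly from the energy conversion, since 1 Btu/h is roughly 1,055 J delivered over 3,600 s. A minimal sketch, again assuming the rounded 1,055 J per Btu factor:

```python
# Power conversions between watts and Btu/h, using 1 Btu ~= 1,055 J.
BTU_PER_HOUR_TO_WATTS = 1055.0 / 3600.0   # ~0.293 W per Btu/h

def btu_per_hour_to_watts(btu_h: float) -> float:
    return btu_h * BTU_PER_HOUR_TO_WATTS

def watts_to_btu_per_hour(watts: float) -> float:
    return watts / BTU_PER_HOUR_TO_WATTS

# 1 W ~= 3.41 Btu/h; 1,000 Btu/h ~= 293 W; 1 hp (745.7 W) ~= 2,540 Btu/h.
print(watts_to_btu_per_hour(1.0),
      btu_per_hour_to_watts(1000.0),
      watts_to_btu_per_hour(745.7))
```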
Associated units
1 ton of cooling, a common unit in North American refrigeration and air conditioning applications, is 12,000 Btu/h (about 3.5 kW). It is the rate of heat transfer needed to freeze one short ton (2,000 lb) of water into ice in 24 hours.
In the United States and Canada, the R-value that describes the performance of thermal insulation is typically quoted in square foot degree Fahrenheit hours per British thermal unit (ft²⋅°F⋅h/Btu). For insulation with an R-value of 1, one Btu per hour of heat flows across each square foot of the insulator for each degree Fahrenheit of temperature difference across it; more generally, the heat flow equals the area times the temperature difference divided by the R-value.
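A minimal sketch of that relationship follows (the wall area, temperature difference, and R-value in the example are hypothetical, chosen only for illustration):

```python
def heat_flow_btu_per_hour(area_ft2: float, delta_t_f: float, r_value: float) -> float:
    """Steady-state conductive heat flow Q = A * dT / R, in Btu/h."""
    return area_ft2 * delta_t_f / r_value

# Example: a 100 ft^2 wall insulated to R-13 with a 40 F temperature difference
# loses about 308 Btu/h (100 * 40 / 13).
print(heat_flow_btu_per_hour(100.0, 40.0, 13.0))
```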
1 therm is defined in the United States as 100,000 Btu using the definition. In the EU it was listed in 1979 with the BTUIT definition and planned to be discarded as a legal unit of trade by 1994. United Kingdom regulations were amended to replace therms with joules with effect from 1 January 2000. the therm was still used in natural gas pricing in the United Kingdom.
1 quad (short for quadrillion Btu) is 10^15 Btu, which is about 1 exajoule. Quads are used in the United States for representing the annual energy consumption of large economies: for example, the U.S. economy used 99.75 quads in 2005. One quad/year is about 33.43 gigawatts.
The Btu should not be confused with the Board of Trade Unit (BTU), an obsolete UK synonym for kilowatt hour ().
The Btu is often used to express the conversion-efficiency of heat into electrical energy in power plants. Figures are quoted in terms of the quantity of heat in Btu required to generate 1 kW⋅h of electrical energy. A typical coal-fired power plant works at about 10,500 Btu/kW⋅h, an efficiency of 32–33%.
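The relationship between a plant's heat rate and its efficiency follows from the fact that 1 kW⋅h is about 3,412 Btu, so efficiency is simply 3,412 divided by the heat rate in Btu per kW⋅h. A minimal sketch (the 10,500 Btu/kW⋅h input is only an illustrative value consistent with the 32–33% range above):

```python
BTU_PER_KWH = 3412.0   # approximate energy content of one kilowatt-hour

def efficiency_from_heat_rate(heat_rate_btu_per_kwh: float) -> float:
    """Fraction of input heat converted to electricity."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

# A heat rate of ~10,500 Btu/kWh corresponds to roughly 32.5% efficiency.
print(efficiency_from_heat_rate(10500.0))
```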
The centigrade heat unit (CHU) is the amount of heat required to raise the temperature of one pound of water by one Celsius degree. It is equal to 1.8 Btu or 1,899 joules. In 1974, this unit was "still sometimes used" in the United Kingdom as an alternative to Btu.
Another legacy unit for energy in the metric system is the calorie, which is defined as the amount of heat required to raise the temperature of one gram of water by one degree Celsius.
See also
Conversion of units
Latent heat
Metrication
Ton of refrigeration
Notes
References
External links
Units of energy
Imperial units
Customary units of measurement in the United States | British thermal unit | [
"Mathematics"
] | 1,354 | [
"Quantity",
"Units of energy",
"Units of measurement"
] |
4,502 | https://en.wikipedia.org/wiki/Biotechnology | Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms and parts thereof for products and services.
The term biotechnology was first used by Károly Ereky in 1919 to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances.
Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, thereby creating new traits or modifying existing ones.
Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese.
The applications of biotechnology are diverse and have led to the development of products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites.
Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surrounding the use and application of biotechnology in various industries and fields.
Definition
The concept of biotechnology encompasses a wide range of procedures for modifying living organisms for human purposes, going back to domestication of animals, cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. Modern usage also includes genetic engineering, as well as cell and tissue culture technologies. The American Chemical Society defines biotechnology as the application of biological organisms, systems, or processes by various industries to learning about the science of life and the improvement of the value of materials and organisms, such as pharmaceuticals, crops, and livestock. As per the European Federation of Biotechnology, biotechnology is the integration of natural science and organisms, cells, parts thereof, and molecular analogues for products and services. Biotechnology is based on the basic biological sciences (e.g., molecular biology, biochemistry, cell biology, embryology, genetics, microbiology) and conversely provides methods to support and perform basic research in biology.
In this broader sense, biotechnology is laboratory research and development that uses bioinformatics for the exploration, extraction, exploitation, and production of products from living organisms and other sources of biomass by means of biochemical engineering. High value-added products can be planned (for example, reproduced by biosynthesis), forecast, formulated, developed, manufactured, and marketed with the aims of sustainable operations (to recover the very large initial investment in R&D) and of obtaining durable patent rights (exclusive rights for sales, which require prior national and international approval based on animal and human experiments, especially in the pharmaceutical branch of biotechnology, to prevent undetected side effects or other safety concerns with the products). The utilization of biological processes, organisms, or systems to produce products that are anticipated to improve human lives is termed biotechnology.
By contrast, bioengineering is generally thought of as a related field that more heavily emphasizes higher systems approaches (not necessarily the altering or using of biological materials directly) for interfacing with and utilizing living things. Bioengineering is the application of the principles of engineering and natural sciences to tissues, cells, and molecules. This can be considered as the use of knowledge from working with and manipulating biology to achieve a result that can improve functions in plants and animals. Relatedly, biomedical engineering is an overlapping field that often draws upon and applies biotechnology (by various definitions), especially in certain sub-fields of biomedical or chemical engineering such as tissue engineering, biopharmaceutical engineering, and genetic engineering.
History
Although not normally what first comes to mind, many forms of human-derived agriculture clearly fit the broad definition of "utilizing a biotechnological system to make products". Indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise.
Agriculture has been theorized to have become the dominant way of producing food since the Neolithic Revolution. Through early biotechnology, the earliest farmers selected and bred the best-suited crops (e.g., those with the highest yields) to produce enough food to support a growing population. As crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by-products could effectively fertilize, restore nitrogen, and control pests. Throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants — one of the first forms of biotechnology.
These processes were also used in the early fermentation of beer, which was practiced in early Mesopotamia, Egypt, China, and India and still relies on the same basic biological methods. In brewing, malted grains (containing enzymes) convert the starch from grains into sugar, and specific yeasts are then added to produce beer; in this process, carbohydrates in the grains are broken down into alcohols such as ethanol. Later, other cultures developed lactic acid fermentation, which produced other preserved foods, such as soy sauce. Fermentation was also used in this time period to produce leavened bread. Although the process of fermentation was not fully understood until Louis Pasteur's work in 1857, it is still considered the first use of biotechnology to convert a food source into another form.
Before the time of Charles Darwin's work and life, animal and plant scientists had already used selective breeding. Darwin added to that body of work with his scientific observations about the ability of science to change species. These accounts contributed to Darwin's theory of natural selection.
For thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.
In the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, using Clostridium acetobutylicum to ferment corn starch into acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.
Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led to the purification of the antibiotic formed by the mold by Howard Florey, Ernst Boris Chain and Norman Heatley – to form what we today know as penicillin. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.
The field of modern biotechnology is generally thought of as having been born in 1971 when Paul Berg's (Stanford) experiments in gene splicing had early success. Herbert W. Boyer (Univ. Calif. at San Francisco) and Stanley N. Cohen (Stanford) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. The commercial viability of a biotechnology industry was significantly expanded on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty. Indian-born Ananda Chakrabarty, working for General Electric, had modified a bacterium (of the genus Pseudomonas) capable of breaking down crude oil, which he proposed to use in treating oil spills. (Chakrabarty's work did not involve gene manipulation but rather the transfer of entire organelles between strains of the Pseudomonas bacterium).
The MOSFET was invented at Bell Labs between 1955 and 1960. Two years later, in 1962, Leland C. Clark and Champ Lyons invented the first biosensor. Biosensor MOSFETs were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. The first BioFET was the ion-sensitive field-effect transistor (ISFET), invented by Piet Bergveld in 1970. It is a special type of MOSFET, where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
By the mid-1980s, other BioFETs had been developed, including the gas sensor FET (GASFET), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFETs such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
A factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products.
Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds that resist pests and drought. By increasing farm productivity, biotechnology boosts biofuel production.
Examples
Biotechnology has applications in four major industrial areas, including health care (medical), crop production and agriculture, non-food (industrial) uses of crops and other products (e.g., biodegradable plastics, vegetable oil, biofuels), and environmental uses.
For example, one application of biotechnology is the directed use of microorganisms for the manufacture of organic products (examples include beer and milk products). Another example is using naturally present bacteria by the mining industry in bioleaching. Biotechnology is also used to recycle, treat waste, clean up sites contaminated by industrial activities (bioremediation), and also to produce biological weapons.
A series of derived terms have been coined to identify several branches of biotechnology, for example:
Bioinformatics (or "gold biotechnology") is an interdisciplinary field that addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as, "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale". Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector. A minimal code sketch of the kind of routine sequence computation this field automates follows this list.
Blue biotechnology is based on the exploitation of sea resources to create products and industrial applications. This branch of biotechnology is used mostly by the refining and combustion industries, principally for the production of bio-oils from photosynthetic micro-algae.
Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need for external application of pesticides; Bt corn is one such crop. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. Green biotechnology is commonly considered the next phase of the green revolution and can be seen as a platform for eradicating world hunger: it uses technologies that produce plants which are more fertile and more resistant to biotic and abiotic stress, and it promotes the application of environmentally friendly fertilizers and biopesticides, with a main focus on the development of agriculture. On the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste.
Red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. This branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. As well as the development of hormones, stem cells, antibodies, siRNA and diagnostic tests.
White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than traditional processes used to produce industrial goods.
"Yellow biotechnology" refers to the use of biotechnology in food production (food industry), for example in making wine (winemaking), cheese (cheesemaking), and beer (brewing) by fermentation. It has also been used to refer to biotechnology applied to insects. This includes biotechnology-based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches.
Gray biotechnology is dedicated to environmental applications, and is focused on the maintenance of biodiversity and the removal of pollutants.
Brown biotechnology is related to the management of arid lands and deserts. One application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources.
Violet biotechnology is related to law, ethical and philosophical issues around biotechnology.
Microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and in microgravity (the space bioeconomy).
Dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and crops.
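As a small illustration of the routine computations referred to under bioinformatics above, the following Python sketch (a toy example; the sequence and function name are invented for illustration) reports the GC content of a DNA string, the sort of per-sequence statistic that genomics pipelines compute at scale over millions of reads.

```python
from collections import Counter

def gc_content(sequence: str) -> float:
    """Fraction of G and C bases in a DNA sequence (ignores case)."""
    seq = sequence.upper()
    counts = Counter(seq)
    gc = counts["G"] + counts["C"]
    return gc / len(seq) if seq else 0.0

# Toy example: an invented 20-base sequence.
example = "ATGCGCGTATTAGCCGGATC"
print(f"GC content: {gc_content(example):.2%}")
```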
Medicine
In medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discovery and production, pharmacogenomics, and genetic testing (or genetic screening). In 2021, companies active in oncology accounted for nearly 40% of the total company value of pharmaceutical biotech companies worldwide, with neurology and rare diseases being the other two largest application areas.
Pharmacogenomics (a combination of pharmacology and genomics) is the technology that analyses how genetic makeup affects an individual's response to drugs. Researchers in the field investigate the influence of genetic variation on drug responses in patients by correlating gene expression or single-nucleotide polymorphisms with a drug's efficacy or toxicity. The purpose of pharmacogenomics is to develop rational means to optimize drug therapy, with respect to the patients' genotype, to ensure maximum efficacy with minimal adverse effects. Such approaches promise the advent of "personalized medicine"; in which drugs and drug combinations are optimized for each individual's unique genetic makeup.
Biotechnology has contributed to the discovery and manufacturing of traditional small molecule pharmaceutical drugs as well as drugs that are the product of biotechnology – biopharmaceutics. Modern biotechnology can be used to manufacture existing medicines relatively easily and cheaply. The first genetically engineered products were medicines designed to treat human diseases. To cite one example, in 1978 Genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium Escherichia coli. Insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals (cattle or pigs). The genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. Biotechnology has also enabled emerging therapeutics like gene therapy. The application of biotechnology to basic science (for example through the Human Genome Project) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well.
Genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child's parentage (genetic mother and father) or in general a person's ancestry. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Most of the time, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. As of 2011 several hundred genetic tests were in use. Since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling.
Agriculture
Genetically modified crops ("GM crops", or "biotech crops") are plants used in agriculture, the DNA of which has been modified with genetic engineering techniques. In most cases, the main aim is to introduce a new trait that does not occur naturally in the species. Biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. Furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology.
Examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments (e.g. resistance to a herbicide), reduction of spoilage, or improving the nutrient profile of the crop. Examples in non-food crops include production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.
Farmers have widely adopted GM technology. Between 1996 and 2011, the total surface area of land cultivated with GM crops increased by a factor of 94, from about 1.7 million hectares to about 160 million hectares. 10% of the world's crop lands were planted with GM crops in 2010. As of 2011, 11 different transgenic crops were grown commercially on about 160 million hectares in 29 countries such as the US, Brazil, Argentina, India, Canada, China, Paraguay, Pakistan, South Africa, Uruguay, Bolivia, Australia, Philippines, Myanmar, Burkina Faso, Mexico and Spain.
Genetically modified foods are foods produced from organisms that have had specific changes introduced into their DNA with the methods of genetic engineering. These techniques have allowed for the introduction of new crop traits as well as a far greater control over a food's genetic structure than previously afforded by methods such as selective breeding and mutation breeding. Commercial sale of genetically modified foods began in 1994, when Calgene first marketed its Flavr Savr delayed ripening tomato. To date, most genetic modification of foods has primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. These have been engineered for resistance to pathogens and herbicides and better nutrient profiles. GM livestock have also been experimentally developed; in November 2013 none were available on the market, but in 2015 the FDA approved the first GM salmon for commercial production and consumption.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
GM crops also provide a number of ecological benefits, if not used in excess. Insect-resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. However, opponents have objected to GM crops per se on several grounds, including environmental concerns, whether food produced from GM crops is safe, whether GM crops are needed to address the world's food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law.
Biotechnology has several applications in the realm of food security. Crops like Golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. Though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. Additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. Transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in India and other countries.
Industrial
Industrial biotechnology (known mainly in Europe as white biotechnology) is the application of biotechnology for industrial purposes, including industrial fermentation. It includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. In recent decades, significant progress has been made in creating genetically modified organisms (GMOs) that enhance the diversity of applications and the economic viability of industrial biotechnology. By using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical-based economy.
Synthetic biology is considered one of the essential cornerstones in industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. Jointly biotechnology and synthetic biology play a crucial role in generating cost-effective products with nature-friendly features by using bio-based production instead of fossil-based. Synthetic biology can be used to engineer model microorganisms, such as Escherichia coli, by genome editing tools to enhance their ability to produce bio-based products, such as bioproduction of medicines and biofuels. For instance, E. coli and Saccharomyces cerevisiae in a consortium could be used as industrial microbes to produce precursors of the chemotherapeutic agent paclitaxel by applying the metabolic engineering in a co-culture approach to exploit the benefits from the two microbes.
Another example of synthetic biology applications in industrial biotechnology is the re-engineering of the metabolic pathways of E. coli by CRISPR and CRISPRi systems toward the production of a chemical known as 1,4-butanediol, which is used in fiber manufacturing. In order to produce 1,4-butanediol, the authors altered the metabolic regulation of E. coli with CRISPR to induce a point mutation in the gltA gene, knock out the sad gene, and knock in six genes (cat1, sucD, 4hbd, cat2, bld, and bdh), while the CRISPRi system was used to knock down three competing genes (gabD, ybgC, and tesB) that affect the biosynthesis pathway of 1,4-butanediol. Consequently, the yield of 1,4-butanediol significantly increased from 0.9 to 1.8 g/L.
Environmental
Environmental biotechnology includes various disciplines that play an essential role in reducing environmental waste and providing environmentally safe processes, such as biofiltration and biodegradation. The environment can be affected by biotechnologies, both positively and adversely. Vallero and others have argued that the difference between beneficial biotechnology (e.g., bioremediation to clean up an oil spill or a hazardous chemical leak) and the adverse effects stemming from biotechnological enterprises (e.g., flow of genetic material from transgenic organisms into wild strains) can be seen as applications and implications, respectively. Cleaning up environmental wastes is an example of an application of environmental biotechnology, whereas loss of biodiversity or loss of containment of a harmful microbe are examples of environmental implications of biotechnology.
Many cities have installed CityTrees, which use biotechnology to filter pollutants from urban atmospheres.
Regulation
The regulation of genetic engineering concerns approaches taken by governments to assess and manage the risks associated with the use of genetic engineering technology, and the development and release of genetically modified organisms (GMO), including genetically modified crops and genetically modified fish. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. The European Union differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the coexistence of GM and non-GM crops. Depending on the coexistence regulations, incentives for the cultivation of GM crops differ.
Database for the GMOs used in the EU
The EUginius (European GMO Initiative for a Unified Database System) database is intended to help companies, interested private users and competent authorities to find precise information on the presence, detection and identification of GMOs used in the European Union. The information is provided in English.
Learning
In 1988, after prompting from the United States Congress, the National Institute of General Medical Sciences (National Institutes of Health) (NIGMS) instituted a funding mechanism for biotechnology training. Universities nationwide compete for these funds to establish Biotechnology Training Programs (BTPs). Each successful application is generally funded for five years then must be competitively renewed. Graduate students in turn compete for acceptance into a BTP; if accepted, then stipend, tuition and health insurance support are provided for two or three years during the course of their PhD thesis work. Nineteen institutions offer NIGMS supported BTPs. Biotechnology training is also offered at the undergraduate level and in community colleges.
References and notes
External links
What is Biotechnology? – A curated collection of resources about the people, places and technologies that have enabled biotechnology | Biotechnology | [
"Biology"
] | 5,522 | [
"nan",
"Biotechnology"
] |
4,526 | https://en.wikipedia.org/wiki/Brick | A brick is a type of construction material used to build walls, pavements and other elements in masonry construction. Properly, the term brick denotes a unit primarily composed of clay, but is now also used informally to denote units made of other materials or other chemically cured construction blocks. Bricks can be joined using mortar, adhesives or by interlocking. Bricks are usually produced at brickworks in numerous classes, types, materials, and sizes which vary with region, and are produced in bulk quantities.
Block is a similar term referring to a rectangular building unit composed of clay or concrete, but is usually larger than a brick. Lightweight bricks (also called lightweight blocks) are made from expanded clay aggregate.
Fired bricks are one of the longest-lasting and strongest building materials, sometimes referred to as artificial stone, and have been in use since ancient times. Air-dried bricks, also known as mudbricks, have a history older than fired bricks, and have an additional ingredient of a mechanical binder such as straw.
Bricks are laid in courses and numerous patterns known as bonds, collectively known as brickwork, and may be laid in various kinds of mortar to hold the bricks together to make a durable structure.
History
Middle East and South Asia
The earliest bricks were dried mudbricks, meaning that they were formed from clay-bearing earth or mud and dried (usually in the sun) until they were strong enough for use. The oldest discovered bricks, originally made from shaped mud and dating before 7500 BC, were found at Tell Aswad, in the upper Tigris region and in southeast Anatolia close to Diyarbakir.
Mudbrick construction was used at Çatalhöyük, from c. 7,400 BC.
Mudbrick structures, dating to c. 7,200 BC have been located in Jericho, Jordan Valley. These structures were made up of the first bricks with dimension 400x150x100 mm.
Fired brick was discovered in Mesopotamia between 5000 and 4500 BC. The standard brick sizes in Mesopotamia followed a general rule: the width of the dried or burned brick would be twice its thickness, and its length would be double its width.
The South Asian inhabitants of Mehrgarh also constructed air-dried mudbrick structures between 7000 and 3300 BC and later the ancient Indus Valley cities of Mohenjo-daro, Harappa, and Mehrgarh. Ceramic, or fired brick was used as early as 3000 BC in early Indus Valley cities like Kalibangan.
In the middle of the third millennium BC, there was a rise in monumental baked brick architecture in Indus cities. Examples included the Great Bath at Mohenjo-daro, the fire altars of Kalibangan, and the granary of Harappa. There was a uniformity to the brick sizes throughout the Indus Valley region, conforming to a 1:2:4 ratio of thickness, width, and length. As the Indus civilization began its decline at the start of the second millennium BC, Harappans migrated east, spreading their knowledge of brickmaking technology. This led to the rise of cities like Pataliputra, Kausambi, and Ujjain, where there was an enormous demand for kiln-made bricks.
By 604 BC, bricks were the construction materials for architectural wonders such as the Hanging Gardens of Babylon, where glazed fired bricks were put into practice.
China
The earliest fired bricks appeared in Neolithic China around 4400 BC at Chengtoushan, a walled settlement of the Daxi culture. These bricks were made of red clay, fired on all sides to above 600 °C, and used as flooring for houses. By the Qujialing period (3300 BC), fired bricks were being used to pave roads and as building foundations at Chengtoushan.
According to Lukas Nickel, the use of ceramic pieces for protecting and decorating floors and walls dates back at various cultural sites to 3000-2000 BC and perhaps even before, but these elements should be rather qualified as tiles. For the longest time builders relied on wood, mud and rammed earth, while fired brick and mudbrick played no structural role in architecture. Proper brick construction, for erecting walls and vaults, finally emerges in the third century BC, when baked bricks of regular shape began to be employed for vaulting underground tombs. Hollow brick tomb chambers rose in popularity as builders were forced to adapt due to a lack of readily available wood or stone. The oldest extant brick building above ground is possibly Songyue Pagoda, dated to 523 AD.
By the end of the third century BC in China, both hollow and small bricks were available for use in building walls and ceilings. Fired bricks were first mass-produced during the construction of the tomb of China's first Emperor, Qin Shi Huangdi. The floors of the three pits of the terracotta army were paved with an estimated 230,000 bricks, with the majority measuring 28x14x7 cm, following a 4:2:1 ratio. The use of fired bricks in Chinese city walls first appeared in the Eastern Han dynasty (25 AD-220 AD). Up until the Middle Ages, buildings in Central Asia were typically built with unbaked bricks. It was only starting in the ninth century CE when buildings were entirely constructed using fired bricks.
The carpenter's manual Yingzao Fashi, published in 1103 at the time of the Song dynasty described the brick making process and glazing techniques then in use. Using the 17th-century encyclopaedic text Tiangong Kaiwu, historian Timothy Brook outlined the brick production process of Ming dynasty China:
Europe
Early civilisations around the Mediterranean, including the Ancient Greeks and Romans, adopted the use of fired bricks. By the early first century CE, standardised fired bricks were being heavily produced in Rome. The Roman legions operated mobile kilns, and built large brick structures throughout the Roman Empire, stamping the bricks with the seal of the legion. The Romans used brick for walls, arches, forts, aqueducts, etc. Notable mentions of Roman brick structures are the Herculaneum gate of Pompeii and the baths of Caracalla.
During the Early Middle Ages the use of bricks in construction became popular in Northern Europe, after being introduced there from Northwestern Italy. An independent style of brick architecture, known as brick Gothic (similar to Gothic architecture) flourished in places that lacked indigenous sources of rocks. Examples of this architectural style can be found in modern-day Denmark, Germany, Poland, and Kaliningrad (former East Prussia).
This style evolved into the Brick Renaissance as the stylistic changes associated with the Italian Renaissance spread to northern Europe, leading to the adoption of Renaissance elements into brick building. Identifiable attributes included a low-pitched hipped or flat roof, symmetrical facade, round arch entrances and windows, columns and pilasters, and more.
A clear distinction between the two styles only developed at the transition to Baroque architecture. In Lübeck, for example, Brick Renaissance is clearly recognisable in buildings equipped with terracotta reliefs by the artist Statius von Düren, who was also active at Schwerin (Schwerin Castle) and Wismar (Fürstenhof).
Long-distance bulk transport of bricks and other construction equipment remained prohibitively expensive until the development of modern transportation infrastructure, with the construction of canals, roads, and railways.
Industrial era
Production of bricks increased massively with the onset of the Industrial Revolution and the rise in factory building in England. For reasons of speed and economy, bricks were increasingly preferred as building material to stone, even in areas where the stone was readily available. It was at this time in London that bright red brick was chosen for construction to make the buildings more visible in the heavy fog and to help prevent traffic accidents.
The transition from the traditional method of production known as hand-moulding to a mechanised form of mass-production slowly took place during the first half of the nineteenth century. The first brick-making machine was patented by Richard A. Ver Valen of Haverstraw, New York, in 1852. The Bradley & Craven Ltd 'Stiff-Plastic Brickmaking Machine' was patented in 1853. Bradley & Craven went on to be a dominant manufacturer of brickmaking machinery. Henry Clayton, employed at the Atlas Works in Middlesex, England, in 1855, patented a brick-making machine that was capable of producing up to 25,000 bricks daily with minimal supervision. His mechanical apparatus soon achieved widespread attention after it was adopted for use by the South Eastern Railway Company for brick-making at their factory near Folkestone.
At the end of the 19th century, the Hudson River region of New York State would become the world's largest brick manufacturing region, with 130 brickyards lining the shores of the Hudson River from Mechanicsville to Haverstraw and employing 8,000 people. At its peak, about 1 billion bricks were produced a year, with many being sent to New York City for use in its construction industry.
The demand for high office building construction at the turn of the 20th century led to a much greater use of cast and wrought iron, and later, steel and concrete. The use of brick for skyscraper construction severely limited the size of the building – the Monadnock Building, built in 1896 in Chicago, required exceptionally thick walls to maintain the structural integrity of its 17 storeys.
Following pioneering work in the 1950s at the Swiss Federal Institute of Technology and the Building Research Establishment in Watford, UK, the use of improved masonry for the construction of tall structures up to 18 storeys high was made viable. However, the use of brick has largely remained restricted to small to medium-sized buildings, as steel and concrete remain superior materials for high-rise construction.
Methods of manufacture
Four basic types of brick are un-fired, fired, chemically set bricks, and compressed earth blocks. Each type is manufactured differently for various purposes.
Mudbrick
Unfired bricks, also known as mudbrick, are made from a mixture of silt, clay, sand and other earth materials like gravel and stone, combined with tempers and binding agents such as chopped straw, grasses, tree bark, or dung. Since these bricks are made up of natural materials and only require heat from the Sun to bake, mudbricks have a relatively low embodied energy and carbon footprint.
The ingredients are first harvested and added together, with clay content ranging from 30% to 70%. The mixture is broken up with hoes or adzes, and stirred with water to form a homogeneous blend. Next, the tempers and binding agents are added in a ratio of roughly one part straw to five parts earth, to reduce weight and to reinforce the brick by limiting shrinkage. Additional clay can be added to reduce the need for straw, lowering the likelihood of insects degrading the organic material of the bricks and thereby weakening the structure. These ingredients are thoroughly mixed together by hand or by treading and are then left to ferment for about a day.
The mix is then kneaded with water and moulded into rectangular prisms of the desired size. Bricks are lined up and left to dry in the sun for three days on each side. After these six days, the bricks continue drying until required for use. Typically, longer drying times are preferred, but the average is eight to nine days from the initial stages to application in structures. Unfired bricks could be made in the spring months and left to dry over the summer for use in the autumn. Mudbricks are commonly employed in arid environments to allow for adequate air drying.
Fired brick
Fired bricks are baked in a kiln which makes them durable. Modern, fired, clay bricks are formed in one of three processes – soft mud, dry press, or extruded. Depending on the country, either the extruded or soft mud method is the most common, since they are the most economical.
Clay and shale are the raw ingredients in the recipe for a fired brick. They are the product of thousands of years of decomposition and erosion of rocks, such as pegmatite and granite, leading to a material that has properties of being highly chemically stable and inert. Within the clays and shales are the materials of aluminosilicate (pure clay), free silica (quartz), and decomposed rock.
One proposed optimal mix is as follows (a simple check of a candidate mix against these ranges is sketched after the list):
Silica (sand) – 50% to 60% by weight
Alumina (clay) – 20% to 30% by weight
Lime – 2 to 5% by weight
Iron oxide – ≤ 7% by weight
Magnesia – less than 1% by weight
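For illustration, a minimal Python sketch (not from the source) that checks a hypothetical candidate mix against the ranges listed above; the candidate percentages are made-up example values:

```python
# Hypothetical example: check a candidate fired-brick mix (percentages by weight)
# against the proposed ranges listed above; the candidate values are invented.
PROPOSED_RANGES = {
    "silica": (50.0, 60.0),
    "alumina": (20.0, 30.0),
    "lime": (2.0, 5.0),
    "iron_oxide": (0.0, 7.0),
    "magnesia": (0.0, 1.0),
}

def check_mix(mix: dict) -> list:
    """Return a list of components that fall outside the proposed ranges."""
    problems = []
    for component, (low, high) in PROPOSED_RANGES.items():
        value = mix.get(component, 0.0)
        if not (low <= value <= high):
            problems.append(f"{component}: {value}% (expected {low}-{high}%)")
    return problems

candidate = {"silica": 55.0, "alumina": 25.0, "lime": 3.0, "iron_oxide": 5.0, "magnesia": 0.5}
print(check_mix(candidate) or "mix within proposed ranges")
```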
Shaping methods
Three main methods are used for shaping the raw materials into bricks to be fired:
Moulded bricks – These bricks start with raw clay, preferably in a mix with 25–30% sand to reduce shrinkage. The clay is first ground and mixed with water to the desired consistency. The clay is then pressed into steel moulds with a hydraulic press and fired to achieve strength.
Dry-pressed bricks – The dry-press method is similar to the soft-mud moulded method, but starts with a much thicker clay mix, so it forms more accurate, sharper-edged bricks. The greater force in pressing and the longer firing time make this method more expensive.
Extruded bricks – For extruded bricks the clay is mixed with 10–15% water (stiff extrusion) or 20–25% water (soft extrusion) in a pugmill. This mixture is forced through a die to create a long cable of material of the desired width and depth. This mass is then cut into bricks of the desired length by a wall of wires. Most structural bricks are made by this method as it produces hard, dense bricks, and suitable dies can produce perforations as well. The introduction of such holes reduces the volume of clay needed, and hence the cost. Hollow bricks are lighter and easier to handle, and have different thermal properties from solid bricks. The cut bricks are hardened by drying for 20 to 40 hours before being fired. The heat for drying is often waste heat from the kiln.
Kilns
In many modern brickworks, bricks are usually fired in a continuously fired tunnel kiln, in which the bricks are fired as they move slowly through the kiln on conveyors, rails, or kiln cars, which achieves a more consistent brick product. The bricks often have lime, ash, and organic matter added, which accelerates the burning process.
The other major kiln type is the Bull's Trench Kiln (BTK), based on a design developed by British engineer W. Bull in the late 19th century.
An oval or circular trench is dug. A tall exhaust chimney is constructed in the centre. Half or more of the trench is filled with "green" (unfired) bricks which are stacked in an open lattice pattern to allow airflow. The lattice is capped with a roofing layer of finished brick.
In operation, new green bricks, along with roofing bricks, are stacked at one end of the brick pile. Historically, a stack of unfired bricks covered for protection from the weather was called a "hack". Cooled finished bricks are removed from the other end for transport to their destinations. In the middle, the brick workers create a firing zone by dropping fuel (coal, wood, oil, debris, etc.) through access holes in the roof above the trench. The constant source of fuel may be grown on woodlots.
The advantage of the BTK design is a much greater energy efficiency compared with clamp or scove kilns. Sheet metal or boards are used to route the airflow through the brick lattice so that fresh air flows first through the recently burned bricks, heating the air, then through the active burning zone. The air continues through the green brick zone (pre-heating and drying the bricks), and finally out the chimney, where the rising gases create suction that pulls air through the system. The reuse of heated air yields savings in fuel cost.
As with the rail process, the BTK process is continuous. A half-dozen labourers working around the clock can fire approximately 15,000–25,000 bricks a day. Unlike the rail process, in the BTK process the bricks do not move. Instead, the locations at which the bricks are loaded, fired, and unloaded gradually rotate through the trench.
Influences on colour
The colour of fired clay bricks is influenced by the chemical and mineral content of the raw materials, the firing temperature, and the atmosphere in the kiln. For example, pink bricks are the result of a high iron content, while white or yellow bricks have a higher lime content. Most bricks burn to various red hues; as the temperature is increased, the colour moves through dark red and purple, and then to brown or grey at the highest firing temperatures. The names of bricks may reflect their origin and colour, such as London stock brick and Cambridgeshire White. Brick tinting may be performed to change the colour of bricks to blend in areas of brickwork with the surrounding masonry.
An impervious and ornamental surface may be laid on brick either by salt glazing, in which salt is added during the burning process, or by the use of a slip, which is a glaze material into which the bricks are dipped. Subsequent reheating in the kiln fuses the slip into a glazed surface integral with the brick base.
Chemically set bricks
Chemically set bricks are not fired but may have the curing process accelerated by the application of heat and pressure in an autoclave.
Calcium-silicate bricks
Calcium-silicate bricks are also called sandlime or flintlime bricks, depending on their ingredients. Rather than being made with clay they are made with lime binding the silicate material. The raw materials for calcium-silicate bricks include lime mixed in a proportion of about 1 to 10 with sand, quartz, crushed flint, or crushed siliceous rock together with mineral colourants. The materials are mixed and left until the lime is completely hydrated; the mixture is then pressed into moulds and cured in an autoclave for three to fourteen hours to speed the chemical hardening. The finished bricks are very accurate and uniform, although the sharp arrises need careful handling to avoid damage to brick and bricklayer. The bricks can be made in a variety of colours; white, black, buff, and grey-blues are common, and pastel shades can be achieved. This type of brick is common in Sweden as well as Russia and other post-Soviet countries, especially in houses built or renovated in the 1970s. A version known as fly ash bricks, manufactured using fly ash, lime, and gypsum (known as the FaL-G process) are common in South Asia. Calcium-silicate bricks are also manufactured in Canada and the United States, and meet the criteria set forth in ASTM C73 – 10 Standard Specification for Calcium Silicate Brick (Sand-Lime Brick).
Concrete bricks
Bricks formed from concrete are usually termed as blocks or concrete masonry unit, and are typically pale grey. They are made from a dry, small aggregate concrete which is formed in steel moulds by vibration and compaction in either an "egglayer" or static machine. The finished blocks are cured, rather than fired, using low-pressure steam. Concrete bricks and blocks are manufactured in a wide range of shapes, sizes and face treatments – a number of which simulate the appearance of clay bricks.
Concrete bricks are available in many colours and as an engineering brick made with sulfate-resisting Portland cement or equivalent. When made with adequate amount of cement they are suitable for harsh environments such as wet conditions and retaining walls. They are made to standards BS 6073, EN 771-3 or ASTM C55. Concrete bricks contract or shrink so they need movement joints every 5 to 6 metres, but are similar to other bricks of similar density in thermal and sound resistance and fire resistance.
Compressed earth blocks
Compressed earth blocks are made mostly from slightly moistened local soils compressed with a mechanical hydraulic press or manual lever press. A small amount of a cement binder may be added, resulting in a stabilised compressed earth block.
Types
There are thousands of types of bricks that are named for their use, size, forming method, origin, quality, texture, and/or materials.
Categorized by manufacture method:
Extruded – made by being forced through an opening in a steel die, with a very consistent size and shape.
Wire-cut – cut to size after extrusion with a tensioned wire which may leave drag marks
Moulded – shaped in moulds rather than being extruded
Machine-moulded – clay is forced into moulds using pressure
Handmade – clay is forced into moulds by a person
Dry-pressed – similar to soft mud method, but starts with a much thicker clay mix and is compressed with great force.
Categorized by use:
Common or building – A brick not intended to be visible, used for internal structure
Face – A brick used on exterior surfaces to present a clean appearance
Hollow – not solid, the holes are less than 25% of the brick volume
Perforated – holes greater than 25% of the brick volume
Keyed – indentations in at least one face and end to be used with rendering and plastering
Paving – brick intended to be in ground contact as a walkway or roadway
Thin – brick with normal height and length but thin width to be used as a veneer
Specialized use bricks:
Chemically resistant – bricks made with resistance to chemical reactions
Acid brick – acid resistant bricks
Engineering – a type of hard, dense, brick used where strength, low water porosity or acid (flue gas) resistance are needed. Further classified as type A and type B based on their compressive strength
Accrington – a type of engineering brick from England
Fire or refractory – highly heat-resistant bricks
Clinker – a vitrified brick
Ceramic glazed – fire bricks with a decorative glazing
Bricks named for place of origin:
Chicago common brick - a soft brick made near Chicago, Illinois with a range of colors, like buff yellow, salmon pink, or deep red
Cream City brick – a light yellow brick made in Milwaukee, Wisconsin
Dutch brick – a hard light coloured brick originally from the Netherlands
Fareham red brick – a type of construction brick
London stock brick – type of handmade brick which was used for the majority of building work in London and South East England until the growth in the use of machine-made bricks
Nanak Shahi bricks – a type of decorative brick in India
Roman brick – a long, flat brick typically used by the Romans
Staffordshire blue brick – a type of construction brick from England
Optimal dimensions, characteristics, and strength
For efficient handling and laying, bricks must be small enough and light enough to be picked up by the bricklayer using one hand (leaving the other hand free for the trowel). Bricks are usually laid flat, and as a result, the effective limit on the width of a brick is set by the distance which can conveniently be spanned between the thumb and fingers of one hand. In most cases, the length of a brick is twice its width plus the width of a mortar joint, or slightly more. This allows bricks to be laid bonded in a structure which increases stability and strength (for an example, see the illustration of bricks laid in English bond, at the head of this article). The wall is built using alternating courses of stretchers, bricks laid longways, and headers, bricks laid crossways. The headers tie the wall together over its width. In fact, this wall is built in a variation of English bond called English cross bond where the successive layers of stretchers are displaced horizontally from each other by half a brick length. In true English bond, the perpendicular lines of the stretcher courses are in line with each other.
A bigger brick makes for a thicker (and thus more insulating) wall. Historically, this meant that bigger bricks were necessary in colder climates (see for instance the slightly larger size of the Russian brick in table below), while a smaller brick was adequate, and more economical, in warmer regions. A notable illustration of this correlation is the Green Gate in Gdansk; built in 1571 of imported Dutch brick, too small for the colder climate of Gdansk, it was notorious for being a chilly and drafty residence. Nowadays this is no longer an issue, as modern walls typically incorporate specialised insulation materials.
The correct brick for a job can be selected from a choice of colour, surface texture, density, weight, absorption, and pore structure, thermal characteristics, thermal and moisture movement, and fire resistance.
In England, the length and width of the common brick remained fairly constant from 1625, when the size was regulated by statute (but see brick tax), but the depth has varied, being shallower in earlier times and somewhat deeper more recently. In the United Kingdom, the usual size of a modern brick (from 1965) is 215 × 102.5 × 65 mm, which, with a nominal 10 mm mortar joint, forms a unit size of 225 × 112.5 × 75 mm, for a ratio of 6:3:2.
In the United States, modern standard bricks are specified for various uses. The most commonly used is the modular brick, with actual dimensions of 7 5⁄8 × 3 5⁄8 × 2 1⁄4 inches (194 × 92 × 57 mm). With the standard 3⁄8 inch mortar joint, this gives nominal dimensions of 8 × 4 × 2 2⁄3 inches, which eases the calculation of the number of bricks in a given wall. The 2:1 length-to-width ratio of modular bricks means that when they turn corners, a 1/2 running bond is formed without needing to cut the brick down or fill the gap with a cut brick; and the 2 2⁄3 inch nominal height of modular bricks means that a soldier course matches the height of three modular running courses, or one standard CMU course.
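As a rough illustration of such a calculation, a minimal Python sketch (not from the source; the wall dimensions and the 5% waste allowance are hypothetical example values):

```python
import math

# Nominal face of a US modular brick laid as a stretcher: 8 in long x 2 2/3 in high
# (actual brick plus mortar joint), so three courses stack to 8 in.
NOMINAL_LENGTH_IN = 8.0
NOMINAL_HEIGHT_IN = 8.0 / 3.0

def bricks_for_wall(wall_length_ft: float, wall_height_ft: float, waste_factor: float = 1.05) -> int:
    """Estimate bricks needed for a single-wythe wall, ignoring openings."""
    face_area_sq_in = NOMINAL_LENGTH_IN * NOMINAL_HEIGHT_IN          # nominal face area per brick
    wall_area_sq_in = (wall_length_ft * 12.0) * (wall_height_ft * 12.0)
    return math.ceil(wall_area_sq_in / face_area_sq_in * waste_factor)

# Example: a 20 ft x 8 ft wall (hypothetical figures)
print(bricks_for_wall(20.0, 8.0))
```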
Some brickmakers create innovative sizes and shapes for bricks used for plastering (and therefore not visible on the inside of the building) where their inherent mechanical properties are more important than their visual ones. These bricks are usually slightly larger, but not as large as blocks and offer the following advantages:
A slightly larger brick requires less mortar and handling (fewer bricks), which reduces cost
Their ribbed exterior aids plastering
More complex interior cavities allow improved insulation, while maintaining strength.
Blocks have a much greater range of sizes. Standard co-ordinating sizes in length and height (in mm) include 400×200, 450×150, 450×200, 450×225, 450×300, 600×150, 600×200, and 600×225; depths (work size, mm) include 60, 75, 90, 100, 115, 140, 150, 190, 200, 225, and 250. They are usable across this range as they are lighter than clay bricks. The density of solid clay bricks is around 2000 kg/m3: this is reduced by frogging, hollow bricks, and so on, but aerated autoclaved concrete, even as a solid brick, can have densities in the range of 450–850 kg/m3.
Bricks may also be classified as solid (less than 25% perforations by volume, although the brick may be "frogged," having indentations on one of the longer faces), perforated (containing a pattern of small holes through the brick, removing no more than 25% of the volume), cellular (containing a pattern of holes removing more than 20% of the volume, but closed on one face), or hollow (containing a pattern of large holes removing more than 25% of the brick's volume). Blocks may be solid, cellular or hollow.
The term "frog" can refer to the indentation or the implement used to make it. Modern brickmakers usually use plastic frogs but in the past they were made of wood.
The compressive strength of bricks produced in the United States varies widely according to the use to which the bricks are to be put. In England clay bricks can have strengths of up to 100 MPa, although a common house brick is likely to show a range of 20–40 MPa.
Uses
Bricks are a versatile building material, able to participate in a wide variety of applications, including:
Structural walls, exterior and interior walls
Bearing and non-bearing sound proof partitions
The fireproofing of structural-steel members in the form of firewalls, party walls, enclosures and fire towers
Foundations for stucco
Chimneys and fireplaces
Porches and terraces
Outdoor steps, brick walks and paved floors
Swimming pools
In the United States, bricks have been used for both buildings and pavement. Examples of brick use in buildings can be seen in colonial era buildings and other notable structures around the country. Bricks have been used in paving roads and sidewalks especially during the late 19th century and early 20th century. The introduction of asphalt and concrete reduced the use of brick for paving, but they are still sometimes installed as a method of traffic calming or as a decorative surface in pedestrian precincts. For example, in the early 1900s, most of the streets in the city of Grand Rapids, Michigan, were paved with bricks. Today, there are only about 20 blocks of brick-paved streets remaining (totalling less than 0.5 percent of all the streets in the city limits). Much like in Grand Rapids, municipalities across the United States began replacing brick streets with inexpensive asphalt concrete by the mid-20th century.
In Northwest Europe, bricks have been used in construction for centuries. Until recently, almost all houses were built almost entirely from bricks. Although many houses are now built using a mixture of concrete blocks and other materials, many houses are skinned with a layer of bricks on the outside for aesthetic appeal.
Bricks in the metallurgy and glass industries are often used for lining furnaces, in particular refractory bricks such as silica, magnesia, chamotte and neutral (chromomagnesite) refractory bricks. This type of brick must have good thermal shock resistance, refractoriness under load, high melting point, and satisfactory porosity. There is a large refractory brick industry, especially in the United Kingdom, Japan, the United States, Belgium and the Netherlands.
Engineering bricks are used where strength, low water porosity or acid (flue gas) resistance are needed.
In the UK a red brick university is one founded in the late 19th or early 20th century. The term is used to refer to such institutions collectively to distinguish them from the older Oxbridge institutions, and refers to the use of bricks, as opposed to stone, in their buildings.
Colombian architect Rogelio Salmona was noted for his extensive use of red bricks in his buildings and for using natural shapes like spirals, radial geometry and curves in his designs.
Limitations
Starting in the 20th century, the use of brickwork declined in some areas due to concerns about earthquakes. Earthquakes such as the San Francisco earthquake of 1906 and the 1933 Long Beach earthquake revealed the weaknesses of unreinforced brick masonry in earthquake-prone areas. During seismic events, the mortar cracks and crumbles, so that the bricks are no longer held together. Brick masonry with steel reinforcement, which helps hold the masonry together during earthquakes, has been used to replace unreinforced bricks in many buildings. Retrofitting older unreinforced masonry structures has been mandated in many jurisdictions. However, similar to steel corrosion in reinforced concrete, rebar rusting will compromise the structural integrity of reinforced brick and ultimately limit the expected lifetime, so there is a trade-off between earthquake safety and longevity to a certain extent.
Accessibility
The United States Access Board does not specify which materials a sidewalk must be made of in order to be ADA compliant, but states that sidewalks must not have surface variances of greater than one inch. Due to the accessibility challenges of bricks, the Federal Highway Administration recommends against the use of bricks as well as cobblestones in its accessibility guide for sidewalks and crosswalks. The Brick Industry Association maintains standards for making brick more accessible for disabled people, with proper and regular maintenance being necessary to keep brick accessible.
Some US jurisdictions, such as San Francisco, have taken steps to remove brick sidewalks from certain areas such as Market Street in order to improve accessibility.
Gallery
See also
References
Further reading
Hudson, Kenneth (1972) Building Materials; chap. 3: Bricks and tiles. London: Longman; pp. 28–42
External links
Brick in 20th-Century Architecture
Brick Industry Association United States
Brick Development Association UK
Think Brick Australia
International Brick Collectors Association
Building materials
Masonry
Soil-based building materials | Brick | [
"Physics",
"Engineering"
] | 6,676 | [
"Masonry",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
4,542 | https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation | Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread.
Bra–ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "bracket".
Quantum mechanics
In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar, |, to construct "bras" and "kets".
A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system.
A bra is of the form ⟨f|. Mathematically it denotes a linear form f : V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩.
Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ ≡ |φ⟩ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between these notations is then (φ, ψ) ≡ ⟨φ|ψ⟩. The linear form ⟨φ| is a covector to |φ⟩, and the set of all covectors forms a subspace of the dual vector space V∗ to the initial vector space V. The purpose of this linear form ⟨φ| can now be understood in terms of making projections onto the state |φ⟩ to find how linearly dependent two states are, etc.
For the vector space ℂⁿ, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplication. If ℂⁿ has the standard Hermitian inner product (v, w) = v†w, under this identification, the identification of kets and bras and vice versa provided by the inner product is taking the Hermitian conjugate (denoted †).
It is common to suppress the vector or linear form from the bra–ket notation and only use a label inside the typography for the bra or ket. For example, the spin operator σ̂z on a two-dimensional space Δ of spinors has eigenvalues ±½ with eigenspinors ψ₊ and ψ₋. In bra–ket notation, this is typically denoted as ψ₊ = |+⟩ and ψ₋ = |−⟩. As above, kets and bras with the same label are interpreted as kets and bras corresponding to each other using the inner product. In particular, when also identified with row and column vectors, kets and bras with the same label are identified with Hermitian conjugate column and row vectors.
Bra–ket notation was effectively established in 1939 by Paul Dirac; it is thus also known as Dirac notation, despite the notation having a precursor in Hermann Grassmann's use of [φ∣ψ] for inner products nearly 100 years earlier.
Vector spaces
Vectors vs kets
In mathematics, the term "vector" is used for an element of any vector space. In physics, however, the term "vector" tends to refer almost exclusively to quantities like displacement or velocity, which have components that relate directly to the three dimensions of space, or relativistically, to the four of spacetime. Such vectors are typically denoted with over arrows (), boldface () or indices ().
In quantum mechanics, a quantum state is typically represented as an element of a complex Hilbert space, for example, the infinite-dimensional vector space of all possible wavefunctions (square integrable functions mapping each point of 3D space to a complex number) or some more abstract Hilbert space constructed more algebraically. To distinguish this type of vector from those described above, it is common and useful in physics to denote an element of an abstract complex vector space as a ket |A⟩, to refer to it as a "ket" rather than as a vector, and to pronounce it "ket-A" for |A⟩.
Symbols, letters, numbers, or even words—whatever serves as a convenient label—can be used as the label inside a ket, with the | ⟩ making clear that the label indicates a vector in vector space. In other words, the symbol "|A⟩" has a recognizable mathematical meaning as to the kind of variable being represented, while just the "A" by itself does not. For example, |1⟩ + |2⟩ is not necessarily equal to |3⟩. Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics through a listing of their quantum numbers. At its simplest, the label inside the ket is the eigenvalue of a physical operator, such as x̂, p̂, L̂z, etc.
Notation
Since kets are just vectors in a Hermitian vector space, they can be manipulated using the usual rules of linear algebra. For example:
Note how the last line above involves infinitely many different kets, one for each real number x.
Since the ket is an element of a vector space, a bra is an element of its dual space, i.e. a bra is a linear functional which is a linear map from the vector space to the complex numbers. Thus, it is useful to think of kets and bras as being elements of different vector spaces (see below however) with both being different useful concepts.
A bra ⟨φ| and a ket |ψ⟩ (i.e. a functional and a vector) can be combined to an operator |ψ⟩⟨φ| of rank one with the outer product |ψ⟩⟨φ| : |ξ⟩ ↦ ⟨φ|ξ⟩ |ψ⟩.
Inner product and bra–ket identification on Hilbert space
The bra–ket notation is particularly useful in Hilbert spaces which have an inner product that allows Hermitian conjugation and identifying a vector with a continuous linear functional, i.e. a ket with a bra, and vice versa (see Riesz representation theorem). The inner product (·,·) on a Hilbert space (with the first argument antilinear as preferred by physicists) is fully equivalent to an (anti-linear) identification between the space of kets and that of bras in the bra–ket notation: for a vector ket |φ⟩, define the functional (i.e. bra) ⟨φ| by ⟨φ|ψ⟩ = (φ, ψ).
Bras and kets as row and column vectors
In the simple case where we consider the vector space ℂⁿ, a ket can be identified with a column vector, and a bra as a row vector. If, moreover, we use the standard Hermitian inner product on ℂⁿ, the bra corresponding to a ket, in particular a bra and a ket with the same label, are conjugate transposes of each other. Moreover, conventions are set up in such a way that writing bras, kets, and linear operators next to each other simply implies matrix multiplication. In particular the outer product of a column-vector ket and a row-vector bra can be identified with matrix multiplication (column vector times row vector equals matrix).
For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector: ⟨A|B⟩ ≐ A₁*B₁ + A₂*B₂ + ⋯ + Aₙ*Bₙ. Based on this, the bras and kets can be defined as ⟨A| ≐ (A₁* A₂* ⋯ Aₙ*), a row vector, and |B⟩ ≐ (B₁, B₂, …, Bₙ)ᵀ, a column vector, and then it is understood that a bra next to a ket implies matrix multiplication.
The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice versa: ⟨A|† = |A⟩ and |A⟩† = ⟨A|, because if one starts with the bra (A₁* A₂* ⋯ Aₙ*), then performs a complex conjugation, and then a matrix transpose, one ends up with the ket (A₁, A₂, …, Aₙ)ᵀ.
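A minimal numerical sketch of this identification, using NumPy (not part of the original article; the example vectors are arbitrary):

```python
import numpy as np

# Ket |A> as a column vector, bra <A| as its conjugate transpose (a row vector).
ket_A = np.array([[1.0 + 2.0j], [3.0 - 1.0j]])   # example components, chosen arbitrarily
bra_A = ket_A.conj().T

ket_B = np.array([[0.5j], [2.0]])

# A bra next to a ket is matrix multiplication: the inner product <A|B>.
inner = (bra_A @ ket_B).item()
print(inner)

# Taking the Hermitian conjugate of the bra recovers the ket, and vice versa.
print(np.allclose(bra_A.conj().T, ket_A))   # True
```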
Writing elements of a finite-dimensional (or, mutatis mutandis, countably infinite) vector space as a column vector of numbers requires picking a basis. Picking a basis is not always helpful because quantum mechanics calculations involve frequently switching between different bases (e.g. position basis, momentum basis, energy eigenbasis), and one can write something like "|ψ⟩" without committing to any particular basis. In situations involving two different important basis vectors, the basis vectors can be taken in the notation explicitly and here will be referred to simply as "|−⟩" and "|+⟩".
Non-normalizable states and non-Hilbert spaces
Bra–ket notation can be used even if the vector space is not a Hilbert space.
In quantum mechanics, it is common practice to write down kets which have infinite norm, i.e. non-normalizable wavefunctions. Examples include states whose wavefunctions are Dirac delta functions or infinite plane waves. These do not, technically, belong to the Hilbert space itself. However, the definition of "Hilbert space" can be broadened to accommodate these states (see the Gelfand–Naimark–Segal construction or rigged Hilbert spaces). The bra–ket notation continues to work in an analogous way in this broader context.
Banach spaces are a different generalization of Hilbert spaces. In a Banach space , the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply.
Usage in quantum mechanics
The mathematical structure of quantum mechanics is based in large part on linear algebra:
Wave functions and other quantum states can be represented as vectors in a complex Hilbert space. (The exact structure of this Hilbert space depends on the situation.) In bra–ket notation, for example, an electron might be in the "state" |ψ⟩. (Technically, the quantum states are rays of vectors in the Hilbert space, as c|ψ⟩ corresponds to the same state for any nonzero complex number c.)
Quantum superpositions can be described as vector sums of the constituent states. For example, an electron in the state is in a quantum superposition of the states and .
Measurements are associated with linear operators (called observables) on the Hilbert space of quantum states.
Dynamics are also described by linear operators on the Hilbert space. For example, in the Schrödinger picture, there is a linear time evolution operator U with the property that if an electron is in state |ψ⟩ right now, at a later time it will be in the state U|ψ⟩, the same U for every possible |ψ⟩.
Wave function normalization is scaling a wave function so that its norm is 1.
Since virtually every calculation in quantum mechanics involves vectors and linear operators, it can involve, and often does involve, bra–ket notation. A few examples follow:
Spinless position–space wave function
The Hilbert space of a spin-0 point particle is spanned by a "position basis" { |r⟩ }, where the label r extends over the set of all points in position space. This label is the eigenvalue of the position operator acting on such a basis state, r̂|r⟩ = r|r⟩. Since there are an uncountably infinite number of vector components in the basis, this is an uncountably infinite-dimensional Hilbert space. The dimensions of the Hilbert space (usually infinite) and position space (usually 1, 2 or 3) are not to be conflated.
Starting from any ket |Ψ⟩ in this Hilbert space, one may define a complex scalar function of r, known as a wavefunction, Ψ(r) ≐ ⟨r|Ψ⟩. On the left-hand side, Ψ(r) is a function mapping any point in space to a complex number; on the right-hand side, |Ψ⟩ = ∫ d³r Ψ(r) |r⟩ is a ket consisting of a superposition of kets with relative coefficients specified by that function.
It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by A Ψ(r) ≐ ⟨r|A|Ψ⟩. For instance, the momentum operator p̂ has the following coordinate representation, p̂ Ψ(r) ≐ ⟨r|p̂|Ψ⟩ = −iħ∇Ψ(r).
One occasionally even encounters an expression such as ∇|Ψ⟩, though this is something of an abuse of notation. The differential operator must be understood to be an abstract operator, acting on kets, that has the effect of differentiating wavefunctions once the expression is projected onto the position basis, ∇⟨r|Ψ⟩, even though, in the momentum basis, this operator amounts to a mere multiplication operator (by ip/ħ).
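For reference, the standard formulas alluded to in this passage can be sketched in conventional notation (ħ is the reduced Planck constant; signs and normalisations follow common textbook conventions rather than any particular source, and Ψ̃(p) ≐ ⟨p|Ψ⟩ denotes the momentum-space wavefunction):

```latex
\psi(\mathbf{r}) \equiv \langle \mathbf{r} | \Psi \rangle, \qquad
|\Psi\rangle = \int d^3\mathbf{r}\; \psi(\mathbf{r})\, |\mathbf{r}\rangle, \qquad
\langle \mathbf{r} | \hat{\mathbf{p}} | \Psi \rangle = -i\hbar \nabla \psi(\mathbf{r}), \qquad
\langle \mathbf{p} | \hat{\mathbf{p}} | \Psi \rangle = \mathbf{p}\, \tilde{\psi}(\mathbf{p}).
```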
Overlap of states
In quantum mechanics the expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. Mathematically, this means the coefficient for the projection of ψ onto φ. It is also described as the projection of state ψ onto state φ.
Changing basis for a spin-1/2 particle
A stationary spin-½ particle has a two-dimensional Hilbert space. One orthonormal basis is |↑z⟩, |↓z⟩, where |↑z⟩ is the state with a definite value of the spin operator Sz equal to +½ and |↓z⟩ is the state with a definite value of the spin operator Sz equal to −½.
Since these are a basis, any quantum state of the particle can be expressed as a linear combination (i.e., quantum superposition) of these two states: |ψ⟩ = aψ |↑z⟩ + bψ |↓z⟩, where aψ and bψ are complex numbers.
A different basis for the same Hilbert space is |↑x⟩, |↓x⟩, defined in terms of Sx rather than Sz.
Again, any state of the particle can be expressed as a linear combination of these two: |ψ⟩ = cψ |↑x⟩ + dψ |↓x⟩. In vector form, you might write |ψ⟩ ≐ (aψ, bψ)ᵀ or |ψ⟩ ≐ (cψ, dψ)ᵀ, depending on which basis you are using. In other words, the "coordinates" of a vector depend on the basis used. There is a mathematical relationship between aψ, bψ, cψ and dψ; see change of basis.
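A minimal numerical sketch of such a change of basis, using NumPy (not part of the original article; the relation between the two bases follows the standard Sz/Sx conventions, and the example coefficients are arbitrary):

```python
import numpy as np

# Work in the S_z basis: |up_z> = (1, 0), |down_z> = (0, 1).
up_z = np.array([1.0, 0.0], dtype=complex)
down_z = np.array([0.0, 1.0], dtype=complex)

# Standard S_x eigenstates expressed in the S_z basis.
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# An arbitrary example state, written in the S_z basis.
a, b = 0.6, 0.8j
psi = a * up_z + b * down_z

# Its coordinates in the S_x basis are the overlaps <up_x|psi> and <down_x|psi>.
c = np.vdot(up_x, psi)   # np.vdot conjugates its first argument, like a bra
d = np.vdot(down_x, psi)
print(c, d)

# The same ket, reconstructed from the new coordinates.
print(np.allclose(c * up_x + d * down_x, psi))   # True
```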
Pitfalls and ambiguous uses
There are some conventions and uses of notation that may be confusing or ambiguous for the non-initiated or early student.
Separation of inner product and vectors
A cause for confusion is that the notation does not separate the inner-product operation from the notation for a (bra) vector. If a (dual space) bra-vector is constructed as a linear combination of other bra-vectors (for instance when expressing it in some basis) the notation creates some ambiguity and hides mathematical details. We can compare bra–ket notation to using bold for vectors, such as , and for the inner product. Consider the following dual space bra-vector in the basis :
It has to be determined by convention if the complex numbers are inside or outside of the inner product, and each convention gives different results.
Reuse of symbols
It is common to use the same symbol for labels and constants. For example, Â|a⟩ = a|a⟩, where the symbol a is used simultaneously as the name of the operator Â, its eigenvector |a⟩ and the associated eigenvalue a. Sometimes the hat is also dropped for operators, and one can see notation such as A|a⟩ = a|a⟩.
Hermitian conjugate of kets
It is common to see the usage |ψ⟩† = ⟨ψ|, where the dagger (†) corresponds to the Hermitian conjugate. This is however not correct in a technical sense, since the ket, |ψ⟩, represents a vector in a complex Hilbert space H, and the bra, ⟨ψ|, is a linear functional on vectors in H. In other words, |ψ⟩ is just a vector, while ⟨ψ| is the combination of a vector and an inner product.
Operations inside bras and kets
This is done for a fast notation of scaling vectors. For instance, if the vector is scaled by , it may be denoted . This can be ambiguous since is simply a label for a state, and not a mathematical object on which operations can be performed. This usage is more common when denoting vectors as tensor products, where part of the labels are moved outside the designed slot, e.g. .
Linear operators
Linear operators acting on kets
A linear operator is a map that inputs a ket and outputs a ket. (In order to be called "linear", it is required to have certain properties.) In other words, if A is a linear operator and |ψ⟩ is a ket-vector, then A|ψ⟩ is another ket-vector.
In an N-dimensional Hilbert space, we can impose a basis on the space and represent |ψ⟩ in terms of its coordinates as an N × 1 column vector. Using the same basis for A, it is represented by an N × N complex matrix. The ket-vector A|ψ⟩ can now be computed by matrix multiplication.
Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time.
Linear operators acting on bras
Operators can also be viewed as acting on bras from the right hand side. Specifically, if A is a linear operator and ⟨φ| is a bra, then ⟨φ|A is another bra defined by the rule (⟨φ|A)|ψ⟩ = ⟨φ|(A|ψ⟩) (in other words, a function composition). This expression is commonly written as ⟨φ|A|ψ⟩ (cf. energy inner product).
In an N-dimensional Hilbert space, ⟨φ| can be written as a 1 × N row vector, and A (as in the previous section) is an N × N matrix. Then the bra ⟨φ|A can be computed by normal matrix multiplication.
If the same state vector appears on both bra and ket side, ⟨ψ|A|ψ⟩, then this expression gives the expectation value, or mean or average value, of the observable represented by the operator A for the physical system in the state |ψ⟩.
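A minimal sketch of an expectation value computed this way with NumPy (not part of the original article; the operator and state are arbitrary examples, with the operator chosen Hermitian so the result is real):

```python
import numpy as np

# An example Hermitian operator (observable) and a normalised example state.
A = np.array([[1.0, 0.5j],
              [-0.5j, 2.0]])           # A equals its own conjugate transpose
psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)

# <psi| A |psi> : bra (row vector) times matrix times ket (column vector).
expectation = (psi.conj().T @ A @ psi).item()
print(expectation.real)   # imaginary part is zero up to rounding
```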
Outer products
A convenient way to define linear operators on a Hilbert space is given by the outer product: if ⟨φ| is a bra and |ψ⟩ is a ket, the outer product |ψ⟩⟨φ| denotes the rank-one operator with the rule (|ψ⟩⟨φ|)(x) = ⟨φ|x⟩ |ψ⟩.
For a finite-dimensional vector space, the outer product can be understood as simple matrix multiplication: a column vector times a row vector gives an N × N matrix, as expected for a linear operator.
One of the uses of the outer product is to construct projection operators. Given a ket |ψ⟩ of norm 1, the orthogonal projection onto the subspace spanned by |ψ⟩ is P = |ψ⟩⟨ψ|. This is an idempotent in the algebra of observables that acts on the Hilbert space.
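A minimal sketch of a projection operator built from an outer product, using NumPy (not part of the original article; the state is an arbitrary normalised example):

```python
import numpy as np

# A normalised example ket |psi>.
psi = np.array([[1.0], [2.0j], [0.0]]) / np.sqrt(5)

# Orthogonal projector onto the span of |psi>: P = |psi><psi| (an outer product).
P = psi @ psi.conj().T

print(P.shape)                          # (3, 3): a rank-one 3x3 matrix
print(np.allclose(P @ P, P))            # True: P is idempotent
print(np.allclose(P.conj().T, P))       # True: P is Hermitian
print(np.allclose(P @ psi, psi))        # True: P leaves |psi> unchanged
```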
Hermitian conjugate operator
Just as kets and bras can be transformed into each other (making |ψ⟩ into ⟨ψ|), the element from the dual space corresponding to A|ψ⟩ is ⟨ψ|A†, where A† denotes the Hermitian conjugate (or adjoint) of the operator A. In other words, |φ⟩ = A|ψ⟩ if and only if ⟨φ| = ⟨ψ|A†.
If A is expressed as an N × N matrix, then A† is its conjugate transpose.
Properties
Bra–ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. In what follows, c₁ and c₂ denote arbitrary complex numbers, c* denotes the complex conjugate of c, A and B denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets.
Linearity
Since bras are linear functionals,
⟨φ| (c₁|ψ₁⟩ + c₂|ψ₂⟩) = c₁⟨φ|ψ₁⟩ + c₂⟨φ|ψ₂⟩.
By the definition of addition and scalar multiplication of linear functionals in the dual space,
(c₁⟨φ₁| + c₂⟨φ₂|) |ψ⟩ = c₁⟨φ₁|ψ⟩ + c₂⟨φ₂|ψ⟩.
Associativity
Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra–ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example:
⟨φ| (A|ψ⟩) = (⟨φ|A) |ψ⟩ = ⟨φ|A|ψ⟩
(A|ψ⟩) ⟨φ| = A (|ψ⟩⟨φ|) = A|ψ⟩⟨φ|
and so forth. The expressions on the right (with no parentheses whatsoever) are allowed to be written unambiguously because of the equalities on the left. Note that the associative property does not hold for expressions that include nonlinear operators, such as the antilinear time reversal operator in physics.
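A numerical spot-check of this associativity, grouping the same product in two different ways, might look like the following sketch (random values, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
psi = rng.normal(size=(N, 1)) + 1j * rng.normal(size=(N, 1))
phi = rng.normal(size=(N, 1)) + 1j * rng.normal(size=(N, 1))

grouping_1 = ((phi.conj().T @ A) @ psi).item()   # (<phi|A) |psi>
grouping_2 = (phi.conj().T @ (A @ psi)).item()   # <phi| (A|psi>)
print(np.isclose(grouping_1, grouping_2))        # True: both equal <phi|A|psi>
```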
Hermitian conjugation
Bra–ket notation makes it particularly easy to compute the Hermitian conjugate (also called dagger, and denoted †) of expressions. The formal rules are:
The Hermitian conjugate of a bra is the corresponding ket, and vice versa.
The Hermitian conjugate of a complex number is its complex conjugate.
The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself—i.e., (x†)† = x.
Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra–ket notation, its Hermitian conjugate can be computed by reversing the order of the components, and taking the Hermitian conjugate of each.
These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows:
Kets: (c₁|ψ₁⟩ + c₂|ψ₂⟩)† = c₁*⟨ψ₁| + c₂*⟨ψ₂|
Inner products: ⟨φ|ψ⟩* = ⟨ψ|φ⟩. Note that ⟨φ|ψ⟩ is a scalar, so the Hermitian conjugate is just the complex conjugate, i.e., (⟨φ|ψ⟩)† = ⟨φ|ψ⟩*.
Matrix elements: ⟨φ|A|ψ⟩* = ⟨ψ|A†|φ⟩ and ⟨φ|A†B†|ψ⟩* = ⟨ψ|BA|φ⟩
Outer products: ((c₁|φ₁⟩⟨ψ₁|) + (c₂|φ₂⟩⟨ψ₂|))† = c₁*|ψ₁⟩⟨φ₁| + c₂*|ψ₂⟩⟨φ₂|
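These rules can be spot-checked numerically; for instance, the complex conjugate of a matrix element reverses the bra and ket and daggers the operator. A minimal sketch assuming NumPy, with random values used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
psi = rng.normal(size=(N, 1)) + 1j * rng.normal(size=(N, 1))
phi = rng.normal(size=(N, 1)) + 1j * rng.normal(size=(N, 1))

dag = lambda M: M.conj().T    # Hermitian conjugate = conjugate transpose

lhs = (dag(phi) @ A @ psi).item().conjugate()   # (<phi|A|psi>)*
rhs = (dag(psi) @ dag(A) @ phi).item()          # <psi|A-dagger|phi>
print(np.isclose(lhs, rhs))                     # True
```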
Composite bras and kets
Two Hilbert spaces V and W may form a third space V ⊗ W by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in V and W respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.)
If |ψ⟩ is a ket in V and |φ⟩ is a ket in W, the tensor product of the two kets is a ket in V ⊗ W. This is written in various notations:
|ψ⟩|φ⟩ , |ψ⟩ ⊗ |φ⟩ , |ψ φ⟩ , |ψ, φ⟩.
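In coordinates, the tensor product of two kets is the Kronecker product of their column vectors; a small sketch (the states are arbitrary illustrative values):

```python
import numpy as np

psi = np.array([[1.0], [0.0]])            # a ket in a 2-dimensional space V
phi = np.array([[0.6], [0.8j]])           # a ket in a 2-dimensional space W

# The composite ket |psi> tensor |phi> lives in the 4-dimensional space V tensor W.
composite = np.kron(psi, phi)
print(composite.ravel())                  # [0.6, 0.8j, 0, 0]
```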
See quantum entanglement and the EPR paradox for applications of this product.
The unit operator
Consider a complete orthonormal system (basis),
{ eᵢ | i ∈ ℕ },
for a Hilbert space ℋ, with respect to the norm from an inner product ⟨·,·⟩.
From basic functional analysis, it is known that any ket |ψ⟩ can also be written as
|ψ⟩ = Σᵢ ⟨eᵢ|ψ⟩ |eᵢ⟩,
with ⟨·|·⟩ the inner product on the Hilbert space.
From the commutativity of kets with (complex) scalars, it follows that
Σᵢ |eᵢ⟩⟨eᵢ|
must be the identity operator, which sends each vector to itself.
This, then, can be inserted in any expression without affecting its value; for example
⟨v|w⟩ = ⟨v| ( Σᵢ |eᵢ⟩⟨eᵢ| ) |w⟩ = ⟨v| ( Σᵢ |eᵢ⟩⟨eᵢ| ) ( Σⱼ |eⱼ⟩⟨eⱼ| ) |w⟩ = ⟨v|eᵢ⟩⟨eᵢ|eⱼ⟩⟨eⱼ|w⟩,
where, in the last line, the Einstein summation convention has been used to avoid clutter.
In quantum mechanics, it often occurs that little or no information about the inner product ⟨ψ|φ⟩ of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients ⟨eᵢ|ψ⟩ and ⟨eᵢ|φ⟩ of those vectors with respect to a specific (orthonormalized) basis. In this case, it is particularly useful to insert the unit operator into the bracket one time or more.
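The completeness relation is easy to verify numerically for a finite-dimensional space: summing the outer products |eᵢ⟩⟨eᵢ| over any orthonormal basis reproduces the identity matrix. A small sketch assuming NumPy; the basis here comes from a QR decomposition of a random matrix, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
# Build a random orthonormal basis {|e_i>} as the columns of a unitary matrix Q.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))

# Sum of outer products |e_i><e_i| over the whole basis.
identity = sum(Q[:, [i]] @ Q[:, [i]].conj().T for i in range(N))
print(np.allclose(identity, np.eye(N)))   # True: the sum is the identity operator
```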
For more information, see Resolution of the identity,
1 = ∫ dx |x⟩⟨x| = ∫ dp |p⟩⟨p|,
where
|p⟩ = ∫ dx exp(ixp/ℏ) |x⟩ / √(2πℏ).
Since ⟨x′|x⟩ = δ(x − x′), plane waves follow,
⟨x|p⟩ = exp(ixp/ℏ) / √(2πℏ).
In his book (1958), Ch. III.20, Dirac defines the standard ket which, up to a normalization, is the translationally invariant momentum eigenstate |ϖ⟩ = lim(p→0) |p⟩ in the momentum representation, i.e., p̂|ϖ⟩ = 0. Consequently, the corresponding wavefunction is a constant, ⟨x|ϖ⟩√(2πℏ) = 1, and
|x⟩ = δ(x̂ − x) |ϖ⟩ √(2πℏ),
as well as
|p⟩ = exp(ix̂p/ℏ) |ϖ⟩.
Typically, when all matrix elements of an operator, such as ⟨x|A|x′⟩, are available, this resolution serves to reconstitute the full operator,
A = ∫ dx dx′ |x⟩ ⟨x|A|x′⟩ ⟨x′|.
Notation used by mathematicians
The object physicists are considering when using bra–ket notation is a Hilbert space (a complete inner product space).
Let ℋ be a Hilbert space and h ∈ ℋ a vector in ℋ. What physicists would denote by |h⟩ is the vector itself. That is,
|h⟩ ∈ ℋ.
Let ℋ* be the dual space of ℋ. This is the space of linear functionals on ℋ. The embedding Φ : ℋ ↪ ℋ* is defined by Φ(h) = φ_h, where for every h ∈ ℋ the linear functional φ_h : ℋ → ℂ satisfies for every g ∈ ℋ the functional equation φ_h(g) = (h, g) = ⟨h|g⟩.
Notational confusion arises when identifying φ_h and g with ⟨h| and |g⟩ respectively. This is because of literal symbolic substitutions. Let φ_h = H = ⟨h| and let g = G = |g⟩. This gives
φ_h(g) = H(g) = H(G) = ⟨h|(|g⟩).
One ignores the parentheses and removes the double bars.
Moreover, mathematicians usually write the dual entity not at the first place, as the physicists do, but at the second one, and they usually use not an asterisk but an overline (which the physicists reserve for averages and the Dirac spinor adjoint) to denote complex conjugate numbers; i.e., for scalar products mathematicians usually write
(φ, ψ) = ∫ φ(x) · ψ̄(x) dx,
whereas physicists would write for the same quantity
⟨ψ|φ⟩ = ∫ dx ψ*(x) φ(x).
See also
Angular momentum diagrams (quantum mechanics)
N-slit interferometric equation
Quantum state
Inner product space
Notes
References
Dirac, P. A. M. (1939). "A new notation for quantum mechanics". Mathematical Proceedings of the Cambridge Philosophical Society, 35(3): 416–418. Also see his standard text, The Principles of Quantum Mechanics, IV edition, Clarendon Press (1958),
External links
Richard Fitzpatrick, "Quantum Mechanics: A graduate level course", The University of Texas at Austin. Includes:
1. Ket space
2. Bra space
3. Operators
4. The outer product
5. Eigenvalues and eigenvectors
Robert Littlejohn, Lecture notes on "The Mathematical Formalism of Quantum mechanics", including bra–ket notation. University of California, Berkeley.
Information theory
Quantum information science
Linear algebra
Mathematical notation
Paul Dirac
Linear functionals | Bra–ket notation | [
"Mathematics",
"Technology",
"Engineering"
] | 4,945 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory",
"nan",
"Linear algebra",
"Algebra"
] |
4,543 | https://en.wikipedia.org/wiki/Blue | Blue is one of the three primary colours in the RYB colour model (traditional colour theory), as well as in the RGB (additive) colour model. It lies between violet and cyan on the spectrum of visible light. The term blue generally describes colours perceived by humans observing light with a dominant wavelength that's between approximately 450 and 495 nanometres. Most blues contain a slight mixture of other colours; azure contains some green, while ultramarine contains some violet. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering. An optical effect called the Tyndall effect explains blue eyes. Distant objects appear more blue because of another optical effect called aerial perspective.
Blue has been an important colour in art and decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and later, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America. In the 19th century, synthetic blue dyes and pigments gradually replaced organic dyes and mineral pigments. Dark blue became a common colour for military uniforms and later, in the late 20th century, for business suits. Because blue has commonly been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union.
In the United States and Europe, blue is the colour that both men and women are most likely to choose as their favourite, with at least one recent survey showing the same across several other countries, including China, Malaysia, and Indonesia. Past surveys in the US and Europe have found that blue is the colour most commonly associated with harmony, confidence, masculinity, knowledge, intelligence, calmness, distance, infinity, the imagination, cold, and sadness.
Etymology and linguistics
The modern English word blue comes from Middle English bleu or blewe, from the Old French bleu, a word of Germanic origin, related to the Old High German word blao (meaning 'shimmering, lustrous'). In heraldry, the word azure is used for blue.
In Russian, Spanish, Mongolian, Irish, and some other languages, there is no single word for blue, but rather different words for light blue (for example, Russian goluboy) and dark blue (Russian siniy) (see Colour term).
Several languages, including Japanese and Lakota Sioux, use the same word to describe blue and green. For example, in Vietnamese, the colour of both tree leaves and the sky is xanh. In Japanese, the word for blue (ao) is often used for colours that English speakers would refer to as green, such as the colour of a traffic signal meaning "go". In Lakota, the word tȟó is used for both blue and green, the two colours not being distinguished in older Lakota (for more on this subject, see Blue–green distinction in language).
Linguistic research indicates that languages do not begin by having a word for the colour blue. Colour names often developed individually in natural languages, typically beginning with black and white (or dark and light), and then adding red, and only much later – usually as the last main category of colour accepted in a language – adding the colour blue, probably when blue pigments could be manufactured reliably in the culture using that language.
Optics and colour theory
The term blue generally describes colours perceived by humans observing light with a dominant wavelength between approximately 450 and 495 nanometres. Blues with a higher frequency and thus a shorter wavelength gradually look more violet, while those with a lower frequency and a longer wavelength gradually appear more green. Purer blues are in the middle of this range, e.g., around 470 nanometres.
Isaac Newton included blue as one of the seven colours in his first description of the visible spectrum. He chose seven colours because that was the number of notes in the musical scale, which he believed was related to the optical spectrum. He included indigo, the hue between blue and violet, as one of the separate colours, though today it is usually considered a hue of blue.
In painting and traditional colour theory, blue is one of the three primary colours of pigments (red, yellow, blue), which can be mixed to form a wide gamut of colours. Red and blue mixed together form violet, blue and yellow together form green. Mixing all three primary colours together produces a dark brown. From the Renaissance onward, painters used this system to create their colours (see RYB colour model).
The RYB model was used for colour printing by Jacob Christoph Le Blon as early as 1725. Later, printers discovered that more accurate colours could be created by using combinations of cyan, magenta, yellow, and black ink, put onto separate inked plates and then overlaid one at a time onto paper. This method could produce almost all the colours in the spectrum with reasonable accuracy.
On the HSV colour wheel, the complement of blue is yellow; that is, a colour corresponding to an equal mixture of red and green light. On a colour wheel based on traditional colour theory (RYB) where blue was considered a primary colour, its complementary colour is considered to be orange (based on the Munsell colour wheel).
LED
In 1993, high-brightness blue LEDs were demonstrated by Shuji Nakamura of Nichia Corporation. In parallel, Isamu Akasaki and Hiroshi Amano of Nagoya University were working on a new development which revolutionized LED lighting.
Nakamura was awarded the 2006 Millennium Technology Prize for his invention.
Nakamura, Hiroshi Amano and Isamu Akasaki were awarded the Nobel Prize in Physics in 2014 for the invention of an efficient blue LED.
Lasers
Lasers emitting in the blue region of the spectrum became widely available to the public in 2010 with the release of inexpensive high-powered 445–447 nm laser diode technology. Previously the blue wavelengths were accessible only through DPSS which are comparatively expensive and inefficient, but still widely used by scientists for applications including optogenetics, Raman spectroscopy, and particle image velocimetry, due to their superior beam quality. Blue gas lasers are also still commonly used for holography, DNA sequencing, optical pumping, among other scientific and medical applications.
Shades and variations
Blue is the colour of light between violet and cyan on the visible spectrum. Hues of blue include indigo and ultramarine, closer to violet; pure blue, without any mixture of other colours; azure, which is a lighter shade of blue, similar to the colour of the sky; cyan, which is midway in the spectrum between blue and green; and the other blue-greens such as turquoise, teal, and aquamarine.
Blue also varies in shade or tint; darker shades of blue contain black or grey, while lighter tints contain white. Darker shades of blue include ultramarine, cobalt blue, navy blue, and Prussian blue; while lighter tints include sky blue, azure, and Egyptian blue (for a more complete list see the List of colours).
As a structural colour
In nature, many blue phenomena arise from structural colouration, the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and exits such films. The geometry then determines that at certain angles, the light reflected from both surfaces interferes constructively, while at other angles, the light interferes destructively. Diverse colours therefore appear despite the absence of colourants.
Colourants
Artificial blues
Egyptian blue, the first artificial pigment, was produced in the third millennium BC in Ancient Egypt. It is produced by heating pulverized sand, copper, and natron. It was used in tomb paintings and funereal objects to protect the dead in their afterlife. Prior to the 1700s, blue colourants for artwork were mainly based on lapis lazuli and the related mineral ultramarine. A breakthrough occurred in 1709 when German druggist and pigment maker Johann Jacob Diesbach discovered Prussian blue. The new blue arose from experiments involving heating dried blood with iron sulphides and was initially called Berliner Blau. By 1710 it was being used by the French painter Antoine Watteau, and later his successor Nicolas Lancret. It became immensely popular for the manufacture of wallpaper, and in the 19th century was widely used by French impressionist painters. Beginning in the 1820s, Prussian blue was imported into Japan through the port of Nagasaki. It was called bero-ai, or Berlin blue, and it became popular because it did not fade like traditional Japanese blue pigment, ai-gami, made from the dayflower. Prussian blue was used by both Hokusai, in his wave paintings, and Hiroshige.
In 1799 a French chemist, Louis Jacques Thénard, made a synthetic cobalt blue pigment which became immensely popular with painters.
In 1824 the Societé pour l'Encouragement d'Industrie in France offered a prize for the invention of an artificial ultramarine which could rival the natural colour made from lapis lazuli. The prize was won in 1826 by a chemist named Jean Baptiste Guimet, but he refused to reveal the formula of his colour. In 1828, another scientist, Christian Gmelin, then a professor of chemistry in Tübingen, found the process and published his formula. This was the beginning of a new industry to manufacture artificial ultramarine, which eventually almost completely replaced the natural product.
In 1878 German chemists synthesized indigo. This product rapidly replaced natural indigo, wiping out vast farms growing indigo. It is now the blue of blue jeans. As the pace of organic chemistry accelerated, a succession of synthetic blue dyes were discovered including Indanthrone blue, which had even greater resistance to fading during washing or in the sun, and copper phthalocyanine.
Dyes for textiles and food
Woad and true indigo were once used but since the early 1900s, all indigo is synthetic. Produced on an industrial scale, indigo is the blue of blue jeans. Blue dyes are organic compounds, both synthetic and natural.
For food, the triarylmethane dye Brilliant blue FCF is used for candies. The search continues for stable, natural blue dyes suitable for the food industry.
Various raspberry-flavoured foods are dyed blue. This was done to distinguish strawberry, watermelon and raspberry-flavoured foods. The company ICEE used Blue No. 1 for their blue raspberry ICEEs.
Pigments for painting and glass
Blue pigments were once produced from minerals, especially lapis lazuli and its close relative ultramarine. These minerals were crushed, ground into powder, and then mixed with a quick-drying binding agent, such as egg yolk (tempera painting); or with a slow-drying oil, such as linseed oil, for oil painting. Two inorganic but synthetic blue pigments are cerulean blue (primarily cobalt(II) stannate: Co2SnO4) and Prussian blue (milori blue: primarily Fe4[Fe(CN)6]3). The chromophore in blue glass and glazes is cobalt(II). Diverse cobalt(II) salts such as cobalt carbonate or cobalt(II) aluminate are mixed with the silica prior to firing. The cobalt occupies sites otherwise filled with silicon.
Inks
Methyl blue is the dominant blue pigment in inks used in pens. Blueprinting involves the production of Prussian blue in situ.
Inorganic compounds
Certain metal ions characteristically form blue solutions or blue salts. Of some practical importance, cobalt is used to make the deep blue glazes and glasses. It substitutes for silicon or aluminum ions in these materials. Cobalt is the blue chromophore in stained glass windows, such as those in Gothic cathedrals and in Chinese porcelain beginning in the Tang dynasty. Copper(II) (Cu2+) also produces many blue compounds, including the commercial algicide copper(II) sulfate (CuSO4.5H2O). Similarly, vanadyl salts and solutions are often blue, e.g. vanadyl sulfate.
In nature
Sky and sea
When sunlight passes through the atmosphere, the blue wavelengths are scattered more widely by the oxygen and nitrogen molecules, and more blue light reaches our eyes. This effect is called Rayleigh scattering, after Lord Rayleigh; it was confirmed by Albert Einstein in 1911.
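The article does not quote the formula, but Rayleigh scattering intensity scales roughly as the inverse fourth power of the wavelength, which is why the shorter blue wavelengths dominate the scattered light. A rough back-of-the-envelope comparison (the wavelengths chosen are representative values, not from the article):

```python
# Rayleigh scattering intensity scales roughly as 1 / wavelength**4,
# so shorter (blue) wavelengths are scattered much more strongly than red.
blue_nm = 450.0
red_nm = 650.0
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light (~450 nm) is scattered about {ratio:.1f}x more than red (~650 nm).")
```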
The sea is seen as blue for largely the same reason: the water absorbs the longer wavelengths of red and reflects and scatters the blue, which comes to the eye of the viewer. The deeper the observer goes, the darker the blue becomes. In the open sea, only about 1% of light penetrates to a depth of 200 metres (see underwater and euphotic depth).
The colour of the sea is also affected by the colour of the sky, reflected by particles in the water; and by algae and plant life in the water, which can make it look green; or by sediment, which can make it look brown.
The farther away an object is, the more blue it often appears to the eye. For example, mountains in the distance often appear blue. This is the effect of atmospheric perspective; the farther an object is away from the viewer, the less contrast there is between the object and its background colour, which is usually blue. In a painting where different parts of the composition are blue, green and red, the blue will appear to be more distant, and the red closer to the viewer. The cooler a colour is, the more distant it seems. Blue light is scattered more than other wavelengths by the gases in the atmosphere, hence our "blue planet".
Minerals
Some of the most desirable gems are blue, including sapphire and tanzanite. Compounds of copper(II) are characteristically blue and so are many copper-containing minerals.
Azurite (Cu3(CO3)2(OH)2), with a deep blue colour, was once employed in medieval times, but it is an unstable pigment, losing its colour especially under dry conditions. Lapis lazuli, mined in Afghanistan for more than three thousand years, was used for jewelry and ornaments, and later was crushed and powdered and used as a pigment. The more it was ground, the lighter the blue colour became. Natural ultramarine, made by grinding lapis lazuli into a fine powder, was the finest available blue pigment in the Middle Ages and the Renaissance. It was extremely expensive, and in Italian Renaissance art, it was often reserved for the robes of the Virgin Mary.
Plants and fungi
Intense efforts have focused on blue flowers and the possibility that natural blue colourants could be used as food dyes. Commonly, blue colours in plants are anthocyanins: "the largest group of water-soluble pigments found widespread in the plant kingdom". In the few plants that exploit structural colouration, brilliant colours are produced by structures within cells. The most brilliant blue colouration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils scatters blue light. The fruit of the quandong (Santalum acuminatum) can appear blue owing to the same effect.
Animals
Blue-pigmented animals are relatively rare. Examples of which include butterflies of the genus Nessaea, where blue is created by pterobilin. Other blue pigments of animal origin include phorcabilin, used by other butterflies in Graphium and Papilio (specifically P. phorcas and P. weiskei), and sarpedobilin, which is used by Graphium sarpedon. Blue-pigmented organelles, known as "cyanosomes", exist in the chromatophores of at least two fish species, the mandarin fish and the picturesque dragonet. More commonly, blueness in animals is a structural colouration; an optical interference effect induced by organized nanometre-sized scales or fibres. Examples include the plumage of several birds like the blue jay and indigo bunting, the scales of butterflies like the morpho butterfly, collagen fibres in the skin of some species of monkey and opossum, and the iridophore cells in some fish and frogs.
Eyes
Blue eyes do not actually contain any blue pigment. Eye colour is determined by two factors: the pigmentation of the eye's iris and the scattering of light by the turbid medium in the stroma of the iris. In humans, the pigmentation of the iris varies from light brown to black. The appearance of blue, green, and hazel eyes results from the Tyndall scattering of light in the stroma, an optical effect similar to what accounts for the blueness of the sky. The irises of the eyes of people with blue eyes contain less dark melanin than those of people with brown eyes, which means that they absorb less short-wavelength blue light, which is instead reflected out to the viewer. Eye colour also varies depending on the lighting conditions, especially for lighter-coloured eyes.
Blue eyes are most common in Ireland, the Baltic Sea area and Northern Europe, and are also found in Eastern, Central, and Southern Europe. Blue eyes are also found in parts of Western Asia, most notably in Afghanistan, Syria, Iraq, and Iran. In Estonia, 99% of people have blue eyes. In Denmark in 1978, only 8% of the population had brown eyes, though through immigration, today that number is about 11%. In Germany, about 75% have blue eyes.
In the United States, as of 2006, 1 out of every 6 people, or 16.6% of the total population, and 22.3% of the white population, have blue eyes, compared with about half of Americans born in 1900, and a third of Americans born in 1950. Blue eyes are becoming less common among American children. In the US, males are 3–5% more likely to have blue eyes than females.
History
In the ancient world
As early as the 7th millennium BC, lapis lazuli was mined in the Sar-i Sang mines, in Shortugai, and in other mines in Badakhshan province in northeast Afghanistan.
Lapis lazuli artifacts, dated to 7570 BC, have been found at Bhirrana, which is the oldest site of Indus Valley civilisation. Lapis was highly valued by the Indus Valley Civilisation (7570–1900 BC). Lapis beads have been found at Neolithic burials in Mehrgarh, the Caucasus, and as far away as Mauritania. It was used in the funeral mask of Tutankhamun (1341–1323 BC).
A term for Blue was relatively rare in many forms of ancient art and decoration, and even in ancient literature. The Ancient Greek poets described the sea as green, brown or "the colour of wine". The colour is mentioned several times in the Hebrew Bible as 'tekhelet'. Reds, blacks, browns, and ochres are found in cave paintings from the Upper Paleolithic period, but not blue. Blue was also not used for dyeing fabric until long after red, ochre, pink, and purple. This is probably due to the perennial difficulty of making blue dyes and pigments. On the other hand, the rarity of blue pigment made it even more valuable.
The earliest known blue dyes were made from plants – woad in Europe, indigo in Asia and Africa – while blue pigments were made from minerals, usually either lapis lazuli or azurite, and required more effort to produce. Blue glazes posed still another challenge since the early blue dyes and pigments were not thermally robust. The blue glaze Egyptian blue was introduced for ceramics, as well as many other objects. The Greeks imported indigo dye from India, calling it indikon, and they painted with Egyptian blue. Blue was not one of the four primary colours for Greek painting described by Pliny the Elder (red, yellow, black, and white). For the Romans, blue was the colour of mourning, as well as the colour of barbarians. The Celts and Germans reportedly dyed their faces blue to frighten their enemies, and tinted their hair blue when they grew old. The Romans made extensive use of indigo and Egyptian blue pigment, as evidenced, in part, by frescos in Pompeii.
The Romans had many words for varieties of blue, including caeruleus, caesius, glaucus, cyaneus, lividus, venetus, aerius, and ferreus, but two words, both of foreign origin, became the most enduring: blavus, from the Germanic word blau, which eventually became bleu or blue; and azureus, from the Arabic word lazaward, which became azure.
Blue was widely used in the decoration of churches in the Byzantine Empire. By contrast, in the Islamic world, blue was secondary to green, believed to be the favourite colour of the Prophet Mohammed. At certain times in Moorish Spain and other parts of the Islamic world, blue was the colour worn by Christians and Jews, because only Muslims were allowed to wear white and green.
In the Middle Ages
In the art and life of Europe during the early Middle Ages, blue played a minor role. This changed dramatically between 1130 and 1140 in Paris, when the Abbe Suger rebuilt the Saint Denis Basilica. Suger considered that light was the visible manifestation of the Holy Spirit. He installed stained glass windows coloured with cobalt, which, combined with the light from the red glass, filled the church with a bluish violet light. The church became the marvel of the Christian world, and the colour became known as the "bleu de Saint-Denis". In the years that followed even more elegant blue stained glass windows were installed in other churches, including at Chartres Cathedral and Sainte-Chapelle in Paris.
In the 12th century the Roman Catholic Church dictated that painters in Italy (and consequently the rest of Europe) paint the Virgin Mary with blue, which became associated with holiness, humility and virtue. In medieval paintings, blue was used to attract the attention of the viewer to the Virgin Mary. Paintings of the mythical King Arthur began to show him dressed in blue. The coat of arms of the kings of France became an azure or light blue shield, sprinkled with golden fleur-de-lis or lilies. Blue had come from obscurity to become the royal colour.
Renaissance through 18th century
Blue came into wider use beginning in the Renaissance, when artists began to paint the world with perspective, depth, shadows, and light from a single source. In Renaissance paintings, artists tried to create harmonies between blue and red, lightening the blue with lead white paint and adding shadows and highlights. Raphael was a master of this technique, carefully balancing the reds and the blues so no one colour dominated the picture.
Ultramarine was the most prestigious blue of the Renaissance, being more expensive than gold. Wealthy art patrons commissioned works with the most expensive blues possible. In 1616 Richard Sackville commissioned a portrait of himself by Isaac Oliver with three different blues, including ultramarine pigment for his stockings.
An industry for the manufacture of fine blue and white pottery began in the 14th century in Jingdezhen, China, using white Chinese porcelain decorated with patterns of cobalt blue, imported from Persia. It was first made for the family of the Emperor of China, then was exported around the world, with designs for export adapted to European subjects and tastes. The Chinese blue style was also adapted by Dutch craftsmen in Delft and English craftsmen in Staffordshire in the 17th–18th centuries. In the 18th century, blue and white porcelains were produced by Josiah Wedgwood and other British craftsmen.
19th-20th century
The early 19th century saw the ancestor of the modern blue business suit, created by Beau Brummel (1776–1840), who set fashion at the London Court. It also saw the invention of blue jeans, a highly popular form of workers' clothing, invented in 1853 by Jacob W. Davis, who used metal rivets to strengthen blue denim work clothing in the California gold fields. The invention was funded by San Francisco entrepreneur Levi Strauss, and spread around the world.
Recognizing the emotional power of blue, many artists made it the central element of paintings in the 19th and 20th centuries. They included Pablo Picasso, Pavel Kuznetsov and the Blue Rose art group, and Kandinsky and Der Blaue Reiter (The Blue Rider) school. Henri Matisse expressed deep emotions with blue: "A certain blue penetrates your soul." In the second half of the 20th century, painters of the abstract expressionist movement used blues to inspire ideas and emotions. Painter Mark Rothko observed that colour was "only an instrument"; his interest was "in expressing human emotions: tragedy, ecstasy, doom, and so on".
In society and culture
Uniforms
In the 17th century, the Prince-Elector of Brandenburg, Frederick William I of Prussia, chose Prussian blue as the new colour of Prussian military uniforms, because it was made with woad, a local crop, rather than indigo, which was produced by the colonies of Brandenburg's rival, England. It was worn by the German army until World War I, with the exception of the soldiers of Bavaria, who wore sky-blue.
In 1748, the Royal Navy adopted a dark shade of blue for the uniform of officers. It was first known as marine blue, now known as navy blue. The militia organized by George Washington selected blue and buff, the colours of the British Whig Party. Blue continued to be the colour of the field uniform of the US Army until 1902, and is still the colour of the dress uniform.
In the 19th century, police in the United Kingdom, including the Metropolitan Police and the City of London Police also adopted a navy blue uniform. Similar traditions were embraced in France and Austria. It was also adopted at about the same time for the uniforms of the officers of the New York City Police Department.
Gender
Blue is used to represent males. Beginning as a trend in the mid-19th century and applying primarily to clothing, gendered associations with blue became more widespread from the 1950s. The colour became associated with males after the Second World War.
Religion
Blue in Judaism: In the Torah, the Israelites were commanded to put fringes, tzitzit, on the corners of their garments, and to weave within these fringes a "twisted thread of blue (tekhelet)". In ancient days, this blue thread was made from a dye extracted from a Mediterranean snail called the hilazon. Maimonides claimed that this blue was the colour of "the clear noonday sky"; Rashi, the colour of the evening sky. According to several rabbinic sages, blue is the colour of God's Glory. Staring at this colour aids in meditation, bringing us a glimpse of the "pavement of sapphire, like the very sky for purity", which is a likeness of the Throne of God. Many items in the Mishkan, the portable sanctuary in the wilderness, such as the menorah, many of the vessels, and the Ark of the Covenant, were covered with blue cloth when transported from place to place.
Blue in Christianity: Blue is particularly associated with the Virgin Mary. This was the result of a decree of Pope Gregory I (540–601) who ordered that all religious paintings should tell a story which was clearly comprehensible to all viewers, and that figures should be easily recognizable, especially that of the figure of Mary. If she was alone in the image, her costume was usually painted with the finest blue, ultramarine. If she was with Christ, her costume was usually painted with a less expensive pigment, to avoid outshining him.
Blue in Hinduism: Many of the gods are depicted as having blue-coloured skin, particularly those associated with Vishnu, who is said to be the preserver of the world, and thus intimately connected to water. Krishna and Rama, Vishnu's avatars, are usually depicted with blue skin. Shiva, the destroyer deity, is also depicted in a light-blue hue, and is called Nīlakaṇṭha, or blue-throated, for having swallowed poison to save the universe during the Samudra Manthana, the churning of the ocean of milk. Blue is used to symbolically represent the fifth, and the throat, chakra (Vishuddha).
Blue in Sikhism: The Akali Nihang warriors wear all-blue attire. Guru Gobind Singh also had a blue roan horse. The Sikh Rehat Maryada states that the Nishan Sahib hoisted outside every Gurudwara should be xanthic (Basanti in Punjabi) or greyish blue (modern day navy blue) (Surmaaee in Punjabi) in colour.
Blue in Paganism: Blue is associated with peace, truth, wisdom, protection, and patience. It helps with healing, psychic ability, harmony, and understanding.
Sports
In sports, blue is widely represented in uniforms in part because the majority of national teams wear the colours of their national flag. For example, the national men's football team of France are known as Les Bleus (the Blues). Similarly, Argentina, Italy, and Uruguay wear blue shirts. The Asian Football Confederation and the Oceania Football Confederation use blue text on their logos. Blue is well represented in baseball (Blue Jays), basketball, American football, and ice hockey. The Indian national cricket team wears a blue uniform during One Day International matches; as such, the team is also referred to as the "Men in Blue".
Politics
Unlike red or green, blue was not strongly associated with any particular country, religion or political movement. As the colour of harmony, it was chosen as the colour for the flags of the United Nations, the European Union, and NATO. In politics, blue is often used as the colour of conservative parties, contrasting with the red associated with left-wing parties. Some conservative parties that use the colour blue include the Conservative Party (UK), Conservative Party of Canada, Liberal Party of Australia, Liberal Party of Brazil, and Likud of Israel. However, in some countries, blue is not associated with the main conservative party. In the United States, the liberal Democratic Party is associated with blue, while the conservative Republican Party is associated with red. US states which have been won by the Democratic Party in four consecutive presidential elections are termed "blue states", while those that have been won by the Republican Party are termed "red states". South Korea also uses this colour model, with the Democratic Party on the left using blue and the People Power Party on the right using red.
See also
Engineer's blue
Lists of colours
Non-photo blue
Blue pigments
References
Works cited
(page numbers refer to the French translation)
Further reading
External links
"Friday essay: from the Great Wave to Starry Night, how a blue pigment changed the world", By Hugh Davies, theconversation.com
Optical spectrum
Primary colors
Rainbow colors
Secondary colors
Shades of violet
Web colors
Masculinity | Blue | [
"Physics"
] | 6,346 | [
"Optical spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
4,545 | https://en.wikipedia.org/wiki/BDSM | BDSM is a variety of often erotic practices or roleplaying involving bondage, discipline, dominance and submission, sadomasochism, and other related interpersonal dynamics. Given the wide range of practices, some of which may be engaged in by people who do not consider themselves to be practising BDSM, inclusion in the BDSM community or subculture often is said to depend on self-identification and shared experience.
The initialism BDSM is first recorded in a Usenet post from 1991, and is interpreted as a combination of the abbreviations B/D (Bondage and Discipline), D/s (Dominance and submission), and S/M (Sadism and Masochism). BDSM is now used as a catch-all phrase covering a wide range of activities, forms of interpersonal relationships, and distinct subcultures. BDSM communities generally welcome anyone with a non-normative streak who identifies with the community; this may include cross-dressers, body modification enthusiasts, animal roleplayers, rubber fetishists, and others.
Activities and relationships in BDSM are often characterized by the participants' taking on roles that are complementary and involve inequality of power; thus, the idea of informed consent of both the partners is essential. The terms submissive and dominant are often used to distinguish these roles: the dominant partner ("dom") takes psychological control over the submissive ("sub"). The terms top and bottom are also used; the top is the instigator of an action while the bottom is the receiver of the action. The two sets of terms are subtly different: for example, someone may choose to act as bottom to another person, for example, by being whipped, purely recreationally, without any implication of being psychologically dominated, and submissives may be ordered to massage their dominant partners. Although the bottom carries out the action and the top receives it, they have not necessarily switched roles.
The abbreviations sub and dom are frequently used instead of submissive and dominant. Sometimes the female-specific terms mistress, domme, and dominatrix are used to describe a dominant woman, instead of the sometimes gender-neutral term dom. Individuals who change between top/dominant and bottom/submissive roles—whether from relationship to relationship or within a given relationship—are called switches. The precise definition of roles and self-identification is a common subject of debate among BDSM participants.
Fundamentals
BDSM is an umbrella term for certain kinds of erotic behaviour between consenting adults, encompassing various subcultures. Terms for roles vary widely among the subcultures. Top and dominant are widely used for those partner(s) in the relationship or activity who are, respectively, the physically active or controlling participants. Bottom and submissive are widely used for those partner(s) in the relationship or activity who are, respectively, the physically receptive or controlled participants. The interaction between tops and bottoms—where physical or mental control of the bottom is surrendered to the top—is sometimes known as "power exchange", whether in the context of an encounter or a relationship.
BDSM actions can often take place during a specific period of time agreed to by both parties, referred to as "play", a "scene", or a "session". Participants usually derive pleasure from this, even though many of the practices—such as inflicting pain or humiliation or being restrained—would be unpleasant under other circumstances. Explicit sexual activity, such as sexual penetration, may occur within a session, but is not essential. For legal reasons, such explicit sexual interaction is seen only rarely in public play spaces and is sometimes banned by the rules of a party or playspace. Whether it is a public "playspace"—ranging from a party at an established community dungeon to a hosted play "zone" at a nightclub or social event—the parameters of allowance can vary. Some have a policy of panties/nipple sticker for women (underwear for men) and some allow full nudity with explicit sexual acts.
The fundamental principles for the exercise of BDSM require that it be performed with the informed consent of all parties. Since the 1980s, many practitioners and organizations have adopted the motto (originally from the statement of purpose of GMSMA—a gay SM activist organization) safe, sane and consensual (SSC), which means that everything is based on safe activities, that all participants are of sufficiently sound mind to consent, and that all participants do consent. Mutual consent makes a clear legal and ethical distinction between BDSM and such crimes as sexual assault and domestic violence.
Some BDSM practitioners prefer a code of behaviour that differs from SSC. Described as "risk-aware consensual kink" (RACK), this code shows a preference for a style in which the individual responsibility of the involved parties is emphasized more strongly, with each participant being responsible for their own well-being. Advocates of RACK argue that SSC can hamper discussion of risk because no activity is truly "safe", and that discussion of even low-risk possibilities is necessary for truly informed consent. They further argue that setting a discrete line between "safe" and "not-safe" activities ideologically denies consenting adults the right to evaluate risks versus rewards for themselves; that some adults will be drawn to certain activities regardless of the risk; and that BDSM play—particularly higher-risk play or edgeplay—should be treated with the same regard as extreme sports, with both respect and the demand that practitioners educate themselves and practice the higher-risk activities to decrease risk. RACK may be seen as focusing primarily upon awareness and informed consent, rather than accepted safe practices.
Consent is the most important criterion. The consent and compliance for a sadomasochistic situation can be granted only by people who can judge the potential results. For their consent, they must have relevant information (the extent to which the scene will go, potential risks, if a safeword will be used, what that is, and so on) at hand and the necessary mental capacity to judge. The resulting consent and understanding is occasionally summarized in a written "contract", which is an agreement of what can and cannot take place.
BDSM play is usually structured such that it is possible for the consenting partner to withdraw their consent at any point during a scene; for example, by using a safeword that was agreed on in advance. Use of the agreed safeword (or occasionally a "safe symbol" such as dropping a ball or ringing a bell, especially when speech is restricted) is seen by some as an explicit withdrawal of consent. Failure to honor a safeword is considered serious misconduct and could constitute a crime, depending on the relevant law, since the bottom or top has explicitly revoked their consent to any actions that follow the use of the safeword. For other scenes, particularly in established relationships, a safeword may be agreed to signify a warning ("this is getting too intense") rather than explicit withdrawal of consent; and a few choose not to use a safeword at all.
Terminology and subtypes
The initialism BDSM stands for:
Bondage and discipline (B&D)
Dominance and submission (D&s)
Sadomasochism (or S&M)
These terms replaced sadomasochism, as they more broadly cover BDSM activities and focus on the submissive roles instead of psychological pain. The model is only an attempt at phenomenological differentiation. Individual tastes and preferences in the area of human sexuality may overlap among these areas.
Under the initialism BDSM, these psychological and physiological facets are also included:
Male dominance
Male submission
Female dominance
Female submission
The term bondage describes the practice of physical restraint. Bondage is usually, but not always, a sexual practice. While bondage is a very popular variation within the larger field of BDSM, it is nevertheless sometimes differentiated from the rest of this field. A 2015 study of over 1,000 Canadians showed that about half of all men held fantasies of bondage, and almost half of all women did as well. In a strict sense, bondage means binding the partner by tying their appendages together; for example, by the use of handcuffs or ropes, or by lashing their arms to an object. Bondage can also be achieved by spreading the appendages and fastening them with chains or ropes to a St. Andrew's cross or spreader bars.
The term discipline describes the use of rules and punishment to control overt behaviour. Punishment can be pain caused physically (such as caning), humiliation caused psychologically (such as a public flagellation) or loss of freedom caused physically (for example, chaining the submissive partner to the foot of a bed). Another aspect is the structured training of the bottom.
Dominance and submission (also known as D&s, Ds or D/s) is a set of behaviours, customs and rituals relating to the giving and accepting of control of one individual over another in an erotic or lifestyle context. It explores the more mental aspect of BDSM. This is also the case in many relationships not considering themselves as sadomasochistic; it is considered to be a part of BDSM if it is practiced purposefully. The range of its individual characteristics is thereby wide.
Often, BDSM contracts are set out in writing to record the formal consent of the parties to the power exchange, stating their common vision of the relationship dynamic. The purpose of this kind of agreement is primarily to encourage discussion and negotiation in advance and then to document that understanding for the benefit of all parties. Such documents have not been recognized as being legally binding, nor are they intended to be. These agreements are binding in the sense that the parties have the expectation that the negotiated rules will be followed. Often other friends and community members may witness the signing of such a document in a ceremony, and so parties violating their agreement can result in loss of face, respect or status with their friends in the community.
In general, as compared to conventional relationships, BDSM participants go to greater lengths to negotiate the important aspects of their relationships in advance, and to contribute significant effort toward learning about and following safe practices.
In D/s, the dominant is the top and the submissive is the bottom. In S/M, the sadist is usually the top and the masochist the bottom, but these roles are frequently more complicated or jumbled (as in the case of a dominant masochist, who may arrange for their submissive to carry out S/M activities on them). As in B/D, the declaration of the top/bottom may be required, though sadomasochists may also play without any power exchange at all, with both partners equally in control of the play.
Etymology
The term sadomasochism is derived from the words sadism and masochism. These terms differ somewhat from the same terms used in psychology since those require that the sadism or masochism cause significant distress or involve non-consenting partners. Sadomasochism refers to the aspects of BDSM surrounding the exchange of physical or emotional pain. Sadism describes sexual pleasure derived by inflicting pain, degradation, humiliation on another person or causing another person to suffer. On the other hand, the masochist enjoys being hurt, humiliated, or suffering within the consensual scenario. Sadomasochistic scenes sometimes reach a level that appears more extreme or cruel than other forms of BDSM—for example, when a masochist is brought to tears or is severely bruised—and is occasionally unwelcome at BDSM events or parties. Sadomasochism does not imply enjoyment through causing or receiving pain in other situations (for example, accidental injury, medical procedures).
The terms sadism and masochism are derived from the names of the Marquis de Sade and Leopold von Sacher-Masoch, based on the content of the authors' works. Although the names of de Sade and Sacher-Masoch are attached to the terms sadism and masochism respectively, the scenes described in de Sade's works do not meet modern BDSM standards of informed consent. BDSM is solely based on consensual activities, and based on its system and laws. The concepts presented by de Sade are not in accordance with the BDSM culture, even though they are sadistic in nature. In 1843, the Ruthenian physician Heinrich Kaan published Psychopathia Sexualis (Psychopathy of Sex), a writing in which he converts the sin conceptions of Christianity into medical diagnoses. With his work, the originally theological terms perversion, aberration and deviation became part of the scientific terminology for the first time. The German psychiatrist Richard von Krafft-Ebing introduced the terms sadism and masochism to the medical community in his work Neue Forschungen auf dem Gebiet der Psychopathia sexualis (New research in the area of Psychopathy of Sex) in 1890.
In 1905, Sigmund Freud described sadism and masochism in his Three Essays on the Theory of Sexuality as diseases developing from an incorrect development of the child psyche and laid the groundwork for the scientific perspective on the subject in the following decades. This led to the first use of the compound term sado-masochism (German Sadomasochismus) by the Viennese psychoanalyst Isidor Isaak Sadger in his work "Über den sado-masochistischen Komplex" ("Regarding the sadomasochistic complex") in 1913.
In the later 20th century, BDSM activists have protested against these conceptual models, as they were derived from the philosophies of two singular historical figures. Both Freud and Krafft-Ebing were psychiatrists; their observations on sadism and masochism were dependent on psychiatric patients, and their models were built on the assumption of psychopathology. BDSM activists argue that it is illogical to attribute human behavioural phenomena as complex as sadism and masochism to the "inventions" of two historic individuals. Advocates of BDSM have sought to distinguish themselves from widely held notions of antiquated psychiatric theory by the adoption of the term BDSM as a distinction from the now common usage of those psychological terms, abbreviated as S&M.
Behavioural and physiological aspects
BDSM is commonly mistaken as being "all about pain". Freud was confounded by the complexity and counterintuitiveness of practitioners' doing things that are self-destructive and painful. Rather than pain, BDSM practitioners are primarily concerned with power, humiliation, and pleasure. The aspects of D/s and B/D may not include physical suffering at all, but include the sensations experienced by different emotions of the mind.
Of the three categories of BDSM, only sadomasochism specifically requires pain, but this is typically a means to an end, as a vehicle for feelings of humiliation, dominance, etc. In psychology, this aspect becomes a deviant behaviour once the act of inflicting or experiencing pain becomes a substitute for or the main source of sexual pleasure. In its most extreme, the preoccupation on this kind of pleasure can lead participants to view humans as insensate means of sexual gratification.
Dominance and submission of power are an entirely different experience, and are not always psychologically associated with physical pain. Many BDSM activities involve no pain or humiliation, but just the exchange of power and control. During the activities, the participants may feel endorphin effects comparable to "runner's high" and to the afterglow of orgasm. The corresponding trance-like mental state is also called subspace, for the submissive, and domspace, for the dominant. Some use body stress to describe this physiological sensation. The experience of algolagnia is important, but is not the only motivation for many BDSM practitioners. The philosopher Edmund Burke called the sensation of pleasure derived from pain "sublime". Couples engaging in consensual BDSM tend to show hormonal changes that indicate decreases in stress and increases in emotional bonding.
There is an array of BDSM practitioners who take part in sessions in which they do not receive any personal gratification. They enter such situations solely with the intention to allow their partners to indulge their own needs or fetishes. Professional dominants do this in exchange for money, but non-professionals do it for the sake of their partners.
In some BDSM sessions, the top exposes the bottom to a range of sensual experiences, such as pinching; biting; scratching with fingernails; erotic spanking; erotic electrostimulation; and the use of crops, whips, liquid wax, ice cubes, and Wartenberg wheels. Fixation by handcuffs, ropes, or chains may occur. The repertoire of possible "toys" is limited only by the imagination of both partners. To some extent, everyday items, such as clothespins, wooden spoons, and plastic wrap, are used in sex play. It is commonly considered that a pleasurable BDSM experience during a session depends strongly on the top's competence and experience and the bottom's physical and mental state. Trust and sexual arousal help the partners enter a shared mindset.
Types of play
Following are some of the types of BDSM play:
Animal roleplay
Bondage (BDSM)
Breast torture
Cock and ball torture
Diaper play
Edgeplay
Erotic electrostimulation
Erotic sexual denial
Spanking
Flogging
Human furniture
Japanese bondage
Medical play
Omorashi and bathroom use control
Paraphilic infantilism
Play piercing
Predicament bondage
Pussy torture
Salirophilia
Sexual roleplay
Suspension
Tickle torture
Urolagnia
Wax play
Safety
Besides safe sex, BDSM sessions often require a wider array of safety precautions than vanilla sex (sexual behaviour without BDSM elements). To ensure consent related to BDSM activity, pre-play negotiations are commonplace, especially among partners who do not know each other very well. In practice, pick-up scenes at clubs or parties may sometimes be low in negotiation (much as pick-up sex from singles bars may not involve much negotiation or disclosure). These negotiations concern the interests and fantasies of each partner and establish a framework of both acceptable and unacceptable activities. This kind of discussion is a typical "unique selling proposition" of BDSM sessions and quite commonplace. Additionally, safewords are often arranged to provide for an immediate stop of any activity if any participant should so desire.
Safewords are words or phrases that are called out when things are either not going as planned or have crossed a threshold one cannot handle. They are something both parties can remember and recognize and are, by definition, not words commonly used playfully during any kind of scene. Words such as no, stop, and don't, are often inappropriate as a safeword if the roleplaying aspect includes the illusion of non-consent.
The traffic light system (TLS) is the most commonly used set of safewords.
Red – meaning: stop immediately and check the status of your partner
Yellow – meaning: slow down, be careful
Green – meaning: I'm all good, we can start. If used it's normally uttered by everyone involved before the scene can start.
At most clubs and group-organized BDSM parties and events, dungeon monitors (DMs) provide an additional safety net for the people playing there, ensuring that house rules are followed and safewords respected.
BDSM participants are expected to understand practical safety aspects, such as the potential for harm to body parts. Contusion or scarring of the skin can be a concern. Using crops, whips, or floggers, the top's fine motor skills and anatomical knowledge can make the difference between a satisfying session for the bottom and a highly unpleasant experience that may even entail severe physical harm. The very broad range of BDSM "toys" and physical and psychological control techniques often requires a far-reaching knowledge of details related to the requirements of the individual session, such as anatomy, physics, and psychology. Despite these risks, BDSM activities usually result in far less severe injuries than sports like boxing and football, and BDSM practitioners do not visit emergency rooms any more often than the general population.
It is necessary to be able to identify each person's psychological "squicks" or triggers in advance to avoid them. Such losses of emotional balance due to sensory or emotional overload are a fairly commonly discussed issue. It is important to follow participants' reactions empathetically and continue or stop accordingly. For some players, sparking "freakouts" or deliberately using triggers may be the desired outcome. Safewords are one way for BDSM practices to protect both parties. However, partners should be aware of each other's psychological states and behaviours to prevent instances where the "freakouts" prevent the use of safewords.
After any BDSM activity, it is important that the participants go through aftercare to process and calm down. Participants may need aftercare because their bodies have experienced trauma and they need to mentally come out of the roleplay.
Social aspects
Types of relationships
Long term
A 2003 study, the first to look at these relationships, demonstrated that "quality long-term functioning relationships" exist among practitioners of BDSM, with either sex being the top or bottom (the study was based on 17 heterosexual couples). Respondents in the study expressed their BDSM orientation to be built into who they are, but considered exploring their BDSM interests an ongoing task, and showed flexibility and adaptability in order to match their interests with their partners. The "perfect match", in which both partners shared the same tastes and desires, was rare, and most relationships required both partners to take up or put aside some of their desires. The BDSM activities that the couples engaged in varied from sexual to nonsexual in significance; partners reported doing certain BDSM activities for "couple bonding, stress release, and spiritual quests". The most reported issue amongst respondents was not finding enough time to be in role, with most adopting a lifestyle wherein both partners maintain their dominant or submissive role throughout the day.
Amongst the respondents, it was typically the bottoms who wanted to play harder, and be more restricted into their roles when there was a difference in desire to play in the relationship. The author of the study, Bert Cutler, speculated that tops may be less often in the mood to play due to the increased demand for responsibility on their part: being aware of the safety of the situation and prepared to remove the bottom from a dangerous scenario, being conscious of the desires and limits of the bottom, and so on. The author of the study stressed that successful long-term BDSM relationships came after "early and thorough disclosure" from both parties of their BDSM interests.
Many of those engaged in long-term BDSM relationships learned their skills from larger BDSM organizations and communities. There was much discussion by the respondents about the amount of control the top possessed in the relationships, but "no discussion of being better, or smarter, or of more value" than the bottom. Couples were generally of the same mind as to whether or not they were in an ongoing power-exchange relationship; in such cases the bottom was not constantly restrained, but their role in the context of the relationship was always present, even when the top was doing non-dominant activities such as household chores, or the bottom was in a more dominant position.
The study further goes on to list three aspects that made the successful relationships work: early disclosure of interests and continued transparency, a commitment to personal growth, and the use of the dominant/submissive roles as a tool to maintain the relationship. In closing remarks, the author of the study theorizes that, due to the serious potential for harm, couples in BDSM relationships may develop levels of communication higher than those in mainstream relationships.
Professional services
A professional dominatrix or professional dominant, often referred to within the culture as a pro-dom(me), offers services encompassing the range of bondage, discipline, and dominance in exchange for money. The term dominatrix is little-used within the non-professional BDSM scene. A non-professional dominant woman is more commonly referred to simply as a domme, dominant, or femdom (short for female dominance). Professional submissives ("pro-subs"), although far more rare, do exist.
Scenes
In BDSM, a "scene" is the stage or setting where BDSM activity takes place, as well as the activity itself. The physical place where a BDSM activity takes place is usually called a dungeon, though some prefer less dramatic terms, including playspace or club. A BDSM activity can, but need not, involve sexual activity or sexual roleplay. A characteristic of many BDSM relationships is the power exchange from the bottom to the dominant partner, and bondage features prominently in BDSM scenes and sexual roleplay.
"The Scene" (including use of the definite article the) is also used in the BDSM community to refer to the BDSM community as a whole. Thus someone who is on "the Scene", and prepared to play in public, might take part in "a scene" at a public play party.
A scene can take place in private between two or more people and can involve a domestic arrangement, such as servitude or a casual or committed lifestyle master/slave relationship. BDSM elements may involve settings of slave training or punishment for breaches of instructions.
A scene can also take place in a club, where the play can be viewed by others. When a scene takes place in a public setting, it may be because the participants enjoy being watched by others, or because of the equipment available, or because having third parties present adds safety for play partners who have only recently met.
Etiquette
Most standard social etiquette rules still apply when at a BDSM event, such as not intimately touching someone you do not know, not touching someone else's belongings (including toys), and abiding by dress codes. Many events open to the public also have rules addressing alcohol consumption, recreational drugs, cell phones, and photography.
A specific scene takes place within the general conventions and etiquette of BDSM, such as requirements for mutual consent and agreement as to the limits of any BDSM activity. This agreement can be incorporated into a formal contract. In addition, most clubs have additional rules which regulate how onlookers may interact with the actual participants in a scene. As is common in BDSM, these are founded on the catchphrase "safe, sane, and consensual".
Parties and clubs
BDSM play parties are events in which BDSM practitioners and other similarly interested people meet in order to communicate, share experiences and knowledge, and to "play" in an erotic atmosphere. BDSM parties show similarities to ones in the dark culture, being based on a more or less strictly enforced dress code; often clothing made of latex, leather or vinyl/PVC, lycra and so on, emphasizing the body's shape and the primary and secondary sexual characteristics. The requirements for such dress codes differ: while some events have none, others have a policy in order to create a more coherent atmosphere and to prevent outsiders from taking part.
At these parties, BDSM can be publicly performed on a stage, or more privately in separate "dungeons". A reason for the relatively fast spread of this kind of event is the opportunity to use a wide range of "playing equipment", which in most apartments or houses is unavailable. Slings, St. Andrew's crosses (or similar restraining constructs), spanking benches, and punishing supports or cages are often made available. The problem of noise disturbance is also lessened at these events, while in the home setting many BDSM activities can be limited by this factor. In addition, such parties offer both exhibitionists and voyeurs a forum to indulge their inclinations without social criticism. Sexual intercourse is not permitted within most public BDSM play spaces or not often seen in others, because it is not the emphasis of this kind of play. In order to ensure the maximum safety and comfort for the participants, certain standards of behaviour have evolved; these include aspects of courtesy, privacy, respect and safewords. Today BDSM parties are taking place in most of the larger cities in the Western world.
This scene appears particularly on the Internet, in publications, and in meetings such as at fetish clubs (like Torture Garden), SM parties, gatherings called munches, and erotic fairs like Venus Berlin. The annual Folsom Street Fair held in San Francisco is the world's largest BDSM event. It has its roots in the gay leather movement. The weekend-long festivities include a wide range of sadomasochistic erotica in a public clothing optional space between 8th and 13th streets with nightly parties associated with the organization.
There are also conventions such as Living in Leather and Black Rose.
Psychology
Research indicates that there is no evidence that a preference for BDSM is a consequence of childhood abuse. Some reports suggest that people abused as children may have more BDSM injuries and more difficulty having safewords respected as a signal to stop previously consensual behaviour; thus, it is possible that people who choose BDSM as part of their lifestyle and who were also previously abused may have had more police or hospital reports of injuries. In one study of three online surveys, many transgender adults remarked that BDSM influenced their individual experience of gender.
Joseph Merlino, author and psychiatry adviser to the New York Daily News, said in an interview that a sadomasochistic relationship, as long as it is consensual, is not a psychological problem:
Some psychologists agree that experiences during early sexual development can have a profound effect on the character of sexuality later in life. Sadomasochistic desires, however, seem to form at a variety of ages. Some individuals report having had them before puberty, while others do not discover them until well into adulthood. According to one study, the majority of male sadomasochists (53%) developed their interest before the age of 15, while the majority of females (78%) developed their interest afterward (Breslow, Evans, and Langley 1985). The prevalence of sadomasochism within the general population is unknown. Although female sadists are less visible than males, some surveys have found comparable rates of sadistic fantasies among females and males. The results of such studies demonstrate that one's sex does not determine preference for sadism.
Following a phenomenological study of nine individuals involved in sexual masochistic sessions who regarded pain as central to their experience, sexual masochism was described as an addiction-like tendency, with several features resembling that of drug addiction: craving, intoxication, tolerance and withdrawal. It was also demonstrated how the first masochistic experience is placed on a pedestal, with subsequent use aiming at retrieving this lost sensation, much as described in the descriptive literature on addiction.
Prevalence
BDSM occurs among people of all genders and sexual orientations, and in varied occurrences and intensities. The spectrum ranges from couples with no connections to the subculture outside of their bedrooms or homes, without any awareness of the concept of BDSM, playing "tie-me-up games", to public scenes on St. Andrew's crosses at large events such as the Folsom Street Fair in San Francisco. Estimates of the overall percentage of BDSM-related sexual behaviour vary.
Alfred Kinsey stated in his 1953 nonfiction book Sexual Behavior in the Human Female that 12% of females and 22% of males reported having an erotic response to a sadomasochistic story.
A non-representative survey on the sexual behaviour of American students, published in 1997 and based on questionnaires, had a response rate of about 8–9%. Its results showed that 15% of homosexual and bisexual males, 21% of lesbian and bisexual females, 11% of heterosexual males and 9% of heterosexual females admitted to BDSM-related fantasies. In all groups the level of practical BDSM experience was around 6%; within the group of openly lesbian and bisexual females, the figure was significantly higher, at 21%. Independent of their sexual orientation, about 12% of all questioned students, 16% of lesbians and bisexual females and 8% of heterosexual males articulated an interest in spanking. Experience with this sexual behaviour was indicated by 30% of male heterosexuals, 33% of female bisexuals and lesbians, and 24% of gay and bisexual men and heterosexual women. Even though this study was not considered representative, other surveys indicate similar dimensions in differing target groups.
A representative study done from 2001 to 2002 in Australia found that 1.8% of sexually active people (2.2% of men, 1.3% of women; no significant sex difference) had engaged in BDSM activity in the previous year. Of the entire sample, 1.8% of men and 1.3% of women had been involved in BDSM. BDSM activity was significantly more likely among bisexuals and homosexuals of both sexes. Among men in general, there was no significant relationship between BDSM activity and age, education, language spoken at home, or relationship status. Among women in this study, activity was most common for those between 16 and 19 years of age and least likely for those over 50. Activity was also significantly more likely for women who had a regular partner they did not live with, but was not significantly related to speaking a language other than English or to education.
Another representative study, published in 1999 by the German Institut für rationale Psychologie, found that about two-thirds of the interviewed women stated a desire to be at the mercy of their sexual partners from time to time: 69% admitted to fantasies dealing with sexual submissiveness, 42% stated interest in explicit BDSM techniques, and 25% in bondage. A 1976 study of the general US population suggests that three per cent have had positive experiences with bondage or master-slave roleplaying; overall, 12% of the interviewed females and 18% of the males were willing to try it. A 1990 Kinsey Institute report stated that 5% to 10% of Americans occasionally engage in sexual activities related to BDSM, and that 11% of men and 17% of women reported trying bondage. Some elements of BDSM have been popularized through increased media coverage since the mid-1990s; thus black leather clothing, sexual jewellery such as chains, and dominance roleplay increasingly appear outside of BDSM contexts.
According to yet another survey of 317,000 people in 41 countries, about 20% of those surveyed had used masks, blindfolds or other bondage equipment at least once, and 5% explicitly identified with BDSM. In 2004, 19% mentioned spanking as one of their practices and 22% confirmed the use of blindfolds or handcuffs.
A 1985 study found 52 out of 182 female respondents (28%) were involved in sadomasochistic activities.
Recent surveys
A 2009 study on two separate samples of male undergraduate students in Canada found that 62 to 65%, depending on the sample, had entertained sadistic fantasies, and 22 to 39% engaged in sadistic behaviours during sex. The figures were 62 and 52% for bondage fantasies, and 14 to 23% for bondage behaviours. A 2014 study involving a mixed sample of Canadian college students and online volunteers, both male and female, reported that 19% of male samples and 10% of female samples rated the sadistic scenarios described in a questionnaire as being at least "slightly arousing" on a scale that ranged from "very repulsive" to "very arousing"; the difference was statistically significant. The corresponding figures for the masochistic scenarios were 15% for male students and 17% for female students, a non-significant difference. In a 2011 study on 367 middle-aged and elderly men recruited from the broader community in Berlin, 21.8% of the men self-reported sadistic fantasies and 15.5% sadistic behaviours; 24.8% self-reported any such fantasy and/or behaviour. The corresponding figures for self-reported masochism were 15.8% for fantasy, 12.3% for behaviour, and 18.5% for fantasy and/or behaviour. In a 2008 study on gay men in Puerto Rico, 14.8% of the over 425 community volunteers reported any sadistic fantasy, desire or behaviour in their lifetime; the corresponding figure for masochism was 15.7%. A 2017 cross-sectional representative survey among the general Belgian population demonstrated a substantial prevalence of BDSM fantasies and activities; 12.5% of the population performed one or more BDSM practices on a regular basis.
Recent surveys on the spread of BDSM fantasies and practices show strong variations in the range of their results. Researchers believe that 5 to 25 per cent of the population practices sexual behaviour related to pain or dominance and submission. The population with related fantasies is believed to be even larger.
Medical categorization
Reflecting changes in social norms, modern medical opinion is now moving away from regarding BDSM activities as medical disorders, unless they are nonconsensual or involve significant distress or harm.
In 1995, Denmark became the first European Union country to have completely removed sadomasochism from its national classification of diseases. This was followed by Sweden in 2009, Norway in 2010, Finland in 2011, and Iceland in 2015.
DSM
In the past, the Diagnostic and Statistical Manual of Mental Disorders (DSM), the American Psychiatric Association's manual, defined some BDSM activities as sexual disorders. Following campaigns from advocacy organizations including the National Coalition for Sexual Freedom, the current version of the DSM, DSM-5, excludes consensual BDSM from diagnosis when the sexual interests cause no harm or distress.
ICD
The World Health Organization's International Classification of Diseases (ICD) has changed in regard to BDSM in recent years.
In Europe, an organization called ReviseF65 worked to remove sadomasochism from the ICD.
On 18 June 2018, the WHO (World Health Organization) published ICD-11, in which sadomasochism, fetishism, and fetishistic transvestism (cross-dressing for sexual pleasure) were removed as psychiatric diagnoses. Moreover, discrimination against people with fetishes and BDSM practitioners is considered inconsistent with human rights principles endorsed by the United Nations and the World Health Organization.
The classifications of sexual disorders reflect contemporary sexual norms and have moved from a model of pathologization or criminalization of non-reproductive sexual behaviors to a model that reflects sexual well-being and pathologizes the absence or limitation of consent in sexual relations.
Coming out
Some people who are interested in or curious about BDSM decide to come out of the closet, although many sadomasochists remain closeted. Depending upon a survey's participants, about 5 to 25 per cent of the US population show affinity to the subject. Other than a few artists and writers, practically no celebrities are publicly known as sadomasochists.
Public knowledge of one's BDSM lifestyle can have detrimental vocational and social effects for sadomasochists. Many face severe professional consequences or social rejection if they are exposed, either voluntarily or involuntarily, as sadomasochists.
Within feminist circles, the discussion is split roughly into two camps: some who see BDSM as an aspect or reflection of oppression (for example, Alice Schwarzer) and, on the other side, pro-BDSM feminists, often grouped under the banner of sex-positive feminism (see Samois); both of them can be traced back to the 1970s. Some feminists have criticized BDSM for eroticizing power and violence and reinforcing misogyny. They argue that women who engage in BDSM are making a choice that is ultimately bad for women. Feminist defenders of BDSM argue that consensual BDSM activities are enjoyed by many women and validate the sexual inclinations of these women. They argue that there is no connection between consensual kinky activities and sex crimes, and that feminists should not attack other women's sexual desires as being "anti-feminist". They also state that the main point of feminism is to give an individual woman free choices in her life; which includes her sexual desire. While some feminists suggest connections between consensual BDSM scenes and non-consensual rape and sexual assault, other sex-positive ones find the notion insulting to women.
Roles are not fixed to gender, but personal preferences. The dominant partner in a heterosexual relationship may be the woman rather than the man, or BDSM may be part of male/male or female/female sexual relationships. Finally, some people switch, taking either a dominant or submissive role on different occasions. Several studies investigating the possibility of a correlation between BDSM pornography and the violence against women also indicate a lack of correlation. In 1991, a lateral survey came to the conclusion that between 1964 and 1984, despite the increase in amount and availability of sadomasochistic pornography in the U.S., Germany, Denmark and Sweden, there is no correlation with the national number of rapes to be found.
Operation Spanner in the U.K. demonstrated that BDSM practitioners still run the risk of being stigmatized as criminals. In 2003, the media coverage of Jack McGeorge showed that simply participating and working in BDSM support groups poses risks to one's job, even in countries where no law restricts it. Here a clear difference from the situation of homosexuality can be seen. The psychological strain appearing in some individual cases is normally neither articulated nor acknowledged in public. Nevertheless, it leads to a difficult psychological situation in which the person concerned can be exposed to high levels of emotional stress.
In the stage of "self-awareness", a person realizes their desires related to BDSM scenarios or decides to be open to such. Some authors call this internal coming-out. Two separate surveys on this topic independently came to the conclusion that 58 per cent and 67 per cent of their respective samples had realized their disposition before their 19th birthday. Other surveys on this topic show comparable results. Independent of age, coming-out can potentially result in a difficult life crisis, sometimes leading to thoughts or acts of suicide. While homosexuals have created support networks in the last decades, sadomasochistic support networks are just starting to develop in most countries; in German-speaking countries they are only moderately more developed. The Internet is the prime contact point for support groups today, allowing for local and international networking. In the U.S., Kink Aware Professionals (KAP), a privately funded non-profit service, provides the community with referrals to psychotherapeutic, medical, and legal professionals who are knowledgeable about and sensitive to the BDSM, fetish, and leather community. In the U.S. and the U.K., the Woodhull Freedom Foundation & Federation, National Coalition for Sexual Freedom (NCSF) and Sexual Freedom Coalition (SFC) have emerged to represent the interests of sadomasochists. The German Bundesvereinigung Sadomasochismus is committed to the same aim of providing information and driving press relations. In 1996, the website and mailing list Datenschlag went online in German and English, providing the largest bibliography as well as one of the most extensive historical collections of sources related to BDSM.
Social (non-medical) research
Richters et al. (2008) found that people who engaged in BDSM were more likely to have experienced a wider range of sexual practices (e.g., oral or anal sex, more than one partner, group sex, phone sex, viewed pornography, used a sex toy, fisting, etc.). They were, however, not any more likely to have been coerced, unhappy, anxious, or experiencing sexual difficulties. On the contrary, men who had engaged in BDSM scored lower on a psychological distress scale than men who did not.
There have been few studies on the psychological aspects of BDSM using modern scientific standards. Psychotherapist Charles Moser has said there is no evidence for the theory that BDSM has common symptoms or any common psychopathology, emphasizing that there is no evidence that BDSM practitioners have any special psychiatric or other problems based on their sexual preferences.
Problems sometimes occur with self-classification. During the phase of coming-out, self-questioning related to one's own "normality" is common. According to Moser, the discovery of BDSM preferences can result in fear that the current non-BDSM relationship will be destroyed. This, combined with the fear of discrimination in everyday life, leads in some cases to a double life which can be highly burdensome. At the same time, the denial of BDSM preferences can induce stress and dissatisfaction with one's own "vanilla" lifestyle, feeding the apprehension of finding no partner. Moser states that BDSM practitioners who have problems finding BDSM partners would probably have problems finding a non-BDSM partner as well. The wish to remove BDSM preferences is another possible reason for psychological problems, since it is not possible in most cases. Finally, Moser states that BDSM practitioners seldom commit violent crimes; from his point of view, crimes by BDSM practitioners usually have no connection with the BDSM components existing in their lives. Moser's study concludes that there is no scientific evidence that could give reason to refuse members of this group work or safety certificates, adoption possibilities, custody or other social rights or privileges. The Swiss psychoanalyst Fritz Morgenthaler shares a similar perspective in his book Homosexuality, Heterosexuality, Perversion (1988). He states that possible problems result not necessarily from the non-normative behaviour, but in most cases primarily from the real or feared reactions of the social environment towards their own preferences. In 1940, the psychoanalyst Theodor Reik implicitly reached the same conclusion in his standard work Aus Leiden Freuden. Masochismus und Gesellschaft.
Moser's results are further supported by a 2008 Australian study by Richters et al. on the demographic and psychosocial features of BDSM participants. The study found that BDSM practitioners were no more likely to have experienced sexual assault than the control group, and were not more likely to feel unhappy or anxious. The BDSM males reported higher levels of psychological well-being than the controls. It was concluded that "BDSM is simply a sexual interest or subculture attractive to a minority, not a pathological symptom of past abuse or difficulty with 'normal' sex."
Gender differences in research
Several recent studies have been conducted on the gender differences and personality traits of BDSM practitioners. Wismeijer and van Assen (2013) found that "the association of BDSM role and gender was strong and significant", with only 8% of women in the study being dominant compared to 75% being submissive; Hébert and Weaver (2014) found that 9% of women in their study were dominant compared to 88% submissive; Weierstall and Giebel (2017) likewise found a significant difference, with 19% of women in the study as dominant compared to 74% as submissive; and a study from Andrea Duarte Silva (2015) indicated that 61.7% of females who are active in BDSM expressed a preference for a submissive role, 25.7% consider themselves a switch, while 12.6% prefer the dominant role. In contrast, 46.6% of men prefer the submissive role, 24% consider themselves to be switches and 29.5% prefer the dominant role. They concluded that "men more often display an engagement in dominant practices, whereas females take on the submissive part. This result is in line with a recent study about mate preferences that has shown that women have a generally higher preference for a dominant partner than men do (Giebel, Moran, Schawohl, & Weierstall, 2015). Women also prefer dominant men, and even men who are aggressive, for a short-term relationship and for the purpose of sexual intercourse (Giebel, Weierstall, Schauer, & Elbert, 2013)". Similarly, studies on sexual fantasy differences between men and women show the latter prefer submissive and passive fantasies over dominant and active ones, with rape and force being common.
Gender differences in masochistic scripts
One common belief of BDSM and kink is that women are more likely to take on masochistic roles than men. Roy Baumeister (2010) had more male masochists in his study than female, and fewer male dominants than female. The lack of statistical significance in these gender differences suggests that no assumptions should be made regarding gender and masochistic roles in BDSM. One explanation why we might think otherwise lies in our social and cultural ideals about femininity; masochism may emphasize certain stereotypically feminine elements through activities like feminization of men and ultra-feminine clothing for women. But such tendencies of the submissive masochistic role should not be interpreted as a connection between it and the stereotypical female role—many masochistic scripts do not include any of these tendencies.
Baumeister found that masochistic males experienced greater: severity of pain, frequency of humiliation (status-loss, degrading, oral), partner infidelity, active participation by other persons, and cross-dressing. Trends also suggested that male masochism included more bondage and oral sex than female (though the data was not significant). Female masochists, on the other hand, experienced greater: frequency in pain, pain as punishment for 'misdeeds' in the relationship context, display humiliation, genital intercourse, and presence of non-participating audiences. The exclusiveness of dominant males in a heterosexual relationship happens because, historically, men in power preferred multiple partners. Finally, Baumeister observes a contrast between the 'intense sensation' focus of male masochism to a more 'meaning and emotion' centred female masochistic script.
Prior argues that although some of these women may appear to be engaging in traditional subordinate or submissive roles, BDSM allows women in both dominant and submissive roles to express and experience personal power through their sexual identities. In a study that she conducted in 2013, she found that the majority of the women she interviewed identified as bottom, submissive, captive, or slave/sex slave. In turn, Prior was able to answer whether or not these women found an incongruity between their sexual identities and feminist identity. Her research found that these women saw little to no incongruity, and in fact felt that their feminist identity supported identities of submissive and slave. For them, these are sexually and emotionally fulfilling roles and identities that, in some cases, feed other aspects of their lives. Prior contends that third wave feminism provides a space for women in BDSM communities to express their sexual identities fully, even when those identities seem counter-intuitive to the ideals of feminism. Furthermore, women who do identify as submissive, sexually or otherwise, find a space within BDSM where they can fully express themselves as integrated, well-balanced, and powerful women.
Women in S/M culture
Levitt, Moser, and Jamison's 1994 study provides a general, if outdated, description of characteristics of women in the sadomasochistic (S/M) subculture. They state that women in S/M tend to have higher education, become more aware of their desires as young adults, and are less likely to be married than the general population. The researchers found that the majority of females identified as heterosexual and submissive, a substantial minority were versatile—able to switch between dominant and submissive roles—and a smaller minority identified exclusively with the dominant role. Oral sex, bondage and master-slave scripts were among the most popular activities, while feces/watersports were the least popular.
Orientation observances in research
BDSM is considered by some of its practitioners to be a sexual orientation. The BDSM and kink scene is more often seen as a diverse, pansexual community. It is often a non-judgmental community where gender, sexuality, orientation and preferences are accepted as they are, or worked at to become something a person can be happy with. Research has focused on bisexuality and its parallels with BDSM, as well as on gay-straight differences between practitioners.
Asexuality
It has been suggested that some asexual people have found a language for navigating relationships through BDSM.
Bisexuality
In Steve Lenius' original 2001 paper, he explored the acceptance of bisexuality in a supposedly pansexual BDSM community. The reasoning behind this is that 'coming-out' had become primarily the territory of the gay and lesbian, with bisexuals feeling the push to be one or the other (and being right only half the time either way). What he found in 2001, was that people in BDSM were open to discussion about the topic of bisexuality and pansexuality and all controversies they bring to the table, but personal biases and issues stood in the way of actively using such labels. A decade later, Lenius (2011) looks back on his study and considers if anything has changed. He concluded that the standing of bisexuals in the BDSM and kink community was unchanged, and believed that positive shifts in attitude were moderated by society's changing views towards different sexualities and orientations. But Lenius (2011) does emphasize that the pansexual promoting BDSM community helped advance greater acceptance of alternative sexualities.
Brandy Lin Simula (2012), on the other hand, argues that BDSM actively resists gender-conforming and identified three different types of BDSM bisexuality: gender-switching, gender-based styles (taking on a different gendered style depending on the gender of partner when playing), and rejection of gender (resisting the idea that gender matters in their play partners). Simula (2012) explains that practitioners of BDSM routinely challenge our concepts of sexuality by pushing the limits on pre-existing ideas of sexual orientation and gender norms. For some, BDSM and kink provides a platform in creating identities that are fluid, ever-changing.
Comparison between gay and straight men in S/M
Demographically, Nordling et al.'s (2006) study found no differences in age, but 43% of gay male respondents compared to 29% of straight males had university-level education. The gay men also had higher incomes than the general population and tended to work in white-collar jobs while straight men tended toward blue-collar ones. Because there were not enough female respondents (22), no conclusions could be drawn from them.
Sexually speaking, the same 2006 study by Nordling et al. found that gay males were aware of their S/M preferences and took part in them at an earlier age, preferring leather, anal sex, rimming, dildos and special equipment or uniform scenes. In contrast, straight men preferred verbal humiliation, masks and blindfolds, gags, rubber/latex outfits, caning, vaginal sex, straitjackets, and cross-dressing, among other activities. From the questionnaire, researchers were able to identify four separate sexual themes: hyper-masculinity, giving and receiving pain, physical restriction (i.e. bondage), and psychological humiliation. Gay men preferred activities that tended towards hyper-masculinity, while straight men showed a greater preference for humiliation, with significantly higher rates of master/madame–slave roleplay (≈84%). Though there were not enough female respondents to draw a similar conclusion, the fact that there is a difference between gay and straight men strongly suggests that S/M (and BDSM in general) cannot be considered a homogeneous phenomenon. As Nordling et al. (2006) put it, "People who identify as sadomasochists mean different things by these identifications." (54)
History of psychotherapy and current recommendations
Psychiatry has an insensitive history in the area of BDSM, and institutions of political power have at times been involved in marginalizing subgroups and sexual minorities. Mental health professionals have a long history of holding negative assumptions and stereotypes about the BDSM community. Beginning with the DSM-II, Sexual Sadism and Sexual Masochism were listed as sexually deviant behaviours; sadism and masochism were also found in the personality disorder section. This negative assumption has not changed significantly, which is evident in the continued inclusion of Sexual Sadism and Sexual Masochism as paraphilias in the DSM-IV-TR. The DSM-5, however, has depathologized the language around paraphilias in a way that signifies "the APA's intent to not demand treatment for healthy consenting adult sexual expression". Still, biases and misinformation can result in pathologizing and unintentional harm to clients who identify as sadists and/or masochists, and medical professionals trained under older editions of the DSM can be slow to change their clinical practice.
According to Kolmes et al. (2006), major themes of biased and inadequate care to BDSM clients are:
Considering BDSM to be unhealthy
Requiring a client to give up BDSM activities in order to continue in treatment
Confusing BDSM with abuse
Having to educate the therapist about BDSM
Assuming that BDSM interests are indicative of past family/spousal abuse
Therapists misrepresenting their expertise by stating that they are BDSM-positive when they lack knowledge of BDSM practices
These same researchers suggested that therapists should be open to learning more about BDSM, to show comfort in talking about BDSM issues, and to understand and promote "safe, sane, consensual" BDSM.
There has also been research which suggests BDSM can be a beneficial way for victims of sexual assault to deal with their trauma, most notably by Corie Hammers, but this work is limited in scope and, to date, has not undergone empirical testing as a treatment.
Clinical issues
Nichols (2006) compiled some common clinical issues: countertransference, non-disclosure, coming out, partner/families, and bleed-through.
Countertransference is a common problem in clinical settings. Despite having no evidence, therapists may find themselves believing that their client's pathology is "self-evident", and may feel intense disgust and aversive reactions; such feelings of countertransference can interfere with therapy. Another common problem arises when clients conceal their sexual preferences from their therapists, which can compromise any therapy. To avoid non-disclosure, therapists are encouraged to communicate their openness in indirect ways, for example with literature and artworks in the waiting room, and can also deliberately bring up BDSM topics during the course of therapy. Less informed therapists sometimes over-focus on clients' sexuality, which detracts from original issues such as family relationships, depression, and so on. A special subgroup that needs counselling is the "newbie": individuals just coming out might have internalized shame, fear, and self-hatred about their sexual preferences. Therapists need to provide acceptance and care and to model a positive attitude; providing reassurance, psychoeducation, and bibliotherapy for these clients is crucial. The average age at which BDSM individuals realize their sexual preference is around 26 years; many people hide their sexuality until they can no longer contain their desires, but may have married or had children by this point.
History
Origins
Practices resembling BDSM appear in some of the oldest textual records in the world, associated with rituals to the goddess Inanna (Ishtar in Akkadian). Cuneiform texts dedicated to Inanna incorporate domination rituals; in particular, ancient writings such as Inanna and Ebih (in which the goddess dominates Ebih) and the Hymn to Inanna describe cross-dressing transformations and rituals "imbued with pain and ecstasy, bringing about initiation and journeys of altered states of consciousness; punishment, moaning, ecstasy, lament and song, participants exhausting themselves in weeping and grief."
During the 9th century BC, ritual flagellations were performed at the sanctuary of Artemis Orthia, one of the most important religious sites of ancient Sparta, where the cult of Orthia, a pre-Olympic religion, was practiced. Here a ritual flagellation called diamastigosis took place, in which young adolescent men were whipped in a ceremony overseen by the priestess. These rites are referred to by a number of ancient authors, including Pausanias (III, 16: 10–11).
One of the oldest graphical proofs of sadomasochistic activities is found in the Etruscan Tomb of the Whipping near Tarquinia, which dates to the 5th century BC. Inside the tomb there is a fresco which portrays two men flagellating a woman with a cane and a hand during an erotic situation. Another reference to flagellation is found in the sixth book of the Satires of the ancient Roman poet Juvenal (1st–2nd century AD), and a further reference can be found in Petronius's Satyricon, where a delinquent is whipped for sexual arousal. Anecdotal narratives of people voluntarily having themselves bound, flagellated or whipped as a substitute for sex or as part of foreplay reach back to the 3rd and 4th centuries BC.
In Pompeii, a whip-mistress figure with wings is depicted on the wall of the Villa of Mysteries, as part of an initiation of a young woman into the Mysteries. The whip-mistress role drove the sacred initiation of ceremonial death and rebirth. The archaic Greek Aphrodite, too, may once have been armed with an implement, possibly a whip, with archaeological evidence of armed Aphrodites known from a number of locations in Cythera, Acrocorinth and Sparta.
The Kama Sutra of India describes four different kinds of hitting during lovemaking, the allowed regions of the human body to target and different kinds of joyful "cries of pain" practiced by bottoms. The collection of historic texts related to sensuous experiences explicitly emphasizes that impact play, biting and pinching during sexual activities should only be performed consensually since only some women consider such behavior to be joyful. From this perspective, the Kama Sutra can be considered one of the first written resources dealing with sadomasochistic activities and safety rules. Further texts with sadomasochistic connotation appear worldwide during the following centuries on a regular basis.
There are anecdotal reports of people willingly being bound or whipped, as a prelude to or substitute for sex, during the 14th century. The medieval phenomenon of courtly love in all of its slavish devotion and ambivalence has been suggested by some writers to be a precursor of BDSM. Some sources claim that BDSM as a distinct form of sexual behavior originated at the beginning of the 18th century when Western civilization began medically and legally categorizing sexual behavior (see Etymology).
Flagellation practiced within an erotic setting has been recorded from at least the 1590s, evidenced by a John Davies epigram and by references to "flogging schools" in Thomas Shadwell's The Virtuoso (1676) and Tim Tell-Troth's Knavery of Astrology (1680). Visual evidence, such as mezzotints and other prints, also reveals scenes of flagellation, such as "The Cully Flaug'd" from the British Museum collection.
John Cleland's novel Fanny Hill, published in 1749, incorporates a flagellation scene between the protagonist Fanny Hill and Mr Barville. A large number of flagellation publications followed, including Fashionable Lectures: Composed and Delivered with Birch Discipline, promoting the names of women offering the service in a lecture room with rods and cat o' nine tails.
Other sources give a broader definition, citing BDSM-like behavior in earlier times and other cultures, such as the medieval flagellates and the physical ordeal rituals of some Native American societies.
BDSM ideas and imagery have existed on the fringes of Western culture throughout the 20th century. Robert Bienvenu attributes the origins of modern BDSM to three sources, which he names as "European Fetish" (from 1928), "American Fetish" (from 1934), and "Gay Leather" (from 1950). Another source is the sexual games played in brothels, which go back to the 19th century, if not earlier. Charles Guyette was the first American to produce and distribute fetish-related material (costumes, footwear, photography, props and accessories) in the U.S. His successor, Irving Klaw, produced commercial sexploitation films and photography with a BDSM theme (most notably with Bettie Page) and issued fetish comics (known then as "chapter serials") by the now-iconic artists John Willie, Gene Bilbrew, and Eric Stanton.
Stanton's model Bettie Page became at the same time one of the first successful models in the area of fetish photography and one of the most famous pin-up girls of American mainstream culture. The Italian author and designer Guido Crepax was deeply influenced by Stanton, shaping the style and development of European adult comics in the second half of the 20th century. The artists Helmut Newton and Robert Mapplethorpe are the most prominent examples of the increasing use of BDSM-related motifs in modern photography and of the public discussions still resulting from this.
Alfred Binet first coined the term erotic fetishism in his 1887 book Du fétichisme dans l'amour. Richard von Krafft-Ebing saw BDSM interests as the end of a continuum.
Leather movement
Leather has been a predominantly gay male term to refer to one fetish, but it can stand for many more. Members of the gay male leather community may wear leathers such as motorcycle leathers, or may be attracted to men wearing leather. Leather and BDSM are seen as two parts of one whole. Much of the BDSM culture can be traced back to the gay male leather culture, which formalized itself out of the group of men who were soldiers returning home after World War II (1939–1945). The war was the setting in which countless homosexual men and women first experienced life among homosexual peers. Post-war, homosexual individuals congregated in larger cities such as New York, Chicago, San Francisco, and Los Angeles, where they formed leather clubs and bike clubs, some of which were fraternal services. The Mr. Leather and Mr. Drummer contests were established around this time. This was the genesis of the gay male leather community. Many of the members were attracted to extreme forms of sexuality, for which peak expression came in the pre-AIDS 1970s. This subculture is epitomized by the Leatherman's Handbook by Larry Townsend, published in 1972, which describes in detail the practices and culture of gay male sadomasochists in the late 1960s and early 1970s.
In the early 1980s, lesbians also joined the leathermen as a recognizable element of the gay leather community. They also formed leather clubs, but there were some gender differences, such as the absence of leatherwomen's bars. In 1981, the publication of Coming to Power by lesbian-feminist group Samois led to a greater knowledge and acceptance of BDSM in the lesbian community. By the 1990s, the gay men's and women's leather communities were no longer underground and played an important role in the kink community.
Today, the leather movement is generally seen as a part of the BDSM culture rather than as a development deriving from gay subculture, even though a large part of the BDSM subculture was gay in the past. In the 1990s, the so-called New Guard leather subculture evolved; this new orientation started to integrate psychological aspects into its play.
The Leather Archives and Museum (LA&M) in Chicago was founded in 1991 by Chuck Renslow and Tony DeBlase as a “community archives, library, and museum of leather, kink, fetish, and BDSM history and culture.” The LA&M has an extensive collection of leather- and BDSM-related artifacts, including one of three original leather pride flags.
The San Francisco South of Market Leather History Alley consists of four works of art along Ringold Alley honoring leather culture; it opened in 2017. One of the works of art is metal bootprints along the curb which honor 28 people (including Steve McEachern, owner of the Catacombs, a gay and lesbian S/M fisting club, and Cynthia Slater, a founder of the Society of Janus, the second oldest BDSM organization in the United States) who were an important part of the leather communities of San Francisco.
Internet
In the late 1980s, the Internet provided a way of finding people with specialized interests around the world as well as on a local level, and communicating with them anonymously. This brought about an explosion of interest and knowledge of BDSM, particularly on the usenet group alt.sex.bondage. When that group became too cluttered with spam, the focus moved to soc.subculture.bondage-bdsm. With an increased focus on forms of social media, FetLife was formed, which advertises itself as "a social network for the BDSM and fetish community". It operates similarly to other social media sites, with the ability to make friends with other users, events, and pages of shared interests.
In addition to traditional sex shops, which sell sex paraphernalia, there has also been an explosive growth of online adult toy companies that specialize in leather/latex gear and BDSM toys. Once a very niche market, such gear is now carried by most sex toy companies, and very few do not offer some sort of BDSM or fetish items in their catalog. Kinky elements seem to have worked their way into "vanilla" markets, and the former niche has expanded into an important pillar of the business in adult accessories. Today practically all suppliers of sex toys offer items which originally found usage in the BDSM subculture. Padded handcuffs, latex and leather garments, as well as more exotic items like soft whips for fondling and TENS units for erotic electrostimulation, can be found in catalogs aimed at classic vanilla target groups, indicating that former boundaries increasingly seem to shift.
In recent years, the Internet has also provided a central platform for networking among individuals who are interested in the subject. Besides countless private and commercial sites, an increasing number of local networks and support groups has emerged. These groups often offer comprehensive background and health-related information for people who have been unwillingly outed, as well as contact lists with information on psychologists, physicians and lawyers who are familiar with BDSM-related topics.
Legal status
Austria
Section 90 of the Austrian criminal code declares bodily injury (Sections 83–84) or the endangerment of physical security (Section 89) not to be subject to penalty in cases in which the victim has consented and the injury or endangerment does not offend moral sensibilities. Case law from the Austrian Supreme Court has consistently shown that bodily injury is only offensive to moral sensibilities, and thus only punishable, when a "serious injury" (damage to health or an employment disability lasting more than 24 days) or the death of the "victim" results. A light injury is generally considered permissible when the "victim" has consented to it. In cases of threats to bodily well-being, the standard depends on the probability that an injury will occur. If serious injury or even death would be a likely result of a threat being carried out, then even the threat itself is considered punishable.
Canada
In 2004, a judge in Canada ruled that videos seized by the police featuring BDSM activities were not obscene and did not constitute violence, but a "normal and acceptable" sexual activity between two consenting adults.
In 2011, the Supreme Court of Canada ruled in R. v. J.A. that a person must have an active mind during the specific sexual activity in order to legally consent. The Court ruled that it is a criminal offence to perform a sexual act on an unconscious person—whether or not that person consented in advance.
Germany
According to Section 194 of the German criminal code, the charge of insult (slander) can only be prosecuted if the defamed person chooses to press charges. False imprisonment can be charged if the victim—when applying an objective view—can be considered to be impaired in their rights of free movement. According to Section 228, a person inflicting a bodily injury on another person with that person's permission violates the law only in cases where the act can be considered to have violated good morals in spite of permission having been given. On 26 May 2004, the Criminal Panel No. 2 of the Bundesgerichtshof (German Federal Court) ruled that sado-masochistically motivated physical injuries are not per se indecent and thus subject to Section 228.
Following cases in which sado-masochistic practices had been repeatedly used as pressure tactics against former partners in custody cases, the Appeals Court of Hamm ruled in February 2006 that sexual inclinations toward sado-masochism are no indication of a lack of capabilities for successful child-raising.
Italy
In Italian law, BDSM sits right on the border between crime and legality, and everything lies in the judge's interpretation of the legal code. The core concept is that anyone willingly causing "injury" to another person is to be punished. In this context, though, "injury" is legally defined as "anything causing a condition of illness", and "illness" is itself ill-defined in two different legal ways. The first definition is "any anatomical or functional alteration of the organism" (thus technically including even small scratches and bruises); the second is "a significant worsening of a previous condition relevant to organic and relational processes, requiring any kind of therapy". This can make it somewhat risky to play with someone, as the "victim" may later call foul play, citing even an insignificant mark as evidence against the partner. Also, any injury requiring over 20 days of medical care must be reported by the medical professional who discovers it, leading to automatic indictment of the person who caused it.
Nordic countries
In September 2010, a Swedish court acquitted a 32-year-old man of assault for engaging in consensual BDSM play with a 16-year-old girl (the age of consent in Sweden is 15). Norway's legal system has likewise taken a similar position, that safe and consensual BDSM play should not be subject to criminal prosecution. This parallels the stance of the mental health professions in the Nordic countries which have removed sadomasochism from their respective lists of psychiatric illnesses.
Switzerland
The age of consent in Switzerland is 16 years, which also applies to BDSM play. Minors (i.e., those under 16) are not subject to punishment for BDSM play as long as the age difference between them is less than three years. Certain practices, however, require granting consent for light injuries, with only those over 18 permitted to give consent. On 1 April 2002, Articles 135 and 197 of the Swiss Criminal Code were tightened to make ownership of "objects or demonstrations [...] which depict sexual acts with violent content" a punishable offense. This law amounts to a general criminalization of sado-masochism since nearly every sado-masochist will have some kind of media that fulfills this criterion. Critics also object to the wording of the law which puts sado-masochists in the same category as pedophiles and pederasts.
United Kingdom
In British law, consent is an absolute defense to common assault, but not necessarily to actual bodily harm, where courts may decide that consent is not valid, as occurred in the case of R v Brown. Accordingly, consensual activities in the U.K. may not constitute "assault occasioning actual or grievous bodily harm" in law. The Spanner Trust states that this is defined as activities which have caused injury "of a lasting nature" but that only a slight duration or injury might be considered "lasting" in law. The decision contrasts with the later case of R v Wilson in which conviction for non-sexual consensual branding within a marriage was overturned, the appeal court ruling that R v Brown was not an authority in all cases of consensual injury and criticizing the decision to prosecute.
Following Operation Spanner, the European Court of Human Rights ruled in January 1999 in Laskey, Jaggard and Brown v. United Kingdom that no violation of Article 8 occurred because the amount of physical or psychological harm that the law allows between any two people, even consenting adults, is to be determined by the jurisdiction the individuals live in, as it is the State's responsibility to balance the concerns of public health and well-being with the amount of control a State should be allowed to exercise over its citizens. In the Criminal Justice and Immigration Bill 2007, the British Government cited the Spanner case as justification for criminalizing images of consensual acts, as part of its proposed criminalization of possession of "extreme pornography". Another contrasting case was that of Stephen Lock in 2013, who was cleared of actual bodily harm on the grounds that the woman consented. In this case, the act was deemed to be sexual.
United States
United States federal law does not list a specific criminal determination for consensual BDSM acts. Many BDSM practitioners cite the legal decision of People v. Jovanovic, 95 N.Y.2d 846 (2000), or the "Cybersex Torture Case", which was the first U.S. appellate decision to hold (in effect) that one does not commit assault if the victim consents. However, many individual states do criminalize specific BDSM actions within their state borders. Some states specifically address the idea of "consent to BDSM acts" within their assault laws, such as the state of New Jersey, which defines "simple assault" to be "a disorderly persons offense unless committed in a fight or scuffle entered into by mutual consent, in which case it is a petty disorderly persons offense".
Oregon Ballot Measure 9 was a ballot measure in the U.S. state of Oregon in 1992, concerning sadism, masochism, gay rights, pedophilia, and public education, that drew widespread national attention. It proposed adding language on these subjects to the Oregon Constitution.
It was defeated in the 3 November 1992 general election, with 638,527 votes in favor and 828,290 against.
The National Coalition for Sexual Freedom collects reports about punishment for sexual activities engaged in by consenting adults, and about its use in child custody cases.
Cultural aspects
Today, BDSM culture exists in most Western countries, offering practitioners the opportunity to discuss BDSM-related topics and problems with like-minded people. It is often viewed as a subculture, mainly because BDSM is still widely regarded as "unusual" by much of the public. Many people hide their inclinations from society because they fear incomprehension and social exclusion.
In contrast to frameworks seeking to explain sadomasochism through psychological, psychoanalytic, medical or forensic approaches, which seek to categorize behaviour and desires and find a root "cause", Romana Byrne suggests that such practices can be seen as examples of "aesthetic sexuality", in which a founding physiological or psychological impulse is irrelevant. Rather, sadism and masochism may be practiced through choice and deliberation, driven by certain aesthetic goals tied to style, pleasure, and identity. These practices, in certain circumstances and contexts, can be compared with the creation of art.
Symbols
One of the most commonly used symbols of the BDSM community is a derivation of a triskelion shape within a circle. Various forms of triskele have had many uses and many meanings in many cultures; its BDSM usage derives from the Ring of O in the classic book Story of O. The BDSM Emblem Project claims copyright over one particular specified form of the triskelion symbol; other variants of the triskelion are free from such copyright claims.
The three arms of the triskelion can be read as the three component pairings of the acronym BDSM: BD (Bondage & Discipline), DS (Dominance & Submission), and SM (Sadism & Masochism), three distinct practices that are normally associated with one another.
The leather pride flag is a symbol for the leather subculture and is also widely used within BDSM. In continental Europe, the Ring of O is widespread among BDSM practitioners.
The BDSM rights flag was designed by Tanos, a Master from the United Kingdom. It is loosely based on the design of the leather pride flag and also includes a version of the BDSM Emblem (though not one similar enough to fall within Steve Quagmyr's specific copyright claims for the Emblem). The BDSM rights flag is intended to represent the belief that people whose sexuality or relationship preferences include BDSM practices deserve the same human rights as everyone else, and should not be discriminated against for pursuing BDSM with consenting adults.
The flag is inspired by the leather pride flag and BDSM emblem but is specifically intended to represent the concept of BDSM rights and to be without the other symbols' restrictions against commercial use. It is designed to be recognizable by people familiar with either the leather pride flag or BDSM triskelion (or triskele) as "something to do with BDSM"; and to be distinctive whether reproduced in full colour, or in black and white (or another pair of colours).
BDSM and fetish items and styles have been spread widely in Western societies' everyday life by different factors, such as avant-garde fashion, heavy metal, goth subculture, and science fiction TV series, and are often not consciously connected with their BDSM roots by many people. While it was mainly confined to the punk and BDSM subcultures in the 1990s, it has since spread into wider parts of Western societies.
Film and music
In music: The Romanian singer-songwriter NAVI featured BDSM and Shibari scenes in her music video "Picture Perfect" (2014). The video was banned in Romania for its explicit content. In 2010, Rihanna's song "S&M" and Christina Aguilera's single "Not Myself Tonight" appeared, both full of BDSM imagery.
In movies: While BDSM activity appeared initially in subtle form, in the 1960s famous works of literature like Story of O and Venus in Furs were filmed explicitly. With the release of the 1986 film 9½ Weeks, the topic of BDSM was transferred to mainstream cinema. From the 1990s, cinematic representation of alternative sexualities, including BDSM, increased dramatically, as seen in documentary productions such as Graphic Sexual Horror (a 2009 film based on the website Insex), Kink (a 2013 film based on the website Kink.com), and movies such as Fifty Shades of Grey (2015) and its two sequels Fifty Shades Darker (2017) and Fifty Shades Freed (2018).
However, the movies Fifty Shades of Grey (2015) and its two sequels Fifty Shades Darker (2017) and Fifty Shades Freed (2018) are often mistaken in mainstream society for depictions of BDSM, when the activities they portray are, rather, abusive: "A lot of what happens in the main relationship of Fifty Shades of Grey is domestic abuse, both physical and emotional, and for people whose entire understanding of BDSM now comes from jiggle balls and rooms of pain this is a dangerous misconception to foster."
Theater
Although certain elements related to BDSM can be identified in classical theater, it was not until the emergence of contemporary theater that plays took BDSM as their main theme. Two works exemplify this, one Austrian and one German, in which BDSM is not merely incorporated but integral to the storyline of the play.
Worauf sich Körper kaprizieren, Austria. Peter Kern directed and wrote the script for this comedy, a present-day adaptation of a 1950 film by Jean Genet. It is about a marriage in which the wife (film veteran Miriam Goldschmidt) subjects her husband (Heinrich Herkie) and the butler (Günter Bubbnik) to her sadistic treatment until two new characters take their places.
Ach, Hilde (Oh, Hilda), Germany. This play by Anna Schwemmer premiered in Berlin. A young Hilde becomes pregnant, and after being abandoned by her boyfriend she decides to become a professional dominatrix to earn money. The play carefully crafts a playful and frivolous picture of the field of professional dominatrices.
Literature
Although examples of literature catering to BDSM and fetishistic tastes were created in earlier periods, BDSM literature as it exists today cannot be found much earlier than World War II.
The word sadism originates from the works of Donatien Alphonse François, Marquis de Sade, and the word masochism from Leopold von Sacher-Masoch, the author of Venus in Furs. The Marquis de Sade, however, describes non-consensual abuse in works such as Justine, whereas Venus in Furs describes a consensual dom-sub relationship.
A central work in modern BDSM literature is undoubtedly Story of O (1954) by Anne Desclos under the pseudonym Pauline Réage.
Other notable works include 9½ Weeks (1978) by Elizabeth McNeill; some works of Anne Rice (Exit to Eden and her Claiming of Sleeping Beauty series); L'Image (1956) by Jeanne de Berg, dedicated to Pauline Réage; the Gor series by John Norman; and the works of Patrick Califia, Gloria Brame, the group Samois, Georges Bataille (Histoire de l'oeil [Story of the Eye] and Madame Edwarda, 1937), and Bob Flanagan (Slave Sonnets (1986), Fuck Journal (1987), A Taste of Honey (1990)). Reflections on the feelings and sensations arising from erotic exchange of power are also a common element in many of Pablo Neruda's poems. The Fifty Shades trilogy, a series of very popular erotic romance novels by E. L. James, involves BDSM; however, the novels have been criticized for their inaccurate and harmful depiction of BDSM.
In the 21st century, a number of prestigious university presses, such as Duke University, Indiana University and University of Chicago, have published books on BDSM written by professors, thereby lending academic legitimacy to this once taboo topic.
Art
In photography: Eric Kroll and Irving Klaw (who worked with Bettie Page, the first well-known bondage model), and the Japanese photographer Araki Nobuyoshi, whose works are exhibited in several major art museums and galleries and held in private collections such as that of Baroness Marion Lambert, the world's largest holder of contemporary photographic art. Robert Mapplethorpe's most controversial work documents the underground BDSM scene in the late 1960s and early 1970s of New York; the homoeroticism of this work fuelled a national debate over the public funding of controversial artwork.
Comic book drawings: Guido Crepax with Histoire d'O (1975), Justine (1979) and Venere in Pelliccia (1984); inspired by the work of Pauline Réage, the Marquis de Sade and Leopold von Sacher-Masoch. John Willie and The Adventures of Sweet Gwendoline (1984) which was the basis for the film The Perils of Gwendoline in the Land of the Yik-Yak. The Sunstone/Mercy (2011-ongoing) books by Stjepan Sejic have become very popular and are found in many conventional bookstores around the world.
In graphic design: Eric Stanton and his work on dominance and female bondage, as well as Hajime Sorayama and Robert Bishop.
In art deco sculpture: Bruno Zach produced perhaps his best-known sculpture, "The Riding Crop", which features a scantily clad dominatrix wielding a riding crop.
See also
Autosadism
Dominance hierarchy
Index of BDSM articles
Glossary of BDSM
List of BDSM equipment
List of BDSM organizations
List of bondage positions
Leather subculture
Outline of BDSM
Vulnerability and care theory of love
References
Further reading
Baldwin, Guy. Ties That Bind: SM/Leather/Fetish Erotic Style: Issues, Communication, and Advice. Daedalus Publishing, 1993.
Brame, Gloria G., Brame, William D., and Jacobs, Jon. Different Loving: An Exploration of the World of Sexual Dominance and Submission. Villard Books, New York, 1993.
Brame, Gloria. Come Hither: A Commonsense Guide to Kinky Sex. Fireside, 2000.
Califia, Pat. Sensuous Magic. Masquerade Books, New York, 1993.
Dollie Llama. Diary of an S&M Romance. PEEP! Press (California), 2006.
Henkin, William A., and Sybil Holiday. Consensual Sadomasochism: How to Talk About It and How to Do It Safely. Daedalus Publishing, 1996.
Janus, Samuel S., and Janus, Cynthia L. The Janus Report on Sexual Behavior. John Wiley & Sons, 1994.
Masters, Peter. This Curious Human Phenomenon: An Exploration of Some Uncommonly Explored Aspects of BDSM. The Nazca Plains Corporation, 2008.
Phillips, Anita. A Defence of Masochism. Faber and Faber, new edition, 1999.
Newmahr, Staci. Playing on the Edge: Sadomasochism, Risk and Intimacy. Indiana University Press, Bloomington, 2011.
Nomis, Anne O. The History & Arts of the Dominatrix. Mary Egan Publishing & Anna Nomis Ltd, U.K., 2013.
Rinella, Jack. The Complete Slave: Creating and Living an Erotic Dominant/submissive Lifestyle. Daedalus Publishing, 2002.
Saez, Fernando, and Viñuales, Olga. Armarios de Cuero. Ed. Bellaterra, 2007.
Townsend, Larry. Leatherman's Handbook. First edition, 1972.
Tupper, Peter. A Lover's Pinch: A Cultural History of Sadomasochism. Rowman & Littlefield, 2018.
Wiseman, Jay. SM 101: A Realistic Introduction (1st ed., 1992); 2nd ed., Greenery Press, 2000.
Byrne, Romana. Aesthetic Sexuality: A Literary History of Sadomasochism. Bloomsbury, New York, 2013.
Dominari, Rajan. Welcome to the Darkside: A BDSM Primer. AKO Publishing Company, 2019.
External links
"Pain and the erotic" by Lesley Hall (archived 17 December 2008)
Spoken articles
Control (social and political)
Sexual acts | BDSM | ["Biology"] | 19,139 | ["Sexual acts", "Behavior", "Sexuality", "Mating"] |
4,548 | https://en.wikipedia.org/wiki/Blizzard | A blizzard is a severe snowstorm characterized by strong sustained winds and low visibility, lasting for a prolonged period of time—typically at least three or four hours. A ground blizzard is a weather condition where snow that has already fallen is being blown by wind. Blizzards can have an immense size and usually stretch to hundreds or thousands of kilometres.
Definition and etymology
In the United States, the National Weather Service defines a blizzard as a severe snow storm characterized by strong winds causing blowing snow that results in low visibilities. The difference between a blizzard and a snowstorm is the strength of the wind, not the amount of snow. To be a blizzard, a snow storm must have sustained winds or frequent gusts that are greater than or equal to with blowing or drifting snow which reduces visibility to or less and must last for a prolonged period of time—typically three hours or more.
Environment Canada defines a blizzard as a storm with wind speeds exceeding accompanied by visibility of or less, resulting from snowfall, blowing snow, or a combination of the two. These conditions must persist for a period of at least four hours for the storm to be classified as a blizzard, except north of the arctic tree line, where that threshold is raised to six hours.
The Australian Bureau of Meteorology describes a blizzard as "violent and very cold wind which is laden with snow, some part, at least, of which has been raised from snow covered ground."
While severe cold and large amounts of drifting snow may accompany blizzards, they are not required. Blizzards can bring whiteout conditions, and can paralyze regions for days at a time, particularly where snowfall is unusual or rare.
A severe blizzard has winds over , near zero visibility, and temperatures of or lower. In Antarctica, blizzards are associated with winds spilling over the edge of the ice plateau at an average velocity of .
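As the definitions above emphasize, what separates a blizzard from an ordinary snowstorm is the combination of wind, visibility, and duration rather than the amount of snow. The short Python sketch below illustrates that logic only; the numeric thresholds it uses are assumptions standing in for the agency figures (which differ between the National Weather Service and Environment Canada) and are not authoritative values.

# Illustrative sketch only: the numeric thresholds below are assumptions based on
# commonly cited figures, not the authoritative NWS or Environment Canada values.
from dataclasses import dataclass

@dataclass
class StormObservation:
    sustained_wind_kmh: float   # sustained wind or frequent gusts
    visibility_km: float        # prevailing visibility
    duration_hours: float       # how long the conditions persist

def is_blizzard_us(obs, wind_kmh=56.0, vis_km=0.4, min_hours=3.0):
    """US-style test: the defining factors are wind, visibility and duration,
    not snowfall amount (thresholds here are assumed, roughly 35 mph / 1/4 mile)."""
    return (obs.sustained_wind_kmh >= wind_kmh
            and obs.visibility_km <= vis_km
            and obs.duration_hours >= min_hours)

def is_blizzard_canada(obs, north_of_treeline=False, wind_kmh=40.0, vis_km=0.4):
    """Canadian-style test: same idea, but a four-hour minimum duration
    (six hours north of the Arctic tree line), per the description above."""
    min_hours = 6.0 if north_of_treeline else 4.0
    return (obs.sustained_wind_kmh >= wind_kmh
            and obs.visibility_km <= vis_km
            and obs.duration_hours >= min_hours)

# Heavy snow with light wind is a snowstorm, not a blizzard; strong wind plus
# low visibility sustained over several hours is.
calm_snow = StormObservation(sustained_wind_kmh=20, visibility_km=0.8, duration_hours=12)
windy_snow = StormObservation(sustained_wind_kmh=65, visibility_km=0.2, duration_hours=5)
print(is_blizzard_us(calm_snow), is_blizzard_us(windy_snow))   # False True
print(is_blizzard_canada(windy_snow))                          # True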
Ground blizzard refers to a weather condition where loose snow or ice on the ground is lifted and blown by strong winds. The primary difference between a ground blizzard as opposed to a regular blizzard is that in a ground blizzard no precipitation is produced at the time, but rather all the precipitation is already present in the form of snow or ice at the surface.
The Oxford English Dictionary concludes the term blizzard is likely onomatopoeic, derived from the same sense as blow, blast, blister, and bluster; the first recorded use of it for weather dates to 1829, when it was defined as a "violent blow". It achieved its modern definition by 1859, when it was in use in the western United States. The term became common in the press during the harsh winter of 1880–81.
United States storm systems
In the United States, storm systems powerful enough to cause blizzards usually form when the jet stream dips far to the south, allowing cold, dry polar air from the north to clash with warm, humid air moving up from the south.
When cold, moist air from the Pacific Ocean moves eastward to the Rocky Mountains and the Great Plains, and warmer, moist air moves north from the Gulf of Mexico, all that is needed is cold polar air pushing south to create potential blizzard conditions that may extend from the Texas Panhandle to the Great Lakes and Midwest. A blizzard may also form along the boundary where a cold front and a warm front meet.
Another storm system occurs when a cold core low over the Hudson Bay area in Canada is displaced southward over southeastern Canada, the Great Lakes, and New England. When the rapidly moving cold front collides with warmer air coming north from the Gulf of Mexico, strong surface winds, significant cold air advection, and extensive wintry precipitation occur.
Low pressure systems moving out of the Rocky Mountains onto the Great Plains, a broad expanse of flat land, much of it covered in prairie, steppe and grassland, can cause thunderstorms and rain to the south and heavy snows and strong winds to the north. With few trees or other obstructions to reduce wind and blowing, this part of the country is particularly vulnerable to blizzards with very low temperatures and whiteout conditions. In a true whiteout there is no visible horizon: people can become lost in their own front yards, when the door is only a short distance away, and have to feel their way back. Motorists have to stop their cars where they are, as the road is impossible to see.
Nor'easter blizzards
A nor'easter is a macro-scale storm that occurs off the New England and Atlantic Canada coastlines. It gets its name from the direction the wind is coming from. The usage of the term in North America comes from the wind associated with many different types of storms, some of which can form in the North Atlantic Ocean and some of which form as far south as the Gulf of Mexico. The term is most often used in the coastal areas of New England and Atlantic Canada. This type of storm has characteristics similar to a hurricane. More specifically, it describes a low-pressure area whose center of rotation is just off the coast and whose leading winds in the left-forward quadrant rotate onto land from the northeast. High storm waves may sink ships at sea and cause coastal flooding and beach erosion. Notable nor'easters include The Great Blizzard of 1888, one of the worst blizzards in U.S. history. It dropped of snow and had sustained winds of more than that produced snowdrifts in excess of . Railroads were shut down and people were confined to their houses for up to a week. It killed 400 people, mostly in New York.
Historic events
1972 Iran blizzard
The 1972 Iran blizzard, which caused 4,000 reported deaths, was the deadliest blizzard in recorded history. Dropping as much as of snow, it completely covered 200 villages. After a snowfall lasting nearly a week, an area the size of Wisconsin was entirely buried in snow.
2008 Afghanistan blizzard
The 2008 Afghanistan blizzard was a fierce blizzard that struck Afghanistan on 10 January 2008. Temperatures fell to a low of , with up to of snow in the more mountainous regions, killing at least 926 people. It was the third deadliest blizzard in history. The storm also claimed more than 100,000 sheep and goats, and nearly 315,000 cattle died.
The Snow Winter of 1880–1881
The winter of 1880–1881 is widely considered the most severe winter ever known in many parts of the United States.
The initial blizzard in October 1880 brought snowfalls so deep that two-story homes experienced accumulations, as opposed to drifts, up to their second-floor windows. No one was prepared for such deep snow so early in the winter. Farmers from North Dakota to Virginia were caught off guard, with fields unharvested, what grain had been harvested still unmilled, and their suddenly all-important winter stocks of wood fuel only partially collected. By January, train service was almost entirely suspended in the region. Railroads hired scores of men to dig out the tracks, but as soon as they had finished shoveling a stretch of line a new storm arrived, burying it again.
There were no winter thaws and on February 2, 1881, a second massive blizzard struck that lasted for nine days. In towns the streets were filled with solid drifts to the tops of the buildings and tunneling was necessary to move about. Homes and barns were completely covered, compelling farmers to construct fragile tunnels in order to feed their stock.
When the snow finally melted in late spring of 1881, huge sections of the plains experienced flooding. Massive ice jams clogged the Missouri River, and when they broke the downstream areas were inundated. Most of the town of Yankton, in what is now South Dakota, was washed away when the river overflowed its banks after the thaw.
Novelization
Many children—and their parents—learned of "The Snow Winter" through the children's book The Long Winter by Laura Ingalls Wilder, in which the author tells of her family's efforts to survive. The snow arrived in October 1880 and blizzard followed blizzard throughout the winter and into March 1881, leaving many areas snowbound throughout the winter. Accurate details in Wilder's novel include the blizzards' frequency and the deep cold, the Chicago and North Western Railway stopping trains until the spring thaw because the snow made the tracks impassable, the near-starvation of the townspeople, and the courage of her future husband Almanzo and another man, Cap Garland, who ventured out on the open prairie in search of a cache of wheat that no one was even sure existed.
The Storm of the Century
The Storm of the Century, also known as the Great Blizzard of 1993, was a large cyclonic storm that formed over the Gulf of Mexico on March 12, 1993, and dissipated in the North Atlantic Ocean on March 15. It is unique for its intensity, massive size and wide-reaching effect. At its height, the storm stretched from Canada towards Central America, but its main impact was on the United States and Cuba. The cyclone moved through the Gulf of Mexico, and then through the Eastern United States before moving into Canada. Areas as far south as northern Alabama and Georgia received a dusting of snow and areas such as Birmingham, Alabama, received up to with hurricane-force wind gusts and record low barometric pressures. Between Louisiana and Cuba, hurricane-force winds produced high storm surges across northwestern Florida, which along with scattered tornadoes killed dozens of people. In the United States, the storm was responsible for the loss of electric power to over 10 million customers. It is purported to have been directly experienced by nearly 40 percent of the country's population at that time. A total of 310 people, including 10 from Cuba, perished during this storm. The storm cost $6 to $10 billion in damages.
List of blizzards
North America
1700 to 1799
The Great Snow of 1717. A series of four snowstorms between February 27 and March 7, 1717. There were reports of about five feet of snow already on the ground when the first of the storms hit. By the end, there were about ten feet of snow, with some drifts reaching , burying houses entirely. In the colonial era, this storm made travel impossible until the snow simply melted.
Blizzard of 1765. March 24, 1765. Affected area from Philadelphia to Massachusetts. High winds and over of snowfall recorded in some areas.
Blizzard of 1772. "The Washington and Jefferson Snowstorm of 1772". January 26–29, 1772. One of largest D.C. and Virginia area snowstorms ever recorded. Snow accumulations of recorded.
The "Hessian Storm of 1778". December 26, 1778. Severe blizzard with high winds, heavy snows and bitter cold extending from Pennsylvania to New England. Snow drifts reported to be high in Rhode Island. Storm named for stranded Hessian troops in deep snows stationed in Rhode Island during the Revolutionary War.
The Great Snow of 1786. December 4–10, 1786. Blizzard conditions and a succession of three harsh snowstorms produced snow depths of to from Pennsylvania to New England. Reportedly of similar magnitude of 1717 snowstorms.
The Long Storm of 1798. November 19–21, 1798. Heavy snowstorm produced snow from Maryland to Maine.
1800 to 1850
Blizzard of 1805. January 26–28, 1805. Cyclone brought heavy snowstorm to New York City and New England. Snow fell continuously for two days where over of snow accumulated.
New York City Blizzard of 1811. December 23–24, 1811. Severe blizzard conditions reported on Long Island, in New York City, and southern New England. Strong winds and tides caused damage to shipping in harbor.
Luminous Blizzard of 1817. January 17, 1817. In Massachusetts and Vermont, a severe snowstorm was accompanied by frequent lightning and heavy thunder. St. Elmo's fire reportedly lit up trees, fence posts, house roofs, and even people. John Farrar professor at Harvard, recorded the event in his memoir in 1821.
Great Snowstorm of 1821. January 5–7, 1821. Extensive snowstorm and blizzard spread from Virginia to New England.
Winter of Deep Snow in 1830. December 29, 1830. Blizzard dumped in Kansas City and in Illinois. Areas experienced repeated storms through mid-February 1831.
"The Great Snowstorm of 1831" January 14–16, 1831. Produced snowfall over widest geographic area that was only rivaled, or exceeded by, the 1993 Blizzard. Blizzard raged from Georgia, to Ohio Valley, all the way to Maine.
"The Big Snow of 1836" January 8–10, 1836. Produced to of snowfall in interior New York, northern Pennsylvania, and western New England. Philadelphia got a reported and New York City of snow.
1851 to 1900
Plains Blizzard of 1856. December 3–5, 1856. Severe blizzard-like storm raged for three days in Kansas and Iowa. Early pioneers suffered.
"The Cold Storm of 1857" January 18–19, 1857. Produced severe blizzard conditions from North Carolina to Maine. Heavy snowfalls reported in east coast cities.
Midwest Blizzard of 1864. January 1, 1864. Gale-force winds, driving snow, and low temperatures all struck simultaneously around Chicago, Wisconsin and Minnesota.
Plains Blizzard of 1873. January 7, 1873. Severe blizzard struck the Great Plains. Many pioneers from the east were unprepared for the storm and perished in Minnesota and Iowa.
Great Plains Easter Blizzard of 1873. April 13, 1873
Seattle Blizzard of 1880. January 6, 1880. Seattle area's greatest snowstorm to date. An estimated fell around the town. Many barns collapsed and all transportation halted.
The Hard Winter of 1880-81. October 15, 1880. A blizzard in eastern South Dakota marked the beginning of this historically difficult season. Laura Ingalls Wilder's book The Long Winter details the effects of this season on early settlers.
In the three-year period from December 1885 to March 1888, the Great Plains and Eastern United States suffered a series of the worst blizzards in the nation's history, ending with the Schoolhouse Blizzard and the Great Blizzard of 1888. The massive eruption of the volcano Krakatoa in Indonesia in late August 1883 is a suspected cause of these huge blizzards during these several years. The clouds of ash it emitted continued to circulate around the world for many years, weather patterns remained chaotic, and temperatures did not return to normal until 1888. Record rainfall was experienced in Southern California from July 1883 to June 1884. The Krakatoa eruption injected an unusually large amount of sulfur dioxide (SO2) gas high into the stratosphere, which reflected sunlight and helped cool the planet over the next few years until the suspended atmospheric sulfur fell back to the ground.
Plains Blizzard of late 1885. In Kansas, heavy snows of late 1885 had piled drifts high.
Kansas Blizzard of 1886. First week of January 1886. Reported that 80 percent of the cattle were frozen to death in that state alone from the cold and snow.
January 1886 Blizzard. January 9, 1886. Same system as Kansas 1886 Blizzard that traveled eastward.
Great Plains Blizzards of late 1886. On November 13, 1886, it reportedly began to snow and did not stop for a month in the Great Plains region.
Great Plains Blizzard of 1887. January 9–11, 1887. A reported 72-hour blizzard covered parts of the Great Plains in more than of snow. Winds whipped and temperatures dropped to around . Many cattle that were not killed by the cold soon died from starvation. When spring arrived, millions of the animals were dead, with around 90 percent of the open range's cattle rotting where they fell. Those present reported carcasses as far as the eye could see. Dead cattle clogged up rivers and spoiled drinking water. Many ranchers went bankrupt and others simply called it quits and moved back east. The "Great Die-Up" caused by the blizzard effectively concluded the romantic period of the great Plains cattle drives.
Schoolhouse Blizzard of 1888 North American Great Plains. January 12–13, 1888. What made the storm so deadly was the timing (during work and school hours), the suddenness, and the brief spell of warmer weather that preceded it. In addition, the very strong wind fields behind the cold front and the powdery nature of the snow reduced visibilities on the open plains to zero. People ventured from the safety of their homes to do chores, go to town, attend school, or simply enjoy the relative warmth of the day. As a result, thousands of people—including many schoolchildren—got caught in the blizzard.
Great Blizzard of March 1888 March 11–14, 1888. One of the most severe recorded blizzards in the history of the United States. On March 12, an unexpected northeaster hit New England and the mid-Atlantic, dropping up to of snow in the space of three days. New York City experienced its heaviest snowfall recorded to date at that time, all street railcars were stranded, and the storm led to the creation of the NYC subway system. Snowdrifts reached up to the second story of some buildings. Some 400 people died from this blizzard, including many sailors aboard vessels that were beset by gale-force winds and turbulent seas.
Great Blizzard of 1899 February 11–14, 1899. An extremely unusual blizzard in that it reached into the far southern states of the US. It hit in February, and the area around Washington, D.C., experienced 51 hours straight of snowfall. The port of New Orleans was totally iced over; revelers participating in the New Orleans Mardi Gras had to wait for the parade routes to be shoveled free of snow. Concurrent with this blizzard was the extremely cold arctic air. Many city and state record low temperatures date back to this event, including all-time records for locations in the Midwest and South. State record lows: Nebraska reached , Ohio experienced , Louisiana bottomed out at , and Florida dipped below zero to .
1901 to 1939
Great Lakes Storm of 1913 November 7–10, 1913. "The White Hurricane" of 1913 was the deadliest and most destructive natural disaster ever to hit the Great Lakes Basin in the Midwestern United States and the Canadian province of Ontario. It produced wind gusts, waves over high, and whiteout snowsqualls. It killed more than 250 people, destroyed 19 ships, and stranded 19 others.
Blizzard of 1918. January 11, 1918. Vast blizzard-like storm moved through Great Lakes and Ohio Valley.
1920 North Dakota blizzard March 15–18, 1920
Knickerbocker Storm January 27–28, 1922
1940 to 1949
Armistice Day Blizzard of 1940 November 10–12, 1940. Took place in the Midwest region of the United States on Armistice Day. This "Panhandle hook" winter storm cut a path through the middle of the country from Kansas to Michigan. The morning of the storm was unseasonably warm, but by mid-afternoon conditions quickly deteriorated into a raging blizzard that would last into the next day. A total of 145 deaths were blamed on the storm, almost a third of them duck hunters who had taken time off to take advantage of the ideal hunting conditions. Weather forecasters had not predicted the severity of the oncoming storm, and as a result the hunters were not dressed for cold weather. When the storm began, many hunters took shelter on small islands in the Mississippi River, and the winds and waves overcame their encampments. Some became stranded on the islands and froze to death in the single-digit temperatures that moved in overnight. Others tried to make it to shore and drowned.
North American blizzard of 1947 December 25–26, 1947. Was a record-breaking snowfall that began on Christmas Day and brought the Northeast United States to a standstill. Central Park in New York City got of snowfall in 24 hours with deeper snows in suburbs. It was not accompanied by high winds, but the snow fell steadily with drifts reaching . Seventy-seven deaths were attributed to the blizzard.
The Blizzard of 1949 – The first blizzard started on Sunday, January 2, 1949; it lasted for three days. It was followed by two more months of blizzard after blizzard, with high winds and bitter cold. Deep drifts isolated southeast Wyoming, northern Colorado, western South Dakota and western Nebraska for weeks. Railroad tracks and roads were all drifted in with drifts of and more. Hundreds of people traveling on trains were stranded. Motorists who had set out on January 2 found their way to private farm homes in rural areas and to hotels and other buildings in towns; some dwellings were so crowded that there was not enough room for all to sleep at once. It would be weeks before they were plowed out. The Federal government quickly responded with aid, airlifting food and hay for livestock. The total rescue effort involved numerous volunteers and local agencies plus at least ten major state and federal agencies, from the U.S. Army to the National Park Service. Private businesses, including railroad and oil companies, also lent manpower and heavy equipment to the work of plowing out. The official death toll was 76 people and one million livestock.
1950 to 1959
Great Appalachian Storm of November 1950 November 24–30, 1950
March 1958 Nor'easter blizzard March 18–21, 1958.
The Mount Shasta California Snowstorm of 1959 – The storm dumped of snow on Mount Shasta. The bulk of the snow fell on unpopulated mountainous areas, barely disrupting the residents of the Mount Shasta area. The amount of snow recorded is the largest snowfall from a single storm in North America.
1960 to 1969
March 1960 Nor'easter blizzard March 2–5, 1960
December 1960 Nor'easter blizzard December 12–14, 1960. Wind gusts up to .
March 1962 Nor'easter, the Great March Storm of 1962 – Ash Wednesday. North Carolina and Virginia blizzards. Struck during the spring high-tide season and remained mostly stationary for almost 5 days, causing significant damage along the eastern coast; Assateague Island was under water, and the storm dumped of snow in Virginia.
North American blizzard of 1966 January 27–31, 1966
Chicago Blizzard of 1967 January 26–27, 1967
February 1969 nor'easter February 8–10, 1969
March 1969 Nor'easter blizzard March 9, 1969
December 1969 Nor'easter blizzard December 25–28, 1969.
1970 to 1979
The Great Storm of 1975 known as the "Super Bowl Blizzard" or "Minnesota's Storm of the Century". January 9–12, 1975. Wind chills of to recorded, deep snowfalls.
Groundhog Day gale of 1976 February 2, 1976
Buffalo Blizzard of 1977 January 28 – February 1, 1977. There were several feet of packed snow already on the ground, and the blizzard brought with it enough snow to reach Buffalo's record for the most snow in one season – .
Great Blizzard of 1978 also called the "Cleveland Superbomb". January 25–27, 1978. Was one of the worst snowstorms the Midwest has ever seen. Wind gusts approached , causing snowdrifts to reach heights of in some areas, making roadways impassable. Storm reached maximum intensity over southern Ontario Canada.
Northeastern United States Blizzard of 1978 – February 6–7, 1978. Just one week following the Cleveland Superbomb blizzard, New England was hit with its most severe blizzard in 90 years since 1888.
Chicago Blizzard of 1979 January 13–14, 1979
1980 to 1989
February 1987 Nor'easter blizzard February 22–24, 1987
1990 to 1999
1991 Halloween blizzard Upper Mid-West US, October 31 – November 3, 1991
December 1992 Nor'easter blizzard December 10–12, 1992
1993 Storm of the Century March 12–15, 1993. While the southern and eastern U.S. and Cuba received the brunt of this massive blizzard, the Storm of the Century impacted a wider area than any in recorded history.
February 1995 Nor'easter blizzard February 3–6, 1995
Blizzard of 1996 January 6–10, 1996
April Fool's Day Blizzard March 31 – April 1, 1997. US East Coast
1997 Western Plains winter storms October 24–26, 1997
Mid West Blizzard of 1999 January 2–4, 1999
2000 to 2009
January 25, 2000 Southeastern United States winter storm January 25, 2000. North Carolina and Virginia
December 2000 Nor'easter blizzard December 27–31, 2000
North American blizzard of 2003 February 14–19, 2003 (Presidents' Day Storm II)
December 2003 Nor'easter blizzard December 6–7, 2003
North American blizzard of 2005 January 20–23, 2005
North American blizzard of 2006 February 11–13, 2006
Early winter 2006 North American storm complex Late November 2006
Colorado Holiday Blizzards (2006–07) December 20–29, 2006 Colorado
February 2007 North America blizzard February 12–20, 2007
January 2008 North American storm complex January, 2008 West Coast US
North American blizzard of 2008 March 6–10, 2008
2009 Midwest Blizzard 6–8 December 2009, a bomb cyclogenesis event that also affected parts of Canada
North American blizzard of 2009 December 16–20, 2009
2009 North American Christmas blizzard December 22–28, 2009
2010 to 2019
February 5–6, 2010 North American blizzard February 5–6, 2010. Referred to at the time as "Snowmageddon", it was a Category 3 ("major") nor'easter and severe weather event.
February 9–10, 2010 North American blizzard February 9–10, 2010
February 25–27, 2010 North American blizzard February 25–27, 2010
October 2010 North American storm complex October 23–28, 2010
December 2010 North American blizzard December 26–29, 2010
January 31 – February 2, 2011 North American blizzard January 31 – February 2, 2011. Groundhog Day Blizzard of 2011
2011 Halloween nor'easter October 28 – Nov 1, 2011
Hurricane Sandy October 29–31, 2012. West Virginia, western North Carolina, and southwest Pennsylvania received heavy snowfall and blizzard conditions from this hurricane
November 2012 nor'easter November 7–10, 2012
December 17–22, 2012 North American blizzard December 17–22, 2012
Late December 2012 North American storm complex December 25–28, 2012
February 2013 nor'easter February 7–20, 2013
February 2013 Great Plains blizzard February 19 – March 6, 2013
March 2013 nor'easter March 6, 2013
October 2013 North American storm complex October 3–5, 2013
Buffalo, NY blizzard of 2014. Buffalo got over of snow during November 18–20, 2014.
January 2015 North American blizzard January 26–27, 2015
Late December 2015 North American storm complex December 26–27, 2015. One of the most notorious blizzards ever reported in New Mexico and West Texas. It had sustained winds of over and continuous snowfall that lasted over 30 hours. Dozens of vehicles were stranded on small county roads in the areas of Hobbs, Roswell, and Carlsbad, New Mexico. Strong sustained winds destroyed various mobile homes.
January 2016 United States blizzard January 20–23, 2016
February 2016 North American storm complex February 1–8, 2016
February 2017 North American blizzard February 6–11, 2017
March 2017 North American blizzard March 9–16, 2017
Early January 2018 nor’easter January 3–6, 2018
March 2019 North American blizzard March 8–16, 2019
April 2019 North American blizzard April 10–14, 2019
2020 to present
December 5–6, 2020 nor'easter December 5–6, 2020
January 31 – February 3, 2021 nor'easter January 31 – February 3, 2021
February 13–17, 2021 North American winter storm February 13–17, 2021
March 2021 North American blizzard March 11–14, 2021
January 2022 North American blizzard January 27–30, 2022
December 2022 North American winter storm December 21–26, 2022
March 2023 North American winter storm March 12–15, 2023
January 8–10, 2024 North American storm complex January 8–10, 2024
January 10–13, 2024 North American storm complex January 10–13, 2024
January 5–6, 2025 United States blizzard January 5–6, 2025
January 20–22, 2025 Gulf Coast blizzard January 20–22, 2025
Canada
The Eastern Canadian Blizzard of 1971 – Dumped a foot and a half (45.7 cm) of snow on Montreal and more than elsewhere in the region. The blizzard caused the cancellation of a Montreal Canadiens hockey game for the first time since 1918.
Saskatchewan blizzard of 2007 – January 10, 2007, Canada
United Kingdom
Great Frost of 1709
Blizzard of January 1881
Winter of 1894–95 in the United Kingdom
Winter of 1946–1947 in the United Kingdom
Winter of 1962–1963 in the United Kingdom
January 1987 Southeast England snowfall
Winter of 1990–91 in Western Europe
February 2009 Great Britain and Ireland snowfall
Winter of 2009–10 in Great Britain and Ireland
Winter of 2010–11 in Great Britain and Ireland
Early 2012 European cold wave
Other locations
1954 Romanian blizzard
1972 Iran blizzard
Winter of 1990–1991 in Western Europe
July 2007 Argentine winter storm
2008 Afghanistan blizzard
2008 Chinese winter storms
Winter storms of 2009–2010 in East Asia
See also
Cold wave
Lake-effect snow
Nor'easter
European windstorm
Whiteout (weather)
Blowing snow advisory
Ground blizzard
Severe weather terminology (Canada)
Snowsquall
Blowing snow
List of blizzards
References
External links
Digital Snow Museum Photos of historic blizzards and snowstorms.
Farmers Almanac List of Worst Blizzards in the United States
United States Search and Rescue Task Force: About Blizzards
A Historical Review On The Origin and Definition of the Word Blizzard Dr Richard Wild
Snow or ice weather phenomena
Storm
Weather hazards
Hazards of outdoor recreation | Blizzard | ["Physics"] | 5,989 | ["Weather", "Physical phenomena", "Weather hazards"] |
4,571 | https://en.wikipedia.org/wiki/Baku%20%28mythology%29 | Baku are Japanese supernatural beings that are said to devour nightmares. They originate from the Chinese Mo. According to legend, they were created from the spare pieces that were left over when the gods finished creating all other animals. They have a long history in Japanese folklore and art, and more recently have appeared in manga and anime.
The Japanese term baku has two current meanings, referring to both the traditional dream-devouring creature and to the Malayan tapir. In recent years, there have been changes in how the baku is depicted.
History and description
The traditional Japanese nightmare-devouring baku originates in Chinese folklore from the mo 貘 (giant panda) and was familiar in Japan as early as the Muromachi period (14th–15th century). Hori Tadao has described the dream-eating abilities attributed to the traditional baku and relates them to other preventatives against nightmare such as amulets. Kaii-Yōkai Denshō Database, citing a 1957 paper, and Mizuki also describe the dream-devouring capacities of the traditional baku.
Before its adaptation as the Japanese dream-devouring creature, an early 17th-century Japanese manuscript, the Sankai Ibutsu, describes the baku as a shy, Chinese mythical chimera with the trunk and tusks of an elephant, the ears of a rhinoceros, the tail of a cow, the body of a bear and the paws of a tiger, which protected against pestilence and evil, although eating nightmares was not included among its abilities. However, in a 1791 Japanese wood-block illustration, a specifically dream-destroying baku is depicted with an elephant's head, tusks, and trunk, with horns and tiger's claws. The elephant's head, trunk, and tusks are characteristic of baku portrayed in classical-era (pre-Meiji) Japanese wood-block prints and in shrine, temple, and netsuke carvings.
Writing in the Meiji period, Lafcadio Hearn (1902) described a baku with very similar attributes that was also able to devour nightmares.
Legend has it that a person who wakes up from a bad dream can call out to baku. A child having a nightmare in Japan will wake up and repeat three times, "Baku-san, come eat my dream." Legends say that the baku will come into the child's room and devour the bad dream, allowing the child to go back to sleep peacefully. However, calling to the baku must be done sparingly, because if he remains hungry after eating one's nightmare, he may devour their hopes and desires as well, leaving them to live an empty life. The baku can also be summoned for protection from bad dreams prior to falling asleep at night. In the 1910s, it was common for Japanese children to keep a baku talisman at their bedside.
See also
Dreamcatcher
References
Bibliography
Kaii-Yōkai Denshō Database. International Research Center for Japanese Studies. Retrieved on 2007-05-12. (Summary of excerpt from Warui Yume o Mita Toki (When You've Had a Bad Dream?) by Keidō Matsushita, published in volume 5 of the journal Shōnai Minzoku (Shōnai Folk Customs) on June 15, 1957).
External links
Baku – The Dream Eater at hyakumonogatari.com (English).
Netsuke: masterpieces from the Metropolitan Museum of Art, an exhibition catalog from The Metropolitan Museum of Art (fully available online as PDF), which contains many representations of Baku
Chinese legendary creatures
Japanese legendary creatures
Legendary mammals
Mythological hybrids
Fictional tapirs
Sleep in mythology and folklore | Baku (mythology) | ["Biology"] | 749 | ["Behavior", "Sleep", "Sleep in mythology and folklore"] |
4,580 | https://en.wikipedia.org/wiki/Battery%20Park%20City | Battery Park City is a mainly residential planned community and neighborhood on the west side of the southern tip of the island of Manhattan in New York City. It is bounded by the Hudson River on the west, the Hudson River shoreline on the north and south, and the West Side Highway on the east. The neighborhood is named for the Battery, formerly known as Battery Park, located directly to the south.
More than one-third of the development is parkland. The land upon which it is built was created in the 1970s by land reclamation on the Hudson River using over of soil and rock excavated during the construction of the World Trade Center, the New York City Water Tunnel, and certain other construction projects, as well as from sand dredged from New York Harbor off Staten Island. The neighborhood includes Brookfield Place (formerly the World Financial Center), along with numerous buildings designed for housing, commercial, and retail.
Battery Park City is part of Manhattan Community District 1. It is patrolled by the 1st Precinct of the New York City Police Department.
Geography
Battery Park City is bounded on the east by West Street, which separates the area from the Financial District of Lower Manhattan. To the west, north, and south, the area is surrounded by the Hudson River.
The development consists of roughly five major sections. Traveling north to south, the first neighborhood has high-rise residential buildings, the Stuyvesant High School, a Regal Entertainment Group movie theater, and the Battery Park City branch of the New York Public Library. It is also the site of the 463-suite Conrad New York luxury hotel, which has a ballroom and a conference center. Other restaurants located in that hotel, as well as a DSW store and a New York Sports Club branch, were closed in 2009 after the takeover of the property by Goldman Sachs. Former undeveloped lots in the area have been developed into high-rise buildings; for example, Goldman Sachs built a new headquarters at 200 West Street.
Nearby is Brookfield Place, a complex of several commercial buildings formerly known as the World Financial Center.
Current residential neighborhoods of Battery Park City are divided into northern and southern sections, separated by Brookfield Place. The northern section consists entirely of large, 20–45-story buildings, all various shades of orange brick. The southern section, extending down from the Winter Garden, which is located in Brookfield Place, contains residential apartment buildings such as Gateway Plaza and the Rector Place apartment buildings. This section holds the majority of Battery Park City's residential areas, in three parts: Gateway Plaza, a high-rise building complex; the "Rector Place Residential Neighborhood"; and the "Battery Place Residential Neighborhood". These subsections contain most of the area's residential buildings, along with park space, supermarkets, restaurants, and movie theaters. Construction of residential buildings began north of the World Financial Center in the late 1990s, and completion of the final lots took place in early 2011. Additionally, a park restoration was completed in 2013.
History
Site and formation
Throughout the 19th century and early-20th century, the area adjoining today's Battery Park City was known as Little Syria, home to Lebanese, Greek, Armenian, and other immigrant communities. In 1929, the land was the proposed site of a $50 million residential development that would have served workers in the Wall Street area. The Battery Tower project was left unfinished after workers digging the foundation ran into forty feet of old bulkheads, sunken docks, and ships.
By the late-1950s, the once-prosperous port area of downtown Manhattan was occupied by a number of dilapidated shipping piers, casualties of the rise of container shipping which drove sea traffic to Port Elizabeth, New Jersey. The initial proposal to reclaim this area through landfill was offered in the early-1960s by private firms and supported by the mayor, part of a long history of Lower Manhattan expansion. That plan became complicated when Governor Nelson Rockefeller announced his desire to redevelop a part of the area as a separate project. The various groups reached a compromise, and in 1966 the governor unveiled the proposal for what would become Battery Park City. The creation of architect Wallace K. Harrison, the proposal called for a 'comprehensive community' consisting of housing, social infrastructure and light industry.
In 1968, the New York State Legislature created the Battery Park City Authority (BPCA) to oversee development. Rockefeller named Charles J. Urstadt as the first chairman of the authority's board that year. He then served as the chief executive officer from 1973 to 1978. Urstadt later served as the authority's vice chair from 1996 to 2010. The New York State Urban Development Corporation and ten other public agencies were also involved in the development project. For the next several years, the BPCA made slow progress. In April 1969, it unveiled a master plan for the area, which was approved in October. In early-1972, the BPCA issued $200 million in bonds to fund construction efforts, with Harry B. Helmsley designated as the developer. That same year, the city approved plans to alter the number of apartments designated for lower, middle and upper income renters. Urstadt said the changes were needed to make the financing for the project viable. In addition to the change in the mix of units, the city approved adding nine acres, which extended the northern boundary from Reade Street to Duane Street.
Landfill material from construction of the World Trade Center and other buildings in Lower Manhattan was used to add fill for the southern portion. Cellular cofferdams were constructed to retain the material. After removal of the piers, wooden piles and overburden of silt, the northern portion (north of, and including the marina) was filled with sand dredged from areas adjacent to Ambrose Channel in the Atlantic Ocean, as well as stone from the construction of Water Tunnel #3. By 1976, the landfill was completed. Seating stands for viewing the American Bicentennial "Operation Sail" flotilla parade were set up on the completed landfill in July 1976. Construction efforts ground to a halt in 1977, as a result of the city's fiscal crisis. That year, the presidential administration of Jimmy Carter approved mortgage insurance for 1,600 of the development's proposed units. In 1979, the title to the landfill was transferred from the city to the Battery Park City Authority, which financially restructured itself and created a new, more viable master plan, designed by Alex Cooper of Cooper, Robertson & Partners and Stanton Eckstut. By that time, only two of the proposed development's buildings had been built, and the $200 million bond issue was supposed to have been paid off the next year.
The design of BPC to some degree reflects the values of vibrant city neighborhoods championed by Jane Jacobs. The Urban Land Institute (ULI) awarded the Battery Park City Master Plan its 2010 Heritage Award, for having "facilitated the private development of of commercial space, of residential space, and nearly of open space in lower Manhattan, becoming a model for successful large-scale planning efforts and marking a positive shift away from the urban renewal mindset of the time."
Construction and early development
During the late-1970s and early-1980s, the site hosted Creative Time's landmark Art on the Beach sculpture exhibitions. On September 23, 1979, the landfill was the site of an anti-nuclear rally attended by 200,000 people.
In 1978, a temporary heliport operated by the Port Authority of New York and New Jersey opened at the southern end of the landfill and was initially used by New York Airways helicopters providing scheduled service to Kennedy, LaGuardia and Newark airports. The helicopter landing pad later accommodated flights diverted from the Downtown Manhattan Heliport while that facility was closed for reconstruction from 1983 to 1987. The Battery Park City Heliport was located on the south side of the future site of the Museum of Jewish Heritage.
Construction began on the first residential building in June 1980. In April 1981, the New York State Urban Development Corporation (now the Empire State Development Corporation) issued a request for proposal, ultimately selecting six real-estate companies to develop over 1,800 residential units. The same year, the World Financial Center started construction; Olympia and York of Toronto was named as the developer for the World Financial Center, who then hired Cesar Pelli as the lead architect. By 1985, construction was completed and the World Financial Center (later renamed Brookfield Place New York) saw its first tenants. The newly completed development was lauded by The New York Times as "a triumph of urban design", with the World Financial Center being deemed "a symbol of change".
During early construction, two acres of land in the southern section of the Battery Park landfill was used by artist Agnes Denes to plant wheat in an exhibition titled Wheatfield – A Confrontation. The project was a visual contradiction: a golden field of wheat set among the steel skyscrapers of downtown Manhattan. It was created during a six-month period in the spring, summer, and fall of 1982 when Denes, with the support of the Public Art Fund, planted the field of wheat on rubble-strewn land near Wall Street and the World Trade Center site. Denes stated that her "decision to plant a wheatfield in Manhattan, instead of designing just another public sculpture, grew out of a long-standing concern and need to call attention to our misplaced priorities and deteriorating human values."
Throughout the 1980s, the BPCA oversaw a great deal of construction, including the entire Rector Place neighborhood and the river esplanade. It was during that period that Amanda Burden, later City Planning Department Director in the Bloomberg administration, worked on Battery Park City. During the 1980s, a total of 13 buildings were constructed. The Vietnam Veterans Plaza was established by Edward I. Koch in 1985. In 1992, Battery Park City became the new home of Stuyvesant High School, whose building was constructed at a cost of $150 million and has a capacity of 2,700 students. During the 1990s, an additional six buildings were added to the neighborhood. By the turn of the 21st century, Battery Park City was mostly completed, with the exception of some ongoing construction on West Street.
Initially, in the 1980s, 23 buildings were built in the area. By the 1990s, 9 more buildings were built, followed by the construction of 11 buildings in the 2000s and 3 buildings in the 2010s. The Battery Park City Authority, wishing to attract more middle-class residents, started providing subsidies in 1998 to households whose annual incomes were $108,000 or less. By the end of the decade, nearly the entire landfill had been developed.
Early 21st century
The September 11 terrorist attacks in 2001 had a major impact on Battery Park City. The residents of Lower Manhattan, and particularly of Battery Park City, were displaced for an extended period of time. Parts of the community were an official crime scene, and residents were therefore unable to return to live there or even to collect their property. Many of the displaced residents were not allowed to return to the area for months, and none were given government guidance on where to live temporarily on the already-crowded island of Manhattan. With most hotel rooms booked, residents, including young children and the elderly, were forced to fend for themselves. When they were finally allowed to return to Battery Park City, some found that their homes had been looted.
Upon residents' return, the air in the area was still filled with toxic smoke from the World Trade Center fires that persisted until December 2001. More than half of the area's residents moved away permanently from the community after the adjacent World Trade Center towers collapsed and spread toxic dust, debris, and smoke. Gateway Plaza's 600 building, Hudson View East, and Parc Place (now Rector Square) were punctured by airplane parts. The Winter Garden and other portions of the World Financial Center were severely damaged. Environmental concerns regarding dust from the Trade Center are a continuing source of concern for many residents, scientists, and elected officials. Since the attacks, the damage has been repaired. Temporarily reduced rents and government subsidies helped restore residential occupancy in the years following the attacks.
After September 11, 2001, residents of Battery Park City and Tribeca formed the TriBattery Pops, conducted by Tom Goodkind, in response to the attacks. The "Pops" have been Grammy-nominated and are the first lower Manhattan all-volunteer community band in a century.
Since then, real estate development in the area has continued robustly. Commercial development includes 200 West Street, the Goldman Sachs global headquarters, which began construction in 2005 and opened for occupancy in October 2009. In 2010, 200 West Street received gold-level certification under the United States Green Building Council's Leadership in Energy and Environmental Design (LEED) program by incorporating various water and energy conservation features. As of 2018, there is no new construction planned.
Ownership and maintenance
Battery Park City is owned and managed by the Hugh L. Carey Battery Park City Authority (BPCA), a Class A New York State public-benefit corporation created by New York State in 1968 to redevelop outmoded and deteriorated piers, a project that has involved reclaiming the land, replanning the area and facilitating new construction of a mixed commercial and residential community. It has operated under the authority of the Urban Development Corporation. Its mission is "to plan, create, coordinate and sustain a balanced community of commercial, residential, retail, and park space within its designated 92-acre site on the lower west side of Manhattan". The authority's board is composed of seven uncompensated members who are appointed by the governor and who serve six-year terms. Raju Mann is the president and chief executive officer. The BPCA is invested with substantial powers: it can acquire, hold and dispose of real property, enter into lease agreements, borrow money and issue debt, and manage the project. Like other public benefit corporations, the BPCA is exempt from property taxes and has the ability to issue tax exempt bonds. In 2021, the BPCA had operating expenses of $69.1 million as well as an outstanding debt of $875.09 million, and it employed 200 people.
Under the 1989 agreement between the BPCA and the City of New York, $600 million was transferred by the BPCA to the city. Charles J. Urstadt, the first chairman and CEO of the BPCA, noted in an August 19, 2007, op-ed piece in the New York Post that the aggregate figure of funds transferred to the City of New York is above $1.4 billion, with the BPCA continuing to contribute $200 million a year. The Independent Budget Office of the City of New York also recommended the city take over Battery Park City in a report published in February 2020. The report echoed Urstadt's proposal as a way to increase revenue to the city. An article published by The Broadsheet Daily described the complex shared ownership structure of Battery Park City between the city and state that was set up by Urstadt.
Excess revenue from the area was to be contributed to other housing efforts, typically low-income projects in the Bronx and Harlem. Much of this funding has historically been diverted to general city expenses, under section 3.d of the 1989 agreement. However, in July 2006, Mayor Michael Bloomberg, Governor George Pataki, and Comptroller William C. Thompson Jr. announced the final approval for the New York City Housing Trust Fund derived from $130 million in Battery Park City revenues. The fund aimed to preserve or create 4,300 units of low- and moderate-income housing by 2009. It also provided seed financing for the New York Acquisition Fund, a $230 million initiative that aims to serve as a catalyst for the construction and preservation of more than 30,000 units of affordable housing citywide by 2016. The Acquisition Fund has since established itself as a model for similar funds in cities and states across the country.
By 2018, thirty residential buildings had been built in Battery Park City and no new construction was planned. The Battery Park City Authority's main focus turned to maintenance of existing infrastructure, security and conservancy of the public spaces. The authority was creating over 1,000 free activities per year.
Condo owners in Battery Park City pay higher monthly charges than owners of comparable apartments elsewhere in New York City because residents pay their building's common charges in addition to PILOT (payment in lieu of taxes). The PILOT payments replace real estate taxes and the land lease. The cumulative effect is lower property values for homeowners.
Because none of the properties in Battery Park City own the land they are built on, many banks have refused to write loans when those ground leases are periodically up for renewal. This has been a regular source of anger and frustration for owners in Battery Park City who are looking to sell.
Demographics
For census purposes, the New York City government classifies Battery Park City as part of a larger neighborhood tabulation area called Battery Park City-Lower Manhattan. Based on data from the 2010 United States census, the population of Battery Park City-Lower Manhattan was 39,699, an increase of 19,611 (97.6%) from the 20,088 counted in 2000. Covering an area of , the neighborhood had a population density of . The racial makeup of the neighborhood was 65.4% (25,965) White, 3.2% (1,288) African American, 0.1% (35) Native American, 20.2% (8,016) Asian, 0.0% (17) Pacific Islander, 0.4% (153) from other races, and 3.0% (1,170) from two or more races. Hispanic or Latino of any race were 7.7% (3,055) of the population.
The entirety of Community District 1, which comprises Battery Park City and other Lower Manhattan neighborhoods, had 63,383 inhabitants as of NYC Health's 2018 Community Health Profile, with an average life expectancy of 85.8 years. This is higher than the median life expectancy of 81.2 for all New York City neighborhoods. Most inhabitants are young to middle-aged adults: half (50%) are between the ages of 25 and 44, while 14% are between 0 and 17, and 18% between 45 and 64. The ratio of college-aged and elderly residents was lower, at 11% and 7% respectively.
As of 2017, the median household income in Community Districts 1 and 2 (including Greenwich Village and SoHo) was $144,878, though the median income in Battery Park City individually was $126,771. In 2018, an estimated 9% of Battery Park City and Lower Manhattan residents lived in poverty, compared to 14% in all of Manhattan and 20% in all of New York City. One in twenty-five residents (4%) were unemployed, compared to 7% in Manhattan and 9% in New York City. Rent burden, or the percentage of residents who have difficulty paying their rent, is 38% in Battery Park City and Lower Manhattan, compared to the boroughwide and citywide rates of 45% and 51% respectively. Based on this calculation, Battery Park City and Lower Manhattan are considered high-income relative to the rest of the city and not gentrifying.
, about 10,000 people live in Battery Park City, most of whom are upper middle class and upper class (54.0% of households have incomes over $100,000). When fully built out, the neighborhood is projected to have 14,000 residents.
Census
Based on the 2020 census, the racial makeup of Northern Battery Park City (10282) was 66% White, 2% Black, 0% Native American, 16% Asian, 0% Islander, 0% from other races, and 5% from two or more races. Hispanic or Latino of any race were 11% of the population. The racial makeup of South Battery Park City (10280) was 69% White, 1% Black, 0% Native, 17% Asian, 0% Islander, 0% from other races, 3% from two or more races, and 11% Hispanic.
As of 2020, the population of the area was 16,169.
Buildings
Residential
The first residential building in Battery Park City, Gateway Plaza, was completed in 1983. , the population of the area was 13,386. Some of the more prominent residential buildings include:
Millennium Point, a , 38-story skyscraper built from 1999 to 2001. It occupies the street addresses 25–39 Battery Place. However, due to the September 11 attacks which hit the nearby World Trade Center, opening of Millennium Point was delayed until January 2002. The building won the 2001 Silver Emporis Skyscraper Award. The tower section contains 113 luxury condominiums. The wider, lower 12 floors are occupied by a 5-star hotel, The Wagner at the Battery (formerly the Ritz-Carlton Battery Park). The hotel has 298 rooms, including 44 suites, with the largest suite spanning in area. The Skyscraper Museum occupies a small space on the first floor of the building. A restaurant is located on the 14th floor.
The Solaire, the first green residential building in the United States, as well as the first residential high-rise building in New York City to be certified by the U.S. Green Building Council. It was designed by Pelli Clarke Pelli and completed in 2003. The Solaire is located at 20 River Terrace. The developer received funding from the State of New York, which was somewhat controversial as the developer was only required to agree to set aside 10% of the units as "affordable housing" or "moderate income", rather than the usual 80:20 agreement. When the building opened, rents ranged from roughly $2,500 to $9,001 depending on the size of the unit. The building has been rated LEED Platinum. The energy-conserving building design is 35% more energy-efficient than code requires and cuts electricity demand during peak hours by 67%, which, among other benefits, lowers electric bills for residents. Photovoltaic panels convert sunlight to electricity, supplemented by a computerized building management system and environmentally responsible operating and maintenance practices to further reduce the building's environmental impact.
Other residential condominiums include:
Battery Pointe, 300 Rector Place
Cove Club, 2 South End Avenue
Hudson Tower, 350 Albany Street
Hudson View East, 250 South End Avenue
Hudson View West, 300 Albany Street
Liberty Court, 200 Rector Place
Liberty Green, 300 North End Avenue
Liberty House, 377 Rector Place
Liberty Luxe, 200 North End Avenue
Liberty Terrace, 380 Rector Place
Liberty View, 99 Battery Place
Millennium Tower Residences, 30 West Street
The Regatta, 21 South End Avenue
Ritz Carlton Residence, 10 West Street
Riverhouse, One Rockefeller Park
The Soundings, 280 Rector Place
The Visionaire, 70 Little West Street
1 Rector Park, 333 Rector Place
Other residential apartments include:
212 Warren (formerly 22 River Terrace)
Gateway Plaza, 345-395 South End Avenue
The Hallmark, 455 North End Avenue
Rector Square, 225 Rector Place
River Watch, 70 Battery Place
The Solaire, 20 River Terrace
South Cove Plaza, 50 Battery Place
Tribeca Bridge Tower, 450 North End Avenue
Tribeca Green, 325 North End Avenue
Tribeca Park, 400 Chambers Street
Tribeca Pointe, River Terrace
The Verdesian, 211 North End Avenue
Office
Battery Park City, which is mainly residential, also has a few office buildings. Seven buildings, comprising the Brookfield Place complex and 200 West Street, are the neighborhood's only office buildings.
Brookfield Place complex
Located in the middle of Battery Park City and overlooking the Hudson River, Brookfield Place, designed by César Pelli and owned mostly by Toronto-based Brookfield Properties, has been home to offices of various major companies, including Merrill Lynch, RBC Capital Markets, Nomura Group, American Express and Brookfield Asset Management, among others. Brookfield Place also serves as the United States headquarters for Brookfield Properties, which has its headquarters at 200 Vesey Street. Brookfield Place also has its own ZIP Code, 10281.
Brookfield Place's ground floor and portions of the second floor are occupied by a mall; its center point is a steel-and-glass atrium known as the Winter Garden. Outside of the Winter Garden lies a sizeable yacht harbor on the Hudson known as North Cove.
The building's original developer was Olympia and York of Toronto, Ontario. It used to be named the World Financial Center, but in 2014, the complex was given its current name following the completion of extensive renovations. The World Financial Center complex was built by Olympia and York between 1982 and 1988; it was damaged in the September 11 attacks but later repaired. It has six constituent buildings – 200 Liberty Street, 225 Liberty Street, 200 Vesey Street, 250 Vesey Street, the Winter Garden Atrium, and One North End Avenue (a.k.a. the New York Mercantile Exchange building).
200 West Street
200 West Street is the location of the global headquarters of Goldman Sachs, an investment banking firm. A , 44-story building located on the west side of West Street between Vesey and Murray Streets, it is north of Brookfield Place and the Conrad Hotels, across the street from the Verizon Building, and diagonally opposite the World Trade Center. It is distinctive for being the only office building in the northern section of Battery Park City. It started construction in 2005 and opened in 2009.
Police and crime
Battery Park City and Lower Manhattan are patrolled by the 1st Precinct of the NYPD, located at 16 Ericsson Place. The 1st Precinct ranked 63rd safest out of 69 patrol areas for per-capita crime in 2010. Though the number of crimes is low compared to other NYPD precincts, the residential population is also much lower. , with a non-fatal assault rate of 24 per 100,000 people, Battery Park City and Lower Manhattan's rate of violent crimes per capita is less than that of the city as a whole. The incarceration rate of 152 per 100,000 people is lower than that of the city as a whole.
The 1st Precinct has a lower crime rate than in the 1990s, with crimes across all categories having decreased by 86.3% between 1990 and 2018. The 1st precinct reported 2 murders, 15 rapes, 135 robberies, 121 felony assaults, 191 burglaries, 848 grand larcenies, and 68 grand larcenies auto in 2021.
Fire safety
Battery Park City is served by the New York City Fire Department (FDNY)'s Engine Co. 10/Ladder Co. 10 fire station, located at 124 Liberty Street.
Health
, preterm births and births to teenage mothers are less common in Battery Park City and Lower Manhattan than in other places citywide. In Battery Park City and Lower Manhattan, there were 77 preterm births per 1,000 live births (compared to 87 per 1,000 citywide), and 2.2 teenage births per 1,000 live births (compared to 19.3 per 1,000 citywide), though the teenage birth rate is based on a small sample size. Battery Park City and Lower Manhattan have a low population of residents who are uninsured. In 2018, this population of uninsured residents was estimated to be 4%, less than the citywide rate of 12%, though this was based on a small sample size.
The concentration of fine particulate matter, the deadliest type of air pollutant, in Battery Park City and Lower Manhattan is , more than the city average. Sixteen percent of Battery Park City and Lower Manhattan residents are smokers, which is more than the city average of 14% of residents being smokers. In Battery Park City and Lower Manhattan, 4% of residents are obese, 3% are diabetic, and 15% have high blood pressure, the lowest rates in the city—compared to the citywide averages of 24%, 11%, and 28% respectively. In addition, 5% of children are obese, the lowest rate in the city, compared to the citywide average of 20%.
Ninety-six percent of residents eat some fruits and vegetables every day, which is more than the city's average of 87%. In 2018, 88% of residents described their health as "good", "very good", or "excellent", more than the city's average of 78%. For every supermarket in Battery Park City and Lower Manhattan, there are 6 bodegas.
The nearest major hospital is NewYork-Presbyterian Lower Manhattan Hospital in the Civic Center area.
Post office and ZIP Codes
Battery Park City is located within two ZIP Codes. The neighborhood north of Brookfield Place is covered by 10282, while much of the neighborhood south of Brookfield Place is covered by 10280. Brookfield Place is part of 10281, and the southernmost tip is part of 10004. The United States Postal Service does not operate any post offices in Battery Park City. The nearest post office is the Church Street Station at 90 Church Street in the Financial District.
Education
Battery Park City and Lower Manhattan generally have a higher rate of college-educated residents than the rest of the city. The vast majority of residents age 25 and older (84%) have a college education or higher, while 4% have less than a high school education and 12% are high school graduates or have some college education. By contrast, 64% of Manhattan residents and 43% of city residents have a college education or higher. The percentage of Battery Park City and Lower Manhattan students excelling in math rose from 61% in 2000 to 80% in 2011, and reading achievement increased from 66% to 68% during the same time period.
Battery Park City and Lower Manhattan's rate of elementary school student absenteeism is lower than the rest of New York City. In Battery Park City and Lower Manhattan, 6% of elementary school students missed twenty or more days per school year, less than the citywide average of 20%. Additionally, 96% of high school students in Battery Park City and Lower Manhattan graduate on time, more than the citywide average of 75%.
Schools
The New York City Department of Education operates the following public schools in Battery Park City:
P.S. 89
I.S. 289
P.S./I.S. 276 Battery Park City School
Stuyvesant High School, which moved into a new waterfront building in Battery Park City in 1992
P.S. M094
P226M
Library
Battery Park City has a New York Public Library branch at 175 North End Avenue, designed by 1100 Architect and completed in 2010. A , two-story library on the street level of a high-rise residential building, it utilizes several sustainable design features, earning it LEED Gold certification.
Sustainability was a driving factor in the design of the library including use of an energy-efficient lighting system, maximization of natural lighting, and use of recycled materials. 1100 Architect, in collaboration with Atelier Ten, an international team of environmental design consultants and building services engineers, designed the library's energy-efficient lighting system. The open plan layout and large use of glass allow for ample natural daylight year-round and low-energy LED light illuminates communal spaces. Recycled materials are incorporated into the design including carpet made from re-purposed truck tires, floors made from reclaimed window frame wood, and furniture made from FSC-certified plywood and recycled steel. Design features include a seemingly "floating" origami-style ceiling made up of triangular panels hung at varying angles and a padded reading nook fitted into the library's terrazzo-finished steel and concrete staircase. The interior uses an easy-to-navigate layout with its three distinct spatial areas of entry area, first floor space, and mezzanine visually unified through the ceiling.
The building also won the Interior Design, Best of Year Merit Award in 2011, followed by The National Terrazzo and Mosaic Association, Port Morris Tile and Marble Corporation Craftsmanship Award in 2011 and the Contract, Public Space Interiors Award in 2012.
Transportation
The Metropolitan Transportation Authority currently provides bus service to the area; its bus lines serve parts of Battery Park City, with others stopping nearby at Battery Park. Additionally, the Downtown Alliance provides a free bus service that runs along North End Avenue and South End Avenue, connecting the various residential complexes with subway stations on the other side of West Street.
There is currently no New York City Subway access in Battery Park City proper; however, the West Street pedestrian bridges, as well as crosswalks across West Street, connect Battery Park City to subway stations and the PATH station in the nearby Financial District. The West Concourse, a tunnel from Brookfield Place passing under West Street, also provides access from Battery Park City to the World Trade Center PATH station, the WTC Cortlandt station, and the Fulton Street station.
The Battery Park City Ferry Terminal is at the foot of Vesey Street opposite the New York Mercantile Exchange and provides ferry transportation to various points in New Jersey via NY Waterway, Seastreak, and Liberty Water Taxi routes. NYC Ferry's St. George route, to West Midtown Ferry Terminal and St. George Terminal, stops at Battery Park City Ferry Terminal.
The West Thames Street Bridge, one of the West Street pedestrian bridges connecting Battery Park City to the Financial District, was completed in 2019, replacing the older Rector Street Bridge. On June 11, 2021, it was dedicated as the Robert F. Douglass Bridge. Its namesake, who died in 2016, was an early advocate for lower Manhattan as a senior advisor to Governor Nelson Rockefeller and later as a founding member and chairman of the Downtown Alliance and board member of the Lower Manhattan Development Corporation.
Parks and open spaces
More than one-third of the neighborhood is parkland.
Some large open spaces and parks include:
Teardrop Park sits midblock, near the corner of Warren Street and River Terrace. Before construction, the site was empty and flat; part of the neighborhood's development plan, the park was designed in anticipation of four high residential towers on its west and east. Although a New York City public park, maintenance is overseen by the Battery Park City Parks Conservancy and the park was designed for the Battery Park City Authority. The park opened on September 30, 2004. There is also a southern extension to this park.
Washington Street Plaza, a pedestrian plaza on Washington Street between Carlisle and Albany Streets, opened on May 23, 2013.
In addition, there are:
Community Ballfields, North End Avenue between Murray and Warren Streets
The Esplanade, along the Hudson River from Stuyvesant High School to Battery Park
Monsignor Kowsky Plaza, east of the Esplanade
Nelson A. Rockefeller State Park, north end of Battery Park City west of River Terrace
North Cove, on the river between Liberty Street and Vesey Street.
Oval Lawn, east of the Esplanade
Rector Park, South End Avenue at Rector Place
Robert F. Wagner, Jr. Park, north of Battery Park off Battery Place
South Cove, on the Esplanade, between First and Third Places
West Thames Park, West Street between Albany and West Thames Streets
World Financial Center Plaza, within Brookfield Place
Museums and memorials
Irish Hunger Memorial, located on a site at Vesey Street and North End Avenue. It is dedicated to raising awareness of the Great Irish Famine. Construction began in March 2001, and the memorial was completed and dedicated on July 16, 2002.
Museum of Jewish Heritage, a memorial to those who were murdered in the Holocaust
Skyscraper Museum, an architecture museum in Millennium Point
Hurricane Maria Memorial honors the victims of Hurricane Maria, which struck Puerto Rico on September 20, 2017.
Mother Cabrini Memorial, dedicated on October 12, 2020, honors the patroness of immigrants.
9/11 Memorial at South Cove, created and dedicated on September 9, 2015.
NYC Police Memorial is located at Liberty Street and South End Avenue, and was dedicated on October 20, 1997.
Notable residents
Notable residents include:
Tyra Banks (born 1973), TV personality
Leonardo DiCaprio, actor, resident of 1 Rockefeller Park
Sacha Baron Cohen, actor and comedian, former resident of 1 Rockefeller Park
Isla Fisher, actress, former resident of 1 Rockefeller Park
Dave Gahan, musician, resident of 1 Rockefeller Park
Kris Humphries, basketball player, resident of Liberty Luxe
See also
Hudson River Park Trust
New York Convention Center Operating Corporation
Lower Manhattan Development Corporation
Municipal Assistance Corporation for the City of NY
Roosevelt Island Operating Corporation
United Nations Development Corporation
References
Notes
Further reading
Gordon, David L.A. (1997) Battery Park City: Politics and Planning on the New York Waterfront, Gordon and Breach Publishers
Urstadt, Charles J.; Gene Brown (2005). Battery Park City: The Early Years. Bloomington.
External links
(Hugh L. Carey Battery Park City Authority)
Neighborhoods in Manhattan
Leadership in Energy and Environmental Design certified buildings
Redeveloped ports and waterfronts in the United States
New Urbanism communities
Land reclamation in the United States | Battery Park City | [
"Engineering"
] | 7,542 | [
"Building engineering",
"Leadership in Energy and Environmental Design certified buildings"
] |
4,595 | https://en.wikipedia.org/wiki/Wireless%20broadband | Wireless broadband is a telecommunications technology that provides high-speed wireless Internet access or computer networking access over a wide area. The term encompasses both fixed and mobile broadband.
The term broadband
Originally the word "broadband" had a technical meaning, but became a marketing term for any kind of relatively high-speed computer network or Internet access technology.
According to the 802.16-2004 standard, broadband means "having instantaneous bandwidths greater than 1 MHz and supporting data rates greater than about 1.5 Mbit/s."
The Federal Communications Commission (FCC) recently re-defined the word to mean download speeds of at least 25 Mbit/s and upload speeds of at least 3 Mbit/s.
Technology and speeds
A wireless broadband network is an outdoor fixed and/or mobile wireless network providing point-to-multipoint or point-to-point terrestrial wireless links for broadband services.
Wireless networks can feature data rates exceeding 1 Gbit/s. Many fixed wireless networks are exclusively half-duplex (HDX); however, some licensed and unlicensed systems can also operate at full-duplex (FDX), allowing communication in both directions simultaneously.
Outdoor fixed wireless broadband networks commonly use a priority TDMA based protocol in order to divide communication into timeslots. This timeslot technique eliminates many of the issues common to 802.11 Wi-Fi protocol in outdoor networks such as the hidden node problem.
Few wireless Internet service providers (WISPs) provide download speeds of over 100 Mbit/s; most broadband wireless access (BWA) services are estimated to have a range of from a tower. Technologies used include Local Multipoint Distribution Service (LMDS) and Multichannel Multipoint Distribution Service (MMDS), as well as heavy use of the industrial, scientific and medical (ISM) radio bands. One particular access technology was standardized by IEEE 802.16, with products known as WiMAX.
WiMAX is highly popular in Europe but has not been fully accepted in the United States because of the cost of deployment. In 2005, the Federal Communications Commission adopted a Report and Order that revised the FCC's rules to open the 3650 MHz band for terrestrial wireless broadband operations.
Another system that is popular with cable internet service providers uses point-to-multipoint wireless links that extend the existing wired network using a transparent radio connection. This allows the same DOCSIS modems to be used for both wired and wireless customers.
Development in the United States
On November 14, 2007, the Commission released Public Notice DA 07–4605 in which the Wireless Telecommunications Bureau announced the start date for licensing and registration process for the 3650–3700 MHz band.
In 2010 the FCC adopted the TV White Space Rules (TVWS) and allowed some of the better no line of sight frequency (700 MHz) into the FCC Part-15 Rules. The Wireless Internet Service Providers Association, a national association of WISPs, petitioned the FCC and won.
Initially, WISPs were found only in rural areas not covered by cable or DSL. These early WISPs would employ a high-capacity T-carrier, such as a T1 or DS3 connection, and then broadcast the signal from a high elevation, such as at the top of a water tower. To receive this type of Internet connection, consumers mount a small dish to the roof of their home or office and point it to the transmitter. Line of sight is usually necessary for WISPs operating in the 2.4 and 5 GHz bands with 900 MHz offering better NLOS (non-line-of-sight) performance.
Residential Wireless Internet
Providers of fixed wireless broadband services typically provide equipment to customers and install a small antenna or dish somewhere on the roof. This equipment is usually deployed as a service and maintained by the company providing that service. Fixed wireless services have become particularly popular in many rural areas where cable, DSL or other typical home internet services are not available.
Business Wireless Internet
Many companies in the US and worldwide have started using wireless alternatives to incumbent and local providers for internet and voice service. These providers tend to offer competitive services and options in areas where there is difficulty getting affordable Ethernet connections from terrestrial providers such as AT&T, Comcast, Verizon and others. Also, companies looking for full diversity between carriers for critical uptime requirements may seek wireless alternatives to local options.
Demand for spectrum
To cope with increased demand for wireless broadband, increased spectrum would be needed. Studies began in 2009, and while some unused spectrum was available, it appeared broadcasters would have to give up at least some spectrum. This led to strong objections from the broadcasting community. In 2013, auctions were planned, and for now any action by broadcasters is voluntary.
Mobile wireless broadband
In the United States, mobile broadband technologies include services from providers such as Verizon, AT&T Mobility, and T-Mobile which allow a more mobile version of Internet access. Consumers can purchase a PC card, laptop card, USB equipment, or mobile broadband modem, to connect their PC or laptop to the Internet via cell phone towers. This type of connection would be stable in almost any area that could also receive a strong cell phone connection. These connections can cost more for portable convenience as well as having speed limitations in all but urban environments.
On June 2, 2010, after months of discussion, AT&T became the first wireless Internet provider in the US to announce plans to charge according to usage. As the only iPhone service in the United States, AT&T experienced the problem of heavy Internet use more than other providers. About 3 percent of AT&T smart phone customers account for 40 percent of the technology's use. 98 percent of the company's customers use less than 2 gigabytes (4000 page views, 10,000 emails or 200 minutes of streaming video), the limit under the $25 monthly plan, and 65 percent use less than 200 megabytes, the limit for the $15 plan. For each gigabyte in excess of the limit, customers would be charged $10 a month starting June 7, 2010, though existing customers would not be required to change from the $30 a month unlimited service plan. The new plan would become a requirement for those upgrading to the new iPhone technology later in the summer.
Licensing
A wireless connection can be either licensed or unlicensed. In the US, licensed connections use a private spectrum the user has secured rights to from the Federal Communications Commission (FCC). Unlicensed mobile wireless broadband in the US operates on CBRS, which has three tiers: Tier 1 – Incumbent Access, reserved for the US federal government; Tier 2 – Priority Access, paid access with priority on the spectrum; and Tier 3 – General Authorized Access (GAA), a shared spectrum. In other countries, spectrum is licensed from the country's national radio communications authority (such as the ACMA in Australia or the Nigerian Communications Commission (NCC) in Nigeria). Licensing is usually expensive and often reserved for large companies that wish to guarantee private access to spectrum for use in point-to-point communication. Because of this, most wireless ISPs use unlicensed spectrum, which is publicly shared.
See also
CorDECT
HIPERMAN
iBurst
802.20
Connect card
Policies promoting wireless broadband
References
External links
Wireless Internet Service Provider Association
Wireless Internet Service Provider Ontario
Wireless networking
Broadband | Wireless broadband | [
"Technology",
"Engineering"
] | 1,487 | [
"Wireless networking",
"Computer networks engineering"
] |
4,603 | https://en.wikipedia.org/wiki/Booch%20method | The Booch method is a method for object-oriented software development. It is composed of an object modeling language, an iterative object-oriented development process, and a set of recommended practices.
The method was authored by Grady Booch when he was working for Rational Software (acquired by IBM), published in 1992 and revised in 1994. It was widely used in software engineering for object-oriented analysis and design and benefited from ample documentation and support tools.
The notation aspect of the Booch methodology was superseded by the Unified Modeling Language (UML), which features graphical elements from the Booch method along with elements from the object-modeling technique (OMT) and object-oriented software engineering (OOSE). Methodological aspects of the Booch method have been incorporated into several methodologies and processes, the primary such methodology being the Rational Unified Process (RUP).
Content of the method
The Booch notation is characterized by cloud shapes to represent classes and distinguishes several kinds of diagrams, including class, object, state transition, interaction, module, and process diagrams.
The process is organized around a macro and a micro process.
The macro process identifies the following activities cycle:
Conceptualization: establish core requirements
Analysis: develop a model of the desired behavior
Design: create an architecture
Evolution: for the implementation
Maintenance: for evolution after the delivery
The micro process is applied to new classes, structures or behaviors that emerge during the macro process. It is made of the following cycle:
Identification of classes and objects
Identification of their semantics
Identification of their relationships
Specification of their interfaces and implementation
References
External links
Class diagrams, Object diagrams, State Event diagrams and Module diagrams.
The Booch Method of Object-Oriented Analysis & Design
Software design
Object-oriented programming
Programming principles | Booch method | [
"Engineering"
] | 344 | [
"Design",
"Software design"
] |
4,628 | https://en.wikipedia.org/wiki/Bilinear%20transform | The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.
The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used for converting a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often named an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often named a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the $j\omega$ axis, $\mathrm{Re}[s] = 0$, in the s-plane to the unit circle, $|z| = 1$, in the z-plane. Other bilinear transforms can be used for warping the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first order all-pass filters.
The transform preserves stability and maps every point of the frequency response of the continuous-time filter, $H_a(j\omega_a)$, to a corresponding point in the frequency response of the discrete-time filter, $H_d(e^{j\omega_d T})$, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. The change in frequency is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency.
Discrete-time approximation
The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of
$z = e^{sT} \approx \frac{1 + sT/2}{1 - sT/2}$
where $T$ is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for $s$ or a similar approximation for $e^{sT} = z$ can be performed.
The inverse of this mapping (and its first-order bilinear approximation) is
$sT = \ln(z) \approx 2 \frac{z - 1}{z + 1} = 2 \frac{1 - z^{-1}}{1 + z^{-1}}$
The bilinear transform essentially uses this first order approximation and substitutes into the continuous-time transfer function, $H_a(s)$,
$s \leftarrow \frac{2}{T} \frac{z - 1}{z + 1} = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}.$
That is
$H_d(z) = H_a(s) \Big|_{s = \frac{2}{T} \frac{z - 1}{z + 1}} = H_a\!\left( \frac{2}{T} \frac{z - 1}{z + 1} \right).$
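As a concrete illustration of this substitution, the following minimal sketch (not part of the original article) applies the bilinear transform to a simple analog prototype with SciPy; the example filter $H_a(s) = 1/(s + 1)$, the sampling rate, and all variable names are illustrative assumptions.

# Hedged sketch: discretize H_a(s) = 1/(s + 1) with the bilinear transform.
# SciPy is assumed available; scipy.signal.bilinear substitutes
# s <- 2*fs*(z - 1)/(z + 1), i.e. T = 1/fs.
import numpy as np
from scipy import signal

fs = 100.0                                 # assumed sampling rate in Hz
b_analog, a_analog = [1.0], [1.0, 1.0]     # H_a(s) = 1 / (s + 1)
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=fs)

# The same result by hand: H_d(z) = (z + 1) / ((K + 1) z + (1 - K)), K = 2*fs
K = 2 * fs
b_manual = np.array([1.0, 1.0]) / (K + 1.0)
a_manual = np.array([1.0, (1.0 - K) / (K + 1.0)])

Printing b_digital and a_digital should reproduce b_manual and a_manual up to floating-point rounding.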
Stability and minimum-phase property preserved
A continuous-time causal filter is stable if the poles of its transfer function fall in the left half of the complex s-plane. A discrete-time causal filter is stable if the poles of its transfer function fall inside the unit circle in the complex z-plane. The bilinear transform maps the left half of the complex s-plane to the interior of the unit circle in the z-plane. Thus, filters designed in the continuous-time domain that are stable are converted to filters in the discrete-time domain that preserve that stability.
Likewise, a continuous-time filter is minimum-phase if the zeros of its transfer function fall in the left half of the complex s-plane. A discrete-time filter is minimum-phase if the zeros of its transfer function fall inside the unit circle in the complex z-plane. Then the same mapping property assures that continuous-time filters that are minimum-phase are converted to discrete-time filters that preserve that property of being minimum-phase.
Transformation of a General LTI System
A general LTI system has the transfer function
$H_a(s) = \frac{\sum_{k=0}^{Q} b_k s^k}{\sum_{k=0}^{P} a_k s^k}.$
The order of the transfer function, $N$, is the greater of $P$ and $Q$ (in practice this is most likely $P$, as the transfer function must be proper for the system to be stable). Applying the bilinear transform
$s = K \frac{z - 1}{z + 1},$
where $K$ is defined as either $2/T$ or, if frequency warping is being used, as $\omega_0 / \tan(\omega_0 T / 2)$ (see the Frequency warping section below), gives
$H_d(z) = \frac{\sum_{k=0}^{Q} b_k K^k \left( \frac{z - 1}{z + 1} \right)^k}{\sum_{k=0}^{P} a_k K^k \left( \frac{z - 1}{z + 1} \right)^k}.$
Multiplying the numerator and denominator by the largest power of $(z + 1)$ present, $(z + 1)^N$, gives
$H_d(z) = \frac{\sum_{k=0}^{Q} b_k K^k (z - 1)^k (z + 1)^{N - k}}{\sum_{k=0}^{P} a_k K^k (z - 1)^k (z + 1)^{N - k}}.$
It can be seen here that after the transformation, the degree of the numerator and denominator are both $N$.
Consider then the pole-zero form of the continuous-time transfer function
$H_a(s) = k_a \frac{\prod_{i=1}^{Q} (s - \xi_i)}{\prod_{i=1}^{P} (s - p_i)}.$
The roots of the numerator and denominator polynomials, $\xi_i$ and $p_i$, are the zeros and poles of the system. The bilinear transform is a one-to-one mapping, hence these can be transformed to the z-domain using
$z = \frac{K + s}{K - s},$
yielding some of the discretized transfer function's zeros and poles, $\xi'_i$ and $p'_i$:
$\xi'_i = \frac{K + \xi_i}{K - \xi_i} \qquad p'_i = \frac{K + p_i}{K - p_i}.$
As described above, the degree of the numerator and denominator are now both $N$, in other words there is now an equal number of zeros and poles. The multiplication by $(z + 1)^N$ means the additional zeros or poles are located at
$z = -1.$
Given the full set of zeros and poles, the z-domain transfer function is then
$H_d(z) = k_d \frac{\prod_{i=1}^{N} (z - \xi'_i)}{\prod_{i=1}^{N} (z - p'_i)}.$
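The pole-zero mapping can also be checked numerically; the sketch below (an illustration, not from the article) uses scipy.signal.bilinear_zpk, and the sample rate, the single analog zero and pole, and the variable names are all assumed for the example.

# Hedged sketch: map analog zeros/poles to the z-plane via z = (K + s)/(K - s).
import numpy as np
from scipy import signal

fs = 48000.0                                  # assumed sample rate, T = 1/fs
zeros = np.array([-2 * np.pi * 20000.0])      # one illustrative analog zero (rad/s)
poles = np.array([-2 * np.pi * 1000.0])       # one illustrative analog pole (rad/s)
gain = 1.0

# SciPy applies s <- 2*fs*(z - 1)/(z + 1), i.e. K = 2*fs
zd, pd, kd = signal.bilinear_zpk(zeros, poles, gain, fs=fs)

# The same mapping by hand; surplus zeros or poles needed to balance the
# counts are placed at z = -1 by the transform.
K = 2 * fs
zd_manual = (K + zeros) / (K - zeros)
pd_manual = (K + poles) / (K - poles)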
Example
As an example take a simple low-pass RC filter. This continuous-time filter has a transfer function
$H_a(s) = \frac{1}{1 + RC s}.$
If we wish to implement this filter as a digital filter, we can apply the bilinear transform by substituting $s = \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}$ into the formula above; after some reworking, we get the following filter representation:
$H_d(z) = H_a\!\left( \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}} \right)$
$= \frac{1}{1 + RC \cdot \frac{2}{T} \frac{1 - z^{-1}}{1 + z^{-1}}}$
$= \frac{T (1 + z^{-1})}{(T + 2RC) + (T - 2RC) z^{-1}}$
$= \frac{\frac{T}{T + 2RC} (1 + z^{-1})}{1 + \frac{T - 2RC}{T + 2RC} z^{-1}}.$
The coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used for implementing a real-time digital filter.
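A short numerical sketch (not part of the article) makes the feed-forward and feed-back coefficients above concrete; the component values R and C and the sample rate are illustrative assumptions, and the SciPy cross-check is optional.

# Hedged sketch: discrete coefficients of the bilinear-transformed RC low-pass.
from scipy import signal

R, C = 10e3, 100e-9          # assumed 10 kOhm and 100 nF (cutoff ~ 159 Hz)
fs = 8000.0                  # assumed sample rate, T = 1/fs
T = 1.0 / fs

# Closed-form coefficients from the worked example (denominator normalized to 1)
b = [T / (T + 2 * R * C), T / (T + 2 * R * C)]      # feed-forward
a = [1.0, (T - 2 * R * C) / (T + 2 * R * C)]        # feed-back

# Cross-check against SciPy's bilinear transform of H_a(s) = 1 / (RC s + 1)
b_ref, a_ref = signal.bilinear([1.0], [R * C, 1.0], fs=fs)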
Transformation for a general first-order continuous-time filter
It is possible to relate the coefficients of a continuous-time, analog filter with those of a similar discrete-time digital filter created through the bilinear transform process. Transforming a general, first-order continuous-time filter with the given transfer function
$H_a(s) = \frac{b_0 s + b_1}{a_0 s + a_1}$
using the bilinear transform (without prewarping any frequency specification) requires the substitution of
$s \leftarrow K \frac{1 - z^{-1}}{1 + z^{-1}}$
where
$K = \frac{2}{T}.$
However, if the frequency warping compensation as described below is used in the bilinear transform, so that both analog and digital filter gain and phase agree at frequency $\omega_0$, then
$K = \frac{\omega_0}{\tan\!\left( \frac{\omega_0 T}{2} \right)}.$
This results in a discrete-time digital filter with coefficients expressed in terms of the coefficients of the original continuous time filter:
$H_d(z) = \frac{(b_0 K + b_1) + (-b_0 K + b_1) z^{-1}}{(a_0 K + a_1) + (-a_0 K + a_1) z^{-1}}$
Normally the constant term in the denominator must be normalized to 1 before deriving the corresponding difference equation. This results in
$H_d(z) = \frac{\frac{b_0 K + b_1}{a_0 K + a_1} + \frac{-b_0 K + b_1}{a_0 K + a_1} z^{-1}}{1 + \frac{-a_0 K + a_1}{a_0 K + a_1} z^{-1}}.$
The difference equation (using the Direct form I) is
$y[n] = \frac{(b_0 K + b_1)\, x[n] + (-b_0 K + b_1)\, x[n-1] - (-a_0 K + a_1)\, y[n-1]}{a_0 K + a_1}.$
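The closed-form coefficients and the Direct form I update lend themselves to a small, self-contained routine; the sketch below is an assumed helper (the function names first_order_bilinear and direct_form_1 are invented for illustration), not code from the article.

# Hedged sketch: first-order bilinear-transform coefficients and Direct form I.
import math

def first_order_bilinear(b0, b1, a0, a1, T, w0=None):
    # Map H_a(s) = (b0 s + b1)/(a0 s + a1) to digital coefficients.
    # If w0 (rad/s) is given, pre-warp so the responses match at w0.
    K = 2.0 / T if w0 is None else w0 / math.tan(w0 * T / 2.0)
    den = a0 * K + a1
    B = [(b0 * K + b1) / den, (-b0 * K + b1) / den]   # feed-forward
    A = [1.0, (-a0 * K + a1) / den]                   # feed-back, A[0] = 1
    return B, A

def direct_form_1(B, A, x):
    # Run y[n] = B0 x[n] + B1 x[n-1] - A1 y[n-1] over the input sequence x.
    y, xm1, ym1 = [], 0.0, 0.0
    for xn in x:
        yn = B[0] * xn + B[1] * xm1 - A[1] * ym1
        y.append(yn)
        xm1, ym1 = xn, yn
    return y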
General second-order biquad transformation
A similar process can be used for a general second-order filter with the given transfer function
$H_a(s) = \frac{b_0 s^2 + b_1 s + b_2}{a_0 s^2 + a_1 s + a_2}.$
This results in a discrete-time digital biquad filter with coefficients expressed in terms of the coefficients of the original continuous time filter:
$H_d(z) = \frac{(b_0 K^2 + b_1 K + b_2) + (2 b_2 - 2 b_0 K^2) z^{-1} + (b_0 K^2 - b_1 K + b_2) z^{-2}}{(a_0 K^2 + a_1 K + a_2) + (2 a_2 - 2 a_0 K^2) z^{-1} + (a_0 K^2 - a_1 K + a_2) z^{-2}}$
Again, the constant term in the denominator is generally normalized to 1 before deriving the corresponding difference equation. This results in
$H_d(z) = \frac{\frac{b_0 K^2 + b_1 K + b_2}{a_0 K^2 + a_1 K + a_2} + \frac{2 b_2 - 2 b_0 K^2}{a_0 K^2 + a_1 K + a_2} z^{-1} + \frac{b_0 K^2 - b_1 K + b_2}{a_0 K^2 + a_1 K + a_2} z^{-2}}{1 + \frac{2 a_2 - 2 a_0 K^2}{a_0 K^2 + a_1 K + a_2} z^{-1} + \frac{a_0 K^2 - a_1 K + a_2}{a_0 K^2 + a_1 K + a_2} z^{-2}}.$
The difference equation (using the Direct form I) is
$y[n] = \frac{(b_0 K^2 + b_1 K + b_2)\, x[n] + (2 b_2 - 2 b_0 K^2)\, x[n-1] + (b_0 K^2 - b_1 K + b_2)\, x[n-2] - (2 a_2 - 2 a_0 K^2)\, y[n-1] - (a_0 K^2 - a_1 K + a_2)\, y[n-2]}{a_0 K^2 + a_1 K + a_2}.$
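For a numeric sanity check of the biquad formulas, the sketch below (illustrative only; the second-order Butterworth prototype, sample rate and names are assumptions) evaluates the closed-form coefficients and compares them with SciPy's bilinear routine.

# Hedged sketch: biquad coefficients via the closed form above, cross-checked.
import math
from scipy import signal

fs = 44100.0                                   # assumed sample rate
T = 1.0 / fs
w0 = 2 * math.pi * 1000.0                      # assumed 1 kHz analog corner
b_a = [0.0, 0.0, w0 ** 2]                      # H_a(s) = w0^2 / (s^2 + sqrt(2) w0 s + w0^2)
a_a = [1.0, math.sqrt(2) * w0, w0 ** 2]

K = 2.0 / T
den = a_a[0] * K ** 2 + a_a[1] * K + a_a[2]
B = [(b_a[0] * K ** 2 + b_a[1] * K + b_a[2]) / den,
     (2 * b_a[2] - 2 * b_a[0] * K ** 2) / den,
     (b_a[0] * K ** 2 - b_a[1] * K + b_a[2]) / den]
A = [1.0,
     (2 * a_a[2] - 2 * a_a[0] * K ** 2) / den,
     (a_a[0] * K ** 2 - a_a[1] * K + a_a[2]) / den]

B_ref, A_ref = signal.bilinear(b_a, a_a, fs=fs)   # should agree with B, A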
Frequency warping
To determine the frequency response of a continuous-time filter, the transfer function $H_a(s)$ is evaluated at $s = j \omega_a$, which is on the $j \omega$ axis. Likewise, to determine the frequency response of a discrete-time filter, the transfer function $H_d(z)$ is evaluated at $z = e^{j \omega_d T}$, which is on the unit circle, $|z| = 1$. The bilinear transform maps the $j \omega$ axis of the s-plane (which is the domain of $H_a(j \omega_a)$) to the unit circle of the z-plane, $|z| = 1$ (which is the domain of $H_d(e^{j \omega_d T})$), but it is not the same mapping $z = e^{sT}$ which also maps the $j \omega$ axis to the unit circle. When the actual frequency of $\omega_d$ is input to the discrete-time filter designed by use of the bilinear transform, then it is desired to know at what frequency, $\omega_a$, for the continuous-time filter that this $\omega_d$ is mapped to.
$s = \frac{2}{T} \frac{z - 1}{z + 1} \Bigg|_{z = e^{j \omega_d T}} = \frac{2}{T} \cdot \frac{e^{j \omega_d T} - 1}{e^{j \omega_d T} + 1} = \frac{2}{T} \cdot \frac{e^{j \omega_d T / 2} \left( e^{j \omega_d T / 2} - e^{-j \omega_d T / 2} \right)}{e^{j \omega_d T / 2} \left( e^{j \omega_d T / 2} + e^{-j \omega_d T / 2} \right)} = \frac{2}{T} \cdot \frac{2 j \sin\!\left( \omega_d T / 2 \right)}{2 \cos\!\left( \omega_d T / 2 \right)} = j \frac{2}{T} \tan\!\left( \frac{\omega_d T}{2} \right)$
This shows that every point $e^{j \omega_d T}$ on the unit circle in the discrete-time filter z-plane, $|z| = 1$, is mapped to a point $j \omega_a$ on the $j \omega$ axis on the continuous-time filter s-plane. That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is
$\omega_a = \frac{2}{T} \tan\!\left( \frac{\omega_d T}{2} \right)$
and the inverse mapping is
$\omega_d = \frac{2}{T} \arctan\!\left( \frac{\omega_a T}{2} \right).$
The discrete-time filter behaves at frequency $\omega_d$ the same way that the continuous-time filter behaves at frequency $\frac{2}{T} \tan\!\left( \frac{\omega_d T}{2} \right)$. Specifically, the gain and phase shift that the discrete-time filter has at frequency $\omega_d$ is the same gain and phase shift that the continuous-time filter has at frequency $\frac{2}{T} \tan\!\left( \frac{\omega_d T}{2} \right)$. This means that every feature, every "bump" that is visible in the frequency response of the continuous-time filter is also visible in the discrete-time filter, but at a different frequency. For low frequencies (that is, when $\omega_d \ll 2/T$ or $\omega_a \ll 2/T$), then the features are mapped to a slightly different frequency; $\omega_a \approx \omega_d$.
One can see that the entire continuous frequency range
$-\infty < \omega_a < +\infty$
is mapped onto the fundamental frequency interval
$-\frac{\pi}{T} < \omega_d < +\frac{\pi}{T}.$
The continuous-time filter frequency $\omega_a = 0$ corresponds to the discrete-time filter frequency $\omega_d = 0$ and the continuous-time filter frequency $\omega_a = \pm \infty$ corresponds to the discrete-time filter frequency $\omega_d = \pm \pi / T.$
One can also see that there is a nonlinear relationship between $\omega_a$ and $\omega_d.$ This effect of the bilinear transform is called frequency warping. The continuous-time filter can be designed to compensate for this frequency warping by setting $\omega_a = \frac{2}{T} \tan\!\left( \frac{\omega_d T}{2} \right)$ for every frequency specification that the designer has control over (such as corner frequency or center frequency). This is called pre-warping the filter design.
It is possible, however, to compensate for the frequency warping by pre-warping a frequency specification (usually a resonant frequency or the frequency of the most significant feature of the frequency response) of the continuous-time system. These pre-warped specifications may then be used in the bilinear transform to obtain the desired discrete-time system. When designing a digital filter as an approximation of a continuous time filter, the frequency response (both amplitude and phase) of the digital filter can be made to match the frequency response of the continuous filter at a specified frequency $\omega_0$, as well as matching at DC, if the following transform is substituted into the continuous filter transfer function. This is a modified version of Tustin's transform shown above:
$s \leftarrow \frac{\omega_0}{\tan\!\left( \frac{\omega_0 T}{2} \right)} \cdot \frac{z - 1}{z + 1}.$
However, note that this transform becomes the original transform
$s \leftarrow \frac{2}{T} \cdot \frac{z - 1}{z + 1}$
as $\omega_0 \to 0.$
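A brief pre-warping sketch (not from the article; the sample rate, corner frequency, first-order prototype, and variable names are assumptions) shows the compensation in practice: the analog corner is pre-warped so that the digital filter hits -3 dB exactly at the specified frequency.

# Hedged sketch: pre-warp the analog corner so the digital response matches at f0.
import math
from scipy import signal

fs = 8000.0                                  # assumed sample rate
T = 1.0 / fs
f0 = 1000.0                                  # assumed corner frequency in Hz
w0_d = 2 * math.pi * f0                      # desired digital corner, rad/s

# Pre-warp: design the analog prototype at w_a = (2/T) tan(w_d T / 2)
w0_a = (2.0 / T) * math.tan(w0_d * T / 2.0)

# First-order analog low-pass H_a(s) = w0_a / (s + w0_a), then Tustin
b_d, a_d = signal.bilinear([w0_a], [1.0, w0_a], fs=fs)

# The digital magnitude at f0 is now about -3.01 dB despite frequency warping
w, h = signal.freqz(b_d, a_d, worN=[w0_d * T])
print(20 * math.log10(abs(h[0])))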
The main advantage of the warping phenomenon is the absence of aliasing distortion of the frequency response characteristic, such as observed with Impulse invariance.
See also
Impulse invariance
Matched Z-transform method
References
External links
MIT OpenCourseWare Signal Processing: Continuous to Discrete Filter Design
Lecture Notes on Discrete Equivalents
The Art of VA Filter Design
Digital signal processing
Transforms
Control theory | Bilinear transform | [
"Mathematics"
] | 2,126 | [
"Functions and mappings",
"Applied mathematics",
"Control theory",
"Mathematical objects",
"Mathematical relations",
"Transforms",
"Dynamical systems"
] |
4,640 | https://en.wikipedia.org/wiki/Bogie | A bogie ( ) (or truck in North American English) comprises two or more wheelsets (two wheels on an axle), in a frame, attached under a vehicle by a pivot. Bogies take various forms in various modes of transport. A bogie may remain normally attached (as on many railroad cars and semi-trailers) or be quickly detachable (as for a dolly in a road train or in railway bogie exchange). It may include suspension components within it (as most rail and trucking bogies do), or be solid and in turn be suspended (as are most bogies of tracked vehicles). It may be mounted on a swivel, as traditionally on a railway carriage or locomotive, additionally jointed and sprung (as in the landing gear of an airliner), or held in place by other means (centreless bogies).
Although bogie is the preferred spelling and first-listed variant in various dictionaries, bogey and bogy are also used.
Railway
A bogie in the UK, or a railroad truck, wheel truck, or simply truck in North America, is a structure underneath a railway vehicle (wagon, coach or locomotive) to which axles (hence, wheels) are attached through bearings. In Indian English, bogie may also refer to an entire railway carriage. In South Africa, the term bogie is often alternatively used to refer to a freight or goods wagon (shortened from bogie wagon).
A locomotive with a bogie was built by engineer William Chapman in 1812. It hauled itself along by chains and was not successful, but Chapman built a more successful locomotive with two gear-driven bogies in 1814. The bogie was first used in America for wagons on the Quincy Granite Railroad in 1829. The first successful locomotive with a bogie to guide the locomotive into curves while also supporting the smokebox was built by John B. Jervis in 1831. The concept took decades before it was widely accepted but eventually became a component of the vast majority of mainline locomotive designs. The first use of bogie coaches in Britain was in 1872 by the Festiniog Railway. The first standard gauge British railway to build coaches with bogies, instead of rigidly mounted axles, was the Midland Railway in 1874.
Purpose
Bogies serve a number of purposes:
Support of the rail vehicle body
Stability on both straight and curved track
Improve ride quality by absorbing vibration and minimizing the impact of centrifugal forces when the train runs on curves at high speed
Minimizing generation of track irregularities and rail abrasion
Usually, two bogies are fitted to each carriage, wagon or locomotive, one at each end. Another configuration is often used in articulated vehicles, which places the bogies (often Jacobs bogies) under the connection between the carriages or wagons.
Most bogies have two axles, but some cars designed for heavy loads have more axles per bogie. Heavy-duty cars may have more than two bogies using span bolsters to equalize the load and connect the bogies to the cars.
Usually, the train floor is at a level above the bogies, but the floor of the car may be lower between bogies, such as for a bilevel rail car to increase interior space while staying within height restrictions, or in easy-access, stepless-entry, low-floor trains.
Components
Key components of a bogie include:
The bogie frame: This can be of inside frame type where the main frame and bearings are between the wheels, or (more commonly) of outside frame type where the main frame and bearings are outside the wheels.
Suspension to absorb shocks between the bogie frame and the rail vehicle body. Common types are coil springs, leaf springs and rubber airbags.
At least one wheelset, composed of an axle with bearings and a wheel at each end.
The bolster, the main crossmember, connected to the bogie frame through the secondary suspension. The railway car is supported at the pivot point on the bolster.
Axle box suspensions absorb shocks between the axle bearings and the bogie frame. The axle box suspension usually consists of a spring between the bogie frame and axle bearings to permit up-and-down movement, and sliders to prevent lateral movement. A more modern design uses solid rubber springs.
Brake equipment: Two main types are used: brake shoes that are pressed against the tread of the wheel, and disc brakes and pads.
In powered vehicles, some form of transmission, usually electrically powered traction motors with a single speed gearbox or a hydraulically powered torque converter.
The connections of the bogie with the rail vehicle allow a certain degree of rotational movement around a vertical axis pivot (bolster), with side bearers preventing excessive movement. More modern, bolsterless bogie designs omit these features, instead taking advantage of the sideways movement of the suspension to permit rotational movement.
Locomotives
Diesel and electric
Modern diesel and electric locomotives are mounted on bogies. Those commonly used in North America include Type A, Blomberg, HT-C and Flexicoil trucks.
Steam
On a steam locomotive, the leading and trailing wheels may be mounted on bogies like Bissel trucks (also known as pony trucks). Articulated locomotives (e.g. Fairlie, Garratt or Mallet locomotives) have power bogies similar to those on diesel and electric locomotives.
Rollbock
A rollbock is a specialized type of bogie that is inserted under the wheels of a rail wagon/car, usually to convert for another track gauge. Transporter wagons carry the same concept to the level of a flatcar specialized to take other cars as its load.
Archbar bogies
In archbar or diamond frame bogies, the side frames are fabricated rather than cast.
Tramway
Modern
Tram bogies are much simpler in design because of their axle load, and the tighter curves found on tramways mean tram bogies almost never have more than two axles. Furthermore, some tramways have steeper gradients and vertical as well as horizontal curves, which means tram bogies often need to pivot on the horizontal axis, as well.
Some articulated trams have bogies located under articulations, a setup referred to as a Jacobs bogie. Often, low-floor trams are fitted with nonpivoting bogies; many tramway enthusiasts see this as a retrograde step, as it leads to more wear of both track and wheels and also significantly reduces the speed at which a tram can round a curve.
Historic
In the past, many different types of bogie (truck) have been used under tramcars (e.g. Brill, Peckham, maximum traction). A maximum traction truck has one driving axle with large wheels and one nondriving axle with smaller wheels. The bogie pivot is located off-centre, so more than half the weight rests on the driving axle.
Hybrid systems
The retractable stadium roof on Toronto's Rogers Centre used modified off-the-shelf train bogies on a circular rail. The system was chosen for its proven reliability.
Rubber-tyred metro trains use a specialised version of railway bogies. Special flanged steel wheels are behind the rubber-tired running wheels, with additional horizontal guide wheels in front of and behind the running wheels, as well. The unusually large flanges on the steel wheels guide the bogie through standard railroad switches, and in addition keep the train from derailing in case the tires deflate.
Variable gauge axles
To overcome breaks of gauge some bogies are being fitted with variable gauge axles (VGA) so that they can operate on two different gauges. These include the SUW 2000 system from ZNTK Poznań.
Radial steering truck
Radial-steering trucks, also known as radial bogies, allow the individual axles to align with curves in addition to the bogie frame as a whole pivoting. For non-radial bogies, the more axles in the assembly, the more difficulty it has negotiating curves, due to wheel flange to rail friction. For radial bogies, the wheel sets actively steer through curves, thus reducing wear at the wheel's flange-to-rail interface and improving adhesion.
In the US, radial steering has been implemented in EMD and GE locomotives. The EMD version, designated HTCR, was made standard equipment for the SD70 series, first sold in 1993. The HTCR in operation had mixed results and relatively high purchase and maintenance costs. EMD subsequently introduced the HTSC truck, essentially the HTCR stripped of radial components. GE introduced their version in 1995 as a buyer option for the AC4400CW and later Evolution Series locomotives. However, it also met with limited acceptance because of its relatively high purchase and maintenance costs, and customers have generally chosen GE Hi-Ad standard trucks for newer and rebuilt locomotives.
A 19th century configuration of self-steering axles on rolling stock established the principle of radial steering. The Cleminson system involved three axles, each mounted on a frame that had a central pivot; the central axle could slide transversely. The three axles were connected by linkages that kept them parallel on the straight and moved the end ones radially on a curve, so that all three axles were continually at right angles to the rails. The configuration, invented by British engineer John James Davidge Cleminson, was first granted a patent in the UK in 1883. The system was widely used on British narrow-gauge rolling stock, such as on the Isle of Man and Manx Northern Railways. The Holdfast Bay Railway Company in South Australia, which later became the Glenelg Railway Company, purchased Cleminson-configured carriages in 1880 from the American Gilbert & Bush Company for its broad-gauge line.
Articulated bogie
An articulated bogie is any one of a number of bogie designs that allow railway equipment to safely turn sharp corners, while reducing or eliminating the "screeching" normally associated with metal wheels rounding a bend in the rails. There are a number of such designs, and the term is also applied to train sets that incorporate articulation in the vehicle, as opposed to the bogies themselves.
If one considers a single bogie "up close", it resembles a small rail car with axles at either end. The same effect that causes the bogies to rub against the rails at longer radius causes each of the pairs of wheels to rub on the rails and cause the screeching. Articulated bogies add a second pivot point between the two axles (wheelsets) to allow them to rotate to the correct angle even in these cases.
Articulated lorries (tractor-trailers)
In trucking, a bogie is the subassembly of axles and wheels that supports a semi-trailer, whether permanently attached to the frame (as on a single trailer) or making up the dolly that can be hitched and unhitched as needed when hitching up a second or third semi-trailer (as when pulling doubles or triples).
Tracked vehicles
Some tanks and other tracked vehicles have bogies as external suspension components (see armoured fighting vehicle suspension). This type of bogie usually has two or more road wheels and some type of sprung suspension to smooth the ride across rough terrain. Bogie suspensions keep many of their components on the outside of the vehicle, saving internal space. Although vulnerable to antitank fire, they can often be repaired or replaced in the field.
Aircraft bogies
See also
Articles on bogies and trucks
Arnoux system
Bissel bogie
Blomberg B
Gölsdorf axle
Jacobs bogie
Krauss-Helmholtz bogie
Lateral motion device
Mason Bogie
Pony truck
Rocker-bogie
Scheffel bogie
Schwartzkopff-Eckhardt bogie
Syntegra
Related topics
Caster
Dolly
Flange
List of railroad truck parts
Luttermöller axle
Road–rail vehicle
Skateboard truck
Spring (device)
Timmis system, an early form of coil spring used on railway axles.
Trailing wheel
Wheel arrangement
Wheelbase
Wheelset
References
Further reading
External links
Truck (bogie) with tyres
Track modelling
Bogies/Trucks
Barber truck parts
Suspension systems
Locomotive’s Bogies & Components
Locomotive parts
Rail technologies
Vehicle technology | Bogie | ["Engineering"] | 2,507 | ["Vehicle technology", "Mechanical engineering by discipline"] |
4,650 | https://en.wikipedia.org/wiki/Black%20hole | A black hole is a region of spacetime wherein gravity is so strong that no matter or electromagnetic energy (e.g. light) can escape it. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. A black hole has a great effect on the fate and circumstances of an object crossing it, but it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly.
Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971.
Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses () may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies.
The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Any matter that falls toward a black hole can form an external accretion disk heated by friction, forming quasars, some of the brightest objects in the universe. Stars passing too close to a supermassive black hole can be shredded into streamers that shine very brightly before being "swallowed." If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses.
History
The idea of a body so big that even light could not escape was briefly proposed by English astronomical pioneer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars rather than the modern model of stars with extraordinary density.
Michell's idea appeared in a letter published in November 1784. Michell's simple calculation assumed such a body might have the same density as the Sun, and concluded that one would form when a star's diameter exceeds the Sun's by a factor of 500, since its surface escape velocity would then exceed the speed of light. Michell correctly noted that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies.
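As a rough illustration of Michell's Newtonian argument, the short sketch below (added here for illustration, not part of the original text; the numerical constants are modern values) computes the radius at which a uniform body of solar density would have a Newtonian escape velocity equal to the speed of light, recovering the factor of roughly 500 in size relative to the Sun:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
rho_sun = 1.41e3    # mean density of the Sun, kg/m^3
R_sun = 6.957e8     # radius of the Sun, m

# For a uniform sphere of density rho, M = (4/3) * pi * R^3 * rho, so the
# Newtonian escape velocity v = sqrt(2 G M / R) grows linearly with R:
# v = R * sqrt(8 * pi * G * rho / 3).
# Setting v = c gives the critical radius of Michell's "dark star".
R_dark = c / math.sqrt(8 * math.pi * G * rho_sun / 3)

print(f"critical radius: {R_dark:.3e} m")
print(f"ratio to the Sun's radius: {R_dark / R_sun:.0f}")  # roughly 500
```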
In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach.
Scholars of the time were initially excited by the proposal that giant but invisible 'dark stars' might be hiding in plain view, but enthusiasm dampened when the wavelike nature of light became apparent in the early nineteenth century: if light were a wave rather than a particle, it was unclear what influence, if any, gravity would have on escaping light waves.
General relativity
In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations that describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time.
In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates. In 1933, Georges Lemaître realised that this meant the singularity at the Schwarzschild radius was a non-physical coordinate singularity. Arthur Eddington commented on the possibility of a star with mass compressed to the Schwarzschild radius in a 1926 book, noting that Einstein's theory allows us to rule out overly large densities for visible stars like Betelgeuse because "a star of 250 million km radius could not possibly have so high a density as the Sun. Firstly, the force of gravitation would be so great that light would be unable to escape from it, the rays falling back to the star like a stone to the earth. Secondly, the red shift of the spectral lines would be so great that the spectrum would be shifted out of existence. Thirdly, the mass would produce so much curvature of the spacetime metric that space would close up around the star, leaving us outside (i.e., nowhere)."
In 1931, Subrahmanyan Chandrasekhar calculated, using special relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit, about 1.4 solar masses) has no stable solutions. His arguments were opposed by many of his contemporaries like Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable.
In 1939, Robert Oppenheimer and others predicted that neutron stars above another limit, the Tolman–Oppenheimer–Volkoff limit, would collapse further for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes. Their original calculations, based on the Pauli exclusion principle, gave it as roughly 0.7 solar masses. Subsequent consideration of neutron-neutron repulsion mediated by the strong force raised the estimate to approximately 1.5 to 3.0 solar masses. Observations of the neutron star merger GW170817, which is thought to have generated a black hole shortly afterward, have refined the TOV limit estimate to about 2.17 solar masses.
Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. The hypothetical collapsed stars were called "frozen stars", because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it to the Schwarzschild radius.
Also in 1939, Einstein attempted to prove that black holes were impossible in his publication "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses", using his theory of general relativity to defend his argument. Months later, Oppenheimer and his student Hartland Snyder provided the Oppenheimer–Snyder model in their paper "On Continued Gravitational Contraction", which predicted the existence of black holes. In the paper, which made no reference to Einstein's recent publication, Oppenheimer and Snyder used Einstein's own theory of general relativity to show the conditions on how a black hole could develop, for the first time in contemporary physics.
Golden age
In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction". This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.
These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars by Jocelyn Bell Burnell in 1967, which, by 1969, were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.
In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.
At first, it was suspected that the strange features of the black hole solutions were pathological artefacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late 1960s Roger Penrose and Stephen Hawking used global techniques to prove that singularities appear generically. For this work, Penrose received half of the 2020 Nobel Prize in Physics, Hawking having died in 2018. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole.
Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation.
Observation
On 11 February 2016, the LIGO Scientific Collaboration and the Virgo collaboration announced the first direct detection of gravitational waves, representing the first observation of a black hole merger. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. The nearest known body thought to be a black hole, Gaia BH1, is around 1,560 light-years away. Though only a couple dozen black holes have been found so far in the Milky Way, there are thought to be hundreds of millions, most of which are solitary and do not cause emission of radiation. Therefore, they would only be detectable by gravitational lensing.
Etymology
Science writer Marcia Bartusiak traces the term "black hole" to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive.
The term "black hole" was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio.
In December 1967, a student reportedly suggested the phrase "black hole" at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and it quickly caught on, leading some to credit Wheeler with coining the phrase.
Properties and structure
The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes under the laws of modern physics is currently an unsolved problem.
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analogue of Gauss's law (through the ADM mass), far away from the black hole. Likewise, the angular momentum (or spin) can be measured from far away using frame dragging by the gravitomagnetic field, through for example the Lense–Thirring effect.
When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behaviour of the horizon in this situation is that of a dissipative system closely analogous to a conductive stretchy membrane with friction and electrical resistance—the membrane paradigm. This is different from other field theories such as electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible.
Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including approximately conserved quantum numbers such as the total baryon number and lepton number. This behaviour is so puzzling that it has been called the black hole information loss paradox.
Physical properties
The simplest static black holes have mass but neither electric charge nor angular momentum. These black holes are often referred to as Schwarzschild black holes after Karl Schwarzschild who discovered this solution in 1916. According to Birkhoff's theorem, it is the only vacuum solution that is spherically symmetric. This means there is no observable difference at a distance between the gravitational field of such a black hole and that of any other spherical object of the same mass. The popular notion of a black hole "sucking in everything" in its surroundings is therefore correct only near a black hole's horizon; far away, the external gravitational field is identical to that of any other body of the same mass.
Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum.
While the mass of a black hole can take any positive value, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality
Q²/(4πε₀) + c²J²/(GM²) ≤ GM²
for a black hole of mass M. Black holes with the minimum possible mass satisfying this inequality are called extremal. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These solutions have so-called naked singularities that can be observed from the outside, and hence are deemed unphysical. The cosmic censorship hypothesis rules out the formation of such singularities when they are created through the gravitational collapse of realistic matter. This is supported by numerical simulations.
Due to the relatively large strength of the electromagnetic force, black holes forming from the collapse of stars are expected to retain the nearly neutral charge of the star. Rotation, however, is expected to be a universal feature of compact astrophysical objects. The black-hole candidate binary X-ray source GRS 1915+105 appears to have an angular momentum near the maximum allowed value. That uncharged limit is
J ≤ GM²/c,
allowing definition of a dimensionless spin parameter a* = cJ/(GM²) such that
0 ≤ a* ≤ 1.
Black holes are commonly classified according to their mass, independent of angular momentum, J. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through
r_s = 2GM/c² ≈ 2.95 (M/M☉) km,
where r_s is the Schwarzschild radius and M☉ is the mass of the Sun. For a black hole with nonzero spin or electric charge, the radius is smaller, until an extremal black hole could have an event horizon close to
r_+ = GM/c².
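As a quick numerical check of this scaling (a sketch added here, not part of the original article; the Sagittarius A* mass of 4.3 million solar masses is taken from the figure quoted earlier in this text), the following computes the Schwarzschild radius for the Sun and for Sagittarius A*:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Radius of the event horizon of a non-rotating, uncharged black hole."""
    return 2 * G * mass_kg / c**2

print(f"1 solar mass: {schwarzschild_radius(M_sun) / 1e3:.2f} km")        # ~2.95 km
print(f"Sgr A* (4.3e6 solar masses): "
      f"{schwarzschild_radius(4.3e6 * M_sun) / 1e9:.1f} million km")      # ~12.7 million km
```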
Event horizon
The defining feature of a black hole is the appearance of an event horizon—a boundary in spacetime through which matter and light can pass only inward towards the mass of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach an outside observer, making it impossible to determine whether such an event occurred.
As predicted by general relativity, the presence of a mass deforms spacetime in such a way that the paths taken by particles bend towards the mass. At the event horizon of a black hole, this deformation becomes so strong that there are no paths that lead away from the black hole.
To a distant observer, clocks near a black hole would appear to tick more slowly than those farther away from the black hole. Due to this effect, known as gravitational time dilation, an object falling into a black hole appears to slow as it approaches the event horizon, taking an infinite amount of time to reach it. At the same time, all processes on this object slow down, from the viewpoint of a fixed outside observer, causing any light emitted by the object to appear redder and dimmer, an effect known as gravitational redshift. Eventually, the falling object fades away until it can no longer be seen. Typically this process happens very rapidly with an object disappearing from view within less than a second.
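In the Schwarzschild geometry this slowing can be made quantitative. For a static observer at radial coordinate r, the standard textbook time-dilation and redshift factors (reconstructed here for illustration; these expressions are not preserved in the text above) are

```latex
\frac{\mathrm{d}\tau}{\mathrm{d}t} = \sqrt{1 - \frac{r_s}{r}},
\qquad
1 + z = \left(1 - \frac{r_s}{r}\right)^{-1/2},
```

which tend to zero and infinity respectively as r approaches the Schwarzschild radius r_s, consistent with the infalling object appearing to freeze, redden, and dim.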
On the other hand, indestructible observers falling into a black hole do not notice any of these effects as they cross the event horizon. According to their own clocks, which appear to them to tick normally, they cross the event horizon after a finite time without noting any singular behaviour; in classical general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.
The topology of the event horizon of a black hole at equilibrium is always spherical. For non-rotating (static) black holes the geometry of the event horizon is precisely spherical, while for rotating black holes the event horizon is oblate.
Singularity
At the centre of a black hole, as described by general relativity, may lie a gravitational singularity, a region where the spacetime curvature becomes infinite. For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation. In both cases, the singular region has zero volume. It can also be shown that the singular region contains all the mass of the black hole solution. The singular region can thus be thought of as having infinite density.
Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. They can prolong the experience by accelerating away to slow their descent, but only up to a limit. When they reach the singularity, they are crushed to infinite density and their mass is added to the total of the black hole. Before that happens, they will have been torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the "noodle effect".
In the case of a charged (Reissner–Nordström) or rotating (Kerr) black hole, it is possible to avoid the singularity. Extending these solutions as far as possible reveals the hypothetical possibility of exiting the black hole into a different spacetime with the black hole acting as a wormhole. The possibility of travelling to another universe is, however, only theoretical since any perturbation would destroy this possibility. It also appears to be possible to follow closed timelike curves (returning to one's own past) around the Kerr singularity, which leads to problems with causality like the grandfather paradox. It is expected that none of these peculiar effects would survive in a proper quantum treatment of rotating and charged black holes.
The appearance of singularities in general relativity is commonly perceived as signalling the breakdown of the theory. This breakdown, however, is expected; it occurs in a regime where quantum effects should become important, owing to the extremely high density and hence strong particle interactions. To date, it has not been possible to combine quantum and gravitational effects into a single theory, although there exist attempts to formulate such a theory of quantum gravity. It is generally expected that such a theory will not feature any singularities.
Photon sphere
The photon sphere is a spherical boundary where photons that move on tangents to that sphere would be trapped in a non-stable but circular orbit around the black hole.
For non-rotating black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius. Their orbits would be dynamically unstable, hence any small perturbation, such as a particle of infalling matter, would cause an instability that would grow over time, either setting the photon on an outward trajectory causing it to escape the black hole, or on an inward spiral where it would eventually cross the event horizon.
While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Hence any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. For a Kerr black hole the radius of the photon sphere depends on the spin parameter and on the details of the photon orbit, which can be prograde (the photon rotates in the same sense of the black hole spin) or retrograde.
Ergosphere
Rotating black holes are surrounded by a region of spacetime in which it is impossible to stand still, called the ergosphere. This is the result of a process known as frame-dragging; general relativity predicts that any rotating mass will tend to slightly "drag" along the spacetime immediately surrounding it. Any object near the rotating mass will tend to start moving in the direction of rotation. For a rotating black hole, this effect is so strong near the event horizon that an object would have to move faster than the speed of light in the opposite direction to just stand still.
The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but is at a much greater distance around the equator.
Objects and radiation can escape normally from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole. Thereby the rotation of the black hole slows down. A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei.
Innermost stable circular orbit (ISCO)
In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists an innermost stable circular orbit (often called the ISCO), for which any infinitesimal inward perturbations to a circular orbit will lead to spiraling into the black hole, and any outward perturbations will, depending on the energy, result in spiraling in, stably orbiting between apastron and periastron, or escaping to infinity. The location of the ISCO depends on the spin of the black hole; in the case of a Schwarzschild black hole (spin zero) it is
r_ISCO = 6GM/c² = 3r_s,
and decreases with increasing black hole spin for particles orbiting in the same direction as the spin.
Plunging region
The final observable region of spacetime around a black hole is called the plunging region. In this area it is no longer possible for matter to follow circular orbits or to stop a final descent into the black hole. Instead it will rapidly plunge toward the black hole close to the speed of light.
Formation and evolution
Given the bizarre character of black holes, it was long questioned whether such objects could actually exist in nature or whether they were merely pathological solutions to Einstein's equations. Einstein himself wrongly thought black holes would not form, because he held that the angular momentum of collapsing particles would stabilise their motion at some radius. This led the general relativity community to dismiss all results to the contrary for many years. However, a minority of relativists continued to contend that black holes were physical objects, and by the end of the 1960s, they had persuaded the majority of researchers in the field that there is no obstacle to the formation of an event horizon.
Penrose demonstrated that once an event horizon forms, general relativity without quantum mechanics requires that a singularity will form within. Shortly afterwards, Hawking showed that many cosmological solutions that describe the Big Bang have singularities without scalar fields or other exotic matter. The Kerr solution, the no-hair theorem, and the laws of black hole thermodynamics showed that the physical properties of black holes were simple and comprehensible, making them respectable subjects for research. Conventional black holes are formed by gravitational collapse of heavy objects such as stars, but they can also in theory be formed by other processes.
Gravitational collapse
Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. For stars this usually occurs either because a star has too little "fuel" left to maintain its temperature through stellar nucleosynthesis, or because a star that would have been stable receives extra matter in a way that does not raise its core temperature. In either case the star's temperature is no longer high enough to prevent it from collapsing under its own weight.
The collapse may be stopped by the degeneracy pressure of the star's constituents, allowing the condensation of matter into an exotic denser state. The result is one of the various types of compact star. Which type forms depends on the mass of the remnant of the original star left if the outer layers have been blown away (for example, in a Type II supernova). The mass of the remnant, the collapsed object that survives the explosion, can be substantially less than that of the original star. Remnants exceeding about 5 solar masses are produced by stars that were over 20 solar masses before the collapse.
If the mass of the remnant exceeds the Tolman–Oppenheimer–Volkoff limit, either because the original star was very heavy or because the remnant collected additional mass through accretion of matter, even the degeneracy pressure of neutrons is insufficient to stop the collapse. No known mechanism (except possibly quark degeneracy pressure) is powerful enough to stop the implosion and the object will inevitably collapse to form a black hole.
The gravitational collapse of heavy stars is assumed to be responsible for the formation of stellar mass black holes. Star formation in the early universe may have resulted in very massive stars, which upon their collapse would have produced unusually massive black holes. These black holes could be the seeds of the supermassive black holes found in the centres of most galaxies. It has further been suggested that massive black holes could have formed from the direct collapse of gas clouds in the young universe. These massive objects have been proposed as the seeds that eventually formed the earliest quasars, observed already at very high redshift. Some candidates for such objects have been found in observations of the young universe.
While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the light emitted just before the event horizon forms delayed an infinite amount of time. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away.
Primordial black holes and the Big Bang
Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe shortly after the Big Bang densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow black hole formation since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to have formed in such a dense medium, there must have been initial density perturbations that could then grow under their own gravity. Different models for the early universe vary widely in their predictions of the scale of these fluctuations. Various models predict the creation of primordial black holes ranging in size from a Planck mass (≈ 1.2×10¹⁹ GeV/c² ≈ 2.2×10⁻⁸ kg) to hundreds of thousands of solar masses.
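For reference, the Planck mass quoted above follows directly from the fundamental constants; the sketch below (added for illustration, not part of the original text) evaluates it:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
GeV = 1.602e-10    # one GeV in joules

m_planck = math.sqrt(hbar * c / G)                            # ~2.2e-8 kg
print(f"Planck mass: {m_planck:.2e} kg")
print(f"           = {m_planck * c**2 / GeV:.2e} GeV/c^2")    # ~1.2e19 GeV/c^2
```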
Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the expansion rate was greater than the attraction. According to inflation theory, there was a net repulsive gravitation in the beginning until the end of inflation. Since then the Hubble flow has been slowed by the energy density of the universe.
Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang.
High-energy collisions
Gravitational collapse is not the only process that could create black holes. In principle, black holes could be formed in high-energy collisions that achieve sufficient density. As of 2002, no such events have been detected, either directly or indirectly as a deficiency of the mass balance in particle accelerator experiments. This suggests that there must be a lower limit for the mass of black holes. Theoretically, this boundary is expected to lie around the Planck mass, where quantum effects are expected to invalidate the predictions of general relativity.
This would put the creation of black holes firmly out of reach of any high-energy process occurring on or near the Earth. However, certain developments in quantum gravity suggest that the minimum black hole mass could be much lower: some braneworld scenarios, for example, put the boundary as low as 1 TeV/c². This would make it conceivable for micro black holes to be created in the high-energy collisions that occur when cosmic rays hit the Earth's atmosphere, or possibly in the Large Hadron Collider at CERN. These theories are very speculative, and the creation of black holes in these processes is deemed unlikely by many specialists. Even if micro black holes could be formed, it is expected that they would evaporate in about 10⁻²⁵ seconds, posing no threat to the Earth.
Growth
Once a black hole has formed, it can continue to grow by absorbing additional matter. Any black hole will continually absorb gas and interstellar dust from its surroundings. This growth process is one possible way through which some supermassive black holes may have been formed, although the formation of supermassive black holes is still an open field of research. A similar process has been suggested for the formation of intermediate-mass black holes found in globular clusters. Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes.
Evaporation
In 1974, Hawking predicted that black holes are not entirely black but emit small amounts of thermal radiation at a temperature ħc³/(8πGMk_B); this effect has become known as Hawking radiation. By applying quantum field theory to a static black hole background, he determined that a black hole should emit particles that display a perfect black body spectrum. Since Hawking's publication, many others have verified the result through various approaches. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which, for a Schwarzschild black hole, is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.
A stellar black hole of one solar mass has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than that of the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre.
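The figures in this paragraph can be checked directly from the temperature formula above; the following sketch (an illustration added here, not from the original article) evaluates the Hawking temperature of a one-solar-mass black hole and the mass at which the temperature equals that of the cosmic microwave background:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23    # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg
M_moon = 7.35e22   # lunar mass, kg

def hawking_temperature(mass_kg: float) -> float:
    """Temperature of a Schwarzschild black hole, T = hbar c^3 / (8 pi G M k_B)."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"T for 1 solar mass: {hawking_temperature(M_sun) * 1e9:.0f} nK")   # ~62 nK

# Mass for which the Hawking temperature equals the 2.7 K CMB temperature
M_crit = hbar * c**3 / (8 * math.pi * G * k_B * 2.7)
print(f"mass with T = 2.7 K: {M_crit:.2e} kg "
      f"({M_crit / M_moon:.2f} lunar masses)")   # below the Moon's mass
```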
If a black hole is very small, the radiation effects are expected to become very strong. A black hole with the mass of a car would have a diameter of about 10⁻²⁴ m and take a nanosecond to evaporate, during which time it would briefly have a luminosity of more than 200 times that of the Sun. Lower-mass black holes are expected to evaporate even faster; for example, a black hole of mass 1 TeV/c² would take less than 10⁻⁸⁸ seconds to evaporate completely. For such a small black hole, quantum gravity effects are expected to play an important role and could hypothetically make such a small black hole stable, although current developments in quantum gravity do not indicate this is the case.
The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception, however, is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes. NASA's Fermi Gamma-ray Space Telescope launched in 2008 will continue the search for these flashes.
If black holes evaporate via Hawking radiation, a solar mass black hole will evaporate (beginning once the temperature of the cosmic microwave background drops below that of the black hole) over a period vastly longer than the current age of the universe. Supermassive black holes take longer still, and even the monster black holes predicted to continue to grow, perhaps during the collapse of superclusters of galaxies, would eventually evaporate on timescales longer again by many orders of magnitude.
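An order-of-magnitude feel for these timescales can be obtained from the standard photon-only evaporation estimate t ≈ 5120πG²M³/(ħc⁴) (a textbook approximation used here purely for illustration; it is not quoted in the original text, and real lifetimes are somewhat shorter once additional particle species are emitted):

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Photon-only Hawking evaporation estimate, t = 5120 pi G^2 M^3 / (hbar c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / YEAR

for label, m in [("1 solar mass", M_sun),
                 ("1e9 solar masses (a large quasar host)", 1e9 * M_sun)]:
    print(f"{label}: ~1e{math.log10(evaporation_time_years(m)):.0f} years")
```

For a solar mass this gives roughly 10⁶⁷ years, and the M³ scaling stretches the lifetime of the most massive black holes by tens of orders of magnitude more.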
Observational evidence
By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. For example, a black hole's existence can sometimes be inferred by observing its gravitational influence on its surroundings.
Direct interferometry
The Event Horizon Telescope (EHT) is an active program that directly observes the immediate environment of black holes' event horizons, such as the black hole at the centre of the Milky Way. In April 2017, EHT began observing the black hole at the centre of Messier 87. "In all, eight radio observatories on six mountains and four continents observed the galaxy in Virgo on and off for 10 days in April 2017" to provide the data yielding the image in April 2019.
After two years of data processing, the EHT released the first direct image of a black hole: the supermassive black hole that lies at the centre of the aforementioned galaxy. What is visible is not the black hole itself, which shows as black because of the loss of all light within this dark region; rather, it is the gases at the edge of the event horizon, displayed as orange or red, that define the black hole.
On 12 May 2022, the EHT released the first image of Sagittarius A*, the supermassive black hole at the centre of the Milky Way galaxy. The published image displayed the same ring-like structure and circular shadow as seen in the M87* black hole, and the image was created using the same techniques as for the M87 black hole. The imaging process for Sagittarius A*, which is more than a thousand times smaller and less massive than M87*, was significantly more complex because of the instability of its surroundings. The image of Sagittarius A* was partially blurred by turbulent plasma on the way to the galactic centre, an effect which prevents resolution of the image at longer wavelengths.
The brightening of this material in the 'bottom' half of the processed EHT image is thought to be caused by Doppler beaming, whereby material approaching the viewer at relativistic speeds is perceived as brighter than material moving away. In the case of a black hole, this phenomenon implies that the visible material is rotating at relativistic speeds, the only speeds at which it is possible to centrifugally balance the immense gravitational attraction of the singularity, and thereby remain in orbit above the event horizon. This configuration of bright material implies that the EHT observed M87* from a perspective catching the black hole's accretion disc nearly edge-on, as the whole system rotated clockwise.
The extreme gravitational lensing associated with black holes produces the illusion of a perspective that sees the accretion disc from above. In reality, most of the ring in the EHT image was created when the light emitted by the far side of the accretion disc bent around the black hole's gravity well and escaped, meaning that most of the possible perspectives on M87* can see the entire disc, even that directly behind the "shadow".
In 2015, the EHT detected magnetic fields just outside the event horizon of Sagittarius A* and even discerned some of their properties. The field lines that pass through the accretion disc were a complex mixture of ordered and tangled. Theoretical studies of black holes had predicted the existence of magnetic fields.
In April 2023, an image of the shadow of the Messier 87 black hole and the related high-energy jet, viewed together for the first time, was presented.
Detection of gravitational waves from merging black holes
On 14 September 2015, the LIGO gravitational wave observatory made the first-ever successful direct observation of gravitational waves. The signal was consistent with theoretical predictions for the gravitational waves produced by the merger of two black holes: one with about 36 solar masses, and the other around 29 solar masses. This observation provides the most concrete evidence for the existence of black holes to date. For instance, the gravitational wave signal suggests that the separation of the two objects before the merger was just 350 km, or roughly four times the Schwarzschild radius corresponding to the inferred masses. The objects must therefore have been extremely compact, leaving black holes as the most plausible interpretation.
More importantly, the signal observed by LIGO also included the start of the post-merger ringdown, the signal produced as the newly formed compact object settles down to a stationary state. Arguably, the ringdown is the most direct way of observing a black hole. From the LIGO signal, it is possible to extract the frequency and damping time of the dominant mode of the ringdown. From these, it is possible to infer the mass and angular momentum of the final object, which match independent predictions from numerical simulations of the merger. The frequency and decay time of the dominant mode are determined by the geometry of the photon sphere. Hence, observation of this mode confirms the presence of a photon sphere; however, it cannot exclude possible exotic alternatives to black holes that are compact enough to have a photon sphere.
The observation also provides the first observational evidence for the existence of stellar-mass black hole binaries. Furthermore, it is the first observational evidence of stellar-mass black holes weighing 25 solar masses or more.
Since then, many more gravitational wave events have been observed.
Stars orbiting Sagittarius A*
The proper motions of stars near the centre of our own Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*. By fitting their motions to Keplerian orbits, the astronomers were able to infer, in 1998, that an object of a few million solar masses must be contained in a volume with a radius of 0.02 light-years to cause the motions of those stars.
Since then, one of the stars—called S2—has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass to about 4.3 million solar masses and a radius of less than 0.002 light-years for the object causing the orbital motion of those stars. The upper limit on the object's size is still too large to test whether it is smaller than its Schwarzschild radius. Nevertheless, these observations strongly suggest that the central object is a supermassive black hole as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes.
Accretion of matter
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object. Artists' impressions such as the accompanying representation of a black hole with corona commonly depict the black hole as if it were a flat-space body hiding the part of the disk just behind it, but in reality gravitational lensing would greatly distort the image of the accretion disk.
Within such a disk, friction would cause angular momentum to be transported outward, allowing matter to fall farther inward, thus releasing potential energy and increasing the temperature of the gas.
When the accreting object is a neutron star or a black hole, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the compact object. The resulting friction is so significant that it heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays). These bright X-ray sources may be detected by telescopes. This process of accretion is one of the most efficient energy-producing processes known. Up to 40% of the rest mass of the accreted material can be emitted as radiation. In nuclear fusion only about 0.7% of the rest mass will be emitted as energy. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data.
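To put those efficiencies in perspective, a quick comparison (an illustrative sketch, not part of the original text; the 40% and 0.7% figures are the ones quoted in the paragraph above) of the energy released per kilogram of material:

```python
c = 2.998e8                      # speed of light, m/s
rest_mass_energy = c**2          # J per kg, from E = m c^2

for label, efficiency in [("accretion onto a black hole (upper limit)", 0.40),
                          ("hydrogen fusion", 0.007)]:
    print(f"{label}: {efficiency * rest_mass_energy:.2e} J per kg of material")
```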
As such, many of the universe's more energetic phenomena have been attributed to the accretion of matter on black holes. In particular, active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. Similarly, X-ray binaries are generally accepted to be binary star systems in which one of the two stars is a compact object accreting matter from its companion. It has also been suggested that some ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes.
Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation.
In November 2011 the first direct observation of a quasar accretion disk around a supermassive black hole was reported.
X-ray binaries
X-ray binaries are binary star systems that emit a majority of their radiation in the X-ray part of the spectrum. These X-ray emissions are generally thought to result when one of the stars (compact object) accretes matter from another (regular) star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and to determine if it might be a black hole.
If such a system emits signals that can be directly traced back to the compact object, it cannot be a black hole. The absence of such a signal does not, however, exclude the possibility that the compact object is a neutron star. By studying the companion star it is often possible to obtain the orbital parameters of the system and to obtain an estimate for the mass of the compact object. If this is much larger than the Tolman–Oppenheimer–Volkoff limit (the maximum mass a star can have without collapsing) then the object cannot be a neutron star and is generally expected to be a black hole.
The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Some doubt remained, due to the uncertainties that result from the companion star being much heavier than the candidate black hole. Currently, better candidates for black holes are found in a class of X-ray binaries called soft X-ray transients. In this class of system, the companion star is of relatively low mass allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star during this period. One of the best such candidates is V404 Cygni.
Quasi-periodic oscillations
The X-ray emissions from accretion disks sometimes flicker at certain frequencies. These signals are called quasi-periodic oscillations and are thought to be caused by material moving along the inner edge of the accretion disk (the innermost stable circular orbit). As such their frequency is linked to the mass of the compact object. They can thus be used as an alternative way to determine the mass of candidate black holes.
Galactic nuclei
Astronomers use the term "active galaxy" to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the activity in these active galactic nuclei (AGN) may be explained by the presence of supermassive black holes, which can be millions of times more massive than stellar ones. The models of these AGN consist of a central black hole that may be millions or billions of times more massive than the Sun; a disk of interstellar gas and dust called an accretion disk; and two jets perpendicular to the accretion disk.
Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been more carefully studied in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, M32, M87, NGC 3115, NGC 3377, NGC 4258, NGC 4889, NGC 1277, OJ 287, APM 08279+5255 and the Sombrero Galaxy.
It is now widely accepted that the centre of nearly every galaxy, not just active ones, contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
Microlensing
Another way the black hole nature of an object may be tested is through observation of effects caused by a strong gravitational field in its vicinity. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, much as light passing through an optical lens. Observations have been made of weak gravitational lensing, in which light rays are deflected by only a few arcseconds. Microlensing occurs when the sources are unresolved and the observer sees a small brightening. The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole.
Another possibility for observing gravitational lensing by a black hole would be to observe stars orbiting the black hole. There are several candidates for such an observation in orbit around Sagittarius A*.
Alternatives
The evidence for stellar black holes strongly relies on the existence of an upper limit for the mass of a neutron star. The size of this limit heavily depends on the assumptions made about the properties of dense matter. New exotic phases of matter could push up this bound. A phase of free quarks at high density might allow the existence of dense quark stars, and some supersymmetric models predict the existence of Q stars. Some extensions of the standard model posit the existence of preons as fundamental building blocks of quarks and leptons, which could hypothetically form preon stars. These hypothetical models could potentially explain a number of observations of stellar black hole candidates. However, it can be shown from arguments in general relativity that any such object will have a maximum mass.
Since the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass, supermassive black holes are much less dense than stellar black holes. The average density of a sufficiently massive supermassive black hole can be comparable to that of water. Consequently, the physics of matter forming a supermassive black hole is much better understood and the possible alternative explanations for supermassive black hole observations are much more mundane. For example, a supermassive black hole could be modelled by a large cluster of very dark objects. However, such alternatives are typically not stable enough to explain the supermassive black hole candidates.
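The scaling mentioned here follows from the Schwarzschild radius growing linearly with mass, so the mean density inside it falls as 1/M². The sketch below (added for illustration; the specific masses are example values, not figures from the original text) makes this concrete:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def mean_density(mass_kg: float) -> float:
    """Mass divided by the volume of a sphere of Schwarzschild radius, in kg/m^3."""
    r_s = 2 * G * mass_kg / c**2
    return mass_kg / (4 / 3 * math.pi * r_s**3)

print(f"10 solar masses:    {mean_density(10 * M_sun):.2e} kg/m^3")     # far denser than any star
print(f"1.4e8 solar masses: {mean_density(1.4e8 * M_sun):.0f} kg/m^3")  # roughly the density of water
```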
The evidence for the existence of stellar and supermassive black holes implies that in order for black holes not to form, general relativity must fail as a theory of gravity, perhaps due to the onset of quantum mechanical corrections. A much anticipated feature of a theory of quantum gravity is that it will not feature singularities or event horizons and thus black holes would not be real artefacts. For example, in the fuzzball model based on string theory, the individual states of a black hole solution do not generally have an event horizon or singularity, but for a classical/semiclassical observer the statistical average of such states appears just as an ordinary black hole as deduced from general relativity.
A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. These include the gravastar, the black star and the related nestar, and the dark-energy star.
Open questions
Entropy and thermodynamics
In 1971, Hawking showed under general conditions that the total area of the event horizons of any collection of classical black holes can never decrease, even if they collide and merge. This result, now known as the second law of black hole mechanics, is remarkably similar to the second law of thermodynamics, which states that the total entropy of an isolated system can never decrease. As with classical objects at absolute zero temperature, it was assumed that black holes had zero entropy. If this were the case, the second law of thermodynamics would be violated by entropy-laden matter entering a black hole, resulting in a decrease in the total entropy of the universe. Therefore, Bekenstein proposed that a black hole should have an entropy, and that it should be proportional to its horizon area.
The link with the laws of thermodynamics was further strengthened by Hawking's discovery in 1974 that quantum field theory predicts that a black hole radiates blackbody radiation at a constant temperature. This seemingly causes a violation of the second law of black hole mechanics, since the radiation will carry away energy from the black hole causing it to shrink. The radiation also carries away entropy, and it can be proven under general assumptions that the sum of the entropy of the matter surrounding a black hole and one quarter of the area of the horizon as measured in Planck units is in fact always increasing. This allows the formulation of the first law of black hole mechanics as an analogue of the first law of thermodynamics, with the mass acting as energy, the surface gravity as temperature and the area as entropy.
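The proportionality described here is usually written as the Bekenstein–Hawking entropy, reproduced below in its standard textbook form for reference (the original text states the relation in words only):

```latex
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}}\, A}{4\, \ell_{\mathrm{P}}^{2}}
= \frac{k_{\mathrm{B}}\, c^{3} A}{4\, G \hbar},
\qquad
\ell_{\mathrm{P}} = \sqrt{\frac{G\hbar}{c^{3}}},
```

where A is the horizon area and ℓ_P the Planck length, so the entropy is one quarter of the horizon area measured in Planck units, as stated above.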
One puzzling feature is that the entropy of a black hole scales with its area rather than with its volume, since entropy is normally an extensive quantity that scales linearly with the volume of the system. This odd property led Gerard 't Hooft and Leonard Susskind to propose the holographic principle, which suggests that anything that happens in a volume of spacetime can be described by data on the boundary of that volume.
Although general relativity can be used to perform a semiclassical calculation of black hole entropy, this situation is theoretically unsatisfying. In statistical mechanics, entropy is understood as counting the number of microscopic configurations of a system that have the same macroscopic qualities, such as mass, charge, pressure, etc. Without a satisfactory theory of quantum gravity, one cannot perform such a computation for black holes. Some progress has been made in various approaches to quantum gravity. In 1995, Andrew Strominger and Cumrun Vafa showed that counting the microstates of a specific supersymmetric black hole in string theory reproduced the Bekenstein–Hawking entropy. Since then, similar results have been reported for different black holes both in string theory and in other approaches to quantum gravity like loop quantum gravity.
Information loss paradox
Because a black hole has only a few internal parameters, most of the information about the matter that went into forming the black hole is lost. Regardless of the type of matter which goes into a black hole, it appears that only information concerning the total mass, charge, and angular momentum are conserved. As long as black holes were thought to persist forever this information loss is not that problematic, as the information can be thought of as existing inside the black hole, inaccessible from the outside, but represented on the event horizon in accordance with the holographic principle. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information appears to be gone forever.
The question whether information is truly lost in black holes (the black hole information paradox) has divided the theoretical physics community. In quantum mechanics, loss of information corresponds to the violation of a property called unitarity, and it has been argued that loss of unitarity would also imply violation of conservation of energy, though this has also been disputed. Over recent years evidence has been building that indeed information and unitarity are preserved in a full quantum gravitational treatment of the problem.
One attempt to resolve the black hole information paradox is known as black hole complementarity. In 2012, the "firewall paradox" was introduced with the goal of demonstrating that black hole complementarity fails to solve the information paradox. According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will emit only a finite amount of information encoded within its Hawking radiation. According to research by physicists like Don Page and Leonard Susskind, there will eventually be a time by which an outgoing particle must be entangled with all the Hawking radiation the black hole has previously emitted.
This seemingly creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two other systems at the same time; yet here the outgoing particle appears to be entangled both with the infalling particle and, independently, with past Hawking radiation. In order to resolve this contradiction, physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or local quantum field theory. One possible solution, which violates the equivalence principle, is that a "firewall" destroys incoming particles at the event horizon. In general, which—if any—of these assumptions should be abandoned remains a topic of debate.
In science fiction
Christopher Nolan's 2014 science fiction epic Interstellar features a black hole known as Gargantua, which is the central object of a planetary system in a distant galaxy. Humanity accessed this system via a wormhole in the outer solar system, near Saturn.
See also
Black brane or Black string
Black Hole Initiative
Black hole starship
Black holes in fiction
Blanet
BTZ black hole
Golden binary
Hypothetical black hole (disambiguation)
Kugelblitz (astrophysics)
List of black holes
List of nearest black holes
Outline of black holes
Sonic black hole
Virtual black hole
Susskind-Hawking battle
Timeline of black hole physics
White hole
Planck star
Dark star (dark matter)
Notes
References
Sources
Further reading
Popular reading
University textbooks and monographs
Review papers
Lecture notes from 2005 SLAC Summer Institute.
External links
Stanford Encyclopedia of Philosophy: "Singularities and Black Holes" by Erik Curiel and Peter Bokulich.
Black Holes: Gravity's Relentless Pull – Interactive multimedia Web site about the physics and astronomy of black holes from the Space Telescope Science Institute (HubbleSite)
ESA's Black Hole Visualization
Frequently Asked Questions (FAQs) on Black Holes
Schwarzschild Geometry
Black holes - basic (NYT; April 2021)
Videos
16-year-long study tracks stars orbiting Sagittarius A*
Movie of Black Hole Candidate from Max Planck Institute
Computer visualisation of the signal detected by LIGO
Two Black Holes Merge into One (based upon the signal GW150914)
Galaxies
Theory of relativity
Concepts in astronomy
Articles containing video clips | Black hole | [
"Physics",
"Astronomy"
] | 12,394 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Concepts in astronomy",
"Galaxies",
"Unsolved problems in physics",
"Astrophysics",
"Density",
"Theory of relativity",
"Stellar phenomena",
"Astronomical objects"
] |
4,651 | https://en.wikipedia.org/wiki/Beta%20decay | In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely a proton is converted into a neutron by the emission of a positron with a neutrino in what is called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive.
Beta decay is a consequence of the weak force, which is characterized by relatively long decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by means of a virtual W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks.
Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released.
Description
The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino; while in beta plus (β+) decay, a proton is converted to a neutron and the process creates a positron and an electron neutrino. β+ decay is also known as positron emission.
Beta decay conserves a quantum number known as the lepton number, or the number of electrons and their associated neutrinos (other leptons are the muon and tau particles). These particles have lepton number +1, while their antiparticles have lepton number −1. Since a proton or neutron has lepton number zero, β+ decay (emission of a positron, or antielectron) must be accompanied by an electron neutrino, while β− decay (emission of an electron) must be accompanied by an electron antineutrino.
An example of electron emission (β− decay) is the decay of carbon-14 into nitrogen-14 with a half-life of about 5,730 years:
$^{14}_{6}\mathrm{C} \rightarrow\ ^{14}_{7}\mathrm{N} + e^- + \bar{\nu}_e$
In this form of decay, the original element becomes a new chemical element in a process known as nuclear transmutation. This new element has an unchanged mass number , but an atomic number that is increased by one. As in all nuclear decays, the decaying element (in this case ) is known as the parent nuclide while the resulting element (in this case ) is known as the daughter nuclide.
Another example is the decay of hydrogen-3 (tritium) into helium-3 with a half-life of about 12.3 years:
$^{3}_{1}\mathrm{H} \rightarrow\ ^{3}_{2}\mathrm{He} + e^- + \bar{\nu}_e$
An example of positron emission (β+ decay) is the decay of magnesium-23 into sodium-23 with a half-life of about 11.3 s:
$^{23}_{12}\mathrm{Mg} \rightarrow\ ^{23}_{11}\mathrm{Na} + e^+ + \nu_e$
β+ decay also results in nuclear transmutation, with the resulting element having an atomic number that is decreased by one.
The beta spectrum, or distribution of energy values for the beta particles, is continuous. The total energy of the decay process is divided between the electron, the antineutrino, and the recoiling nuclide. In the figure to the right, an example of an electron with 0.40 MeV energy from the beta decay of 210Bi is shown. In this example, the total decay energy is 1.16 MeV, so the antineutrino has the remaining energy: . An electron at the far right of the curve would have the maximum possible kinetic energy, leaving the energy of the neutrino to be only its small rest mass.
History
Discovery and initial characterization
Radioactivity was discovered in 1896 by Henri Becquerel in uranium, and subsequently observed by Marie and Pierre Curie in thorium and in the new elements polonium and radium. In 1899, Ernest Rutherford separated radioactive emissions into two types: alpha and beta (now beta minus), based on penetration of objects and ability to cause ionization. Alpha rays could be stopped by thin sheets of paper or aluminium, whereas beta rays could penetrate several millimetres of aluminium. In 1900, Paul Villard identified a still more penetrating type of radiation, which Rutherford identified as a fundamentally new type in 1903 and termed gamma rays. Alpha, beta, and gamma are the first three letters of the Greek alphabet.
In 1900, Becquerel measured the mass-to-charge ratio () for beta particles by the method of J.J. Thomson used to study cathode rays and identify the electron. He found that for a beta particle is the same as for Thomson's electron, and therefore suggested that the beta particle is in fact an electron.
In 1901, Rutherford and Frederick Soddy showed that alpha and beta radioactivity involves the transmutation of atoms into atoms of other chemical elements. In 1913, after the products of more radioactive decays were known, Soddy and Kazimierz Fajans independently proposed their radioactive displacement law, which states that beta (i.e., ) emission from one element produces another element one place to the right in the periodic table, while alpha emission produces an element two places to the left.
Neutrinos
The study of beta decay provided the first physical evidence for the existence of the neutrino. In both alpha and gamma decay, the resulting alpha or gamma particle has a narrow energy distribution, since the particle carries the energy from the difference between the initial and final nuclear states. However, the kinetic energy distribution, or spectrum, of beta particles measured by Lise Meitner and Otto Hahn in 1911 and by Jean Danysz in 1913 showed multiple lines on a diffuse background. These measurements offered the first hint that beta particles have a continuous spectrum. In 1914, James Chadwick used a magnetic spectrometer with one of Hans Geiger's new counters to make more accurate measurements which showed that the spectrum was continuous. The results, which appeared to be in contradiction to the law of conservation of energy, were validated by means of calorimetric measurements in 1929 by Lise Meitner and Wilhelm Orthmann. If beta decay were simply electron emission as assumed at the time, then the energy of the emitted electron should have a particular, well-defined value. For beta decay, however, the observed broad distribution of energies suggested that energy is lost in the beta decay process. This spectrum was puzzling for many years.
A second problem is related to the conservation of angular momentum. Molecular band spectra showed that the nuclear spin of nitrogen-14 is 1 (i.e., equal to the reduced Planck constant) and more generally that the spin is integral for nuclei of even mass number and half-integral for nuclei of odd mass number. This was later explained by the proton-neutron model of the nucleus. Beta decay leaves the mass number unchanged, so the change of nuclear spin must be an integer. However, the electron spin is 1/2, hence angular momentum would not be conserved if beta decay were simply electron emission.
From 1920 to 1927, Charles Drummond Ellis (along with Chadwick and colleagues) further established that the beta decay spectrum is continuous. In 1933, Ellis and Nevill Mott obtained strong evidence that the beta spectrum has an effective upper bound in energy. Niels Bohr had suggested that the beta spectrum could be explained if conservation of energy was true only in a statistical sense, thus this principle might be violated in any given decay. However, the upper bound in beta energies determined by Ellis and Mott ruled out that notion. Now, the problem of how to account for the variability of energy in known beta decay products, as well as for conservation of momentum and angular momentum in the process, became acute.
In a famous letter written in 1930, Wolfgang Pauli attempted to resolve the beta-particle energy conundrum by suggesting that, in addition to electrons and protons, atomic nuclei also contained an extremely light neutral particle, which he called the neutron. He suggested that this "neutron" was also emitted during beta decay (thus accounting for the known missing energy, momentum, and angular momentum), but it had simply not yet been observed. In 1931, Enrico Fermi renamed Pauli's "neutron" the "neutrino" ('little neutral one' in Italian). In 1933, Fermi published his landmark theory for beta decay, where he applied the principles of quantum mechanics to matter particles, supposing that they can be created and annihilated, just as the light quanta in atomic transitions. Thus, according to Fermi, neutrinos are created in the beta-decay process, rather than contained in the nucleus; the same happens to electrons. The neutrino interaction with matter was so weak that detecting it proved a severe experimental challenge. Further indirect evidence of the existence of the neutrino was obtained by observing the recoil of nuclei that emitted such a particle after absorbing an electron. Neutrinos were finally detected directly in 1956 by the American physicists Clyde Cowan and Frederick Reines in the Cowan–Reines neutrino experiment. The properties of neutrinos were (with a few minor modifications) as predicted by Pauli and Fermi.
decay and electron capture
In 1934, Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles to effect the nuclear reaction + → + , and observed that the product isotope emits a positron identical to those found in cosmic rays (discovered by Carl David Anderson in 1932). This was the first example of decay (positron emission), which they termed artificial radioactivity since is a short-lived nuclide which does not exist in nature. In recognition of their discovery, the couple were awarded the Nobel Prize in Chemistry in 1935.
The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed in 1937 by Luis Alvarez, in the nuclide 48V. Alvarez went on to study electron capture in 67Ga and other nuclides.
Non-conservation of parity
In 1956, Tsung-Dao Lee and Chen Ning Yang noticed that there was no evidence that parity was conserved in weak interactions, and so they postulated that this symmetry may not be preserved by the weak force. They sketched the design for an experiment for testing conservation of parity in the laboratory. Later that year, Chien-Shiung Wu and coworkers conducted the Wu experiment showing an asymmetrical beta decay of cobalt-60 at cold temperatures that proved that parity is not conserved in beta decay. This surprising result overturned long-held assumptions about parity and the weak force. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957. However, Wu, who was female, was not awarded the Nobel Prize.
β− decay
In β− decay, the weak interaction converts an atomic nucleus into a nucleus with atomic number increased by one, while emitting an electron (e−) and an electron antineutrino (ν̄e). β− decay generally occurs in neutron-rich nuclei. The generic equation is:
$^{A}_{Z}\mathrm{X} \rightarrow\ ^{A}_{Z+1}\mathrm{X'} + e^- + \bar{\nu}_e$
where A and Z are the mass number and atomic number of the decaying nucleus, and X and X′ are the initial and final elements, respectively.
Another example is when the free neutron (n) decays by β− decay into a proton (p):
$\mathrm{n} \rightarrow \mathrm{p} + e^- + \bar{\nu}_e$.
At the fundamental level (as depicted in the Feynman diagram on the right), this is caused by the conversion of the negatively charged (−1/3 e) down quark to the positively charged (+2/3 e) up quark by emission of a virtual W− boson; the W− boson subsequently decays into an electron and an electron antineutrino:
$\mathrm{d} \rightarrow \mathrm{u} + e^- + \bar{\nu}_e$.
β+ decay
In β+ decay, or positron emission, the weak interaction converts an atomic nucleus into a nucleus with atomic number decreased by one, while emitting a positron (e+) and an electron neutrino (νe). β+ decay generally occurs in proton-rich nuclei. The generic equation is:
$^{A}_{Z}\mathrm{X} \rightarrow\ ^{A}_{Z-1}\mathrm{X'} + e^+ + \nu_e$
This may be considered as the decay of a proton inside the nucleus to a neutron:
$\mathrm{p} \rightarrow \mathrm{n} + e^+ + \nu_e$
However, β+ decay cannot occur in an isolated proton because it requires energy, due to the mass of the neutron being greater than the mass of the proton. β+ decay can only happen inside nuclei when the daughter nucleus has a greater binding energy (and therefore a lower total energy) than the mother nucleus. The difference between these energies goes into the reaction of converting a proton into a neutron, a positron, and a neutrino and into the kinetic energy of these particles. This process is opposite to negative beta decay, in that the weak interaction converts a proton into a neutron by converting an up quark into a down quark, resulting in the emission of a W+ boson or the absorption of a W− boson. When a W+ boson is emitted, it decays into a positron and an electron neutrino:
$\mathrm{u} \rightarrow \mathrm{d} + e^+ + \nu_e$.
Electron capture (K-capture/L-capture)
In all cases where β+ decay (positron emission) of a nucleus is allowed energetically, so too is electron capture allowed. This is a process during which a nucleus captures one of its atomic electrons, resulting in the emission of a neutrino:
$^{A}_{Z}\mathrm{X} + e^- \rightarrow\ ^{A}_{Z-1}\mathrm{X'} + \nu_e$
An example of electron capture is one of the decay modes of krypton-81 into bromine-81:
$^{81}_{36}\mathrm{Kr} + e^- \rightarrow\ ^{81}_{35}\mathrm{Br} + \nu_e$
All emitted neutrinos are of the same energy. In proton-rich nuclei where the energy difference between the initial and final states is less than 2mec² (twice the electron rest energy), β+ decay is not energetically possible, and electron capture is the sole decay mode.
If the captured electron comes from the innermost shell of the atom, the K-shell, which has the highest probability to interact with the nucleus, the process is called K-capture. If it comes from the L-shell, the process is called L-capture, etc.
Electron capture is a competing (simultaneous) decay process for all nuclei that can undergo β+ decay. The converse, however, is not true: electron capture is the only type of decay that is allowed in proton-rich nuclides that do not have sufficient energy to emit a positron and neutrino.
Nuclear transmutation
If the proton and neutron are part of an atomic nucleus, the above described decay processes transmute one chemical element into another. For example:
$^{137}_{55}\mathrm{Cs} \rightarrow\ ^{137}_{56}\mathrm{Ba} + e^- + \bar{\nu}_e$   (beta minus decay)
$^{22}_{11}\mathrm{Na} \rightarrow\ ^{22}_{10}\mathrm{Ne} + e^+ + \nu_e$   (beta plus decay)
$^{22}_{11}\mathrm{Na} + e^- \rightarrow\ ^{22}_{10}\mathrm{Ne} + \nu_e$   (electron capture)
Beta decay does not change the number (A) of nucleons in the nucleus, but changes only its charge Z. Thus the set of all nuclides with the same A can be introduced; these isobaric nuclides may turn into each other via beta decay. For a given A there is one that is most stable. It is said to be beta stable, because it presents a local minimum of the mass excess: if such a nucleus has numbers (A, Z), the neighbour nuclei (A, Z−1) and (A, Z+1) have higher mass excess and can beta decay into (A, Z), but not vice versa. For all odd mass numbers A, there is only one known beta-stable isobar. For even A, there are up to three different beta-stable isobars experimentally known; for example, 96Zr, 96Mo, and 96Ru are all beta-stable. There are about 350 known beta-decay stable nuclides.
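As a small illustration of the "local minimum of the mass excess" criterion, the Python sketch below scans an isobaric chain and flags every nuclide whose two beta-decay neighbours both have a higher mass excess. The A = 64 chain is used as an example; the mass-excess values are rounded figures quoted only for illustration, not authoritative data.

```python
# Approximate mass excesses (MeV) for the A = 64 isobaric chain, in order of increasing Z.
# Values are rounded, illustrative numbers.
mass_excess_A64 = {
    "64Co": -59.8,
    "64Ni": -67.1,
    "64Cu": -65.4,
    "64Zn": -66.0,
    "64Ga": -58.8,
}

def beta_stable(chain):
    """Return nuclides that are local minima of the mass excess along an isobaric chain.

    The chain must be ordered by increasing Z; the first and last entries are skipped
    because one of their neighbours is not listed here.
    """
    names = list(chain)
    stable = []
    for i in range(1, len(names) - 1):
        left, mid, right = chain[names[i - 1]], chain[names[i]], chain[names[i + 1]]
        if mid < left and mid < right:      # both beta-decay neighbours are heavier
            stable.append(names[i])
    return stable

print(beta_stable(mass_excess_A64))  # ['64Ni', '64Zn']; 64Cu, sitting between them, can decay either way
```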
Competition of beta decay types
Usually unstable nuclides are clearly either "neutron rich" or "proton rich", with the former undergoing beta decay and the latter undergoing electron capture (or more rarely, due to the higher energy requirements, positron decay). However, in a few cases of odd-proton, odd-neutron radionuclides, it may be energetically favorable for the radionuclide to decay to an even-proton, even-neutron isobar either by undergoing beta-positive or beta-negative decay. An often-cited example is the single isotope 64Cu (29 protons, 35 neutrons), which illustrates three types of beta decay in competition. Copper-64 has a half-life of about 12.7 hours. This isotope has one unpaired proton and one unpaired neutron, so either the proton or the neutron can decay. This particular nuclide (though not all nuclides in this situation) is almost equally likely to decay through proton decay by positron emission (β+) or electron capture (ε) to 64Ni, as it is through neutron decay by electron emission (β−) to 64Zn.
Stability of naturally occurring nuclides
Most naturally occurring nuclides on earth are beta stable. Nuclides that are not beta stable have half-lives ranging from under a second to periods of time significantly greater than the age of the universe. One common example of a long-lived isotope is the odd-proton odd-neutron nuclide 40K, which undergoes all three types of beta decay (β−, β+ and electron capture) with a half-life of about 1.25×10⁹ years.
Conservation rules for beta decay
Baryon number is conserved
$B = \frac{n_q - n_{\bar{q}}}{3}$
where
$n_q$ is the number of constituent quarks, and
$n_{\bar{q}}$ is the number of constituent antiquarks.
Beta decay just changes a neutron to a proton or, in the case of positive beta decay (or electron capture), a proton to a neutron, so the number of individual quarks doesn't change. It is only the baryon flavor that changes, here labelled as the isospin.
Up and down quarks have total isospin $I = \tfrac{1}{2}$ and isospin projections $I_z = +\tfrac{1}{2}$ (up quark) and $I_z = -\tfrac{1}{2}$ (down quark).
All other quarks have $I = 0$.
In general,
$I_z = \tfrac{1}{2}(n_u - n_d)$
Lepton number is conserved
$L \equiv n_{\ell} - n_{\bar{\ell}},$
so all leptons are assigned a value of +1, antileptons −1, and non-leptonic particles 0.
Angular momentum
For allowed decays, the net orbital angular momentum is zero, hence only spin quantum numbers are considered.
The electron and antineutrino are fermions, spin-1/2 objects, therefore they may couple to total spin S = 1 (parallel) or S = 0 (anti-parallel).
For forbidden decays, orbital angular momentum must also be taken into consideration.
Energy release
The Q value is defined as the total energy released in a given nuclear decay. In beta decay, Q is therefore also the sum of the kinetic energies of the emitted beta particle, neutrino, and recoiling nucleus. (Because of the large mass of the nucleus compared to that of the beta particle and neutrino, the kinetic energy of the recoiling nucleus can generally be neglected.) Beta particles can therefore be emitted with any kinetic energy ranging from 0 to Q. A typical Q is around 1 MeV, but can range from a few keV to a few tens of MeV.
Since the rest mass of the electron is 511 keV, the most energetic beta particles are ultrarelativistic, with speeds very close to the speed of light.
In the case of 187Re, the maximum speed of the beta particle is only 9.8% of the speed of light.
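The speed quoted for a given beta energy follows from special relativity: for kinetic energy T and electron rest energy mec² ≈ 511 keV, v/c = sqrt(1 − (1 + T/mec²)⁻²). The short Python sketch below reproduces both regimes mentioned above; the 2.47 keV endpoint used for the low-energy case is an approximate, illustrative figure.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def beta_speed_fraction(kinetic_energy_kev):
    """Speed of a beta particle as a fraction of c, from its kinetic energy."""
    gamma = 1.0 + kinetic_energy_kev / M_E_C2_KEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

# A typical ~1 MeV beta versus a very low endpoint of roughly 2.47 keV
# (approximately the rhenium-187 case); both energies are illustrative inputs.
for t_kev in (1000.0, 2.47):
    print(f"T = {t_kev:7.2f} keV -> v/c = {beta_speed_fraction(t_kev):.3f}")

# Output: about 0.94 c for 1 MeV (ultrarelativistic) and about 0.098 c (9.8%) for 2.47 keV.
```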
The following table gives some examples:
β− decay
Consider the generic equation for beta decay
$^{A}_{Z}\mathrm{X} \rightarrow\ ^{A}_{Z+1}\mathrm{X'} + e^- + \bar{\nu}_e$.
The Q value for this decay is
$Q = \left[ m_N(^{A}_{Z}\mathrm{X}) - m_N(^{A}_{Z+1}\mathrm{X'}) - m_e - m_{\bar{\nu}_e} \right] c^2,$
where $m_N(^{A}_{Z}\mathrm{X})$ is the mass of the nucleus of the $^{A}_{Z}\mathrm{X}$ atom, $m_e$ is the mass of the electron, and $m_{\bar{\nu}_e}$ is the mass of the electron antineutrino. In other words, the total energy released is the mass energy of the initial nucleus, minus the mass energy of the final nucleus, electron, and antineutrino. The mass of the nucleus is related to the standard atomic mass $m(^{A}_{Z}\mathrm{X})$ by
$m(^{A}_{Z}\mathrm{X})\, c^2 = m_N(^{A}_{Z}\mathrm{X})\, c^2 + Z m_e c^2 - \sum_{i=1}^{Z} B_i.$
That is, the total atomic mass is the mass of the nucleus, plus the mass of the electrons, minus the sum of all electron binding energies $B_i$ for the atom. This equation is rearranged to find $m_N(^{A}_{Z}\mathrm{X})$, and $m_N(^{A}_{Z+1}\mathrm{X'})$ is found similarly. Substituting these nuclear masses into the Q-value equation, while neglecting the nearly-zero antineutrino mass and the difference in electron binding energies, which is very small for high-Z atoms, we have
$Q = \left[ m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z+1}\mathrm{X'}) \right] c^2.$
This energy is carried away as kinetic energy by the electron and antineutrino.
Because the reaction will proceed only when the Q value is positive, β− decay can occur when the mass of atom $^{A}_{Z}\mathrm{X}$ is greater than the mass of atom $^{A}_{Z+1}\mathrm{X'}$.
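For a concrete number, the sketch below applies the atomic-mass form of the Q value to the carbon-14 decay quoted earlier; the atomic masses and the conversion factor are rounded, illustrative inputs.

```python
U_TO_KEV = 931494.0          # 1 atomic mass unit in keV/c^2 (rounded)

# Approximate atomic masses in u (illustrative values).
m_c14 = 14.003242            # carbon-14 (parent)
m_n14 = 14.003074            # nitrogen-14 (daughter)

# Q = [m(parent) - m(daughter)] c^2 for beta-minus decay, in the atomic-mass form.
q_kev = (m_c14 - m_n14) * U_TO_KEV
print(f"Q(14C -> 14N) = {q_kev:.0f} keV")   # roughly 156 keV, shared by the electron and antineutrino
```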
β+ decay
The equations for β+ decay are similar, with the generic equation
$^{A}_{Z}\mathrm{X} \rightarrow\ ^{A}_{Z-1}\mathrm{X'} + e^+ + \nu_e,$
giving
$Q = \left[ m_N(^{A}_{Z}\mathrm{X}) - m_N(^{A}_{Z-1}\mathrm{X'}) - m_e - m_{\nu_e} \right] c^2.$
However, in this equation, the electron masses do not cancel, and we are left with
$Q = \left[ m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z-1}\mathrm{X'}) - 2 m_e \right] c^2.$
Because the reaction will proceed only when the Q value is positive, β+ decay can occur when the mass of atom $^{A}_{Z}\mathrm{X}$ exceeds that of $^{A}_{Z-1}\mathrm{X'}$ by at least twice the mass of the electron.
Electron capture
The analogous calculation for electron capture must take into account the binding energy of the electrons. This is because the atom will be left in an excited state after capturing the electron, and the binding energy of the captured innermost electron is significant. Using the generic equation for electron capture
$^{A}_{Z}\mathrm{X} + e^- \rightarrow\ ^{A}_{Z-1}\mathrm{X'} + \nu_e,$
we have
$Q = \left[ m_N(^{A}_{Z}\mathrm{X}) + m_e - m_N(^{A}_{Z-1}\mathrm{X'}) - m_{\nu_e} \right] c^2,$
which simplifies to
$Q = \left[ m(^{A}_{Z}\mathrm{X}) - m(^{A}_{Z-1}\mathrm{X'}) \right] c^2 - B_n,$
where $B_n$ is the binding energy of the captured electron.
Because the binding energy of the electron is much less than the mass of the electron, nuclei that can undergo β+ decay can always also undergo electron capture, but the reverse is not true.
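The practical consequence of the two formulas, that electron capture becomes possible a full 2mec² (about 1.022 MeV) before positron emission does, can be checked numerically. The sketch below uses approximate atomic masses for sodium-22 and neon-22 as illustrative inputs and neglects the electron binding-energy term.

```python
U_TO_MEV = 931.494           # 1 atomic mass unit in MeV/c^2 (rounded)
M_E_MEV = 0.511              # electron rest energy in MeV

# Approximate atomic masses in u (illustrative values).
m_na22 = 21.994437           # sodium-22 (parent)
m_ne22 = 21.991385           # neon-22 (daughter)

q_ec = (m_na22 - m_ne22) * U_TO_MEV        # electron capture (binding energy neglected)
q_beta_plus = q_ec - 2 * M_E_MEV           # positron emission needs an extra 2 m_e c^2

print(f"Q(EC) = {q_ec:.3f} MeV")           # about 2.84 MeV
print(f"Q(b+) = {q_beta_plus:.3f} MeV")    # about 1.82 MeV; positive, so both decay modes are open here
```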
Beta emission spectrum
Beta decay can be considered as a perturbation as described in quantum mechanics, and thus Fermi's Golden Rule can be applied. This leads to an expression for the kinetic energy spectrum N(T) of emitted betas as follows:
$N(T) = C_L(T)\, F(Z, T)\, p\, E\, (Q - T)^2$
where T is the kinetic energy, $C_L$ is a shape function that depends on the forbiddenness of the decay (it is constant for allowed decays), $F(Z, T)$ is the Fermi function (see below) with Z the charge of the final-state nucleus, $E = T + m_e c^2$ is the total energy, $p = \sqrt{E^2 - m_e^2 c^4}/c$ is the momentum, and Q is the Q value of the decay. The kinetic energy of the emitted neutrino is given approximately by Q minus the kinetic energy of the beta.
As an example, the beta decay spectrum of 210Bi (originally called RaE) is shown to the right.
Fermi function
The Fermi function that appears in the beta spectrum formula accounts for the Coulomb attraction / repulsion between the emitted beta and the final state nucleus. Approximating the associated wavefunctions to be spherically symmetric, the Fermi function can be analytically calculated to be:
where is the final momentum, Γ the Gamma function, and (if is the fine-structure constant and the radius of the final state nucleus) , (+ for electrons, − for positrons), and .
For non-relativistic betas (), this expression can be approximated by:
Other approximations can be found in the literature.
Kurie plot
A Kurie plot (also known as a Fermi–Kurie plot) is a graph used in studying beta decay developed by Franz N. D. Kurie, in which the square root of the number of beta particles whose momenta (or energy) lie within a certain narrow range, divided by the Fermi function, is plotted against beta-particle energy. It is a straight line for allowed transitions and some forbidden transitions, in accord with the Fermi beta-decay theory. The energy-axis (x-axis) intercept of a Kurie plot corresponds to the maximum energy imparted to the electron/positron (the decay's value). With a Kurie plot one can find the limit on the effective mass of a neutrino.
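A quick numerical check of the Kurie-plot idea is sketched below, under the simplifying assumptions of an allowed decay and a Fermi function set to 1 (Coulomb corrections ignored); the 1000 keV endpoint is an arbitrary illustrative choice. Fitting sqrt(N / (pE)) against T gives a straight line whose energy intercept recovers Q:

```python
import numpy as np

M_E = 511.0          # electron rest energy, keV
Q = 1000.0           # illustrative endpoint energy, keV

T = np.linspace(1.0, Q - 1.0, 200)            # beta kinetic energies
E = T + M_E                                    # total energy
p = np.sqrt(E**2 - M_E**2)                     # momentum (times c), keV
N = p * E * (Q - T)**2                         # allowed spectrum shape with F(Z, T) = 1

kurie = np.sqrt(N / (p * E))                   # Kurie variable, proportional to (Q - T)
slope, intercept = np.polyfit(T, kurie, 1)     # fit a straight line through the points
print(f"Recovered endpoint Q = {-intercept / slope:.1f} keV")   # comes back as 1000 keV
```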
Helicity (polarization) of neutrinos, electrons and positrons emitted in beta decay
After the discovery of parity non-conservation (see History), it was found that, in beta decay, electrons are emitted mostly with negative helicity, i.e., they move, naively speaking, like left-handed screws driven into a material (they have negative longitudinal polarization). Conversely, positrons have mostly positive helicity, i.e., they move like right-handed screws. Neutrinos (emitted in positron decay) have negative helicity, while antineutrinos (emitted in electron decay) have positive helicity.
The higher the energy of the particles, the higher their polarization.
Types of beta decay transitions
Beta decays can be classified according to the angular momentum (L value) and total spin (S value) of the emitted radiation. Since total angular momentum must be conserved, including orbital and spin angular momentum, beta decay occurs by a variety of quantum state transitions to various nuclear angular momentum or spin states, known as "Fermi" or "Gamow–Teller" transitions. When beta decay particles carry no angular momentum (L = 0), the decay is referred to as "allowed", otherwise it is "forbidden".
Other decay modes, which are rare, are known as bound state decay and double beta decay.
Fermi transitions
A Fermi transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin $S = 0$, leading to an angular momentum change $\Delta J = 0$ between the initial and final states of the nucleus (assuming an allowed transition). In the non-relativistic limit, the nuclear part of the operator for a Fermi transition is given by
$\mathcal{O}_{F} = G_V \sum_{a} \hat{\tau}_{a\pm},$
with $G_V$ the weak vector coupling constant, $\hat{\tau}_{\pm}$ the isospin raising and lowering operators, and $a$ running over all protons and neutrons in the nucleus.
Gamow–Teller transitions
A Gamow–Teller transition is a beta decay in which the spins of the emitted electron (positron) and anti-neutrino (neutrino) couple to total spin $S = 1$, leading to an angular momentum change $\Delta J = 0, \pm 1$ between the initial and final states of the nucleus (assuming an allowed transition).
In this case, the nuclear part of the operator is given by
$\mathcal{O}_{GT} = G_A \sum_{a} \hat{\sigma}_a \hat{\tau}_{a\pm},$
with $G_A$ the weak axial-vector coupling constant, and $\hat{\sigma}$ the spin Pauli matrices, which can produce a spin-flip in the decaying nucleon.
Forbidden transitions
When L > 0, the decay is referred to as "forbidden". Nuclear selection rules require high L values to be accompanied by changes in nuclear spin (J) and parity (π). The selection rules for the Lth forbidden transitions are:
$\Delta J = L - 1,\ L,\ L + 1; \qquad \Delta \pi = (-1)^{L},$
where $\Delta \pi = +1$ or $-1$ corresponds to no parity change or parity change, respectively. The special case of a transition between isobaric analogue states, where the structure of the final state is very similar to the structure of the initial state, is referred to as "superallowed" for beta decay, and proceeds very quickly. The following table lists the ΔJ and Δπ values for the first few values of L:
Rare decay modes
Bound-state β decay
A very small minority of free neutron decays (about four per million) are "two-body decays": the proton, electron and antineutrino are produced, but the electron fails to gain the 13.6 eV energy necessary to escape the proton, and therefore simply remains bound to it, as a neutral hydrogen atom. In this type of beta decay, in essence all of the neutron decay energy is carried off by the antineutrino.
For fully ionized atoms (bare nuclei), it is likewise possible for electrons to fail to escape the atom, and to be emitted from the nucleus into low-lying atomic bound states (orbitals). This cannot occur for neutral atoms whose low-lying bound states are already filled by electrons.
Bound-state β decays were predicted by Daudel, Jean, and Lecoin in 1947, and the phenomenon in fully ionized atoms was first observed for Dy in 1992 by Jung et al. of the Darmstadt Heavy-Ion Research Center. Though neutral Dy is stable, fully ionized Dy undergoes β decay into the K and L shells with a half-life of 47 days. The resulting nucleus – Ho – is stable only in this almost fully ionized state and will decay via electron capture into Dy in the neutral state. Likewise, while being stable in the neutral state, the fully ionized Tl undergoes bound-state β decay to Pb with a half-life of days. The half-lives of neutral Ho and Pb are respectively 4570 years and years. In addition, it is estimated that β decay is energetically impossible for natural atoms but theoretically possible when fully ionized also for 193Ir, 194Au, 202Tl, 215At, 243Am, and 246Bk.
Another possibility is that a fully ionized atom undergoes greatly accelerated β decay, as observed for Re by Bosch et al., also at Darmstadt. Neutral Re does undergo β decay, with half-life years, but for fully ionized Re this is shortened to only 32.9 years. This is because Re is energetically allowed to undergo β decay to the first-excited state in Os, a process energetically disallowed for natural Re. Similarly, neutral Pu undergoes β decay with a half-life of 14.3 years, but in its fully ionized state the beta-decay half-life of Pu decreases to 4.2 days. For comparison, the variation of decay rates of other nuclear processes due to chemical environment is less than 1%. Moreover, current mass determinations cannot decisively determine whether Rn is energetically possible to undergo β decay (the decay energy given in AME2020 is (−6 ± 8) keV), but in either case it is predicted that β will be greatly accelerated for fully ionized Rn.
Double beta decay
Some nuclei can undergo double beta decay (2β) where the charge of the nucleus changes by two units. Double beta decay is difficult to study, as it has an extremely long half-life. In nuclei for which both β decay and 2β are possible, the rarer 2β process is effectively impossible to observe. However, in nuclei where β decay is forbidden but 2β is allowed, the process can be seen and a half-life measured. Thus, 2β is usually studied only for beta stable nuclei. Like single beta decay, double beta decay does not change ; thus, at least one of the nuclides with some given has to be stable with regard to both single and double beta decay.
"Ordinary" 2β results in the emission of two electrons and two antineutrinos. If neutrinos are Majorana particles (i.e., they are their own antiparticles), then a decay known as neutrinoless double beta decay will occur. Most neutrino physicists believe that neutrinoless 2β has never been observed.
See also
Common beta emitters
Neutrino
Betavoltaics
Particle radiation
Radionuclide
Tritium illumination, a form of fluorescent lighting powered by beta decay
Pandemonium effect
Total absorption spectroscopy
References
Bibliography
External links
The Live Chart of Nuclides - IAEA with filter on decay type
Beta decay simulation
Nuclear physics
Radioactivity | Beta decay | [
"Physics",
"Chemistry"
] | 6,572 | [
"Radioactivity",
"Nuclear physics"
] |
4,662 | https://en.wikipedia.org/wiki/Bookkeeping | Bookkeeping is the recording of financial transactions, and is part of the process of accounting in business and other organizations. It involves preparing source documents for all transactions, operations, and other events of a business. Transactions include purchases, sales, receipts and payments by an individual person, organization or corporation. There are several standard methods of bookkeeping, including the single-entry and double-entry bookkeeping systems. While these may be viewed as "real" bookkeeping, any process for recording financial transactions is a bookkeeping process.
The person in an organisation who is employed to perform bookkeeping functions is usually called the bookkeeper (or book-keeper). They usually write the daybooks (which contain records of sales, purchases, receipts, and payments), and document each financial transaction, whether cash or credit, into the correct daybook—that is, petty cash book, suppliers ledger, customer ledger, etc.—and the general ledger. Thereafter, an accountant can create financial reports from the information recorded by the bookkeeper. The bookkeeper brings the books to the trial balance stage, from which an accountant may prepare financial reports for the organisation, such as the income statement and balance sheet.
History
The origin of book-keeping is lost in obscurity, but recent research indicates that methods of keeping accounts have existed from the remotest times of human life in cities. Babylonian records written with styli on small slabs of clay have been found dating to 2600 BC. Mesopotamian bookkeepers kept records on clay tablets that may date back as far as 7,000 years. Use of the modern double entry bookkeeping system was described by Luca Pacioli in 1494.
The term "waste book" was used in colonial America, referring to the documenting of daily transactions of receipts and expenditures. Records were made in chronological order, and for temporary use only. Daily records were then transferred to a daybook or account ledger to balance the accounts and to create a permanent journal; then the waste book could be discarded, hence the name.
Process
The primary purpose of bookkeeping is to record the financial effects of transactions. An important difference between a manual and an electronic accounting system is the former's latency between the recording of a financial transaction and its posting in the relevant account. This delay, which is absent in electronic accounting systems due to nearly instantaneous posting to relevant accounts, is characteristic of manual systems, and gave rise to the primary books of accounts—cash book, purchase book, sales book, etc.—for immediately documenting a financial transaction.
In the normal course of business, a document is produced each time a transaction occurs. Sales and purchases usually have invoices or receipts. Historically, deposit slips were produced when lodgements (deposits) were made to a bank account; and checks (spelled "cheques" in the UK and several other countries) were written to pay money out of the account. Nowadays such transactions are mostly made electronically. Bookkeeping first involves recording the details of all of these source documents into multi-column journals (also known as books of first entry or daybooks). For example, all credit sales are recorded in the sales journal; all cash payments are recorded in the cash payments journal. Each column in a journal normally corresponds to an account. In the single entry system, each transaction is recorded only once. Most individuals who balance their check-book each month are using such a system, and most personal-finance software follows this approach.
After a certain period, typically a month, each column in each journal is totalled to give a summary for that period. Using the rules of double-entry, these journal summaries are then transferred to their respective accounts in the ledger, or account book. For example, the entries in the Sales Journal are taken and a debit entry is made in each customer's account (showing that the customer now owes us money), and a credit entry might be made in the account for "Sale of class 2 widgets" (showing that this activity has generated revenue for us). This process of transferring summaries or individual transactions to the ledger is called posting. Once the posting process is complete, accounts kept using the "T" format (debits on the left side of the "T" and credits on the right side) undergo balancing, which is simply a process to arrive at the balance of the account.
As a partial check that the posting process was done correctly, a working document called an unadjusted trial balance is created. In its simplest form, this is a three-column list. Column One contains the names of those accounts in the ledger which have a non-zero balance. If an account has a debit balance, the balance amount is copied into Column Two (the debit column); if an account has a credit balance, the amount is copied into Column Three (the credit column). The debit column is then totalled, and then the credit column is totalled. The two totals must agree—which is not by chance—because under the double-entry rules, whenever there is a posting, the debits of the posting equal the credits of the posting. If the two totals do not agree, an error has been made, either in the journals or during the posting process. The error must be located and rectified, and the totals of the debit column and the credit column recalculated to check for agreement before any further processing can take place.
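The posting-and-balancing cycle described above is mechanical enough to show in a few lines of code. The Python sketch below (account names and amounts are invented for illustration) posts a handful of double-entry journal lines to a ledger and then produces an unadjusted trial balance, checking that total debits equal total credits:

```python
from collections import defaultdict

# Each journal line: (account, debit, credit). Every transaction is entered so that
# its debits equal its credits (the double-entry rule). Figures are illustrative.
journal = [
    ("Cash",                1000.0,    0.0),   # owner invests cash
    ("Owner's equity",         0.0, 1000.0),
    ("Inventory",            400.0,    0.0),   # buy widgets on credit
    ("Accounts payable",       0.0,  400.0),
    ("Accounts receivable",  250.0,    0.0),   # sell class 2 widgets on credit
    ("Sales",                  0.0,  250.0),
]

# Posting: transfer journal amounts into ledger accounts.
ledger = defaultdict(lambda: [0.0, 0.0])       # account -> [total debits, total credits]
for account, debit, credit in journal:
    ledger[account][0] += debit
    ledger[account][1] += credit

# Unadjusted trial balance: each account's net balance goes in the debit or credit column.
total_debits = total_credits = 0.0
for account, (dr, cr) in sorted(ledger.items()):
    balance = dr - cr
    debit_col, credit_col = (balance, 0.0) if balance >= 0 else (0.0, -balance)
    total_debits += debit_col
    total_credits += credit_col
    print(f"{account:22s} {debit_col:10.2f} {credit_col:10.2f}")

assert abs(total_debits - total_credits) < 1e-9, "posting error: columns do not agree"
print(f"{'TOTAL':22s} {total_debits:10.2f} {total_credits:10.2f}")
```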
Once the accounts balance, the accountant makes a number of adjustments and changes the balance amounts of some of the accounts. These adjustments must still obey the double-entry rule: for example, the inventory account and asset account might be changed to bring them into line with the actual numbers counted during a stocktake. At the same time, the expense account associated with use of inventory is adjusted by an equal and opposite amount. Other adjustments such as posting depreciation and prepayments are also done at this time. This results in a listing called the adjusted trial balance. It is the accounts in this list, and their corresponding debit or credit balances, that are used to prepare the financial statements.
Finally financial statements are drawn from the trial balance, which may include:
the income statement, also known as the statement of financial results, profit and loss account, or P&L
the balance sheet, also known as the statement of financial position
the cash flow statement
the statement of changes in equity, also known as the statement of total recognised gains and losses
Single-entry system
The primary bookkeeping record in single-entry bookkeeping is the cash book, which is similar to a checking account register (in UK: cheque account, current account), except all entries are allocated among several categories of income and expense accounts. Separate account records are maintained for petty cash, accounts payable and accounts receivable, and other relevant transactions such as inventory and travel expenses. To save time and avoid the errors of manual calculations, single-entry bookkeeping can be done today with do-it-yourself bookkeeping software.
Double-entry system
A double-entry bookkeeping system is a set of rules for recording financial information in a financial accounting system in which every transaction or event changes at least two different ledger accounts.
Daybooks
A daybook is a descriptive and chronological (diary-like) record of day-to-day financial transactions; it is also called a book of original entry. The daybook's details must be transcribed formally into journals to enable posting to ledgers. Daybooks include:
Sales daybook, for recording sales invoices.
Sales credits daybook, for recording sales credit notes.
Purchases daybook, for recording purchase invoices.
Purchases debits daybook, for recording purchase debit notes.
Cash daybook, usually known as the cash book, for recording all monies received and all monies paid out. It may be split into two daybooks: a receipts daybook documenting every money-amount received, and a payments daybook recording every payment made.
General Journal daybook, for recording journal entries.
Petty cash book
A petty cash book is a record of small-value purchases before they are later transferred to the ledger and final accounts; it is maintained by a petty or junior cashier. This type of cash book usually uses the imprest system: a certain amount of money is provided to the petty cashier by the senior cashier. This money is to cater for minor expenditures (hospitality, minor stationery, casual postage, and so on) and is reimbursed periodically on satisfactory explanation of how it was spent.
The balance of the petty cash book is an asset.
Journals
Journals are recorded in the general journal daybook. A journal is a formal and chronological record of financial transactions before their values are accounted for in the general ledger as debits and credits. A company can maintain one journal for all transactions, or keep several journals based on similar activity (e.g., sales, cash receipts, revenue, etc.), making transactions easier to summarize and reference later. For every debit journal entry recorded, there must be an equivalent credit journal entry to maintain a balanced accounting equation.
Ledgers
A ledger is a record of accounts. The ledger is a permanent summary of all amounts entered in supporting Journals which list individual transactions by date. These accounts are recorded separately, showing their beginning/ending balance. A journal lists financial transactions in chronological order, without showing their balance but showing how much is going to be entered in each account. A ledger takes each financial transaction from the journal and records it into the corresponding accounts. The ledger also determines the balance of every account, which is transferred into the balance sheet or the income statement. There are three different kinds of ledgers that deal with book-keeping:
Sales ledger, which deals mostly with the accounts receivable account. This ledger consists of the records of the financial transactions made by customers to the business.
Purchase ledger is the record of the company's purchasing transactions; it goes hand in hand with the Accounts Payable account.
General ledger, representing the original five, main accounts: assets, liabilities, equity, income, and expenses.
Abbreviations used in bookkeeping
Chart of accounts
A chart of accounts is a list of the accounts codes that can be identified with numeric, alphabetical, or alphanumeric codes allowing the account to be located in the general ledger. The equity section of the chart of accounts is based on the fact that the legal structure of the entity is of a particular legal type. Possibilities include sole trader, partnership, trust, and company.
Computerized bookkeeping
Computerized bookkeeping removes many of the paper "books" that are used to record the financial transactions of a business entity; instead, relational databases are used today, but typically, these still enforce the norms of bookkeeping including the single-entry and double-entry bookkeeping systems. Certified Public Accountants (CPAs) supervise the internal controls for computerized bookkeeping systems, which serve to minimize errors in documenting the numerous activities a business entity may initiate or complete over an accounting period.
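A minimal sketch of how a relational database can hold the same double-entry records is shown below, using Python's built-in sqlite3 module; the table layout, column names, and sample rows are invented for illustration rather than taken from any particular accounting package.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        type TEXT NOT NULL CHECK (type IN ('asset', 'liability', 'equity', 'income', 'expense'))
    );
    CREATE TABLE journal_lines (
        entry_id   INTEGER NOT NULL,            -- groups the lines of one transaction
        account_id INTEGER NOT NULL REFERENCES accounts(id),
        debit      REAL NOT NULL DEFAULT 0,
        credit     REAL NOT NULL DEFAULT 0
    );
""")

conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, "Cash", "asset"), (2, "Sales", "income")])

# One balanced transaction: a cash sale of 250.
conn.executemany("INSERT INTO journal_lines VALUES (?, ?, ?, ?)",
                 [(1, 1, 250.0, 0.0), (1, 2, 0.0, 250.0)])

# Trial balance straight from SQL: per-account debit and credit totals.
for name, dr, cr in conn.execute("""
        SELECT a.name, SUM(l.debit), SUM(l.credit)
        FROM journal_lines l JOIN accounts a ON a.id = l.account_id
        GROUP BY a.name ORDER BY a.name"""):
    print(f"{name:10s} {dr:8.2f} {cr:8.2f}")
```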
See also
Accounting
Comparison of accounting software
POS system: records sales and updates stock levels
Bookkeeping Associations
References
External links
Guide to the Account Book from Italy 1515–1520
Accounting systems
Accounting | Bookkeeping | [
"Technology"
] | 2,294 | [
"Information systems",
"Accounting systems"
] |
15,977,138 | https://en.wikipedia.org/wiki/Table%20of%20nuclides%20%28segmented%2C%20wide%29 | These isotope tables show all of the known isotopes of the chemical elements, arranged with increasing atomic number from left to right and increasing neutron number from top to bottom.
Half lives are indicated by the color of each isotope's cell (see color chart in each section). Colored borders indicate half lives of the most stable nuclear isomer states.
The data for these tables came from Brookhaven National Laboratory, which has an interactive Table of Nuclides with data on ~3000 nuclides. Recent discoveries are sourced from M. Thoennessen's "Discovery of Nuclides Project" website.
Isotopes for elements 0-29
← Previous | Next →Go to Unitized table (all elements)Go to Periodic table
Isotopes for elements 30-59
← Previous | Next →Go to Unitized table (all elements)Go to Periodic table
Isotopes for elements 60-89
← Previous | Next →Go to Unitized table (all elements)Go to Periodic table
Isotopes for elements 90-118
← Previous | Next →Go to Unitized table (all elements)Go to Periodic table
External links
Interactive Chart of Nuclides (Brookhaven National Laboratory)
The Lund/LBNL Nuclear Data Search
An isotope table with clickable information on every isotope and its decay routes is available at chemlab.pc.maricopa.edu
An example of free Universal Nuclide Chart with decay information for over 3000 nuclides is available at Nucleonica.net.
app for mobiles: Android or Apple - for PC use The Live Chart of Nuclides - IAEA
Links to other charts of nuclides, including printed posters and journal articles, is available at nds.iaea.org.
Tables of nuclides | Table of nuclides (segmented, wide) | [
"Chemistry"
] | 355 | [
"Tables of nuclides",
"Nuclear chemistry stubs",
"Isotope stubs",
"Isotopes"
] |
15,977,395 | https://en.wikipedia.org/wiki/Telly%20%28home%20entertainment%20server%29 | The Telly home entertainment server is range of computer systems designed to store, manage, and access all forms of digital media in the home. Based on Interact-TV's Linux Media Center software, it provides user managed libraries for music, photos, and all forms of video from recorded television programming to DVDs.
Expandable hard drive configurations accommodate growing libraries of home entertainment content and provide an alternative to Desktop and Laptop PCs for entertainment content. Networked configurations distribute content shared from all units throughout a network and allow recording at each location. Content on Telly systems appears to both Windows and Mac PCs as local networked volumes and can be accessed over the network. The Telly server web site provides management of and access to music, photos, and video.
Telly home entertainment servers use a trackball driven user interface and are offered with full high-definition television (HDTV) outputs, built-in digital video recorder (DVR) capabilities and a variety of other accessories. As a home entertainment server, Telly systems differ from traditional media center systems in that it is designed from inception to be configured and operated from a TV-based menu, and as a true server, permits integrated file sharing and secure volume managed expandable storage.
Key Features
Telly Servers are available in a range of models from small set-top sized to rack-mount systems. Each system contains a Linux OS and motherboard and one or more hard drives. All models may additionally have tuners and CD/DVD burners for both import and archiving owners content.
Video Functionality
Users can watch or save DVDs in a Telly server video library for full resolution, progressive scan DVD playback
Facilities are provided for watching live, delayed or recorded TV, home videos and Internet downloaded video, all of which are stored in the Telly Video Library
Video Library contents can be viewed and sorted by Cover Art Navigation or In-Depth Program Information
Music DVDs (such as recorded concerts) in the Video Library can also be copied as music tracks into the Music Library - this permits mixing tracks from music DVD with tracks from the Music Library.
HD Telly systems upsample the native resolution of recorded TV and regular DVDs from 480i to 720p.
When users stop any playback of any video, a Resume button is provided to allow any networked Telly server to pick up where it left off
Telly home entertainment servers support all common non-DRM video formats including AVI, DivX, MPEG-1, MPEG-2 and MPEG-4
Networking allows users to move media between other Telly systems, PCs or portable media players
Video files can be imported to Telly from other PCs on the network
Internet video sources such as YouTube can be played on Telly servers
Music Functionality
Jukebox capabilities play single songs or queue any music collection, genre, artist, or album
Sort the Jukebox looking at traditional lists or album cover art and build collections for playback.
Created playlists of selected music can be played at any time or burn them on a CD
Users can view music animations during playback and interleave music DVD tracks from the Video Library
Users can set the music quality when storing CDs with the built-in CD player including the option of Lossless Audio Encoding (a format that maintains the original CD quality and still creates files that are compressed to save disc space)
Support is provided for drag and drop music collections from PCs and Macs to Telly server Import Music folder in most popular non-DRM formats including MP3, AAC, and Flac
Telly systems provide a built-in Web Server allowing any computer to access the Music Library to manage tracks or edit album details
Telly servers can stream music to any browser
Telly systems support internet radio playout
Photo Functionality
Telly servers scale all photos to fit the connected TV screen and will support full 1080 HDTV resolution for presentation of high quality display of Photo Library contents
Telly Photo Library contents can be browsed on the TV screen as thumbnails or as automatic or manually advanced slide shows with predetermined playback or random order playback
The built-in web site access provides the ability to create, add to, and edit Photo Library contents
Networking Functionality
Telly Servers can be networked to grow as users' digital media libraries grow - most systems provide 1000/100/10 Base Ethernet connectivity and can be outfitted with wireless or power-line networking capability.
Each Telly Server can act as a standalone system to manage and access digital media
Each system provides a set of video and audio outputs to connect directly to a TV (either standard definition or high definition) and audio system
The analog stereo and digital audio outputs can be connected to a TV's audio input or to an audio receiver with surround sound
Built-in network sharing provides for access to or copying digital media between your Windows or Mac and Telly servers using any common web browser and network folder sharing including tools like the My Network Places and Macs using Bonjour
All that entertainment content can be made available to other rooms of the house by adding a TellyVizion Playback Unit to your home network over any Ethernet connection between the Telly Server and the TellyVizion.
Systems can be configured with a central Telly Server with a TellyVizion as discussed above or by adding a second Telly Server providing the following advantages:
Two servers double (or more depending on the Telly Server configuration) the amount of storage for digital media.
Content on each Telly Server can be personalized to exclude shows you don’t want to share with your kids (like The Sopranos) or shows you don’t care to watch (like The Wiggles); you can keep the content of each server separate by turning off the sharing capability, and content on individual servers can be isolated to prevent accidental deletion
As time passes and tastes change, users can always turn sharing back on to access the content from any Telly Server on the home network
With a large digital media libraries, users can further extend storage by adding a TellyRAID system, providing any amount of storage, plus the added protection of RAID in case a hard disk drive fails - TellyRAID connects to one master Telly Server, which then provides the digital media stored on the TellyRAID to other TellyVizions or other Telly Servers throughout the house
History
Corporate Info
Interact-TV, Inc. is a Delaware C Corporation, founded in 2000 by a group of television professionals and was originally headquartered in Westminster, Colorado. Interact-TV, Inc. is currently a public company quoted on the OTC Markets under the symbol ITVI, and headquartered in Delaware.
Interact-TV was formed to develop software products that centralize both the Entertainment and Information experience. Interact-TV products blend Digital Media, Broadband, and Home Networking, then bring it to the end-user through a Television Interface. Interact-TV products focus on the increasing availability of broadband access to the home, and a pervasive demand for higher-value entertainment.
Its initial product, The Telly MC1000 Digital Entertainment Center, began shipping in 2002. This was the first integrated system to allow users to access most forms of digital entertainment including broadband Internet, cable and satellite television, digital audio and video entertainment, and digital home networking.
The Telly home entertainment server product line was continually enhanced and includes a complete line of servers, available through a network of dealers. The Interact-TV Telly product line established a prestigious business‑to‑business customer base including a significant partnership in November 2005 with Turner Broadcasting Systems (TBS) for over 500 Telly product units and special software work for Video‑On‑Demand (VOD) trials and services.
In 2009 Interact-TV, Inc. acquired Viscount Records, Inc. and Viscount Media Trust from the Medley family, in a deal structured by investment banker, Stan Medley. Shortly thereafter, Interact-TV, Inc. transferred all of its Telly properties operations (except for the Telly Minder system which was and is still under development) to JDV Solutions Inc., a spinoff of Interact-TV Inc. Since late 2009, Telly Systems (except for the Telly Minder) have been sold and serviced by JDV Solutions.
Since 2010 Interact-TV, Inc. has shifted its focus more to the production side of entertainment with the formation of Pocket Kid Records, and a Web Channel in 2010. Pocket Kid Records has been operating successfully with the band Dead Sara under contract and has had less success with the development of its Web-Channel and Telly Minder properties.
Operational Requirements
Telly home entertainment servers require an Internet connection for many features. Telly systems network over Ethernet to other computers and work with most cable set top boxes and both Dish and Direct-TV satellite via a simple infrared interface.
See also
Home theater PC
Media center (disambiguation)
Dreambox
Moxi
References
External links
Interact-TV Inc. web site
Digital television
Digital video recorders
Entertainment companies of the United States
Interactive television
Mass media companies of the United States | Telly (home entertainment server) | [
"Technology"
] | 1,828 | [
"Digital video recorders",
"Recording devices"
] |
15,978,374 | https://en.wikipedia.org/wiki/Martin%27s%20maximum | In set theory, a branch of mathematical logic, Martin's maximum (MM), introduced by Foreman, Magidor and Shelah and named after Donald Martin, is a generalization of the proper forcing axiom, itself a generalization of Martin's axiom. It represents the broadest class of forcings for which a forcing axiom is consistent.
Martin's maximum states that if D is a collection of dense subsets of a notion of forcing that preserves stationary subsets of ω1, then there is a D-generic filter. Forcing with a ccc notion of forcing preserves stationary subsets of ω1, so MM extends Martin's axiom (MA). If (P,≤) is not a stationary set preserving notion of forcing, i.e., there is a stationary subset of ω1 which becomes nonstationary when forcing with (P,≤), then there is a collection D of dense subsets of (P,≤) such that there is no D-generic filter. This is why MM is called the maximal extension of Martin's axiom.
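For readability, a compact formal statement of MM is given below in LaTeX. Note that the restriction to at most ℵ1 dense sets is the standard formulation of the axiom and is assumed here; it is not stated explicitly in the text above.

```latex
% Martin's maximum (MM); the bound of aleph_1 on the family of dense sets is
% the standard formulation and is assumed here.
\[
\mathsf{MM}\colon\quad \text{for every forcing notion } (P,\le) \text{ preserving stationary subsets of } \omega_1
\text{ and every family } \mathcal{D} \text{ of at most } \aleph_1 \text{ dense subsets of } P,
\text{ there is a filter } G \subseteq P \text{ with } G \cap D \neq \varnothing \text{ for all } D \in \mathcal{D}.
\]
```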
The existence of a supercompact cardinal implies the consistency of Martin's maximum. The proof uses Shelah's theories of semiproper forcing and iteration with revised countable supports.
MM implies that the value of the continuum is ℵ2 and that the ideal of nonstationary sets on ω1 is ℵ2-saturated. It further implies stationary reflection, i.e., if S is a stationary subset of some regular cardinal κ ≥ ω2 and every element of S has countable cofinality, then there is an ordinal α < κ such that S ∩ α is stationary in α. In fact, S contains a closed subset of order type ω1.
Notes
References
correction
See also
Transfinite number
Forcing (mathematics) | Martin's maximum | [
"Mathematics"
] | 355 | [
"Forcing (mathematics)",
"Mathematical logic"
] |
15,981,895 | https://en.wikipedia.org/wiki/Small%20activating%20RNA | Small activating RNAs (saRNAs) are small double-stranded RNAs (dsRNAs) that target gene promoters to induce transcriptional gene activation in a process known as RNA activation (RNAa).
Small dsRNAs, such as small interfering RNAs (siRNAs) and microRNAs (miRNAs), are known to trigger an evolutionarily conserved mechanism known as RNA interference (RNAi). RNAi invariably leads to gene silencing, whether by remodeling chromatin to suppress transcription, degrading complementary mRNA, or blocking protein translation. Later it was found that dsRNAs can also act to activate transcription; such dsRNAs were designated saRNAs. By targeting selected sequences in gene promoters, saRNAs induce target gene expression at the transcriptional/epigenetic level.
saRNAs are typically 21 nucleotides in length with a 2 nucleotide overhang at the 3' end of each strand, the same structure as a typical siRNA. To identify an saRNA that can activate a gene of interest, several saRNAs need to be designed within a 1- to 2-kbp promoter region by following a set of rules and tested in cultured cells. In some reports, saRNAs are designed in such a way to target non-coding transcripts that overlap the promoter sequence of a protein coding gene. Both chemically synthesized saRNAs and saRNAs expressed as short hairpin RNA (shRNA) have been used in in vitro and in vivo experiments.
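As a purely illustrative sketch of the kind of rule-based screening described above, the following Python snippet scans a promoter sequence for 19-nt target sites and applies a few filters. The GC-content window and homopolymer limit are example thresholds chosen for illustration, not the published saRNA design rules.

```python
# Illustrative saRNA target-site screen; thresholds are example values only.

def gc_fraction(seq: str) -> float:
    """Fraction of G/C bases in a sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_long_homopolymer(seq: str, max_run: int = 4) -> bool:
    """True if any single base repeats more than max_run times in a row."""
    run, prev = 1, ""
    for base in seq.upper():
        run = run + 1 if base == prev else 1
        if run > max_run:
            return True
        prev = base
    return False

def candidate_sites(promoter: str, site_len: int = 19,
                    gc_min: float = 0.40, gc_max: float = 0.65):
    """Yield (position, site) pairs passing the example filters."""
    promoter = promoter.upper()
    for i in range(len(promoter) - site_len + 1):
        site = promoter[i:i + site_len]
        if gc_min <= gc_fraction(site) <= gc_max and not has_long_homopolymer(site):
            yield i, site

if __name__ == "__main__":
    demo_promoter = "ATGCGTACGTTAGCCGATCGATCGGGCTAGCTAGGCATCGATCGTACG"
    for pos, site in candidate_sites(demo_promoter):
        print(pos, site)
```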
An online resource for saRNAs has been developed to integrate experimentally verified saRNAs and proteins involved.
Therapeutic use of saRNAs has been suggested. They have been tested in animal models to treat bladder tumors, liver carcinogenesis, pancreatic cancer, and erectile dysfunction.
In 2016, a phase I clinical trial involving advanced liver cancer patients was launched for the saRNA drug MTL-CEBPA. It was expected to complete in 2021.
References
Further reading
Morris KV (2008). RNA and the Regulation of Gene Expression: A Hidden Layer of Complexity. Caister Academic Press
Tost J (2008). Epigenetics. Caister Academic Press
External links
Small RNAs Reveal an Activating Side. Science News of the Week
How to get your genes switched on. New Scientist 16 November 2006
Bladder cancer: Intravesical RNA activation—a new treatment concept. Nature Review Urology October 2012
RNA
Gene expression | Small activating RNA | [
"Chemistry",
"Biology"
] | 496 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
15,982,593 | https://en.wikipedia.org/wiki/Dead%20mileage | Dead mileage, dead running, light running, empty cars or deadheading in public transport and empty leg in air charter is when a revenue-gaining vehicle operates without carrying or accepting passengers, such as when coming from a garage to begin its first trip of the day. Similar terms in the UK include empty coaching stock (ECS) move and dead in tow (DIT).
The term deadheading or jumpseating also applies to the practice of allowing employees of a common carrier to travel in a vehicle as a non-revenue passenger. For example, an airline might assign a pilot living in New York to a flight from Denver to Los Angeles, and the pilot would simply catch any flight going to Denver, either wearing their uniform or showing ID, in lieu of buying a ticket. Also, some transport companies will allow employees to use the service when off duty, such as a city bus line allowing an off-duty driver to commute to and from work for free.
Additionally, inspectors from a regulatory agency may use transport on a deadhead basis to do inspections, such as a Federal Railroad Administration inspector riding a freight train to inspect for safety violations.
Causes
Dead mileage routinely occurs when a route starts or finishes in a location away from a terminal or maintenance facility, and the start or end of a shift requires moving the vehicle without passengers.
Effects, prevention, and mitigation
Dead mileage incurs costs for the operator in terms of non-revenue earning fuel use, wages, and a reduction in the driver's legal availability for revenue-generating driving.
Operators will often reduce dead mileage by starting or finishing the first or last service of the day, or shift, at a garage along the route, a so-called part service or part route. Dead mileage may also be reduced by operating routes specifically timed and routed to facilitate bus movements rather than passenger need. Often, changing routes slightly (and ensuring high on-time performance) can greatly increase the useful time-to-dead-mileage ratio for both crew and vehicles, as the example below illustrates.
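The following small Python sketch illustrates the idea with a toy comparison of two shift plans, one starting at a remote depot and one starting the first trip at a garage along the route. All distances are invented for the example.

```python
# Toy comparison of revenue vs. dead mileage for two shift plans.
# All figures are invented for illustration.

def dead_mile_share(revenue_km: float, dead_km: float) -> float:
    """Fraction of total distance run without passengers."""
    return dead_km / (revenue_km + dead_km)

# Plan A: the bus runs 12 km empty from the depot to the route terminus, out and back.
plan_a = dead_mile_share(revenue_km=180.0, dead_km=12.0 * 2)

# Plan B: first/last trips start and finish at a garage on the route,
# so only 3 km of positioning is needed at each end of the shift.
plan_b = dead_mile_share(revenue_km=180.0, dead_km=3.0 * 2)

print(f"Plan A dead-mileage share: {plan_a:.1%}")
print(f"Plan B dead-mileage share: {plan_b:.1%}")
```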
Technology has also been leveraged to preempt dead miles and to find ways to reduce downtime or, in some cases, fill empty or dead miles with revenue-generating backhaul. Artificial intelligence and other data science techniques can be used in a connected platform to predict and allocate loads in real time and meet on-demand requests.
Dead mileage has increasingly become an issue with privatised competition for bus services, most notably with the privatisation of London bus services. Operators will often come to an arrangement to share garage facilities to reduce dead mileage.
Some air charter companies lease out their planes at a lower rate for the empty leg flights to attract rentals for originally non-revenue flights.
See also
Deadheading (employee)
Positioning flight, a similar concept in aviation
References
Scheduling (transportation)
Transport operations
Public transport
Aircraft operations
Economic efficiency | Dead mileage | [
"Physics"
] | 595 | [
"Physical systems",
"Transport",
"Transport operations"
] |
15,982,783 | https://en.wikipedia.org/wiki/Winchester%20measure | Winchester measure is a set of legal standards of volume instituted in the late 15th century (1495) by King Henry VII of England and in use, with some modifications, until the present day. It consists of the Winchester bushel and its dependent quantities, the peck, (dry) gallon and (dry) quart. They would later become known as the Winchester Standards, named because the examples were kept in the city of Winchester.
Winchester measure may also refer to:
the systems of weights and measures used in the Kingdom of Wessex during the Anglo-Saxon period, later adopted as the national standards of England, as well as the physical standards (prototypes) associated with these systems of units
a set of avoirdupois weight standards dating to the mid-14th century, in particular, the 56-pound standard commissioned by King Edward III, which served as the prototype for Queen Elizabeth I's reform of the avoirdupois weight system in 1588
a type of glass bottle, usually amber, used in the drug and chemical industry, known variously as the Boston round, Winchester bottle, or Winchester quart bottle
History
During the 10th century, the capital city of the English king, Edgar, was at Winchester and, at his direction, standards of measurement were instituted. However, nothing is known of these standards except that, following the Norman Conquest, the physical standards (prototypes) were removed to London. In 1496, a law of King Henry VII instituted the bushel that would later come to be known by the name "Winchester".
In 1588 Queen Elizabeth I, while reforming the English weight system (which, at the time, included no less than three different pounds going by the name "avoirdupois") based the new Exchequer standard on an ancient set of bronze weights found at Winchester and dating to the reign of Edward III.
These incidents have led to the widespread belief that the Winchester units of dry capacity measure, namely, the bushel and its dependent quantities the peck, gallon and quart, must have originated in the time of King Edgar. However, contemporary scholarship can find no evidence for the existence of any of these units in Britain prior to the Norman Conquest. Furthermore, all of the units associated with Winchester measure (quarter, bushel, peck, gallon, pottle, quart, pint) have names of French derivation, at least suggestive of Norman origin.
Capacity measures in the Anglo-Saxon period
Prior to the Norman Conquest, the following units of capacity measure were used: sester, amber, mitta, coomb, and seam. A statute of 1196 (9 Ric. 1. c. 27) decreed: It is established that all measures of the whole of England be of the same amount, as well of corn as of vegetables and of like things, to wit, one good horse load; and that this measure be level as well in cities and boroughs as without. This appears to be a description of the seam, which would later be equated with the quarter. The word seam is of Latin derivation (from the Vulgar Latin sauma = packsaddle). Some of the other units are likewise of Latin derivation, sester from sextarius, amber from amphora. The sester could thus be taken as roughly a pint, the amber a bushel. However, the values of these units, as well as their relationships to one another, varied considerably over the centuries so that no clear definitions are possible except by specifying the time and place in which the units were used.
After the Norman Conquest
One of the earliest documents defining the gallon, bushel and quarter is the Assize of Weights and Measures, also known as the Tractatus de Ponderibus et Mensuris, sometimes attributed to Henry III or Edward I, but nowadays generally listed under Ancient Statutes of Uncertain Date and presumed to date from around 1305. It states, By Consent of the whole Realm the King’s Measure was made, so that an English Penny, which is called the Sterling, round without clipping, shall weigh Thirty-two Grains of Wheat dry in the midst of the Ear; Twenty-pence make an Ounce; and Twelve Ounces make a Pound, and Eight Pounds make a Gallon of Wine; and Eight Gallons of Wine make a Bushel of London; which is the Eighth Part of a Quarter.
In 1496, An Act for Weights and Measures (12 Hen. 7. c. 5) stated That the Measure of a Bushel contain eight Gallons of Wheat, and that every Gallon contain eight Pounds of Wheat of Troy Weight, and every Pound contain twelve Ounces of Troy Weight, and every Ounce contain twenty Sterlings, and every Sterling be of the Weight of thirty-two Corns of Wheat that grew in the Midst of the Ear of Wheat, according to the old Laws of this Land. Even though this bushel does not quite fit the description of the Winchester bushel, the national standard prototype bushel constructed the following year (and still in existence) is near enough to a Winchester bushel that it is generally considered the first, even though it was not known by that name at the time.
The Winchester bushel is first mentioned by name in a statute of 1670 entitled An Act for ascertaining the Measures of Corn and Salt (22 Cha. 2. c. 8) which states, And that if any person or persons after the time aforesaid shall sell any sort of corn or grain, ground or unground, or any kind of salt, usually sold by the bushel, either in open market, or any other place, by any other bushel or measure than that which is agreeable to the standard, marked in his Majesty's exchequer, commonly called the Winchester measure, containing eight gallons to the bushel, and no more or less, and the said bushel strucken even by the wood or brim of the same by the seller, and sealed as this act directs, he or they shall forfeit for every such offence the sum of forty shillings.
It is first defined in law by a statute of 1696–97 (8 & 9 Will. 3. c. 22 ss. 9 & 45) And to the End all His Majesties Subjects may know the Content of the Winchester Bushell whereunto this Act refers, and that all Disputes and Differences about Measure may be prevented for the future, it is hereby declared that every round Bushel with a plain and even Bottom, being Eighteen Inches and a Halfe wide throughout, & Eight Inches deep, shall be esteemed a legal Winchester Bushel according to the Standard in His Majesty's Exchequer.
In 1824 a new Act was passed in which the gallon was defined as the volume of ten pounds of pure water at 62 °F, with the other units of volume changing accordingly. The "Winchester bushel", which was some 3% smaller than the new bushel (eight new gallons), was retained in the English grain trade until formally abolished in 1835. In 1836, the United States Department of the Treasury formally adopted the Winchester bushel as the standard for dealing in grain and, defined as 2,150.42 cubic inches, it remains so today.
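The 1696–97 statutory dimensions and the modern US figure can be checked against each other with a quick calculation, treating the bushel as a cylinder 18.5 inches across and 8 inches deep:

```python
import math

# Winchester bushel per the 1696-97 statute: a round vessel with a plain bottom,
# 18.5 inches wide throughout and 8 inches deep.
diameter_in = 18.5
depth_in = 8.0

volume_cubic_inches = math.pi * (diameter_in / 2) ** 2 * depth_in
print(f"{volume_cubic_inches:.2f} cubic inches")  # ~2150.42, the US statutory bushel
```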
While the United Kingdom and the British Colonies changed to "Imperial" measures in 1826, the US continued to use Winchester measures and still does.
Measures in the city museum
None of Edgar's standard measures, which were probably made of wood, remain, but the city's copy of the standard yard, although stamped with the official mark of Elizabeth I, may date from the early twelfth century, during the reign of Henry I. Preserved standard weights date from 1357, and although the original bushel is lost, a standard bushel, gallon and quart made of bronze, issued in 1497 and stamped with the mark of Henry VII are still held.
See also
History of measurement
Winchester bottle
References
Further reading
Stevenson, Maurice. Weights and Measures of the City of Winchester, Winchester Museums Service
Oxford English Dictionary: Winchester
British History Online, University of London
Encyclopædia Britannica: British Imperial System
External links
Exchequer standard Winchester bushel measure, 1601 Image from Science Museum, London
Imperial units
Measurement
1490s in law
Obsolete units of measurement
Systems of units
History of Winchester | Winchester measure | [
"Physics",
"Mathematics"
] | 1,671 | [
"Obsolete units of measurement",
"Systems of units",
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Units of measurement"
] |
15,983,150 | https://en.wikipedia.org/wiki/Himalayan%20salt | Himalayan salt is rock salt (halite) mined from the Punjab region of Pakistan. The salt, which often has a pinkish tint due to trace minerals, is primarily used as a food additive to replace refined table salt but is also used for cooking and food presentation, decorative lamps, and spa treatments. The product is often promoted with unsupported claims that it has health benefits.
Geology
Himalayan salt is mined from the Salt Range mountains, the southern edge of a fold-and-thrust belt that underlies the Pothohar Plateau south of the Himalayas in Pakistan. Himalayan salt comes from a thick layer of Ediacaran to early Cambrian evaporites of the Salt Range Formation. This geological formation consists of crystalline halite intercalated with potash salts, overlain by gypsiferous marl and interlayered with beds of gypsum and dolomite with infrequent seams of oil shale that accumulated between 600 and 540 million years ago. These strata and the overlying Cambrian to Eocene sedimentary rocks were thrust southward over younger sedimentary rocks, and eroded to create the Salt Range.
History
Local legend traces the discovery of the Himalayan salt deposits to the army of Alexander the Great. However, the first records of mining are from the Janjua clan in the 1200s. The salt is mostly mined at the Khewra Salt Mine in Khewra, Jhelum District, Punjab, Pakistan, which is situated in the foothills of the Salt Range hill system between the Indus River and the Punjab Plain. It is primarily exported in bulk, and processed in other countries for the consumer market.
Mineral composition
Himalayan salt is a table salt. There is a common misconception that Himalayan salt has lower sodium than conventional table salt, but the levels are similar. Analysis of a range of Khewra salt samples showed them to be between 96% and 99% sodium chloride, with trace presence of calcium, iron, zinc, chromium, magnesium, and sulfates, all at varying safe levels below 1%.
Some salt crystals from this region have an off-white to transparent color, while the trace minerals in some veins of salt give it a pink, reddish, or beet-red color.
Nutritionally, Himalayan salt is similar to common table salt. A study of pink salts in Australia showed Himalayan salt to contain higher levels of a range of trace elements compared to table salt, but that the levels were too low for nutritional significance without an "exceedingly high intake", at which point any nutritional benefit would be outweighed by the risks of elevated sodium consumption. One notable exception regards the essential mineral iodine. Commercial table salt in many countries is supplemented with iodine, and this has significantly reduced disorders of iodine deficiency. Himalayan salt lacks these beneficial effects of iodine supplementation.
Uses
Himalayan salt is used to flavor food. Due mainly to marketing costs, pink Himalayan salt is up to 20 times more expensive than table salt or sea salt. The impurities giving it its distinctive pink hue, as well as its unprocessed state and lack of anti-caking agents, have given rise to the unsupported belief that it is healthier than common table salt. There is no scientific basis for such claimed health benefits. In the United States, the Food and Drug Administration warned a manufacturer of dietary supplements, including one consisting of Himalayan salt, to discontinue marketing the products using unproven claims of health benefits.
Slabs of salt are used as serving dishes, baking stones, and griddles, and it is also used to make tequila shot glasses. In such uses, small amounts of salt transfer to the food or drink and alter its flavor profile.
It is also used to make salt lamps that radiate a pinkish or orangish hue, manufactured by placing a light source within the hollowed-out interior of a block of Himalayan salt. Claims that their use results in the release of ions that benefit health have no scientific foundation. Similar scientifically unsupported claims underlie the use of Himalayan salt to line the walls of spas, along with its use for salt-inhalation spa treatments. Salt lamps can be a danger to pets, who may suffer salt poisoning after licking them.
See also
Health effects of salt
List of edible salts
List of topics characterized as pseudoscience
Sea salt
Table salt
References
External links
Edible salt
Pseudoscience
Salt industry in Pakistan | Himalayan salt | [
"Chemistry"
] | 901 | [
"Edible salt",
"Salts"
] |
15,983,843 | https://en.wikipedia.org/wiki/DIDO%20%28software%29 | DIDO is a MATLAB optimal control toolbox for solving general-purpose optimal control problems. It is widely used in academia, industry, and NASA. Hailed as breakthrough software, DIDO is based on the pseudospectral optimal control theory of Ross and Fahroo. The latest enhancements to DIDO are described in Ross.
Usage
DIDO utilizes trademarked expressions and objects that facilitate a user to quickly formulate and solve optimal control problems. Rapidity in formulation is achieved through a set of DIDO expressions which are based on variables commonly used in optimal control theory. For example, the state, control and time variables are formatted as:
primal.states,
primal.controls, and
primal.time
The entire problem is codified using the key words, cost, dynamics, events and path:
problem.cost
problem.dynamics
problem.events, and
problem.path
A user runs DIDO using the one-line command:
[cost, primal, dual] = dido(problem, algorithm),
where the object defined by algorithm allows a user to choose various options. In addition to the cost value and the primal solution, DIDO automatically outputs all the dual variables that are necessary to verify and validate a computational solution. The output dual is computed by an application of the covector mapping principle.
Theory
DIDO implements a spectral algorithm based on pseudospectral optimal control theory founded by Ross and his associates. The covector mapping principle of Ross and Fahroo eliminates the curse of sensitivity associated in solving for the costates in optimal control problems. DIDO generates spectrally accurate solutions whose extremality can be verified using Pontryagin's Minimum Principle. Because no knowledge of pseudospectral methods is necessary to use it, DIDO is often used as a fundamental mathematical tool for solving optimal control problems. That is, a solution obtained from DIDO is treated as a candidate solution for the application of Pontryagin's minimum principle as a necessary condition for optimality.
Applications
DIDO is used world wide in academia, industry and government laboratories. Thanks to NASA, DIDO was flight-proven in 2006. On November 5, 2006, NASA used DIDO to maneuver the International Space Station to perform the zero-propellant maneuver.
Since this flight demonstration, DIDO was used for the International Space Station and other NASA spacecraft. It is also used in other industries. Most recently, DIDO has been used to solve traveling salesman type problems in aerospace engineering.
MATLAB optimal control toolbox
DIDO is primarily available as a stand-alone MATLAB optimal control toolbox. That is, it does not require any third-party software like SNOPT or IPOPT or other nonlinear programming solvers. In fact, it does not even require the MATLAB Optimization Toolbox.
The MATLAB/DIDO toolbox does not require a "guess" to run the algorithm. This and other distinguishing features have made DIDO a popular tool to solve optimal control problems.
The MATLAB optimal control toolbox has been used to solve problems in aerospace, robotics and search theory.
History
The optimal control toolbox is named after Dido, the legendary founder and first queen of Carthage who is famous in mathematics for her remarkable solution to a constrained optimal control problem even before the invention of calculus. Invented by Ross, DIDO was first produced in 2001. The software is widely cited and has many firsts to its credit:
First general-purpose object-oriented optimal control software
First general-purpose pseudospectral optimal control software
First flight-proven general-purpose optimal control software
First embedded general-purpose optimal control solver
First guess-free general-purpose optimal control solver
Versions
The early versions, widely adopted in academia, have undergone significant changes since 2007. The latest version of DIDO, available from Elissar Global, does not require a "guess" to start the problem and eliminates much of the minutia of coding by simplifying the input-output structure. Low-cost student versions and discounted academic versions are also available from Elissar Global.
See also
Bellman pseudospectral method
Chebyshev pseudospectral method
Covector mapping principle
Fariba Fahroo
Flat pseudospectral methods
I. Michael Ross
Legendre pseudospectral method
Ross–Fahroo lemma
Ross' π lemma
Ross–Fahroo pseudospectral methods
References
Further reading
External links
How DIDO Works at How Stuff Works
NASA News
Pseudospectral optimal control: Part 1
Pseudospectral optimal control: Part 2
Numerical software
Mathematical optimization software
Optimal control | DIDO (software) | [
"Mathematics"
] | 937 | [
"Numerical software",
"Mathematical software"
] |
15,984,030 | https://en.wikipedia.org/wiki/Avanti%20%28project%29 | Avanti was established by the UK Department of Trade and Industry in 2002 to formulate an approach to collaborative working in order to enable construction project partners to work together effectively. The project was promoted by the Department of Trade and Industry with the support of most of the largest UK firms in the construction industry. Avanti also involved the International Alliance for Interoperability (IAI), Loughborough University and Co-Construct, a network of five construction research and information organizations.
The Avanti programme aimed to help overcome problems caused by incomplete, inaccurate and ambiguous information.
Background
The Tavistock Institute report (1965) printed an extract from an RICS meeting of 1910 which stated "Architectural information is invariably inaccurate, ambiguous and incomplete". By the 1940s, the impact of this was valued at an additional 10% to the construction cost. By 1994 the Latham Report Constructing the Team suggested waste in the industry accounted for 25-30% of project costs. This figure is supported by earlier publications from the Building Research Establishment (BRE) and the Building Services Research and Information Association (BSRIA), which led to reports in 1987 from the Coordinating Committee for Project Information (now CPIC) that recommended a common format and disciplined approach to reduce the problem. These suggestions were proven on DTI-funded projects and case studies at Endeavour House at London Stansted Airport and the BAA Heathrow Express, which showed that significant savings in project cost and drawing production could be made by addressing people and process issues.
The problem and approach
Most construction projects are complex and involve co-ordination between many project participants both on- and off-site. As projects have increased in size and complexity, the number of participants involved in a single project has also increased. To improve collaboration between participants, the industry needs tools that work well, appropriate processes to implement them, and the right approach from the people involved. Tackling these points was at the heart of the Avanti action research programme.
Its purpose was to support live construction project teams, as they collaborate to deliver projects, by helping them apply the appropriate technologies. It worked through a team of partners representing each stage of the project process, with the objective of improving project and business performance through the use of information and communication technologies (ICT) to support collaborative working. It set out to show the advantages of CAD systems over conventional drawings, with information managed as a database rather than a drawing cabinet, and aimed to demonstrate that 3D Computer Aided Design systems could revolutionise projects, provided they were used for more than visualisation.
To reduce the risks involved in adopting new working methods, Avanti brought together areas of current best practice (such as CPIC protocols) that had previously gained too little market penetration to have significant impact on the sector. The objective was to deliver improved project and business performance by using ICT to support collaborative working.
Avanti focused on early access to all project information by all partners, on early involvement of the supply chain, and on sharing of information, drawings and schedules, in an agreed and consistent manner. The Avanti approach was supported by handbooks, toolkits and on-site mentoring and was reliant on advice and materials provided by CPIC.
Avanti mobilised existing technologies to improve business performance by increasing the quality of information and predictability of outcomes and by reducing risk and waste. The core of the Avanti approach to a project's whole life cycle was based on team-working and access to a common information model. All CAD information was generated with the same origin, orientation and scale, and organised in layers that could be shared. All layers and CAD model files were named consistently within a specific Avanti convention to allow others to find the relevant CAD data.
Guidance and Phases
Learning from early projects, Avanti produced practical working documentation and tested the methods on live projects. This information supplemented the CPIC document Code of Procedure for the Construction Industry and addressed problems of incomplete, inaccurate and ambiguous information. Avanti guidance included:
Standard Method and Procedure - advice on classification standards, spatial and technical co-ordination, example file and directory structures, and naming conventions.
Design Management Principles - comprising typical design management processes, information management processes, guidance on detailed task management and typical responsibilities of delivery team leaders.
The advice also covers areas such as:
a business case on the commercial imperatives for collaborative ICT
guidance on ensuring the software chain supported the design and supply chains
authoring information in a common data environment and controlling the sharing of information in a multi-author environment
For most projects, using this guidance consisted of three phases:
Phase 1: Change the poor industry practice of producing information to individual standards and throwing it ‘over the wall’ to the next person in the supply chain who then reproduces their own information. Using an agreed standard for the project, spatially co-ordinated and high-quality 2D information will then be produced for the parties involved.
Phase 2: Implementation of those same principles to generate 3D project models to produce 2D drawings and material measurement.
Phase 3: Full implementation of 3D object-oriented and Building information modelling systems. Each live project captures the lessons learned and the benefits gained.
Further development
In July 2006, the Avanti DTI Project documentation and brand ownership was transferred to Constructing Excellence. Since the handover, Constructing Excellence endeavoured to promote the savings demonstrated on live projects. Further work was also carried out to make Avanti part of the update of BS 1192. The BS 5555 committee coded the methods.
Results
Evaluation of the impacts of the Avanti project showed:
Early commitment offering up to 80% saving on implementation cost on medium size project.
50-85% saving on effort spent receiving information and formatting for reuse.
60-80% saving on effort spent finding information and documents.
75–80% saving in effort to achieve design co-ordination.
50% saving on time spent to assess tenders and award sub-contracts.
50% saving on effort in sub-contractor design approval.
The future
Constructing Excellence is developing a self-sustaining business to support roll out of the Avanti methodology to the UK construction industry. It will also draw on the results of other DTI-supported collaborative research such as Building Down Barriers.
See also
CPIC
References
External links
Avanti website
Constructing Excellence Avanti
What is Avanti?
People + Process + Tools
IAI home page
Civil engineering
Construction in the United Kingdom
Research projects
Organizations established in 2002
Loughborough University
2002 establishments in the United Kingdom
Research in the United States
Projects established in 2002 | Avanti (project) | [
"Engineering"
] | 1,306 | [
"Construction",
"Civil engineering"
] |
15,985,108 | https://en.wikipedia.org/wiki/Afghanistan%20Information%20Management%20Services | Afghanistan Information Management Services (AIMS) is a Kabul-based Afghan non-governmental organisation (NGO). It specialises in the application of information, communication and technology (Information Communication Technology) solutions, software development, and project management.
Background
In 1997, AIMS was established under the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) in Islamabad, Pakistan, to serve the information management needs of Afghanistan. In 2001, it was relocated to Kabul and became a project of the United Nations Development Programme (UNDP). Since its inception, AIMS has served the Government of Afghanistan, non-governmental organizations and the international donor community as a provider of information management services including the development of software applications, database solutions, geospatial information and maps.
AIMS is located in a new office facility in Kabul, and in 2002 AIMS established offices in five regional centers: Jalalabad, Kandahar, Herat, Mazar-i-Sharif, and Kunduz. The regional offices provide services in information management, capacity development (i.e. job training), and mapping.
In 2004, AIMS shifted its focus from simple data provision to the development of information management capacity in the Government of the Islamic Republic of Afghanistan (GIRoA) and the broader humanitarian community, especially in the areas of Geographic Information Systems GIS and database development.
In September 2008, it transitioned from being a UNDP project into a national NGO as authorized by a Memorandum of Agreement between UNDP and the Ministry of Communications and Information Technology (MoCIT). The AIMS Transition Plan and Official Charter were authored by J. Brown (International Development Advisor) and Lisa Gomer (International Attorney). The Executive Director for AIMS at the time of the Charter was Neal Bratschun. Mr. J. Nicholas Martyn was the first Executive Director retained by AIMS to lead the organization.
AIMS served as a certified training site for ESRI and authored multiple software solutions for the International Development Community such as GEOBASE, and PIMSS.
References
The writer is Editor, Communications and Advocacy, Afghanistan Research and Evaluation Unit (AREU).
External links
Afghanistan Information Management Services website
Information management
Non-profit organisations based in Afghanistan
United Nations organizations based in Asia
Mass media in Afghanistan
Afghanistan and the United Nations
Defunct organisations based in Afghanistan | Afghanistan Information Management Services | [
"Technology"
] | 461 | [
"Information systems",
"Information management"
] |
15,985,233 | https://en.wikipedia.org/wiki/Quantum%20Byzantine%20agreement | Byzantine fault tolerant protocols are algorithms that are robust to arbitrary types of failures in distributed algorithms. The Byzantine agreement protocol is an essential part of this task. The constant-time quantum version of the Byzantine protocol is described below.
Introduction
The Byzantine Agreement protocol is a protocol in distributed computing.
It takes its name from a problem formulated by Lamport, Shostak and Pease in 1982, which itself is a reference to a historical problem. The Byzantine army was divided into divisions with each division being led by a General with the following properties:
Each General is either loyal or a traitor to the Byzantine state.
All Generals communicate by sending and receiving messages.
There are only two commands: attack and retreat.
All loyal Generals should agree on the same plan of action: attack or retreat.
A small linear fraction of bad Generals should not cause the protocol to fail (less than a one-third fraction).
(See the references for the proof of the impossibility result.)
The problem usually is equivalently restated in the form of a commanding General and loyal Lieutenants with the General being either loyal or a traitor and the same for the Lieutenants with the following properties.
All loyal Lieutenants carry out the same order.
If the commanding General is loyal, all loyal Lieutenants obey the order that he sends.
A strictly less than one-third fraction, including the commanding General, are traitors.
Byzantine failure and resilience
Failures in an algorithm or protocol can be categorized into three main types:
A failure to take another execution step in the algorithm: This is usually referred to as a "fail stop" fault.
A random failure to execute correctly: This is called a "random fault" or "random Byzantine" fault.
An arbitrary failure where the algorithm fails to execute the steps correctly (usually in a clever way by some adversary to make the whole algorithm fail) which also encompasses the previous two types of faults; this is called a "Byzantine fault".
A Byzantine resilient or Byzantine fault tolerant protocol or algorithm is an algorithm that is robust to all the kinds of failures mentioned above. For example, given a space shuttle with multiple redundant processors, if the processors give conflicting data, which processors or sets of processors should be believed? The solution can be formulated as a Byzantine fault tolerant protocol.
Sketch of the algorithm
We will sketch here the asynchronous algorithm.
The algorithm works in two phases:
Phase 1 (Communication phase):
All messages are sent and received in this round.
A coin flipping protocol is a procedure that allows two parties A and B that do not trust each other to toss a coin to win a particular object.
There are two types of coin flipping protocols:
Weak coin flipping protocols: The two players A and B initially start with no inputs and they are to compute some value and be able to accuse anyone of cheating. The protocol is successful if A and B agree on the outcome. The outcome 0 is defined as A winning and 1 as B winning. The protocol has the following properties:
If both players are honest (they follow the protocol), then they agree on the outcome of the protocol, with each of the two outcomes occurring with probability 1/2.
If one of the players is honest (i.e., the other player may deviate arbitrarily from the protocol in his or her local computation), then the other party wins with probability at most 1/2 + ε, where ε is the bias of the protocol. In other words, if B is dishonest, then the probability of outcome 1 is at most 1/2 + ε, and if A is dishonest, then the probability of outcome 0 is at most 1/2 + ε.
A strong coin flipping protocol: In a strong coin flipping protocol, the goal is instead to produce a random bit which is biased away from any particular value 0 or 1. Clearly, any strong coin flipping protocol with bias ε leads to weak coin flipping with the same bias.
Verifiable secret sharing
A verifiable secret sharing protocol: A (n,k) secret sharing protocol allows a set of n players to share a secret, s such that only a quorum of k or more players can discover the secret. The player sharing (distributing the secret pieces) the secret is usually referred to as the dealer. A verifiable secret sharing protocol differs from a basic secret sharing protocol in that players can verify that their shares are consistent even in the presence of a malicious dealer.
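For orientation, the snippet below sketches a plain, non-verifiable (n, k) Shamir secret sharing scheme over a prime field. It illustrates only the threshold-recovery idea referred to above; it does not include the verification machinery, nor the quantum (QVSS) variant used later in the protocol, and the prime and parameters are arbitrary example choices.

```python
# Minimal (n, k) Shamir secret sharing over a prime field -- illustrative only.
# It shows threshold recovery; real verifiable / quantum schemes add much more.
import random

P = 2_147_483_647  # a Mersenne prime, large enough for this toy example

def share(secret: int, n: int, k: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the degree-(k-1) polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = share(secret=123456789, n=5, k=3)
    print(reconstruct(shares[:3]))   # any 3 shares -> 123456789
    print(reconstruct(shares[2:5]))  # a different 3 shares -> 123456789
```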
The fail-stop protocol
Protocol quantum coin flip for player i:
Round 1: generate the GHZ state on n qubits and send the k-th qubit to the k-th player, keeping one part
Generate the state on n qudits (quantum registers each consisting of multiple qubits), an equal superposition of the numbers between 1 and n³, which serve as candidate "leader values". Distribute the qudits between all the players
Receive the quantum messages from all players and wait for the next communication round, thus forcing the adversary to choose which messages were passed.
Round 2: Measure (in the standard basis) all qubits received in round 1. Select the player with the highest leader value (ties broken arbitrarily) as the "leader" of the round. Measure the leader's coin in the standard basis.
Set the output of the QuantumCoinFlip protocol: = measurement outcome of the leader's coin.
The Byzantine protocol
To generate a random coin, assign an integer in the range [0, n−1] to each player. No player is allowed to choose its own random ID; instead, each player selects a random number for every other player and distributes it using a verifiable secret sharing scheme.
At the end of this phase, players agree on which secrets were properly shared; the secrets are then opened and each player is assigned the resulting value.
This requires private information channels, so the random secrets are replaced by a quantum superposition state. This state is encoded using a quantum verifiable secret sharing (QVSS) protocol. We cannot distribute the state directly, since the bad players could collapse it. To prevent bad players from doing so, the state is encoded using QVSS and each player is sent their share of the secret. Here again the verification requires Byzantine Agreement, but replacing the agreement by the grade-cast protocol is enough.
Grade-cast protocol
A grade-cast protocol has the following properties.
Informally, a graded broadcast protocol is a protocol with a designated player called “dealer” (the one who broadcasts) such that:
If the dealer is good, all the players get the same message.
Even if the dealer is bad, if some good player accepts the message, all the good players get the same message (but they may or may not accept it).
A protocol P is said to achieve graded broadcast if, at the beginning of the protocol, a designated player D (called the dealer) holds a value v, and at the end of the protocol, every player i outputs a pair (value_i, grade_i) such that the following properties hold:
If D is honest, then value_i = v and grade_i = 2 for every honest player i.
For any two honest players i and j, |grade_i − grade_j| ≤ 1.
(Consistency) For any two honest players i and j, if grade_i > 0 and grade_j > 0, then value_i = value_j.
For the verification stage, the QVSS protocol guarantees that for a good dealer the correct state will be encoded, and that for any, possibly faulty, dealer some particular state will be recovered during the recovery stage. We note that for the purpose of our Byzantine quantum coin flip protocol the recovery stage is much simpler. Each player measures his share of the QVSS and sends the classical value to all other players. The verification stage guarantees, with high probability, that in the presence of the tolerated number of faulty players all the good players will recover the same classical value (which is the same value that would result from a direct measurement of the encoded state).
Remarks
In 2007, a quantum protocol for Byzantine Agreement was demonstrated experimentally using a four-photon polarization-entangled state. This shows that the quantum implementation of classical Byzantine Agreement protocols is indeed feasible.
References
Quantum information science
Cryptography
Distributed computing problems
Fault tolerance
Engineering failures
Theory of computation | Quantum Byzantine agreement | [
"Mathematics",
"Technology",
"Engineering"
] | 1,590 | [
"Systems engineering",
"Cybersecurity engineering",
"Cryptography",
"Distributed computing problems",
"Reliability engineering",
"Applied mathematics",
"Technological failures",
"Computational problems",
"Engineering failures",
"Fault tolerance",
"Civil engineering",
"Mathematical problems"
] |
15,985,775 | https://en.wikipedia.org/wiki/Protonosphere | The protonosphere is a layer of the Earth's atmosphere (or any planet with a similar atmosphere) where the dominant components are atomic hydrogen and ionic hydrogen (protons). It is the outer part of the ionosphere, and extends to the interplanetary medium. Hydrogen dominates in the outermost layers because it is the lightest gas, and in the heterosphere, mixing is not strong enough to overcome differences in constituent gas densities. Charged particles are created by incoming ionizing radiation, mostly from solar radiation.
References
Ionosphere | Protonosphere | [
"Physics",
"Astronomy"
] | 112 | [
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs"
] |
10,806,718 | https://en.wikipedia.org/wiki/Data%20sharing | Data sharing is the practice of making data used for scholarly research available to other investigators. Many funding agencies, institutions, and publication venues have policies regarding data sharing because transparency and openness are considered by many to be part of the scientific method.
A number of funding agencies and science journals require authors of peer-reviewed papers to share any supplemental information (raw data, statistical methods or source code) necessary to understand, develop or reproduce published research. A great deal of scientific research is not subject to data sharing requirements, and many of these policies have liberal exceptions. In the absence of any binding requirement, data sharing is at the discretion of the scientists themselves. In addition, in certain situations governments and institutions prohibit or severely limit data sharing to protect proprietary interests, national security, and subject/patient/victim confidentiality. Data sharing may also be restricted to protect institutions and scientists from use of data for political purposes.
Data and methods may be requested from an author years after publication. In order to encourage data sharing and prevent the loss or corruption of data, a number of funding agencies and journals established policies on data archiving. Access to publicly archived data is a recent development in the history of science made possible by technological advances in communications and information technology. Taking full advantage of modern rapid communication may require consensual agreement on the criteria underlying mutual recognition of respective contributions. Models recognized for improving the timely sharing of data for more effective response to emergent infectious disease threats include the data sharing mechanism introduced by the GISAID Initiative.
Despite policies on data sharing and archiving, data withholding still happens. Authors may fail to archive data, or they may archive only a portion of the data. Failure to archive data alone is not data withholding. When a researcher requests additional information, an author sometimes refuses to provide it. When authors withhold data like this, they run the risk of losing the trust of the science community. A 2022 study identified about 3,500 research papers containing statements that the data were available, but upon requesting and further seeking the data, found that they were unavailable for 94% of the papers.
Data sharing may also indicate the sharing of personal information on a social media platform.
U.S. government policies
Federal law
On August 9, 2007, President Bush signed the America COMPETES Act (or the "America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act") requiring civilian federal agencies to provide guidelines, policies and procedures, to facilitate and optimize the open exchange of data and research between agencies, the public and policymakers. See Section 1009.
NIH data sharing policy
The NIH Final Statement of Sharing of Research Data says:
NSF Policy from Grant General Conditions
Office of Research Integrity
Allegations of misconduct in medical research carry severe consequences. The United States Department of Health and Human Services established an office to oversee investigations of allegations of misconduct, including data withholding. The website defines the mission:
Ideals in data sharing
Some research organizations feel particularly strongly about data sharing. Stanford University's WaveLab has a philosophy about reproducible research and disclosing all algorithms and source code necessary to reproduce the research. In a paper titled "WaveLab and Reproducible Research," the authors describe some of the problems they encountered in trying to reproduce their own research after a period of time. In many cases, it was so difficult they gave up the effort. These experiences are what convinced them of the importance of disclosing source code. The philosophy is described:
The idea is: An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.
The Data Observation Network for Earth (DataONE) and Data Conservancy are projects supported by the National Science Foundation to encourage and facilitate data sharing among research scientists and better support meta-analysis. In environmental sciences, the research community is recognizing that major scientific advances involving integration of knowledge in and across fields will require that researchers overcome not only the technological barriers to data sharing but also the historically entrenched institutional and sociological barriers. Dr. Richard J. Hodes, director of the National Institute on Aging has stated, "the old model in which researchers jealously guarded their data is no longer applicable".
The Alliance for Taxpayer Access is a group of organizations that support open access to government sponsored research. The group has expressed a "Statement of Principles" explaining why they believe open access is important. They also list a number of international public access policies. This is no more so than in timely communication of essential information to effectively respond to health emergencies. While public domain archives have been embraced for depositing data, mainly post formal publication, they have failed to encourage rapid data sharing during health emergencies, among them the Ebola and Zika, outbreaks. More clearly defined principles are required to recognize the interests of those generating the data while permitting free, unencumbered access to and use of the data (pre-publication) for research and practical application, such as those adopted by the GISAID Initiative to counter emergent threats from influenza.
International policies
Australia
Austria
Europe — Commission of European Communities
Germany
United Kingdom
'Omic Data Sharing — a list of policies of major science funders FAIRsharing.org Catalogue of Data Policies
India -National Data Sharing and Accessibility Policy – Government of India
Data sharing problems in academia
Genetics
Withholding of data has become so commonplace in genetics that researchers at Massachusetts General Hospital published a journal article on the subject. The study found that "Because they were denied access to data, 28% of geneticists reported that they had been unable to confirm published research."
Psychology
In a 2006 study, it was observed that, of 141 authors of empirical articles published by the American Psychological Association (APA), 103 (73%) did not respond with their data over a 6-month period. In a follow-up study published in 2015, it was found that 246 out of 394 contacted authors of papers in APA journals did not share their data upon request (62%).
Archaeology
A 2018 study examined a random sample of 48 articles published during February–May 2017 in the Journal of Archaeological Science and found openly available raw data for 18 papers (53%), with compositional and dating data being the most frequently shared types. The same study also emailed authors of articles on experiments with stone artifacts that were published between 2009 and 2015 to request data relating to the publications. They contacted the authors of 23 articles and received 15 replies, resulting in a 70% response rate. They received five responses that included data files, giving an overall sharing rate of 20%.
Scientists in training
A study of scientists in training indicated many had already experienced data withholding. This study has given rise to the fear the future generation of scientists will not abide by the established practices.
Differing approaches in different fields
Requirements for data sharing are more commonly imposed by institutions, funding agencies, and publication venues in the medical and biological sciences than in the physical sciences. Requirements vary widely regarding whether data must be shared at all, with whom the data must be shared, and who must bear the expense of data sharing.
Funding agencies such as the NIH and NSF tend to require greater sharing of data, but even these requirements tend to acknowledge the concerns of patient confidentiality, costs incurred in sharing data, and the legitimacy of the request. Private interests and public agencies with national security interests (defense and law enforcement) often discourage sharing of data and methods through non-disclosure agreements.
Data sharing poses specific challenges in participatory monitoring initiatives, for example where forest communities collect data on local social and environmental conditions. In this case, a rights-based approach to the development of data-sharing protocols can be based on principles of free, prior and informed consent, and prioritise the protection of the rights of those who generated the data, and/or those potentially affected by data-sharing.
See also
Data archive
Data dissemination
Data privacy
Data publishing
Data citation
FAIR data
File sharing
Information sharing
Knowledge sharing
Open data
Registry of Research Data Repositories
References
Literature
— discusses the international exchange of data in the natural sciences.
External links
"The Selfish Gene : Data Sharing and Withholding in Academic Genetics" by Eric Campbell and David Blumenthal published May 31, 2002.
Data sharing and data archiving ― American Psychological Association
The Public Domain of Digital Research Data
WaveLab and Reproducible Research by Jonathan B. Buckheit and David L. Donoho of Stanford University
The Role of Data and Program Code Archives in the Future of Economic Research published by The Federal Reserve Bank of St. Louis
Ecological Society of America data sharing and archiving initiative
FAIRsharing.org A website on data sharing and data policies in biology
UK Data Archive: Manage and Share data
Data Management Plan Resources and Examples - Inter-university Consortium for Political and Social Research.
DataONE
Nature Scientific Data: open-access, online-only publication for descriptions of scientifically valuable datasets.
Data
Scientific method
Scholarly communication
Academic publishing
Open access (publishing)
Open data
Open science
Scientific misconduct
Sharing | Data sharing | [
"Technology"
] | 1,850 | [
"Scientific misconduct",
"Information technology",
"Data",
"Ethics of science and technology",
"Data publishing"
] |
10,806,798 | https://en.wikipedia.org/wiki/NGC%202244 | NGC 2244 (also known as Caldwell 50 or the Satellite Cluster) is an open cluster in the Rosette Nebula, which is located in the constellation Monoceros. This cluster has several O-type stars, super hot stars that generate large amounts of radiation and stellar wind.
The age of this cluster has been estimated to be less than 5 million years. The brightest star in the direction of the cluster is 12 Monocerotis, a foreground K-class giant. The two brightest members of the cluster are HD 46223, of spectral class O4V, 400,000 times brighter than the Sun and approximately 50 times more massive, and HD 46150, whose spectral type is O5V, with a luminosity 450,000 times larger than that of our star and up to 60 times more massive, though it may actually be a double star. These stars do not seem to pulsate, which is in agreement with stellar modeling of stars with similar global parameters.
A study from 2023 found that brown dwarfs in NGC 2244 form closer to OB-stars than to other stars. This could be explained by the photoevaporation of the outer layers of prestellar cores that otherwise would form low-mass stars or intermediate mass stars. The study also found a low disk fraction for low-mass objects of 39±9% for objects later than K0. One cluster member was discovered in the past to show signs of an eroding disk, reminiscent of a proplyd.
See also
List of most massive stars
References
External links
NASA – photo and information on NGC 2244
Spitzer Space Telescope site – Photo of NGC 2244
SEDS – NGC 2244
Open clusters
Monoceros
2244
050b | NGC 2244 | [
"Astronomy"
] | 355 | [
"Monoceros",
"Constellations"
] |